Exam AWS Certified Solutions Architect - Associate SAA-C03 topic 1 question 956 discussion

A company is migrating its data processing application to the AWS Cloud. The application processes several short-lived batch jobs that cannot be disrupted. Data is generated after each batch job is completed. The data is accessed for 30 days and retained for 2 years.

The company wants to keep the cost of running the application in the AWS Cloud as low as possible.

Which solution will meet these requirements?

  • A. Migrate the data processing application to Amazon EC2 Spot Instances. Store the data in Amazon S3 Standard. Move the data to Amazon S3 Glacier Instant Retrieval after 30 days. Set an expiration to delete the data after 2 years.
  • B. Migrate the data processing application to Amazon EC2 On-Demand Instances. Store the data in Amazon S3 Glacier Instant Retrieval. Move the data to S3 Glacier Deep Archive after 30 days. Set an expiration to delete the data after 2 years.
  • C. Deploy Amazon EC2 Spot Instances to run the batch jobs. Store the data in Amazon S3 Standard. Move the data to Amazon S3 Glacier Flexible Retrieval after 30 days. Set an expiration to delete the data after 2 years.
  • D. Deploy Amazon EC2 On-Demand Instances to run the batch jobs. Store the data in Amazon S3 Standard. Move the data to Amazon S3 Glacier Deep Archive after 30 days. Set an expiration to delete the data after 2 years.
Suggested Answer: D

Comments

nebajp
Highly Voted 5 months, 4 weeks ago
Selected Answer: D
D is the correct answer. Accessed for 30 days - use Amazon S3 Standard. Retained for 2 years - Glacier Deep Archive. Cannot be disrupted - On-Demand Instances.
upvoted 8 times
...
SR0312
Highly Voted 6 months ago
Selected Answer: B
Job cannot be disrupted - On demand
upvoted 5 times
...
FlyingHawk
Most Recent 2 weeks, 5 days ago
Selected Answer: D
1. The short-lived batch jobs cannot be disrupted -> On-Demand. A and C are out. 2. The data is accessed for 30 days and retained for 2 years -> S3 Standard (30 days) -> S3 Glacier -> expire. S3 Glacier Instant Retrieval has a minimum storage duration of 90 days, so B is out.
upvoted 1 times
...
LeonSauveterre
1 month ago
Selected Answer: D
B - Amazon S3 Glacier Instant Retrieval is an archive storage class that delivers the lowest-cost storage for long-lived data that is rarely accessed and requires retrieval in milliseconds. So this is wrong because "The data is accessed for 30 days".
upvoted 1 times
...
Anyio
1 month, 1 week ago
Selected Answer: D
The correct answer is D: deploy Amazon EC2 On-Demand Instances to run the batch jobs, store the data in Amazon S3 Standard, move the data to Amazon S3 Glacier Deep Archive after 30 days, and set an expiration to delete the data after 2 years. Explanation: Since the batch jobs cannot be disrupted, On-Demand Instances ensure the application runs without interruption. Storing the data in Amazon S3 Standard allows access during the 30-day period after processing. Moving the data to S3 Glacier Deep Archive after 30 days optimizes storage costs for the 2-year retention, since Deep Archive is the lowest-cost class for rarely accessed data. Setting an expiration deletes the data after 2 years, completing the lifecycle and controlling costs.
upvoted 1 times
...
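The lifecycle described in the answer (S3 Standard -> Glacier Deep Archive after 30 days -> delete after 2 years) maps directly onto an S3 Lifecycle rule. A minimal sketch follows, with the configuration expressed as the dict shape that boto3's `put_bucket_lifecycle_configuration` accepts; the bucket name and prefix are hypothetical, and no AWS call is actually made here.

```python
# Lifecycle rule sketch for the scenario in this question:
# transition to Glacier Deep Archive at 30 days, expire at ~2 years.
lifecycle_configuration = {
    "Rules": [
        {
            "ID": "batch-output-retention",
            "Status": "Enabled",
            "Filter": {"Prefix": "batch-output/"},  # hypothetical prefix
            "Transitions": [
                {"Days": 30, "StorageClass": "DEEP_ARCHIVE"}
            ],
            "Expiration": {"Days": 730},  # roughly 2 years
        }
    ]
}

# With boto3 and valid credentials (not executed here), this would be applied as:
# import boto3
# s3 = boto3.client("s3")
# s3.put_bucket_lifecycle_configuration(
#     Bucket="example-batch-data-bucket",  # hypothetical bucket
#     LifecycleConfiguration=lifecycle_configuration,
# )

rule = lifecycle_configuration["Rules"][0]
print(rule["Transitions"][0]["StorageClass"])  # DEEP_ARCHIVE
print(rule["Expiration"]["Days"])              # 730
```

Because objects start in S3 Standard, the 30-day transition avoids any minimum-storage-duration penalty, which is the trap in option B.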
EllenLiu
1 month, 1 week ago
Selected Answer: D
I am always confused about the choice between S3 Standard and S3 Glacier Instant Retrieval:
short period of storage + frequent retrieval ==> S3 Standard
long period of storage + infrequent retrieval ==> S3 Glacier Instant Retrieval
S3 Standard: $0.023 per GB for storage / $0.0004 per 1,000 GET, SELECT requests
S3 Glacier Instant Retrieval: $0.004 per GB for storage / $0.01 per 1,000 GET, SELECT requests
https://aws.amazon.com/s3/pricing/?nc=sn&loc=4
upvoted 2 times
...
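The price comparison above can be turned into a quick back-of-the-envelope calculation. This sketch uses only the per-GB storage and per-1,000-request prices quoted in the comment; the workload figures (100 GB stored, 50,000 GETs per month) are made up for illustration, and it deliberately ignores Glacier Instant Retrieval's per-GB retrieval fee and 90-day minimum storage duration, which are exactly the extra charges that make frequently accessed data cheaper in S3 Standard.

```python
# Back-of-the-envelope monthly cost using the prices quoted in the comment.
# Workload numbers are hypothetical; retrieval-per-GB fees and minimum
# storage duration charges are intentionally left out of this sketch.
GB_STORED = 100
GETS_PER_MONTH = 50_000

def monthly_cost(storage_per_gb: float, get_per_1000: float) -> float:
    """Storage cost plus GET request cost for one month."""
    return GB_STORED * storage_per_gb + (GETS_PER_MONTH / 1000) * get_per_1000

standard = monthly_cost(0.023, 0.0004)   # S3 Standard
glacier_ir = monthly_cost(0.004, 0.01)   # S3 Glacier Instant Retrieval

print(f"S3 Standard:                  ${standard:.2f}/month")
print(f"S3 Glacier Instant Retrieval: ${glacier_ir:.2f}/month")
```

On request charges alone Glacier Instant Retrieval can still look cheaper at this scale; it is the omitted per-GB retrieval fee and the 90-day minimum that tip real frequently-accessed workloads toward S3 Standard.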
spoved
4 months, 1 week ago
Selected Answer: B
https://aws.amazon.com/about-aws/whats-new/2021/11/amazon-s3-glacier-instant-retrieval-storage-class/
The easiest way to store data in S3 Glacier Instant Retrieval is to use the S3 PUT API to upload data directly, or use S3 Lifecycle to transition data from the S3 Standard and S3 Standard-IA storage classes. The company wants to keep the cost of running the application in the AWS Cloud as low as possible => B
upvoted 1 times
FlyingHawk
2 weeks, 5 days ago
S3 Glacier Instant Retrieval has a minimum storage duration of 90 days. If you move data to S3 Glacier Deep Archive before 90 days, you will be charged for the full 90 days of storage, even if the data is deleted or moved earlier. This makes it cost-ineffective for the company's requirement to move data after 30 days.
upvoted 1 times
...
...
[Removed]
5 months, 2 weeks ago
Selected Answer: D
D looks right
upvoted 3 times
...
pujithacg8
5 months, 4 weeks ago
D is correct
upvoted 2 times
...
flaviobrf
6 months ago
Selected Answer: D
I understand that D is the right answer
upvoted 4 times
...
siheom
6 months ago
Selected Answer: C
I VOTE C
upvoted 1 times
officedepotadmin
5 months, 3 weeks ago
you voted wrong
upvoted 1 times
...
...
Community vote distribution: A (35%), C (25%), B (20%), Other