Exam AWS Certified Solutions Architect - Professional SAP-C02 topic 1 question 376 discussion

A company that provides image storage services wants to deploy a customer-facing solution to AWS. Millions of individual customers will use the solution. The solution will receive batches of large image files, resize the files, and store the files in an Amazon S3 bucket for up to 6 months.

The solution must handle significant variance in demand. The solution must also be reliable at enterprise scale and have the ability to rerun processing jobs in the event of failure.

Which solution will meet these requirements MOST cost-effectively?

  • A. Use AWS Step Functions to process the S3 event that occurs when a user stores an image. Run an AWS Lambda function that resizes the image in place and replaces the original file in the S3 bucket. Create an S3 Lifecycle expiration policy to expire all stored images after 6 months.
  • B. Use Amazon EventBridge to process the S3 event that occurs when a user uploads an image. Run an AWS Lambda function that resizes the image in place and replaces the original file in the S3 bucket. Create an S3 Lifecycle expiration policy to expire all stored images after 6 months.
  • C. Use S3 Event Notifications to invoke an AWS Lambda function when a user stores an image. Use the Lambda function to resize the image in place and to store the original file in the S3 bucket. Create an S3 Lifecycle policy to move all stored images to S3 Standard-Infrequent Access (S3 Standard-IA) after 6 months.
  • D. Use Amazon Simple Queue Service (Amazon SQS) to process the S3 event that occurs when a user stores an image. Run an AWS Lambda function that resizes the image and stores the resized file in an S3 bucket that uses S3 Standard-Infrequent Access (S3 Standard-IA). Create an S3 Lifecycle policy to move all stored images to S3 Glacier Deep Archive after 6 months.
Suggested Answer: D 🗳️
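
For reference, a minimal boto3 sketch of how option D's trigger path could be wired. The bucket name and queue ARN are placeholders, not taken from the question, and the queue policy is assumed to already allow S3 to send messages:

```python
import boto3

s3 = boto3.client("s3")

# Hypothetical names; the SQS queue policy must already allow the S3 service
# principal to send messages for this call to succeed.
s3.put_bucket_notification_configuration(
    Bucket="image-intake-bucket",
    NotificationConfiguration={
        "QueueConfigurations": [
            {
                "QueueArn": "arn:aws:sqs:us-east-1:123456789012:image-resize-queue",
                "Events": ["s3:ObjectCreated:*"],
            }
        ]
    },
)
```

A Lambda event source mapping on the queue would then pull batches of these events and resize the referenced objects; messages that fail processing become visible again and are retried.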

Comments

thala
Highly Voted 1 year ago
Selected Answer: B
Considering the requirements, Option B (Amazon EventBridge with AWS Lambda and S3 Lifecycle Expiration Policy) seems to be the most cost-effective and appropriate solution. It combines the scalability and flexibility of AWS Lambda for image processing with the straightforward event handling of Amazon EventBridge, and appropriately manages the image lifecycle with an S3 expiration policy. While Option C is also a strong contender, the misalignment of the lifecycle policy with the requirement makes Option B a better fit. Option A might be more suitable for complex workflows but is likely not needed for this scenario, and Option D includes unnecessary long-term archival steps.
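
For illustration, a minimal boto3 sketch of the EventBridge-to-Lambda wiring option B describes. All names and ARNs are hypothetical, and it assumes EventBridge notifications have already been enabled on the bucket and that the function permits events.amazonaws.com to invoke it:

```python
import json

import boto3

events = boto3.client("events")

# Hypothetical rule matching S3 "Object Created" events for one bucket.
events.put_rule(
    Name="resize-on-upload",
    EventPattern=json.dumps(
        {
            "source": ["aws.s3"],
            "detail-type": ["Object Created"],
            "detail": {"bucket": {"name": ["image-intake-bucket"]}},
        }
    ),
)

# Route matching events to the (hypothetical) resize function.
events.put_targets(
    Rule="resize-on-upload",
    Targets=[
        {
            "Id": "resize-lambda",
            "Arn": "arn:aws:lambda:us-east-1:123456789012:function:resize-image",
        }
    ],
)
```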
upvoted 16 times
AzureDP900
1 week, 2 days ago
Agreed with B. Using Amazon EventBridge, you can meet the company's requirements most cost-effectively: handle significant variance in demand, be reliable at enterprise scale, rerun processing jobs in the event of failure (not explicitly required, but it ensures reliability), and move stored images to a colder storage class after 6 months to reduce costs.
upvoted 1 times
kgpoj
3 months, 2 weeks ago
How do you rerun for failure with option B? SQS can handle "rerun", hence D
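
To illustrate kgpoj's point, a minimal boto3 sketch of the retry behaviour SQS gives you. The queue names, timeout, and receive count are assumptions, not from the question:

```python
import json

import boto3

sqs = boto3.client("sqs")

# Hypothetical queues. A message that Lambda fails to process returns to the
# queue after the visibility timeout and is retried; after 5 failed receives
# it lands in the dead-letter queue, from which it can be redriven later.
sqs.set_queue_attributes(
    QueueUrl="https://sqs.us-east-1.amazonaws.com/123456789012/image-resize-queue",
    Attributes={
        "VisibilityTimeout": "300",
        "RedrivePolicy": json.dumps(
            {
                "deadLetterTargetArn": "arn:aws:sqs:us-east-1:123456789012:image-resize-dlq",
                "maxReceiveCount": 5,
            }
        ),
    },
)
```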
upvoted 3 times
yuliaqwerty
Highly Voted 11 months, 1 week ago
B for sure. A, no, because Step Functions is not in the list of S3 event destinations: https://docs.aws.amazon.com/AmazonS3/latest/userguide/notification-how-to-event-types-and-destinations.html. C and D have options for storing data longer than 6 months, which is not required.
upvoted 12 times
AloraCloud
1 month, 1 week ago
Yes it is .... https://docs.aws.amazon.com/AmazonS3/latest/userguide/EventBridge.html
upvoted 2 times
0b43291
Most Recent 1 week, 5 days ago
Selected Answer: D
Difficult one. Both options B and D meet the specific requirement of storing the files in an Amazon S3 bucket for up to 6 months. However, when considering the additional requirements of being reliable at enterprise scale, having the ability to rerun processing jobs in the event of failure, and being the most cost-effective solution, option D with Amazon SQS, AWS Lambda, and the S3 Lifecycle policy to transition to Glacier Deep Archive is still the better choice. There is no rerun of jobs with B; only D provides that.
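
A minimal boto3 sketch of the Glacier Deep Archive transition option D describes (hypothetical bucket name; 6 months approximated as 180 days):

```python
import boto3

s3 = boto3.client("s3")

# Hypothetical bucket; transitions every object to Glacier Deep Archive
# roughly 6 months (180 days) after creation.
s3.put_bucket_lifecycle_configuration(
    Bucket="image-intake-bucket",
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "deep-archive-after-6-months",
                "Status": "Enabled",
                "Filter": {"Prefix": ""},
                "Transitions": [{"Days": 180, "StorageClass": "DEEP_ARCHIVE"}],
            }
        ]
    },
)
```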
upvoted 1 times
Halliphax
2 weeks, 1 day ago
Selected Answer: B
"Store the images in S3 for six months" - leaves only option B. Options C & D mean keeping the images in S3 forever and that's not the more cost effective option compared to just deleting the files as the question implies is a requirement.
upvoted 1 times
nimbus_00
2 weeks, 1 day ago
Selected Answer: D
You’ve got to have a buffer for reruns! For those concerned about the 6-month TTL in S3: remember, Glacier isn’t S3.
upvoted 2 times
Daniel76
3 weeks, 2 days ago
Selected Answer: B
C and D are out for keeping data more than 6 months. A is out because S3 event destinations do not include Step Functions, which is anyway seldom used for a one-step action. EventBridge does support retry if an event fails to be delivered.
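
The retry Daniel76 mentions is configured per target; a minimal boto3 sketch with placeholder names and ARNs that are not part of the question:

```python
import boto3

events = boto3.client("events")

# Hypothetical rule, function, and DLQ. Failed deliveries to the Lambda target
# are retried up to the limits below, then routed to the dead-letter queue.
events.put_targets(
    Rule="resize-on-upload",
    Targets=[
        {
            "Id": "resize-lambda",
            "Arn": "arn:aws:lambda:us-east-1:123456789012:function:resize-image",
            "RetryPolicy": {
                "MaximumRetryAttempts": 5,
                "MaximumEventAgeInSeconds": 3600,
            },
            "DeadLetterConfig": {
                "Arn": "arn:aws:sqs:us-east-1:123456789012:eventbridge-dlq"
            },
        }
    ],
)
```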
upvoted 1 times
TewatiaAmit
1 month ago
Selected Answer: D
SQS ensures that any failed jobs can be retried.
upvoted 2 times
mkgiz
2 months ago
Selected Answer: D
"ability to rerun processing jobs in the event of failure"
upvoted 3 times
2aa2222
3 months, 1 week ago
Let’s break down the question into some decisive pieces:
1. Millions of individual customers will use the solution: this, to me, has to be a robust queuing solution (like SQS, not EventBridge, not Step Functions).
2. Store the files in an Amazon S3 bucket for up to 6 months: this doesn’t talk about deleting the files. It says store in “an” S3 bucket for 6 months, which means the data can definitely go to another “MOST cost-effective” bucket, i.e. Glacier Deep Archive.
3. The solution must handle significant variance in demand: “significant” variance can be interpreted as infrequent usage.
4. The solution must also be reliable / have the ability to rerun processing in the event of failure: only SQS can achieve this.
My verdict: Answer = D
upvoted 3 times
felon124
3 months, 1 week ago
Selected Answer: D
1. The solution must be reliable at enterprise scale and have the ability to rerun processing jobs in the event of failure: use SQS; options A / B / C do not include Amazon SQS.
2. For the cost-effective solution, option D uses S3 Glacier Deep Archive to reduce S3 storage costs.
upvoted 2 times
ff32d79
3 months, 1 week ago
Selected Answer: B
I go for B, but this question is completely wrong. First of all, if you modify in place in the same bucket, you are going to have a potential infinite loop... which means three answers are out. But why would you save things in Glacier when they can be deleted? As for rerun, in EventBridge you can replay...
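
The in-place overwrite loop ff32d79 mentions can be avoided with a guard in the function. A hypothetical handler sketch, assuming direct S3 Event Notification invocation and that Pillow is packaged with the function (none of these names come from the question):

```python
import io

import boto3
from PIL import Image  # assumption: Pillow is bundled in the deployment package

s3 = boto3.client("s3")


def handler(event, context):
    """Resize newly uploaded images in place, skipping our own writes."""
    for record in event["Records"]:
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]

        # Guard against the infinite loop: objects we already wrote carry a
        # marker in their user metadata, so re-delivered events are ignored.
        head = s3.head_object(Bucket=bucket, Key=key)
        if head["Metadata"].get("resized") == "true":
            continue

        body = s3.get_object(Bucket=bucket, Key=key)["Body"].read()
        image = Image.open(io.BytesIO(body))
        image.thumbnail((1024, 1024))

        out = io.BytesIO()
        image.save(out, format=image.format or "JPEG")
        s3.put_object(
            Bucket=bucket,
            Key=key,
            Body=out.getvalue(),
            Metadata={"resized": "true"},
        )
```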
upvoted 1 times
Miquella_The_Rizzler
2 weeks, 5 days ago
That is not true at all; you can definitely add logic that compares the upload date to prevent an infinite loop, not to mention all kinds of logic to assign a UUID to each image. These are their use cases:
EventBridge Replay when:
- You need to reprocess a batch of historical events
- Original event order matters
- After fixing system-wide issues
- For disaster recovery scenarios
SQS acknowledgment when:
- You need per-message processing guarantees
- You want automatic retry handling
- You need individual message tracking
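
The replay Miquella_The_Rizzler describes relies on an event archive. A minimal boto3 sketch with hypothetical names, ARNs, and time window (assumptions, not from the thread):

```python
from datetime import datetime, timezone

import boto3

events = boto3.client("events")

# Hypothetical bus and archive. Archived events can later be replayed, e.g.
# to reprocess a window of uploads after fixing a bug in the resize function.
events.create_archive(
    ArchiveName="image-upload-archive",
    EventSourceArn="arn:aws:events:us-east-1:123456789012:event-bus/default",
    RetentionDays=30,
)

events.start_replay(
    ReplayName="reprocess-failed-window",
    EventSourceArn="arn:aws:events:us-east-1:123456789012:archive/image-upload-archive",
    EventStartTime=datetime(2024, 1, 1, tzinfo=timezone.utc),
    EventEndTime=datetime(2024, 1, 2, tzinfo=timezone.utc),
    Destination={"Arn": "arn:aws:events:us-east-1:123456789012:event-bus/default"},
)
```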
upvoted 1 times
tgv
3 months, 2 weeks ago
Selected Answer: D
Everybody is focused on choosing the MOST cost-effective option, but there's also this requirement: "The solution must also be reliable at enterprise scale and have the ability to rerun processing jobs in the event of failure," which I believe can be achieved only by option D.
upvoted 1 times
053081f
4 months, 3 weeks ago
I think this question is worded wrong. If we look at the requirement "store the files in an Amazon S3 bucket for up to 6 months" and decide that objects can be deleted after 6 months, C and D are excluded. But is that true? Would AWS create a problem involving such an elementary mistake?
upvoted 1 times
awsaz
4 months, 3 weeks ago
Selected Answer: A
A is the answer
upvoted 1 times
Helpnosense
5 months ago
Selected Answer: D
Voting D because of the requirement to "rerun processing jobs in the event of failure." Glacier Deep Archive is also really cost-effective.
upvoted 4 times
9f02c8d
6 months ago
Option D is the right answer, as it handles batches of files with significant variance in demand.
upvoted 1 times
teo2157
6 months, 1 week ago
Selected Answer: A
Going for A, as it's the only option that achieves reprocessing. B could be a good answer, but it doesn't allow any reprocessing.
upvoted 2 times
Community vote distribution: A (35%), C (25%), B (20%), Other