Welcome to ExamTopics


Exam AWS Certified Solutions Architect - Associate SAA-C02 topic 1 question 375 discussion

A company is developing a new machine learning model solution in AWS. The models are developed as independent microservices that fetch about 1 GB of model data from Amazon S3 at startup and load the data into memory. Users access the models through an asynchronous API. Users can send a request or a batch of requests and specify where the results should be sent.
The company provides models to hundreds of users. The usage patterns for the models are irregular. Some models could be unused for days or weeks. Other models could receive batches of thousands of requests at a time.
Which solution meets these requirements?

  • A. The requests from the API are sent to an Application Load Balancer (ALB). Models are deployed as AWS Lambda functions invoked by the ALB.
  • B. The requests from the API are sent to the model's Amazon Simple Queue Service (Amazon SQS) queue. Models are deployed as AWS Lambda functions triggered by SQS events. AWS Auto Scaling is enabled on Lambda to increase the number of vCPUs based on the SQS queue size.
  • C. The requests from the API are sent to the model's Amazon Simple Queue Service (Amazon SQS) queue. Models are deployed as Amazon Elastic Container Service (Amazon ECS) services reading from the queue. AWS App Mesh scales the instances of the ECS cluster based on the SQS queue size.
  • D. The requests from the API are sent to the model's Amazon Simple Queue Service (Amazon SQS) queue. Models are deployed as Amazon Elastic Container Service (Amazon ECS) services reading from the queue. AWS Auto Scaling is enabled on Amazon ECS for both the cluster and copies of the service based on the queue size.
Suggested Answer: D

Comments

NSF2
Highly Voted 3 years ago
Looking at the various options in this case: A. This will work, but there is no reliability, as messages can be lost. B. Lambda is a managed service that adjusts itself to the load, but you can't use Auto Scaling on it. C. Not sure about App Mesh, which is a service that provides application-level networking. D works in this case, without a doubt. The requests from the API are sent to the model's Amazon Simple Queue Service (Amazon SQS) queue - valid and will work. Models are deployed as Amazon Elastic Container Service (Amazon ECS) services reading from the queue - makes sense. AWS Auto Scaling is enabled on Amazon ECS for both the cluster and copies of the service based on the queue size - this makes absolute sense.
upvoted 46 times
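The scaling in option D is typically implemented as an Application Auto Scaling target-tracking policy on a "backlog per task" metric: the SQS queue depth divided by the number of running tasks. A minimal sketch of that arithmetic (the function and parameter names are illustrative, not an AWS API):

```python
import math

def desired_task_count(queue_depth, target_backlog_per_task,
                       min_tasks=0, max_tasks=100):
    """Return the ECS task count that keeps the per-task backlog
    at or below the target.

    queue_depth: ApproximateNumberOfMessages reported by SQS.
    target_backlog_per_task: acceptable number of queued
        messages per running task (the target-tracking value).
    """
    if queue_depth <= 0:
        return min_tasks  # nothing queued: scale down to the floor
    desired = math.ceil(queue_depth / target_backlog_per_task)
    return max(min_tasks, min(desired, max_tasks))
```

With min_tasks=0, a model that goes unused for weeks scales its service down to zero tasks, while a batch of thousands of requests scales it back out, capped at max_tasks.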
gargaditya
2 years, 10 months ago
A will not work, as the data is 1 GB: Lambda environment variables have a 4 KB total size limit, and /tmp can hold another 512 MB. (RAM is not a concern here; just for reference, RAM can be between 128 MB and 10 GB.) (Max execution time is 15 minutes.) (Default concurrent executions = 1,000; can be increased.)
upvoted 5 times
israelaminu
Highly Voted 3 years ago
D is the answer. Lambda can't process the model due to its size (1 GB); the /tmp directory allocation is 512 MB.
upvoted 10 times
Abdullah22
3 years ago
To configure the memory for your function, set a value between 128 MB and 10,240 MB in 1-MB increments. https://docs.aws.amazon.com/lambda/latest/dg/configuration-memory.html Anyway, Lambda is not the choice here because nothing ensures the decoupling, not because of the memory limitation. The answer could be D.
upvoted 6 times
gargaditya
2 years, 10 months ago
You are talking about RAM; the question says model data (NOT memory), which can be up to 512 MB using /tmp.
upvoted 1 times
allanm
2 years, 3 months ago
The 512 MB limit has been removed, so it's not applicable anymore. Lambda now supports up to 10 GB. https://aws.amazon.com/blogs/aws/aws-lambda-now-supports-up-to-10-gb-ephemeral-storage/
upvoted 2 times
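If one did go the Lambda route today, the larger /tmp size from the blog post above is set per function through the EphemeralStorage setting (512 to 10,240 MB). A small sketch that validates a requested size and builds the parameter dict for boto3's update_function_configuration; the function name and example values here are illustrative assumptions, not the documented API surface:

```python
def ephemeral_storage_param(size_mb):
    """Build the EphemeralStorage parameter for a Lambda function.

    Valid /tmp sizes are 512 MB (the default) through 10,240 MB.
    """
    if not 512 <= size_mb <= 10240:
        raise ValueError("ephemeral storage must be 512-10,240 MB")
    return {"EphemeralStorage": {"Size": size_mb}}

# e.g. boto3.client("lambda").update_function_configuration(
#          FunctionName="my-model-fn", **ephemeral_storage_param(2048))
```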
orbpig
1 year, 11 months ago
Totally wrong, please don't mislead.
upvoted 2 times
orbpig
1 year, 11 months ago
Memory is between 128 MB and 10,240 MB in 1-MB increments.
upvoted 2 times
cd93
Most Recent 1 year, 1 month ago
Selected Answer: D
A is wrong: the question mentions "asynchronous", so we need a queue. B is wrong NOT because of the 512 MB size limit of Lambda (the limit increased to 10 GB in late 2022), but because we CAN'T directly increase the vCPUs of Lambda; we can only increase its memory (as memory is increased, the number of vCPUs also goes up, at 1 vCPU per 1,769 MB). C is wrong: App Mesh is not for managing auto scaling. So the only one left is D. https://docs.aws.amazon.com/lambda/latest/dg/configuration-function-common.html
upvoted 1 times
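The memory-to-vCPU relationship described above can be sketched as a quick helper (the 1 vCPU per 1,769 MB figure is from the Lambda documentation; the function name is illustrative):

```python
def lambda_vcpu_share(memory_mb):
    """Approximate the CPU allocated to a Lambda function.

    CPU is not configured directly: it scales linearly with the
    memory setting (128 MB to 10,240 MB), at roughly one full
    vCPU per 1,769 MB of memory.
    """
    if not 128 <= memory_mb <= 10240:
        raise ValueError("Lambda memory must be 128-10,240 MB")
    return memory_mb / 1769
```

So a 1,769 MB function gets about one full vCPU, and the 10,240 MB maximum works out to just under six.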
BECAUSE
1 year, 4 months ago
Selected Answer: D
D is the answer
upvoted 1 times
sofiella
1 year, 8 months ago
The solution that meets these requirements is option B: the requests from the API are sent to the model's Amazon Simple Queue Service (Amazon SQS) queue; models are deployed as AWS Lambda functions triggered by SQS events; AWS Auto Scaling is enabled on Lambda to increase the number of vCPUs based on the SQS queue size. By using Amazon SQS, the company can manage the requests to the models in an asynchronous manner, which can help to handle the irregular usage patterns. The models are deployed as AWS Lambda functions, which can be triggered by the events in the SQS queue and automatically scale based on the size of the queue with the help of AWS Auto Scaling. This way, the company can handle the irregular traffic patterns and ensure that the models are available to users even during high traffic spikes.
upvoted 1 times
orbpig
1 year, 11 months ago
B is not right, since "Models are deployed as AWS Lambda functions triggered by SQS events" can be true. SQS messages can only be polled, not pushed.
upvoted 1 times
orbpig
1 year, 11 months ago
The above has a typo; corrected: B is not right, since "Models are deployed as AWS Lambda functions triggered by SQS events" can NOT be true. SQS messages can only be polled, not pushed.
upvoted 1 times
ACloudptra
2 years, 2 months ago
Selected Answer: D
Lambda scales automatically on its own, so it has to be D.
upvoted 1 times
tigerbaer
2 years, 2 months ago
Selected Answer: D
D is correct. There is no AWS Auto Scaling for Lambda.
upvoted 1 times
queen101
2 years, 2 months ago
DDDDDDDDDDDDDD
upvoted 1 times
Janan
2 years, 2 months ago
Selected Answer: B
B is the answer. SQS helps in decoupling. Lambda ensures there are no EC2 or ECS resources running without working on models for weeks. The last sentence in answer B is about auto scaling: https://docs.aws.amazon.com/autoscaling/application/userguide/services-that-can-integrate-lambda.html
upvoted 1 times
rav009
2 years, 11 months ago
ALB invoking Lambda is synchronous, but it is mentioned that "Users access the models through an asynchronous API". So A is not right.
upvoted 1 times
Radeeka
2 years, 11 months ago
No point in having models loaded into ECS, as some of the models are not used for weeks. Lambda memory limit - https://aws.amazon.com/about-aws/whats-new/2020/12/aws-lambda-supports-10gb-memory-6-vcpu-cores-lambda-functions/#:~:text=relevant%20marketing%20content.-,AWS%20Lambda%20now%20supports%20up%20to%2010%20GB%20of%20memory,vCPU%20cores%20for%20Lambda%20Functions&text=AWS%20Lambda%20customers%20can%20now,previous%20limit%20of%203%2C008%20MB. Lambda scaling - https://docs.aws.amazon.com/lambda/latest/dg/invocation-scaling.html So it must be either A or B.
upvoted 2 times
gargaditya
2 years, 10 months ago
ECS has two launch types: Fargate and EC2. Fargate is indeed serverless.
upvoted 1 times
KK_uniq
2 years, 12 months ago
What is wrong with C?
upvoted 1 times
EzBL
2 years, 3 months ago
"AWS App Mesh is a service mesh that provides application-level networking to make it easy for your services to communicate with each other across multiple types of compute infrastructure. App Mesh gives end-to-end visibility and high-availability for your applications." It provides "application-level networking", not compute scaling.
upvoted 1 times
raghuisin
3 years ago
Ans: D. I will go with option D: models are built as microservices, and workloads are unpredictable.
upvoted 4 times
zek
3 years ago
The question is not clear to me. If all the model data together is 1 GB, then it is A; if each model is 1 GB, then the answer is D.
upvoted 4 times
NJo
3 years ago
Those who think Lambda cannot be scaled, please look at this link: https://docs.aws.amazon.com/lambda/latest/dg/invocation-scaling.html. Also, the question says that 'Some models could be unused for days or weeks'. I wouldn't want to pay for those ECS containers when the models on them are not used at all, even though containers in those ECS environments are running with minimum instances. I'd just go with Lambda, which only triggers when there is a load, and the RAM requirements are also met. I'd go with B, but happy to be corrected with logic and explanation with links.
upvoted 6 times
waqas
3 years ago
Read option B. It says "AWS Auto Scaling is enabled on Lambda"... So B is wrong. To me, D suits more.
upvoted 1 times
NJo
3 years ago
Thanks for your comment. I am not sure what's wrong with 'AWS Auto Scaling for Lambda'. Lambda does support Auto Scaling as per the link below: https://docs.aws.amazon.com/lambda/latest/dg/invocation-scaling.html
upvoted 2 times
Iamrandom
3 years ago
Lambda has no possibility of NOT auto scaling, so what's the point in saying it's enabled? Clearly a wrong answer.
upvoted 1 times
andwill1001
2 years, 12 months ago
So does that one statement make it wrong? Would Lambda not work in this scenario? If that's the case, the people who wrote this answer are not friendly.
upvoted 1 times
NJo
3 years ago
OK, I am tempted to change my opinion based on the fact that Lambda doesn't run for more than 15 minutes, a fact I missed considering here. The machine learning models may require running for a longer time, and hence D will be the better option.
upvoted 4 times
syu31svc
3 years ago
A is out for sure. B is also out, since you don't scale Lambda. I'm taking C over D. https://aws.amazon.com/app-mesh/?aws-app-mesh-blogs.sort-by=item.additionalFields.createdDate&aws-app-mesh-blogs.sort-order=desc&whats-new-cards.sort-by=item.additionalFields.postDateTime&whats-new-cards.sort-order=desc: "You can use App Mesh with AWS Fargate, Amazon EC2, Amazon ECS, Amazon EKS, and Kubernetes running on AWS, to better run your application at scale."
upvoted 1 times
Iamrandom
3 years ago
Wrong; it clearly states "developed as independent microservices", so App Mesh doesn't fit.
upvoted 3 times
Community vote distribution: A (35%), C (25%), B (20%), Other