Exam AWS Certified Solutions Architect - Professional SAP-C02 All Questions

Exam AWS Certified Solutions Architect - Professional SAP-C02 topic 1 question 33 discussion

A company is hosting a monolithic REST-based API for a mobile app on five Amazon EC2 instances in public subnets of a VPC. Mobile clients connect to the API by using a domain name that is hosted on Amazon Route 53. The company has created a Route 53 multivalue answer routing policy with the IP addresses of all the EC2 instances. Recently, the app has been overwhelmed by large and sudden increases to traffic. The app has not been able to keep up with the traffic.
A solutions architect needs to implement a solution so that the app can handle the new and varying load.
Which solution will meet these requirements with the LEAST operational overhead?

  • A. Separate the API into individual AWS Lambda functions. Configure an Amazon API Gateway REST API with Lambda integration for the backend. Update the Route 53 record to point to the API Gateway API.
  • B. Containerize the API logic. Create an Amazon Elastic Kubernetes Service (Amazon EKS) cluster. Run the containers in the cluster by using Amazon EC2. Create a Kubernetes ingress. Update the Route 53 record to point to the Kubernetes ingress.
  • C. Create an Auto Scaling group. Place all the EC2 instances in the Auto Scaling group. Configure the Auto Scaling group to perform scaling actions that are based on CPU utilization. Create an AWS Lambda function that reacts to Auto Scaling group changes and updates the Route 53 record.
  • D. Create an Application Load Balancer (ALB) in front of the API. Move the EC2 instances to private subnets in the VPC. Add the EC2 instances as targets for the ALB. Update the Route 53 record to point to the ALB.
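For readers weighing option A: API Gateway's Lambda proxy integration passes each function an event dict (path parameters, body, headers) and expects a dict with `statusCode` and `body` back. Below is a minimal sketch of what one endpoint split out of the monolith might look like; the route (GET /users/{id}) and the response body are hypothetical illustrations, not part of the question.

```python
import json

def handler(event, context=None):
    """Sketch of one endpoint split out of the monolith, in API Gateway's
    Lambda proxy integration format (hypothetical GET /users/{id})."""
    user_id = (event.get("pathParameters") or {}).get("id")
    if user_id is None:
        return {"statusCode": 400,
                "body": json.dumps({"error": "missing path parameter 'id'"})}
    # A real function would look the user up in a data store; stubbed here.
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({"id": user_id, "name": "example"}),
    }
```

The operational argument for A in the comments below rests on this shape: once deployed, scaling, patching, and load balancing of such functions are handled by the platform rather than by the team.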
Suggested Answer: A

Comments

EricZhang
Highly Voted 2 years, 3 months ago
Selected Answer: A
Serverless requires the least operational effort.
upvoted 36 times
dqwsmwwvtgxwkvgcvc
1 year, 7 months ago
I guess multivalue answer routing in Route 53 is not proper load balancing, so replacing it with an ALB would properly balance the load (with minimal effort).
upvoted 4 times
How can this be the answer?? It says: "Separate the API into individual AWS Lambda functions." Can you calculate the operational overhead of doing that?
upvoted 21 times
scuzzy2010
1 year, 11 months ago
Separating would be development overhead, but once done, the operational overhead (operational = ongoing day-to-day) will be the least.
upvoted 13 times
24Gel
1 year ago
Disagree. With the ASG in option D, once it is set up, the operational overhead is not high either.
upvoted 1 times
24Gel
1 year ago
I mean option C, not D.
upvoted 1 times
24Gel
1 year ago
Never mind, A is simpler than C.
upvoted 2 times
Jay_2pt0_1
1 year, 10 months ago
From any real-world perspective, this just can't be the answer, IMHO. Surely AWS takes the real world into account.
upvoted 1 times
jooncco
Highly Voted 2 years, 1 month ago
Selected Answer: C
Suppose there are 100 REST endpoints (quite common, since this application is monolithic). Are you still going to copy and paste all that API code into Lambda functions? What if the business logic changes? This is not MINIMAL. I would go with C.
upvoted 32 times
altonh
2 months, 1 week ago
Option C means your Route 53 is playing catch-up with your ASG. What happens if you scale down? Your clients will still have the terminated EC2 instance in their cache until the TTL expires.
upvoted 1 times
chathur
1 year, 9 months ago
"Create an AWS Lambda function that reacts to Auto Scaling group changes and updates the Route 53 record." This does not make any sense. Why would you need to change R53 records using a Lambda?
upvoted 1 times
Vesla
1 year, 7 months ago
Because if you have 4 EC2 instances in your ASG, you need 4 records for the domain name. If the ASG scales up to 6, for example, you need to add 2 more records.
upvoted 4 times
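Vesla's point is the extra moving part in option C: a Lambda, triggered by ASG scale events (e.g. via EventBridge), must rewrite the multivalue record set to match the live instances, typically by calling Route 53's `change_resource_record_sets` API. A minimal sketch of building that change batch, assuming one UPSERT per instance IP; the record name and TTL are placeholders, and scale-in would additionally need DELETE changes for departed IPs.

```python
def build_multivalue_changes(record_name, ips, ttl=60):
    """Build the Route 53 ChangeBatch the option-C Lambda would submit:
    one multivalue A record per instance IP, keyed by a SetIdentifier
    so individual records can later be added or removed."""
    return {
        "Comment": "sync ASG instance IPs into the multivalue record",
        "Changes": [
            {
                "Action": "UPSERT",
                "ResourceRecordSet": {
                    "Name": record_name,
                    "Type": "A",
                    "TTL": ttl,
                    "SetIdentifier": ip,  # one record set per instance
                    "MultiValueAnswer": True,
                    "ResourceRecords": [{"Value": ip}],
                },
            }
            for ip in ips
        ],
    }
```

The Lambda would pass the result to `boto3.client("route53").change_resource_record_sets(HostedZoneId=..., ChangeBatch=...)`; even then, clients keep stale answers until the TTL expires, which is the caching objection raised below.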
liquen14
1 year ago
Too contrived in my opinion. And what about DNS caches on the clients? You could get stuck for a while with the previous list of servers. I think it has to be A (though it would involve considerable development effort) or D, which is extremely easy to implement but at the same time sounds a little fishy, because they don't mention anything about an ASG or scaling. I hate this kind of question, and I don't understand what useful insight it provides, unless they want us to become masters of the art of dealing with ambiguity.
upvoted 3 times
cnethers
9 months ago
Agree that D does not scale to meet demand; it's just a better way to load balance, which Route 53 was already doing, so the scaling issue is not resolved. Also agree that A requires more dev effort and less ops effort, so I would have to lean toward A... The answer selection is poor, IMO.
upvoted 1 times
scuzzy2010
2 years ago
It says "a monolithic REST-based API", hence only one API. Initially I thought C, but I'll go with A, as it says least operational overhead (not least implementation effort). Lambda has virtually no operational overhead compared to EC2.
upvoted 8 times
aviathor
1 year, 8 months ago
Answer A says "Separate the API into individual AWS Lambda functions." That makes me think there may be many endpoints. However, we are looking to minimize operational effort, not development effort...
upvoted 1 times
Jay_2pt0_1
1 year, 10 months ago
A monolithic REST API likely has a gazillion individual endpoints. This refactor would not be a small one.
upvoted 5 times
jainparag1
1 year, 3 months ago
Dealing with business logic changes applies to the existing solution, or to any solution, depending on its complexity; if anything, it is easier when the pieces are microservices. You shouldn't hesitate to refactor your application with a one-time effort (dev overhead) to save significant operational overhead on a daily basis. This is exactly why AWS is pushing serverless.
upvoted 1 times
ParamD
Most Recent 2 days, 13 hours ago
Selected Answer: C
D doesn't have auto scaling. B (EKS) will add operational overhead. A adds lots of Lambda functions whose maintenance and management will add to the operational overhead compared to the current monolithic setup. C is the best fit of the available options: it enables auto scaling (Route 53 multivalue answer routing returns up to 8 records, up from the current 5 instances), and one Lambda function to update Route 53 adds only minimal operational overhead. Though D with auto scaling added would have allowed minimal operational overhead and more flexibility to scale.
upvoted 1 times
soulation
2 weeks, 5 days ago
Selected Answer: C
Less operational overhead. Much less development effort.
upvoted 1 times
SaqibTaqi
1 month, 1 week ago
Selected Answer: A
Well, I have to say that none of the options complies with least operational overhead; each and every option involves changing the application or its architecture. But for the sake of it, A is the best answer. It cannot be B, as containerizing would not be suitable to use with the IP addresses of the instances. An ASG and ELB would not fit here, as the Route 53 records point to the static IP addresses of the instances. So the best answer is A. But again, a lot of overhead is involved if someone goes on to implement it.
upvoted 1 times
sintesi_suffisso0
1 month, 3 weeks ago
Selected Answer: D
It can't be A, since we don't know how much time the API calls need to complete (Lambda invocations have a hard timeout).
upvoted 2 times
Shanmahi
2 months ago
Selected Answer: D
While all four options could work, and the general inclination is to go serverless, the least operational effort is certainly to add an ALB to distribute the incoming traffic across the EC2 instances. In a real-world scenario, I would ideally place Route 53 -> ALB -> EC2 instances in an ASG. However, among the given choices, D with the ALB best meets the requirement from an operational-complexity point of view.
upvoted 3 times
jerry00218
2 months, 2 weeks ago
Selected Answer: A
Serverless is the least operational effort
upvoted 1 times
thanhpolimi
2 months, 2 weeks ago
Selected Answer: D
D provides a balanced solution to handle increased and varying traffic loads while minimizing the complexity and maintenance overhead.
upvoted 2 times
grumpysloth
3 months ago
Selected Answer: C
The operational overhead to fix the scalability issue is minimal if we keep the EC2 instances as they are and use an ASG. We know nothing about the code complexity or response times; a request might take hours, so Lambda is not an option IMHO. D is not an option because it doesn't include auto scaling, so it won't solve the issue.
upvoted 3 times
JOJO9
3 months, 1 week ago
Selected Answer: D
This approach leverages AWS managed services like the Application Load Balancer (ALB) and Auto Scaling groups, minimizing the operational overhead required to handle varying traffic loads. The ALB automatically distributes incoming traffic across the EC2 instances, while the instances can be placed in private subnets for better security. Additionally, the Auto Scaling group can be configured to automatically scale the EC2 instances based on metrics like CPU utilization, eliminating the need for manual scaling. By using these managed services, you can offload tasks like load balancing, health checks, and auto-scaling to AWS, reducing the operational burden on your team. Updating the Route 53 record to point to the ALB's DNS name ensures that traffic is seamlessly routed to the backend instances without the need for manual DNS updates or additional components like Lambda functions.
upvoted 2 times
Heman31in
3 months, 1 week ago
Selected Answer: D
Option | Initial effort | Ongoing overhead | Scalability | Cost | Suitability for "least operational overhead"
A | High | Minimal | Excellent (serverless) | Cost-effective for low-to-medium traffic | Poor (high re-architecture effort)
B | Very high | High | Excellent (with effort) | Expensive | Poor (requires Kubernetes expertise)
C | Moderate | High | Good (DNS-based scaling) | Cheaper than ALB | Moderate (Route 53 overhead)
D | Low | Minimal | Excellent (real-time ALB) | Predictable | Best
upvoted 2 times
wem
3 months, 1 week ago
Selected Answer: D
D. Benefits of an Application Load Balancer (ALB):
  • Load balancing: An ALB automatically distributes incoming traffic across multiple EC2 instances, ensuring high availability and efficient use of resources.
  • Scalability: The ALB works seamlessly with an Auto Scaling group to scale the number of instances based on traffic load.
  • Improved security: Moving the EC2 instances to private subnets ensures they are not directly exposed to the internet, reducing security risks.
  • Operational overhead: Minimal setup compared to the other options; no need to manage DNS changes dynamically, as the ALB provides a single, stable endpoint.
  • Integration with Route 53: Route 53 can easily point to the ALB's DNS name, providing seamless updates to clients.
  • Handling sudden traffic spikes: The ALB efficiently distributes traffic among the available instances and works with Auto Scaling to dynamically adjust capacity to meet varying load.
upvoted 2 times
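By contrast with option C's record-syncing Lambda, option D's DNS change is a one-time edit: an alias A record pointing the domain at the ALB, after which scale events never touch Route 53. A sketch of that single change batch follows; the names and zone ID are placeholders, and note that an alias record carries no TTL and uses the ALB's own canonical hosted zone ID.

```python
def build_alias_change(record_name, alb_dns_name, alb_hosted_zone_id):
    """Route 53 change that points the API domain at an ALB via an
    alias A record (the DNS update described in option D)."""
    return {
        "Changes": [
            {
                "Action": "UPSERT",
                "ResourceRecordSet": {
                    "Name": record_name,
                    "Type": "A",
                    "AliasTarget": {
                        # The ALB's canonical hosted zone, not your own zone.
                        "HostedZoneId": alb_hosted_zone_id,
                        "DNSName": alb_dns_name,
                        "EvaluateTargetHealth": True,
                    },
                },
            }
        ]
    }
```

Because clients resolve the ALB's name rather than instance IPs, terminated instances disappear from rotation immediately via target-group health checks instead of waiting out a DNS TTL.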
Tiger4Code
3 months, 2 weeks ago
Selected Answer: D
D: ALB -> EC2. Not A, because that would require re-architecting the application from a monolithic design to a microservices or serverless architecture, which introduces additional complexity. Managing AWS Lambda functions and API Gateway would involve more operational overhead and may not be the most straightforward solution for handling increased load.
upvoted 2 times
ahhatem
4 months ago
I don't think this question is correct! All the answers are illogical, given that the solution is very straightforward. Nothing in the question suggests that anything out of the ordinary is needed or justified. I am guessing the intended answer should have been option D with auto scaling... They can't seriously expect an architect to simply suggest refactoring the whole app as a solution!
upvoted 1 times
sashenka
4 months, 1 week ago
For those of you considering D: if the "new and varying load" goes to 100x the current level, how, without an Auto Scaling group, can the five Amazon EC2 instances handle it?
upvoted 1 times
konieczny69
4 months, 2 weeks ago
It's monolithic, so it can't be Lambda.
upvoted 1 times
Community vote distribution: A (35%), C (25%), B (20%), Other