Exam: AWS Certified Solutions Architect - Professional (SAP-C02), Topic 1, Question 33 discussion

A company is hosting a monolithic REST-based API for a mobile app on five Amazon EC2 instances in public subnets of a VPC. Mobile clients connect to the API by using a domain name that is hosted on Amazon Route 53. The company has created a Route 53 multivalue answer routing policy with the IP addresses of all the EC2 instances. Recently, the app has been overwhelmed by large and sudden increases in traffic. The app has not been able to keep up with the traffic.
A solutions architect needs to implement a solution so that the app can handle the new and varying load.
Which solution will meet these requirements with the LEAST operational overhead?

  • A. Separate the API into individual AWS Lambda functions. Configure an Amazon API Gateway REST API with Lambda integration for the backend. Update the Route 53 record to point to the API Gateway API.
  • B. Containerize the API logic. Create an Amazon Elastic Kubernetes Service (Amazon EKS) cluster. Run the containers in the cluster by using Amazon EC2. Create a Kubernetes ingress. Update the Route 53 record to point to the Kubernetes ingress.
  • C. Create an Auto Scaling group. Place all the EC2 instances in the Auto Scaling group. Configure the Auto Scaling group to perform scaling actions that are based on CPU utilization. Create an AWS Lambda function that reacts to Auto Scaling group changes and updates the Route 53 record.
  • D. Create an Application Load Balancer (ALB) in front of the API. Move the EC2 instances to private subnets in the VPC. Add the EC2 instances as targets for the ALB. Update the Route 53 record to point to the ALB.
Suggested Answer: A
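For context on what option A implies, below is a minimal sketch of one backend Lambda function behind an API Gateway REST API with Lambda proxy integration. The /users route and response payload are hypothetical and only illustrate the response envelope proxy integration expects.

```python
# Hypothetical handler for one of the Lambda functions option A implies,
# assuming an API Gateway REST API with Lambda proxy integration.
# The /users route and the payload are illustrative only.
import json


def handler(event, context):
    # With proxy integration, API Gateway passes the HTTP request in `event`.
    method = event.get("httpMethod")
    path = event.get("path")

    if method == "GET" and path == "/users":
        status, body = 200, {"users": []}  # placeholder; real logic would query a data store
    else:
        status, body = 404, {"message": f"no route for {method} {path}"}

    # Proxy integration expects this exact response shape.
    return {
        "statusCode": status,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps(body),
    }
```

Once the API is split this way, scaling and patching are handled by the platform, which is the "least operational overhead" argument made in the comments below.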

Comments

EricZhang
Highly Voted 1 year, 10 months ago
Selected Answer: A
Serverless requires the least operational effort.
upvoted 33 times
lkyixoayffasdrlaqd
1 year, 8 months ago
How can this be the answer? It says: "Separate the API into individual AWS Lambda functions." Can you calculate the operational overhead to do that?
upvoted 18 times
scuzzy2010
1 year, 6 months ago
Separating would be development overhead, but once done, the operational overhead (operational = ongoing day-to-day) will be the least.
upvoted 13 times
24Gel
7 months, 3 weeks ago
Disagree. The ASG in Option D, after it is set up, has no real operational overhead either.
upvoted 1 times
24Gel
7 months, 3 weeks ago
I mean Option C, not D.
upvoted 1 times
24Gel
7 months, 3 weeks ago
Never mind, A is simpler than C.
upvoted 1 times
Jay_2pt0_1
1 year, 5 months ago
From any type of real-world perspective, this just can't be the answer IMHO. Surely AWS takes "real world" into account.
upvoted 1 times
dqwsmwwvtgxwkvgcvc
1 year, 2 months ago
I guess multivalue answer routing in Route 53 is not proper load balancing, so replacing it with an ALB would properly balance the load (with minimal effort).
upvoted 3 times
jooncco
Highly Voted 1 year, 9 months ago
Selected Answer: C
Suppose there are 100 REST endpoints (quite common, since this application is monolithic). Are you still going to copy and paste all that API code into Lambda? What if the business logic changes? This is not MINIMAL. I would go with C.
upvoted 29 times
chathur
1 year, 5 months ago
"Create an AWS Lambda function that reacts to Auto Scaling group changes and updates the Route 53 record. " This does not make any sense, why do you need to change R53 records using a Lambda?
upvoted 1 times
Vesla
1 year, 2 months ago
Because if you have 4 EC2 instances in your ASG, you need 4 records for the domain name; if the ASG scales up to 6, for example, you need to add 2 more records to the domain name.
upvoted 4 times
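To make the point above concrete, here is a rough sketch of what option C's record-updating Lambda might look like, assuming it is invoked on EC2 Auto Scaling launch/terminate notifications (for example via an EventBridge rule). The hosted zone ID, record name, and Auto Scaling group name are hypothetical placeholders, and cleanup of records for terminated instances is omitted.

```python
# Rough sketch of option C's Route 53-updating Lambda. The zone ID, record name,
# and Auto Scaling group name below are hypothetical placeholders.
import boto3

HOSTED_ZONE_ID = "Z0000000000EXAMPLE"   # hypothetical
RECORD_NAME = "api.example.com."        # hypothetical
ASG_NAME = "api-asg"                    # hypothetical

autoscaling = boto3.client("autoscaling")
ec2 = boto3.client("ec2")
route53 = boto3.client("route53")


def handler(event, context):
    # Look up the instances currently in service in the Auto Scaling group.
    group = autoscaling.describe_auto_scaling_groups(
        AutoScalingGroupNames=[ASG_NAME]
    )["AutoScalingGroups"][0]
    instance_ids = [i["InstanceId"] for i in group["Instances"]
                    if i["LifecycleState"] == "InService"]
    if not instance_ids:
        return {"records": 0}  # nothing in service; leave the record set alone

    # Resolve their public IPs.
    reservations = ec2.describe_instances(InstanceIds=instance_ids)["Reservations"]
    ips = [inst["PublicIpAddress"]
           for r in reservations for inst in r["Instances"]
           if "PublicIpAddress" in inst]

    # Upsert one multivalue A record per instance, mirroring the setup in the question.
    changes = [{
        "Action": "UPSERT",
        "ResourceRecordSet": {
            "Name": RECORD_NAME,
            "Type": "A",
            "SetIdentifier": ip,
            "MultiValueAnswer": True,
            "TTL": 60,
            "ResourceRecords": [{"Value": ip}],
        },
    } for ip in ips]

    route53.change_resource_record_sets(
        HostedZoneId=HOSTED_ZONE_ID,
        ChangeBatch={"Changes": changes},
    )
    return {"records": len(changes)}
```

Every scaling event would invoke this function, which is exactly the kind of extra moving part the comments below weigh against options A and D.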
liquen14
8 months, 1 week ago
Too contrived in my opinion, and what about DNS caches in the clients? You could get stuck for a while with the previous list of servers. I think it has to be A (though it would involve considerable development effort) or D, which is extremely easy to implement but at the same time sounds a little fishy because they don't mention anything about an ASG or scaling. I hate this kind of question, and I don't understand what useful insight it provides, unless they want us to become masters of the art of dealing with ambiguity.
upvoted 3 times
cnethers
4 months, 2 weeks ago
Agree that D does not scale to meet demand; it's just a better way to load balance what Route 53 was doing before, so the scaling issue has not been resolved. Also agree that A requires more dev effort and less ops effort, so I would have to lean to A... The answer selection is poor IMO.
upvoted 1 times
scuzzy2010
1 year, 8 months ago
It says "a monolithic REST-based API " - hence only 1 API. Initially I thought C, but I'll go with A as it says least operation overhead (not least implementation effort). Lambda has virtually no operation overhead compared to EC2.
upvoted 8 times
aviathor
1 year, 4 months ago
Answer A says "Separate the API into individual AWS Lambda functions." Makes me think there may be many APIs. However, we are looking to minimize operational effort, not development effort...
upvoted 1 times
Jay_2pt0_1
1 year, 6 months ago
A monolithic REST API likely has a gazillion individual endpoints. This refactor would not be a small one.
upvoted 5 times
jainparag1
11 months, 2 weeks ago
Dealing with business logic changes applies to the existing solution or any other solution, depending on complexity; if anything, it's easier when these are microservices. You shouldn't hesitate to refactor your application with a one-time effort (dev overhead) to save significant operational overhead on a daily basis. AWS is pushing serverless precisely for this.
upvoted 1 times
konieczny69
Most Recent 1 week, 2 days ago
It's monolithic - it can't be Lambda.
upvoted 1 times
konieczny69
1 week, 2 days ago
It's D. A - more operational overhead, not to mention that Lambda might not be suitable. B - too much work. C - does not make much sense.
upvoted 1 times
Karelito00
4 weeks, 1 day ago
Option C is correct. Option A: you have to migrate all the business logic to Lambda functions; I don't know how option A is marked as correct when the question asks for the least operational overhead. Option D is good because we have an ALB in front of the instances and don't have to handle load balancing in Route 53; however, this option doesn't scale, so if we get a traffic increase the application will fail. Option C: we have an ASG, so our application instances scale horizontally on demand; the downside is that we have to keep managing the load balancing at Route 53.
upvoted 1 times
sashenka
1 month ago
Selected Answer: A
Operational overhead is the cost of the day-to-day operation of the service in question. These questions usually differentiate themselves by having answers that test whether you know a service could be expensive to operate and that you might be able to minimize that cost using a different AWS service, or sometimes no AWS service at all. Whenever you see this in a question, think "serverless" and "easiest to implement and operate on day 2"; it basically eliminates any answer that deploys infrastructure (EC2 instances) you will have to patch and manage.
upvoted 1 times
chris_spencer
1 month, 1 week ago
Selected Answer: A
After reading this discussion I am also with A, because D has no Auto Scaling.
upvoted 1 times
fabriciollf
1 month, 2 weeks ago
Selected Answer: A
In my opinion, the key is this part of the question: "LEAST operational overhead". Serverless is the best fit here.
upvoted 1 times
Fastercut
1 month, 2 weeks ago
A: It requires huge code restructuring; normally, rewriting the code would be the last option of any architectural change. B: Kubernetes increases the operational overhead in terms of the knowledge needed to run it and the complexity of its configuration. C: Supports scaling and meets all the requirements with minimal effort. D: The ALB load balances effectively but would not exactly be able to "handle the new and varying load", I guess. A and C both kind of satisfy the requirement and have the least overhead, but considering the other factors, personally I would go with option C.
upvoted 1 times
AWSum1
1 month, 3 weeks ago
Selected Answer: A
Option A will have the LEAST operational effort.
upvoted 1 times
amministrazione
2 months, 1 week ago
D. Create an Application Load Balancer (ALB) in front of the API. Move the EC2 instances to private subnets in the VPC. Add the EC2 instances as targets for the ALB. Update the Route 53 record to point to the ALB.
upvoted 1 times
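For reference, here is a sketch of the wiring option D describes, using boto3. All identifiers (subnets, security group, VPC, instance IDs, hosted zone, and record name) are hypothetical placeholders; it only illustrates the sequence: create the ALB and target group, register the instances, then point Route 53 at the ALB with an alias record.

```python
# Sketch of option D's wiring with hypothetical identifiers throughout.
import boto3

elbv2 = boto3.client("elbv2")
route53 = boto3.client("route53")

# 1. Internet-facing ALB in the public subnets.
lb = elbv2.create_load_balancer(
    Name="api-alb",
    Subnets=["subnet-aaaa1111", "subnet-bbbb2222"],   # hypothetical public subnets
    SecurityGroups=["sg-0123456789abcdef0"],          # hypothetical
    Scheme="internet-facing",
    Type="application",
)["LoadBalancers"][0]

# 2. Target group for the API instances (now moved to private subnets).
tg = elbv2.create_target_group(
    Name="api-targets",
    Protocol="HTTP",
    Port=80,
    VpcId="vpc-0abc1234",                             # hypothetical
    TargetType="instance",
    HealthCheckPath="/health",                        # hypothetical health check path
)["TargetGroups"][0]

elbv2.register_targets(
    TargetGroupArn=tg["TargetGroupArn"],
    Targets=[{"Id": i} for i in ["i-011111111111111", "i-022222222222222"]],  # hypothetical IDs
)

elbv2.create_listener(
    LoadBalancerArn=lb["LoadBalancerArn"],
    Protocol="HTTP",
    Port=80,
    DefaultActions=[{"Type": "forward", "TargetGroupArn": tg["TargetGroupArn"]}],
)

# 3. Replace the multivalue A records with a single alias record pointing at the ALB.
route53.change_resource_record_sets(
    HostedZoneId="Z0000000000EXAMPLE",                # hypothetical
    ChangeBatch={"Changes": [{
        "Action": "UPSERT",
        "ResourceRecordSet": {
            "Name": "api.example.com.",
            "Type": "A",
            "AliasTarget": {
                "HostedZoneId": lb["CanonicalHostedZoneId"],
                "DNSName": lb["DNSName"],
                "EvaluateTargetHealth": True,
            },
        },
    }]},
)
```

Note that this alone adds load balancing and health checks but no automatic scaling, which is the main objection raised in the replies.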
Jason666888
3 months ago
Selected Answer: A
It has to be A, period. Problem with C: multivalue routing has an upper limit of 8; Route 53 responds to DNS queries with up to eight healthy records and gives different answers to different DNS resolvers. Also, you need to manage Elastic IP attachments every time new instances scale up for Route 53 multivalue routing. Problem with D: multivalue routing cannot be used with load balancers. Please check the doc here: https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/routing-policy-multivalue.html
upvoted 2 times
Jason666888
3 months ago
So for option C, if you scale up to 8 instances and the API still gets overwhelmed, there's nothing more you can do about it.
upvoted 1 times
Syre
3 months, 1 week ago
Selected Answer: C
A requires more work and is not practical. D cannot be the answer; why are we even moving the instances to private subnets in the first place? No security or other issues are mentioned here.
upvoted 1 times
Reval
3 months, 1 week ago
Creating an Auto Scaling group and managing updates to Route 53 records via a Lambda function involves more complexity and management. The use of an ALB (as in Option D) is more efficient, as it inherently provides load balancing and scaling features without the need to update DNS records constantly.
upvoted 1 times
zolthar_z
3 months, 3 weeks ago
Selected Answer: D
Keep it simple: we can't assume whether the API is a large or small application. The idea is the least operational overhead, and that is just adding an ALB. We don't know the effort needed to move the application to Lambda. The answer is D.
upvoted 3 times
8693a49
3 months, 1 week ago
True, some apps won't work well on Lambda. On the other hand option D is missing auto-scaling, which means it won't cope with increasing traffic. Assuming the app can be ported to Lambda, A satisfies all requirements: scalability and very low operational effort.
upvoted 1 times
Moghite
3 months, 3 weeks ago
Selected Answer: D
The response is D. A - requires significant refactoring of the application. B - the solution is complex and requires containerizing the application. C - multivalue answer routing is less flexible than using an ALB for load balancing.
upvoted 4 times
8693a49
3 months, 1 week ago
Refactoring is not operational effort. Operational effort is the routine work done once the application is in production (patching the OS, monitoring logs, restarting servers, increasing capacity, etc.). Serverless always has the lowest operational effort for the customer because AWS does it behind the scenes.
upvoted 1 times
mns0173
3 months, 1 week ago
An ALB won't help you with scaling. Clearly a case for C.
upvoted 1 times
subbupro
4 months, 4 weeks ago
D is perfect: the least operational effort. C needs you to write a Lambda function, which is overhead.
upvoted 1 times
[Removed]
5 months ago
Selected Answer: D
The least operational overhead solution is: D. Create an Application Load Balancer (ALB) in front of the API. Move the EC2 instances to private subnets. Add the instances as targets for the ALB. Update the Route 53 record to point to the ALB.
upvoted 17 times
cnethers
4 months, 2 weeks ago
D does not scale to meet demand; it's just a better way to load balance what Route 53 was doing before, so the scaling issue has not been resolved. A requires more dev effort (not a consideration in the question) and less ops effort, so I would have to lean to A... The answer selection is poor IMO for this question.
upvoted 3 times
Community vote distribution: A (35%), C (25%), B (20%), Other