
Exam AWS Certified Solutions Architect - Professional SAP-C02 topic 1 question 50 discussion

A company wants to migrate its data analytics environment from on premises to AWS. The environment consists of two simple Node.js applications. One of the applications collects sensor data and loads it into a MySQL database. The other application aggregates the data into reports. When the aggregation jobs run, some of the load jobs fail to run correctly.

The company must resolve the data loading issue. The company also needs the migration to occur without interruptions or changes for the company’s customers.

What should a solutions architect do to meet these requirements?

  • A. Set up an Amazon Aurora MySQL database as a replication target for the on-premises database. Create an Aurora Replica for the Aurora MySQL database, and move the aggregation jobs to run against the Aurora Replica. Set up collection endpoints as AWS Lambda functions behind a Network Load Balancer (NLB), and use Amazon RDS Proxy to write to the Aurora MySQL database. When the databases are synced, disable the replication job and restart the Aurora Replica as the primary instance. Point the collector DNS record to the NLB.
  • B. Set up an Amazon Aurora MySQL database. Use AWS Database Migration Service (AWS DMS) to perform continuous data replication from the on-premises database to Aurora. Move the aggregation jobs to run against the Aurora MySQL database. Set up collection endpoints behind an Application Load Balancer (ALB) as Amazon EC2 instances in an Auto Scaling group. When the databases are synced, point the collector DNS record to the ALB. Disable the AWS DMS sync task after the cutover from on premises to AWS.
  • C. Set up an Amazon Aurora MySQL database. Use AWS Database Migration Service (AWS DMS) to perform continuous data replication from the on-premises database to Aurora. Create an Aurora Replica for the Aurora MySQL database, and move the aggregation jobs to run against the Aurora Replica. Set up collection endpoints as AWS Lambda functions behind an Application Load Balancer (ALB), and use Amazon RDS Proxy to write to the Aurora MySQL database. When the databases are synced, point the collector DNS record to the ALB. Disable the AWS DMS sync task after the cutover from on premises to AWS.
  • D. Set up an Amazon Aurora MySQL database. Create an Aurora Replica for the Aurora MySQL database, and move the aggregation jobs to run against the Aurora Replica. Set up collection endpoints as an Amazon Kinesis data stream. Use Amazon Kinesis Data Firehose to replicate the data to the Aurora MySQL database. When the databases are synced, disable the replication job and restart the Aurora Replica as the primary instance. Point the collector DNS record to the Kinesis data stream.
Suggested Answer: C
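As a sketch of the collection side of answer C: the collector becomes a Lambda function behind an ALB that writes sensor readings to Aurora MySQL through RDS Proxy. The table and field names (`sensor_readings`, `sensorId`, `value`, `recordedAt`) are illustrative, not given in the question, and the database writer is injected so the handler logic can be shown without a live database; in practice it would be a MySQL client (e.g. mysql2) connected to the RDS Proxy endpoint.

```javascript
// Build a parameterized INSERT for one sensor reading. RDS Proxy multiplexes
// these short-lived Lambda connections onto a pooled set of connections to
// the Aurora writer, which is why it pairs well with Lambda here.
function buildInsert(reading) {
  return {
    sql: 'INSERT INTO sensor_readings (sensor_id, reading, recorded_at) VALUES (?, ?, ?)',
    values: [reading.sensorId, reading.value, reading.recordedAt],
  };
}

// An ALB invokes Lambda with the HTTP body in `event.body` and expects a
// response object with statusCode, headers, and body. Because the DNS record
// is simply repointed at the ALB, the customers' collector clients see the
// same HTTP contract as before.
async function handleCollect(event, writer) {
  const reading = JSON.parse(event.body);
  await writer(buildInsert(reading));
  return {
    statusCode: 200,
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ ok: true }),
  };
}
```

This is only a sketch of the handler shape, not the company's actual application; the point is that a Node.js collector translates to a Lambda target behind an ALB without any change visible to clients.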

Comments

OCHT
Highly Voted 1 year, 7 months ago
Selected Answer: C
Options A, B, and D share some similarities with option C but have key differences. Option A uses a Network Load Balancer (NLB) instead of an Application Load Balancer (ALB) and does not use AWS Database Migration Service (AWS DMS) for continuous data replication; instead, it sets up the Aurora MySQL database as a replication target for the on-premises database. Option B does use AWS DMS for continuous data replication and sets up collection endpoints behind an ALB as Amazon EC2 instances in an Auto Scaling group, but it does not create an Aurora Replica for the Aurora MySQL database or use Amazon RDS Proxy to write to it. Option D does not use AWS DMS for continuous data replication or set up collection endpoints behind an ALB; instead, it sets up collection endpoints as an Amazon Kinesis data stream and uses Amazon Kinesis Data Firehose to replicate the data to the Aurora MySQL database.
upvoted 16 times
amministrazione
Most Recent 2 months, 3 weeks ago
C. Set up an Amazon Aurora MySQL database. Use AWS Database Migration Service (AWS DMS) to perform continuous data replication from the on-premises database to Aurora. Create an Aurora Replica for the Aurora MySQL database, and move the aggregation jobs to run against the Aurora Replica. Set up collection endpoints as AWS Lambda functions behind an Application Load Balancer (ALB), and use Amazon RDS Proxy to write to the Aurora MySQL database. When the databases are synced, point the collector DNS record to the ALB. Disable the AWS DMS sync task after the cutover from on premises to AWS.
upvoted 1 times
ninomfr64
10 months, 3 weeks ago
Selected Answer: C
Not A: it is not clear how the on-premises database is replicated to Aurora MySQL, and you cannot place Lambda behind an NLB, as NLB target groups only support instance, IP, and ALB targets (https://docs.aws.amazon.com/elasticloadbalancing/latest/network/load-balancer-target-groups.html). Not B: this keeps the aggregation jobs and the load jobs on the same database instance, so it does not resolve the loading issue. Not D: using Kinesis Data Firehose to replicate a database is not recommended (the solution should involve DMS), and moving data loading to a Kinesis data stream would require changes on the customer side, which the requirements rule out. C is the right solution: use DMS to migrate the on-premises database, move the aggregation jobs to the read replica, and use Lambda (which supports Node.js) behind an ALB so the client side is unaffected.
upvoted 2 times
shaaam80
11 months, 3 weeks ago
Selected Answer: C
Answer C
upvoted 1 times
NikkyDicky
1 year, 4 months ago
Selected Answer: C
It's a C.
upvoted 1 times
SkyZeroZx
1 year, 5 months ago
Selected Answer: C
Keywords: DMS & RDS Proxy. So, C.
upvoted 2 times
leehjworking
1 year, 6 months ago
Selected Answer: C
A, D: restarting the replica as the primary implies an interruption? B: why an Auto Scaling group?
upvoted 3 times
chikorita
1 year, 5 months ago
why ...oh...why?
upvoted 1 times
mfsec
1 year, 7 months ago
Selected Answer: C
I'll go with C.
upvoted 1 times
dev112233xx
1 year, 8 months ago
Selected Answer: C
C, even though the question doesn't mention the total runtime of each job. If a job takes more than 15 minutes, Lambda can't be used; in that case the solution with EC2 instances in an Auto Scaling group would probably be better. Not sure!
upvoted 3 times
zejou1
1 year, 8 months ago
Selected Answer: C
ALB, because you are pointing to a Lambda function, not a network address. Look at the AWS DMS features: https://aws.amazon.com/dms/features/ The main requirement is that the migration must occur without interruptions or changes for the company's customers. C keeps it simple, with no service interruption.
upvoted 1 times
vherman
1 year, 8 months ago
Could anybody explain why ALB? I'd go with API Gateway
upvoted 1 times
zejou1
1 year, 8 months ago
Application: you are using Lambda functions that will be serving API calls; you would use a Network Load Balancer when it is just about routing.
upvoted 1 times
Sarutobi
1 year, 8 months ago
Selected Answer: C
I would say C.
upvoted 1 times
hobokabobo
1 year, 8 months ago
I have a feeling that none of the approaches will work. (a) We have two sources changing the database: the migration and new data coming in. In a relational database this results in inconsistent data; constraints will not be fulfilled. (b) Until the database is fully synced, the second database has inconsistent data: parts of relations and parts of entities are still missing, so constraints will not be fulfilled. None of the approaches addresses the possibility that the aggregation tasks fail because of inconsistency in the database.
upvoted 1 times
hobokabobo
1 year, 8 months ago
ACID principle: atomicity, consistency, isolation and durability. All solutions violate this basic principle of relational databases. https://en.wikipedia.org/wiki/ACID
upvoted 1 times
God_Is_Love
1 year, 8 months ago
The issue is likely caused by the same database being used for heavy writing and reading. The solution is to separate these: a read replica used only for reading, and DMS for migrating the data from on premises to AWS. The writing app saves data to the database through RDS Proxy; the reading app (the aggregation/report job) reads from the replica. B is wrong because the aggregation job needs to use the replica, which only C provides. C is correct.
upvoted 2 times
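The read/write split described in the comments above can be sketched in a few lines. The endpoint hostnames below are hypothetical (real values come from the RDS console or API); the pattern is that an Aurora cluster exposes a writer endpoint (fronted here by RDS Proxy) and a read-only reader endpoint for the replica.

```javascript
// Hypothetical endpoints; real hostnames come from the RDS console/API.
const ENDPOINTS = {
  // Collector load jobs write via RDS Proxy, which fronts the Aurora writer.
  proxy: 'analytics-proxy.proxy-abc123xyz.us-east-1.rds.amazonaws.com',
  // Aggregation/report jobs read from the Aurora Replica via the reader endpoint.
  reader: 'analytics.cluster-ro-abc123xyz.us-east-1.rds.amazonaws.com',
};

// Route each workload to its endpoint. Keeping the aggregation reads off the
// primary is what stops them from disrupting the load jobs.
function endpointFor(workload) {
  switch (workload) {
    case 'load': return ENDPOINTS.proxy;
    case 'aggregate': return ENDPOINTS.reader;
    default: throw new Error(`unknown workload: ${workload}`);
  }
}
```

This is only an illustration of the routing decision, not part of any answer option; each application would use its endpoint in its normal MySQL connection string.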
Fatoch
1 year, 9 months ago
Is it C or B? The same person posted two different answers.
upvoted 1 times
zozza2023
1 year, 9 months ago
Selected Answer: C
C is correct.
upvoted 3 times
masetromain
1 year, 10 months ago
Selected Answer: C
C. This option would meet the requirements of resolving the data loading issue and migrating without interruption or changes for the company's customers. By using AWS DMS for continuous data replication, the company can ensure that the data being migrated is up to date. By setting up an Aurora Replica and moving the aggregation jobs to run against it, the company can offload some of the read workload from the primary database and reduce the risk of issues with the load jobs. By using AWS Lambda functions behind an ALB and Amazon RDS Proxy to write to the Aurora MySQL database, the company can add an extra layer of security and scalability to the data collection process. Finally, by pointing the collector DNS record to the ALB after the databases are synced and disabling the AWS DMS sync task, the company can ensure a smooth cutover to the new environment.
upvoted 4 times
masetromain
1 year, 10 months ago
A. This option would not work: it requires promoting the replica to primary, which may cause an interruption for the company's customers during the cutover. B. This option would not work: without an Aurora Replica to offload the read workload, the aggregation jobs run on the primary database, which can cause the load jobs to fail under heavy load. D. This option would not work: a Kinesis data stream may cause performance issues and is not the best fit for this use case. Additionally, using Kinesis Data Firehose would add complexity to the data replication process and may result in increased latency or data loss.
upvoted 2 times
Community vote distribution: A (35%), C (25%), B (20%), Other