Exam AWS Certified Data Engineer - Associate DEA-C01 topic 1 question 166 discussion

A data engineer configured an AWS Glue Data Catalog for data that is stored in Amazon S3 buckets. The data engineer needs to configure the Data Catalog to receive incremental updates.

The data engineer sets up event notifications for the S3 bucket and creates an Amazon Simple Queue Service (Amazon SQS) queue to receive the S3 events.

Which combination of steps should the data engineer take to meet these requirements with LEAST operational overhead? (Choose two.)

  • A. Create an S3 event-based AWS Glue crawler to consume events from the SQS queue.
  • B. Define a time-based schedule to run the AWS Glue crawler, and perform incremental updates to the Data Catalog.
  • C. Use an AWS Lambda function to directly update the Data Catalog based on S3 events that the SQS queue receives.
  • D. Manually initiate the AWS Glue crawler to perform updates to the Data Catalog when there is a change in the S3 bucket.
  • E. Use AWS Step Functions to orchestrate the process of updating the Data Catalog based on S3 events that the SQS queue receives.
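For context, the S3-to-SQS event wiring that the question describes could be set up roughly as in the sketch below, assuming boto3; the bucket name, queue ARN, and prefix are placeholders, and the SQS queue's access policy must separately allow the S3 service to send messages to the queue.

```python
# Sketch only: route "object created" events from the S3 bucket to an SQS queue.
# Bucket name, queue ARN, and prefix are placeholder values, not from the question.
import boto3

s3 = boto3.client("s3")

s3.put_bucket_notification_configuration(
    Bucket="example-data-bucket",
    NotificationConfiguration={
        "QueueConfigurations": [
            {
                "QueueArn": "arn:aws:sqs:us-east-1:123456789012:glue-crawler-events",
                "Events": ["s3:ObjectCreated:*"],
                "Filter": {
                    "Key": {"FilterRules": [{"Name": "prefix", "Value": "raw/"}]}
                },
            }
        ]
    },
)
```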
Suggested Answer: AC
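As a rough illustration of option A, an S3 event-based crawler can be created with boto3 along the lines of the sketch below; the crawler name, IAM role, database, S3 path, and queue ARN are placeholders, not values from the question.

```python
# Sketch only: an S3 event-based Glue crawler that consumes events from the SQS queue.
import boto3

glue = boto3.client("glue")

glue.create_crawler(
    Name="incremental-s3-crawler",
    Role="arn:aws:iam::123456789012:role/GlueCrawlerRole",
    DatabaseName="analytics_db",
    Targets={
        "S3Targets": [
            {
                "Path": "s3://example-data-bucket/raw/",
                # The crawler reads S3 event notifications from this queue
                # instead of relisting the whole prefix on every run.
                "EventQueueArn": "arn:aws:sqs:us-east-1:123456789012:glue-crawler-events",
            }
        ]
    },
    # Event mode restricts each run to the objects referenced by the queued events.
    RecrawlPolicy={"RecrawlBehavior": "CRAWL_EVENT_MODE"},
    SchemaChangePolicy={
        "UpdateBehavior": "UPDATE_IN_DATABASE",
        "DeleteBehavior": "LOG",
    },
)

glue.start_crawler(Name="incremental-s3-crawler")
```

The crawler can then be started on demand or on a schedule; either way, event mode keeps each run incremental rather than recrawling the entire prefix.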

Comments

tucobbad
1 week, 1 day ago
Selected Answer: AC
- Option A suggests creating an S3 event-based AWS Glue crawler to consume events from the SQS queue. This option is appropriate as it allows the crawler to automatically respond to events, thereby reducing manual intervention and ensuring timely updates to the Data Catalog.
- Option C involves using an AWS Lambda function to directly update the Data Catalog based on S3 events received from the SQS queue. This is a strong candidate as it automates the update process without the need for manual scheduling or intervention, thus minimizing operational overhead.
AWS Glue crawlers can consume events from an SQS queue: https://docs.aws.amazon.com/glue/latest/dg/crawler-s3-event-notifications.html
upvoted 1 times
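For option C, a minimal sketch of a Lambda handler triggered by the SQS queue might look like the following; it assumes a Hive-style dt= partition layout plus placeholder database and table names, none of which come from the question.

```python
# Sketch only: Lambda handler that registers new partitions in the Data Catalog
# from S3 "object created" events delivered through the SQS queue.
import json
from urllib.parse import unquote_plus

import boto3

glue = boto3.client("glue")

DATABASE = "analytics_db"  # placeholder
TABLE = "raw_events"       # placeholder


def handler(event, context):
    for sqs_record in event["Records"]:            # one entry per SQS message
        s3_event = json.loads(sqs_record["body"])  # S3 notification payload
        for s3_record in s3_event.get("Records", []):
            bucket = s3_record["s3"]["bucket"]["name"]
            key = unquote_plus(s3_record["s3"]["object"]["key"])
            # Assumed layout: raw/dt=2024-05-01/part-0000.parquet
            if "dt=" not in key:
                continue
            dt_value = key.split("dt=")[1].split("/")[0]
            location = f"s3://{bucket}/{key.rsplit('/', 1)[0]}/"
            try:
                glue.create_partition(
                    DatabaseName=DATABASE,
                    TableName=TABLE,
                    PartitionInput={
                        "Values": [dt_value],
                        "StorageDescriptor": {
                            "Location": location,
                            "InputFormat": "org.apache.hadoop.hive.ql.io.parquet.MapredParquetInputFormat",
                            "OutputFormat": "org.apache.hadoop.hive.ql.io.parquet.MapredParquetOutputFormat",
                            "SerdeInfo": {
                                "SerializationLibrary": "org.apache.hadoop.hive.ql.io.parquet.serde.ParquetHiveSerDe"
                            },
                        },
                    },
                )
            except glue.exceptions.AlreadyExistsException:
                pass  # partition already registered; nothing to do
```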
pikuantne
2 weeks ago
Selected Answer: AB
Based on this article (Option 1 for the architecture) it should be AB:
1. Run the crawler on a schedule.
2. The crawler polls for object-create events in the SQS queue.
3a. If there are events, the crawler updates the Data Catalog.
3b. If not, the crawler stops.
upvoted 1 times
ae35a02
2 weeks, 3 days ago
Selected Answer: BC
AWS Glue crawlers cannot consume events from an SQS queue. D introduces a manual operation, and E introduces more complexity, so BC.
upvoted 1 times
tucobbad
1 week, 1 day ago
The answer is A and C. In fact, AWS Glue crawlers can indeed consume events from an SQS queue: https://docs.aws.amazon.com/glue/latest/dg/crawler-s3-event-notifications.html
upvoted 1 times
Community vote distribution: A (35%), C (25%), B (20%), Other