AWS Certified Data Engineer - Associate DEA-C01 Actual Exam Questions

Last updated on Dec. 5, 2024.
Vendor: Amazon
Exam Code: AWS Certified Data Engineer - Associate DEA-C01
Exam Name: AWS Certified Data Engineer - Associate DEA-C01
Exam Questions: 204
 

Topic 1 - Exam A

Question #1 Topic 1

A data engineer is configuring an AWS Glue job to read data from an Amazon S3 bucket. The data engineer has set up the necessary AWS Glue connection details and an associated IAM role. However, when the data engineer attempts to run the AWS Glue job, the data engineer receives an error message that indicates that there are problems with the Amazon S3 VPC gateway endpoint.
The data engineer must resolve the error and connect the AWS Glue job to the S3 bucket.
Which solution will meet this requirement?

  • A. Update the AWS Glue security group to allow inbound traffic from the Amazon S3 VPC gateway endpoint.
  • B. Configure an S3 bucket policy to explicitly grant the AWS Glue job permissions to access the S3 bucket.
  • C. Review the AWS Glue job code to ensure that the AWS Glue connection details include a fully qualified domain name.
  • D. Verify that the VPC's route table includes inbound and outbound routes for the Amazon S3 VPC gateway endpoint.

Correct Answer: D
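Gateway endpoints for S3 are implemented as route table entries rather than security group rules, so the fix is to make sure the route table used by the Glue connection's subnets carries the endpoint route. A minimal boto3 sketch of that check, with placeholder VPC, route table, and Region values:

```python
import boto3

# Placeholder IDs; substitute the VPC and route table that the Glue connection uses.
VPC_ID = "vpc-0123456789abcdef0"
ROUTE_TABLE_ID = "rtb-0123456789abcdef0"
REGION = "us-east-1"

ec2 = boto3.client("ec2", region_name=REGION)

# Check whether an S3 gateway endpoint is already associated with the route table.
endpoints = ec2.describe_vpc_endpoints(
    Filters=[
        {"Name": "vpc-id", "Values": [VPC_ID]},
        {"Name": "service-name", "Values": [f"com.amazonaws.{REGION}.s3"]},
    ]
)["VpcEndpoints"]

if not any(ROUTE_TABLE_ID in ep.get("RouteTableIds", []) for ep in endpoints):
    # Creating a gateway endpoint with RouteTableIds adds the S3 prefix-list
    # route to the table automatically.
    ec2.create_vpc_endpoint(
        VpcEndpointType="Gateway",
        VpcId=VPC_ID,
        ServiceName=f"com.amazonaws.{REGION}.s3",
        RouteTableIds=[ROUTE_TABLE_ID],
    )
```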

Question #2 Topic 1

A retail company has a customer data hub in an Amazon S3 bucket. Employees from many countries use the data hub to support company-wide analytics. A governance team must ensure that the company's data analysts can access data only for customers who are within the same country as the analysts.
Which solution will meet these requirements with the LEAST operational effort?

  • A. Create a separate table for each country's customer data. Provide access to each analyst based on the country that the analyst serves.
  • B. Register the S3 bucket as a data lake location in AWS Lake Formation. Use the Lake Formation row-level security features to enforce the company's access policies.
  • C. Move the data to AWS Regions that are close to the countries where the customers are. Provide access to each analyst based on the country that the analyst serves.
  • D. Load the data into Amazon Redshift. Create a view for each country. Create separate IAM roles for each country to provide access to data from each country. Assign the appropriate roles to the analysts.

Correct Answer: B
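Lake Formation enforces row-level security through data filters attached to catalog tables, so one table can serve every country without per-country copies. A minimal boto3 sketch, assuming the table is already registered in the AWS Glue Data Catalog and its S3 location is registered with Lake Formation; the account ID, database, table, role, and filter names are placeholders:

```python
import boto3

CATALOG_ID = "111122223333"          # placeholder AWS account ID
DATABASE = "customer_hub"
TABLE = "customers"
ANALYST_ROLE_ARN = "arn:aws:iam::111122223333:role/analysts-de"

lf = boto3.client("lakeformation")

# Row filter that exposes only German customers to the DE analyst role.
lf.create_data_cells_filter(
    TableData={
        "TableCatalogId": CATALOG_ID,
        "DatabaseName": DATABASE,
        "TableName": TABLE,
        "Name": "country-de-only",
        "RowFilter": {"FilterExpression": "country = 'DE'"},
        "ColumnWildcard": {"ExcludedColumnNames": []},  # expose all columns
    }
)

# Grant SELECT through the filter instead of on the whole table.
lf.grant_permissions(
    Principal={"DataLakePrincipalIdentifier": ANALYST_ROLE_ARN},
    Resource={
        "DataCellsFilter": {
            "TableCatalogId": CATALOG_ID,
            "DatabaseName": DATABASE,
            "TableName": TABLE,
            "Name": "country-de-only",
        }
    },
    Permissions=["SELECT"],
)
```

One filter and one grant per country replaces per-country tables, Regions, or Redshift views, which is what makes this the lowest-effort option.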

Question #3 Topic 1

A media company wants to improve a system that recommends media content to customers based on user behavior and preferences. To improve the recommendation system, the company needs to incorporate insights from third-party datasets into the company's existing analytics platform.
The company wants to minimize the effort and time required to incorporate third-party datasets.
Which solution will meet these requirements with the LEAST operational overhead?

  • A. Use API calls to access and integrate third-party datasets from AWS Data Exchange.
  • B. Use API calls to access and integrate third-party datasets from AWS DataSync.
  • C. Use Amazon Kinesis Data Streams to access and integrate third-party datasets from AWS CodeCommit repositories.
  • D. Use Amazon Kinesis Data Streams to access and integrate third-party datasets from Amazon Elastic Container Registry (Amazon ECR).

Correct Answer: A
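AWS Data Exchange exposes entitled third-party data sets directly through its API, so a subscription plus an export job lands the data in S3 with no pipeline to build. A minimal boto3 sketch, with placeholder data set ID and bucket name, and assuming the first listed revision is the newest:

```python
import boto3

# Placeholders; DATA_SET_ID refers to an entitled AWS Data Exchange product.
DATA_SET_ID = "example-data-set-id"
BUCKET = "analytics-platform-landing"

dx = boto3.client("dataexchange", region_name="us-east-1")

# Pick a revision of the subscribed third-party data set
# (assumed here to be the first entry returned).
revisions = dx.list_data_set_revisions(DataSetId=DATA_SET_ID)["Revisions"]
revision_id = revisions[0]["Id"]

# Export the revision's assets straight into the platform's S3 landing bucket.
job = dx.create_job(
    Type="EXPORT_REVISIONS_TO_S3",
    Details={
        "ExportRevisionsToS3": {
            "DataSetId": DATA_SET_ID,
            "RevisionDestinations": [
                {"RevisionId": revision_id, "Bucket": BUCKET, "KeyPattern": "${Asset.Name}"}
            ],
        }
    },
)
dx.start_job(JobId=job["Id"])
```

DataSync moves data between storage systems you already own, and neither CodeCommit nor Amazon ECR distributes third-party datasets, which rules out the other options.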

Question #4 Topic 1

A financial company wants to implement a data mesh. The data mesh must support centralized data governance, data analysis, and data access control. The company has decided to use AWS Glue for data catalogs and extract, transform, and load (ETL) operations.
Which combination of AWS services will implement a data mesh? (Choose two.)

  • A. Use Amazon Aurora for data storage. Use an Amazon Redshift provisioned cluster for data analysis.
  • B. Use Amazon S3 for data storage. Use Amazon Athena for data analysis.
  • C. Use AWS Glue DataBrew for centralized data governance and access control.
  • D. Use Amazon RDS for data storage. Use Amazon EMR for data analysis.
  • E. Use AWS Lake Formation for centralized data governance and access control.

Correct Answer: BE
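In this design, producer domains publish data to S3, Lake Formation handles the centralized grants (as in the sketch for Question #2), and consumers analyze the data in place with Athena against the shared Glue Data Catalog. A minimal boto3 sketch of the consumer side, with placeholder database, table, and output-location names:

```python
import time
import boto3

# Placeholders; the database is assumed to be a Glue Data Catalog database
# governed by Lake Formation, with the underlying data stored in S3.
DATABASE = "sales_domain"
OUTPUT = "s3://athena-results-111122223333/"

athena = boto3.client("athena")

# Consumers query domain data in place; Lake Formation enforces the grants.
qid = athena.start_query_execution(
    QueryString="SELECT region, SUM(amount) FROM orders GROUP BY region",
    QueryExecutionContext={"Database": DATABASE},
    ResultConfiguration={"OutputLocation": OUTPUT},
)["QueryExecutionId"]

# Poll until the query finishes, then print the result rows.
while True:
    state = athena.get_query_execution(QueryExecutionId=qid)["QueryExecution"]["Status"]["State"]
    if state in ("SUCCEEDED", "FAILED", "CANCELLED"):
        break
    time.sleep(1)

if state == "SUCCEEDED":
    for row in athena.get_query_results(QueryExecutionId=qid)["ResultSet"]["Rows"]:
        print([col.get("VarCharValue") for col in row["Data"]])
```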

Question #5 Topic 1

A data engineer maintains custom Python scripts that perform a data formatting process that many AWS Lambda functions use. When the data engineer needs to modify the Python scripts, the data engineer must manually update all the Lambda functions.
The data engineer requires a less manual way to update the Lambda functions.
Which solution will meet this requirement?

  • A. Store a pointer to the custom Python scripts in the execution context object in a shared Amazon S3 bucket.
  • B. Package the custom Python scripts into Lambda layers. Apply the Lambda layers to the Lambda functions.
  • C. Store a pointer to the custom Python scripts in environment variables in a shared Amazon S3 bucket.
  • D. Assign the same alias to each Lambda function. Call each Lambda function by specifying the function's alias.

Correct Answer: B
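A Lambda layer packages the shared scripts once; publishing a new layer version and repointing each function updates them all without touching function code. A minimal boto3 sketch, with hypothetical layer, zip, and function names:

```python
import boto3

# Hypothetical names; layer.zip is assumed to hold the shared formatting
# scripts under the layer's standard python/ prefix.
FUNCTIONS = ["ingest-orders", "ingest-returns"]

lam = boto3.client("lambda")

# Publish a new layer version containing the updated scripts.
with open("layer.zip", "rb") as f:
    layer_arn = lam.publish_layer_version(
        LayerName="data-formatting",
        Content={"ZipFile": f.read()},
        CompatibleRuntimes=["python3.12"],
    )["LayerVersionArn"]

# Point every consuming function at the new version; note that the Layers
# parameter replaces the function's full layer list, not just one entry.
for name in FUNCTIONS:
    lam.update_function_configuration(FunctionName=name, Layers=[layer_arn])
```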
