Exam Certified Data Engineer Associate topic 1 question 74 discussion

Actual exam question from Databricks's Certified Data Engineer Associate
Question #: 74
Topic #: 1

Which of the following must be specified when creating a new Delta Live Tables pipeline?

  • A. A key-value pair configuration
  • B. The preferred DBU/hour cost
  • C. A path to cloud storage location for the written data
  • D. A location of a target database for the written data
  • E. At least one notebook library to be executed
Suggested Answer: E
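
For context on why E fits: the notebook library is where the pipeline's dataset definitions live. A minimal sketch of such a notebook, assuming the Python dlt API (table names, source path, and column are illustrative):

```python
# Minimal DLT notebook library (names and paths are illustrative).
# The pipeline executes this notebook; each @dlt.table function
# registers the DataFrame it returns as a dataset in the pipeline.
import dlt
from pyspark.sql import functions as F

@dlt.table(comment="Bronze table; the source path is a placeholder.")
def raw_orders():
    # `spark` is provided by the Databricks notebook runtime.
    return spark.read.json("/databricks-datasets/retail-org/sales_orders/")

@dlt.table(comment="Silver table derived from raw_orders.")
def valid_orders():
    # dlt.read references another dataset defined in this pipeline.
    return dlt.read("raw_orders").where(F.col("order_number").isNotNull())
```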

Comments

Stemix
Highly Voted 10 months ago
Selected Answer: E
Correct answer is E; the storage location is optional: "(Optional) Enter a Storage location for output data from the pipeline. The system uses a default location if you leave Storage location empty."
upvoted 7 times
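To make the required-versus-optional split concrete, here is a sketch of creating a pipeline through the Pipelines REST API (POST /api/2.0/pipelines). The workspace URL, token, and notebook path are placeholders; note that libraries is populated while the optional storage and target fields are simply left out:

```python
# Sketch: create a DLT pipeline via the REST API.
# Only a name and one notebook library are supplied; the optional
# "storage" and "target" fields are omitted, so defaults apply.
import requests

WORKSPACE_URL = "https://<your-workspace>.cloud.databricks.com"  # placeholder
TOKEN = "<personal-access-token>"                                # placeholder

settings = {
    "name": "example-dlt-pipeline",
    "libraries": [
        {"notebook": {"path": "/Users/someone@example.com/dlt_notebook"}}
    ],
    # "storage": "...",  # optional: a default location is used if omitted
    # "target": "...",   # optional: schema to publish datasets to
}

resp = requests.post(
    f"{WORKSPACE_URL}/api/2.0/pipelines",
    headers={"Authorization": f"Bearer {TOKEN}"},
    json=settings,
)
resp.raise_for_status()
print(resp.json())  # the response includes the new pipeline_id
```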
hakimipous
Most Recent 3 days, 16 hours ago
Selected Answer: C
C is correct
upvoted 1 times
Colje
1 month, 3 weeks ago
D. A location of a target database for the written data.
Why this is correct: When creating a Delta Live Tables (DLT) pipeline, you must specify the target database where the resulting data will be written. This ensures that the output of the pipeline is stored properly.
Why the other options are incorrect:
A. A key-value pair configuration: While configurations are useful, they are not mandatory when setting up a DLT pipeline.
B. The preferred DBU/hour cost: You don't specify a cost directly; the DBU is associated with the cluster used.
C. A path to cloud storage location for the written data: While storage paths may be specified, the target database location is required.
E. At least one notebook library: You specify the transformation logic (which could be in notebooks), but this is not a strict requirement for setting up the pipeline itself.
upvoted 1 times
80370eb
3 months, 2 weeks ago
Selected Answer: E
This is a key requirement for creating a Delta Live Tables pipeline. You need to specify notebooks that contain the ETL logic to be executed by the pipeline.
upvoted 1 times
Shinigami76
5 months, 2 weeks ago
C, just tested on Databricks DLT
upvoted 1 times
benni_ale
6 months, 4 weeks ago
Selected Answer: E
To be fair, C is correct as well, but the question is probably hinting at E.
upvoted 1 times
BigMF
8 months, 1 week ago
Selected Answer: C
Per Databricks documentation (see below), you need to select a destination for datasets published by the pipeline: either the Hive metastore or Unity Catalog. I think E is incorrect because it uses the term "Notebook Library" and not just "Notebook". Databricks doc: https://docs.databricks.com/en/delta-live-tables/tutorial-pipelines.html
upvoted 1 times
7082935
2 months, 4 weeks ago
"you need to select a destination for datasets published by the pipeline". This is true if you have a notebook that is writing out a result dataset. However, nothing in this question or documentation states that a Delta Live Tables Pipeline --MUST-- contain a notebook that write dataset results.
upvoted 1 times
azure_bimonster
10 months, 1 week ago
Selected Answer: E
As per the pipeline creation steps, choosing a notebook is mandatory, whereas specifying a location is optional. I would go with answer E.
upvoted 1 times
Azure_2023
10 months, 1 week ago
Selected Answer: E
E. The only non-optional selection is a notebook: https://docs.databricks.com/en/delta-live-tables/tutorial-pipelines.html
upvoted 2 times
Garyn
10 months, 4 weeks ago
Selected Answer: E
E. At least one notebook library to be executed. Explanation: https://docs.databricks.com/en/delta-live-tables/tutorial-pipelines.html Delta Live Tables pipelines execute notebook libraries as part of their operations. These notebooks contain the logic, code, or instructions defining the data processing steps, transformations, or actions to be performed within the pipeline. Specifying at least one notebook library to be executed is crucial when creating a new Delta Live Tables pipeline, as it defines the sequence of operations and the logic to be executed on the data within the pipeline, aligning with the documentation provided.
upvoted 2 times
saaaaaa
11 months, 1 week ago
Selected Answer: E
This should be E. As per https://docs.databricks.com/en/delta-live-tables/tutorial-pipelines.html, the steps to create a pipeline are:
1. Click Workflows in the sidebar, click the Delta Live Tables tab, and click Create Pipeline.
2. Give the pipeline a name and click the file picker icon to select a notebook.
3. Select Triggered for Pipeline Mode.
4. (Optional) Enter a Storage location for output data from the pipeline. The system uses a default location if you leave Storage location empty.
5. (Optional) Specify a Target schema to publish your dataset to the Hive metastore, or a Catalog and a Target schema to publish your dataset to Unity Catalog. See Publish datasets.
6. (Optional) Click Add notification to configure one or more email addresses to receive notifications for pipeline events. See Add email notifications for pipeline events.
7. Click Create.
upvoted 2 times
55f31c8
12 months ago
Selected Answer: C
https://docs.databricks.com/en/delta-live-tables/index.html#what-is-a-delta-live-tables-pipeline
upvoted 1 times
Huroye
1 year ago
The correct answer is E. A DLT pipeline needs a notebook in which you specify the processing logic.
upvoted 3 times
kishore1980
1 year ago
Selected Answer: C
The storage location must be specified to control the object storage location for data written by the pipeline.
upvoted 2 times
meow_akk
1 year, 1 month ago
Ans E: I think it might be E. https://docs.databricks.com/en/delta-live-tables/settings.html says that the target schema and storage may be optional, so that leaves us with E.
upvoted 3 times
Syd
1 year ago
Answer is E. The storage location is optional. https://docs.databricks.com/en/delta-live-tables/tutorial-pipelines.html
upvoted 1 times
kishanu
1 year, 1 month ago
Selected Answer: C
A path to a cloud storage location for the written data: I read this option as referring to the source data stored in cloud storage and ingested into DLT using Auto Loader (see the sketch below).
upvoted 3 times
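If option C is read the way kishanu suggests, the source side would be an Auto Loader read inside the notebook library rather than a pipeline-level setting. A minimal sketch, assuming JSON files landing in cloud storage (the bucket path and table name are illustrative):

```python
# Sketch: ingesting source data from cloud storage with Auto Loader
# inside a DLT notebook. Path and names are placeholders.
import dlt

@dlt.table(comment="Streaming ingestion from cloud storage via Auto Loader.")
def orders_bronze():
    return (
        spark.readStream.format("cloudFiles")         # Auto Loader source
        .option("cloudFiles.format", "json")          # source file format
        .load("s3://example-bucket/landing/orders/")  # illustrative path
    )
```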