Exam Professional Data Engineer topic 1 question 231 discussion

Actual exam question from Google's Professional Data Engineer
Question #: 231
Topic #: 1

You recently deployed several data processing jobs into your Cloud Composer 2 environment. You notice that some tasks are failing in Apache Airflow. On the monitoring dashboard, you see an increase in total worker memory usage, and worker pod evictions have occurred. You need to resolve these errors. What should you do? (Choose two.)

  • A. Increase the directed acyclic graph (DAG) file parsing interval.
  • B. Increase the Cloud Composer 2 environment size from medium to large.
  • C. Increase the maximum number of workers and reduce worker concurrency.
  • D. Increase the memory available to the Airflow workers.
  • E. Increase the memory available to the Airflow triggerer.
Suggested Answer: C

Comments

ML6
Highly Voted 7 months, 2 weeks ago
Selected Answer: D
If an Airflow worker pod is evicted, all task instances running on that pod are interrupted and later marked as failed by Airflow. The majority of worker pod evictions happen because of out-of-memory situations in workers. You might want to:
- (D) Increase the memory available to workers.
- (C) Reduce worker concurrency. In this way, a single worker handles fewer tasks at once, which provides more memory or storage to each individual task. If you change worker concurrency, you might also want to increase the maximum number of workers so that the number of tasks your environment can handle at once stays the same. For example, if you reduce worker concurrency from 12 to 6, you might want to double the maximum number of workers.
Source: https://cloud.google.com/composer/docs/composer-2/optimize-environments
upvoted 7 times
...
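For reference, a minimal sketch of both fixes described in the comment above, assuming a hypothetical Composer 2 environment named example-environment in us-central1 (both names are placeholders, and all flag values are illustrative):

    # (D) Increase the memory available to each Airflow worker, and raise the
    # worker ceiling. Memory units default to GB if omitted; 8GB is an example.
    gcloud composer environments update example-environment \
        --location us-central1 \
        --worker-memory 8GB \
        --max-workers 12

    # (C) Reduce worker concurrency by overriding the Airflow
    # [celery] worker_concurrency setting. This is shown as a separate update
    # call, since gcloud groups Airflow config overrides apart from the
    # scaling flags.
    gcloud composer environments update example-environment \
        --location us-central1 \
        --update-airflow-configs celery-worker_concurrency=6

Raising --max-workers to 12 here assumes concurrency was cut in half, following the 12-to-6 doubling example quoted from the docs, so the total number of tasks the environment can run at once stays roughly the same.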
taka5094
Most Recent 6 days, 13 hours ago
Selected Answer: C
C and D. On the Monitoring dashboard, in the Workers section, observe the Worker Pods evictions graph for your environment. The Total workers memory usage graph shows the environment from a total perspective: a single worker can still exceed the memory limit even if memory utilization looks healthy at the environment level. According to your observations, you might want to increase the memory available to workers and reduce worker concurrency, so that a single worker handles fewer tasks at once and each individual task gets more memory or storage. If you change worker concurrency, you might also want to increase the maximum number of workers so that the number of tasks your environment can handle at once stays the same (for example, if you reduce worker concurrency from 12 to 6, you might want to double the maximum number of workers).
upvoted 1 times
...
desertlotus1211
1 week, 3 days ago
Answer is B, D.
upvoted 1 times
...
Anudeep58
4 months, 1 week ago
Selected Answer: D
Answer C, D. According to your observations, you might want to increase the memory available to workers and reduce worker concurrency. In this way, a single worker handles fewer tasks at once, which provides more memory or storage to each individual task. If you change worker concurrency, you might also want to increase the maximum number of workers so that the number of tasks your environment can handle at once stays the same. For example, if you reduce worker concurrency from 12 to 6, you might want to double the maximum number of workers.
upvoted 1 times
desertlotus1211
1 week, 3 days ago
Reducing concurrency can reduce memory pressure per worker, but it won't help if the memory limit per pod is too low.
upvoted 1 times
...
...
virat_kohli
4 months, 1 week ago
Selected Answer: D
C. Increase the maximum number of workers and reduce worker concurrency. D. Increase the memory available to the Airflow workers.
upvoted 1 times
...
qq589539483084gfrgrgfr
8 months, 2 weeks ago
Selected Answer: C
C and D. It is clear.
upvoted 3 times
...
Matt_108
8 months, 3 weeks ago
Selected Answer: C
C & D to me
upvoted 2 times
...
GCP001
8 months, 3 weeks ago
Selected Answer: C
C and D. See the memory optimization reference: https://cloud.google.com/composer/docs/composer-2/optimize-environments
upvoted 4 times
AllenChen123
8 months, 3 weeks ago
Agree. Straightforward. https://cloud.google.com/composer/docs/composer-2/optimize-environments#monitor-scheduler -> Figure 3. Graph that displays worker pod evictions
upvoted 4 times
...
...
qq589539483084gfrgrgfr
8 months, 4 weeks ago
Selected Answer: B
B & D. See https://cloud.google.com/composer/docs/composer-2/troubleshooting-dags#task-fails-without-logs and go through the suggested fixes under "If there are airflow-worker pods that show Evicted".
upvoted 1 times
...
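To confirm which worker pods were actually evicted, as the troubleshooting page linked above suggests, one sketch is to inspect the environment's GKE cluster directly. This assumes kubectl access; the cluster name below is a placeholder, and the real one is shown on the environment's details page:

    # Fetch kubectl credentials for the environment's GKE cluster.
    gcloud container clusters get-credentials example-composer-cluster \
        --region us-central1

    # List Airflow worker pods whose status is Evicted.
    kubectl get pods --all-namespaces | grep -E 'airflow-worker.*Evicted'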
Jordan18
9 months ago
Selected Answer: C
C and D
upvoted 2 times
...
raaad
9 months ago
Selected Answer: B
B & D.
B: Scaling up the environment size can provide more resources, including memory, to the Airflow workers. If worker pod evictions are occurring due to insufficient memory, increasing the environment size to allocate more resources could alleviate the problem and improve the stability of your data processing jobs.
D: Directly increasing the memory allocation for Airflow workers addresses high memory usage and worker pod evictions. More memory per worker means each worker can handle more demanding tasks, or a higher volume of tasks, without running out of memory.
upvoted 3 times
GCP001
8 months, 2 weeks ago
Why not B: it doesn't decrease concurrency, so the issue may occur again.
upvoted 2 times
...
...
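For comparison with the B & D argument above, option B maps to a single flag. A sketch with the same placeholder names (note that, per the Composer 2 docs, environment size primarily scales the managed infrastructure such as the Airflow database, rather than per-worker memory limits):

    # (B) Scale the environment from medium to large.
    gcloud composer environments update example-environment \
        --location us-central1 \
        --environment-size large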
Community vote distribution: A (35%), C (25%), B (20%), Other.