Exam Professional Machine Learning Engineer topic 1 question 136 discussion

Actual exam question from Google's Professional Machine Learning Engineer
Question #: 136
Topic #: 1

You work for an online publisher that delivers news articles to over 50 million readers. You have built an AI model that recommends content for the company’s weekly newsletter. A recommendation is considered successful if the article is opened within two days of the newsletter’s published date and the user remains on the page for at least one minute.

All the information needed to compute the success metric is available in BigQuery and is updated hourly. The model is trained on eight weeks of data, on average its performance degrades below the acceptable baseline after five weeks, and training time is 12 hours. You want to ensure that the model’s performance is above the acceptable baseline while minimizing cost. How should you monitor the model to determine when retraining is necessary?

  • A. Use Vertex AI Model Monitoring to detect skew of the input features with a sample rate of 100% and a monitoring frequency of two days.
  • B. Schedule a cron job in Cloud Tasks to retrain the model every week before the newsletter is created.
  • C. Schedule a weekly query in BigQuery to compute the success metric.
  • D. Schedule a daily Dataflow job in Cloud Composer to compute the success metric.
Suggested Answer: C

Comments

TNT87
Highly Voted 1 year, 7 months ago
Selected Answer: C
Option C is the best answer. Since all the information needed to compute the success metric is available in BigQuery and is updated hourly, scheduling a weekly query in BigQuery to compute the success metric is the simplest and most cost-effective way to monitor the model's performance. By comparing the computed success metric against the acceptable baseline, you can determine when the model's performance has degraded below the threshold, and retrain the model accordingly. This approach avoids the cost of additional monitoring infrastructure and leverages existing data processing capabilities.
upvoted 8 times
...
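For illustration, a weekly query along these lines could compute the success metric directly in BigQuery. This is only a sketch: the table and column names (recommendations, article_opens, dwell_time_seconds, and so on) are assumptions, not anything given in the question.

```sql
-- Minimal sketch of a weekly success-metric query (assumed schema; the
-- `recommendations` and `article_opens` tables and their columns are hypothetical).
-- A recommendation is successful if the article was opened within two days
-- of the newsletter's publish date and the reader stayed at least one minute.
SELECT
  r.publish_date,
  COUNTIF(
    o.open_timestamp <= TIMESTAMP_ADD(TIMESTAMP(r.publish_date), INTERVAL 2 DAY)
    AND o.dwell_time_seconds >= 60
  ) / COUNT(*) AS success_rate
FROM `my_project.newsletter.recommendations` AS r
LEFT JOIN `my_project.newsletter.article_opens` AS o
  USING (recommendation_id)
WHERE r.publish_date >= DATE_SUB(CURRENT_DATE(), INTERVAL 7 DAY)
GROUP BY r.publish_date
ORDER BY r.publish_date
```

Saved as a BigQuery scheduled query that runs weekly, the resulting success_rate can be compared against the acceptable baseline, and retraining is triggered only when it drops below that baseline, which is what keeps the cost lower than retraining on a fixed schedule.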
fitri001
Most Recent 6 months, 1 week ago
Selected Answer: C
Weekly checks are frequent enough to catch performance degradation before the next newsletter (5-week threshold). The success metric can be directly calculated within the query, providing a clear indication for retraining.
upvoted 2 times
fitri001
6 months, 1 week ago
A. Vertex AI Model Monitoring for feature skew: this monitors data drift, which can be helpful, but it doesn't directly address the success metric of article opens and dwell time.
B. Cron job for weekly retraining: retraining every week, regardless of performance, is excessive and costly given the 12-hour training time.
D. Daily Dataflow job: daily computation provides more data points, but it is overkill compared to a weekly check, and Cloud Composer adds complexity for a simple task.
upvoted 1 times
...
...
julliet
1 year, 5 months ago
Selected Answer: C
As we have all the data in BigQuery
upvoted 2 times
...
M25
1 year, 5 months ago
Selected Answer: C
Went with C
upvoted 3 times
...
Antmal
1 year, 6 months ago
Selected Answer: A
Option A: when using Vertex AI Model Monitoring, you can set up automated monitoring of the model by detecting skew in the input features, which helps identify changes in the data distribution that may impact the model's performance. Setting the sample rate to 100% ensures that all incoming data is monitored, and a monitoring frequency of two days allows for timely detection of any deviations from the expected data distribution.
upvoted 1 times
Antmal
1 year, 5 months ago
I have changed my mind. I will choose C
upvoted 1 times
...
...
John_Pongthorn
1 year, 8 months ago
Selected Answer: C
This question is surely adapted from this article: https://cloud.google.com/blog/topics/developers-practitioners/continuous-model-evaluation-bigquery-ml-stored-procedures-and-cloud-scheduler
upvoted 2 times
...
John_Pongthorn
1 year, 9 months ago
The answer is here: https://cloud.google.com/blog/topics/developers-practitioners/continuous-model-evaluation-bigquery-ml-stored-procedures-and-cloud-scheduler
upvoted 2 times
...
hiromi
1 year, 10 months ago
Selected Answer: C
C (not sure)
upvoted 1 times
...
pshemol
1 year, 10 months ago
Selected Answer: C
"All the information needed to compute the success metric is available in BigQuery" and "on average its performance degrades below the acceptable baseline after five weeks" so once per week is enough to check models performance. And it's the cheapest solution too.
upvoted 3 times
...
mil_spyro
1 year, 10 months ago
Selected Answer: D
This can help to ensure that the model’s performance is above the baseline, while minimizing cost by avoiding unnecessary retraining.
upvoted 1 times
...
Community vote distribution: A (35%), C (25%), B (20%), Other