Exam Professional Machine Learning Engineer topic 1 question 242 discussion

Actual exam question from Google's Professional Machine Learning Engineer
Question #: 242
Topic #: 1

Your team is training a large number of ML models that use different algorithms, parameters, and datasets. Some models are trained in Vertex AI Pipelines, and some are trained on Vertex AI Workbench notebook instances. Your team wants to compare the performance of the models across both services. You want to minimize the effort required to store the parameters and metrics. What should you do?

  • A. Implement an additional step for all the models running in pipelines and notebooks to export parameters and metrics to BigQuery.
  • B. Create a Vertex AI experiment. Submit all the pipelines as experiment runs. For models trained on notebooks, log parameters and metrics by using the Vertex AI SDK.
  • C. Implement all models in Vertex AI Pipelines. Create a Vertex AI experiment, and associate all pipeline runs with that experiment.
  • D. Store all model parameters and metrics as model metadata by using the Vertex AI Metadata API.
Suggested Answer: B
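For context, here is a minimal sketch of what the notebook side of option B could look like with the Vertex AI SDK (google-cloud-aiplatform). The project ID, region, experiment name, run name, and the parameter/metric values are illustrative assumptions, not part of the question.

```python
# Minimal sketch, assuming the google-cloud-aiplatform SDK is installed and
# the notebook has credentials for the target project.
from google.cloud import aiplatform

aiplatform.init(
    project="my-project",           # assumed project ID
    location="us-central1",         # assumed region
    experiment="model-comparison",  # same experiment the pipeline runs report to
)

# Each notebook training run becomes an experiment run, so it can be compared
# side by side with the pipeline runs in the Vertex AI Experiments UI.
aiplatform.start_run("notebook-xgboost-001")
aiplatform.log_params({"algorithm": "xgboost", "learning_rate": 0.1, "max_depth": 6})

# ... train and evaluate the model here ...

aiplatform.log_metrics({"rmse": 0.42, "mae": 0.31})
aiplatform.end_run()
```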

Comments

fitri001
6 months, 1 week ago
Selected Answer: B
Why B? Centralized tracking: Vertex AI Experiments provides a central place to track and compare models trained in both pipelines and notebooks. Reduced overhead: submitting pipelines as experiment runs leverages the existing pipeline infrastructure for logging and avoids adding extra steps to every model's pipeline. Notebook integration: the Vertex AI SDK lets notebooks log parameters and metrics directly to the experiment, simplifying data collection from notebooks.
Why not C? Moving all models into pipelines may not be feasible or desirable: pipelines are best suited for automated, repeatable training, while notebooks offer flexibility for exploration.
upvoted 3 times
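To illustrate the pipeline half of option B, here is a minimal sketch assuming the google-cloud-aiplatform SDK, where PipelineJob.submit accepts an experiment argument in recent versions; the project, bucket, compiled pipeline spec path, and parameter names are placeholders.

```python
# Minimal sketch: associating a pipeline run with the same Vertex AI experiment
# used by the notebook runs. All resource names below are placeholders.
from google.cloud import aiplatform

aiplatform.init(project="my-project", location="us-central1")

job = aiplatform.PipelineJob(
    display_name="train-xgboost",
    template_path="gs://my-bucket/pipelines/train.json",  # compiled pipeline spec (assumed path)
    parameter_values={"learning_rate": 0.1, "max_depth": 6},
)

# Associating the run with the experiment makes its parameters and metrics
# appear alongside the runs logged from Workbench notebooks.
job.submit(experiment="model-comparison")
```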
omermahgoub
6 months, 2 weeks ago
Selected Answer: B
B. Create a Vertex AI experiment. Submit all the pipelines as experiment runs. For models trained on notebooks, log parameters and metrics by using the Vertex AI SDK.
upvoted 2 times
guilhermebutzke
8 months, 2 weeks ago
Selected Answer: B
My answer: B.
A: Not correct. It is not the best approach compared with a Vertex AI experiment, which covers the same need with less effort.
B: Correct. By submitting all pipelines as experiment runs, you centralize the storage of parameters and metrics for models trained in Vertex AI Pipelines, and notebook-trained models can log to the same experiment through the SDK. This minimizes effort by providing a unified place to store and compare model performance across both services.
C: Not correct. It is not feasible or ideal for models trained on Vertex AI Workbench notebook instances.
D: Not correct. If only basic parameter and metric storage were needed and the team prioritized simplicity over in-depth comparison, option D could be an alternative; for more complex scenarios requiring comprehensive analysis and comparison across diverse models, option B with Vertex AI Experiments is the better fit.
upvoted 3 times
b1a8fae
9 months, 1 week ago
Selected Answer: B
I was divided between B and C, but logging parameters and metrics from notebooks sounds easier than re-implementing a large number of models as Vertex AI pipelines.
upvoted 3 times
shadz10
9 months, 2 weeks ago
Selected Answer: B
B is the correct answer here, I believe. Vertex AI Experiments provides a unified way to store and compare model runs: pipeline runs can be submitted as experiment runs, and for models trained on Vertex AI Workbench notebook instances, logging parameters and metrics with the Vertex AI SDK provides a consistent way to record the necessary information.
upvoted 1 times
pikachu007
9 months, 2 weeks ago
Selected Answer: C
Options A and B: Logging metrics to BigQuery involves additional setup and integration efforts. Option D: Loading Vertex ML Metadata into a pandas DataFrame for visualization requires manual work and doesn't leverage built-in visualization tools.
upvoted 1 times
felipepin
8 months, 1 week ago
Option B does not suggest logging metrics to BigQuery, which is why B is correct.
upvoted 2 times
Community vote distribution: A (35%), C (25%), B (20%), Other