
Exam Professional Machine Learning Engineer topic 1 question 147 discussion

Actual exam question from Google's Professional Machine Learning Engineer
Question #: 147
Topic #: 1

Your data science team needs to rapidly experiment with various features, model architectures, and hyperparameters. They need to track the accuracy metrics for various experiments and use an API to query the metrics over time. What should they use to track and report their experiments while minimizing manual effort?

  • A. Use Vertex AI Pipelines to execute the experiments. Query the results stored in MetadataStore using the Vertex AI API.
  • B. Use Vertex AI Training to execute the experiments. Write the accuracy metrics to BigQuery, and query the results using the BigQuery API.
  • C. Use Vertex AI Training to execute the experiments. Write the accuracy metrics to Cloud Monitoring, and query the results using the Monitoring API.
  • D. Use Vertex AI Workbench user-managed notebooks to execute the experiments. Collect the results in a shared Google Sheets file, and query the results using the Google Sheets API.
Suggested Answer: A
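For reference, the pattern behind answer A is: log each run's hyperparameters and accuracy metrics through the Vertex AI SDK (Vertex AI Experiments, backed by Vertex ML Metadata), then query all runs programmatically. This is a minimal sketch, assuming the `google-cloud-aiplatform` package; the project, location, and experiment/run names are illustrative placeholders, not values from the question.

```python
# Hedged sketch: experiment tracking and querying with the Vertex AI SDK.
# Project, location, and experiment names below are illustrative assumptions.

def log_experiment_run(run_name, params, metrics):
    """Record one experiment run's hyperparameters and accuracy metrics."""
    from google.cloud import aiplatform  # deferred: needs google-cloud-aiplatform
    aiplatform.start_run(run_name)
    aiplatform.log_params(params)    # e.g. {"learning_rate": 0.01, "layers": 3}
    aiplatform.log_metrics(metrics)  # e.g. {"accuracy": 0.92}
    aiplatform.end_run()

def best_run(records):
    """Pick the run with the highest accuracy from queried experiment rows."""
    return max(records, key=lambda r: r.get("metric.accuracy", float("-inf")))

if __name__ == "__main__":
    from google.cloud import aiplatform
    # One-time setup: every run logged below is tracked under this experiment.
    aiplatform.init(project="my-project", location="us-central1",
                    experiment="feature-experiments")
    log_experiment_run("run-1", {"learning_rate": 0.01}, {"accuracy": 0.92})
    log_experiment_run("run-2", {"learning_rate": 0.1}, {"accuracy": 0.88})
    # Query all tracked runs (params + metrics) as a pandas DataFrame,
    # then compare them, e.g. find the most accurate run so far.
    df = aiplatform.get_experiment_df()
    print(best_run(df.to_dict("records")))
```

The same metrics are also reachable through the lower-level Vertex ML Metadata API, which is what "query the results stored in MetadataStore using the Vertex AI API" refers to in option A.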

Comments

fitri001
6 months ago
Selected Answer: A
Vertex AI Pipelines: designed for automating experiment execution. You can define steps such as data preprocessing, training with various configurations, and evaluation, which enables rapid experimentation with minimal manual intervention.
Vertex ML Metadata: Vertex AI Pipelines integrates seamlessly with Vertex ML Metadata, which automatically tracks experiment runs, metrics, and artifacts, eliminating the need for manual data collection in spreadsheets.
Vertex AI API: lets you programmatically query the Vertex ML Metadata store to retrieve experiment details, including accuracy metrics, for further analysis or visualization.
upvoted 3 times
fitri001
6 months ago
B and C. BigQuery and Cloud Monitoring are not designed for experiment tracking: BigQuery excels at large-scale data analysis, and Cloud Monitoring is primarily for monitoring system health. While you could store metrics in either, querying them to compare experiments would be cumbersome, and Vertex AI Training on its own lacks built-in experiment tracking, so you would need custom tooling to manage metrics. D. Manual collection in Google Sheets is highly error-prone and inefficient for rapid experimentation; version control and querying metrics across multiple experiments would be challenging.
upvoted 1 times
M25
1 year, 5 months ago
Selected Answer: A
Went with A
upvoted 2 times
TNT87
1 year, 7 months ago
Selected Answer: A
Option A is the best approach to track and report experiments while minimizing manual effort. Vertex AI Pipelines provides a powerful tool for automating machine learning workflows, including data preparation, training, and deployment. The MetadataStore can be used to track the performance of different models by logging accuracy metrics and other important information, and the Vertex AI API can then be used to query the metadata store and retrieve the results of different experiments.
upvoted 1 times
chidstar
1 year, 8 months ago
Selected Answer: A
Vertex AI Pipelines covers everything. "Vertex AI Pipelines helps you to automate, monitor, and govern your ML systems by orchestrating your ML workflow in a serverless manner, and storing your workflow's artifacts using Vertex ML Metadata. By storing the artifacts of your ML workflow in Vertex ML Metadata, you can analyze the lineage of your workflow's artifacts — for example, an ML model's lineage may include the training data, hyperparameters, and code that were used to create the model."
upvoted 1 times
TNT87
1 year, 8 months ago
https://cloud.google.com/vertex-ai/docs/ml-metadata/analyzing
upvoted 1 times
Scipione_
1 year, 8 months ago
Selected Answer: A
Your goal is to use an API to query results while minimizing manual effort. Answer A achieves the goal with the least manual effort.
upvoted 1 times
RaghavAI
1 year, 8 months ago
Selected Answer: A
It's A: Use Vertex AI Pipelines to execute the experiments. Query the results stored in MetadataStore using the Vertex AI API.
upvoted 1 times
Community vote distribution
A (35%), C (25%), B (20%), Other