Exam Professional Machine Learning Engineer topic 1 question 172 discussion

Actual exam question from Google's Professional Machine Learning Engineer
Question #: 172
Topic #: 1

You created an ML pipeline with multiple input parameters. You want to investigate the tradeoffs between different parameter combinations. The parameter options are:
• Input dataset
• Max tree depth of the boosted tree regressor
• Optimizer learning rate

You need to compare the pipeline performance of the different parameter combinations measured in F1 score, time to train, and model complexity. You want your approach to be reproducible, and track all pipeline runs on the same platform. What should you do?

  • A. 1. Use BigQuery ML to create a boosted tree regressor, and use the hyperparameter tuning capability.
    2. Configure the hyperparameter syntax to select different input datasets, max tree depths, and optimizer learning rates. Choose the grid search option.
  • B. 1. Create a Vertex AI pipeline with a custom model training job as part of the pipeline. Configure the pipeline’s parameters to include those you are investigating.
    2. In the custom training step, use the Bayesian optimization method with F1 score as the target to maximize.
  • C. 1. Create a Vertex AI Workbench notebook for each of the different input datasets.
    2. In each notebook, run different local training jobs with different combinations of the max tree depth and optimizer learning rate parameters.
    3. After each notebook finishes, append the results to a BigQuery table.
  • D. 1. Create an experiment in Vertex AI Experiments.
    2. Create a Vertex AI pipeline with a custom model training job as part of the pipeline. Configure the pipeline’s parameters to include those you are investigating.
    3. Submit multiple runs to the same experiment, using different values for the parameters.
Suggested Answer: D
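As a sketch of option D: the grid below enumerates the three parameter options from the question, and the submit loop shows how each combination could become one pipeline run in a single Vertex AI experiment. The project, region, experiment name, dataset URIs, and pipeline template path are illustrative assumptions, not values from the question, and the GCP calls are guarded behind `dry_run` so the sketch runs without credentials.

```python
# Sketch of option D: one Vertex AI pipeline run per parameter combination,
# all submitted to the same experiment. Project, region, experiment name,
# dataset URIs, and template path are placeholder assumptions.
import itertools

# The three parameter options from the question.
input_datasets = ["gs://my-bucket/train_v1.csv", "gs://my-bucket/train_v2.csv"]  # placeholders
max_tree_depths = [4, 6, 8]
learning_rates = [0.01, 0.1]

# Every combination of the three parameters.
param_grid = [
    {"input_dataset": ds, "max_tree_depth": depth, "learning_rate": lr}
    for ds, depth, lr in itertools.product(input_datasets, max_tree_depths, learning_rates)
]

def submit_runs(grid, dry_run=True):
    """Submit one PipelineJob per parameter combination to the same experiment."""
    if dry_run:
        return len(grid)  # skip GCP calls when no credentials are available
    from google.cloud import aiplatform  # requires google-cloud-aiplatform
    aiplatform.init(project="my-project", location="us-central1",
                    experiment="tree-depth-lr-sweep")
    for i, params in enumerate(grid):
        job = aiplatform.PipelineJob(
            display_name=f"sweep-run-{i}",
            template_path="boosted_tree_pipeline.json",  # compiled pipeline definition
            parameter_values=params,                     # the pipeline's input parameters
        )
        # Submitting against the same experiment keeps all runs trackable together.
        job.submit(experiment="tree-depth-lr-sweep")
    return len(grid)

print(submit_runs(param_grid))  # 2 datasets x 3 depths x 2 rates = 12 runs
```

Because every run goes through the same compiled pipeline template, the workflow is reproducible, and the experiment groups all runs for side-by-side comparison.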

Comments

fitri001
6 months, 1 week ago
Selected Answer: D
Vertex AI Experiments: this service lets you group and track different pipeline runs under the same experiment, which makes it easy to compare runs with various parameter combinations.

Vertex AI Pipelines: pipelines let you define a workflow for training your model. You can include a custom training step within the pipeline and configure its parameters as needed. Because all runs follow the same defined workflow, the approach is reproducible.

Submitting multiple runs: by submitting multiple pipeline runs to the same experiment with different parameter values, you can efficiently explore configurations and track their performance metrics (F1 score, training time, model complexity) within Vertex AI Experiments.
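The metric tracking described above can be sketched as follows. The F1 computation is self-contained and runnable; the experiment name and the `aiplatform` logging section are illustrative assumptions (using `start_run`, `log_params`, and `log_metrics` from the Vertex AI SDK for Python), guarded behind `dry_run` so the sketch runs without GCP credentials.

```python
# Sketch of logging per-run metrics (F1, time to train, model complexity)
# to Vertex AI Experiments. Experiment name and project are placeholders.

def f1_score(precision: float, recall: float) -> float:
    """Harmonic mean of precision and recall."""
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

def log_run_metrics(run_name, params, precision, recall, train_seconds, dry_run=True):
    metrics = {
        "f1": f1_score(precision, recall),
        "time_to_train_s": train_seconds,
        "model_complexity": params["max_tree_depth"],  # tree depth as a complexity proxy
    }
    if not dry_run:
        from google.cloud import aiplatform  # requires google-cloud-aiplatform
        aiplatform.init(project="my-project", location="us-central1",
                        experiment="tree-depth-lr-sweep")
        with aiplatform.start_run(run_name):   # one Experiments run per pipeline run
            aiplatform.log_params(params)      # the investigated parameters
            aiplatform.log_metrics(metrics)    # the compared performance metrics
    return metrics

m = log_run_metrics("run-0", {"max_tree_depth": 6, "learning_rate": 0.1},
                    precision=0.8, recall=0.6, train_seconds=42.0)
print(round(m["f1"], 4))  # 0.6857
```

Logging the same metric names for every run is what makes the experiment's comparison view meaningful across parameter combinations.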
upvoted 3 times
fitri001
6 months, 1 week ago
A. BigQuery ML: BigQuery ML doesn't offer functionality like Vertex AI Pipelines for building and managing workflows, and it lacks experiment tracking capabilities.

C. Vertex AI Workbench notebooks: while Workbench provides notebooks for running training jobs, this approach wouldn't be reproducible. Each notebook is a separate entity, making it difficult to track runs and manage different parameter combinations.
upvoted 1 times
pinimichele01
6 months, 3 weeks ago
Selected Answer: D
Vertex AI Experiment was created to compare runs.
upvoted 1 times
36bdc1e
9 months, 2 weeks ago
D. The best option for investigating the tradeoffs between different parameter combinations is to create an experiment in Vertex AI Experiments.
upvoted 2 times
BlehMaks
9 months, 2 weeks ago
Selected Answer: D
Vertex AI Experiment was created to compare runs. A is incorrect because you can't create a boosted tree using BigQueryML https://cloud.google.com/bigquery/docs/bqml-introduction#supported_models
upvoted 1 times
pikachu007
9 months, 3 weeks ago
Selected Answer: D
Given the objective of investigating parameter tradeoffs while ensuring reproducibility and tracking, option D - "Create an experiment in Vertex AI Experiments and submit multiple runs to the same experiment, using different values for the parameters" seems to be the most suitable. This approach provides a structured and trackable environment within Vertex AI Experiments, allowing multiple runs with varied parameters to be monitored for F1 score, training times, and potentially model complexity, enabling a comprehensive analysis of parameter combinations' tradeoffs.
upvoted 1 times
vale_76_na_xxx
9 months, 3 weeks ago
I go with D : https://cloud.google.com/vertex-ai/docs/evaluation/introduction#tabular
upvoted 1 times
b1a8fae
9 months, 3 weeks ago
Selected Answer: D
You want to investigate tradeoffs between different parameter combinations and track all runs on the same platform -> clearly D. Vertex AI experiments etcetera.
upvoted 1 times
Community vote distribution: A (35%), C (25%), B (20%), Other (20%)