

Exam Professional Machine Learning Engineer topic 1 question 21 discussion

Actual exam question from Google's Professional Machine Learning Engineer
Question #: 21
Topic #: 1

You have deployed multiple versions of an image classification model on AI Platform. You want to monitor the performance of the model versions over time. How should you perform this comparison?

  • A. Compare the loss performance for each model on a held-out dataset.
  • B. Compare the loss performance for each model on the validation data.
  • C. Compare the receiver operating characteristic (ROC) curve for each model using the What-If Tool.
  • D. Compare the mean average precision across the models using the Continuous Evaluation feature.
Suggested Answer: D 🗳️
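For context on option D: for an image classification model, "mean average precision" is the macro average of per-class average precision, which Continuous Evaluation reported over time from sampled, ground-truth-labeled predictions. Below is a rough offline sketch of that comparison, using scikit-learn with made-up labels, scores, and version names (this is not the AI Platform API):

```python
# Illustrative only: macro-averaged average precision ("mean average precision"
# in the classification sense) computed per model version on one labeled
# evaluation set. Data and version names are placeholders.
import numpy as np
from sklearn.metrics import average_precision_score
from sklearn.preprocessing import label_binarize

def mean_average_precision(y_true, y_scores, classes):
    """Average the per-class average precision over all classes."""
    y_true_bin = label_binarize(y_true, classes=classes)   # shape (n_samples, n_classes)
    per_class_ap = [
        average_precision_score(y_true_bin[:, i], y_scores[:, i])
        for i in range(len(classes))
    ]
    return float(np.mean(per_class_ap))

classes = [0, 1, 2]
y_eval = np.array([0, 1, 2, 1, 0, 2])                      # ground-truth labels
scores_v1 = np.array([[0.8, 0.1, 0.1], [0.2, 0.7, 0.1], [0.1, 0.2, 0.7],
                      [0.3, 0.6, 0.1], [0.6, 0.3, 0.1], [0.2, 0.2, 0.6]])
scores_v2 = np.array([[0.6, 0.3, 0.1], [0.1, 0.8, 0.1], [0.2, 0.2, 0.6],
                      [0.4, 0.5, 0.1], [0.7, 0.2, 0.1], [0.1, 0.3, 0.6]])

# Same evaluation set, one mAP number per deployed version.
for name, scores in [("v1", scores_v1), ("v2", scores_v2)]:
    print(name, round(mean_average_precision(y_eval, scores, classes), 3))
```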

Comments

chohan
Highly Voted 3 years, 5 months ago
Answer is D
upvoted 13 times
...
Danny2021
Highly Voted 3 years, 2 months ago
D is correct. Choosing the feature / capability that GCP provides is always a good bet. :)
upvoted 6 times
...
jkkim_jt
Most Recent 1 month ago
Selected Answer: D
[B] Compare the loss performance for each model on the validation data --> this should be test data, not validation data
upvoted 1 times
...
bludw
5 months ago
Selected Answer: A
The answer is A. I am not sure why people choose B over A, since you may overfit to your validation set, whereas the held-out set is used rarely, so there is no opportunity to overfit to it.
upvoted 2 times
...
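A minimal sketch of the held-out comparison described in the comment above (option A): score every deployed version on the same held-out set, one that was never used for training or model selection, and compare the cross-entropy loss. The labels, probabilities, and version names below are made up, and this is not an AI Platform call:

```python
# Illustrative only: lower loss on the same held-out data suggests the better version.
import numpy as np
from sklearn.metrics import log_loss

held_out_labels = np.array([0, 2, 1, 1, 0])                 # ground truth
version_predictions = {                                     # per-class probabilities per version
    "v1": np.array([[0.7, 0.2, 0.1], [0.1, 0.1, 0.8], [0.2, 0.6, 0.2],
                    [0.3, 0.5, 0.2], [0.8, 0.1, 0.1]]),
    "v2": np.array([[0.5, 0.3, 0.2], [0.2, 0.2, 0.6], [0.1, 0.8, 0.1],
                    [0.2, 0.7, 0.1], [0.6, 0.3, 0.1]]),
}

for version, probs in version_predictions.items():
    print(version, log_loss(held_out_labels, probs, labels=[0, 1, 2]))
```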
Wookjae
5 months, 3 weeks ago
Continuous Evaluation feature is deprecated.
upvoted 1 times
Goosemoose
5 months, 3 weeks ago
So is the What-If Tool.
upvoted 1 times
...
Goosemoose
5 months, 3 weeks ago
So it looks like B is the best answer.
upvoted 2 times
...
...
saadci
5 months, 3 weeks ago
Selected Answer: B
In the official study guide, this was the explanation given for answer B: "The image classification model is a deep learning model. You minimize the loss of deep learning models to get the best model. So comparing loss performance for each model on validation data is the correct answer."
upvoted 3 times
...
Sum_Sum
1 year ago
Selected Answer: D
D, because you are using a Google-provided feature. Remember, in this exam it's important to always choose the Google services over anything else.
upvoted 4 times
...
claude2046
1 year, 1 month ago
mAP is for object detection, so the answer should be B
upvoted 1 times
...
Liting
1 year, 4 months ago
Selected Answer: D
Went with D, using continuous evaluation feature seems correct to me.
upvoted 1 times
...
SamuelTsch
1 year, 4 months ago
Selected Answer: D
I chose D myself, but after reading this post, https://www.v7labs.com/blog/mean-average-precision, I was not sure about D. It says mAP is commonly used for object detection or instance segmentation tasks. Validation dataset in the GCP context: data the model was not trained on and has not seen.
upvoted 1 times
...
Voyager2
1 year, 5 months ago
Selected Answer: D
D. Compare the mean average precision across the models using the Continuous Evaluation feature. From https://cloud.google.com/vertex-ai/docs/evaluation/introduction: "Vertex AI provides model evaluation metrics, such as precision and recall, to help you determine the performance of your models... AuPRC: The area under the precision-recall (PR) curve, also referred to as average precision. This value ranges from zero to one, where a higher value indicates a higher-quality model."
upvoted 1 times
...
M25
1 year, 6 months ago
Selected Answer: D
Went with D
upvoted 1 times
...
lucaluca1982
1 year, 7 months ago
Selected Answer: B
I go for B. Option D is good when we are already in production.
upvoted 1 times
...
prakashkumar1234
1 year, 8 months ago
To monitor the performance of the model versions over time, you should compare the loss performance for each model on the validation data. Therefore, option B is the correct answer.
upvoted 1 times
Jarek7
1 year, 6 months ago
Please, how? B is not monitoring, it is validation. The definition of monitoring is to "observe and check the progress or quality of (something) over a period of time", so it is a continuous process. Options A, B, and C are each just a one-time check, not monitoring.
upvoted 3 times
...
...
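To make the monitoring-versus-validation distinction above concrete: monitoring means re-evaluating each deployed version on freshly labeled production data on a schedule, building a metric time series per version, which is roughly what Continuous Evaluation automated. A minimal sketch, with invented data and accuracy standing in for whatever metric is tracked:

```python
# Illustrative only: a scheduler (e.g. a daily job) would call record_evaluation
# with newly labeled prediction traffic, growing a time series per version.
import datetime
from collections import defaultdict

import numpy as np

metric_history = defaultdict(list)   # version name -> [(timestamp, accuracy), ...]

def record_evaluation(version, y_true, y_pred):
    """Append one evaluation point for a model version."""
    accuracy = float(np.mean(np.asarray(y_true) == np.asarray(y_pred)))
    metric_history[version].append(
        (datetime.datetime.now(datetime.timezone.utc), accuracy)
    )

# One scheduled run over the same freshly labeled batch for both versions:
record_evaluation("v1", y_true=[0, 1, 1, 2], y_pred=[0, 1, 2, 2])
record_evaluation("v2", y_true=[0, 1, 1, 2], y_pred=[0, 1, 1, 2])
print(dict(metric_history))
```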
Fatiy
1 year, 8 months ago
Selected Answer: B
The best option to monitor the performance of multiple versions of an image classification model on AI Platform over time is to compare the loss performance of each model on the validation data. This is a common way to monitor model performance: the validation data is a subset that is not used for training but is used to evaluate the model during training and to compare different versions. By comparing each model's loss on the same validation data, you can determine which version performs better.
upvoted 4 times
...
enghabeth
1 year, 9 months ago
Selected Answer: D
If you have multiple model versions in a single model and have created an evaluation job for each one, you can view a chart comparing the mean average precision of the model versions over time
upvoted 1 times
...
guilhermebutzke
1 year, 9 months ago
Guys, I'm not sure about answer D, and maybe you could help me with my reasoning. I think comparing loss is a better way to compare model performance than looking at metrics. For example, you can build an image classification model that has good precision metrics simply because the classes are imbalanced, while the loss could be terrible depending on how the chosen loss function penalizes the classes. So losses are better than metrics for evaluating models, and the answer is A or B. I thought A could be the answer because I see validation as part of the training process. So if we want to test model performance over time, we have to use new data, which I suppose is the held-out data.
upvoted 3 times
...
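A small numeric illustration of the imbalance point in the comment above: on a skewed binary set, a model can look fine on a thresholded metric such as accuracy while its log loss is poor, because confident mistakes on the rare class are penalized heavily. The numbers are invented for illustration:

```python
# Illustrative only: 95 negatives, 5 positives.
import numpy as np
from sklearn.metrics import accuracy_score, log_loss

y_true = np.array([0] * 95 + [1] * 5)

p_overconfident = np.full(100, 1e-6)    # always predicts "negative" with near certainty
p_base_rate = np.full(100, 0.05)        # trivial baseline: predicts the positive base rate

print("accuracy (overconfident):",
      accuracy_score(y_true, (p_overconfident >= 0.5).astype(int)))      # 0.95, looks fine
print("log loss (overconfident):", log_loss(y_true, p_overconfident, labels=[0, 1]))
print("log loss (base-rate baseline):", log_loss(y_true, p_base_rate, labels=[0, 1]))
```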
Community vote distribution: A (35%), C (25%), B (20%), Other