Exam Professional Machine Learning Engineer topic 1 question 229 discussion

Actual exam question from Google's Professional Machine Learning Engineer
Question #: 229
Topic #: 1

You work for a manufacturing company. You need to train a custom image classification model to detect product defects at the end of an assembly line. Although your model is performing well, some images in your holdout set are consistently mislabeled with high confidence. You want to use Vertex AI to understand your model’s results. What should you do?

  • A. Configure feature-based explanations by using Integrated Gradients. Set visualization type to PIXELS, and set clip_percent_upperbound to 95.
  • B. Create an index by using Vertex AI Matching Engine. Query the index with your mislabeled images.
  • C. Configure feature-based explanations by using XRAI. Set visualization type to OUTLINES, and set polarity to positive.
  • D. Configure example-based explanations. Specify the embedding output layer to be used for the latent space representation.
Suggested Answer: D
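
For anyone who wants to see what option D looks like in practice, below is a minimal sketch of uploading a model with example-based explanations enabled using the google-cloud-aiplatform Python SDK. The project, bucket URIs, container image, and the input_1/embedding_layer tensor names are placeholders, and the exact field names inside the Examples config can vary across SDK versions, so treat this as an illustration of the configuring-explanations-example-based guide rather than a drop-in config.

```python
from google.cloud import aiplatform

aiplatform.init(project="my-project", location="us-central1")  # placeholder project/region

# Option D in outline: example-based explanations build a nearest-neighbor index
# over embeddings, so the config points at the examples used to build the index
# and names the layer whose output serves as the latent-space representation.
explanation_parameters = aiplatform.explain.ExplanationParameters(
    {
        "examples": {
            # JSONL instances used to build the neighbor index (placeholder URI).
            "example_gcs_source": {"gcs_source": {"uris": ["gs://my-bucket/examples/*.jsonl"]}},
            "presets": {"query": "PRECISE", "modality": "IMAGE"},
            "neighbor_count": 10,
        }
    }
)

explanation_metadata = aiplatform.explain.ExplanationMetadata(
    inputs={"image": {"input_tensor_name": "input_1"}},                # placeholder tensor name
    outputs={"embedding": {"output_tensor_name": "embedding_layer"}},  # embedding output layer for the latent space
)

model = aiplatform.Model.upload(
    display_name="defect-classifier-examples",
    artifact_uri="gs://my-bucket/model/",  # placeholder artifact path
    serving_container_image_uri="us-docker.pkg.dev/vertex-ai/prediction/tf2-cpu.2-12:latest",
    explanation_parameters=explanation_parameters,
    explanation_metadata=explanation_metadata,
)
```

Once the model is deployed, requesting explanations for a mislabeled holdout image returns its nearest neighbors from the index, which is what lets you see which training examples the model considers similar.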

Comments

YangG
1 month, 1 week ago
Selected Answer: D
The goal is to understand why the model is making specific mistakes, so an example-based explanation makes sense to me.
upvoted 1 time
eico
2 months, 3 weeks ago
Selected Answer: D
https://cloud.google.com/vertex-ai/docs/explainable-ai/overview#example-based: "Improve your data or model: One of the core use cases for example-based explanations is helping you understand why your model made certain mistakes in its predictions, and using those insights to improve your data or model. [...] For example, suppose we have a model that classifies images as either a bird or a plane, and that it is misclassifying the following bird as a plane with high confidence. You can use Example-based explanations to retrieve similar images from the training set to figure out what is happening."
Not A: Integrated Gradients is recommended for low-contrast images, such as X-rays: https://cloud.google.com/vertex-ai/docs/explainable-ai/overview#compare-methods
Not C: you cannot set OUTLINES for XRAI: https://cloud.google.com/ai-platform/prediction/docs/ai-explanations/visualizing-explanations
upvoted 1 time
VipinSingla
8 months, 2 weeks ago
Selected Answer: D
Improve your data or model: One of the core use cases for example-based explanations is helping you understand why your model made certain mistakes in its predictions, and using those insights to improve your data or model. https://cloud.google.com/vertex-ai/docs/explainable-ai/overview
upvoted 3 times
guilhermebutzke
9 months, 1 week ago
My answer: A.
According to this documentation: https://cloud.google.com/vertex-ai/docs/explainable-ai/visualization-settings
Option A aligns with using Integrated Gradients, which is suitable for feature-based explanations. Setting the visualization type to PIXELS gives per-pixel attribution, which helps in understanding the specific regions of the image influencing the model's decision. Additionally, setting clip_percent_upperbound to 95 helps filter out noise and focus on areas of strong attribution, which is crucial for understanding images mislabeled with high confidence.
Option C suggests using XRAI for feature-based explanations with the visualization type set to OUTLINES and polarity set to positive. However, based on the same documentation, XRAI is recommended to use the PIXELS visualization type, not OUTLINES.
upvoted 3 times
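
To make the visualization settings being debated here concrete, here is a minimal sketch, assuming the google-cloud-aiplatform Python SDK, of a feature-based (Integrated Gradients) configuration in which the visualization block carries the type, polarity, and clip_percent_upperbound fields named in options A and C. The tensor names are placeholders.

```python
from google.cloud import aiplatform

# Feature-based explanations: the attribution method (Integrated Gradients here)
# is chosen in ExplanationParameters, while the image overlay is controlled by
# the "visualization" block of the input's metadata.
ig_parameters = aiplatform.explain.ExplanationParameters(
    {"integrated_gradients_attribution": {"step_count": 50}}
)

ig_metadata = aiplatform.explain.ExplanationMetadata(
    inputs={
        "image": {
            "input_tensor_name": "input_1",     # placeholder tensor name
            "modality": "image",
            "visualization": {
                "type": "PIXELS",               # per-pixel attribution (option A); OUTLINES is the alternative
                "polarity": "positive",         # highlight only attributions supporting the predicted class
                "clip_percent_upperbound": 95,  # exclude attributions above the 95th percentile from the overlay
            },
        }
    },
    outputs={"probabilities": {"output_tensor_name": "softmax"}},  # placeholder output tensor
)
```

These fields only affect how the attribution overlay is rendered for image inputs; the attribution method itself is selected in ExplanationParameters.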
sonicclasps
9 months, 4 weeks ago
Selected Answer: A
Although XRAI could be an option, it doesn't allow you to set those options, so the only other answer is A. https://cloud.google.com/vertex-ai/docs/explainable-ai/visualization-settings#visualization_options
upvoted 2 times
vaibavi
9 months, 3 weeks ago
Why isn't it D? https://cloud.google.com/vertex-ai/docs/explainable-ai/overview
upvoted 1 time
vaibavi
9 months, 3 weeks ago
For example, suppose we have a model that classifies images as either a bird or a plane, and that it is misclassifying the following bird as a plane with high confidence. You can use Example-based explanations to retrieve similar images from the training set to figure out what is happening.
upvoted 1 time
sonicclasps
9 months, 3 weeks ago
Yes, you are correct, but having to specify the output layer to be used is definitely no guarantee that you'll get examples that are easily interpretable (imo).
upvoted 1 time
shadz10
10 months, 1 week ago
Selected Answer: A
Going with A. Not C: for XRAI, PIXELS is the default setting and shows areas of attribution; OUTLINES is not recommended for XRAI. https://cloud.google.com/ai-platform/prediction/docs/ai-explanations/visualizing-explanations
upvoted 2 times
Community vote distribution: A (35%), C (25%), B (20%), Other (20%)