
Exam Professional Machine Learning Engineer topic 1 question 206 discussion

Actual exam question from Google's Professional Machine Learning Engineer exam
Question #: 206
Topic #: 1

You are building a predictive maintenance model to preemptively detect part defects in bridges. You plan to use high-definition images of the bridges as model inputs. You need to explain the output of the model to the relevant stakeholders so they can take appropriate action. How should you build the model?

  • A. Use scikit-learn to build a tree-based model, and use SHAP values to explain the model output.
  • B. Use scikit-learn to build a tree-based model, and use partial dependence plots (PDP) to explain the model output.
  • C. Use TensorFlow to create a deep learning-based model, and use Integrated Gradients to explain the model output.
  • D. Use TensorFlow to create a deep learning-based model, and use the sampled Shapley method to explain the model output.
Suggested Answer: C

Comments

dija123
5 months ago
Selected Answer: C
Use Integrated Gradients to explain the model output
upvoted 2 times
pinimichele01
7 months, 3 weeks ago
Selected Answer: C
https://cloud.google.com/vertex-ai/docs/explainable-ai/overview
upvoted 2 times
Shark0
7 months, 3 weeks ago
Selected Answer: C
Given the scenario of using high-definition images as inputs for predictive maintenance on bridges, and the need to explain the model output to stakeholders, the most appropriate choice is C: use TensorFlow to create a deep learning-based model, and use Integrated Gradients to explain the model output.

Integrated Gradients explains the predictions of deep learning models by attributing the contribution of each pixel in the input image to the final prediction. This shows which parts of the bridge images are most influential in the model's decision-making, helping stakeholders understand why a particular prediction was made and take appropriate action.
upvoted 2 times
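As a concrete illustration of the attribution idea described in the comment above, here is a minimal TensorFlow sketch of Integrated Gradients. The Keras classifier `model`, the `target_class` index, and the input preprocessing are hypothetical placeholders and not part of the original question:

```python
import tensorflow as tf

def integrated_gradients(model, image, target_class, baseline=None, steps=50):
    """Approximate Integrated Gradients attributions for one input image.

    Assumes `model` is a Keras image classifier returning class scores and
    `image` is a float32 tensor of shape (H, W, C) in the model's input range.
    """
    if baseline is None:
        baseline = tf.zeros_like(image)  # an all-black image as the reference point

    # Build `steps + 1` images interpolated on a straight line from baseline to input.
    alphas = tf.linspace(0.0, 1.0, steps + 1)[:, tf.newaxis, tf.newaxis, tf.newaxis]
    interpolated = baseline[tf.newaxis] + alphas * (image - baseline)[tf.newaxis]

    with tf.GradientTape() as tape:
        tape.watch(interpolated)
        scores = model(interpolated)[:, target_class]  # score of the class to explain

    grads = tape.gradient(scores, interpolated)

    # Trapezoidal approximation of the path integral of gradients, then scale by
    # the input difference to get per-pixel attributions.
    avg_grads = tf.reduce_mean((grads[:-1] + grads[1:]) / 2.0, axis=0)
    return (image - baseline) * avg_grads
```

The attributions have the same shape as the input image, so large absolute values mark the pixels that pushed the defect score up or down; increasing `steps` gives a closer approximation of the path integral at the cost of extra forward and backward passes.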
BlehMaks
10 months, 2 weeks ago
Selected Answer: C
https://cloud.google.com/ai-platform/prediction/docs/ai-explanations/overview#compare-methods
upvoted 2 times
pinimichele01
7 months, 3 weeks ago
https://cloud.google.com/vertex-ai/docs/explainable-ai/overview is the right link; yours is deprecated!
upvoted 1 times
pikachu007
10 months, 2 weeks ago
Selected Answer: C
  • Handling image input: Deep learning models excel in processing complex visual data like high-definition images, making them ideal for extracting relevant features from bridge images for defect detection.
  • Explainability with Integrated Gradients: Integrated Gradients is a powerful technique specifically designed to explain the predictions of deep learning models. It attributes model output to specific input features, providing insights into how the model makes decisions.
  • Visualization: Integrated Gradients can generate visual explanations, such as heatmaps, that highlight image regions most influential to predictions, aiding in understanding and trust for stakeholders.
upvoted 1 times
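Building on the heatmap point in the comment above, here is a hedged matplotlib sketch that overlays attributions (for example, the output of the Integrated Gradients sketch earlier) on the original bridge image. The variable names are illustrative assumptions, not from the original discussion:

```python
import matplotlib.pyplot as plt
import tensorflow as tf

def plot_attribution_overlay(image, attributions, alpha=0.5):
    """Overlay a per-pixel attribution heatmap on the original image.

    Assumes `image` is a float tensor in [0, 1] with shape (H, W, C) and
    `attributions` is the (H, W, C) output of an attribution method.
    """
    # Collapse the channel dimension: pixels with a large absolute attribution
    # contributed most to the predicted defect score.
    heatmap = tf.reduce_sum(tf.abs(attributions), axis=-1)

    plt.imshow(image.numpy())                                 # original image
    plt.imshow(heatmap.numpy(), cmap="inferno", alpha=alpha)  # attribution overlay
    plt.axis("off")
    plt.show()
```

In practice, stakeholders would see the bridge surface with the regions that drove the defect prediction highlighted, which is the kind of actionable explanation the question asks for.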
Community vote distribution: A (35%), C (25%), B (20%), Other