Exam Professional Machine Learning Engineer topic 1 question 264 discussion

Actual exam question from Google's Professional Machine Learning Engineer
Question #: 264
Topic #: 1

You work for a retail company. You have been tasked with building a model to determine the probability of churn for each customer. You need the predictions to be interpretable so the results can be used to develop marketing campaigns that target at-risk customers. What should you do?

  • A. Build a random forest regression model in a Vertex AI Workbench notebook instance. Configure the model to generate feature importances after the model is trained.
  • B. Build an AutoML tabular regression model. Configure the model to generate explanations when it makes predictions.
  • C. Build a custom TensorFlow neural network by using Vertex AI custom training. Configure the model to generate explanations when it makes predictions.
  • D. Build a random forest classification model in a Vertex AI Workbench notebook instance. Configure the model to generate feature importances after the model is trained.
Suggested Answer: D 🗳️
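
For reference, a minimal sketch of what option D could look like in a Vertex AI Workbench notebook using scikit-learn; the file name, feature columns, and label column below are hypothetical, not part of the question.

```python
# Minimal sketch of option D: a random forest *classification* model trained in a
# notebook, with per-customer churn probabilities and post-training feature
# importances. File name, feature columns, and label column are hypothetical.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

df = pd.read_csv("customers.csv")  # hypothetical customer export
features = ["tenure_months", "monthly_spend", "support_tickets"]  # illustrative only
X_train, X_test, y_train, y_test = train_test_split(
    df[features], df["churned"], test_size=0.2, random_state=42
)

model = RandomForestClassifier(n_estimators=200, random_state=42)
model.fit(X_train, y_train)

# Probability of churn for each customer (positive class column).
churn_proba = model.predict_proba(X_test)[:, 1]

# Global feature importances, only available after the model is trained.
for name, importance in zip(features, model.feature_importances_):
    print(f"{name}: {importance:.3f}")
```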

Comments

YangG
1 month, 1 week ago
Selected Answer: B
Probability --> regression model
upvoted 1 times
...
wences
2 months, 1 week ago
Selected Answer: D
Churn probability is required; a regression model would give a point estimate of the label, while a classification model provides the likelihood, as requested.
upvoted 1 times
...
tardigradum
3 months, 2 weeks ago
Selected Answer: D
We can't use AutoML due to the lack of explainability. AutoML is a black box, and we can't know which model GCP is using under the hood: while it is true that you can use the feature importance tool with AutoML, GCP doesn't publicly disclose the specific models used internally for each type of problem (classification, regression, etc.). AutoML employs a wide range of algorithms, from linear models and decision trees to more complex neural networks. Consequently, the lack of explainability leads us to discard any AutoML option. Regarding the classification/regression discussion, as Roulle says: "Churn problems are cases of classification. We don't predict the label, but the probability of belonging to a given class (churn or not). We then set a threshold to indicate the probability at which we can affirm that the person will or will not unsubscribe."
upvoted 2 times
...
Roulle
4 months, 3 weeks ago
Selected Answer: D
Churn problems are classification problems. We don't predict the label directly, but the probability of belonging to a given class (churn or not); we then set a threshold to indicate the probability above which we can affirm that the person will or will not unsubscribe. We can eliminate the responses that mention regression (A & B), and a random forest is less complex to interpret than a neural network, so I'm pretty sure it's D.
upvoted 1 times
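
A short sketch of the probability-plus-threshold step Roulle describes, assuming the hypothetical fitted `model` and `X_test` from the earlier random forest sketch; the cut-off value is illustrative.

```python
# Sketch of the thresholding step: the classifier outputs P(churn) per customer,
# and a business-chosen cut-off turns that into an at-risk segment. Assumes the
# hypothetical `model` and `X_test` from the earlier random forest sketch.
churn_proba = model.predict_proba(X_test)[:, 1]

threshold = 0.35  # illustrative; marketing may lower it to cast a wider net
at_risk = churn_proba >= threshold

print(f"{at_risk.sum()} of {len(at_risk)} customers flagged as at-risk")
```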
...
gscharly
7 months, 1 week ago
Agree with Yan_X. This is a classification problem, so regression should not be used (rule out A & B). Neural networks don't have explainable features by default, and random forest provides global explanations...
upvoted 1 times
pinimichele01
7 months, 1 week ago
probability of churn for each customer......
upvoted 1 times
...
...
fitri001
7 months, 1 week ago
Selected Answer: B
Since interpretability is key for your churn prediction model to inform marketing campaigns, choose an interpretable model:
  • Logistic regression: a classic choice for interpretability. It provides coefficients for each feature, indicating how a unit increase in that feature impacts the probability of churn. Easy to understand and implement, it's a good starting point.
  • Decision trees with rule extraction: decision trees are inherently interpretable, with each branch representing a decision rule. By extracting these rules, you can understand the specific factors leading to churn (e.g., "customers with low tenure and a high number of support tickets are more likely to churn").
upvoted 2 times
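
A minimal sketch of the coefficient reading fitri001 describes, reusing the hypothetical `X_train`, `y_train`, and `features` from the earlier sketch; the coefficient/odds-ratio interpretation is the standard one for logistic regression.

```python
# Sketch of coefficient-based interpretation: each coefficient is the change in
# log-odds of churn per unit of the (scaled) feature, and exp(coef) is an odds
# ratio. Reuses the hypothetical `X_train`, `y_train`, and `features` from above.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

logit = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
logit.fit(X_train, y_train)

coefs = logit.named_steps["logisticregression"].coef_[0]
for name, coef in zip(features, coefs):
    print(f"{name}: coef={coef:+.3f}, odds ratio={np.exp(coef):.2f}")
```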
...
pinimichele01
7 months, 2 weeks ago
Selected Answer: B
the probability of churn for each customer -> regression -> B
upvoted 1 times
...
Yan_X
7 months, 3 weeks ago
I don't know which one is correct... D generates feature importances only 'after the model is trained', so not for each prediction. And B, 'AutoML tabular regression model', is regression, not classification, which doesn't match the problem...
upvoted 2 times
...
guilhermebutzke
9 months, 1 week ago
Selected Answer: B
My answer: B. "The probability of churn for each customer": the probability is a number, so this is framed as a regression problem (A, B, C). "Predictions to be interpretable": explanations are needed at prediction time, not just for the trained model (B, C). Choosing between "Build an AutoML tabular regression model" and "Build a custom TensorFlow neural network by using Vertex AI custom training", I think B is the most relevant for the problem. However, I also think there is not enough information in the text to choose between the two.
upvoted 3 times
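
For the per-prediction explanations that options B and C refer to, here is a hedged sketch using the Vertex AI Python SDK; it assumes a tabular model has already been deployed to an endpoint with explanations enabled, and the project, region, endpoint ID, and instance fields are placeholders.

```python
# Sketch of per-prediction explanations (what options B and C ask for). Assumes a
# tabular model is already deployed to a Vertex AI endpoint with explanations
# enabled; project, region, endpoint ID, and instance fields are placeholders.
from google.cloud import aiplatform

aiplatform.init(project="my-project", location="us-central1")
endpoint = aiplatform.Endpoint(
    "projects/my-project/locations/us-central1/endpoints/1234567890"
)

instance = {"tenure_months": "3", "monthly_spend": "79.0", "support_tickets": "5"}
response = endpoint.explain(instances=[instance])

# Each explanation carries feature attributions for that single prediction.
for explanation in response.explanations:
    for attribution in explanation.attributions:
        print(attribution.feature_attributions)
```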
...
sonicclasps
9 months, 4 weeks ago
Selected Answer: B
The question asks for explainability of predictions, and answer D does not provide that. Although not the ideal solution, B is the only answer that suits the requirements, because churn can also be expressed as a probability.
upvoted 1 times
tavva_prudhvi
9 months, 2 weeks ago
But Option B says "AutoML regression", while the problem statement is about classification!
upvoted 1 times
...
...
daidai75
10 months ago
Selected Answer: D
The answer is D.
1. Churn prediction is a classification problem: we want to categorize customers as either churning or not churning, not predict a continuous value like revenue. Therefore, a classification model is needed.
2. Random forest models are interpretable: feature importances provide insight into which features contribute most to the model's predictions, making them a good choice for understanding why customers churn. This interpretability is crucial for developing targeted marketing campaigns.
3. Vertex AI Workbench is a suitable platform: it provides notebook instances for building and training models, making it a good choice for this task.
upvoted 2 times
...
shadz10
10 months, 1 week ago
Selected Answer: D
https://cloud.google.com/bigquery/docs/xai-overview
upvoted 1 times
...
pikachu007
10 months, 2 weeks ago
Selected Answer: D
Option A: a random forest regression model, not classification, which is not appropriate for predicting class probabilities.
Option B: while AutoML tabular can generate model explanations, random forests inherently provide more granular insights into feature importance.
Option C: neural networks can be less interpretable than tree-based models, and generating explanations for them often requires additional techniques and libraries.
upvoted 2 times
...
Community vote distribution: A (35%), C (25%), B (20%), Other
