Exam DP-100 topic 5 question 22 discussion

Actual exam question from Microsoft's DP-100
Question #: 22
Topic #: 5

HOTSPOT -
You train a classification model by using a decision tree algorithm.
You create an estimator by running the following Python code. The variable feature_names is a list of all feature names, and class_names is a list of all class names.

from interpret.ext.blackbox import TabularExplainer
explainer = TabularExplainer(model, x_train, features=feature_names, classes=class_names)
You need to explain the predictions made by the model for all classes by determining the importance of all features.
For each of the following statements, select Yes if the statement is true. Otherwise, select No.
NOTE: Each correct selection is worth one point.
Hot Area:

Suggested Answer:

Comments

claudiapatricia777
Highly Voted 3 years ago
Answer is: 1 - Yes: no doubt. 2 - Yes: features and classes are optional arguments. 3 - Yes: Mimic also supports tree-based algorithms.
upvoted 16 times
Ben999
3 months, 4 weeks ago
Y,Y,N. - For MimicExplainer you would need to import the MimicExplainer class, which is not the case here.
upvoted 1 times
dushmantha
Highly Voted 3 years, 1 month ago
Answer should be: 1 - Yes: no doubt. 2 - No: there is no way the explainer knows what the class variable is. 3 - Yes: explainers have no restriction against being used with a tree-based method.
upvoted 8 times
deyoz
8 months, 3 weeks ago
The classes field is optional.
upvoted 1 times
deyoz
8 months, 3 weeks ago
Oh yes, I overlooked the phrase "as expected". I totally agree with your answer. The tone of the question gives a hint that the model works without these parameters, but might not work as expected.
upvoted 1 times
haby
Most Recent 10 months, 1 week ago
1 - Yes. 2 - No: the features and classes fields are optional, true, but without them the explainer works, just not "as expected". 3 - Yes.
upvoted 1 times
haby
10 months, 1 week ago
My bad, the 2nd is Yes. features and classes only change the visualization result.
upvoted 1 times
phdykd
1 year, 8 months ago
YYY. 2 - Yes: features and classes are optional arguments.
upvoted 2 times
phdykd
1 year, 8 months ago
YES YES YES
upvoted 1 times
casiopa
1 year, 10 months ago
1 - Yes, 2 - Yes, 3 - No. 3 could be a No because for a MimicExplainer you would need to specify the explainable_model argument. Otherwise, a MimicExplainer is a valid choice. Ex:
explainer = MimicExplainer(model, x_train, explainable_model=DecisionTreeExplainableModel, features=feature_names, classes=class_names)
upvoted 1 times
pancman
2 years, 6 months ago
You can refer to TabularExplainer documentation here: https://interpret-community.readthedocs.io/en/latest/api_reference/interpret_community.html?highlight=tabularexplainer#interpret_community.TabularExplainer
upvoted 1 times
dija123
2 years, 10 months ago
1 - Yes. 2 - Yes, as the "features" and "classes" fields are optional: https://docs.microsoft.com/en-us/azure/machine-learning/how-to-machine-learning-interpretability-aml 3 - Yes.
upvoted 3 times
azayra
2 years, 12 months ago
Yes, yes and yes.
upvoted 2 times
snsnsnsn
3 years, 1 month ago
on 2/9/21
upvoted 1 times
saurabh288
3 years, 3 months ago
MimicExplainer can also be used here.
upvoted 3 times
ljljljlj
3 years, 3 months ago
On exam 2021/7/10
upvoted 6 times
Srik33
3 years, 3 months ago
Why can't Mimic be used here? It can also be used for linear regression black-box models.
upvoted 3 times
YipingRuan
3 years, 3 months ago
You can use one of the following interpretable models as your surrogate model: LightGBM (LGBMExplainableModel), Linear Regression (LinearExplainableModel) https://docs.microsoft.com/en-us/azure/machine-learning/how-to-machine-learning-interpretability
upvoted 1 times
thhvancouver
3 years, 2 months ago
According to the documentation: You can use one of the following interpretable models as your surrogate model: LightGBM (LGBMExplainableModel), Linear Regression (LinearExplainableModel), Stochastic Gradient Descent explainable model (SGDExplainableModel), and Decision Tree (DecisionTreeExplainableModel). So a MimicExplainer can also be used with Decision Tree.
upvoted 5 times