C is partially correct, but A is the better answer because it names the method itself: mlflow.shap.log_explanation automatically computes and logs Shapley feature importance plots, giving insight into how much each feature contributes to the model's predictions.
Answer A.
https://mlflow.org/docs/latest/python_api/mlflow.shap.html#mlflow.shap.log_explanation
"... computes and logs explanations of an ML model’s output. Explanations are logged as a directory of artifacts containing the following items generated by SHAP (SHapley Additive exPlanations).
- Base values
- SHAP values (computed using shap.KernelExplainer)
- Summary bar plot (shows the average impact of each feature on model output)"
mlflow.shap: Automatically calculates and logs Shapley feature importance plots.
import mlflow

# Generate and log SHAP explanation artifacts for the first 5 records
# (assumes a fitted model `rf` and training data `X_train`)
mlflow.shap.log_explanation(rf.predict, X_train[:5])
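To make the "SHAP values" and "base values" in the docs concrete, here is a minimal, dependency-free sketch of the exact Shapley computation that shap.KernelExplainer approximates. The linear model, weights, and baseline are illustrative, not from MLflow:

```python
from itertools import combinations
from math import factorial

# Illustrative toy model: a linear function of three features.
WEIGHTS = [2.0, -1.0, 0.5]

def model(x):
    return sum(w * v for w, v in zip(WEIGHTS, x))

def shapley_values(x, baseline):
    """Exact Shapley values: each feature's average marginal contribution
    over all subsets of the other features, with 'absent' features replaced
    by the baseline. KernelExplainer estimates this by sampling subsets."""
    n = len(x)
    phi = [0.0] * n
    for i in range(n):
        others = [j for j in range(n) if j != i]
        for k in range(n):
            for subset in combinations(others, k):
                # Input with only the features in `subset` present
                without_i = list(baseline)
                for j in subset:
                    without_i[j] = x[j]
                with_i = list(without_i)
                with_i[i] = x[i]
                # Shapley weight for a coalition of size k
                weight = factorial(k) * factorial(n - k - 1) / factorial(n)
                phi[i] += weight * (model(with_i) - model(without_i))
    return phi

baseline = [0.0, 0.0, 0.0]   # the "base value" is model(baseline)
x = [1.0, 2.0, 3.0]
phi = shapley_values(x, baseline)

# Efficiency property: base value + sum of SHAP values == model output
assert abs(model(baseline) + sum(phi) - model(x)) < 1e-9
```

For a linear model the result reduces to `w_i * (x_i - baseline_i)`, which is a handy sanity check; the summary bar plot that log_explanation produces is just the average magnitude of these per-feature values across the explained records.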