Exam AWS Certified Machine Learning Engineer - Associate MLA-C01 topic 1 question 15 discussion

A company has deployed an XGBoost prediction model in production to predict if a customer is likely to cancel a subscription. The company uses Amazon SageMaker Model Monitor to detect deviations in the F1 score.
During a baseline analysis of model quality, the company recorded a threshold for the F1 score. After several months with no changes to the model, its F1 score decreases significantly.
What could be the reason for the reduced F1 score?

  • A. Concept drift occurred in the underlying customer data that was used for predictions.
  • B. The model was not sufficiently complex to capture all the patterns in the original baseline data.
  • C. The original baseline data had a data quality issue of missing values.
  • D. Incorrect ground truth labels were provided to Model Monitor during the calculation of the baseline.
Suggested Answer: A
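For context, the monitoring setup described in the question can be sketched with the SageMaker Python SDK's ModelQualityMonitor: a baseline job computes classification metrics (including F1) from a labeled dataset and suggests the constraint that Model Monitor later checks against. The S3 paths, column names, role ARN, and instance type below are illustrative assumptions, not details from the question.

```python
# Sketch only: how a model quality baseline for F1 might be created with the
# SageMaker Python SDK. S3 URIs, column names, the role ARN and instance type
# are illustrative assumptions, not values from the question.
from sagemaker import Session
from sagemaker.model_monitor import ModelQualityMonitor
from sagemaker.model_monitor.dataset_format import DatasetFormat

session = Session()

monitor = ModelQualityMonitor(
    role="arn:aws:iam::123456789012:role/SageMakerExecutionRole",  # hypothetical role
    instance_count=1,
    instance_type="ml.m5.xlarge",
    sagemaker_session=session,
)

# The baseline job computes binary-classification metrics (including F1) from a
# labeled dataset and writes suggested constraints, i.e. the F1 threshold that
# later monitoring executions are compared against.
monitor.suggest_baseline(
    baseline_dataset="s3://my-bucket/baseline/predictions_with_labels.csv",  # hypothetical path
    dataset_format=DatasetFormat.csv(header=True),
    problem_type="BinaryClassification",
    inference_attribute="prediction",      # column with the model's predicted label
    probability_attribute="probability",   # column with the predicted probability
    ground_truth_attribute="churned",      # column with the actual outcome
    output_s3_uri="s3://my-bucket/model-quality-baseline/",  # hypothetical output location
)
```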

Comments

ninomfr64
3 weeks, 3 days ago
Selected Answer: A
A. Yes. Concept drift is an evolution of the data that invalidates the model: the statistical properties of the target variable the model is trying to predict change over time in unforeseen ways, so predictions become less accurate as time passes.
B. No. If that were the case, the F1 score would have been low from the beginning; it does not explain a change after months.
C. No, for the same reason as B.
D. No. Incorrect labels in the baseline calculation would undermine the baseline F1 value, but they do not explain a significant drop after months.
upvoted 1 time
motk123
1 month, 2 weeks ago
Selected Answer: A
Concept Drift: Occurs when the statistical properties of the data used for predictions change over time, causing the model to underperform on current data.
Why not the other options?
B. If the model complexity was insufficient, the issue would have been detected during the initial evaluation or baseline analysis, not after months of stable performance.
C. A data quality issue would have impacted the model's performance immediately after deployment, not months later.
D. Incorrect labels during baseline calculation could result in an inaccurate baseline F1 score, but it wouldn't explain a significant drop after stable performance over months.
upvoted 3 times
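To make the timing argument concrete, here is a small illustrative sketch (toy numbers, not related to the question's data): the F1 threshold is fixed once at baseline time, so a later drop can only come from the live data the model now sees, not from how the baseline was computed.

```python
# Illustrative toy example of why a fixed baseline flags drift only later:
# the threshold never changes; only the live data the model scores does.
from sklearn.metrics import f1_score

BASELINE_F1_THRESHOLD = 0.80  # recorded once during the baseline analysis

def check_model_quality(y_true, y_pred):
    """Recompute F1 on recent labeled traffic and compare it with the baseline."""
    current_f1 = f1_score(y_true, y_pred)
    return current_f1, current_f1 < BASELINE_F1_THRESHOLD

# At deployment time the model comfortably clears the threshold ...
f1_then, _ = check_model_quality([1, 0, 1, 1, 0, 1, 0, 0], [1, 0, 1, 1, 0, 1, 0, 1])
# ... months later customer behaviour has shifted (concept drift), so the same
# model misses many cancellations and the recomputed F1 falls below the threshold.
f1_now, drifted = check_model_quality([1, 1, 1, 0, 1, 1, 0, 1], [0, 0, 1, 0, 0, 1, 0, 0])
print(f"baseline-era F1={f1_then:.2f}, current F1={f1_now:.2f}, drift detected={drifted}")
```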
Saransundar
1 month, 2 weeks ago
Selected Answer: A
Concept Drift: Refers to a change in the statistical properties of the underlying data distribution over time --> the model performs poorly on new data --> the F1 score decreases.
upvoted 3 times
GiorgioGss
1 month, 4 weeks ago
Selected Answer: A
Option A is the only plausible explanation for a drop that appears "after several months".
upvoted 2 times
Community vote distribution: A (35%), C (25%), B (20%), Other