
Exam AWS Certified Machine Learning - Specialty All Questions


Exam AWS Certified Machine Learning - Specialty topic 1 question 227 discussion

A company has hired a data scientist to create a loan risk model. The dataset contains loan amounts and variables such as loan type, region, and other demographic variables. The data scientist wants to use Amazon SageMaker to test bias regarding the loan amount distribution with respect to some of these categorical variables.

Which pretraining bias metrics should the data scientist use to check the bias distribution? (Choose three.)

  • A. Class imbalance
  • B. Conditional demographic disparity
  • C. Difference in proportions of labels
  • D. Jensen-Shannon divergence
  • E. Kullback-Leibler divergence
  • F. Total variation distance
Suggested Answer: DEF
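The three metrics in the suggested answer (D, E, F) all quantify how far apart two probability distributions are, which is what SageMaker Clarify uses them for when comparing a label distribution (here, loan amounts) across facet groups. A minimal sketch of what each one computes, using illustrative made-up histograms for two hypothetical demographic groups (Clarify calculates these for you; this just shows the underlying math):

```python
import numpy as np
from scipy.stats import entropy
from scipy.spatial.distance import jensenshannon

# Hypothetical binned loan-amount distributions for two facet groups
# (e.g., two regions). The numbers are illustrative, not real data.
p = np.array([0.30, 0.40, 0.20, 0.10])  # group A
q = np.array([0.10, 0.30, 0.40, 0.20])  # group B

# Kullback-Leibler divergence KL(P || Q): sum(p * log(p / q))
kl = entropy(p, q)

# Jensen-Shannon divergence: symmetrized, bounded version of KL.
# scipy returns the JS *distance* (its square root), so square it.
js = jensenshannon(p, q, base=2) ** 2

# Total variation distance: half the L1 distance between the distributions
tvd = 0.5 * np.abs(p - q).sum()

print(f"KL  = {kl:.4f}")
print(f"JS  = {js:.4f}")
print(f"TVD = {tvd:.4f}")  # 0.5 * (0.2 + 0.1 + 0.2 + 0.1) = 0.3
```

All three are zero when the two groups have identical loan-amount distributions and grow as the distributions diverge, which is why they answer the question "how different is the loan amount distribution across these categorical facets?"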

Comments

MultiCloudIronMan
2 weeks ago
Selected Answer: ABC
Jensen-Shannon divergence, Kullback-Leibler divergence, and Total variation distance are all measures of statistical distance between probability distributions. They are useful for understanding how different two distributions are, but they are not specifically designed to measure bias in categorical variables.
upvoted 1 times
...
rookiee1111
6 months, 2 weeks ago
Selected Answer: ABC
It is leaning towards bias in data, rather than probability distribution.
upvoted 1 times
...
AIWave
8 months, 2 weeks ago
Selected Answer: BCF
Bias in _data_ before training uses the following metrics: B - helps assess whether there is bias in how loan amounts are distributed among different categories; C - compares the proportions of positive (e.g., approved loans) and negative (e.g., rejected loans) outcomes across different facets (demographic groups); F - a high total variation distance between the predicted and observed labels suggests potential bias
upvoted 3 times
...
vikaspd
11 months, 2 weeks ago
Selected Answer: DEF
Question asks for distributions. DEF are distributions. ABC are imbalances or disparities.
upvoted 1 times
...
seifskl
1 year ago
Selected Answer: DEF
The official website indicates that D, E, and F are used to determine how different the distributions for loan application outcomes are for different demographic groups: https://docs.aws.amazon.com/sagemaker/latest/dg/clarify-measure-data-bias.html
upvoted 2 times
...
loict
1 year, 2 months ago
Selected Answer: ABC
All valid answers ? They are listed as "pre-training bias" here https://docs.aws.amazon.com/sagemaker/latest/dg/clarify-measure-data-bias.html
upvoted 2 times
wendaz
1 year ago
Jensen-Shannon divergence, Kullback-Leibler divergence, and Total variation distance, are used to measure differences between probability distributions, but they are not specifically pretraining bias metrics for checking bias distribution concerning categorical variables in this context.
upvoted 1 times
...
...
Mickey321
1 year, 2 months ago
Selected Answer: BDF
confusing but lean towards B D and F
upvoted 1 times
Mickey321
1 year, 2 months ago
Confusing with DEF
upvoted 1 times
...
...
jyrajan69
1 year, 4 months ago
All are valid answers, so this is definitely an unscored question: https://docs.aws.amazon.com/sagemaker/latest/dg/clarify-measure-data-bias.html
upvoted 4 times
...
cox1960
1 year, 6 months ago
Selected Answer: DEF
answers "How different are the distributions for loan application outcomes for different demographic groups?"
upvoted 2 times
...
Ahmedhadi_
1 year, 6 months ago
Selected Answer: DEF
https://docs.aws.amazon.com/sagemaker/latest/dg/clarify-measure-data-bias.html
upvoted 3 times
...
Mllb
1 year, 7 months ago
Selected Answer: ABC
D. Jensen-Shannon divergence and E. Kullback-Leibler divergence are post-training bias metrics that measure the distance between two probability distributions. They are not pretraining bias metrics and cannot be used to check the bias distribution of the dataset. F. Total variation distance is a post-training bias metric that measures the difference between two probability distributions. It is not a pretraining bias metric and cannot be used to check the bias distribution of the dataset.
upvoted 2 times
injoho
1 year, 6 months ago
They are all pretraining metrics: https://docs.aws.amazon.com/sagemaker/latest/dg/clarify-measure-data-bias.html
upvoted 3 times
brianb08
1 year, 4 months ago
This is correct.... they are all valid answers. Seems this is one of the un-scored questions... those 15 that are used to calibrate or test possible future questions.
upvoted 1 times
...
...
...
jackzhao
1 year, 7 months ago
Agree with austinoy, answer should be DEF.
upvoted 3 times
...
sevosevo
1 year, 8 months ago
https://docs.aws.amazon.com/sagemaker/latest/dg/clarify-measure-data-bias.html
upvoted 1 times
austinoy
1 year, 8 months ago
based on the link shouldn't DEF be the answers?
upvoted 1 times
...
...
Community vote distribution: A (35%), C (25%), B (20%), Other
