Exam AWS Certified Machine Learning - Specialty topic 1 question 178 discussion

A machine learning (ML) specialist is using Amazon SageMaker hyperparameter optimization (HPO) to improve a model's accuracy. The learning rate parameter is specified in the following HPO configuration:

During the results analysis, the ML specialist determines that most of the training jobs had a learning rate between 0.01 and 0.1, while the best result had a learning rate of less than 0.01. Training jobs need to run regularly over a changing dataset. The ML specialist needs a tuning mechanism that samples learning rates more evenly across the provided range between MinValue and MaxValue.
Which solution provides the MOST accurate result?

  • A. Modify the HPO configuration as follows: Select the most accurate hyperparameter configuration from this HPO job.
  • B. Run three different HPO jobs that use different learning rates from the following intervals for MinValue and MaxValue while using the same number of training jobs for each HPO job: ✑ [0.01, 0.1] ✑ [0.001, 0.01] ✑ [0.0001, 0.001] Select the most accurate hyperparameter configuration from these three HPO jobs.
  • C. Modify the HPO configuration as follows: Select the most accurate hyperparameter configuration from this training job.
  • D. Run three different HPO jobs that use different learning rates from the following intervals for MinValue and MaxValue. Divide the number of training jobs for each HPO job by three: ✑ [0.01, 0.1] ✑ [0.001, 0.01] ✑ [0.0001, 0.001] Select the most accurate hyperparameter configuration from these three HPO jobs.
Suggested Answer: C
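The fix in answer C amounts to switching the learning-rate range from linear to logarithmic scaling. A minimal sketch of what the relevant part of the tuning-job configuration might look like (field names follow the SageMaker HyperParameterTuningJobConfig API; the MinValue/MaxValue shown here are illustrative, since the question's actual configuration image is not reproduced above):

```json
{
  "ParameterRanges": {
    "ContinuousParameterRanges": [
      {
        "Name": "learning_rate",
        "MinValue": "0.0001",
        "MaxValue": "0.1",
        "ScalingType": "Logarithmic"
      }
    ]
  }
}
```

With `"ScalingType": "Logarithmic"`, the tuner searches the range on a log scale, so each order of magnitude gets a comparable share of the training budget instead of the search concentrating near MaxValue.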

Comments

ovokpus
Highly Voted 1 year, 10 months ago
Selected Answer: C
"Choose logarithmic scaling when you are searching a range that spans several orders of magnitude. For example, if you are tuning a linear learner model, and you specify a range of values between .0001 and 1.0 for the learning_rate hyperparameter, searching uniformly on a logarithmic scale gives you a better sample of the entire range than searching on a linear scale would, because searching on a linear scale would, on average, devote 90 percent of your training budget to only the values between .1 and 1.0, leaving only 10 percent of your training budget for the values between .0001 and .1." Based on the above from this link https://docs.aws.amazon.com/sagemaker/latest/dg/automatic-model-tuning-define-ranges.html C is clearly the answer
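The budget argument in the quoted docs is easy to check numerically. The sketch below (a standalone simulation, not SageMaker code) samples a learning rate from [0.0001, 1.0] two ways and counts how often each strategy lands below 0.01, the region where the best result was found:

```python
import math
import random

random.seed(0)
N = 10_000
lo, hi = 0.0001, 1.0

# Uniform sampling on a linear scale: draws concentrate near the top
# of the range, since [0.1, 1.0] covers ~90% of the interval's width.
linear = [random.uniform(lo, hi) for _ in range(N)]

# Log-uniform sampling: draw the exponent uniformly, so each decade
# [0.0001, 0.001], [0.001, 0.01], ... gets an equal share of draws.
log_uniform = [10 ** random.uniform(math.log10(lo), math.log10(hi))
               for _ in range(N)]

frac_small_linear = sum(x < 0.01 for x in linear) / N
frac_small_log = sum(x < 0.01 for x in log_uniform) / N

print(f"linear scale: {frac_small_linear:.1%} of samples below 0.01")
print(f"log scale:    {frac_small_log:.1%} of samples below 0.01")
```

On a linear scale only about 1% of samples fall below 0.01; on a log scale it is about half, which is why logarithmic scaling explores the promising sub-0.01 region far more evenly without needing multiple HPO jobs.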
upvoted 10 times
edvardo
Highly Voted 1 year, 11 months ago
Selected Answer: C
I would choose C: https://docs.aws.amazon.com/sagemaker/latest/dg/automatic-model-tuning-define-ranges.html
upvoted 5 times
DJiang
1 year, 11 months ago
But according to the doc you gave, "Logarithmic scaling works only for ranges that have only values greater than 0." I think choosing ScalingType=Linear is the best fit, but there's no such option.
upvoted 2 times
587df71
4 months ago
Yes and 0.0001 is greater than 0
upvoted 1 times
kaike_reis
Most Recent 8 months, 2 weeks ago
Selected Answer: C
C is the way.
upvoted 1 times
cox1960
12 months ago
Selected Answer: C
Not A, since you choose reverse logarithmic scaling when you are searching a range that is highly sensitive to small changes very close to 1.
upvoted 4 times
Mllb
1 year ago
Selected Answer: B
It should be B. For logarithmic parameters, the min value is the maximum value. This is the reason that C is not correct.
upvoted 4 times
rhuanca
1 year, 11 months ago
B looks better, because the learning rates were split up based on previous experience (0.1 - 0.01); in this case we are changing the structure. On the other hand, A and C only change the scale type, and this means no real changes.
upvoted 2 times
Community vote distribution: A (35%), C (25%), B (20%), Other