Exam AWS Certified Machine Learning - Specialty topic 1 question 274 discussion

A machine learning (ML) specialist is training a linear regression model. The specialist notices that the model is overfitting. The specialist applies an L1 regularization parameter and runs the model again. This change results in all features having zero weights.

What should the ML specialist do to improve the model results?

  • A. Increase the L1 regularization parameter. Do not change any other training parameters.
  • B. Decrease the L1 regularization parameter. Do not change any other training parameters.
  • C. Introduce a large L2 regularization parameter. Do not change the current L1 regularization value.
  • D. Introduce a small L2 regularization parameter. Do not change the current L1 regularization value.
Suggested Answer: B
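
A quick way to see why B is suggested: with L1 (Lasso) regularization, a penalty that is too large drives every coefficient exactly to zero, and lowering it lets informative features back in. A minimal sketch, assuming scikit-learn and synthetic data (neither is named in the question):

```python
import numpy as np
from sklearn.datasets import make_regression
from sklearn.linear_model import Lasso

# Synthetic stand-in for the specialist's dataset: 5 of 20 features are informative
X, y = make_regression(n_samples=200, n_features=20, n_informative=5,
                       noise=10.0, random_state=0)

# An overly strong L1 penalty zeroes out every weight (the scenario in the question)
strong = Lasso(alpha=10_000.0).fit(X, y)
print("nonzero weights at large alpha:", np.count_nonzero(strong.coef_))  # expected: 0

# Decreasing the L1 penalty (option B) restores nonzero weights for useful features
weak = Lasso(alpha=1.0).fit(X, y)
print("nonzero weights at small alpha:", np.count_nonzero(weak.coef_))    # expected: > 0
```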

Comments

KarinaAsh
5 months ago
Selected Answer: B
Correct answer is B. Why not D? While introducing a small L2 regularization might help in some cases, it doesn't address the main issue, which is that the L1 regularization is too strong. The primary problem needs to be addressed first.
upvoted 1 times
vkbajoria
1 year ago
Selected Answer: B
Decreasing L1 will ensure that not all features are driven to zero.
upvoted 1 times
F1Fan
1 year, 1 month ago
From Claude 3: Based on the AWS documentation and industry best practices, introducing a small L2 regularization parameter in addition to the existing L1 regularization (Option D) is a recommended approach to address overfitting while retaining the feature selection capabilities of L1 regularization [3]. The combination of L1 and L2 regularization, known as Elastic Net regularization, can effectively handle collinearity and provide a balance between sparsity and weight shrinkage, potentially improving the model results and addressing the overfitting issue [4]. While decreasing the L1 regularization parameter (Option B) may seem like a logical step, it does not directly address the overfitting problem and may not be effective in the specific scenario where all features have zero weights. Introducing a small L2 regularization parameter (Option D) is a more appropriate solution based on AWS documentation and industry best practices.
upvoted 1 times
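
For readers weighing B against C/D: a minimal sketch of the Elastic Net idea mentioned above, assuming scikit-learn's ElasticNet (in its parameterization, alpha * l1_ratio is the L1 strength and alpha * (1 - l1_ratio) the L2 strength). The condition for a weight to be exactly zero depends only on the L1 part, so adding L2 on top of an unchanged, too-large L1 penalty does not bring zeroed weights back:

```python
import numpy as np
from sklearn.datasets import make_regression
from sklearn.linear_model import ElasticNet

X, y = make_regression(n_samples=200, n_features=20, n_informative=5,
                       noise=10.0, random_state=0)

# Keep the (too large) L1 strength fixed and add an L2 term of varying size:
# the weights stay at zero, because zeroing is governed by the L1 part alone.
l1_strength = 10_000.0
for l2_strength in (0.0, 1.0, 100.0):
    alpha = l1_strength + l2_strength
    l1_ratio = l1_strength / alpha
    m = ElasticNet(alpha=alpha, l1_ratio=l1_ratio).fit(X, y)
    print(f"L2={l2_strength:<6} nonzero weights: {np.count_nonzero(m.coef_)}")
```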
endeesa
1 year, 4 months ago
Selected Answer: B
Seems like a problem of too much regularization, so I would start by decreasing L1
upvoted 1 times
wimalik
1 year, 5 months ago
Ans: B. Decreasing the L1 regularization parameter would reduce the penalization on the coefficients, allowing some features to contribute to the model without being driven to zero. This adjustment can help achieve a balance between overfitting and underfitting.
upvoted 1 times
chet100
1 year, 7 months ago
Leaning towards C... confusing though.
upvoted 1 times
kaike_reis
1 year, 8 months ago
Selected Answer: B
Letter B is correct, since L1 is very strong and is eliminating all variables. Options A and C would harm even more, and option D would not change the current result at all.
upvoted 1 times
goku58
1 year, 7 months ago
May I know how adding L2 regularization parameter would harm even more? Both regularizations can prevent overfitting. I'm leaning towards C thinking that even if all L1 (absolute) terms go to 0, there would still be L2 (squared) terms non-0, thereby improving the model. But I'm not sure if keeping L1 value same would still result in all feature going to 0.
upvoted 1 times
Mickey321
1 year, 8 months ago
Selected Answer: B
If you applied an L1 regularization parameter and got all features with zero weights, it means that the parameter value was too high and caused too much shrinkage. This results in a model that underfits the data and has poor performance.
upvoted 2 times
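
Mickey321's point above, that over-shrinkage flips the model from overfitting to underfitting, can be illustrated with a small sweep over the L1 strength (again only a sketch, assuming scikit-learn and synthetic data; the real dataset and library are not specified):

```python
import numpy as np
from sklearn.datasets import make_regression
from sklearn.linear_model import Lasso
from sklearn.model_selection import train_test_split

X, y = make_regression(n_samples=120, n_features=60, n_informative=5,
                       noise=25.0, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Too little L1 overfits (train R^2 >> test R^2); far too much underfits
# (all weights zero, test R^2 near 0). The sweet spot lies in between.
for alpha in (0.01, 1.0, 100.0, 10_000.0):
    m = Lasso(alpha=alpha, max_iter=100_000).fit(X_tr, y_tr)
    print(f"alpha={alpha:<8} nonzero={np.count_nonzero(m.coef_):>2d} "
          f"train R2={m.score(X_tr, y_tr):.2f}  test R2={m.score(X_te, y_te):.2f}")
```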
awsarchitect5
1 year, 9 months ago
Also, I don't think any of the given options improve the model.
upvoted 1 times
kaike_reis
1 year, 8 months ago
L1 is too strong; you can improve it a little by decreasing lambda.
upvoted 1 times
awsarchitect5
1 year, 9 months ago
Selected Answer: B
Can’t apply L2 on zero weights
upvoted 3 times
Community vote distribution: A (35%), C (25%), B (20%), Other