Welcome to ExamTopics


Exam Professional Machine Learning Engineer topic 1 question 40 discussion

Actual exam question from Google's Professional Machine Learning Engineer
Question #: 40
Topic #: 1

You have a functioning end-to-end ML pipeline that involves tuning the hyperparameters of your ML model using AI Platform, and then using the best-tuned parameters for training. Hypertuning is taking longer than expected and is delaying the downstream processes. You want to speed up the tuning job without significantly compromising its effectiveness. Which actions should you take? (Choose two.)

  • A. Decrease the number of parallel trials.
  • B. Decrease the range of floating-point values.
  • C. Set the early stopping parameter to TRUE.
  • D. Change the search algorithm from Bayesian search to random search.
  • E. Decrease the maximum number of trials during subsequent training phases.
Suggested Answer: CE 🗳️
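For readers who want to see where these two settings actually live, here is a minimal sketch of a legacy AI Platform Training job request in Python. The project, bucket, package, metric, and parameter names are placeholders, not values from the exam; only the field names of the hyperparameters block (the HyperparameterSpec) come from the AI Platform Training docs.

from googleapiclient import discovery

# Minimal sketch (legacy AI Platform Training API). Project, bucket,
# package, and job names are placeholders.
training_inputs = {
    "scaleTier": "BASIC",
    "packageUris": ["gs://my-bucket/trainer-0.1.tar.gz"],
    "pythonModule": "trainer.task",
    "region": "us-central1",
    "hyperparameters": {
        "goal": "MAXIMIZE",
        "hyperparameterMetricTag": "accuracy",
        "maxTrials": 30,                   # E: a smaller trial budget
        "maxParallelTrials": 5,
        "enableTrialEarlyStopping": True,  # C: stop clearly unpromising trials
        "params": [
            {
                "parameterName": "learning_rate",
                "type": "DOUBLE",
                "minValue": 0.0001,
                "maxValue": 0.1,
                "scaleType": "UNIT_LOG_SCALE",
            }
        ],
    },
}

ml = discovery.build("ml", "v1")
ml.projects().jobs().create(
    parent="projects/my-project",
    body={"jobId": "hp_tuning_faster", "trainingInput": training_inputs},
).execute()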

Comments

gcp2021go
Highly Voted 3 years, 3 months ago
I think it should be CE. I can't find any reference saying that B reduces tuning time.
upvoted 19 times
...
Paul_Dirac
Highly Voted 3 years, 5 months ago
Answer: B & C (Ref: https://cloud.google.com/ai-platform/training/docs/using-hyperparameter-tuning). (A) Decreasing the number of parallel trials will increase tuning time. (D) Bayesian search works better and faster than random search, since it is selective about which points to evaluate and uses knowledge of previously evaluated points. (E) maxTrials should be larger than 10 * the number of hyperparameters used, and spanning that minimum space (10 * num_hyperparams) already takes some time, so lowering maxTrials has little effect on reducing tuning time.
upvoted 16 times
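A quick back-of-envelope check of the 10 * num_hyperparameters rule of thumb cited in the comment above; every number below is made up for illustration, not taken from the question or the docs.

# Illustrative numbers only.
num_hyperparams = 3
floor_trials = 10 * num_hyperparams   # rule of thumb: at least 30 trials

current_max_trials = 100
reduced_max_trials = 50               # still well above the 30-trial floor

# The rule only sets a floor; how much time option E actually saves depends on
# how far above that floor the current maxTrials sits.
print(floor_trials, current_max_trials, reduced_max_trials)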
Goosemoose
5 months, 3 weeks ago
Bayesian search can actually cost more time: it converges in fewer iterations than the other algorithms, but not necessarily in less wall-clock time, because trials depend on earlier results and therefore have to run more sequentially.
upvoted 1 times
...
dxxdd7
3 years, 2 months ago
In your link, when they mention maxTrials they say that "In most cases there is a point of diminishing returns after which additional trials have little or no effect on the accuracy." They also say that it can affect time and cost. I think I'd rather go with CE.
upvoted 10 times
...
...
TornikePirveli
Most Recent 3 months, 1 week ago
In the PMLE book it's grid search instead of Bayesian search, which makes sense, but the book also marks "Decrease the number of parallel trials" as a correct answer, which I think is wrong.
upvoted 1 times
...
nktyagi
3 months, 3 weeks ago
Selected Answer: AB
With Vertex AI hyperparameter tuning, you can configure the number of trials and the search algorithm, as well as the range of each parameter.
upvoted 1 times
...
PhilipKoku
5 months, 3 weeks ago
Selected Answer: CD
C) and D)
upvoted 2 times
...
pinimichele01
7 months, 2 weeks ago
Selected Answer: CE
see pawan94
upvoted 2 times
...
pawan94
10 months, 3 weeks ago
C and E, if you reference the latest docs for hyperparameter tuning jobs on Vertex AI (refer: https://cloud.google.com/vertex-ai/docs/training/using-hyperparameter-tuning#:~:text=the%20benefit%20of%20reducing%20the%20time%20the). A is not possible: if you reduce the number of parallel trials, overall completion time gets worse, not better. The question is about how to speed up the process, not about changing how the model is tuned, and changing the optimization algorithm could lead to unexpected results. So in my opinion C and E (after carefully reading the updated docs). And please don't believe everything ChatGPT says; I have encountered many questions where the LLMs give completely wrong answers.
upvoted 4 times
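Since this comment points at the Vertex AI version of the docs, here is a minimal sketch of the same knobs in the Vertex AI Python SDK. The project, region, image, display names, metric, and parameter ranges are placeholders; the sketch only shows where max_trial_count and parallel_trial_count are set.

from google.cloud import aiplatform
from google.cloud.aiplatform import hyperparameter_tuning as hpt

# Placeholders throughout; this only shows where the knobs live.
aiplatform.init(project="my-project", location="us-central1")

custom_job = aiplatform.CustomJob(
    display_name="trainer",
    worker_pool_specs=[{
        "machine_spec": {"machine_type": "n1-standard-4"},
        "replica_count": 1,
        "container_spec": {"image_uri": "gcr.io/my-project/trainer:latest"},
    }],
)

hp_job = aiplatform.HyperparameterTuningJob(
    display_name="hp-tuning",
    custom_job=custom_job,
    metric_spec={"accuracy": "maximize"},
    parameter_spec={
        "learning_rate": hpt.DoubleParameterSpec(min=1e-4, max=1e-1, scale="log"),
        "batch_size": hpt.DiscreteParameterSpec(values=[32, 64, 128], scale="linear"),
    },
    max_trial_count=30,       # keep the trial budget modest (option E)
    parallel_trial_count=5,   # more parallelism means less wall-clock time,
                              # which is why option A would slow things down
)
hp_job.run()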
...
fragkris
11 months, 3 weeks ago
Selected Answer: CD
I chose C and D
upvoted 3 times
...
Sum_Sum
1 year ago
Selected Answer: CD
ChatGPT says: C. Set the early stopping parameter to TRUE. Early stopping: enabling early stopping allows the tuning process to terminate a trial once it becomes clear that it is not producing promising results. This prevents wasting time on unpromising trials and can significantly speed up hyperparameter tuning, focusing resources on more promising parameter combinations. D. Change the search algorithm from Bayesian search to random search. Random search: unlike Bayesian optimization, random search does not attempt to build a model of the objective function. While Bayesian search can be more efficient at finding the optimal parameters, random search is often faster per iteration and can be particularly effective when the hyperparameter space is large, since it does not require as much computation to select the next set of parameters to evaluate.
upvoted 3 times
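For what it's worth, option D maps to a single field in the legacy HyperparameterSpec; when the field is left unspecified, AI Platform defaults to Bayesian optimization. A hypothetical snippet, reusing the shape of the hyperparameters dict from the sketch near the top of the page:

# Option D as a config change (field name from the AI Platform Training
# HyperparameterSpec; the rest of the dict is as in the earlier sketch).
hyperparameters = {
    "goal": "MAXIMIZE",
    "hyperparameterMetricTag": "accuracy",
    "maxTrials": 30,
    "algorithm": "RANDOM_SEARCH",  # default (unspecified) is Bayesian optimization
    # params, parallel trials, and early stopping as before
}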
...
Voyager2
1 year, 5 months ago
Selected Answer: CE
C & E. This video explains max trials and parallel trials very well: https://youtu.be/8hZ_cBwNOss and this link explains early stopping: https://cloud.google.com/ai-platform/training/docs/using-hyperparameter-tuning#early-stopping
upvoted 3 times
...
rexduo
1 year, 6 months ago
Selected Answer: CE
A increases time; for B, the bottleneck of an HP tuning job is normally not the model size; D does reduce time, but might significantly hurt effectiveness.
upvoted 1 times
...
CloudKida
1 year, 6 months ago
Selected Answer: AC
Running parallel trials has the benefit of reducing the time the training job takes (real time; the total processing time required is not typically changed). However, running in parallel can reduce the effectiveness of the tuning job overall. That is because hyperparameter tuning uses the results of previous trials to inform the values to assign to the hyperparameters of subsequent trials. When running in parallel, some trials start without having the benefit of the results of any trials still running.

You can specify that AI Platform Training must automatically stop a trial that has become clearly unpromising. This saves you the cost of continuing a trial that is unlikely to be useful. To permit stopping a trial early, set the enableTrialEarlyStopping value in the HyperparameterSpec to TRUE.
upvoted 1 times
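The time trade-off in the passage quoted above can be made concrete with a rough calculation; the trial count and trial duration below are assumed for illustration, not taken from the docs.

import math

# Assumed numbers, for illustration only.
max_trials = 40
avg_trial_minutes = 20

for parallel in (1, 2, 5, 10):
    wall_clock = math.ceil(max_trials / parallel) * avg_trial_minutes
    print(f"maxParallelTrials={parallel:2d} -> ~{wall_clock:4d} min wall clock")

# Total trial-hours stay roughly constant, but wall-clock time drops as
# parallelism rises; each batch of parallel trials starts with less information
# from completed trials, which is the effectiveness cost the docs describe.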
...
M25
1 year, 6 months ago
Selected Answer: CE
Went with C & E
upvoted 1 times
...
kucuk_kagan
1 year, 8 months ago
Selected Answer: AD
To speed up the tuning job without significantly compromising its effectiveness, you can take the following actions:

A. Decrease the number of parallel trials: by reducing the number of parallel trials, you can limit the amount of computational resources being used at a given time, which may help speed up the tuning job. However, reducing the number of parallel trials too much could limit the exploration of the parameter space and result in suboptimal results.

D. Change the search algorithm from Bayesian search to random search: Bayesian optimization is a computationally intensive method that requires more time and resources than random search. By switching to a simpler method like random search, you may be able to speed up the tuning job without compromising its effectiveness. However, random search may not be as efficient at finding the best hyperparameters as Bayesian optimization.
upvoted 1 times
...
Yajnas_arpohc
1 year, 8 months ago
Selected Answer: DE
Early stopping is for training, not hyperparameter tuning
upvoted 1 times
...
Fatiy
1 year, 9 months ago
Selected Answer: AD
The two actions that can speed up hyperparameter tuning without compromising effectiveness are decreasing the number of parallel trials and changing the search algorithm from Bayesian search to random search.
upvoted 2 times
...
shankalman717
1 year, 9 months ago
Selected Answer: CD
B. Decrease the range of floating-point values: reducing the range of the hyperparameters will decrease the search space and the time it takes to find the optimal hyperparameters. However, if the range is too narrow, it may not be possible to find the best hyperparameters.

C. Set the early stopping parameter to TRUE: setting the early stopping parameter to true will stop a trial when its performance has stopped improving. This helps reduce the number of trials needed and thus speeds up the hypertuning job without compromising its effectiveness.

D. Changing the search algorithm from Bayesian search to random search could also be a valid way to speed up the hypertuning job. Random search can explore the hyperparameter space more efficiently and at lower computational cost than Bayesian search, especially when the search space is large and complex. However, it may not be as effective as Bayesian search at finding the best hyperparameters in some cases.
upvoted 1 times
tavva_prudhvi
1 year, 8 months ago
D might not be the correct option: random search might be faster, but there is a chance of decreased accuracy, and that goes against the question, which says not to compromise effectiveness!
upvoted 1 times
...
...
Community vote distribution: A (35%), C (25%), B (20%), Other
