
Exam Professional Machine Learning Engineer topic 1 question 40 discussion

Actual exam question from Google's Professional Machine Learning Engineer
Question #: 40
Topic #: 1

You have a functioning end-to-end ML pipeline that involves tuning the hyperparameters of your ML model using AI Platform, and then using the best-tuned parameters for training. Hypertuning is taking longer than expected and is delaying the downstream processes. You want to speed up the tuning job without significantly compromising its effectiveness. Which actions should you take? (Choose two.)

  • A. Decrease the number of parallel trials.
  • B. Decrease the range of floating-point values.
  • C. Set the early stopping parameter to TRUE.
  • D. Change the search algorithm from Bayesian search to random search.
  • E. Decrease the maximum number of trials during subsequent training phases.
Suggested Answer: CE
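For reference, the knobs the five options refer to all live in the AI Platform Training HyperparameterSpec. Below is a minimal sketch of such a spec in Python (field names come from the legacy AI Platform Training API; the metric name, parameter, and trial counts are placeholder values, not part of the original question):

```python
# Sketch of a legacy AI Platform Training hyperparameter spec (placeholder values).
# C maps to enableTrialEarlyStopping, E to maxTrials, A to maxParallelTrials,
# D to the algorithm field, and B to the min/max range of a DOUBLE parameter.
hyperparameter_spec = {
    "goal": "MAXIMIZE",
    "hyperparameterMetricTag": "accuracy",  # metric reported by the trainer
    "maxTrials": 30,                        # E: fewer total trials -> shorter job
    "maxParallelTrials": 5,                 # A: fewer parallel trials makes the job slower, not faster
    "enableTrialEarlyStopping": True,       # C: stop clearly unpromising trials early
    "algorithm": "ALGORITHM_UNSPECIFIED",   # D: default (Bayesian); RANDOM_SEARCH and GRID_SEARCH are the alternatives
    "params": [
        {
            "parameterName": "learning_rate",
            "type": "DOUBLE",               # B: the floating-point range being searched
            "minValue": 1e-4,
            "maxValue": 1e-1,
            "scaleType": "UNIT_LOG_SCALE",
        }
    ],
}

# The spec is submitted as trainingInput.hyperparameters of a training job
# (e.g. via the projects.jobs.create method or a gcloud config file).
```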

Comments

gcp2021go
Highly Voted 3 years, 8 months ago
I think it should be C and E. I can't find any reference saying that B reduces tuning time.
upvoted 20 times
...
Paul_Dirac
Highly Voted 3 years, 9 months ago
Answer: B & C (Ref: https://cloud.google.com/ai-platform/training/docs/using-hyperparameter-tuning) (A) Decreasing the number of parallel trials will increase tuning time. (D) Bayesian search works better and faster than random search, since it is selective about which points to evaluate and uses knowledge of previously evaluated points. (E) maxTrials should be larger than 10 times the number of hyperparameters used, and spanning that minimum space (10 * num_hyperparams) already takes some time, so lowering maxTrials has little effect on reducing tuning time.
upvoted 16 times
Goosemoose
10 months, 2 weeks ago
Bayesian search can cost more time: it converges in fewer iterations than the other algorithms, but not necessarily in less wall-clock time, because trials depend on earlier results and therefore have to run sequentially.
upvoted 1 times
...
dxxdd7
3 years, 7 months ago
In your link, where maxTrials is mentioned, they say that "In most cases there is a point of diminishing returns after which additional trials have little or no effect on the accuracy." They also say that it affects time and cost. I think I'd rather go with CE.
upvoted 10 times
...
...
bc3f222
Most Recent 1 month ago
Selected Answer: CE
Apart from early stopping, which no one has doubts about, E (reducing the maximum number of trials) has the lowest propensity to reduce performance.
upvoted 1 times
...
vinevixx
3 months ago
Selected Answer: BC
Decreasing the range of floating-point values reduces the search space for the hyperparameter tuning process. A smaller search space allows the algorithm to converge faster to an optimal solution by focusing only on a narrower range of values. This approach speeds up tuning without significantly compromising effectiveness, as the range is constrained to more reasonable values.

Why C is correct: Setting the early stopping parameter to TRUE enables the tuning process to stop trials early if it becomes clear that a given trial is not improving or yielding promising results. This prevents unnecessary computation and saves time by discarding underperforming configurations early in the process.
upvoted 1 times
...
Ankit267
3 months, 2 weeks ago
Selected Answer: CE
C & E are the choices
upvoted 1 times
...
TornikePirveli
8 months ago
In the PMLE book it's grid search instead of Bayesian search, which makes sense, but the book also marks "Decrease the number of parallel trials" as a correct answer, which I think is wrong.
upvoted 1 times
...
nktyagi
8 months, 2 weeks ago
Selected Answer: AB
With Vertex AI hyperparameter tuning, you can configure the number of trials and the search algorithm, as well as the range of each parameter.
upvoted 1 times
...
PhilipKoku
10 months, 2 weeks ago
Selected Answer: CD
C) and D)
upvoted 2 times
...
pinimichele01
1 year ago
Selected Answer: CE
see pawan94
upvoted 3 times
...
pawan94
1 year, 3 months ago
C and E, if you reference the latest docs for hyperparameter tuning jobs on Vertex AI. A is not possible (refer to https://cloud.google.com/vertex-ai/docs/training/using-hyperparameter-tuning#:~:text=the%20benefit%20of%20reducing%20the%20time%20the): if you reduce the number of parallel trials, the overall completion time gets worse. The question is about how to speed up the process, not about changing the model parameters, and changing the optimization algorithm could lead to unexpected results. So in my opinion C and E (after carefully reading the updated docs). And please don't believe everything ChatGPT says; I've encountered many questions where the LLMs give completely wrong answers.
upvoted 4 times
...
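To make pawan94's point concrete, here is a rough sketch of the equivalent knobs in the Vertex AI Python SDK (class and argument names are from google-cloud-aiplatform; the project, bucket, and container image are placeholders). max_trial_count is the Vertex counterpart of maxTrials and parallel_trial_count of maxParallelTrials; lowering parallel_trial_count lengthens wall-clock time rather than shortening it:

```python
# Sketch of a Vertex AI hyperparameter tuning job (google-cloud-aiplatform SDK).
# All resource names and the container image are placeholders.
from google.cloud import aiplatform
from google.cloud.aiplatform import hyperparameter_tuning as hpt

aiplatform.init(project="my-project", location="us-central1",
                staging_bucket="gs://my-staging-bucket")

trial_job = aiplatform.CustomJob(
    display_name="trial-job",
    worker_pool_specs=[{
        "machine_spec": {"machine_type": "n1-standard-4"},
        "replica_count": 1,
        "container_spec": {"image_uri": "gcr.io/my-project/trainer:latest"},
    }],
)

hp_job = aiplatform.HyperparameterTuningJob(
    display_name="hp-tuning-job",
    custom_job=trial_job,
    metric_spec={"accuracy": "maximize"},
    parameter_spec={
        "learning_rate": hpt.DoubleParameterSpec(min=1e-4, max=1e-1, scale="log"),
    },
    max_trial_count=30,      # counterpart of maxTrials (option E)
    parallel_trial_count=5,  # lowering this makes the job take longer, not shorter (option A)
    search_algorithm=None,   # None = default Bayesian; "random" or "grid" are the alternatives (option D)
)
hp_job.run()
```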
fragkris
1 year, 4 months ago
Selected Answer: CD
I chose C and D
upvoted 3 times
...
Sum_Sum
1 year, 5 months ago
Selected Answer: CD
ChatGPT says: C. Set the early stopping parameter to TRUE. Early stopping: enabling early stopping allows the tuning process to terminate a trial if it becomes clear that it's not producing promising results. This prevents wasting time on unpromising trials and can significantly speed up the hyperparameter tuning process. It helps focus resources on more promising parameter combinations.

D. Change the search algorithm from Bayesian search to random search. Random search: as opposed to Bayesian optimization, random search doesn't attempt to build a model of the objective function. While Bayesian search can be more efficient in finding the optimal parameters, random search is often faster per iteration. Random search can be particularly effective when the hyperparameter space is large, as it doesn't require as much computation to select the next set of parameters to evaluate.
upvoted 3 times
...
Voyager2
1 year, 10 months ago
Selected Answer: CE
C & E. This video explains max trials and parallel trials very well: https://youtu.be/8hZ_cBwNOss. For early stopping, see https://cloud.google.com/ai-platform/training/docs/using-hyperparameter-tuning#early-stopping
upvoted 4 times
...
rexduo
1 year, 11 months ago
Selected Answer: CE
A increases time; for B, the bottleneck of an HP tuning job is normally not the model size; D does reduce time, but might significantly hurt effectiveness.
upvoted 2 times
...
CloudKida
1 year, 11 months ago
Selected Answer: AC
Running parallel trials has the benefit of reducing the time the training job takes (real time—the total processing time required is not typically changed). However, running in parallel can reduce the effectiveness of the tuning job overall. That is because hyperparameter tuning uses the results of previous trials to inform the values to assign to the hyperparameters of subsequent trials. When running in parallel, some trials start without having the benefit of the results of any trials still running. You can specify that AI Platform Training must automatically stop a trial that has become clearly unpromising. This saves you the cost of continuing a trial that is unlikely to be useful. To permit stopping a trial early, set the enableTrialEarlyStopping value in the HyperparameterSpec to TRUE.
upvoted 1 times
...
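The docs quoted above don't spell out the exact rule the service uses to decide that a trial is "clearly unpromising." As a toy illustration only (not Google's implementation), a median stopping rule, similar in spirit to the median automated stopping option in Vertex AI Vizier, captures the idea: a trial is abandoned when its metric curve falls below the median of its peers at the same step.

```python
# Toy illustration only: NOT AI Platform's actual early-stopping algorithm.
# Idea: stop a running trial if its best metric so far is below the median of
# peer trials' metrics at the same training step (metric is being maximized).
from statistics import median

def should_stop_early(trial_curve, peer_curves, step, min_peers=3):
    """trial_curve / peer_curves: lists of metric values indexed by training step."""
    peers_at_step = [curve[step] for curve in peer_curves if len(curve) > step]
    if len(peers_at_step) < min_peers:
        return False  # not enough evidence yet to stop anything
    best_so_far = max(trial_curve[: step + 1])
    return best_so_far < median(peers_at_step)

# Example: a trial stuck near 0.60 accuracy while peers reach ~0.75 by step 2.
peers = [[0.50, 0.70, 0.75], [0.55, 0.72, 0.78], [0.40, 0.60, 0.74]]
print(should_stop_early([0.50, 0.58, 0.60], peers, step=2))  # True -> stop the trial
```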
M25
1 year, 11 months ago
Selected Answer: CE
Went with C & E
upvoted 2 times
...
kucuk_kagan
2 years ago
Selected Answer: AD
To speed up the tuning job without significantly compromising its effectiveness, you can take the following actions:

A. Decrease the number of parallel trials: by reducing the number of parallel trials, you can limit the amount of computational resources being used at a given time, which may help speed up the tuning job. However, reducing the number of parallel trials too much could limit the exploration of the parameter space and result in suboptimal results.

D. Change the search algorithm from Bayesian search to random search: Bayesian optimization is a computationally intensive method that requires more time and resources than random search. By switching to a simpler method like random search, you may be able to speed up the tuning job without compromising its effectiveness. However, random search may not be as efficient in finding the best hyperparameters as Bayesian optimization.
upvoted 1 times
...
Community vote distribution: A (35%), C (25%), B (20%), Other