
Exam AWS Certified Machine Learning - Specialty topic 1 question 53 discussion

A company's Machine Learning Specialist needs to improve the training speed of a time-series forecasting model using TensorFlow. The training is currently implemented on a single-GPU machine and takes approximately 23 hours to complete. The training needs to be run daily.
The model accuracy is acceptable, but the company anticipates a continuous increase in the size of the training data and a need to update the model on an hourly, rather than a daily, basis. The company also wants to minimize coding effort and infrastructure changes.
What should the Machine Learning Specialist do to the training solution to allow it to scale for future demand?

  • A. Do not change the TensorFlow code. Change the machine to one with a more powerful GPU to speed up the training.
  • B. Change the TensorFlow code to implement a Horovod distributed framework supported by Amazon SageMaker. Parallelize the training to as many machines as needed to achieve the business goals.
  • C. Switch to using a built-in AWS SageMaker DeepAR model. Parallelize the training to as many machines as needed to achieve the business goals.
  • D. Move the training to Amazon EMR and distribute the workload to as many machines as needed to achieve the business goals.
Suggested Answer: B
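For a sense of scale: moving from a daily 23-hour job to hourly retraining implies roughly a 23x speedup, which is more than a single GPU upgrade can realistically deliver. A back-of-envelope calculation (assuming idealized linear scaling, which real distributed jobs only approximate):

```python
# Back-of-envelope: workers needed to fit a 23-hour job into a
# 1-hour window, assuming perfectly linear (ideal) scaling.
current_hours = 23
target_hours = 1
min_workers = -(-current_hours // target_hours)  # ceiling division
print(min_workers)  # 23
```

In practice communication overhead means you would need somewhat more than 23 workers, but the order of magnitude is the point: this is a horizontal-scaling problem, not a bigger-GPU problem.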

Comments

JayK
Highly Voted 3 years, 7 months ago
The answer is B. Using Horovod distribution results in less coding effort.
upvoted 38 times
...
cybe001
Highly Voted 3 years, 6 months ago
Answer is B: "minimize coding effort and infrastructure changes". If we use DeepAR, then both the code and the infrastructure have to be changed to work with DeepAR.
upvoted 15 times
...
ninomfr64
Most Recent 10 months, 1 week ago
Selected Answer: C
A. No, this will not address the continuous growth of the training dataset. B. No, this will require coding effort and infrastructure changes. C. Yes, a built-in model ensures low coding effort, so only infrastructure changes are needed.* D. This will not work. (*They say the current model accuracy is acceptable; we do expect good results with DeepAR, as it automatically picks whichever of five different models works best for the customer.)
upvoted 4 times
ninomfr64
10 months, 1 week ago
DeepAR doesn't pick among five models. However, I still think that switching to DeepAR can ensure accuracy and minimize coding effort, as the model is built-in.
upvoted 2 times
...
...
VR10
1 year, 2 months ago
A comes with minimal changes, but it won't scale. B's code changes are minimal, but the infrastructure still needs to change to achieve a distributed solution. C is an even more significant infra and code change. D won't work. It is really subjective and tricky: it could be A or B, depending on what change is considered "small". For scalability, B seems better; for a quick win, A could work. I keep going back and forth.
upvoted 1 times
...
loict
1 year, 7 months ago
Selected Answer: B
A. NO - a one-time fix, not scalable. B. YES - best practice. C. NO - DeepAR is a different forecasting model. D. NO - the code will not benefit from parallelization without changes.
upvoted 4 times
...
Mickey321
1 year, 8 months ago
Selected Answer: B
option B
upvoted 2 times
...
kaike_reis
1 year, 8 months ago
Selected Answer: B
Note that we want to increase training speed and minimize code and infrastructure modification effort on AWS. Option A would only delay the problem and increase costs too much. The solution that best fits the problem is option B: we keep the code in TensorFlow and use Horovod to make our training faster through parallelization. Option D is too complex and would change the execution infrastructure a lot, and option C would be too abrupt a turn, since we would throw our model away.
upvoted 2 times
...
ZSun
2 years ago
A is the better option even though B helps. Firstly, you only have one GPU; in this case, Horovod distributed training doesn't help much. Secondly, the question is about minimizing "coding effort", not minimizing budget. Adding a distributed framework requires much more coding, while upgrading to a bigger GPU instance only requires a single click.
upvoted 1 times
...
Valcilio
2 years, 1 month ago
Selected Answer: B
Horovod distribution is supported by SageMaker, making it easy to implement!
upvoted 1 times
...
AjoseO
2 years, 2 months ago
Selected Answer: B
Horovod distribution will allow the Machine Learning Specialist to take advantage of Amazon SageMaker's built-in support for Horovod, a popular open-source distributed deep learning framework. Implementing Horovod in TensorFlow will allow the Specialist to parallelize training across multiple GPUs or instances, which can significantly reduce the time it takes to train the model. This will allow the company to meet its requirement to update the model on an hourly basis, while minimizing coding effort and infrastructure changes, as it leverages the existing TensorFlow code and infrastructure along with the scalability and ease of use of Amazon SageMaker.
upvoted 4 times
...
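The Horovod changes described above are genuinely small. As a rough illustration, here is a plain-Python sketch of the two core data-parallel ideas Horovod applies (each worker trains on its own shard of the data, and the learning rate is scaled by the worker count); `shard_for_worker`, `scaled_learning_rate`, and the worker counts are hypothetical names for illustration, and no actual horovod or TensorFlow calls are made:

```python
# Sketch of the data-parallel ideas behind Horovod: each worker
# trains on its own shard, and the learning rate is scaled by the
# worker count. Plain Python; no horovod/TF required.

def shard_for_worker(dataset, rank, size):
    """Slice of `dataset` that worker `rank` of `size` workers sees
    (mirrors sharding by worker rank and world size)."""
    return dataset[rank::size]

def scaled_learning_rate(base_lr, size):
    """Common Horovod convention: multiply the LR by the worker count."""
    return base_lr * size

data = list(range(12))
workers = 4
shards = [shard_for_worker(data, r, workers) for r in range(workers)]
# Every example lands in exactly one shard:
assert sorted(x for shard in shards for x in shard) == data
print(scaled_learning_rate(0.001, workers))  # 0.004
```

In a real SageMaker TensorFlow job the equivalent changes are a handful of lines in the training script plus gradient averaging across workers, which is why this option is considered low coding effort.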
joe3232
2 years, 2 months ago
Is there a 23x differential between the weakest and strongest GPU in AWS (with room for future growth)? I don't think so.
upvoted 1 times
...
vbal
2 years, 3 months ago
Answer: C - the built-in SageMaker DeepAR model minimizes coding & infra changes.
upvoted 1 times
cpal012
2 years, 1 month ago
But they are happy with it; they just want it to go faster, not to throw the whole thing out.
upvoted 1 times
...
...
KingGuo
2 years, 9 months ago
Selected Answer: B
The answer is B. Using Horovod distribution results in less coding effort.
upvoted 2 times
...
John_Pongthorn
3 years, 2 months ago
Selected Answer: A
Most likely it is A, because it is based on AWS technology. Why would we have to use open source when we are taking an AWS ML exam? The answer should inevitably be relevant to AWS technology. https://aws.amazon.com/sagemaker/distributed-training/
upvoted 1 times
...
cloud_trail
3 years, 5 months ago
This one reminds me of an old saying by Yogi Berra: "When you come to a fork in the road, take it." If you see Horovod as an option in a question about scaling TF, take it. Answer is B.
upvoted 9 times
...
RaniaSayed
3 years, 5 months ago
I think it's B: https://aws.amazon.com/blogs/machine-learning/launching-tensorflow-distributed-training-easily-with-horovod-or-parameter-servers-in-amazon-sagemaker/ and https://aws.amazon.com/blogs/machine-learning/multi-gpu-and-distributed-training-using-horovod-in-amazon-sagemaker-pipe-mode/
upvoted 5 times
...
harmanbirstudy
3 years, 5 months ago
I've seen similar questions on Udemy/Whizlabs; it's always Horovod when TensorFlow needs scaling. ANSWER is B.
upvoted 5 times
...
Community vote distribution: A (35%), C (25%), B (20%), Other