Exam AWS Certified Machine Learning Engineer - Associate MLA-C01 All Questions

Exam AWS Certified Machine Learning Engineer - Associate MLA-C01 topic 1 question 103 discussion

A company plans to deploy an ML model for production inference on an Amazon SageMaker endpoint. The average inference payload size will vary from 100 MB to 300 MB. Inference requests must be processed in 60 minutes or less.

Which SageMaker inference option will meet these requirements?

  • A. Serverless inference
  • B. Asynchronous inference
  • C. Real-time inference
  • D. Batch transform
Suggested Answer: B

Comments

chris_spencer
1 week, 1 day ago
Selected Answer: B
Agree with B. Real-time inference supports synchronous request payloads of only a few megabytes (around 6 MB), while asynchronous inference supports payloads of up to 1 GB and processing times of up to one hour. Since this question involves inference payloads of 100 MB to 300 MB that must be processed within 60 minutes, asynchronous inference is the best choice for handling large payloads without strict real-time latency requirements.
upvoted 1 time
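To illustrate the answer, here is a minimal sketch of how an asynchronous inference endpoint configuration might be built for the SageMaker `create_endpoint_config` API. The model name, S3 paths, and instance type below are placeholder assumptions, not values from the question; the call to AWS itself is omitted.

```python
# Hedged sketch: building the kwargs for SageMaker's create_endpoint_config
# call with an AsyncInferenceConfig block. Asynchronous inference queues each
# request, supports payloads up to 1 GB, and allows processing times of up to
# one hour -- matching the 100-300 MB payloads and 60-minute limit here.
# All names (model, bucket, instance type) are placeholder assumptions.

def build_async_endpoint_config(model_name, output_s3_uri,
                                instance_type="ml.m5.xlarge"):
    """Return kwargs for sagemaker_client.create_endpoint_config(**kwargs)."""
    return {
        "EndpointConfigName": f"{model_name}-async-config",
        "ProductionVariants": [
            {
                "VariantName": "AllTraffic",
                "ModelName": model_name,
                "InstanceType": instance_type,
                "InitialInstanceCount": 1,
            }
        ],
        # This block is what makes the endpoint asynchronous: results are
        # written to S3 instead of being returned in the HTTP response.
        "AsyncInferenceConfig": {
            "OutputConfig": {"S3OutputPath": output_s3_uri},
            "ClientConfig": {"MaxConcurrentInvocationsPerInstance": 4},
        },
    }

config = build_async_endpoint_config("my-model",
                                     "s3://my-bucket/async-results/")
```

Clients would then call `invoke_endpoint_async` with an `InputLocation` pointing to the request payload in S3, which is how the large 100-300 MB inputs avoid the synchronous payload limit.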
Community vote distribution
A (35%)
C (25%)
B (20%)
Other