Exam AWS Certified AI Practitioner AIF-C01 topic 1 question 27 discussion

A company has developed an ML model for image classification. The company wants to deploy the model to production so that a web application can use the model.
The company needs to implement a solution to host the model and serve predictions without managing any of the underlying infrastructure.
Which solution will meet these requirements?

  • A. Use Amazon SageMaker Serverless Inference to deploy the model.
  • B. Use Amazon CloudFront to deploy the model.
  • C. Use Amazon API Gateway to host the model and serve predictions.
  • D. Use AWS Batch to host the model and serve predictions.
Suggested Answer: A

Comments

Moon
1 week, 2 days ago
Selected Answer: A
A: Use Amazon SageMaker Serverless Inference to deploy the model.

Explanation: Amazon SageMaker Serverless Inference is a fully managed option for deploying machine learning models without managing the underlying infrastructure. It automatically provisions compute capacity, scales with request traffic, and serves predictions on demand, making it an ideal choice for hosting a model behind a web application with minimal operational overhead.

Why not the other options?
  • B: Amazon CloudFront is a content delivery network (CDN) for caching and distributing content at the edge; it cannot host a model or run inference.
  • C: Amazon API Gateway creates and manages APIs that front other services; it does not host models or serve predictions itself.
  • D: AWS Batch is designed for batch processing and job scheduling, not for real-time inference or hosting ML models for web applications.
upvoted 1 time
Blair77
1 month, 4 weeks ago
Selected Answer: A
Serverless deployment: SageMaker Serverless Inference allows you to deploy ML models without managing any underlying infrastructure, which directly meets the company's requirement.
upvoted 1 time
minime
1 month, 4 weeks ago
A. Use Amazon SageMaker Serverless Inference to deploy the model. With serverless inference, there's no need to manage any infra.
upvoted 1 time
Community vote distribution: A (35%, most voted), C (25%), B (20%), Other