Exam AWS Certified AI Practitioner AIF-C01 topic 1 question 62 discussion

A company is building an ML model to analyze archived data. The company must perform inference on large datasets that are multiple GBs in size. The company does not need to access the model predictions immediately.
Which Amazon SageMaker inference option will meet these requirements?

  • A. Batch transform
  • B. Real-time inference
  • C. Serverless inference
  • D. Asynchronous inference
Suggested Answer: A

Comments

ExamTopicsPrepare
1 week, 1 day ago
Selected Answer: A
A. Batch transform ✅ Explanation: Batch Transform is ideal for processing large datasets in bulk when immediate responses are not needed. It supports datasets that are multiple GBs in size and runs inference without keeping an endpoint always active. Since the company is working with archived data and does not need real-time predictions, batch processing is the most efficient and cost-effective choice (see the sketch after this comment).
upvoted 1 times
...
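The comment above describes the batch transform flow; below is a minimal sketch using the SageMaker Python SDK. The model name, S3 paths, and instance settings are placeholders for illustration, not details from the question.

    import sagemaker
    from sagemaker.transformer import Transformer

    session = sagemaker.Session()

    # A previously created SageMaker model; the name is a placeholder.
    transformer = Transformer(
        model_name="archived-data-model",
        instance_count=2,
        instance_type="ml.m5.xlarge",
        output_path="s3://example-bucket/batch-output/",
        sagemaker_session=session,
    )

    # Launch an offline batch transform job over everything under the input
    # prefix. SageMaker provisions the instances, scores the data, writes the
    # results to the output prefix, and shuts the instances down.
    transformer.transform(
        data="s3://example-bucket/archived-input/",
        content_type="text/csv",
        split_type="Line",   # split large files by line so each record is scored
    )
    transformer.wait()

No persistent endpoint is involved, which is why this option suits archived data where predictions are collected from S3 later rather than returned per request.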
viejito
4 weeks ago
Selected Answer: D
Asynchronous inference is the most appropriate choice for the company's specific needs, as it balances processing large datasets against not requiring immediate results.
upvoted 2 times
djeong95
3 days, 19 hours ago
Amazon SageMaker Asynchronous Inference is a capability in SageMaker AI that queues incoming requests and processes them asynchronously. This option is ideal for requests with large payload sizes (up to 1GB), long processing times (up to one hour), and near real-time latency requirements. Asynchronous Inference enables you to save on costs by autoscaling the instance count to zero when there are no requests to process, so you only pay when your endpoint is processing requests. A is more suitable here (a contrasting configuration sketch follows this thread).
upvoted 1 times
...
...
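For contrast, here is a minimal boto3 sketch of what option D would involve, assuming placeholder names for the model, endpoint, and buckets. Asynchronous inference still requires creating and keeping an endpoint, and each request payload is capped at 1 GB, which is why batch transform fits multi-GB archived datasets better.

    import boto3

    sm = boto3.client("sagemaker")

    # Endpoint configuration with an AsyncInferenceConfig block; results are
    # written to S3 instead of being returned in the response.
    sm.create_endpoint_config(
        EndpointConfigName="async-config",              # placeholder name
        ProductionVariants=[{
            "VariantName": "AllTraffic",
            "ModelName": "archived-data-model",         # placeholder model
            "InstanceType": "ml.m5.xlarge",
            "InitialInstanceCount": 1,
        }],
        AsyncInferenceConfig={
            "OutputConfig": {"S3OutputPath": "s3://example-bucket/async-output/"}
        },
    )
    sm.create_endpoint(EndpointName="async-endpoint",
                       EndpointConfigName="async-config")

    # Each invocation points at a single S3 object (up to 1 GB) and is queued;
    # the caller polls or is notified when the result lands in S3.
    runtime = boto3.client("sagemaker-runtime")
    runtime.invoke_endpoint_async(
        EndpointName="async-endpoint",
        InputLocation="s3://example-bucket/async-input/records.csv",
        ContentType="text/csv",
    )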
Blair77
2 months, 3 weeks ago
Selected Answer: A
Batch transform is specifically designed to handle large volumes of data, including datasets that are multiple GBs in size. This aligns perfectly with the company's requirement to perform inference on large datasets.
upvoted 3 times
...
GriffXX
2 months, 3 weeks ago
Selected Answer: A
Info on Batch Transform matches up with the details of 'large datasets' and 'don't need predictions immediately'. https://docs.aws.amazon.com/sagemaker/latest/dg/batch-transform.html
upvoted 1 times
...
Community vote distribution
A (35%), C (25%), B (20%), Other