
AWS Certified AI Practitioner (AIF-C01), Topic 1, Question 10 discussion

A company wants to use language models to create an application for inference on edge devices. The inference must have the lowest latency possible.
Which solution will meet these requirements?

  • A. Deploy optimized small language models (SLMs) on edge devices.
  • B. Deploy optimized large language models (LLMs) on edge devices.
  • C. Incorporate a centralized small language model (SLM) API for asynchronous communication with edge devices.
  • D. Incorporate a centralized large language model (LLM) API for asynchronous communication with edge devices.
Suggested Answer: A

Comments

Jessiii
2 weeks, 6 days ago
Selected Answer: A
Optimized small language models (SLMs) are specifically designed to run efficiently on edge devices with limited resources (such as memory and processing power). Deploying smaller, optimized models directly on the edge devices allows for near-instantaneous inference with minimal latency, as the data doesn't need to travel to a central server for processing.
upvoted 1 times
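To make the on-device approach concrete, here is a minimal sketch of local SLM inference, assuming a Python-capable edge device and the Hugging Face transformers library. The model name distilgpt2 is only a stand-in for whatever optimized SLM would actually ship; a production edge deployment would more likely run a quantized model on an optimized runtime such as ONNX Runtime or llama.cpp:

```python
# Minimal sketch: on-device inference with a small language model.
# "distilgpt2" (~82M parameters) is a stand-in for an optimized SLM;
# a Python runtime and the transformers library on the device are
# assumptions made for illustration.
from transformers import pipeline

# Load the model once at startup. The weights live on the device itself,
# so each request is served without any network round trip.
generator = pipeline("text-generation", model="distilgpt2")

# Inference runs locally: latency is bounded by on-device compute,
# not by transit to and from a centralized API.
result = generator("The sensor reading indicates", max_new_tokens=20)
print(result[0]["generated_text"])
```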
Moon
2 months ago
Selected Answer: A
A: Deploy optimized small language models (SLMs) on edge devices. Explanation: Deploying optimized small language models (SLMs) on edge devices ensures low latency because the inference happens directly on the device without relying on cloud communication. Small language models are lightweight and designed to run efficiently on devices with limited resources, making them ideal for edge computing.
upvoted 4 times
Aryan_10
2 months, 1 week ago
Selected Answer: A
Lowest latency possible - SLM
upvoted 1 times
Nicocacik
3 months ago
Selected Answer: A
Low latency with edge devices -> SLM
upvoted 1 times
Blair77
3 months, 3 weeks ago
A is good - Minimal latency: SLMs are designed to run efficiently on resource-constrained devices, offering fast inference directly on the device.
upvoted 1 times
jove
3 months, 4 weeks ago
Selected Answer: A
SLM on edge devices
upvoted 2 times
tccusa
4 months ago
Selected Answer: A
SLM on edge devices is the correct solution.
upvoted 2 times
galliaj
4 months ago
Using optimized small language models (SLMs) on edge devices is the best choice because they are designed to run efficiently within the resource constraints of edge hardware. This minimizes latency and delivers fast inference directly on the device while using less computational power and memory. The problem with the centralized APIs in options C and D is the associated network latency.
upvoted 4 times
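For contrast, a sketch of the round trip that options C and D imply. The endpoint URL and payload shape here are hypothetical, invented purely for illustration:

```python
import time

import requests

# Hypothetical centralized LLM/SLM endpoint -- URL and payload are
# illustrative only, not a real API.
ENDPOINT = "https://llm-api.example.com/v1/generate"

start = time.perf_counter()
# Every inference pays device -> network -> server -> network -> device,
# so network transit adds to latency on top of model compute time.
response = requests.post(
    ENDPOINT,
    json={"prompt": "The sensor reading indicates"},
    timeout=5,
)
elapsed_ms = (time.perf_counter() - start) * 1000
print(f"round-trip latency: {elapsed_ms:.1f} ms")
```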
Community vote distribution: A (35%), C (25%), B (20%), Other
