A company plans to deploy an ML model for production inference on an Amazon SageMaker endpoint. The inference payload size will vary from 100 MB to 300 MB. Inference requests must be processed in 60 minutes or less.
Which SageMaker inference option will meet these requirements?
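The constraints point to SageMaker Asynchronous Inference: it accepts payloads up to 1 GB and processing times up to one hour, whereas real-time endpoints are limited to 6 MB payloads and 60-second timeouts, and Serverless Inference caps payloads at 4 MB. Below is a minimal boto3 sketch of how such an endpoint could be configured and invoked; the endpoint, model, bucket, and instance names are hypothetical placeholders, not values from the question.

```python
import boto3

sm = boto3.client("sagemaker")

# Endpoint config with an AsyncInferenceConfig block. Model name, S3 paths,
# and instance type are assumptions for illustration only.
sm.create_endpoint_config(
    EndpointConfigName="my-async-endpoint-config",
    ProductionVariants=[
        {
            "VariantName": "AllTraffic",
            "ModelName": "my-model",          # an existing SageMaker model (assumed)
            "InstanceType": "ml.m5.xlarge",
            "InitialInstanceCount": 1,
        }
    ],
    AsyncInferenceConfig={
        "OutputConfig": {
            # Inference results are written to S3 when processing finishes.
            "S3OutputPath": "s3://my-bucket/async-output/",
        },
        "ClientConfig": {
            "MaxConcurrentInvocationsPerInstance": 2,
        },
    },
)

sm.create_endpoint(
    EndpointName="my-async-endpoint",
    EndpointConfigName="my-async-endpoint-config",
)

# Clients stage the request payload in S3 and pass its location, which is how
# payloads in the 100 MB to 300 MB range are handled asynchronously.
runtime = boto3.client("sagemaker-runtime")
response = runtime.invoke_endpoint_async(
    EndpointName="my-async-endpoint",
    InputLocation="s3://my-bucket/async-input/payload.json",
)
print(response["OutputLocation"])  # S3 URI where the result will appear
```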