AWS · Posted September 9, 2022

Amazon SageMaker enables customers to deploy ML models to make predictions (also known as inference) for any use case. You can now deploy large models (up to 500 GB) for inference on Amazon SageMaker's Real-time and Asynchronous Inference options by configuring the maximum EBS volume size and timeout quotas. This launch enables customers to leverage SageMaker's fully managed Real-time and Asynchronous Inference capabilities to deploy and manage large ML models such as variants of GPT and OPT.
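In practice, the larger EBS volume and the extended timeouts are set per production variant when creating an endpoint configuration. Below is a minimal sketch of such a configuration for a real-time endpoint; the config name, model name, and instance type are placeholders, and the sketch assumes the model has already been registered with SageMaker:

```python
# Hypothetical endpoint configuration for a large model on SageMaker.
# All names below are placeholders, not values from the announcement.
endpoint_config = {
    "EndpointConfigName": "large-model-config",
    "ProductionVariants": [
        {
            "VariantName": "AllTraffic",
            "ModelName": "my-large-model",      # placeholder: an existing SageMaker model
            "InstanceType": "ml.g5.12xlarge",   # example GPU instance type
            "InitialInstanceCount": 1,
            # Quotas raised by this launch:
            "VolumeSizeInGB": 500,                            # EBS volume, up to 500 GB
            "ModelDataDownloadTimeoutInSeconds": 3600,        # time to pull model artifacts
            "ContainerStartupHealthCheckTimeoutInSeconds": 3600,  # time for container start-up
        }
    ],
}

# With boto3, this dict could then be passed to the SageMaker control plane:
#   import boto3
#   sm = boto3.client("sagemaker")
#   sm.create_endpoint_config(**endpoint_config)
print(endpoint_config["ProductionVariants"][0]["VolumeSizeInGB"])
```

The same volume-size and timeout fields apply when the variant backs an Asynchronous Inference endpoint, which additionally takes an `AsyncInferenceConfig` argument on `create_endpoint_config`.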