New Amazon SageMaker integration with NVIDIA NIM inference microservices

You can now achieve even better price-performance for large language models (LLMs) running on NVIDIA accelerated computing infrastructure by using Amazon SageMaker with the newly integrated NVIDIA NIM inference microservices. SageMaker is a fully managed service that makes it easy to build, train, and deploy machine learning models, including LLMs, and NIM, part of the NVIDIA AI Enterprise software platform, provides high-performance AI containers for LLM inference.
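
To make the workflow concrete, here is a minimal sketch of how a NIM container could be deployed to a SageMaker real-time endpoint with the SageMaker Python SDK. The container image URI, NGC API key environment variable, instance type, model name, and request payload are illustrative assumptions, not values from the announcement; check the NVIDIA NIM and SageMaker documentation for the specifics of your model.

```python
# Sketch: deploying a NIM LLM container to a SageMaker real-time endpoint.
# Image URI, env vars, instance type, and payload below are assumptions.

import sagemaker
from sagemaker.model import Model
from sagemaker.serializers import JSONSerializer
from sagemaker.deserializers import JSONDeserializer

session = sagemaker.Session()
role = sagemaker.get_execution_role()  # IAM role with SageMaker permissions

# Hypothetical NIM container image pushed to your own ECR registry.
nim_image_uri = "<account-id>.dkr.ecr.<region>.amazonaws.com/nim-llm:latest"

model = Model(
    image_uri=nim_image_uri,
    role=role,
    env={
        # Hypothetical setting a NIM container might expect; see NVIDIA's docs.
        "NGC_API_KEY": "<your-ngc-api-key>",
    },
    sagemaker_session=session,
)

# Deploy to a GPU instance; NIM targets NVIDIA accelerated instances (e.g. G5, P4d).
predictor = model.deploy(
    initial_instance_count=1,
    instance_type="ml.g5.2xlarge",
    endpoint_name="nim-llm-endpoint",
)

predictor.serializer = JSONSerializer()
predictor.deserializer = JSONDeserializer()

# Invoke the endpoint. The request/response schema depends on the NIM model served;
# an OpenAI-style chat payload is a common pattern for NIM LLM microservices.
response = predictor.predict({
    "model": "meta/llama3-8b-instruct",  # example model name; adjust to your image
    "messages": [{"role": "user", "content": "Summarize what NIM on SageMaker does."}],
    "max_tokens": 128,
})
print(response)
```

Once the endpoint is up, it behaves like any other SageMaker real-time endpoint, so the usual autoscaling, monitoring, and tear-down (`predictor.delete_endpoint()`) apply.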

View the full article
