AWS · Posted October 7, 2020

Amazon EC2 has the cloud's broadest and most capable portfolio of hardware-accelerated instances, featuring GPUs, FPGAs, and our own custom ML inference chip, AWS Inferentia. G4dn instances offer the best price/performance for GPU-based ML inference, training less complex ML models, and graphics applications, as well as other workloads that need access to NVIDIA libraries such as CUDA, cuDNN, and NVENC.