Join Canonical at the 2024 GTC AI Conference



As a key technology partner with NVIDIA, Canonical is proud to showcase our joint solutions at NVIDIA GTC again. Join us in person on March 18-21, 2024, to explore what’s next in AI and accelerated computing. We will be at booth 1601 in the MLOps & LLMOps Pavilion, demonstrating how open source AI solutions can take your models to production, from edge to cloud.

Register for GTC now!

AI on Ubuntu – from cloud to edge

As the world becomes more connected, there is a growing need to extend data processing beyond the data centre to edge devices in the field. Cloud computing provides abundant resources for AI adoption, processing, storage and analysis, but it cannot support every use case. Deploying models to edge devices expands the scope of AI applications by processing some of the data locally, delivering real-time insights without relying exclusively on a centralised data centre or cloud. This is especially relevant where latency, bandwidth or privacy constraints make it impractical or impossible to run AI applications in a centralised cloud or enterprise data centre.

A solution that enables scalability, reproducibility and portability is therefore the ideal choice for a production-grade project. Canonical delivers a comprehensive AI stack with the open source software your organisation needs for its AI projects from cloud to edge, giving you:

  • The same experience on edge devices and on any cloud, whether private, public or hybrid
  • Low-ops, streamlined lifecycle management
  • A modular and open source suite for reusable deployments

Book a meeting with us

To put our AI stack to the test, at NVIDIA GTC 2024 we will present how our Kubernetes-based AI infrastructure solutions can help create a blueprint for smart cities, leveraging best-in-class NVIDIA hardware capabilities. We will cover training in both the cloud and the data centre, and showcase the solution deployed at the edge on Jetson Orin-based devices. Check out the details below and meet our experts on-site.

Canonical’s invited talk at GTC

Accelerate Smart City Edge AI Deployment With Open-Source Cloud-Native Infrastructure [S61494]

Abstract:

Artificial intelligence is no longer confined to data centres; it has expanded to operate at the edge. Some models require low latency, necessitating execution close to end users. This is where edge computing, optimised for AI, becomes essential. Among the most popular use cases for modern smart cities are city-wide assistants deployed as “point-of-contact” devices available at bus stops, in subways, and elsewhere. They interact with backend infrastructure to handle changing conditions as users travel around the city. That creates a need to process local data gathered from infrastructure such as internet-of-things gateways, smart cameras, or buses. Thanks to NVIDIA Jetson modules, these data can be processed locally for fast, low-latency AI-driven insights. Then, where device-local computational capabilities are exhausted, processing can be offloaded to the edge or backend infrastructure: with the power of the Tegra SoC, data can first be aggregated at the edge devices and later sent to the cloud for further processing. Open-source deployment mechanisms enable such complex setups through automated management, Day 2 operations, and security. Canonical, working alongside NVIDIA, has developed an open-source software infrastructure that simplifies the deployment of multiple Kubernetes clusters at the edge with access to GPUs. We’ll go over those mechanisms, and how they orchestrate the deployment of Kubernetes-based AI/machine learning infrastructure across the smart city blueprint to take full advantage of NVIDIA hardware capabilities, both on devices and on cloud instances.

Presenter: Gustavo Sanchez, AI Solutions Architect, Canonical
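To ground the talk’s central idea, here is a minimal sketch of how a GPU-backed workload can be requested on such an edge Kubernetes cluster, using the official Kubernetes Python client. This illustrates the building blocks only, not the presented solution itself: the container image, pod name and labels are illustrative assumptions, and the one real requirement is that the cluster’s NVIDIA device plugin exposes the nvidia.com/gpu resource.

```python
# Minimal sketch (not Canonical's implementation): schedule a GPU inference
# pod on an edge Kubernetes cluster via the official Kubernetes Python client.
# Assumes the NVIDIA device plugin on the cluster exposes "nvidia.com/gpu".
from kubernetes import client, config

def launch_gpu_inference_pod(image: str, name: str = "edge-inference") -> None:
    config.load_kube_config()  # or config.load_incluster_config() in-cluster
    pod = client.V1Pod(
        metadata=client.V1ObjectMeta(name=name, labels={"app": "smart-city-demo"}),
        spec=client.V1PodSpec(
            restart_policy="Never",
            containers=[
                client.V1Container(
                    name="inference",
                    image=image,  # hypothetical image choice
                    # Request one GPU; the device plugin maps this to the
                    # device's GPU (e.g. a Jetson Orin module).
                    resources=client.V1ResourceRequirements(
                        limits={"nvidia.com/gpu": "1"}
                    ),
                )
            ],
        ),
    )
    client.CoreV1Api().create_namespaced_pod(namespace="default", body=pod)

if __name__ == "__main__":
    launch_gpu_inference_pod("nvcr.io/nvidia/tritonserver:24.01-py3")
```

On MicroK8s, for example, the gpu add-on deploys the NVIDIA components that back this resource request; support details vary by device and cluster setup.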


Build and scale your AI projects with Canonical and NVIDIA

Starting a deep learning pilot within an enterprise has its own set of challenges, but scaling projects to production-grade deployments brings a host of additional difficulties. These chiefly relate to the increased hardware, software and operational requirements that come with larger and more complex initiatives.

Canonical and NVIDIA offer an integrated end-to-end solution – from hardware-optimised Ubuntu to application orchestration and MLOps. We enable organisations to develop, optimise and scale ML workloads.

Canonical will showcase three demos to walk you through our joint AI/ML solutions with NVIDIA:

  • Accelerate smart city Edge AI deployments with open-source cloud-native infrastructure – An architecture that addresses Edge AI challenges such as software efficiency, security, monitoring and Day 2 operations. Canonical, working alongside NVIDIA, has developed an open-source software infrastructure that simplifies training on private and public clouds as well as the deployment and operation of AI models on clusters at the edge with access to NVIDIA GPU capabilities.
  • End-to-end MLOps with hybrid-cloud-capable open-source tooling – Cost optimisation, data privacy, and HPC performance on GPUs are some of the reasons for companies to consider private cloud, hybrid cloud and multi-cloud solutions for their data and AI infrastructure. Open-source, cloud-agnostic infrastructure for machine learning operations gives companies the flexibility to move beyond public cloud vendor lock-in, align with strict data compliance constraints, and take full advantage of their hardware resources, while automating day-to-day operations (see the pipeline sketch after this list).
  • LLM and RAG open-source infrastructure – This demo shows an end-to-end implementation, from data collection and cleaning through to training and inference, of an open-source large language model combined with an open-source vector database using the retrieval-augmented generation (RAG) technique. It shows how to scrape information from your publicly available company website, embed it into the vector database, and have the LLM consume it at answer time (a minimal RAG sketch follows this list).
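As a rough illustration of what cloud-agnostic MLOps tooling looks like in practice, the sketch below defines a small training pipeline with the Kubeflow Pipelines SDK (kfp v2), which compiles once and runs on any Kubeflow deployment, including Charmed Kubeflow. The component bodies, names and storage URIs are illustrative assumptions, not the demo’s actual code.

```python
# Minimal, hypothetical training pipeline using the Kubeflow Pipelines SDK
# (kfp v2). The compiled YAML runs unchanged on any Kubeflow installation,
# whether on a private, public or hybrid cloud.
from kfp import compiler, dsl

@dsl.component(base_image="python:3.11")
def prepare_data(dataset_uri: str) -> str:
    # Placeholder: fetch and clean the raw data, return the prepared location.
    print(f"Preparing data from {dataset_uri}")
    return "/tmp/prepared"

@dsl.component(base_image="python:3.11")
def train_model(data_path: str, epochs: int) -> str:
    # Placeholder: train the model and return a URI for the saved artefact.
    print(f"Training on {data_path} for {epochs} epochs")
    return "s3://models/demo-model"  # hypothetical artefact store

@dsl.pipeline(name="hybrid-cloud-training")
def training_pipeline(dataset_uri: str = "s3://data/raw", epochs: int = 5):
    data = prepare_data(dataset_uri=dataset_uri)
    train = train_model(data_path=data.output, epochs=epochs)
    # Request one NVIDIA GPU for the training step (resource name may
    # vary by backend; "nvidia.com/gpu" is the Kubernetes convention).
    train.set_accelerator_type("nvidia.com/gpu").set_accelerator_limit(1)

# Compile once; submit the resulting YAML to any Kubeflow Pipelines endpoint.
compiler.Compiler().compile(training_pipeline, "training_pipeline.yaml")
```

Because the pipeline definition carries its own container images and resource requests, moving it between clouds is a matter of pointing the client at a different Kubeflow endpoint, which is the flexibility the second demo highlights.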
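And here is a minimal sketch of the RAG flow that the third demo describes: scrape a public web page, embed its text into an open-source vector database, then ground an LLM prompt in the retrieved chunks. The library choices (requests, BeautifulSoup, Chroma) and the URL are illustrative assumptions; the demo’s actual stack may differ, and the LLM generation step is left as a placeholder since model serving is out of scope here.

```python
# Minimal RAG sketch: scrape -> embed into a vector DB -> retrieve -> prompt.
# Assumes requests, beautifulsoup4 and chromadb are installed; Chroma's
# default embedding function handles the embedding step.
import requests
from bs4 import BeautifulSoup
import chromadb

def scrape_page(url: str) -> list[str]:
    """Fetch a page and split its visible text into rough 500-char chunks."""
    soup = BeautifulSoup(requests.get(url, timeout=30).text, "html.parser")
    text = " ".join(soup.get_text(separator=" ").split())
    return [text[i:i + 500] for i in range(0, len(text), 500)]

# 1. Ingest: embed the chunks into the vector database.
client = chromadb.Client()
collection = client.create_collection("company-site")
chunks = scrape_page("https://example.com")  # your public company website
collection.add(documents=chunks, ids=[f"chunk-{i}" for i in range(len(chunks))])

# 2. Retrieve: find the chunks most similar to the user's question.
question = "What products does the company offer?"
hits = collection.query(query_texts=[question], n_results=3)
context = "\n".join(hits["documents"][0])

# 3. Generate: pass retrieved context plus the question to any open-source
# LLM (e.g. a locally served Llama-family model; serving details omitted).
prompt = f"Answer using only this context:\n{context}\n\nQuestion: {question}"
print(prompt)
```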

Visit Canonical at booth 1601 at GTC to check them out.

Come and meet us at NVIDIA GTC 2024

If you are interested in building or scaling your AI projects with open source solutions, we are here to help you. Visit ubuntu.com/nvidia to explore our joint data centre offerings.

Book a meeting with us

Learn more about our joint solutions

Explore Canonical & Ubuntu at Past GTCs
