Search the Community
Showing results for tags 'edge ai'.
-
Welcome to the new era, where AI is driving innovation and rapidly changing what applications look like, how they’re designed and built, and how they’re delivered. Businesses in nearly every industry are racing to apply AI in their products and operations to better engage customers, increase productivity, and gain a competitive edge. Most companies are familiar with the transformative capabilities of AI but are unclear where and how to best apply it in their business. To accelerate AI development and integration, organizations need a trusted partner with AI expertise that can help them determine their AI strategy and provide comprehensive, unified services, infrastructure, and tools specifically designed for AI.

Realize cutting-edge AI capabilities

Join Microsoft at the NVIDIA GTC AI Conference, March 18–21. Register now.

Companies of all sizes are turning to Microsoft for innovative, secure, and responsible AI services. Microsoft provides full-stack AI solutions—including developer tools, application services, and supercomputing infrastructure built specifically for AI—plus a global team of AI experts that can help organizations accelerate their AI production. Continuously investing to deliver the latest responsible and secure AI technologies, Microsoft is committed to helping companies transform their business with AI. Hear how Microsoft is helping organizations around the world achieve more with Microsoft AI in this video.

Delivering cutting-edge AI services

In 2023, Microsoft unveiled yet another round of AI innovations, from AI services to silicon, that can help any business accelerate AI production. Whether you need to add intelligence to your existing applications or create new ones from scratch, Microsoft Azure has the right AI services and infrastructure. Microsoft Azure AI Studio, now in preview, empowers organizations and developers to innovate with AI. The platform, designed to be accessible and responsible, provides a one-stop shop to seamlessly explore, build, test, and deploy AI solutions using state-of-the-art AI tools and machine learning models. Developers can build generative AI applications, including copilot experiences, using out-of-the-box and customizable tooling and models with built-in security and compliance. Microsoft Copilot and Microsoft Copilot for Azure transform productivity and business processes for everyone from office workers and front-line workers to developers and IT professionals. Azure AI Services ranges from pre-built cognitive services that can perform tasks like vision, speech, and decision making to custom machine learning solutions that can be built and deployed using Azure Machine Learning. Azure OpenAI Service offers industry-leading coding and language AI models and the latest advancements in generative AI for content creation, conversational AI, and data grounding. Azure supercomputing infrastructure provides the latest AI-optimized silicon, featuring new Microsoft custom-designed chips (Azure Maia 100 and Azure Cobalt 100), NVIDIA® H100 and H200 Tensor Core graphics processing unit (GPU)-optimized Azure virtual machines, and the NVIDIA AI foundry service on Azure.

Experience the innovation at NVIDIA GTC

Join Microsoft at the NVIDIA GTC AI Conference, March 18–21, at booth #1108 in San Jose, CA (or virtually) to discover how these cutting-edge Azure AI services and supercomputing infrastructure can help power your AI transformation.
Through in-person and on-demand sessions, live discussions, and hands-on demos, attendees can:

Get to know the core Azure AI services and technologies that power some of the world’s largest and most complex AI models and applications.
Discover how to accelerate the delivery of generative AI and large language models.
Explore how Azure AI Studio and purpose-built cloud infrastructure can accelerate AI development and deployment.
Learn from best practices and customer experiences to speed your AI journey.

Attend Microsoft sessions

Register for a Microsoft in-person or on-demand session at NVIDIA GTC. See the full list of Microsoft sessions and register.

Featured sessions

S63275 Power your AI transformation with the Microsoft Cloud
S63277 Unlocking Generative AI in the Enterprise with NVIDIA on Azure
S63274 The Next Level of GenAI with Azure OpenAI Service and Copilot
S63273 Deep Dive into Training and Inferencing Large Language Models on Azure
S63276 Behind the Scenes with Azure AI Infrastructure

Talks and panels

S61190 The Small Models Revolution
S62777 The Role of Generative AI in Modern Medicine
S61936 A Deep Dive into Sustainable Cloud Computing
S62336 ONNX Runtime: Accelerated AI Deployment for PC Apps
S62730 Generative AI Adoption and Operational Challenges in Government
S62783 Digitalizing the World’s Largest Industries with OpenUSD and Generative AI
S62504 Optimizing Your AI Strategy to Develop and Deploy Novel Deep Learning Models in the Cloud for Medical Image Analysis
S62447 Best Practices in Networking for AI: Perspectives from Cloud Service Providers

Meet with Microsoft

Visit Microsoft in person at booth #1108, where we’ll showcase the latest in AI services and supercomputing infrastructure. Join live discussion sessions in the in-booth theater, connect with Microsoft AI experts, and try out the latest technology and hardware. Can’t join in person? Visit us virtually via the NVIDIA GTC site starting March 18.

Get hands-on training

Microsoft is proud to host NVIDIA Hands-On Training at GTC. Attend full-day, hands-on, instructor-led workshops offered onsite and virtually. Participants will have access to a fully configured GPU-accelerated server in the cloud and the chance to earn an NVIDIA certificate of subject-matter competency.

Register today

We’re excited to meet with you in person or virtually at the NVIDIA GTC AI Conference. With over 900 inspiring sessions, 200+ exhibits, 20+ technical workshops, and tons of unique networking events, discover what’s next in AI and accelerated computing at NVIDIA GTC. Register today.

Learn more about Azure AI

Azure AI Solutions
NVIDIA | Accelerated Computing in Microsoft Azure

The post Explore cutting-edge AI solutions with Microsoft at NVIDIA GTC appeared first on Microsoft Azure Blog. View the full article
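For readers who want to experiment before the event, here is a minimal, illustrative sketch of calling a chat model deployed in Azure OpenAI Service (mentioned above) using the openai Python package. The deployment name, API version, and environment variable names are assumptions for the sketch, not values from the post.

```python
# Minimal sketch: querying a chat model deployed in Azure OpenAI Service.
# The deployment name, API version, and env var names are placeholders --
# substitute your own resource's settings.
import os

from openai import AzureOpenAI  # pip install openai

client = AzureOpenAI(
    azure_endpoint=os.environ["AZURE_OPENAI_ENDPOINT"],  # e.g. https://<resource>.openai.azure.com
    api_key=os.environ["AZURE_OPENAI_API_KEY"],
    api_version="2024-02-01",  # assumed; pick a version your resource supports
)

response = client.chat.completions.create(
    model="my-gpt-deployment",  # hypothetical deployment name
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Summarize what edge AI is in one sentence."},
    ],
)
print(response.choices[0].message.content)
```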
-
edge ai Edge AI: what, why and how with open source
Ubuntu posted a topic in Artificial Intelligence
Edge AI is transforming the way that devices interact with data centres, challenging organisations to stay up to speed with the latest innovations. From AI-powered healthcare instruments to autonomous vehicles, there are plenty of use cases that benefit from artificial intelligence on edge devices. This blog will dive into the topic, capturing key considerations when starting an edge AI project, the main benefits, the challenges, and how open source fits into the picture.

What is edge AI?

AI at the edge, or edge AI, refers to the combination of artificial intelligence and edge computing. It aims to execute machine learning models on connected edge devices, enabling devices to make smarter decisions without always connecting to the cloud to process data. It is called edge because the machine learning model runs near the user rather than in a data centre. Edge AI is growing in popularity as industries identify new use cases and opportunities to optimise their workflows, automate business processes or unlock new chances to innovate. Self-driving cars, wearable devices, industrial assembly lines, and smart home appliances are among the technologies that take advantage of edge AI capabilities to deliver information to users in real time when it is most essential.

Benefits of edge AI

Algorithms are capable of understanding different inputs such as text, sound or images. They are particularly useful in places occupied by end users with real-world problems. These AI applications would be impractical or even impossible to deploy in a centralised cloud or enterprise data centre due to issues related to latency, bandwidth and privacy. Some of the most important benefits of edge AI are:

Real-time insights: Since data is analysed in real time, close to the user, edge AI enables real-time processing and reduces the time needed to complete activities and derive insights.
Cost savings: Depending on the use case, some data can be processed at the edge where it is collected, so it doesn’t all have to be sent to the data centre for training the machine learning algorithms. This reduces the cost of storing the data as well as training the model. At the same time, organisations often use edge AI to reduce the power consumption of edge devices by optimising the time they are on and off, which again leads to cost reduction.
High availability: A decentralised way of training and running the model enables organisations to ensure that their edge devices benefit from the model even if there is a problem within the data centre.
Privacy: Edge AI can analyse data in real time without exposing it to humans, increasing the privacy of the appearance, voice or identity of the people and objects involved. For example, surveillance cameras do not need someone to watch them; instead, on-device machine learning models send alerts depending on the use case or need (a sketch of this pattern follows below).
Sustainability: Using edge AI to reduce the power consumption of edge devices doesn’t just minimise costs, it also enables organisations to become more sustainable. With edge AI, enterprises can avoid utilising their devices unless they are needed.
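To illustrate the privacy benefit above, here is a minimal, hypothetical sketch of the pattern a smart camera might follow: frames are analysed on the device and only compact alert events ever leave it, never the raw video. The detect_person and send_alert functions are stand-ins for a real on-device model and a real messaging layer, not part of the original post.

```python
# Hypothetical edge-camera loop: raw frames stay on the device;
# only small alert metadata is sent upstream.
import json
import time
from typing import Any, Iterable

def detect_person(frame: Any) -> float:
    """Stand-in for an on-device model (e.g. a quantised detector)."""
    return 0.1  # dummy confidence score

def send_alert(event: dict) -> None:
    """Stand-in for publishing to a broker (e.g. MQTT); payload is tiny."""
    print("ALERT:", json.dumps(event))

THRESHOLD = 0.8

def run(camera: Iterable[Any]) -> None:
    for frame in camera:  # frames never leave this function
        score = detect_person(frame)
        if score >= THRESHOLD:
            send_alert({  # only metadata crosses the network
                "ts": time.time(),
                "event": "person_detected",
                "confidence": round(score, 2),
            })

if __name__ == "__main__":
    fake_camera = [object()] * 5  # stand-in for a frame source
    run(fake_camera)
```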
Use cases in the industrial sector

Across verticals, enterprises are quickly developing and deploying edge AI models to address a wide variety of use cases. To get a better sense of the value that edge AI can deliver, let’s take a closer look at how it is being used in the industrial sector. Industrial manufacturers struggle with large facilities that often use a significant number of devices. A survey fielded in the spring of 2023 by Arm found that edge computing and machine learning were among the top five technologies that will have the most impact on manufacturing in the coming years. Edge AI use cases are often tied to the modernisation of existing manufacturing factories. They include production scheduling, quality inspection, and asset maintenance, but applications go beyond that. Their main objective is to improve the efficiency and speed of automation tasks like product assembly and quality control. Some of the most prominent use cases of edge AI in manufacturing include:

Real-time detection of defects as part of quality inspection processes that use deep neural networks for analysing product images. Often, this also enables predictive maintenance, helping manufacturers minimise the need to reactively fix their components by instead addressing potential issues preemptively.
Execution of real-time production assembly tasks based on low-latency operations of industrial robots.
Remote support of technicians on field tasks based on augmented reality (AR) and mixed reality (MR) devices.

Low latency is the primary driver of edge AI in the industrial sector. However, some use cases also benefit from improved security and privacy. For example, 3D printers can use edge AI to protect intellectual property by keeping sensitive data on the device rather than sending it through a centralised cloud infrastructure.

Best practices for edge AI

Compared to other kinds of AI projects, running AI at the edge comes with a unique set of challenges. To maximise the value of edge AI and avoid common pitfalls, we recommend following these best practices:

Edge device: At the heart of edge AI are the devices which end up running the models. They all have different architectures, features and dependencies. Ensure that the capabilities of your hardware align with the requirements of your AI model, and ensure that the software – such as the operating system – is certified on the edge device.
Security: Both in the data centre and on the edge devices there are artefacts that could compromise the security of an organisation. Whether we talk about the data used for training, the ML infrastructure used for developing or deploying the ML model, or the operating system of the edge device, organisations need to protect all of these artefacts. Take advantage of the appropriate security capabilities to safeguard these components, such as secure packages, secure boot of the OS on the edge device, or full-disk encryption on the device.
Machine learning model size: The size of the machine learning model differs depending on the use case. The model needs to fit on the end device it is intended to run on, so its size dictates the chances of deploying it successfully, and developers need to optimise for it (a minimal quantisation sketch appears near the end of this post).
Network connection: The machine learning lifecycle is an iterative process, so models need to be periodically updated. The network connection therefore influences both the data collection process and the model deployment capabilities. Organisations need to ensure there is a reliable network connection before deploying models or building an AI strategy.
Latency: Organisations often use edge AI for real-time processing, so latency needs to be minimal. For example, retailers need instant alerts when fraud is detected and cannot ask customers to wait at the cashiers for minutes before confirming payment. Depending on the use case, latency needs to be assessed and considered when choosing the tooling and model update cadence.
Scalability: Scale is often limited by the cloud bandwidth needed to move and process information, which leads to high costs. To ensure a broader range of scalability, data collection and part of the data processing should happen at the edge.
Remote management: Organisations often have multiple devices or multiple remote locations, so scaling to all of them brings new challenges related to their management. To address these challenges, ensure that you have mechanisms in place for easy, remote provisioning and automated updates.

Edge AI with open source

Open source is at the centre of the artificial intelligence revolution, and open source solutions can provide an effective path to addressing many of the best practices described above. When it comes to edge devices, open source technology can be used to ensure the security, robustness and reliability of both the device and the machine learning model. It gives organisations the flexibility to choose from a wide spectrum of tools and technologies, benefit from community support and quickly get started without a huge investment. Open source tooling is available across all layers of the stack, from the operating system that runs on the edge device, to the MLOps platform used for training, to the frameworks used to deploy the machine learning model.

Edge AI with Canonical

Canonical delivers a comprehensive AI stack with all the open source software organisations need for their edge AI projects. Canonical offers an end-to-end MLOps solution that enables you to train your models. Charmed Kubeflow is the foundation of the solution, and it is seamlessly integrated with leading open source tooling such as MLflow for model registry or Spark for data streaming. It gives organisations the flexibility to develop their models on any cloud platform and any Kubernetes distribution, offering capabilities such as user management, security maintenance of the packages used, or managed services. The operating system that the device runs plays an important role. Ubuntu Core is the distribution of the open source Ubuntu operating system dedicated to IoT devices. It has capabilities such as secure boot and full-disk encryption to ensure the security of the device. For certain use cases, running a small cloud such as MicroCloud enables unattended edge clusters to leverage machine learning. Packaging models as snaps makes them easy to maintain and update in production. Snaps offer a variety of benefits, including OTA updates, automatic rollback in case of failure and no-touch deployment. At the same time, for managing the lifecycle of the machine learning model and for remote management, brand stores are an ideal solution.

Get started with edge AI

Explore the Canonical solution further with our MLOps Toolkit to discover the key factors to consider when building your machine learning toolkit, which includes:

Hardware and software that is already tested and validated on the market
Open source machine learning tools for data processing and model building
Container solutions for orchestration
Cloud computing with multiple options
Production-grade solutions that can be rolled out within an enterprise

Download the MLOps Toolkit here. To learn more about Canonical’s edge AI solutions, get in touch.
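As referenced in the best practices above, here is a minimal sketch of one common way to shrink a model for a constrained edge device: post-training quantisation with TensorFlow Lite. The toy Keras model and file name are placeholders, not part of the original post; other stacks (ONNX, PyTorch Mobile) offer similar options.

```python
# Minimal sketch: post-training quantisation with TensorFlow Lite.
# The toy Keras model and output file name are hypothetical placeholders.
import tensorflow as tf

# Stand-in for a trained model; in practice, load your own.
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(224, 224, 3)),
    tf.keras.layers.Conv2D(8, 3, activation="relu"),
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(10, activation="softmax"),
])

converter = tf.lite.TFLiteConverter.from_keras_model(model)
# Default optimisation stores weights as 8-bit, roughly a 4x size cut.
converter.optimizations = [tf.lite.Optimize.DEFAULT]
tflite_model = converter.convert()

with open("model_quantised.tflite", "wb") as f:
    f.write(tflite_model)
print(f"quantised model size: {len(tflite_model) / 1024:.0f} KiB")
```

On the device itself, the resulting file can then be executed with the lightweight tflite-runtime interpreter instead of the full TensorFlow package, which also helps with the edge-device footprint.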
Further reading

5 Edge Computing Examples You Should Know
How a real-time kernel reduces latency in telco edge clouds
MLOps Toolkit Explained

View the full article
-
Transforming the landscape of edge computing and IIoT with Ubuntu Certified Hardware

[Nuremberg, Germany, 13 November 2023] — InoNet Computer GmbH, a Eurotech Company known for the engineering and manufacturing of embedded systems and edge AI computers, has entered into a strategic partnership with Canonical, the publisher of Ubuntu. Together they are set to deliver a robust platform for deploying IoT solutions, introducing cutting-edge Ubuntu certified computers.

This partnership aligns InoNet’s hardware with Canonical’s Hardware Certification programme, ensuring stringent standards for reliability, software compatibility and ongoing maintenance. Under this programme, certified hardware gains access to continuous support and essential security updates, providing customers with peace of mind that their systems will remain secure and operational.

At the heart of this collaborative effort is the InoNet Concepion®-tXf-L-v3 computer, fueled by Intel® processors, a pinnacle of reliability and seamless performance. Thanks to Canonical’s hardware certification, backed up by extensive real-world testing, the device ensures compatibility with every minor update beyond the initial validation. This means that when minor Ubuntu updates and patches are released, the pre-certified hardware is less likely to encounter issues or require significant adjustments to function with these updates. As a result, users can rest assured that their software stacks will operate as specified by Ubuntu. Moreover, the integration of hardware drivers into the Ubuntu kernel simplifies usage and enhances compatibility with future kernel updates.

“InoNet’s collaboration with Canonical is a milestone in our commitment to deliver quality products to our customers. With access to reliable, performant and Ubuntu-certified hardware, they can seamlessly and efficiently use edge computing technology to optimise their business processes, up to full AI applications. Ubuntu-certified hardware, backed by software compatibility, long-term support and security updates, ensures a level of reliability that sets important industry standards. This collaboration not only minimises integration risks for our customers, but also accelerates time-to-market. I am thrilled that together we can enable edge AI applications, paving the way for a smarter, safer and more sustainable future,” said Ralph Ostertag, CEO, InoNet.

The combination of Ubuntu with the Concepion®-tXf-L-v3 forms an adaptable platform for IoT solutions on edge servers, IPCs, and edge AI systems. Certified for both reliability and performance, this collaboration between InoNet and Canonical not only enhances these attributes but also streamlines processes, mitigates risks and accelerates time-to-market across various industry sectors.

“Through the Ubuntu Certified Programme, we are able to provide secure, fully supported IoT solutions across the full stack. We are excited to have InoNet joining the Ubuntu ecosystem. With this partnership, we can deliver the best possible Linux experience out of the box, with long-term security and reliability support. We look forward to seeing more innovative edge AI applications brought to InoNet customers,” said Joe Dulin, VP of Devices Sales, Canonical.

About InoNet Computer GmbH

InoNet Computer GmbH, a German company and a subsidiary of Eurotech, designs and produces computer systems for industrial applications.
Founded in 1998, the company’s reach extends across diverse verticals, including manufacturing, industrial automation, transportation, logistics, and healthcare. With a reputation for innovation and reliability, InoNet is committed to delivering state-of-the-art solutions to meet the evolving needs of enterprises in the era of IoT, automotive and edge computing.

About Canonical

Canonical, the publisher of Ubuntu, provides open source security, support and services. Our portfolio covers critical systems, from the smallest devices to the largest clouds, from the kernel to containers, from databases to AI. With customers that include top tech brands, emerging startups, governments and home users, Canonical delivers trusted open source for everyone.

View the full article
-
edge ai Maximize Performance in Edge AI Applications
KDnuggets posted a topic in Artificial Intelligence
This article provides an overview of the strategies for optimizing AI system performance in edge AI deployments. View the full article
-
Edge AI is a term used to describe the installation of artificial intelligence (AI) software directly on hardware. Rather than being centralized in a cloud or data center, these AI workloads run at the network’s edge, near where data is generated or consumed. Edge AI can be applied to several types of devices, such as smartphones, laptops, cameras, robots, drones, cars, sensors, and IoT devices. This article will explain the following content:

How to Use Edge AI?
Benefits of Edge AI
Challenges of Edge AI

How to Use Edge AI?

To use edge AI, one needs to consider the following steps:

Identify the use case: The first step is to identify the problem or opportunity that edge AI can address. For example, one may want to use edge AI for face recognition, voice control, gesture recognition, or anomaly detection.

Choose the device: The next step is to choose the device that will run the edge AI application. The device should have sufficient hardware capabilities and software compatibility to support the desired AI technique and model. For example, one may choose a smartphone, a smart camera, a smart speaker, or a smart sensor.

Train the model: The third step is to train the AI model that will perform the edge AI task. The model can be trained either on the device or in the cloud, and should be optimized for accuracy, speed, and size according to the device specifications and use case requirements. For example, one may use TensorFlow Lite, PyTorch Mobile or ONNX Runtime for model training and optimization.

Deploy the model: The last step is to deploy the AI model on the device and test its functionality and performance. The model can be deployed either manually or automatically using tools such as Firebase ML Kit or AWS IoT Greengrass, and should be monitored and updated regularly to ensure its effectiveness. For example, one may use Google Cloud IoT Core or Azure IoT Hub for model management and analytics. (A minimal deployment sketch using ONNX Runtime appears after the challenges list below.)

Benefits of Edge AI

Some of the benefits of edge AI include:

Reduced latency: Edge AI can provide real-time or near-real-time responses to user requests or events, without relying on network connectivity or cloud availability. This can improve user experience and performance for applications that require low latency, such as video streaming, augmented reality, and autonomous driving.

Reduced bandwidth: Edge AI can decrease the quantity of data that needs to be transmitted to and from the cloud, which can save bandwidth costs and network resources. This can also enhance data security and privacy.

Reduced power consumption: Edge AI can optimize the energy efficiency of devices by using local processing instead of cloud computing. This can extend the battery life of devices and decrease the environmental impact of data centers.

Challenges of Edge AI

Some of the challenges of edge AI include:

Limited resources: Edge devices may have limited computational power, memory, storage, and battery capacity compared to cloud servers. This can restrict the accuracy of AI models that can execute on edge devices. Therefore, edge AI may require specialized hardware or software optimization techniques to achieve high performance and quality.

Model management: Edge AI may involve a large number of distributed devices that need to be updated and synchronized with the latest AI models and data. This can pose challenges for model deployment, monitoring, and maintenance. Therefore, edge AI may require efficient model management tools and frameworks to facilitate edge-cloud collaboration and coordination.

Security risks: Edge devices may be more vulnerable to physical tampering or cyberattacks than cloud servers, which can compromise the confidentiality of data and AI models on edge devices. Therefore, edge AI may require robust security measures and protocols to protect edge devices and data.
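To make the “Deploy the model” step concrete, here is a minimal sketch of running an exported model on an edge device with ONNX Runtime, one of the runtimes named above. The model file name and input shape are hypothetical placeholders; in practice you would use the values from your own exported model.

```python
# Minimal sketch: on-device inference with ONNX Runtime.
# "model.onnx" and the input shape are hypothetical placeholders --
# inspect sess.get_inputs() for your own model's expectations.
import numpy as np
import onnxruntime as ort  # pip install onnxruntime

sess = ort.InferenceSession("model.onnx", providers=["CPUExecutionProvider"])

input_meta = sess.get_inputs()[0]
print("model expects:", input_meta.name, input_meta.shape)

# Fake input standing in for a sensor reading or camera frame.
x = np.random.rand(1, 3, 224, 224).astype(np.float32)

outputs = sess.run(None, {input_meta.name: x})  # None = return all outputs
print("predicted class:", int(np.argmax(outputs[0])))
```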
Conclusion

Edge AI can enable faster, more efficient, and more secure processing of data and AI algorithms, and can reduce the bandwidth and latency issues associated with cloud-based AI. Edge AI can also leverage several types of AI techniques, such as machine learning (ML), deep learning (DL), computer vision (CV), and natural language processing (NLP). This article has explained edge AI in detail.

View the full article
-
Enterprises struggle to bring AI and automation to the edge due to strict requirements and regulations across verticals. Long-term support, zero-trust security, and built-in functional safety are only a few of the challenges faced by players who wish to accelerate their technology adoption. At Canonical, we are excited by the promise of bringing secure AI and automation to the edge, and we look forward to providing a stable, open source foundation for NVIDIA IGX, a new, industrial-grade edge AI platform announced by NVIDIA today. IGX is purpose-built for high performance, proactive safety, and end-to-end security in regulated environments. The first product under the IGX platform is NVIDIA IGX Orin, designed to deliver ultra-fast performance in a compact size and power envelope. It is ideal for use cases in manufacturing, logistics, energy, retail and healthcare.

NVIDIA IGX brings functional safety to the industrial edge

Organisations are extending to the edge, pushing workloads closer to where users and embedded systems connect to the network. NVIDIA IGX is charting the course for customers to navigate the shift to the edge by bringing built-in functional safety. Three layers of functional safety — reactive, proactive and predictive — play a crucial role at the industrial edge. Whereas the reactive layer is about mitigating the severity of threats, and predictive safety comprises anticipating future exposure based on past performance, proactivity is about identifying concerns before events occur. Designed for industrial and medical certifications, IGX Orin redefines industrial computing by delivering proactive safety in regulated environments. The prevention and detection of random and systematic hardware errors are safety features crucial for environments where humans and robots work together. IGX Orin features a programmable safety microcontroller unit built into the board design, enabling functional safety to become a reality. Enterprises aiming to tap into the fourth industrial revolution can now rely on IGX Orin’s cutting-edge 275 tera-ops per second of AI performance to proactively prevent damage, reduce costs and improve factory efficiency. Even at the edge, requirements and regulations vary, from automotive to industrial to medical use cases.

Amanda Saunders, Senior Manager of Edge AI at NVIDIA, said: “The growth of AI and automation at the edge has led to new requirements in specialised markets. With NVIDIA IGX Orin, we are helping customers seize the opportunity at the edge by bringing AI, security, and proactive safety to regulated markets like industrial automation and medical devices.”

In addition, IGX Orin takes power optimisations from the mobile system-on-a-chip world to a server form factor. The bleeding-edge performance of IGX Orin, shipping with an NVIDIA ConnectX-7 SmartNIC capable of 200 gigabits per second, makes it an energy-efficient system built for low-latency applications with real-time constraints.

Trusted and secure underlying OSs for a new generation of industrial use cases

Ubuntu, backed by Canonical, is the most popular open source operating system (OS) for developers, with commercial-grade support available for production deployments. Such support means that Ubuntu is not just the reference OS for innovators and developers, but also the vehicle enabling enterprises to run secure AI workloads at the edge without users having to worry about the stability and security of the underlying platform.
With Ubuntu Core, the application-centric OS for embedded Linux devices, the built-in security of IGX Orin devices can be enhanced beyond bug fixes and CVE patches. Industrial pioneers will benefit from Ubuntu Core’s state-of-the-art security features, from full-disk encryption to strict confinement: every edge application on top of Ubuntu Core sits in a secure, sandboxed environment. By using an OS designed for utmost reliability and optimised for security, world-leading suppliers and manufacturers are free to concentrate their efforts and redirect resources towards their value-add activities.

Bringing high performance to the edge

The 22.04 LTS release of Ubuntu brought improved energy and performance features. Running Ubuntu on the new IGX Orin will provide developers with significant usability, battery and performance improvements. Real-time kernel support by Canonical is also available for Ubuntu users, guaranteeing ultra-low latency and security for critical infrastructure.

“As factories strive for increased overall equipment effectiveness and reduced process downtime, there is a greater need for high-performance and energy-efficient systems built for real-time applications at the edge,” said Edoardo Barbieri, Product Manager at Canonical. “Real-time Ubuntu will power the next generation of industrial innovations by providing a deterministic response time to their extreme low-latency requirements. Powered by IGX Orin’s high performance, we will deliver minimal latency for enterprise workloads at the edge.”

Using Ubuntu will also enable the community to leverage the open source ecosystem of applications and AI-based workloads.

Ideal to accelerate industrial transformation

Companies deploying smart automation solutions using Ubuntu Core have plenty to look forward to with the IGX platform. Take, for instance, robotics: as the automation market grows, so do robotics development and the need for functional safety in environments with close human interactions. Ubuntu Core developers are pushing robotics to new heights, from warehouses to hospitals. NVIDIA IGX’s safety architecture and features will allow robotics companies to accelerate the adoption of their products in safety-critical environments.

“As factories strive for increased overall equipment effectiveness and reduced process downtime, IGX Orin and Ubuntu deliver the perfect combination of high performance, built-in functional safety, end-to-end security and long-term support,” Edoardo Barbieri added. “By delivering proactive safety in regulated environments, we can now predict machine failure based on vibrations before it happens. With proactive part replacement and by preventing downtime, we are bringing the future of industrial automation forward.”

Ready to redefine industrial computing

By redefining industrial computing, NVIDIA IGX will meet the enterprise-grade demands of high-performance systems at the edge. Forward-thinkers and innovators are now in the driver’s seat to push the envelope of AI and robotics in regulated markets. With the introduction of NVIDIA IGX, enterprises get a boost to bring AI and automation to the edge. Canonical is looking forward to working with NVIDIA to deliver the highest performance, ease of use, and industrial readiness with Ubuntu and Ubuntu Core on the NVIDIA IGX platform.
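To make “deterministic response time” slightly more concrete, here is a rough, illustrative sketch that measures how late a periodic loop wakes up; the 1 ms period is an arbitrary assumption. On a generic kernel under load, worst-case lateness can spike by orders of magnitude, which is exactly what a real-time kernel such as Real-time Ubuntu (built on PREEMPT_RT) is designed to bound. This is a toy demonstration, not a benchmark; dedicated tools such as cyclictest are the standard for real latency testing.

```python
# Toy measurement of wake-up lateness for a periodic 1 ms task.
# Not a benchmark -- use cyclictest for real latency testing.
import time

PERIOD_NS = 1_000_000   # 1 ms period (arbitrary, hypothetical control rate)
ITERATIONS = 5_000

deadline = time.monotonic_ns()
worst_ns = 0
late_sum_ns = 0
for _ in range(ITERATIONS):
    deadline += PERIOD_NS
    remaining_ns = deadline - time.monotonic_ns()
    if remaining_ns > 0:
        time.sleep(remaining_ns / 1e9)            # sleep until the deadline
    lateness_ns = time.monotonic_ns() - deadline  # how late we woke up
    worst_ns = max(worst_ns, lateness_ns)
    late_sum_ns += max(lateness_ns, 0)

print(f"average lateness: {late_sum_ns / ITERATIONS / 1000:.1f} us")
print(f"worst-case lateness: {worst_ns / 1000:.1f} us")
```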
Resources

For more information about NVIDIA IGX, please check out the NVIDIA blog here. To explore more about Ubuntu Core, please read the blog and watch the on-demand webinar. To join us at GTC 2022, please check the blog here.

View the full article
-
Tagged with: ubuntu, ubuntu core (and 1 more)
-