Search the Community

Showing results for tags 'mlops'.

Found 9 results

  1. Join us on a journey of becoming a professional MLOps engineer by mastering essential tools, frameworks, key concepts, and processes in the field. View the full article
  2. Begin your MLOps journey with these comprehensive free resources available on GitHub. View the full article
  3. The recent boom in the AI landscape has seen larger and more complex models give rise to mind-blowing AI capabilities across a range of applications. At the same time, these larger models are driving up the cost of AI compute astronomically; state-of-the-art LLMs cost tens of millions of dollars (or more) to train, with hundreds of billions of parameters and trillions of tokens of data to learn from. ML teams need access to compute that is both scalable and price-efficient. They need the right infrastructure to operationalize ML activities and enhance developer productivity when working with large models. Moreover, they must maintain guardrails for orchestration and deployment to production. Developing, refining, optimizing, deploying, and monitoring ML models can be challenging and complex in the current AI landscape. However, the efficient orchestration, cost-effective performance, and scalability of Google Kubernetes Engine (GKE), in tandem with Weights & Biases (W&B) Launch's user-friendly interface, simplify the model development and deployment process for machine learning researchers. This integration seamlessly connects ML researchers to their training and inference infrastructure, making the management and deployment of machine learning models easier. In this blog, we show you how to use W&B Launch to set up access to either GPUs or Cloud Tensor Processing Units (TPUs) on GKE once, and from then on easily grant ML researchers frictionless access to compute.

     W&B Launch
     W&B is an ML developer platform designed to enable ML teams to build, track, and deploy better models faster. As the system of record for ML activities, from experiment tracking to model registry management, W&B improves collaboration, boosts productivity, and helps simplify the complexity of modern ML workflows. W&B Launch connects ML practitioners to their cloud compute infrastructure. After a one-time configuration by an ML platform team, ML researchers can select the target environment in which they want to launch training or inference jobs. W&B Launch automatically packages up all the code and dependencies for that job and sends it to the target environment, taking advantage of more powerful compute or parallelization to execute jobs faster and at greater scale. With jobs packaged up, practitioners can easily rerun jobs with small tweaks, such as changing hyperparameters or training datasets. ML teams also use W&B Launch to automate model evaluation and deployment workflows and to manage shared compute resources more efficiently. “We’re using W&B Launch to enable easy access to compute resources to dramatically scale our training workloads,” said Mike Seddon, Head of Machine Learning and Artificial Intelligence at VisualCortex. “Having that ability to create queues to each cluster and activate them is exactly what we want to do.”

     Creating a GKE cluster
     GKE offers a fully managed environment for deploying, managing, and scaling containerized applications using Kubernetes. ML teams often choose GKE over managing an open-source Kubernetes cluster because it provides the industry's only fully managed Kubernetes with a 99.9% Pod-level SLA backed by Google SREs, which reduces operational overhead and can improve an organization's security posture. To start using W&B Launch with GKE, create a GKE cluster with TPUs or with GPUs.

     W&B Launch jobs with GKE
     The W&B Launch agent builds container images inside of GKE, capturing all the dependencies for that particular run; a minimal sketch of such a tracked training script is shown below.
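As a rough illustration of the kind of run W&B Launch packages and re-executes, here is a minimal tracked training script using the standard wandb Python API. The project name, config values, and loss computation are placeholder assumptions for this sketch, not details from the article.

```python
import wandb

# Start a tracked run; the project name and config values are illustrative.
run = wandb.init(project="gke-launch-demo", config={"learning_rate": 1e-3, "epochs": 3})

for epoch in range(run.config.epochs):
    # Stand-in for a real training step; replace with your model's training loop.
    loss = 1.0 / (epoch + 1)
    run.log({"epoch": epoch, "loss": loss})

run.finish()
```

Once a run like this has been tracked, Launch can package its code and dependencies and re-run it, with tweaked hyperparameters, against a queue backed by the GKE cluster.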
     Once W&B Launch is configured with the GKE cluster, ML engineers can easily start training jobs by accessing powerful GPUs or Google Cloud TPUs to accelerate and supercharge AI development. To get started, create an account, install W&B, and start tracking your machine learning experiments in minutes. You can then set up your W&B Launch queue and W&B Launch agent on your GKE cluster. W&B Launch queues must be configured to point to a specific target resource, along with any additional configuration specific to that resource. A Launch queue that points to a GKE cluster might, for example, set environment variables or a custom namespace in its queue configuration. When an agent receives a job from a queue, it also receives the queue configuration. Once you’ve created your queue, you can set up your W&B Launch agent, a long-running process that polls one or more W&B Launch queues for jobs in first-in, first-out (FIFO) order. The agent then submits the job, along with the specified configuration options, to the target resource (your Cloud TPU nodes within your GKE cluster). Check out our documentation for more information on setting up your GKE cluster and agent.

     Creating a W&B Launch job
     Now that W&B Launch is set up with GKE, job execution can be handled through the W&B UI:
       • Identify the previously executed training run that has been tracked in W&B.
       • Select the specific code version used for the job under the version history tab.
       • You will see the W&B Launch button in the upper right-hand corner of the Python source screen.
       • After clicking the W&B Launch button, you can change any parameters for the experiment and select the GKE environment under the “Queue” menu.
     A common use case for W&B Launch is to execute a number of hyperparameter tuning jobs in parallel. Setting up a hyperparameter sweep is simple: select “Sweep” on the left-hand toolbar, enter the range of the sweep for the hyperparameters, and select the “GKE” queue for the environment (a programmatic sketch of a comparable sweep follows this result).

     Conclusion
     W&B Launch with GKE is a powerful combination, providing ML researchers and ML platform teams the compute resources and automation they need to rapidly increase the rate of experimentation for AI projects. To learn more, check out the full W&B Launch documentation and this repository of pre-built W&B Launch jobs.
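For readers who prefer a programmatic route, here is a minimal hyperparameter sweep sketch using the standard wandb Python API rather than the Launch UI described above. The project name, parameter ranges, and the stand-in training function are illustrative assumptions, not taken from the article.

```python
import wandb

# Illustrative sweep configuration: random search over two hyperparameters.
sweep_config = {
    "method": "random",
    "metric": {"name": "loss", "goal": "minimize"},
    "parameters": {
        "learning_rate": {"min": 1e-4, "max": 1e-1},
        "batch_size": {"values": [32, 64, 128]},
    },
}

def train():
    # Each agent invocation receives a config sampled from the sweep space.
    run = wandb.init(project="gke-launch-demo")
    loss = run.config.learning_rate * run.config.batch_size / 1000.0  # stand-in for real training
    run.log({"loss": loss})
    run.finish()

sweep_id = wandb.sweep(sweep_config, project="gke-launch-demo")
wandb.agent(sweep_id, function=train, count=5)  # run five trials
```

As the article notes, the same sweep can instead be pointed at the GKE-backed Launch queue so the trials run on the cluster's GPUs or TPUs rather than locally.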
  4. Last year, we published the Big Book of MLOps, outlining guiding principles, design considerations, and reference architectures for Machine Learning Operations (MLOps). Since then, Databricks has added key features simplifying MLOps, and Generative AI has brought new requirements to MLOps platforms and processes. We are excited to announce a new version of the Big Book of MLOps covering these product updates and Generative AI requirements. This blog post highlights key updates in the eBook, which can be downloaded here ... View the full article
  5. In this episode, we’re talking about observability in MLOps. When we think of observability, we talk about alerts, dashboards, logs: things that help people better understand what’s happening with their system. Is that correct? People used to talk about the three pillars of observability. I kind of don’t agree with that way of looking at it, but in its simplest form, it’s logs, metrics, and traces. View the full article
  6. Data scientists and machine learning engineers are often looking for tools that could ease their work. Kubeflow and MLFlow are two of the most popular open-source tools in the machine learning operations (MLOps) space. They are often considered when kickstarting a new AI/ML initiative, so comparisons between them are not surprising. This blog covers a very controversial topic, answering a question that many people in the industry have: Kubeflow vs MLFlow, which one is better? Both products have powerful capabilities, but their initial goals were very different. Kubeflow was designed as a tool for AI at scale, and MLFlow for experiment tracking. In this article, you will learn about the two solutions, including their similarities, differences, benefits, and how to choose between them.

     Kubeflow vs MLFlow: which one is right for you? Watch our webinar

     What is Kubeflow?
     Kubeflow is an open-source, end-to-end MLOps platform started by Google a couple of years ago. It runs on any CNCF-compliant Kubernetes and enables professionals to develop and deploy machine learning models. Kubeflow is a suite of tools that automates machine learning workflows in a portable, reproducible, and scalable manner. Kubeflow provides a platform for MLOps practices, with tooling to:
       • spin up a notebook
       • do data preparation
       • build pipelines to automate the entire ML process
       • perform AutoML and training on top of Kubernetes
       • serve machine learning models using KServe
     Kubeflow added KServe to the default bundle, offering a wide range of serving frameworks, such as NVIDIA Triton Inference Server. Whether you use TensorFlow, PyTorch, or PaddlePaddle, Kubeflow enables you to identify the best set of parameters for getting the best model performance. Kubeflow takes an end-to-end approach to handling machine learning processes on Kubernetes. It provides capabilities that help large teams work well together, using concepts like namespace isolation. Charmed Kubeflow is Canonical’s official distribution. Charmed Kubeflow facilitates faster project delivery, enables reproducibility, and uses the hardware to its fullest potential. With the ability to run on any cloud, the MLOps platform is compatible with public clouds, such as AWS or Azure, as well as private clouds. Furthermore, it is compatible with legacy HPC clusters, as well as high-end AI-dedicated hardware, such as NVIDIA’s GPUs or DGX. Charmed Kubeflow benefits from a wide range of integrations with various tools such as Prometheus and Grafana, as part of the Canonical Observability Stack, Spark, or NVIDIA Triton. It is a modular solution that can be decomposed into different applications, so that professionals can run AI at scale or at the edge.

     What is MLFlow?
     MLFlow is an open-source platform, started by Databricks a couple of years ago, used for managing machine learning workflows. It has various functions, such as experiment tracking (a minimal tracking sketch follows below). MLFlow can be integrated within any existing MLOps process, but it can also be used to build new ones. It provides standardised packaging so models can be reused in different environments. However, the most important part is the model registry component, which can be used with different ML tools. It provides guidance on how to manage machine learning workloads, without being an opinionated tool that constrains users in any manner. Charmed MLFlow is Canonical’s distribution of MLFlow. At the moment, it is available in beta.
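As context for the experiment-tracking and model-registry strengths described above, here is a minimal MLFlow tracking sketch using the standard mlflow Python API. The tracking URI, experiment name, and logged values are illustrative assumptions rather than details from the article.

```python
import mlflow

# Point at a tracking server; this local URI is an assumption for the example.
mlflow.set_tracking_uri("http://localhost:5000")
mlflow.set_experiment("kubeflow-vs-mlflow-demo")

with mlflow.start_run(run_name="baseline"):
    # Log hyperparameters and a resulting metric for this run.
    mlflow.log_param("learning_rate", 0.01)
    mlflow.log_param("batch_size", 64)
    mlflow.log_metric("accuracy", 0.93)
```

A trained model logged in the same run (for example with mlflow.sklearn.log_model and a registered_model_name) is what would end up in the model registry the article highlights.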
     We welcome all data scientists, machine learning engineers, and AI enthusiasts to try it out and share feedback. It is a chance to become an open-source contributor while simplifying your work in the industry.

     Kubeflow vs MLFlow
     Both Kubeflow and MLFlow are open-source solutions designed for the machine learning landscape. They have received massive support from industry leaders, as well as from a thriving community whose contributions are making a difference in the development of both projects. The main purpose of Kubeflow and MLFlow is to create a collaborative environment for data scientists and machine learning engineers to develop and deploy machine learning models in a scalable, portable, and reproducible manner. However, comparing Kubeflow and MLFlow is like comparing apples to oranges. From the very beginning, they were designed for different purposes. The projects have evolved over time and now have overlapping features, but most importantly, they have different strengths. On one hand, Kubeflow is proficient when it comes to machine learning workflow automation, using pipelines, as well as model development. On the other hand, MLFlow is great for experiment tracking and model registry. Also, from a user perspective, MLFlow requires fewer resources and is easier to deploy and use for beginners, whereas Kubeflow is a heavier solution, ideal for scaling up machine learning projects. Overall, Kubeflow and MLFlow should not be compared on a one-to-one basis. Kubeflow allows users to use Kubernetes for machine learning in a proper way, and MLFlow is an agnostic platform that can be used with anything, from VSCode to JupyterLab, from SageMaker to Kubeflow. If the layer underneath is Kubernetes, the best approach is to integrate Kubeflow and MLFlow and use them together. Charmed Kubeflow and Charmed MLFlow, for instance, are integrated, providing the best of both worlds. Getting them working together is easy and smooth, since we have already prepared a guide for you.

     Kubeflow vs MLFlow: which one is right for you? Follow our guide

     How to choose between Kubeflow and MLFlow?
     Choosing between Kubeflow and MLFlow is quite simple once you understand the role of each. MLFlow is recommended for tracking machine learning models and parameters, or when data scientists or machine learning engineers deploy models to different platforms. Kubeflow is ideal when you need a pipeline engine to automate some of your workflows (see the pipeline sketch further below). It is a production-grade tool, well suited to enterprises looking to scale their AI initiatives and cover the entire machine learning lifecycle within one tool, with validated integrations.

     Watch our webinar

     Future of Kubeflow and MLFlow
     Kubeflow and MLFlow are two of the most exciting open-source projects in the ML world. While they have overlapping features, they are best suited for different purposes, and they work well when integrated. Long term, they are very likely to keep evolving, with Kubeflow and MLFlow working closely in the upstream community to offer a smooth experience to the end user. MLFlow is going to stay the tool of choice for beginners. With the transition to scaled-up AI initiatives, MLFlow is also going to improve, and we are likely to see a better-defined journey between the tools. Will they eventually compete head-to-head and fulfil the same needs? Only time will tell.

     Start your MLOps journey with Canonical
     Canonical offers both Charmed Kubeflow and Charmed MLFlow as part of a growing MLOps ecosystem.
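To make the pipeline-engine role concrete, here is a minimal Kubeflow Pipelines sketch, assuming the KFP v2 Python SDK (the kfp package). The component logic, names, and default values are illustrative and not taken from the article.

```python
from kfp import dsl, compiler

@dsl.component
def prepare_data(rows: int) -> int:
    # Stand-in for a real data-preparation step.
    print(f"Preparing {rows} rows of training data")
    return rows

@dsl.component
def train_model(rows: int, learning_rate: float) -> str:
    # Stand-in for a real training step.
    print(f"Training on {rows} rows with lr={learning_rate}")
    return "model-v1"

@dsl.pipeline(name="demo-training-pipeline")
def training_pipeline(rows: int = 1000, learning_rate: float = 0.01):
    # Chain the two steps; the output of one feeds the next.
    data_task = prepare_data(rows=rows)
    train_model(rows=data_task.output, learning_rate=learning_rate)

if __name__ == "__main__":
    # Compile to a YAML definition that can be uploaded to a Kubeflow Pipelines instance.
    compiler.Compiler().compile(training_pipeline, "training_pipeline.yaml")
```

The compiled YAML can then be uploaded through the Kubeflow Pipelines UI or submitted with the KFP client, which is where Kubeflow's scheduling and scaling on Kubernetes come into play.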
     It offers security patching, upgrades, and updates of the stack, as well as a widely integrated set of tools that goes beyond machine learning, including observability capabilities and big data tools. The Canonical MLOps stack can be tried for free, and we also offer enterprise support and managed services. If you need consultancy services, check out our four lanes, described in the datasheet.

     Get in touch for more details

     Learn more about Canonical MLOps:
       • Ubuntu AI publication
       • A guide to MLOps
       • AI in retail: use case, benefits, tools
       • How to secure MLOps tooling?
     View the full article
  7. Recent advances in the technology space have generated a lot of excitement in data science, deep learning, artificial intelligence, and big data. These advances have led to the development of a dynamic ecosystem for data analysis. However, data analysis became more complicated as data volumes grew, and new machine learning algorithms were needed to give data scientists a better analysis experience. MLOps grew out of the development of these algorithms, since teams needed to deploy resources, version codebases, integrate data, and even test procedures. Since then, it has been widely used across several industries. View the full article
  8. Developing machine learning (ML) models is a daunting task for data scientists; managing these models in production can be even harder. To get successful results, data scientists need to recognize model drift (a simple drift-check sketch follows this result), retrain the model with updated data sets, improve performance, and maintain the underlying technology platforms. Hence, production-ready models are difficult and time-consuming to achieve. New challenges always appear once ML models are deployed to production and used within business processes. With more organizations adopting ML, there is a need for awareness of model management and operations. This is where MLOps – Machine Learning Operations – comes into play, making model management and operations easier and faster... The post The rise of MLOps appeared first on DevOps Online. View the full article
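The "recognize model drift" step mentioned above can be made concrete with a simple statistical check. The sketch below is a generic, illustrative approach (a two-sample Kolmogorov-Smirnov test on model scores) and is not drawn from the linked article; the data is synthetic.

```python
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(0)
# Illustrative data: scores seen at training time vs. scores seen in production.
training_scores = rng.normal(loc=0.0, scale=1.0, size=5000)
production_scores = rng.normal(loc=0.3, scale=1.0, size=5000)  # shifted distribution

# Two-sample Kolmogorov-Smirnov test as a simple drift signal.
statistic, p_value = ks_2samp(training_scores, production_scores)
if p_value < 0.01:
    print(f"Drift suspected (KS statistic={statistic:.3f}, p={p_value:.3g}); consider retraining.")
else:
    print("No significant drift detected.")
```

In practice a check like this would run on a schedule against fresh production data, feeding the retraining decision the excerpt describes.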
  9. Algorithmia today launched a performance-monitoring capability for machine learning (ML) models that tracks algorithm inference and operations metrics generated by the enterprise edition of its namesake platform for building these models. Company CEO Diego Oppenheimer said Algorithmia Insights provides a level of observability into ML models that DevOps teams have come to expect from applications. […] The post Algorithmia Allies With Datadog on MLOps Observability appeared first on DevOps.com. View the full article