Showing results for tags 'ml'.

  1. Fabric Madness part 5

Image by author and ChatGPT. “Design an illustration, with imagery representing multiple machine learning models, focusing on basketball data” prompt. ChatGPT, 4, OpenAI, 25 April 2024. https://chat.openai.com.

A huge thanks to Martim Chaves, who co-authored this post and developed the example scripts.

So far in this series, we’ve looked at how to use Fabric for collecting data, feature engineering, and training models. But now that we have our shiny new models, what do we do with them? How do we keep track of them, and how do we use them to make predictions? This is where MLflow’s Model Registry comes in, or what Fabric calls an ML Model.

A model registry allows us to keep track of different versions of a model and their respective performances. This is especially useful in production scenarios, where we need to deploy a specific version of a model for inference. A model registry can be seen as source control for ML models. Fundamentally, each version represents a distinct set of model files. These files contain the model’s architecture, its trained weights, and any other files necessary to load and use the model.

In this post, we’ll discuss how to log models and how to use the model registry to keep track of different versions of a model. We’ll also discuss how to load a model from the registry and use it to make predictions.

Registering a Model

There are two ways to register a model in Fabric: via code or via the UI. Let’s look at both.

Registering a model using code

In the previous post, we looked at creating experiments and logging runs with different configurations. Logging or registering a model can be done using code within a run. To do that, we just have to add a couple of lines of code.

```python
# Start the training job with `start_run()`
with mlflow.start_run(run_name="logging_a_model") as run:
    # Previous code...
    # Train model
    # Log metrics

    # Calculate predictions for training set
    predictions = model.predict(X_train_scaled_df)

    # Create signature
    # Signature required for model loading later on
    signature = infer_signature(np.array(X_train_scaled_df), predictions)

    # Model file name
    model_file_name = model_name + "_file"

    # Log model
    mlflow.tensorflow.log_model(best_model, model_file_name, signature=signature)

    # Get model URI
    model_uri = f"runs:/{run.info.run_id}/{model_file_name}"

    # Register model
    result = mlflow.register_model(model_uri, model_name)
```

In this code snippet, we first calculate the predictions for the training set. Then we create a signature, which is essentially the input and output shape of the model. This is necessary to ensure that the model can be loaded later on.

MLflow has functions to log models made with different commonly used packages, such as TensorFlow, PyTorch, and scikit-learn. When mlflow.tensorflow.log_model is used, a folder is saved as an artifact attached to the run, containing the files needed to load and run the model: the architecture, the trained weights, and any other configuration necessary for reconstruction. This makes it possible to load the model later, either to do inference, fine-tune it, or perform any other regular model operations, without having to re-run the original code that created it.

The model’s URI is used as a “path” to the model file, and is made up of the run ID and the name of the file used for the model. Once we have the model’s URI, we can register an ML Model using it. What’s neat about this is that if a model with the same name already exists, a new version is added. That way we can keep track of different versions of the same model and see how they perform, without overly complex code to manage it.

In our previous post, we ran three experiments, one for each model architecture being tested, with three different learning rates.
For each model architecture, an ML Model was created, and for each learning rate, a version was saved. In total, we now have nine versions to choose from, each with a different architecture and learning rate.

Registering a model using the UI

An ML Model can also be registered via Fabric’s UI. Model versions can be imported from the experiments that have been created.

Fig. 1 — Creating an ML Model using the UI. Image by author.

After creating an ML Model, we can import a model from an existing experiment. To do that, in a run, we have to select Save in the Save run as an ML Model section.

Fig. 2 — Creating a new version of the created ML Model from a run. Image by author.

Selecting the best model

Now that we have registered all of the models, we can select the best one. This can be done either via the UI or via code. Via the UI, we open each experiment, select the list view, and select all of the available runs. After finding the best run, we would have to check which model and version it corresponds to.

Fig. 3 — Inspecting an experiment. Image by author.

Alternatively, it can also be done via code, by getting the performance of every version of every ML Model and selecting the version with the best score.
```python
from mlflow.tracking import MlflowClient

client = MlflowClient()

mlmodel_names = list(model_dict.keys())
best_score = 2
metric_name = "brier"
best_model = {"model_name": "", "model_version": -1}

for mlmodel in mlmodel_names:
    model_versions = client.search_model_versions(filter_string=f"name = '{mlmodel}'")
    for version in model_versions:
        # Get metric history for Brier score and run ID
        metric_history = client.get_metric_history(run_id=version.run_id,
                                                   key=metric_name)
        # If the score is better than the best score so far,
        # save the model name and version
        if metric_history:
            last_value = metric_history[-1].value
            if last_value < best_score:
                best_model["model_name"] = mlmodel
                best_model["model_version"] = version.version
                best_score = last_value
        else:
            continue
```

In this code snippet, we first get a list of all of the available ML Models. Then, we iterate over this list and fetch all of the available versions of each ML Model. Getting a list of the versions of an ML Model can be done using the following line:

model_versions = client.search_model_versions(filter_string=f"name = '{mlmodel}'")

Then, for each version, we simply have to get its metric history. That can be done with the following line:

metric_history = client.get_metric_history(run_id=version.run_id, key=metric_name)

After that, we simply have to keep track of the best-performing version. At the end of this, we have found the best-performing model overall, regardless of architecture and hyperparameters.
Loading the best model

After finding the best model, using it to get the final predictions can be done using the following code snippet:

```python
# Load the best model
loaded_best_model = mlflow.pyfunc.load_model(
    f"models:/{best_model['model_name']}/{best_model['model_version']}"
)

# Evaluate the best model
final_brier_score = evaluate_model(loaded_best_model, X_test_scaled_df, y_test)
print(f"Best final Brier score: {final_brier_score}")
```

Loading the model can be done using mlflow.pyfunc.load_model(), and the only argument needed is the model's path. The path is made up of the model's name and version, in a models:/[model name]/[version] format. After that, we just have to make sure that the input has the same shape and the features are in the same order as when the model was trained - and that's it! Using the test set, we calculated a final Brier score of 0.20.

Conclusion

In this post, we discussed the ideas behind a model registry and why it's beneficial to use one. We showed how Fabric's model registry can be used, through the ML Model tool, either via the UI or via code. Finally, we looked at loading a model from the registry to do inference.

This concludes our Fabric series. We hope you enjoyed it and that you learned something new. If you have any questions or comments, feel free to reach out to us. We'd love to hear from you!

Originally published at https://nobledynamic.com on April 29, 2024.

Models, MLFlow, and Microsoft Fabric was originally published in Towards Data Science on Medium, where people are continuing the conversation by highlighting and responding to this story. View the full article
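The versioning behaviour the post relies on (registering under an existing model name adds a new version instead of overwriting it) can be sketched with a toy registry in plain Python. This is an illustrative stand-in with made-up names and run IDs, not MLflow's implementation:

```python
# Toy stand-in for an MLflow-style model registry: registering a model
# under an existing name appends a new version rather than overwriting.
class ToyModelRegistry:
    def __init__(self):
        # name -> list of model URIs; index i holds version i + 1
        self._models = {}

    def register(self, model_uri, name):
        """Register model_uri under name and return the new version number."""
        versions = self._models.setdefault(name, [])
        versions.append(model_uri)
        return len(versions)  # versions are 1-based, as in MLflow

    def get(self, name, version):
        """Return the URI stored for a given name and version."""
        return self._models[name][version - 1]


registry = ToyModelRegistry()
# Hypothetical run IDs; a real URI has the runs:/<run_id>/<model_file_name> shape
v1 = registry.register("runs:/abc123/model_file", "neural_network")
v2 = registry.register("runs:/def456/model_file", "neural_network")
```

Registering a second URI under "neural_network" yields version 2 while version 1 stays retrievable, which is the property that lets the post compare nine versions across three architectures without extra bookkeeping code.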
  2. Want to build and deploy robust machine learning systems to production? Start learning MLOps today with these courses from Google. View the full article
  3. The construction of big data applications based on open source software has become increasingly straightforward since the advent of projects like Data on EKS, an open source project from AWS that provides blueprints for building data and machine learning (ML) applications on Amazon Elastic Kubernetes Service (Amazon EKS). In the realm of big data, securing data on cloud applications is crucial. This post explores the deployment of Apache Ranger for permission management within the Hadoop ecosystem on Amazon EKS. We show how Ranger integrates with Hadoop components like Apache Hive, Spark, Trino, YARN, and HDFS, providing secure and efficient data management in a cloud environment. Join us as we navigate these advanced security strategies in the context of Kubernetes and cloud computing.

Overview of solution

The Amber Group’s Data on EKS Platform (DEP) is a Kubernetes-based, cloud-centered big data platform that revolutionizes the way we handle data in EKS environments. Developed by Amber Group’s Data Team, DEP integrates with familiar components like Apache Hive, Spark, Flink, Trino, HDFS, and more, making it a versatile and comprehensive solution for data management and BI platforms. The following diagram illustrates the solution architecture.

Effective permission management is crucial for several key reasons:

  • Enhanced security – With proper permission management, sensitive data is only accessible to authorized individuals, safeguarding against unauthorized access and potential security breaches. This is especially important in industries handling large volumes of sensitive or personal data.
  • Operational efficiency – By defining clear user roles and permissions, organizations can streamline workflows and reduce administrative overhead. This simplifies managing user access, saves time for data security administrators, and minimizes the risk of configuration errors.
  • Scalability and compliance – As businesses grow and evolve, a scalable permission management system helps with smoothly adjusting user roles and access rights. This adaptability is essential for maintaining compliance with data privacy regulations like GDPR and HIPAA, making sure that the organization’s data practices are legally sound and up to date.
  • Addressing big data challenges – Big data comes with unique challenges, like managing large volumes of rapidly evolving data across multiple platforms. Effective permission management helps tackle these challenges by controlling how data is accessed and used, providing data integrity and minimizing the risk of data breaches.

Apache Ranger is a comprehensive framework designed for data governance and security in Hadoop ecosystems. It provides a centralized framework to define, administer, and manage security policies consistently across various Hadoop components. Ranger specializes in fine-grained access control, offering detailed management of user permissions and auditing capabilities. Ranger’s architecture is designed to integrate smoothly with various big data tools such as Hadoop, Hive, HBase, and Spark. The key components of Ranger include:

  • Ranger Admin – This is the central component where all security policies are created and managed. It provides a web-based user interface for policy management and an API for programmatic configuration.
  • Ranger UserSync – This service is responsible for syncing user and group information from a directory service like LDAP or AD into Ranger.
  • Ranger plugins – These are installed on each component of the Hadoop ecosystem (like Hive and HBase). Plugins pull policies from the Ranger Admin service and enforce them locally.
  • Ranger Auditing – Ranger captures access audit logs and stores them for compliance and monitoring purposes. It can integrate with external tools for advanced analytics on these audit logs.
  • Ranger Key Management Store (KMS) – Ranger KMS provides encryption and key management, extending Hadoop’s HDFS Transparent Data Encryption (TDE).

The following flowchart illustrates the priority levels for matching policies. The priority levels are as follows:

  • Deny list takes precedence over allow list
  • Deny list exclude has a higher priority than deny list
  • Allow list exclude has a higher priority than allow list

Our Amazon EKS-based deployment includes the following components:

  • S3 buckets – We use Amazon Simple Storage Service (Amazon S3) for scalable and durable Hive data storage
  • MySQL database – The database stores Hive metadata, facilitating efficient metadata retrieval and management
  • EKS cluster – The cluster comprises three distinct node groups: platform, Hadoop, and Trino, each tailored for specific operational needs
  • Hadoop cluster applications – These applications include HDFS for distributed storage and YARN for managing cluster resources
  • Trino cluster application – This application enables us to run distributed SQL queries for analytics
  • Apache Ranger – Ranger serves as the central security management tool for access policy across the big data components
  • OpenLDAP – This is integrated as the LDAP service to provide a centralized user information repository, essential for user authentication and authorization
  • Other cloud services resources – Other resources include a dedicated VPC for network security and isolation

By the end of this deployment process, we will have realized the following benefits:

  • A high-performing, scalable big data platform that can handle complex data workflows with ease
  • Enhanced security through centralized management of authentication and authorization, provided by the integration of OpenLDAP and Apache Ranger
  • Cost-effective infrastructure management and operation, thanks to the containerized nature of services on Amazon EKS
  • Compliance with stringent data security and privacy regulations, due to Apache Ranger’s policy enforcement capabilities

Deploy a big data cluster on Amazon EKS and configure Ranger for access control

In this section, we outline the process of deploying a big data cluster on Amazon EKS and configuring Ranger for access control. We use AWS CloudFormation templates for quick deployment of a big data environment on Amazon EKS with Apache Ranger. Complete the following steps:

Upload the provided template to AWS CloudFormation, configure the stack options, and launch the stack to automate the deployment of the entire infrastructure, including the EKS cluster and Apache Ranger integration. After a few minutes, you’ll have a fully functional big data environment with robust security management ready for your analytical workloads, as shown in the following screenshot.

On the AWS web console, find the name of your EKS cluster. In this case, it’s dep-demo-eks-cluster-ap-northeast-1. For example:

```shell
aws eks update-kubeconfig --name dep-eks-cluster-ap-northeast-1 --region ap-northeast-1

## Check pod status.
kubectl get pods --namespace hadoop
kubectl get pods --namespace platform
kubectl get pods --namespace trino
```

After Ranger Admin is successfully forwarded to port 6080 of localhost, go to localhost:6080 in your browser. Log in with the user name admin and the password you entered earlier. By default, two policies have already been created, Hive and Trino, granting all access to the LDAP user you created (depadmin in this case). The LDAP user sync service is also set up and will automatically sync all users from the LDAP service created in this template.

Example permission configuration

In a practical application within a company, permissions for tables and fields in the data warehouse are divided based on business departments, isolating sensitive data for different business units. This provides data security and the orderly conduct of daily business operations. The following screenshots show an example business configuration.
The following is an example of an Apache Ranger permission configuration. The following screenshots show users associated with roles.

When performing data queries, using Hive and Spark as examples, we can demonstrate the comparison before and after permission configuration. The following screenshot shows an example of Hive SQL (running on Superset) with privileges denied. The following screenshot shows an example of Spark SQL (running in an IDE) with privileges denied. The following screenshot shows an example of Spark SQL (running in an IDE) with permissions granted.

Based on this example, and considering your enterprise requirements, it becomes feasible and flexible to manage permissions in the data warehouse effectively.

Conclusion

This post provided a comprehensive guide on permission management in big data, particularly on Amazon EKS using Apache Ranger, equipping you with the essential knowledge and tools for robust data security and management. By implementing the strategies and understanding the components detailed in this post, you can effectively manage permissions and implement data security and compliance in your big data environments.

About the Authors

Yuzhu Xiao is a Senior Data Development Engineer at Amber Group with extensive experience in cloud data platform architecture. He has many years of experience in AWS Cloud platform data architecture and development, primarily focusing on efficiency optimization and cost control of enterprise cloud architectures.

Xin Zhang is an AWS Solutions Architect, responsible for solution consulting and design based on the AWS Cloud platform. He has rich experience in R&D and architecture practice in the fields of system architecture, data warehousing, and real-time computing. View the full article
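The policy-matching priority listed above (deny-list exclude over deny list, deny over allow, allow-list exclude over allow list) can be sketched as a small decision function. This is a simplified illustration of the precedence rules only, not Ranger's actual policy engine; the policy data structure and the users and tables in it are made up:

```python
def evaluate_access(user, resource, policy):
    """Toy evaluation of one Ranger-style policy.

    Precedence (highest first): deny-list exclude, deny list,
    allow-list exclude, allow list. Unmatched requests are denied.
    """
    key = (user, resource)
    # A deny entry applies unless the deny-list exclude carves it out
    if key in policy.get("deny", set()) and key not in policy.get("deny_exclude", set()):
        return "deny"
    # An allow entry applies unless the allow-list exclude carves it out
    if key in policy.get("allow", set()) and key not in policy.get("allow_exclude", set()):
        return "allow"
    return "deny"  # default deny


# Hypothetical policy: analysts may read the sales table,
# except intern accounts; one blocked user is denied outright.
policy = {
    "allow": {("analyst", "sales"), ("intern", "sales")},
    "allow_exclude": {("intern", "sales")},
    "deny": {("blocked_user", "sales")},
    "deny_exclude": set(),
}
```

With this policy, "analyst" is allowed on "sales", "intern" is refused by the allow-list exclude, and "blocked_user" is refused by the deny list, mirroring the precedence shown in the flowchart.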
  4. This article is an overview of a particular subset of data structures useful in machine learning and AI development, along with explanations and example implementations. View the full article
  5. Artificial Intelligence (AI) and Machine Learning (ML) have emerged as transformative technologies, driving innovation across many industries. However, in addition to their benefits, AI and ML systems bring unique security challenges that demand a proactive and comprehensive approach. A new methodology that applies the principles of DevSecOps to AI and ML security, called AISecOps, ensures […] The article AISecOps: Applying DevSecOps to AI and ML Security appeared first on Build5Nines. View the full article
  6. Data + AI Summit

    About

    Experience everything that Summit has to offer. Attend all the parties, build your session schedule, enjoy the keynotes and then watch it all again on demand.

    • Expo access to 150+ partners and 100s of Databricks experts
    • 500+ breakout sessions and keynotes
    • 20+ hands-on trainings
    • Four days of food and beverage
    • Networking events and parties
    • On-demand session streaming after the event

    Join leading experts, researchers and open source contributors — from Databricks and across the data and AI community — who will speak at Data + AI Summit. Over 500 sessions covering everything from data warehousing, governance and the latest in generative AI. Join thousands of data leaders, engineers, scientists and architects to explore the convergence of data and AI. Explore the latest advances in Apache Spark™, Delta Lake, MLflow, PyTorch, dbt, Presto/Trino and much more. You’ll also get a first look at new products and features in the Databricks Data Intelligence Platform. Connect with thousands of data and AI community peers and grow your professional network in social meetups, on the Expo floor or at our event party.

    Register
    https://dataaisummit.databricks.com/flow/db/dais2024/landing/page/home

    Further Details
    https://www.databricks.com/dataaisummit/
  7. A fundamental requirement for any data-driven organization is to have a streamlined data delivery mechanism. With organizations collecting data at a rate like never before, devising data pipelines for adequate flow of information for analytics and Machine Learning tasks becomes crucial for businesses. As organizations gather information from multiple sources and data can come in […] View the full article
  8. Amazon Personalize is excited to announce automatic training for solutions. With automatic training, developers can set a cadence for their Personalize solutions to automatically retrain using the latest data from their dataset group. This process creates a newly trained machine learning (ML) model, also known as a solution version, and maintains the relevance of Amazon Personalize recommendations for end users. View the full article
  9. In their haste to deploy LLM tools, organizations may overlook crucial security practices. The rise in threats like Remote Code Execution indicates an urgent need to improve security measures in AI development. The post Vulnerabilities for AI and ML Applications are Skyrocketing appeared first on Security Boulevard. View the full article
  10. Join us on a journey of becoming a professional MLOps engineer by mastering essential tools, frameworks, key concepts, and processes in the field.View the full article
  11. MLflow is an open source platform used for managing machine learning workflows. It was launched back in 2018 and has grown in popularity ever since, reaching 10 million users in November 2022. AI enthusiasts and professionals had struggled with experiment tracking, model management and code reproducibility, so when MLflow was launched, it addressed pressing problems in the market. MLflow is lightweight and able to run on an average-priced machine, but it also integrates with more complex tools, so it’s ideal for running AI at scale.

A short history

Since MLflow was first released in June 2018, the community behind it has run a recurring survey to better understand user needs and ensure the roadmap addresses real-life challenges. About a year after the launch, MLflow 1.0 was released, introducing features such as improved metric visualisations, metric X coordinates, improved search functionality and HDFS support. Additionally, it offered Python, Java, R, and REST API stability.

MLflow 2.0 landed in November 2022, when the product also celebrated 10 million users. This version incorporates extensive community feedback to simplify data science workflows and deliver innovative, first-class tools for MLOps. Features and improvements include extensions to MLflow Recipes (formerly MLflow Pipelines) such as AutoML, hyperparameter tuning, and classification support, as well as improved integrations with the ML ecosystem, a revamped MLflow Tracking UI, a refresh of core APIs across MLflow’s platform components, and much more. In September 2023, Canonical released Charmed MLflow, a distribution of the upstream project.

Why use MLflow?

MLflow is often considered the most popular ML platform. It enables users to perform different activities, including:

  • Reproducing results: ML projects usually start with simplistic plans and tend to go overboard, resulting in an overwhelming quantity of experiments.
Manual or non-automated tracking implies a high chance of missing out on finer details. ML pipelines are fragile, and even a single missing element can throw off the results. The inability to reproduce results and code is one of the top challenges for ML teams.
  • Easy to get started: MLflow can be easily deployed and does not require heavy hardware to run. It is suitable for beginners who are looking for a solution to better see and manage their models. For example, this video shows how Charmed MLflow can be installed in less than 5 minutes.
  • Environment agnostic: The flexibility of MLflow across libraries and languages is possible because it can be accessed through a REST API and a command line interface (CLI). Python, R, and Java APIs are also available for convenience.
  • Integrations: While MLflow is popular in itself, it does not work in a silo. It integrates seamlessly with leading open source tools and frameworks such as Spark, Kubeflow, PyTorch or TensorFlow.
  • Works anywhere: MLflow runs in any environment, including hybrid or multi-cloud scenarios, and on any Kubernetes.

MLflow components

MLflow is an end-to-end platform for managing the machine learning lifecycle. It has four primary components:

MLflow Tracking

MLflow Tracking enables you to track experiments, with the primary goal of comparing results and the parameters used. It is crucial when it comes to measuring performance, as well as reproducing results. Tracked parameters include metrics, hyperparameters, features and other artefacts that can be stored on local systems or remote servers.

MLflow Models

MLflow Models provide professionals with different formats for packaging their models. This gives flexibility in where models can be used, as well as the format in which they will be consumed. It encourages portability across platforms and simplifies the management of machine learning models.

MLflow Projects

Machine learning projects are packaged using MLflow Projects.
It ensures reusability, reproducibility and portability. A project is a directory that is used to give structure to the ML initiative. It contains the descriptor file used to define the project structure and all its dependencies. The more complex a project is, the more dependencies it has, and dependencies come with risks when it comes to version compatibility and upgrades. MLflow Projects is especially useful when running ML at scale, where there are larger teams and multiple models being built at the same time. It enables collaboration between team members who are looking to jointly work on a project, transfer knowledge between them, or move work to production environments.

MLflow Model Registry

The Model Registry gives you a centralised place where ML models are stored. It helps simplify model management throughout the full lifecycle and the model’s transitions between different stages. It includes capabilities such as versioning and annotating, and provides APIs and a UI.

Key concepts of MLflow

MLflow is built around two key concepts: runs and experiments. In MLflow, each execution of your ML model code is referred to as a run. All runs are associated with an experiment. An MLflow experiment is the primary unit for MLflow runs. It influences how runs are organised, accessed and maintained. An experiment has multiple runs, and it enables you to efficiently go through those runs and perform activities such as visualisation, search and comparisons. In addition, experiments let you export run artefacts and metadata for analysis in other tools.

Kubeflow vs MLflow

Both Kubeflow and MLflow are open source solutions designed for the machine learning landscape. They have received massive support from industry leaders, and are driven by a thriving community whose contributions are making a difference in the development of the projects.
The main purpose of both Kubeflow and MLflow is to create a collaborative environment for data scientists and machine learning engineers, and to enable teams to develop and deploy machine learning models in a scalable, portable and reproducible manner. However, comparing Kubeflow and MLflow is like comparing apples to oranges. From the very beginning, they were designed for different purposes. The projects have evolved over time and now have overlapping features, but most importantly, they have different strengths. On the one hand, Kubeflow is proficient when it comes to machine learning workflow automation, using pipelines, as well as model development. On the other hand, MLflow is great for experiment tracking and model registry. From a user perspective, MLflow requires fewer resources and is easier to deploy and use by beginners, whereas Kubeflow is a heavier solution, ideal for scaling up machine learning projects.

Read more about Kubeflow vs. MLflow
Go to the blog

Charmed MLflow vs the upstream project

Charmed MLflow is Canonical’s distribution of the upstream project. It is part of Canonical’s growing MLOps portfolio. It has all the features of the upstream project, to which we add enterprise-grade capabilities such as:

  • Simplified deployment: the time to deployment is less than 5 minutes, enabling users to also upgrade their tools seamlessly, with simplified upgrades using our guides.
  • Automated security scanning: The bundle is scanned at a regular cadence.
  • Security patching: Charmed MLflow follows Canonical’s process and procedure for security patching. Vulnerabilities are prioritised based on severity, the presence of patches in the upstream project, and the risk of exploitation.
  • Maintained images: All Charmed MLflow images are actively maintained.
  • Comprehensive testing: Charmed MLflow is thoroughly tested on multiple platforms, including public cloud, local workstations, on-premises deployments, and various CNCF-compliant Kubernetes distributions.
Get started easily with Charmed MLflow

Further reading

  • [Whitepaper] Toolkit to machine learning
  • [Blog] What is MLOps?
  • [Webinar] Kubeflow vs. MLflow
  • [Blog] LLMs explained

Book a meeting

View the full article
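The runs-and-experiments data model described in the MLflow overview above can be sketched in a few lines of plain Python. This is a toy illustration of the relationship between the two concepts, not the MLflow API; the experiment name, learning rates, and metric values are made up:

```python
import uuid


class Run:
    """One execution of model code, holding its parameters and metrics."""
    def __init__(self, experiment_name):
        self.run_id = uuid.uuid4().hex
        self.experiment_name = experiment_name
        self.params = {}
        self.metrics = {}


class Experiment:
    """The primary unit grouping related runs, as in MLflow."""
    def __init__(self, name):
        self.name = name
        self.runs = []

    def start_run(self):
        run = Run(self.name)
        self.runs.append(run)
        return run


# Three runs of one experiment, each with a different learning rate
exp = Experiment("learning_rate_sweep")
for lr, score in [(0.01, 0.30), (0.001, 0.21), (0.0001, 0.27)]:
    run = exp.start_run()
    run.params["learning_rate"] = lr
    run.metrics["brier"] = score  # placeholder metric values

# Comparing runs within the experiment, e.g. lowest Brier score wins
best = min(exp.runs, key=lambda r: r.metrics["brier"])
```

Each run gets its own ID and records its parameters and metrics, while the experiment groups the runs so they can be searched and compared, which is the organisation MLflow Tracking provides.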
  12. Artificial General Intelligence, when it exists, will be able to do many tasks better than humans. For now, the machine learning systems and generative AI solutions available on the market are a stopgap to ease the cognitive load on engineers, until machines which think like people exist.

Generative AI is currently dominating headlines, but its backbone, neural networks, has been in use for decades. These machine learning (ML) systems historically acted as cruise control for large systems that would be difficult to constantly maintain by hand. The latest algorithms also proactively respond to errors and threats, alerting teams and recording logs of unusual activity. These systems have developed further and can even predict certain outcomes based on previously observed patterns. This ability to learn and respond is being adapted to all kinds of technology. One persistent application is the use of AI tools in envirotech. Whether it's enabling new technologies with vast data processing capabilities, or improving the efficiency of existing systems by intelligently adjusting inputs, AI at this stage of development is so open-ended it could theoretically be applied to any task.

AI’s undeniable strengths

GenAI isn’t inherently energy intensive. A model or neural network is no more energy inefficient than any other piece of software when it is operating; it is the development of these AI tools that generates the majority of the energy costs. The justification for this energy consumption is that the future benefits of the technology are worth the cost in energy and resources. Some reports suggest many AI applications are ‘solutions in search of a problem’, and many developers are using vast amounts of energy to develop tools that could produce dubious energy savings at best.

One of the biggest benefits of machine learning is its ability to read through large amounts of data and summarize insights for humans to act on.
Reporting is a laborious and frequently manual process; time saved on reporting can be shifted to actioning machine learning insights and actively addressing business-related emissions. Businesses are under increasing pressure to start reporting on Scope 3 emissions, which are the hardest to measure and the biggest contributor to emissions for most modern companies. Capturing and analyzing these disparate data sources would be a smart use of AI, but it would still ultimately require regular human guidance. Monitoring solutions already exist on the market to reduce the demand on engineers, so taking this a step further with AI is an unnecessary and potentially damaging innovation. Replacing the engineer with an AI agent reduces human labor, but it removes a complex interface only to add equally complex programming in front of it. That isn't to say innovation should be discouraged. It's a noble aim, but do not be sold a fairy tale that this will happen without any hiccups. Some engineers will eventually be replaced by this technology, but the industry should approach it carefully. Consider self-driving cars. They're here, and they're doing better than the average human driver. But in some edge cases they can be dangerous. The difference is that this danger is very easy to see, compared with the potential risks of AI.

Today's 'clever' machines are like naive humans

AI agents at the present stage of development are comparable to human employees: they need training and supervision, and they gradually become out of date unless retrained from time to time. Similarly, as has been observed with ChatGPT, models can degrade over time. The mechanics that drive this degradation are not clear, but these systems are delicately calibrated, and that calibration is not a permanent state. The more flexible the model, the more likely it is to misfire and function suboptimally. This can manifest as data or concept drift, an issue where a model invalidates itself over time.
This is one of many inherent issues with attaching probabilistic models to deterministic tools. A concerning area of development is the use of AI for natural language inputs, aiming to make complex systems easier for less technical employees or decision makers and to save on hiring engineers. Natural language outputs are ideal for translating the expert, subject-specific outputs of monitoring systems into a form accessible to those who are less data literate. Despite this strength, even summarizations can be subject to hallucinations, where data is fabricated; this issue persists in LLMs and could create costly errors where AI is used to summarize mission-critical reports. The risk is that we create AI overlays for systems that require deterministic inputs. Lowering the barrier to entry for complex systems is admirable, but these systems require precision. AI agents cannot explain their reasoning, or truly understand a natural language input and work out the real request the way a human can. Moreover, they add another layer of energy-consuming software to a tech stack for minimal gain.

We can't leave it all to AI

The rush to 'AI everything' is producing a tremendous amount of wasted energy. With 14,000 AI startups currently in existence, how many will actually produce tools that benefit humanity? While AI can improve the efficiency of a data center by managing resources, that rarely manifests as a meaningful energy saving: in most cases the freed capacity is channeled into another application, using any saved resource headroom, plus the cost of yet more AI-powered tools. Can AI help achieve sustainability goals? Probably, but most advocates fall down at the 'how' part of that question, in some cases suggesting that AI itself will come up with new technologies. Climate change is now an existential threat with so many variables to account for that it stretches the comprehension of the human mind.
Rather than tackling this problem directly, technophiles defer responsibility to AI in the hope it will provide a solution at some point in the future. The future is unknown, and climate change is happening now. Banking on AI to save us is simply crossing our fingers and hoping for the best, dressed up as neo-futurism. This article was produced as part of TechRadarPro's Expert Insights channel where we feature the best and brightest minds in the technology industry today. The views expressed here are those of the author and are not necessarily those of TechRadarPro or Future plc. If you are interested in contributing, find out more here: https://www.techradar.com/news/submit-your-story-to-techradar-pro View the full article
  13. At Databricks, we’re committed to building the most efficient and performant training tools for large-scale AI models. With the recent release of DBRX... View the full article
  14. As a data scientist or machine learning engineer, you're constantly challenged to build accurate models and to deploy and scale them effectively. The demand for AI-driven solutions is skyrocketing, and mastering the art of scaling machine learning (ML) applications has become more critical than ever. This is where Kubernetes, often abbreviated as K8s, emerges as a game-changer. In this blog, we'll see how you can leverage Kubernetes to scale machine learning applications.

Understanding Kubernetes for ML applications

Kubernetes provides a framework for automating the deployment and management of containerized applications. Its architecture revolves around clusters composed of physical or virtual machine nodes. Within these clusters, Kubernetes manages containers via Pods, the smallest deployable units, each of which can hold one or more containers. One significant advantage of Kubernetes for machine learning applications is its ability to handle dynamic workloads efficiently. With features like auto-scaling, load balancing, and service discovery, Kubernetes ensures that your ML models can scale to meet varying demands.

Understanding TensorFlow

TensorFlow, an open-source framework developed by Google, is used to build and train machine learning models. TensorFlow integrates with Kubernetes, allowing you to deploy and manage TensorFlow models at scale. Deploying TensorFlow on Kubernetes involves containerizing your TensorFlow application and defining Kubernetes resources such as Deployments and Services. By utilizing features like horizontal pod autoscaling, you can automatically scale the number of TensorFlow Serving instances based on incoming request traffic, ensuring optimal performance under varying workloads.

Exploring PyTorch

PyTorch, developed by Facebook, is popular among researchers and developers because of its dynamic computational graph and easy-to-use API.
Like TensorFlow, PyTorch can be deployed on Kubernetes clusters, offering flexibility and ease of use for building and deploying deep learning models. Deploying PyTorch models on Kubernetes involves packaging your PyTorch application into containers and defining Kubernetes resources to manage the deployment. While PyTorch may have a slightly different workflow than TensorFlow, it offers similar scalability benefits when deployed on Kubernetes.

Best practices for scaling ML applications on Kubernetes

You can deploy TensorFlow on Kubernetes using various methods, such as StatefulSets and DaemonSets. Together, TensorFlow and Kubernetes provide a powerful platform for building and deploying large-scale machine learning applications. With Kubernetes handling infrastructure management and TensorFlow offering advanced machine learning capabilities, you can efficiently scale your ML applications to meet the demands of modern businesses. Follow these best practices for scaling ML applications:

Containerization of ML models: Begin by containerizing your ML models using Docker. This involves encapsulating your model, its dependencies, and any necessary preprocessing or post-processing steps into a Docker container, ensuring that your ML model runs consistently across different environments.

Utilize Kubernetes Operators: Kubernetes Operators are custom controllers that extend Kubernetes' functionality to automate complex tasks. Leveraging Operators specific to TensorFlow or PyTorch can streamline the deployment and management of ML workloads on Kubernetes. These Operators handle scaling, monitoring, and automatic update rollouts, reducing operational overhead.

Horizontal Pod Autoscaling (HPA): Implement HPA to adjust the number of replicas based on CPU or memory usage. This allows your ML application to scale up or down in response to changes in workload, ensuring optimal performance and resource utilization.
Resource requests and limits: Manage resource allocation effectively by defining requests and limits for your Kubernetes pods. Resource requests specify the amount of CPU and memory required by each pod, while limits prevent pods from exceeding a certain threshold. Tuning these parameters ensures that your ML application receives sufficient resources without impacting other workloads running on the cluster.

Distributed training and inference: For large-scale ML workloads, consider distributed training and inference techniques to spread computation across multiple nodes. Kubernetes facilitates the orchestration of distributed training jobs by coordinating the execution of tasks across pods, and the distributed APIs in TensorFlow and PyTorch enable effective use of cluster resources.

Model versioning and rollbacks: Implement versioning mechanisms for your ML models to enable easy rollback in case of issues with new releases. Kubernetes' declarative approach to configuration management lets you define desired-state configurations for your ML deployments. By versioning these configurations and leveraging features like Kubernetes' deployment rollback, you can quickly revert to a previous model version if necessary.

Monitoring and logging: Monitoring and logging solutions give you insight into the performance of your ML applications. Tracking metrics such as request latency, error rates, and resource utilization helps you identify bottlenecks and optimize performance.

Security and compliance: Ensure that your ML deployments on Kubernetes adhere to security best practices and compliance requirements. Implement measures such as pod security policies and role-based access control (RBAC) to control access and protect sensitive data, and regularly update dependencies and container images to patch vulnerabilities and mitigate security risks.
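The HPA behavior in the list above follows a simple scaling rule documented for the Kubernetes HPA controller: desired replicas = ceil(current replicas × current metric value / target metric value), clamped to the configured minimum and maximum. Here is a minimal, illustrative Python sketch of that arithmetic (the function name and example numbers are ours, not part of any Kubernetes API):

```python
import math

def hpa_desired_replicas(current_replicas, current_metric, target_metric,
                         min_replicas=1, max_replicas=10):
    """Sketch of the HPA scaling rule:
    desired = ceil(current * currentMetric / targetMetric),
    clamped to [min_replicas, max_replicas]."""
    desired = math.ceil(current_replicas * current_metric / target_metric)
    return max(min_replicas, min(max_replicas, desired))

# Four model-serving pods averaging 90% CPU against a 60% target
# scale out to six replicas:
print(hpa_desired_replicas(4, 90, 60))  # 6
```

A real HPA evaluates this continuously against live metrics and applies tolerances and stabilization windows; the sketch only shows the core formula, which is useful when reasoning about how far a serving deployment will scale under a given load.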
Scaling ML applications on Kubernetes

Deploying machine learning applications on Kubernetes offers a scalable and efficient solution for managing complex workloads in production environments. By following best practices such as containerization, leveraging Kubernetes Operators, implementing autoscaling, and optimizing resource utilization, organizations can harness the full potential of frameworks like TensorFlow or PyTorch to scale their ML applications effectively. Integrating Kubernetes with distributed training techniques enables efficient utilization of cluster resources, while versioning mechanisms and monitoring solutions ensure reliability and performance. By embracing these best practices, organizations can deploy resilient, scalable, and high-performance ML applications that meet the demands of modern business environments. The post Tensorflow or PyTorch + K8s = ML apps at scale appeared first on Amazic. View the full article
  15. RudderStack expands Data Governance and AI/ML features to enable organizations to unlock value from their customer data with confidenceView the full article
  16. Data science’s essence lies in machine learning algorithms. Here are ten algorithms that are a great introduction to machine learning for any beginner!View the full article
  17. Begin your MLOps journey with these comprehensive free resources available on GitHub.View the full article
  18. I had the pleasure of being invited by Canonical's AI/ML Product Manager, Andreea Munteanu, to one of the recent episodes of the Canonical AI/ML podcast. As an automotive and technology enthusiast with a background in software, I was very eager to share my insights into the influence of artificial intelligence (AI) in the automotive industry. I strongly believe that the intersection of AI and cars represents a pivotal point where innovation meets practical implementation, leading to safer, more efficient and more user-friendly cars. In the episode, several key issues in the use of AI in cars and automotive in general came up. It's not just the use of AI that we should be thinking about, but a whole range of safety, ethics, and privacy concerns that can eclipse simple technical challenges. This underscores the importance of considering the broader societal impacts and ethical implications of integrating AI into automotive technologies. This blog explores the key takeaways from the engaging conversation we had, diving into the present and future implications of AI in the world of automobiles. We talked about a lot in the half-hour discussion, but a stand-out moment for me was when we spoke about the impact AI implementation has on costs. I'll get more into why I thought this was the most important part of our discussion in a bit, but for now you can listen to the entire conversation yourself in the podcast episode.

AI is everywhere in automotive

AI is already embedded in every aspect of the automotive sector. This key role is not just limited to autonomous vehicles: AI is integral to manufacturing processes, predictive maintenance, and supply chain management. In almost every part of the automobile – whether it's conceptualising and building cars, driving them, or monitoring their performance throughout their lifecycle – AI is critical.
Safety considerations

The sight of cars driving themselves around makes people very nervous, especially when algorithms are tasked with making intricate split-second decisions that boil down to "don't swerve into oncoming traffic". It's no surprise that safety is the paramount factor in vehicle AI conversations, so it is imperative to address the safety concerns associated with the integration of AI in automotive technology.

"Would you protect the driver and the vehicle occupants versus all the surrounding pedestrians? In some cases, the vehicle will have to choose"*
Bertrand Boisseau

It's a troubling ethical concern: do machines have a right to make decisions about human life, and what are the limits to that decision-making process? AI and autonomous vehicle engineers have their work cut out for them, as these decisions are incredibly complex and happen at the speed of life. When a glitch happens on your desktop, it's not so bad, because you're not travelling at 100 km/h through two-lane traffic with oncoming trucks and pedestrians on every side. While these challenges are significant and lead to a lot of uncertainty about whether it is safe to let Autonomous Driving (AD) vehicles drive around at the maximum speed limit, we should pause for a second to reflect on the extreme and ongoing testing and retesting that they undergo. Driverless cars often make headlines when accidents happen. But it's important to remember that accidents are part of driving, whether with a human or autonomous tech. In reality, driving carries risks, and you're likely to get in a car accident in your lifetime. So, while one accident might spark concerns, it's crucial to see it in the bigger picture of transportation safety. Also, a study comparing human ride-hail drivers and self-driving cars in San Francisco revealed that human drivers are more likely to crash, cause crashes, and injure others than autonomous vehicles.
Human drivers had a crash rate of 50.5 crashes per million miles, while self-driving cars had a lower rate of 23 crashes per million miles. Additionally, the development of robust fail-safe mechanisms and redundant systems can serve as safeguards against potential algorithmic errors or malfunctions. Furthermore, ongoing collaboration between industry stakeholders, regulatory bodies, and research institutions fosters the establishment of comprehensive safety standards and guidelines for the integration of AI in automotive technology. By prioritising safety considerations and adopting a multi-faceted approach encompassing technological innovation, rigorous testing, and regulatory oversight, the automotive industry can effectively address the safety challenges associated with AI integration, paving the way for safer and more reliable autonomous driving systems.

Diverse applications beyond driving

While self-driving cars often take centre stage, AI solves a broader spectrum of problems for the automotive industry: optimising manufacturing processes; predictive maintenance for parts replacement; and enhancing supply chain management efficiency, to name a few. It will also transform the in-car experience with advanced voice recognition and personalised assistance.

"I do believe that having advanced personal assistant will be noticeable for the user. Once you start putting voice recognition in there, it can become, I think, very useful."*
Bertrand Boisseau

Challenges and concerns

On the podcast, we mention that safety is the most obvious concern when it comes to the use of AI in cars, but there are even greater challenges and concerns that developers and automotive industry figures should be thinking about. These include privacy issues, the role of regulation in the use of AI, public trust in AI systems, job displacement fears, and the substantial costs associated with running AI/ML models, both in terms of processing power and energy consumption.
"You want to make sure that whatever is sent to the training models still complies with data privacy concerns: how do you collect data, how do you share vehicle data (which is usually private data), how do you train these models?"*
Bertrand Boisseau

When it comes to training machine learning models for autonomous vehicles, maintaining data privacy is crucial. We need to be mindful of how we collect and share vehicle data, ensuring it aligns with privacy concerns. It's vital to gather data ethically and responsibly, while also validating its quality to prevent biases and inaccuracies. After all, if we feed the models flawed data (from bad drivers, for example), we risk compromising their performance and safety. So, robust data validation processes are essential to ensure the effectiveness and reliability of autonomous vehicle technology.

The evolution of jobs

As AI evolves, so too does the nature of jobs in the automotive industry. Take developers as an example: as AI gains a stronger foothold in automotive development, our roles will transform from manually coding algorithms to simulating and validating AI models.

"I don't agree with the idea of having job displacement in any way, but I do think that there is going to be a shift [in] the market, and there is a clear skill gap or understanding gap."*
Andreea Munteanu

The industry faces a growing need for individuals with expertise in both AI and automotive engineering, bridging the gap between technology and traditional automotive skills. However, it's also crucial to acknowledge the widespread concerns about the potential impact of autonomous vehicles on various job sectors within transportation, including taxi drivers, delivery drivers, truck drivers, valets, and e-hailing service contractors. While autonomous technology is advancing rapidly, broad legislation still typically mandates the presence of a human driver to take over the wheel if necessary, meaning fully human-free cars aren't imminent.
The use of open source

Open source software will play a key role in the automotive sector. It presents indispensable advantages such as unparalleled transparency, enabling thorough inspection and auditability of the codebase.

"Open source software in general and even [especially] in AI/ML would be the wiser choice in most cases."*
Bertrand Boisseau

This transparency not only fosters trust and reliability but also empowers developers to identify and rectify potential issues swiftly, ensuring the highest standards of quality and security. Additionally, going with closed source might mean that Original Equipment Manufacturers (OEMs), or even customers, have to pay extra fees per year just for licences. Imagine having a "smarter" car that becomes useless if a licence lapses or expires. Open source cuts down on these costs since you're not constrained by licences, making software cheaper to create, maintain, and extend. Fewer closed source licences also mean less complexity in the user experience. The adoption of open-source models, tools, and frameworks is likely to grow, especially as companies aim to balance innovation and security.

Data privacy

As AI becomes increasingly integrated into the automotive industry, ensuring robust data privacy measures is paramount. The vast amounts of data generated by connected vehicles, ranging from driver behaviour to location information, raise significant privacy concerns. It's essential to implement strict and clear data protection protocols to safeguard sensitive information from unauthorised access or misuse. Additionally, transparent data collection practices and clear consent mechanisms must be established to ensure that users have control over their data. Failure to address data privacy issues adequately not only risks violating privacy regulations but also erodes consumer trust, hindering widespread adoption of AI-driven automotive technologies.
With the implementation of EU policies such as GDPR, fines can be as high as 10 million euros or up to 2% of the company's entire global turnover of the preceding fiscal year (whichever is higher), further emphasising the importance of robust data privacy measures.

AI can reduce costs in automotive

Cost considerations are another crucial aspect of integrating AI into the automotive industry. While AI technologies hold immense potential to optimise operations, enhance safety, and improve the driving experience, they often come with significant upfront and ongoing costs. The automotive industry is also fiercely focused on cost optimisation: cars that are more expensive are a severe risk for sales, especially in saturated markets. What good is AI, and all the hardware and infrastructure it will need, if it just leads to cars that their usual buyers can no longer afford? Additionally, ensuring compatibility with existing systems and regulatory compliance may incur other expenses. Moreover, there are ongoing costs associated with maintaining and updating AI systems, as well as training personnel to effectively use and manage these technologies. However, despite the initial investment, the potential long-term benefits, such as increased efficiency, reduced accidents, and improved customer satisfaction, can outweigh the costs over time. Therefore, while cost is a critical factor to consider, automotive companies must carefully weigh the upfront investment against the potential long-term returns and strategic advantages offered by AI integration.

Regulations: the wild west won't stay wild forever

Navigating regulatory frameworks generally presents significant challenges. This is already true for the integration of AI into the automotive industry. Regulators are often slow to react to the rapid pace of technological advancements, resulting in a lag between the emergence of new AI-driven automotive technologies and the establishment of appropriate regulations.
This delay can create uncertainty and hinder innovation within the industry as companies navigate ambiguous regulatory landscapes. However, once regulatory wheels are set in motion, they can hit like a truck, with stringent requirements and compliance measures impacting the entire automotive ecosystem. The sudden imposition of regulations can disrupt ongoing projects, necessitate costly adjustments, and delay the deployment of AI technologies. Therefore, automotive companies must remain vigilant and proactive in engaging with regulators, advocating for clear and forward-thinking regulatory frameworks that balance innovation with safety and compliance. By fostering collaboration and dialogue between industry stakeholders and regulators, the automotive industry can navigate regulatory challenges more effectively and ensure the responsible and sustainable integration of AI technologies.

Reconciling AI and sustainability

Sustainability and energy consumption are crucial topics of debate in the automotive industry, especially concerning the integration of AI technologies. Data centres, which are essential for processing the vast amounts of data generated by AI-driven systems, consume substantial amounts of energy. The energy usage of a single data centre can be equivalent to that of a small town, highlighting the significant environmental impact associated with AI infrastructure.

"If you need processing power, you need energy. The big [AI/ML] players have also been saying that we will need to build nuclear power plants to run all the requests."*
Bertrand Boisseau

Similarly, badly optimised individual autonomous cars, with their sophisticated sensor systems and computational requirements, might also consume considerable energy during operation.
As the automotive industry embraces AI, it must address the sustainability implications of increased energy consumption and explore strategies to minimise environmental impact, such as optimising algorithms for efficiency, utilising renewable energy sources, and implementing energy-saving technologies.

Addressing criticisms of automotive automation

Automation in the automotive industry presents significant potential, yet it's essential to address ongoing discussions surrounding the broader concept of automation, particularly in social media and consumer circles. Questions arise, challenging the value of autonomous driving and whether every aspect of a car's operation needs to be automated. While these debates hold merit, they often overlook the broader implications and benefits that automation can bring. Arguments against automation often highlight concerns regarding the potential loss of manual driving skills and the ability to react to unforeseen situations beyond the scope of automated systems. However, it's crucial to consider that historical transitions in automotive technology, such as the shift from manual to automatic transmission or the adoption of adaptive cruise control, have not resulted in increased accidents — quite the opposite, in fact. On top of that, the advancement of automation extends beyond driverless vehicles alone, encompassing a multitude of frameworks, optimisations, and breakthroughs with far-reaching impacts. Drawing parallels to other technological achievements, such as the space program, sheds light on the extensive benefits that arise from ambitious projects despite initial scepticism. Much like criticisms were raised against space exploration, which questioned its necessity or deemed it a misallocation of resources, the collective efforts in the automotive industry toward automation yield a number of innovations and enhancements.
These advancements not only streamline operation and maintenance but also significantly enhance safety for drivers and road users alike. Therefore, while discussions surrounding automation provoke diverse perspectives, embracing its potential fosters progress and innovation within the automotive landscape, and beyond.

The future of AI in automotive

In the future, AI in the automotive industry will certainly be widespread, but its application will concentrate on more specific use cases, such as autonomous driving systems, personal assistants or predictive maintenance. The reasons for this are quite simple: the data processing and warehousing for each automated vehicle become difficult to design and expensive to run, especially when the financial returns on AI products and their long-term financial sustainability are still unproven. There are still strong challenges when it comes to generating revenue from AI investments, particularly in the automotive realm, where return on investment and sustainable business models are still evolving. I found our podcast conversation on AI in the automotive industry incredibly engaging, especially when we delved into the potential impact on safety and driving experiences. It's fascinating to envision how AI will revolutionise not just the way we drive, but also how vehicles are manufactured and maintained. As AI paves the roads of tomorrow, its integration into the automotive industry promises a transformative journey. As a passionate car enthusiast, I think we're headed towards a new era of innovation. AI will be in our cars, homes, jobs, buses, and perhaps even our law-making offices. As it grows and evolves, it'll be even more important to keep track of its progression and adoption – which is why I'm glad that podcasts like ours exist.
If you want to stay ahead of AI/ML and GenAI in the automotive industry – or indeed, any industry – and watch its interplay with open source applications, follow the Ubuntu AI Podcasts by Canonical. *quotations edited for clarity and brevity Listen to the podcast episode Contact Us Further reading Want to learn more about Software Defined Vehicles? Download our guide! Learn about the next-generation automotive operating system: EB corbos Linux – built on Ubuntu How to choose an OS for software development in automotive View the full article
  19. Today, enterprises are focused on enhancing decision-making with the power of AI and machine learning (ML). But the complexity of ML models and data science techniques often leaves behind organizations that have no data scientists or only limited data science resources. And for those organizations with strong data analyst resources, complex ML models and frameworks may seem overwhelming, potentially preventing them from driving faster, higher-quality insights. That's why Snowflake Cortex ML Functions were developed: to abstract away the complexity of ML frameworks and algorithms, automate much of the data science process, and democratize ML for everyone. These functions make activities such as data quality monitoring through anomaly detection, or retail sales forecasting through time series forecasting, faster, easier and more robust — especially for data analysts, data engineers, and citizen data scientists. As a continuation of this suite of functions, Snowflake Cortex ML Classification is now in public preview. It enables data analysts to categorize data into predefined classes or labels, and both binary classification (two classes) and multi-class classification (more than two classes) are supported. All of this can be done with a simple SQL command, for use cases such as lead scoring or churn prediction.

How ML Classification works

Imagine you are a data analyst on a marketing team and want to ensure your team takes quick action on the highest-priority sales leads, optimizing the value from investments in sales and marketing. With ML Classification, you can easily classify certain leads as having a higher likelihood to convert, and thus give them a higher priority for follow-up. And for those with a low likelihood to convert, your marketing team can choose to nurture them or contact them less frequently.
ML Classification can be accomplished in two simple steps: First, train a machine learning model using your CRM data for all leads you've pursued in the past and labeled as either "Converted" or "Not converted." Then, use that model to classify your new set of leads as likely to convert or not. When you generate your Snowflake ML Classification predictions, you'll get not only the predicted "class" (likely to convert vs. not likely), but also the probability of that prediction. That way, you can prioritize outreach and marketing to leads that have the highest probability of converting — even within all leads that are likely to convert. Here's how to use Classification with just a few lines of SQL:

-- Train a model on all historical leads.
CREATE OR REPLACE SNOWFLAKE.ML.CLASSIFICATION my_lead_model(
    INPUT_DATA => SYSTEM$REFERENCE('TABLE', 'historical_leads'),
    TARGET_COLNAME => 'CONVERT'
);

-- Generate predictions.
CREATE TABLE my_predictions AS
SELECT my_lead_model!PREDICT(object_construct(*)) as prediction
FROM new_leads;

The above SQL generates an ML model you can use repeatedly to assess whether new leads are likely to convert. It also generates a table of predictions that includes not only the expected class (likely to convert vs. not likely) but also the probability of each class. If you're interested in pulling out just the predicted class and probability of that class, you can use the following SQL to parse the results:

CREATE TABLE my_predictions AS
SELECT prediction:class as convert_or_not,
       prediction['probability']['"1"'] as convert_probability
FROM (SELECT my_lead_model!PREDICT(object_construct(*)) as prediction FROM new_leads);

To support your assessment of the model ("Is this good enough for my team to use?") and understanding of the model ("What parts of the data I've trained the model on are most useful to the model?"), this classification function produces evaluation metrics and feature importance data.
-- Get evaluation metrics CALL my_lead_model!SHOW_EVALUATION_METRICS(); CALL my_lead_model!SHOW_GLOBAL_EVALUATION_METRICS(); CALL my_lead_model!SHOW_CONFUSION_MATRIX(); -- Get feature importances CALL my_lead_model!SHOW_FEATURE_IMPORTANCE(); ML Classification can be used for other use cases as well, such as churn prediction. For example, customers classified as having a high likelihood to churn can be targeted with special offers, personalized communication or other retention efforts. The two problems we describe above — churn prediction and lead scoring — are binary classification problems, where the value we’re predicting takes on just two values. This classification function can also solve multi-class problems, where the value we’re predicting takes on three or more values. For example, say your marketing team segments customers into three groups (Bronze, Silver, and Gold) based on their purchasing habits, demographic and psychographic characteristics. This classification function could help you bucket new customers and prospects into those three value-based segments with ease. -- Train a model on all existing customers. CREATE OR REPLACE SNOWFLAKE.ML.CLASSIFICATION my_marketing_model( INPUT_DATA => SYSTEM$REFERENCE('TABLE', 'customers'), TARGET_COLNAME => 'value_grouping' ); -- Generate predictions for prospects. CREATE TABLE my_value_predictions AS SELECT my_marketing_model!PREDICT(object_construct(*)) as prediction FROM prospects; -- Parse results. CREATE TABLE my_predictions_parsed AS SELECT prediction:class as value_grouping, prediction['probability'][class] as probability FROM my_value_predictions; How Faraday uses Snowflake Cortex ML Classification Faraday, a customer behavior prediction platform, has been using ML Classification during private preview. For Faraday, having classification models right next to their customers’ Snowflake data accelerates their use of next-generation AI/ML and drives value for their customers. 
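To make the shape of these prediction objects concrete, here is a small pure-Python sketch of how the `class` field and per-class `probability` map could be pulled apart client-side after fetching rows from a predictions table; the sample values are invented for illustration.

```python
# The dict below mirrors the structure of a PREDICT result shown above:
# a predicted "class" plus a probability for each candidate class.

def parse_prediction(prediction: dict) -> tuple:
    """Return (predicted_class, probability_of_that_class)."""
    predicted = prediction["class"]
    return predicted, prediction["probability"][predicted]

# Invented example for the three-segment (Bronze/Silver/Gold) use case.
pred = {
    "class": "Gold",
    "probability": {"Bronze": 0.10, "Silver": 0.25, "Gold": 0.65},
}

label, prob = parse_prediction(pred)
print(label, prob)  # Gold 0.65
```

The same idea underlies the SQL parsing shown earlier: select the predicted class, then index the probability map by that class.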
“Snowflake Cortex ML Functions allow our data engineering team to run complex ML models where our customers’ data lives. This provides us out-of-the-box data science resources and means we don’t have to move our customers’ data to run this analysis,” said Seamus Abshere, Co-Founder and CTO at Faraday. “The public release of Cortex ML Classification is a big unlock; it disrupts a long tradition of separating data engineering and data science.” What’s next? To continue improving the ML Classification experience, we plan to release support for text and timestamps in training and prediction data. We are also continuously increasing the amount of data that can be used in training and prediction, and improving training and prediction speed as well as model accuracy. Not only do we want to put AI and ML in the hands of all data analysts and data engineers, but we want to empower business users, too. That’s why the Snowflake Cortex UI is now in private preview. This clickable user interface helps our Snowflake customers discover Snowflake Cortex functions from Snowsight and guides users through the process of selecting data, setting parameters and scheduling recurring training and prediction for AI and ML models — all through an easy-to-use interface. To learn more about Snowflake Cortex ML functions, visit Snowflake documentation or try out this Quickstart. The post Predict Known Categorical Outcomes with Snowflake Cortex ML Classification, Now in Public Preview appeared first on Snowflake. View the full article
  20. Bandwidth estimation (BWE) and congestion control play an important role in delivering high-quality real-time communication (RTC) across Meta’s family of apps. We’ve adopted a machine learning (ML)-based approach that allows us to solve networking problems holistically across layers such as BWE, network resiliency, and transport. We’re sharing our experiment results from this approach, some of the challenges we encountered during execution, and learnings for new adopters. Our existing bandwidth estimation (BWE) module at Meta is based on WebRTC’s Google Congestion Controller (GCC). We have made several improvements through parameter tuning, but this has resulted in a more complex system, as shown in Figure 1. Figure 1: BWE module’s system diagram for congestion control in RTC. One challenge with the tuned congestion control (CC)/BWE algorithm was that it had multiple parameters and actions that were dependent on network conditions. For example, there was a trade-off between quality and reliability; improving quality for high-bandwidth users often led to reliability regressions for low-bandwidth users, and vice versa, making it challenging to optimize the user experience for different network conditions. Additionally, we noticed some inefficiencies in improving and maintaining the complex BWE module: Due to the absence of realistic network conditions during our experimentation process, fine-tuning the parameters for user clients necessitated several attempts. Even after the rollout, it wasn’t clear if the optimized parameters were still applicable for the targeted network types. This resulted in complex code logic and branches for engineers to maintain. To solve these inefficiencies, we developed a machine learning (ML)-based, network-targeting approach that offers a cleaner alternative to hand-tuned rules. 
This approach also allows us to solve networking problems holistically across layers such as BWE, network resiliency, and transport. Network characterization An ML model-based approach leverages time series data to improve the bandwidth estimation by using offline parameter tuning for characterized network types. For an RTC call to be completed, the endpoints must be connected to each other through network devices. The optimal configs that have been tuned offline are stored on the server and can be updated in real-time. During the call connection setup, these optimal configs are delivered to the client. During the call, media is transferred directly between the endpoints or through a relay server. Depending on the network signals collected during the call, an ML-based approach characterizes the network into different types and applies the optimal configs for the detected type. Figure 2 illustrates an example of an RTC call that’s optimized using the ML-based approach. Figure 2: An example RTC call configuration with optimized parameters delivered from the server and based on the current network type. Model learning and offline parameter tuning At a high level, network characterization consists of two main components, as shown in Figure 3. The first component uses offline ML model learning to categorize the network type (random packet loss versus bursty loss). The second component uses offline simulations to tune parameters optimally for the categorized network type. Figure 3: Offline ML-model learning and parameter tuning. For model learning, we leverage the time series data (network signals and non-personally identifiable information, see Figure 6, below) from production calls and simulations. Compared to the aggregate metrics logged after the call, time series data captures the time-varying nature and dynamics of the network. 
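The network-targeting flow just described (configs tuned offline per network type, stored server-side, and applied for the type detected during the call) can be sketched minimally as follows. The type names and parameter values here are assumptions invented for illustration, not Meta's actual tuning.

```python
# Toy lookup of offline-tuned configs keyed by detected network type.
# All names and values below are made up for illustration.

OPTIMAL_CONFIGS = {
    "random_loss": {"loss_tolerance": 0.10, "rampup_factor": 1.5, "fec_ratio": 0.20},
    "bursty_loss": {"loss_tolerance": 0.02, "rampup_factor": 1.1, "fec_ratio": 0.35},
    "default":     {"loss_tolerance": 0.02, "rampup_factor": 1.2, "fec_ratio": 0.10},
}

def config_for(network_type: str) -> dict:
    """Return the offline-tuned config for the detected type, else a default."""
    return OPTIMAL_CONFIGS.get(network_type, OPTIMAL_CONFIGS["default"])

# At call setup the server delivers the candidate configs; during the call
# the client applies the one matching the characterized network type.
print(config_for("random_loss")["rampup_factor"])  # 1.5
```

The real system would update these server-side tables as offline simulation results improve, without shipping new client code.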
We use FBLearner, our internal AI stack, for the training pipeline and deliver the PyTorch model files on demand to the clients at the start of the call. For offline tuning, we use simulations to run network profiles for the detected types and choose the optimal parameters for the modules based on improvements in technical metrics (such as quality, freezes, and so on). Model architecture From our experience, we’ve found that it’s necessary to combine time series features with non-time series features (i.e., derived metrics from the time window) for highly accurate modeling. To handle both time series and non-time series data, we’ve designed a model architecture that can process input from both sources. The time series data will pass through a long short-term memory (LSTM) layer that will convert time series input into a one-dimensional vector representation, such as 16×1. The non-time series data or dense data will pass through a dense layer (i.e., a fully connected layer). Then the two vectors will be concatenated, to fully represent the network condition in the past, and passed through a fully connected layer again. The final output from the neural network model will be the predicted output of the target/task, as shown in Figure 4. Figure 4: Combined-model architecture with LSTM and Dense Layers Use case: Random packet loss classification Let’s consider the use case of categorizing packet loss as either random or congestion-induced. The former loss is due to the network components, and the latter is due to the limits in queue length (which are delay dependent). Here is the ML task definition: Given the network conditions in the past N seconds (N = 10), and that the network is currently incurring packet loss, the goal is to characterize the packet loss at the current timestamp as RANDOM or not. Figure 5 illustrates how we leverage the architecture to achieve that goal: Figure 5: Model architecture for a random packet loss classification task. 
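Since the post says PyTorch model files are delivered to clients, the combined architecture of Figure 4 might look roughly like the following sketch. The 16-dimensional LSTM output follows the text; the feature counts and remaining layer sizes are assumptions.

```python
import torch
import torch.nn as nn

class CombinedModel(nn.Module):
    """Sketch of the Figure-4 architecture: an LSTM encodes the time series
    into a 16-d vector, dense features pass through a fully connected layer,
    the two vectors are concatenated, and a final head emits the prediction
    (here, a single probability such as P(loss is RANDOM))."""

    def __init__(self, ts_features=8, dense_features=4, lstm_dim=16, dense_dim=8):
        super().__init__()
        self.lstm = nn.LSTM(ts_features, lstm_dim, batch_first=True)
        self.dense = nn.Sequential(nn.Linear(dense_features, dense_dim), nn.ReLU())
        self.head = nn.Sequential(
            nn.Linear(lstm_dim + dense_dim, 16), nn.ReLU(),
            nn.Linear(16, 1),
        )

    def forward(self, ts, dense):
        _, (h_n, _) = self.lstm(ts)   # h_n: (num_layers, batch, lstm_dim)
        ts_vec = h_n[-1]              # last layer's hidden state: (batch, lstm_dim)
        combined = torch.cat([ts_vec, self.dense(dense)], dim=1)
        return torch.sigmoid(self.head(combined))

# 10 seconds of history (one reading per second), batch of 2.
out = CombinedModel()(torch.randn(2, 10, 8), torch.randn(2, 4))
print(out.shape)  # torch.Size([2, 1])
```

The multi-class variants (e.g., bandwidth classification) would swap the single-unit head for one output per class plus a softmax.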
Time series features We leverage the following time series features gathered from logs: Figure 6: Time series features used for model training. BWE optimization When the ML model detects random packet loss, we perform local optimization on the BWE module by: Increasing the tolerance to random packet loss in the loss-based BWE (holding the bitrate). Increasing the ramp-up speed, depending on the link capacity on high bandwidths. Increasing the network resiliency by sending additional forward-error correction packets to recover from packet loss. Network prediction The network characterization problem discussed in the previous sections focuses on classifying network types based on past information using time series data. For such simple classification tasks, hand-tuned rules can achieve this, but with some limitations. The real power of leveraging ML for networking, however, comes from using it for predicting future network conditions. We have applied ML to congestion-prediction problems to optimize the experience of low-bandwidth users. Congestion prediction From our analysis of production data, we found that low-bandwidth users often incur congestion due to the behavior of the GCC module. By predicting this congestion, we can improve reliability for such users. Towards this, we addressed the following problem statement using round-trip time (RTT) and packet loss: Given the historical time-series data from production/simulation (“N” seconds), the goal is to predict packet loss due to congestion or the congestion itself in the next “N” seconds; that is, a spike in RTT followed by a packet loss or a further growth in RTT. Figure 7 shows an example from a simulation where the bandwidth alternates between 500 Kbps and 100 Kbps every 30 seconds. As we lower the bandwidth, the network incurs congestion and the ML model predictions fire the green spikes even before the delay spikes and packet loss occur. 
This early prediction of congestion enables faster reactions and thus improves the user experience by preventing video freezes and connection drops. Figure 7: Simulated network scenario with alternating bandwidth for congestion prediction Generating training samples The main challenge in modeling is generating training samples for a variety of congestion situations. With simulations, it’s harder to capture different types of congestion that real user clients would encounter in production networks. As a result, we used actual production logs for labeling congestion samples, following the RTT-spikes criteria in the past and future windows according to the following assumptions: Absent past RTT spikes, packet losses in the past and future are independent. Absent past RTT spikes, we cannot predict future RTT spikes or fractional losses (i.e., flosses). We split the time window into past (4 seconds) and future (4 seconds) for labeling. Figure 8: Labeling criteria for congestion prediction Model performance Unlike network characterization, where ground truth is unavailable, we can obtain ground truth by examining the future time window after it has passed and then comparing it with the prediction made four seconds earlier. With this logging information gathered from real production clients, we compared the performance in offline training to online data from user clients: Figure 9: Offline versus online model performance comparison. 
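The labeling scheme above (4-second past and future windows, with an RTT spike in the future window marking a congestion sample) can be approximated with a short sketch. The post does not give the exact spike criterion, so the baseline-multiple threshold and one-reading-per-second resolution here are assumptions.

```python
# Simplified labeling of congestion samples from an RTT time series.
# One RTT reading per second; window=4 gives the 4 s past/future split.

def label_congestion(rtt_ms, t, window=4, spike_factor=2.0):
    """Label the sample at index t: 1 if the future window contains an RTT
    spike relative to the past-window baseline, 0 otherwise, None if there
    is not enough history or future to label."""
    past = rtt_ms[t - window:t] if t >= window else []
    future = rtt_ms[t:t + window]
    if len(past) < window or len(future) < window:
        return None
    baseline = sum(past) / window
    return 1 if max(future) > spike_factor * baseline else 0

rtt = [50, 52, 51, 53, 55, 140, 200, 180]  # RTT spikes in the future window
print(label_congestion(rtt, 4))  # 1
```

Production labeling additionally filters out samples with past RTT spikes, per the independence assumptions listed above.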
  Experiment results Here are some highlights from our deployment of various ML models to improve bandwidth estimation:

Reliability wins for congestion prediction:
- connection_drop_rate: -0.326371 +/- 0.216084
- last_minute_quality_regression_v1: -0.421602 +/- 0.206063
- last_minute_quality_regression_v2: -0.371398 +/- 0.196064
- bad_experience_percentage: -0.230152 +/- 0.148308
- transport_not_ready_pct: -0.437294 +/- 0.400812
- peer_video_freeze_percentage: -0.749419 +/- 0.180661
- peer_video_freeze_percentage_above_500ms: -0.438967 +/- 0.212394

Quality and user engagement wins for random packet loss characterization in high bandwidth:
- peer_video_freeze_percentage: -0.379246 +/- 0.124718
- peer_video_freeze_percentage_above_500ms: -0.541780 +/- 0.141212
- peer_neteq_plc_cng_perc: -0.242295 +/- 0.137200
- total_talk_time: 0.154204 +/- 0.148788

Reliability and quality wins for cellular low bandwidth classification:
- connection_drop_rate: -0.195908 +/- 0.127956
- last_minute_quality_regression_v1: -0.198618 +/- 0.124958
- last_minute_quality_regression_v2: -0.188115 +/- 0.138033
- peer_neteq_plc_cng_perc: -0.359957 +/- 0.191557
- peer_video_freeze_percentage: -0.653212 +/- 0.142822

Reliability and quality wins for cellular high bandwidth classification:
- avg_sender_video_encode_fps: 0.152003 +/- 0.046807
- avg_sender_video_qp: -0.228167 +/- 0.041793
- avg_video_quality_score: 0.296694 +/- 0.043079
- avg_video_sent_bitrate: 0.430266 +/- 0.092045

Future plans for applying ML to RTC From our project execution and experimentation on production clients, we noticed that an ML-based approach is more efficient in targeting, end-to-end monitoring, and updating than traditional hand-tuned rules for networking. However, the efficiency of ML solutions largely depends on data quality and labeling (using simulations or production logs). By applying ML-based solutions to network prediction problems, congestion in particular, we fully leveraged the power of ML. 
  In the future, we will be consolidating all the network characterization models into a single model using the multi-task approach to fix the inefficiency due to redundancy in model download, inference, and so on. We will be building a shared representation model for the time series to solve different tasks (e.g., bandwidth classification, packet loss classification, etc.) in network characterization. We will focus on building realistic production network scenarios for model training and validation. This will enable us to use ML to identify optimal network actions given the network conditions. We will continue refining our learning-based methods to enhance network performance by considering existing network signals. The post Optimizing RTC bandwidth estimation with machine learning appeared first on Engineering at Meta. View the full article
  21. GenAI and LLMs have been democratized, and tasks that were once purely the domain of AI/ML developers must now be reasoned about by regular application developers and built into everyday products and business logic. This is leading to new products and services across banking, security, healthcare, and more with generative text, images, and videos. Moreover, GenAI’s potential economic impact is substantial, with estimates that it could add trillions of dollars annually to the global economy. Docker offers an ideal way for developers to build, test, run, and deploy the NVIDIA AI Enterprise software platform — an end-to-end, cloud-native software platform that brings generative AI within reach for every business. The platform is available to use in Docker containers, deployable as microservices. This enables teams to focus on cutting-edge AI applications where performance isn’t just a goal — it’s a necessity. This week, at the NVIDIA GTC global AI conference, the latest release of NVIDIA AI Enterprise was announced, providing businesses with the tools and frameworks necessary to build and deploy custom generative AI models with NVIDIA AI foundation models, the NVIDIA NeMo framework, and the just-announced NVIDIA NIM inference microservices, which deliver enhanced performance and efficient runtime. This blog post summarizes some of the Docker resources available to customers today. Docker Hub Docker Hub is the world’s largest repository for container images with an extensive collection of AI/ML development-focused container images, including leading frameworks and tools such as PyTorch, TensorFlow, Langchain, Hugging Face, and Ollama. With more than 100 million pulls of AI/ML-related images, Docker Hub’s significance to the developer community is self-evident. It not only simplifies the development of AI/ML applications but also democratizes innovation, making AI technologies accessible to developers across the globe. 
NVIDIA’s Docker Hub library offers a suite of container images that harness the power of accelerated computing, supplementing NVIDIA’s API catalog. Docker Hub’s vast audience — which includes approximately 27 million monthly active IPs, showcasing an impressive 47% year-over-year growth — can use these container images to enhance AI performance. Docker Hub’s extensive reach, underscored by an astounding 26 billion monthly image pulls, suggests immense potential for continued growth and innovation. Docker Desktop with NVIDIA AI Workbench Docker Desktop on Windows and Mac helps deliver a smooth experience to NVIDIA AI Workbench developers on local and remote machines. NVIDIA AI Workbench is an easy-to-use toolkit that allows developers to create, test, and customize AI and machine learning models on their PC or workstation and scale them to the data center or public cloud. It simplifies interactive development workflows while automating technical tasks that halt beginners and derail experts. AI Workbench makes workstation setup and configuration fast and easy. Example projects are also included to help developers get started even faster with their own data and use cases. Docker engineering teams are collaborating with NVIDIA to improve the user experience with NVIDIA GPU-accelerated platforms through recent improvements to the AI Workbench installation on WSL2. Check out how NVIDIA AI Workbench can be used locally to tune a generative image model to produce more accurate prompted results. In a near-term update, AI Workbench will use the Container Device Interface (CDI) to govern local and remote GPU-enabled environments. CDI is a CNCF-sponsored project led by NVIDIA and Intel, which exposes NVIDIA GPUs inside of containers to support complex device configurations and CUDA compatibility checks. This simplifies how research, simulation, GenAI, and ML applications utilize local and cloud-native GPU resources. 
With Docker Desktop 4.29 (which includes Moby 25), developers can configure CDI support in the daemon and then easily make all NVIDIA GPUs available in a running container by using the --device option via support for CDI devices. docker run --device nvidia.com/gpu=all <image> <command> LLM-powered apps with Docker GenAI Stack The Docker GenAI Stack lets teams easily integrate NVIDIA accelerated computing into their AI workflows. This stack, designed for seamless component integration, can be set up on a developer’s laptop using Docker Desktop for Windows. It helps deliver the power of NVIDIA GPUs and NVIDIA NIM to accelerate LLM inference, providing tangible improvements in application performance. Developers can experiment with and modify five pre-packaged applications to leverage the stack’s capabilities. Accelerate AI/ML development with Docker Desktop Docker Desktop facilitates an accelerated machine learning development environment on a developer’s laptop. By tapping NVIDIA GPU support for containers, developers can leverage tools distributed via Docker Hub, such as PyTorch and TensorFlow, to see significant speed improvements in their projects, underscoring the efficiency gains possible with NVIDIA technology on Docker. Securing the software supply chain Docker Hub’s registry and tools, including capabilities for build, digital signing, Software Bill of Materials (SBOM), and vulnerability assessment via Docker Scout, allow customers to ensure the quality and integrity of container images from end to end. This comprehensive approach not only accelerates the development of machine learning applications but also secures the GenAI and LLM software supply chain, providing developers with the confidence that their applications are built on a secure and efficient foundation. “With exploding interest in AI from a huge range of developers, we are excited to work with NVIDIA to build tooling that helps accelerate building AI applications. 
The ecosystem around Docker and NVIDIA has been building strong foundations for many years and this is enabling a new community of enterprise AI/ML developers to explore and build GPU accelerated applications.” Justin Cormack, Chief Technology Officer, Docker “Enterprise applications like NVIDIA AI Workbench can benefit enormously from the streamlining that Docker Desktop provides on local systems. Our work with the Docker team will help improve the AI Workbench user experience for managing GPUs on Windows.” Tyler Whitehouse, Principal Product Manager, NVIDIA Learn more By leveraging Docker Desktop and Docker Hub with NVIDIA technologies, developers are equipped to harness the revolutionary power of AI, grow their skills, and seize opportunities to deliver innovative applications that push the boundaries of what’s possible. Check out NVIDIA’s Docker Hub library and NVIDIA AI Enterprise to get started with your own AI solutions. View the full article
  22. Learn how to automate machine learning training and evaluation using scikit-learn pipelines, GitHub Actions, and CML.View the full article
  23. Ready to become a SAS Certified Specialist in Statistics for Machine Learning? Here’s everything you need to know about the recently released certification from SAS. View the full article
  24. Artificial intelligence (AI) and machine learning (ML) can play a transformative role across the software development lifecycle, with a special focus on enhancing continuous testing (CT). CT is especially critical in the context of continuous integration/continuous deployment (CI/CD) pipelines, where the need for speed and efficiency must be balanced with the demands for quality and […] View the full article
  25. Learn how to enhance the quality of your machine learning code using Scikit-learn Pipeline and ColumnTransformer.View the full article