Showing results for tags 'google bigquery ml'.

Found 2 results

  1. The preprocessing and transformation of raw data into features is a pivotal yet time-intensive phase of the machine learning (ML) process. This holds particularly true when data scientists or data engineers must move data across diverse platforms to carry out MLOps. In February 2023 we announced the preview of two new capabilities for BigQuery ML: more data preprocessing functions and the ability to export the BigQuery ML TRANSFORM clause as part of the model artifact. Today, these features are going GA with even more capabilities for optimizing your ML workflow. In this blog post, we describe how we streamline feature engineering by keeping it close to ML training and serving, with the following new functionality:

- More manual preprocessing functions that give users the flexibility they need to prepare their data as features for ML, while also enabling simplified serving by embedding the preprocessing steps directly in the model.
- More seamless integration with Vertex AI, which amplifies this embedded preprocessing by making it fast to host BigQuery ML models on Vertex AI Prediction Endpoints for serverless online predictions that scale to meet your application's demand.
- The ability to export the BigQuery ML TRANSFORM clause as part of the model artifact, which makes BigQuery ML models portable so they can be used in other workflows that need the same preprocessing steps.

Feature Engineering

The manual preprocessing functions are big timesavers for setting up your data columns as features for ML. The list of available preprocessing functions now includes:

- ML.MAX_ABS_SCALER: Scale a numerical column to the range [-1, 1] without centering, by dividing by the maximum absolute value.
- ML.ROBUST_SCALER: Scale a numerical column by centering with the median (optional) and dividing by the quantile range of choice ([25, 75] by default).
- ML.NORMALIZER: Turn a numerical array into a unit-norm array for any p-norm: 0, 1, >1, +inf. The default is 2, resulting in a normalized array where the sum of squares is 1.
- ML.IMPUTER: Replace missing values in a numerical or categorical input with the mean, median, or mode (most frequent).
- ML.ONE_HOT_ENCODER: One-hot encode a categorical input, optionally performing dummy encoding by dropping the most frequent value. It is also possible to limit the size of the encoding by specifying k for the k most frequent categories and/or a lower threshold for the frequency of categories.
- ML.MULTI_HOT_ENCODER: Encode an array of strings with integer values representing categories. It is possible to limit the size of the encoding by specifying k for the k most frequent categories and/or a lower threshold for the frequency of categories.
- ML.LABEL_ENCODER: Encode a categorical input to integer values [0, n categories], where 0 represents NULL and excluded categories. You can exclude categories by specifying k for the k most frequent categories and/or a lower threshold for the frequency of categories.

Step-by-step examples of all preprocessing functions

This first tutorial shows how to use each of the preprocessing functions. The interactive notebook applies each function to a data sample in multiple ways to highlight its operation and the options available for adapting it to any feature engineering task. For example, the task of imputing missing values has different options depending on the data type of the column (string or numeric).
The example below (from the interactive notebook) shows each possible way to impute missing values for each data type:

```sql
SELECT
  num_column,
  ML.IMPUTER(num_column, 'mean') OVER() AS num_imputed_mean,
  ML.IMPUTER(num_column, 'median') OVER() AS num_imputed_median,
  ML.IMPUTER(num_column, 'most_frequent') OVER() AS num_imputed_mode,
  string_column,
  ML.IMPUTER(string_column, 'most_frequent') OVER() AS string_imputed_mode,
FROM
  UNNEST([1, 1, 2, 3, 4, 5, NULL]) AS num_column WITH OFFSET pos1,
  UNNEST(['a', 'a', 'b', 'c', 'd', 'e', NULL]) AS string_column WITH OFFSET pos2
WHERE pos1 = pos2
ORDER BY num_column
```

The resulting table shows the inputs with missing values alongside the outputs with imputed values for each strategy. Visit the notebook linked above for this and more examples of all the preprocessing functions.
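The categorical encoders follow the same pattern. Here is a minimal sketch in the same style as the imputer example above; the inline sample data is made up for illustration, and the exact output shapes are worth verifying against the documentation (ML.ONE_HOT_ENCODER returns an array of index/value structs per row, while ML.LABEL_ENCODER returns a single integer):

```sql
-- Sketch: encode a small inline sample of categories.
SELECT
  string_column,
  -- One-hot encoding: an array of (index, value) structs per row.
  ML.ONE_HOT_ENCODER(string_column) OVER() AS onehot_encoded,
  -- Label encoding: an integer in [0, n categories]; 0 represents NULL.
  ML.LABEL_ENCODER(string_column) OVER() AS label_encoded
FROM
  UNNEST(['a', 'a', 'b', 'c', NULL]) AS string_column
ORDER BY string_column
```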
Training with the TRANSFORM clause

Now, when exporting models with a TRANSFORM clause, even more SQL functions are supported in the accompanying exported preprocessing model. Supported SQL functions include:

- Manual preprocessing functions
- Operators
- Conditional expressions
- Mathematical functions
- Conversion functions
- String functions
- Date, Datetime, Time, and Timestamp functions

To host a BigQuery ML trained model on Vertex AI, you can bypass the export steps and automatically register the model to the Vertex AI Model Registry during training. Then, when you deploy the model to a Vertex AI Prediction Endpoint for online prediction, the TRANSFORM clause's preprocessing is included in the endpoint for a seamless training-serving workflow. This means there is no need to apply preprocessing functions again before getting predictions from the online endpoint! Serving models within BigQuery ML is also as simple as always using the ML.PREDICT function.

Step-by-step guide to incorporating manual preprocessing inside the model with the inline TRANSFORM clause

In this tutorial, we use the bread recipe competition dataset to predict judges' ratings using linear regression and boosted tree models.

Objective: To demonstrate how to preprocess data using the new functions, register the model with Vertex AI Model Registry, and deploy the model for online prediction with Vertex AI Prediction endpoints.

Dataset: Each row represents a bread recipe, with columns for each ingredient (flour, salt, water, yeast) and procedure (mixing time, mixing speed, cooking temperature, resting time). There are also columns with the judges' ratings of the final product from each recipe.

Overview of the tutorial: Step 1 shows how to use the TRANSFORM statement while training the model. Step 2 demonstrates how to deploy the model for online prediction using Vertex AI Prediction Endpoints. A final example shows how to export the model and access the transform model directly. For the best learning experience, follow this blog post alongside the tutorial notebook.

Step 1: Create models using an inline TRANSFORM clause

Using the BigQuery ML manual preprocessing functions highlighted above, together with additional BigQuery functions, to prepare input columns as features within a TRANSFORM clause is very similar to writing SQL. The added benefit of having the preprocessing logic embedded within the trained model is that the preprocessing is incorporated into the prediction routine both within BigQuery with ML.PREDICT and outside of BigQuery, such as in the Vertex AI Model Registry for deployment to Vertex AI Prediction Endpoints.

The query below creates a model to predict judge A's rating for bread recipes. The TRANSFORM statement uses multiple numerical preprocessing functions to scale columns into features. The values needed for scaling are stored with the model and used at prediction time to scale prediction instances as well. The contestant_id column is not particularly helpful for prediction, as new seasons will have new contestants, but the order of contestants could be helpful if, perhaps, contestants are getting generally better at bread baking; the ML.LABEL_ENCODER function transforms contestants into ordered labels. Using columns like season and round as features might not help predict future values either. A more general indicator of time is the year and week within the year, so the airdate (the date on which the episode aired) is turned into features with the EXTRACT function directly in the TRANSFORM clause as well.

```sql
CREATE OR REPLACE MODEL `statmike-mlops-349915.feature_engineering.bqml_feature_engineering_transform`
TRANSFORM (
  JUDGE_A,
  ML.LABEL_ENCODER(contestant_id) OVER() as contestant,
  EXTRACT(YEAR FROM airdate) as year,
  EXTRACT(ISOWEEK FROM airdate) as week,

  ML.MIN_MAX_SCALER(flourAmt) OVER() as scale_flourAmt,
  ML.ROBUST_SCALER(saltAmt) OVER() as scale_saltAmt,
  ML.MAX_ABS_SCALER(yeastAmt) OVER() as scale_yeastAmt,
  ML.STANDARD_SCALER(water1Amt) OVER() as scale_water1Amt,
  ML.STANDARD_SCALER(water2Amt) OVER() as scale_water2Amt,

  ML.STANDARD_SCALER(waterTemp) OVER() as scale_waterTemp,
  ML.ROBUST_SCALER(bakeTemp) OVER() as scale_bakeTemp,
  ML.MIN_MAX_SCALER(ambTemp) OVER() as scale_ambTemp,
  ML.MAX_ABS_SCALER(ambHumidity) OVER() as scale_ambHumidity,

  ML.ROBUST_SCALER(mix1Time) OVER() as scale_mix1Time,
  ML.ROBUST_SCALER(mix2Time) OVER() as scale_mix2Time,
  ML.ROBUST_SCALER(mix1Speed) OVER() as scale_mix1Speed,
  ML.ROBUST_SCALER(mix2Speed) OVER() as scale_mix2Speed,
  ML.STANDARD_SCALER(proveTime) OVER() as scale_proveTime,
  ML.MAX_ABS_SCALER(restTime) OVER() as scale_restTime,
  ML.MAX_ABS_SCALER(bakeTime) OVER() as scale_bakeTime
)
OPTIONS (
  model_type = 'BOOSTED_TREE_REGRESSOR',
  booster_type = 'GBTREE',
  num_parallel_tree = 25,
  early_stop = TRUE,
  min_rel_progress = 0.01,
  tree_method = 'HIST',
  subsample = 0.85,
  input_label_cols = ['JUDGE_A'],
  enable_global_explain = TRUE,
  data_split_method = 'AUTO_SPLIT',
  l1_reg = 10,
  l2_reg = 10,
  MODEL_REGISTRY = 'VERTEX_AI',
  VERTEX_AI_MODEL_ID = 'bqml_bqml_feature_engineering_transform',
  VERTEX_AI_MODEL_VERSION_ALIASES = ['run-20230705114026']
) AS
SELECT *
FROM `statmike-mlops-349915.feature_engineering.bread`
```

Note that the model training used options to directly register the model in the Vertex AI Model Registry. This bypasses the need to export and subsequently register the model artifacts in the Vertex AI Model Registry, while also keeping the two locations connected so that if the model is removed from BigQuery it is also removed from Vertex AI. It also enables a very simple path to online predictions, as shown in Step 2 below.

In the interactive notebook, the resulting model is also used with many other functions that enable an end-to-end MLOps journey directly in BigQuery (a short sketch follows this list):

- ML.FEATURE_INFO to review summary information for each input feature used to train the model
- ML.TRAINING_INFO to see details from each training iteration of the model
- ML.EVALUATE to review model metrics
- ML.FEATURE_IMPORTANCE to review the feature importance scores from the construction of the boosted tree
- ML.GLOBAL_EXPLAIN to get aggregated feature attributions across the evaluation data
- ML.EXPLAIN_PREDICT to get predictions and feature attributions for each instance of the input
- ML.PREDICT to get predictions for input instances
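As a minimal sketch of how a couple of these calls look against the model created above (the notebook shows fuller versions):

```sql
-- Review evaluation metrics computed on the automatic data split.
SELECT *
FROM ML.EVALUATE(MODEL `statmike-mlops-349915.feature_engineering.bqml_feature_engineering_transform`);

-- Predict from raw rows: the embedded TRANSFORM clause applies the same
-- preprocessing that was used during training.
SELECT *
FROM ML.PREDICT(
  MODEL `statmike-mlops-349915.feature_engineering.bqml_feature_engineering_transform`,
  (SELECT * FROM `statmike-mlops-349915.feature_engineering.bread`));
```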
Step 2: Serve online predictions with Vertex AI Prediction Endpoints

By registering the resulting model in the Vertex AI Model Registry during Step 1, the path to online predictions becomes very simple. Models in the Vertex AI Model Registry can be deployed to Vertex AI Prediction Endpoints, where they serve predictions from the Vertex AI API using any of the client libraries (Python, Java, Node.js), gcloud ai, REST, or gRPC. The process can be done directly from the Vertex AI console and is demonstrated below with the popular Python client for Vertex AI, google-cloud-aiplatform.

Setting up the Python environment to work with the Vertex AI client requires just an import and setting the project and region for resources:

```python
from google.cloud import aiplatform

aiplatform.init(project = PROJECT_ID, location = REGION)
```

Connecting to the model in the Vertex AI Model Registry is done using the model name that was specified in the CREATE MODEL statement with the option VERTEX_AI_MODEL_ID:

```python
vertex_model = aiplatform.Model(model_name = 'bqml_bqml_feature_engineering_transform')
```

Creating a Vertex AI Prediction Endpoint requires just a display_name:

```python
endpoint = aiplatform.Endpoint.create(display_name = "bqml_feature_engineering")
```

Deploying the model to the endpoint requires specifying the compute environment with:

- traffic_percentage: the percentage of requests routed to this model
- machine_type: the compute specification
- min_replica_count and max_replica_count: the minimum and maximum number of machines used when scaling to meet the demand for predictions

```python
endpoint.deploy(
    model = vertex_model,
    deployed_model_display_name = vertex_model.display_name,
    traffic_percentage = 100,
    machine_type = 'n1-standard-2',
    min_replica_count = 1,
    max_replica_count = 1
)
```
Request a prediction by sending an input instance with key:value pairs for each feature. Note that these are the raw features; there is no need to preprocess them into the model's features like contestant, year, week, and the scaled columns:

```python
endpoint.predict(instances = [{
    'contestant_id': 'c_1',
    'airdate': '2003-05-26',
    'flourAmt': 484.28986452656386,
    'saltAmt': 9,
    'yeastAmt': 10,
    'mix1Time': 5,
    'mix1Speed': 3,
    'mix2Time': 5,
    'mix2Speed': 5,
    'water1Amt': 311.66349401065276,
    'water2Amt': 98.61283742264706,
    'waterTemp': 46,
    'proveTime': 105.67304373851782,
    'restTime': 44,
    'bakeTime': 28,
    'bakeTemp': 435.39349280229476,
    'ambTemp': 51.27996072412186,
    'ambHumidity': 61.44333141984406
}])
```

The response returns a predicted score from judge A of 73.5267944335937, which is also confirmed in the tutorial notebook using the model in BigQuery with ML.PREDICT. Not the best bread, but a great prediction, since the actual rating is 75.0!

(Optional) Exporting models with an inline TRANSFORM clause

While there is no longer a need to export the model for use in Vertex AI, thanks to the direct registration options available during model creation, it can still be very helpful to make BigQuery ML models portable for use elsewhere or in more complex workflows, such as model co-hosting with deployment resource pools or multi-model workflows using NVIDIA Triton on Vertex AI Prediction. When exporting BigQuery ML models to GCS, the TRANSFORM clause is also exported as a separate model in a subfolder named /transform. This means even the transform model is portable and can be used in other workflows where the same preprocessing steps are needed.

If you used BigQuery time or date functions (Date functions, Datetime functions, Time functions, and Timestamp functions), you might wonder how the exported TensorFlow model that represents the TRANSFORM clause handles those data types. We implemented a TensorFlow custom op that can easily be added to your custom serving environment via the bigquery-ml-utils Python package. To initiate the export to GCS, use the BigQuery EXPORT MODEL statement:

```sql
EXPORT MODEL `statmike-mlops-349915.feature_engineering.bqml_feature_engineering_transform`
  OPTIONS (URI = 'gs://statmike-mlops-349915-us-central1-bqml-exports/bqml/model')
```

The tutorial notebook shows the folder structure and contents, and how to use the TensorFlow SavedModel CLI to review the transform model's input and output signatures.

Conclusion

BigQuery ML preprocessing functions give users the flexibility they need to prepare their data as features for ML, while also enabling simplified serving by embedding the preprocessing steps directly in the model. Seamless integration with Vertex AI amplifies this embedded preprocessing by making it fast to host BigQuery ML models on Vertex AI Prediction Endpoints for serverless online predictions that scale to meet your application's demand. The result is that building models is easy, and the models stay useful through simple serving options. In the future you can expect to see even more ways to simplify ML workflows with BigQuery ML while seamlessly integrating with Vertex AI.
  2. Preprocessing and transforming raw data into features is a critical but time-consuming step in the ML process. This is especially true when a data scientist or data engineer has to move data across different platforms to do MLOps. In this blog post, we describe how we streamline this process with two feature engineering capabilities in BigQuery ML.

Our previous blog outlines the data-to-AI journey with BigQuery ML, highlighting two powerful features that simplify MLOps: data preprocessing functions for feature engineering and the ability to export the BigQuery ML TRANSFORM statement as part of the model artifact. In this blog post, we share how to use these features to create a seamless experience from BigQuery ML to Vertex AI.

Data Preprocessing Functions

Preprocessing and transforming raw data into features is a critical but time-consuming step when operationalizing ML. We recently announced the public preview of advanced feature engineering functions in BigQuery ML. These functions help you impute, normalize, or encode data. Doing this inside the database, in BigQuery, makes the entire preprocessing workflow easier, faster, and more secure. Here is a list of the new functions we are introducing in this release; the full list of preprocessing functions can be found here.

- ML.MAX_ABS_SCALER: Scale a numerical column to the range [-1, 1] without centering, by dividing by the maximum absolute value.
- ML.ROBUST_SCALER: Scale a numerical column by centering with the median (optional) and dividing by the quantile range of choice ([25, 75] by default).
- ML.NORMALIZER: Turn an input numerical array into a unit-norm array for any p-norm: 0, 1, >1, +inf. The default is 2, resulting in a normalized array where the sum of squares is 1.
- ML.IMPUTER: Replace missing values in a numerical or categorical input with the mean, median, or mode (most frequent).
- ML.ONE_HOT_ENCODER: One-hot encode a categorical input, optionally performing dummy encoding by dropping the most frequent value. It is also possible to limit the size of the encoding by specifying k for the k most frequent categories and/or a lower threshold for the frequency of categories.
- ML.LABEL_ENCODER: Encode a categorical input to integer values [0, n categories], where 0 represents NULL and excluded categories. You can exclude categories by specifying k for the k most frequent categories and/or a lower threshold for the frequency of categories.

Model Export with TRANSFORM Statement

You can now export BigQuery ML models that include a feature TRANSFORM statement. The ability to include TRANSFORM statements makes models more portable when exporting them for online prediction. This capability also works when BigQuery ML models are registered with Vertex AI Model Registry and deployed to Vertex AI Prediction endpoints. More details about exporting models can be found in BigQuery ML Exporting models. These new features are available through the Google Cloud Console, the BigQuery API, and the client libraries.

Step-by-step guide to use the two features

In this tutorial, we use the bread recipe competition dataset to predict judges' ratings using linear regression and boosted tree models.

Objective: To demonstrate how to preprocess data using the new functions, register the model with Vertex AI Model Registry, and deploy the model for online prediction with Vertex AI Prediction endpoints.
Dataset: Each row represents a bread recipe, with columns for each ingredient (flour, salt, water, yeast) and procedure (mixing time, mixing speed, cooking temperature, resting time). There are also columns with the judges' ratings of the final product from each recipe.

Overview of the tutorial: Steps 1 and 2 show how to use the TRANSFORM statement. Steps 3 and 4 demonstrate how to manually export and register the models. Steps 5 through 7 show how to deploy a model to a Vertex AI Prediction endpoint. For the best learning experience, follow this blog post alongside the tutorial notebook.

Step 1: Transform BigQuery columns into ML features with SQL

Before training an ML model, exploring the data within columns is essential to identify the data type, distribution, scale, missing patterns, and extreme values. BigQuery ML enables this exploratory analysis with SQL. With the new preprocessing functions, it is now even easier to transform BigQuery columns into ML features with SQL while iterating to find the optimal transformation. For example, when using the ML.MAX_ABS_SCALER function for an input column, each value is divided by the maximum absolute value (10 in the example):

```sql
SELECT
  input_column,
  ML.MAX_ABS_SCALER(input_column) OVER() AS scale_column
FROM
  UNNEST([0, -1, 2, -3, 4, -5, 6, -7, 8, -9, 10]) as input_column
ORDER BY input_column
```

Once the input columns for an ML model are identified and the feature transformations are chosen, it is enticing to apply the transformations and save the output as a view, as in the sketch below. But this has an impact on predictions later on, because these same transformations will need to be applied before requesting predictions.
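A sketch of this view-based anti-pattern, for illustration only (the view name is hypothetical; the table and columns are the tutorial's bread dataset):

```sql
-- Hypothetical view that bakes the transformations into a separate object.
CREATE OR REPLACE VIEW `statmike-mlops-349915.feature_engineering.bread_features` AS
SELECT
  JUDGE_A,
  ML.MAX_ABS_SCALER(yeastAmt) OVER() AS scale_yeastAmt,
  ML.ROBUST_SCALER(saltAmt) OVER() AS scale_saltAmt
FROM `statmike-mlops-349915.feature_engineering.bread`
-- Every prediction request must now replicate this logic before reaching
-- the model, and the OVER() statistics would be recomputed over only the
-- rows being scored rather than the training data: training-serving skew.
```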
Step 2 shows how to prevent this separation of processing and model training.

Step 2: Iterate through multiple models with inline TRANSFORM functions

Building on the preprocessing explorations in Step 1, the chosen transformations are applied inline with model training using the TRANSFORM statement. This interlocks the model iteration with the preprocessing explorations while making any candidate model ready for serving with BigQuery or beyond. This means you can immediately try multiple model types without any delayed impact of feature transformations on predictions. In this step, two models, a linear regression and a boosted tree, are trained side by side with identical TRANSFORM statements.

Training with linear regression (Model a):

```sql
CREATE OR REPLACE MODEL `statmike-mlops-349915.feature_engineering.03_feature_engineering_2a`
TRANSFORM (
  JUDGE_A,

  ML.MIN_MAX_SCALER(flourAmt) OVER() as scale_flourAmt,
  ML.ROBUST_SCALER(saltAmt) OVER() as scale_saltAmt,
  ML.MAX_ABS_SCALER(yeastAmt) OVER() as scale_yeastAmt,
  ML.STANDARD_SCALER(water1Amt) OVER() as scale_water1Amt,
  ML.STANDARD_SCALER(water2Amt) OVER() as scale_water2Amt,

  ML.STANDARD_SCALER(waterTemp) OVER() as scale_waterTemp,
  ML.ROBUST_SCALER(bakeTemp) OVER() as scale_bakeTemp,
  ML.MIN_MAX_SCALER(ambTemp) OVER() as scale_ambTemp,
  ML.MAX_ABS_SCALER(ambHumidity) OVER() as scale_ambHumidity,

  ML.ROBUST_SCALER(mix1Time) OVER() as scale_mix1Time,
  ML.ROBUST_SCALER(mix2Time) OVER() as scale_mix2Time,
  ML.ROBUST_SCALER(mix1Speed) OVER() as scale_mix1Speed,
  ML.ROBUST_SCALER(mix2Speed) OVER() as scale_mix2Speed,
  ML.STANDARD_SCALER(proveTime) OVER() as scale_proveTime,
  ML.MAX_ABS_SCALER(restTime) OVER() as scale_restTime,
  ML.MAX_ABS_SCALER(bakeTime) OVER() as scale_bakeTime
)
OPTIONS (
  model_type = 'LINEAR_REG',
  input_label_cols = ['JUDGE_A'],
  enable_global_explain = TRUE,
  data_split_method = 'AUTO_SPLIT',
  MODEL_REGISTRY = 'VERTEX_AI',
  VERTEX_AI_MODEL_ID = 'bqml_03_feature_engineering_2a',
  VERTEX_AI_MODEL_VERSION_ALIASES = ['run-20230112234821']
) AS
SELECT * EXCEPT(Recipe, JUDGE_B)
FROM `statmike-mlops-349915.feature_engineering.bread`
```

Training with boosted tree (Model b):

```sql
CREATE OR REPLACE MODEL `statmike-mlops-349915.feature_engineering.03_feature_engineering_2b`
TRANSFORM (
  JUDGE_A,

  ML.MIN_MAX_SCALER(flourAmt) OVER() as scale_flourAmt,
  ML.ROBUST_SCALER(saltAmt) OVER() as scale_saltAmt,
  ML.MAX_ABS_SCALER(yeastAmt) OVER() as scale_yeastAmt,
  ML.STANDARD_SCALER(water1Amt) OVER() as scale_water1Amt,
  ML.STANDARD_SCALER(water2Amt) OVER() as scale_water2Amt,

  ML.STANDARD_SCALER(waterTemp) OVER() as scale_waterTemp,
  ML.ROBUST_SCALER(bakeTemp) OVER() as scale_bakeTemp,
  ML.MIN_MAX_SCALER(ambTemp) OVER() as scale_ambTemp,
  ML.MAX_ABS_SCALER(ambHumidity) OVER() as scale_ambHumidity,

  ML.ROBUST_SCALER(mix1Time) OVER() as scale_mix1Time,
  ML.ROBUST_SCALER(mix2Time) OVER() as scale_mix2Time,
  ML.ROBUST_SCALER(mix1Speed) OVER() as scale_mix1Speed,
  ML.ROBUST_SCALER(mix2Speed) OVER() as scale_mix2Speed,
  ML.STANDARD_SCALER(proveTime) OVER() as scale_proveTime,
  ML.MAX_ABS_SCALER(restTime) OVER() as scale_restTime,
  ML.MAX_ABS_SCALER(bakeTime) OVER() as scale_bakeTime
)
OPTIONS (
  model_type = 'BOOSTED_TREE_REGRESSOR',
  booster_type = 'GBTREE',
  num_parallel_tree = 1,
  max_iterations = 30,
  early_stop = TRUE,
  min_rel_progress = 0.01,
  tree_method = 'HIST',
  subsample = 0.85,
  input_label_cols = ['JUDGE_A'],
  enable_global_explain = TRUE,
  data_split_method = 'AUTO_SPLIT',
  l1_reg = 10,
  l2_reg = 10,
  MODEL_REGISTRY = 'VERTEX_AI',
  VERTEX_AI_MODEL_ID = 'bqml_03_feature_engineering_2b',
  VERTEX_AI_MODEL_VERSION_ALIASES = ['run-20230112234926']
) AS
SELECT * EXCEPT(Recipe, JUDGE_B)
FROM `statmike-mlops-349915.feature_engineering.bread`
```

Identical input columns with identical preprocessing make it easy to compare the accuracy of the models. Using the BigQuery ML function ML.EVALUATE makes this comparison as simple as a single SQL query that stacks the outcomes with the UNION ALL set operator:

```sql
SELECT 'Manual Feature Engineering - 2A' as Approach, mean_squared_error, r2_score
FROM ML.EVALUATE(MODEL `statmike-mlops-349915.feature_engineering.03_feature_engineering_2a`)
UNION ALL
SELECT 'Manual Feature Engineering - 2B' as Approach, mean_squared_error, r2_score
FROM ML.EVALUATE(MODEL `statmike-mlops-349915.feature_engineering.03_feature_engineering_2b`)
```

The evaluation comparison shows that the boosted tree model is a much better fit than the linear regression, with drastically lower mean squared error and higher r2. Both models are ready to serve predictions, but the clear choice is the boosted tree regressor. Once you decide which model to use, you can predict directly within BigQuery using the ML.PREDICT function, as in the sketch below.
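A minimal sketch of in-database prediction with the chosen model (the tutorial notebook has the full version):

```sql
-- The inline TRANSFORM clause preprocesses the raw columns automatically.
SELECT *
FROM ML.PREDICT(
  MODEL `statmike-mlops-349915.feature_engineering.03_feature_engineering_2b`,
  (SELECT * EXCEPT(Recipe, JUDGE_B)
   FROM `statmike-mlops-349915.feature_engineering.bread`));
```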
In the rest of the tutorial, we show how to export the model outside of BigQuery ML and predict using Google Cloud Vertex AI.

Using BigQuery Models for Inference Outside of BigQuery

Once your model is trained, if you want to do online inference for low-latency responses in your application, you have to deploy the model outside of BigQuery. The following steps demonstrate how to deploy the models to Vertex AI Prediction endpoints. This can be accomplished in one of two ways:

- Manually export the model from BigQuery ML and set up a Vertex AI Prediction endpoint. To do this, you need to do Steps 3 and 4 first.
- Register the model and deploy from Vertex AI Model Registry automatically. This capability is not available yet but will arrive in a forthcoming release; once it is available, Steps 3 and 4 can be skipped.

Step 3: Manually export models from BigQuery

BigQuery ML supports an EXPORT MODEL statement to deploy models outside of BigQuery. A manual export includes two models: a preprocessing model that reflects the TRANSFORM statement, and a prediction model. Both models are exported with a single export statement in BigQuery ML:

```sql
EXPORT MODEL `statmike-mlops-349915.feature_engineering.03_feature_engineering_2b`
  OPTIONS (URI = 'gs://statmike-mlops-349915-us-central1-bqml-exports/03/2b/model')
```

The preprocessing model that captures the TRANSFORM statement is exported as a TensorFlow SavedModel file. In this example it is exported to a GCS bucket located at gs://statmike-mlops-349915-us-central1-bqml-exports/03/2b/model/transform. The prediction models are saved in portable formats that match the frameworks in which they were trained by BigQuery ML: the linear regression model is exported as a TensorFlow SavedModel, and the boosted tree regressor is exported as a Booster file (XGBoost). In this example, the boosted tree model is exported to a GCS bucket located at gs://statmike-mlops-349915-us-central1-bqml-exports/03/2b/model.

These export files are in standard, open formats for their native model types, making them completely portable: they can be deployed to Vertex AI (Steps 4 through 7 below), on your own infrastructure, or even in edge applications. Steps 4 through 7 show how to register and deploy a model to a Vertex AI Prediction endpoint. These steps need to be repeated separately for the preprocessing model and the prediction model.

Step 4: Register models to Vertex AI Model Registry

To deploy the models in Vertex AI Prediction, they first need to be registered with the Vertex AI Model Registry. This requires two inputs: the link to the model files and a URI to a pre-built serving container. Go to Step 4 in the tutorial to see exactly how it's done. The registration can be done with the Vertex AI console or programmatically with one of the clients. In the example below, the Python client for Vertex AI is used to register the model:

```python
vertex_model = aiplatform.Model.upload(
    display_name = 'gcs_03_feature_engineering_2b',
    serving_container_image_uri = 'us-docker.pkg.dev/vertex-ai/prediction/xgboost-cpu.1-1:latest',
    artifact_uri = "gs://statmike-mlops-349915-us-central1-bqml-exports/03/2b/model"
)
```

Step 5: Create Vertex AI Prediction endpoints

Vertex AI includes a service for hosting models for online predictions. To host a model on a Vertex AI Prediction endpoint, you first create an endpoint. This can also be done directly from the Vertex AI Model Registry console or programmatically with one of the clients. In the example below, the Python client for Vertex AI is used to create the endpoint:

```python
vertex_endpoint = aiplatform.Endpoint.create(
    display_name = '03_feature_engineering_manual_2b'
)
```

Step 6: Deploy models to endpoints

Deploying a model from the Vertex AI Model Registry (Step 4) to a Vertex AI Prediction endpoint (Step 5) is done in a single deployment action, where the model definition is supplied to the endpoint along with the type of machine to utilize. Vertex AI Prediction endpoints can automatically scale up or down to handle prediction traffic based on the minimum and maximum number of replicas you provide (the default is 1 for both). In the example below, the Python client for Vertex AI is used with the deploy method of the endpoint (Step 5) using the model (Step 4):

```python
vertex_endpoint.deploy(
    model = vertex_model,
    deployed_model_display_name = vertex_model.display_name,
    traffic_percentage = 100,
    machine_type = 'n1-standard-2',
    min_replica_count = 1,
    max_replica_count = 1
)
```

Step 7: Request predictions from endpoints

Once the model is deployed to a Vertex AI Prediction endpoint (Step 6), it can serve predictions. Rows of data, called instances, are passed to the endpoint, and the results returned include the processed information: a preprocessing result or a prediction.
Getting prediction results from Vertex AI Prediction endpoints can be done with any of the Vertex AI API interfaces (REST, gRPC, gcloud, Python, Java, Node.js). Here, the request is demonstrated directly with the predict method of the endpoint (Step 6) using the Python client for Vertex AI:

```python
results = vertex_endpoint.predict(instances = [{
    'flourAmt': 511.21695405324624,
    'saltAmt': 9,
    'yeastAmt': 11,
    'mix1Time': 6,
    'mix1Speed': 4,
    'mix2Time': 5,
    'mix2Speed': 4,
    'water1Amt': 338.3989183746999,
    'water2Amt': 105.43955159464981,
    'waterTemp': 48,
    'proveTime': 92.27755071811586,
    'restTime': 43,
    'bakeTime': 29,
    'bakeTemp': 462.14028505497805,
    'ambTemp': 38.20572852497746,
    'ambHumidity': 63.77836403396154
}])
```

The result from an endpoint hosting the preprocessing model is identical to applying the TRANSFORM statement in BigQuery ML. The results can then be pipelined to an endpoint hosting the prediction model to serve predictions that match the results of the ML.PREDICT function in BigQuery ML. The results of both methods, Vertex AI Prediction endpoints and BigQuery ML with ML.PREDICT, are shown side by side in the tutorial to confirm that the model's results are replicated. The model can now be used for online serving with extremely low latency. This even includes using private endpoints for lower latency and secure connections with VPC Network Peering.

Conclusion

With the new preprocessing functions, you can simplify data exploration and feature preprocessing. Further, by embedding preprocessing within model training using the TRANSFORM statement, the serving process is simplified: prepped models can be used without additional preprocessing steps. In other words, predictions are done right inside BigQuery, or the models can be exported to any location outside of BigQuery, such as Vertex AI Prediction, for online serving. The tutorial demonstrated how BigQuery ML works with Vertex AI Model Registry and Prediction to create a seamless end-to-end ML experience. In the future you can expect to see more capabilities that bring BigQuery, BigQuery ML, and Vertex AI together. Click here to access the tutorial or check out the documentation to learn more about BigQuery ML.

Thanks to Ian Zhao, Abhinav Khushraj, Yan Sun, Amir Hormati, Mingge Deng, and Firat Tekiner from the BigQuery ML team.