Associate Data Practitioner

Unlock the power of your data in the cloud! Get hands-on with Google Cloud's core data services like BigQuery and Looker to validate your practical skills in data ingestion, analysis, and management, and earn your Associate Data Practitioner certification!

Perform inference using BigQuery ML models

Execute Inference with Pre-trained BigQuery ML Models

BigQuery ML lets you perform inference by applying models you have already trained to new or existing data. This process generates predictions directly in BigQuery, so you don’t need to export data or switch tools. By running inference in SQL, you make data-driven decisions faster and more transparently. Using pre-trained models can save time because you skip the training step and focus on applying insights. Overall, inference is the bridge between a trained model and actionable predictions.

To run inference, you write SQL queries that call the ML.PREDICT function. A typical query looks like this:

SELECT *
FROM ML.PREDICT(
  MODEL `project.dataset.model`,
  (SELECT * FROM `project.dataset.new_data`)
);

You can also adjust how inference runs, for example by passing a classification threshold to ML.PREDICT as an optional STRUCT argument, or by writing the results to a destination table so they are stored for later use. These settings help you control output quality and storage, making your workflows more flexible. By keeping everything in SQL, you maintain a simple, repeatable process.
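
As a minimal sketch (the table and model names are hypothetical, and the threshold only applies to binary classification models), you could save predictions to a results table while using a custom decision threshold:

-- Write predictions to a results table, classifying rows as positive
-- only when the predicted probability exceeds 0.6.
CREATE OR REPLACE TABLE `project.dataset.prediction_results` AS
SELECT *
FROM ML.PREDICT(
  MODEL `project.dataset.model`,
  (SELECT * FROM `project.dataset.new_data`),
  STRUCT(0.6 AS threshold)
);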

Once you have results, you need to interpret the output table. It usually contains columns like:

  • predicted_<label_column>: the model’s chosen class or value, named after the label column the model was trained on
  • predicted_<label_column>_probs: for classification models, how sure the model is about each possible class
  • feature attributions: which inputs influenced the prediction the most, available through ML.EXPLAIN_PREDICT

These fields give you insights into model behavior and help build trust in the predictions. By reviewing them, you can see which features matter and decide how to use the predictions in reports or applications.
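
For example, here is a sketch (again with hypothetical names, assuming the training label column was called label) that flattens the per-class probabilities and requests feature attributions:

-- Flatten per-class probabilities; ML.PREDICT names the output columns
-- predicted_label and predicted_label_probs when the label column is `label`.
SELECT
  p.predicted_label,
  probs.label AS class,
  probs.prob AS probability
FROM (
  SELECT *
  FROM ML.PREDICT(
    MODEL `project.dataset.model`,
    (SELECT * FROM `project.dataset.new_data`)
  )
) AS p, UNNEST(p.predicted_label_probs) AS probs;

-- Feature attributions come from ML.EXPLAIN_PREDICT rather than ML.PREDICT.
SELECT *
FROM ML.EXPLAIN_PREDICT(
  MODEL `project.dataset.model`,
  (SELECT * FROM `project.dataset.new_data`),
  STRUCT(3 AS top_k_features)
);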

Evaluating your predictions is key to understanding model quality. Use the ML.EVALUATE function to compare predictions against known outcomes. For example:

SELECT *
FROM ML.EVALUATE(
  MODEL `project.dataset.model`,
  (SELECT * FROM `project.dataset.labeled_data`)
);

Depending on the model type, this returns metrics such as accuracy, roc_auc, and log_loss for classification models, or mean_absolute_error and mean_squared_error for regression models. These numbers let you decide whether to retrain the model, adjust features, or deploy the predictions to production.
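
For a binary classification model, you can also sketch an evaluation at a specific decision threshold (hypothetical names again):

-- Evaluate a binary classification model at a 0.6 decision threshold.
SELECT
  precision,
  recall,
  accuracy,
  roc_auc
FROM ML.EVALUATE(
  MODEL `project.dataset.model`,
  (SELECT * FROM `project.dataset.labeled_data`),
  STRUCT(0.6 AS threshold)
);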

Putting it all together, inference with BigQuery ML follows a clear cycle:

  1. Apply the model to data with ML.PREDICT
  2. Interpret results and understand feature impacts
  3. Evaluate performance using ML.EVALUATE
  4. Decide on retraining or deployment
By following these steps, as shown in the sketch below, you create a reliable process for turning raw data into actionable insights entirely within BigQuery. This approach keeps your pipeline simple, scalable, and easy to manage.
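
As a rough end-to-end sketch (hypothetical names, and assuming a binary classification model so that roc_auc is available), BigQuery scripting can chain these steps in a single script:

-- Variables must be declared at the top of a BigQuery script.
DECLARE model_auc FLOAT64;

-- Step 1: apply the model and store predictions in a results table.
CREATE OR REPLACE TABLE `project.dataset.predictions` AS
SELECT *
FROM ML.PREDICT(
  MODEL `project.dataset.model`,
  (SELECT * FROM `project.dataset.new_data`)
);

-- Step 3: evaluate against labeled data and capture roc_auc.
SET model_auc = (
  SELECT roc_auc
  FROM ML.EVALUATE(
    MODEL `project.dataset.model`,
    (SELECT * FROM `project.dataset.labeled_data`)
  )
);

-- Step 4: surface a retraining signal if quality drops below a chosen bar.
IF model_auc < 0.8 THEN
  SELECT 'Consider retraining: roc_auc is below 0.8' AS note, model_auc AS roc_auc;
END IF;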

Conclusion

In this section, we learned how to perform inference using BigQuery ML models by writing SQL queries with ML.PREDICT. We saw how to adjust options such as the classification threshold and how to write results to output tables to meet different needs. The output includes prediction and probability columns, and ML.EXPLAIN_PREDICT can add feature attributions that help explain model decisions. We then covered how to evaluate predictions using ML.EVALUATE and metrics such as accuracy and roc_auc. Together, these steps form a repeatable workflow for applying, interpreting, and validating BigQuery ML models, enabling you to make informed, data-driven decisions.