
How to Run an ML Model in the Browser with TensorFlow.js

Run ML models in your browser using TensorFlow.js. Follow our step-by-step guide for easy in-browser AI integration.


Loading TensorFlow.js Library in the Browser

 
  • TensorFlow.js is an open-source library that enables running machine learning models directly in the browser using JavaScript. This means that instead of sending data to a server, all inference happens locally, providing faster interaction and enhanced privacy.
  • To start, include the TensorFlow.js script in your HTML file. You can use a CDN link to load the library easily.

<!-- In your HTML file, add the following script tag. -->
<!-- This loads TensorFlow.js into your browser environment. -->
<script src="https://cdn.jsdelivr.net/npm/@tensorflow/tfjs"></script>

Preparing Your ML Model for Browser Use

 
  • Most machine learning models are not in a format that can be read directly by TensorFlow.js. Therefore, you might need to convert your pre-trained model into a TensorFlow.js-friendly format.
  • If you're using a model built in TensorFlow or Keras, you can use the TensorFlow.js Converter. This tool converts your existing model files into JSON and binary weight files which can be loaded into the browser.
  • For example, if you have a Keras model, you would execute a conversion command (usually in a command line environment) like: tensorflowjs_converter --input_format keras my_model.h5 ./tfjs_model
  • This conversion generates a model.json file and several binary files that contain the weights.
 

Loading the Converted Model in the Browser

 
  • Once the model is converted and hosted on your server or available locally, you can load it in your JavaScript code using the tf.loadLayersModel or tf.loadGraphModel function.
  • The tf.loadLayersModel function is used for models created with the Keras Layers API, while tf.loadGraphModel is used for models converted from TensorFlow SavedModels or frozen graphs.

// Load a Keras (Layers API) model
tf.loadLayersModel('path/to/model.json')
  .then(function(model) {
    // The model is now loaded and ready for predictions
    console.log('Model loaded successfully!');
  })
  .catch(function(error) {
    console.error('Error loading the model:', error); // Handle errors during loading
  });
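
Before calling tf.loadLayersModel, it can help to verify that the model URL is actually being served; a minimal preflight sketch using the standard fetch API (the path below is hypothetical):

```javascript
// Preflight check: confirm model.json is reachable before handing the URL
// to tf.loadLayersModel, so a wrong path fails with a clear message.
async function checkModelUrl(url) {
  const response = await fetch(url);
  if (!response.ok) {
    throw new Error(`model.json not reachable (HTTP ${response.status}): ${url}`);
  }
  return response.json(); // the parsed model topology
}

// Usage (hypothetical path):
// checkModelUrl('path/to/model.json')
//   .then(() => tf.loadLayersModel('path/to/model.json'));
```

This surfaces server misconfiguration (a common cause of loading errors) before TensorFlow.js reports a more opaque failure.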
 

Preprocessing Input Data for Inference

 
  • Before using your model for inference (making predictions), the input data must be in the correct format, typically as a Tensor. A Tensor is a multi-dimensional array that TensorFlow.js uses for numerical operations.
  • If the model expects an image, you may need to resize and normalize it. For example, to process an image in the browser, you could use the HTML Canvas API to resize and extract pixel data, converting it into a Tensor.

// Assume you have an image element in your HTML with id 'input-image'
const imageElement = document.getElementById('input-image');

// Use TensorFlow.js to capture the image data and convert to a tensor
let tensor = tf.browser.fromPixels(imageElement)
  .resizeNearestNeighbor([224, 224]) // Resize if the model expects 224x224 input
  .toFloat(); // Convert pixel values to float

// Normalize the image, dividing by 255 if needed (common normalization for images)
tensor = tensor.div(tf.scalar(255));

// Add a batch dimension, since models expect a batch-size dimension
tensor = tensor.expandDims();


 

Running Inference with the Model

 
  • After preprocessing, pass the tensor to the model for prediction.
  • The model's predict method returns a tensor representing the model's output. You may need to perform postprocessing on this output to extract useful information.

// Assume the model is already loaded and tensor is prepared from input data
model.predict(tensor).data()
  .then(predictions => {
    // 'predictions' is a typed array containing prediction results
    console.log('Prediction Results:', predictions);
  })
  .catch(error => {
    console.error('Error during prediction:', error); // Handle errors during inference
  });
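
A common postprocessing step for classification models is picking the highest-scoring class from the returned typed array. A minimal sketch (the label names here are hypothetical and depend on your model):

```javascript
// Map an array of class probabilities to the best label and its score.
function topPrediction(probabilities, labels) {
  let bestIndex = 0;
  for (let i = 1; i < probabilities.length; i++) {
    if (probabilities[i] > probabilities[bestIndex]) bestIndex = i;
  }
  return { label: labels[bestIndex], confidence: probabilities[bestIndex] };
}

const labels = ['cat', 'dog', 'bird']; // hypothetical class names
const result = topPrediction([0.1, 0.7, 0.2], labels);
console.log(result.label, result.confidence); // dog 0.7
```

You would call this with the `predictions` typed array from the snippet above and your model's own label list.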
 

Optimizing Performance in the Browser

 
  • TensorFlow.js can leverage your computer's GPU through technologies like WebGL, providing significant performance improvements for matrix operations used in ML inference. This is usually enabled by default when available.
  • For larger models, consider batching inputs or using quantized weights (the TensorFlow.js converter supports 16-bit and 8-bit quantization) to reduce download size and computation time.
  • If the model is large, lazy-loading or progressive loading can enhance user experience by loading only necessary parts of the model initially.
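
One practical way to check whether these optimizations matter for your model is to time inference directly. Below is a small benchmarking sketch using a stand-in function (in a real app you would wrap a call to model.predict); note the warm-up run, since the first WebGL inference compiles shaders and is much slower than later ones:

```javascript
// Time repeated calls to an inference function and report the average.
// `runInference` is a stand-in; in a real app it would wrap model.predict.
async function benchmark(runInference, runs = 10) {
  // Warm-up call: the first inference compiles WebGL shaders and is slow
  await runInference();
  const start = Date.now();
  for (let i = 0; i < runs; i++) await runInference();
  return (Date.now() - start) / runs; // average milliseconds per run
}

benchmark(async () => { /* model.predict(tensor) would go here */ })
  .then(avgMs => console.log(`Average inference time: ${avgMs.toFixed(1)} ms`));
```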
 

Troubleshooting Common Issues

 
  • Model Loading Errors: Check the file path to your model.json file and ensure all related weight files are accessible. Verify that your server is configured to serve these files.
  • Input Data Mismatches: Ensure the shape and data type of your input tensor match what your model expects. Sometimes, dimensions such as batch size or color channels need to be explicitly added or rearranged.
  • Performance Bottlenecks: If inference is slow, confirm that WebGL is being used. You can verify this by checking the active TensorFlow.js backend with tf.getBackend(), which should return 'webgl' when GPU acceleration is active.
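
For input-shape mismatches, comparing the element count implied by the expected shape with the length of your data often pinpoints the problem; a tiny helper:

```javascript
// Number of elements implied by a tensor shape, e.g. [1, 224, 224, 3]
function elementCount(shape) {
  return shape.reduce((product, dim) => product * dim, 1);
}

// A 224x224 RGB image with a batch dimension:
console.log(elementCount([1, 224, 224, 3])); // 150528
```

If your flat data array has a different length, a dimension (often the batch size or the number of color channels) is missing or wrong.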
 

Conclusion

 
  • This guide explained how to load and run a machine learning model in the browser using TensorFlow.js. By following these steps, you can convert an existing model, load it into your web application, preprocess inputs, and perform inference entirely on the client side.
  • Utilizing TensorFlow.js lets you bring intelligent features to your web applications without adding server-side inference infrastructure.
 

