Learn to connect your web app to a machine learning backend using WebSockets. Follow our step-by-step guide for real-time integration.

To link your web application to an ML backend in real-time, you first need a robust WebSocket server running at your ML backend. This server will accept persistent connections from clients, allowing immediate bidirectional communication. In this example, we use a Python-based WebSocket server with the websockets library. You can integrate your ML model within this server to react to incoming data and send back appropriate responses.
# Import necessary modules for WebSocket handling and ML model loading
import asyncio  # Asynchronous I/O for handling multiple connections
import pickle   # For serializing/deserializing Python objects (if needed)

import websockets  # Library providing the WebSocket server

# Assume load_ml_model is a function that loads your machine learning model
from ml_model import load_ml_model, predict

model = load_ml_model()  # Load your ML model once for reuse

# Handler to manage each connection from clients
async def ml_handler(websocket, path):
    try:
        # Continuously receive data from the client
        async for message in websocket:
            # Binary frames arrive as bytes, so they can be unpickled directly
            input_data = pickle.loads(message)
            # Perform inference using the ML model
            prediction = predict(model, input_data)
            # Serialize the prediction and send it back as a binary frame
            await websocket.send(pickle.dumps(prediction))
    except Exception as e:
        # Optionally log or handle exceptions here
        print("Error processing message:", e)

# Boot the server on a specific host and port
async def main():
    async with websockets.serve(ml_handler, "localhost", 6789):
        await asyncio.Future()  # Run indefinitely

if __name__ == "__main__":
    asyncio.run(main())
In the web application (client side), create a WebSocket connection to the ML backend server. The client sends data for predictions and listens for responses. Keeping this connection persistent enables seamless real-time interaction with your ML backend.
// Create a new WebSocket connection to your ML backend server
var ws = new WebSocket("ws://localhost:6789");

// Event handler fired when the connection is successfully established
ws.onopen = function(event) {
  console.log("WebSocket connection opened:", event);
  // You can send data to the ML backend once the connection is open
  // For example, serialize your input data appropriately before sending
  var input_data = { feature1: 3.14, feature2: 2.71 }; // Example data
  var serializedData = JSON.stringify(input_data); // Use a more robust serialization method if needed
  // Send the data
  ws.send(serializedData);
};

// Event handler for receiving messages (responses) from the ML backend
ws.onmessage = function(event) {
  // Process the incoming message from the backend
  console.log("Received prediction from ML backend:", event.data);
  // Use the prediction value to update the UI or trigger another action
};

// Event handler for any errors occurring during communication
ws.onerror = function(error) {
  console.error("WebSocket encountered error:", error);
};

// Event handler fired when the WebSocket connection is closed
ws.onclose = function(event) {
  console.log("WebSocket connection closed:", event);
};
It is crucial to agree upon a data protocol since the web client and ML server might use different data representations. Two common strategies are JSON and binary protocols (e.g., using pickle or Protocol Buffers). JSON is human-readable and widely supported, whereas binary protocols can be more efficient for complex numerical data.
// Example function to send JSON data from the client
function sendData(ws, data) {
  try {
    // Convert the data to a JSON string
    var jsonData = JSON.stringify(data);
    ws.send(jsonData);
  } catch(e) {
    console.error("Error serializing data:", e);
  }
}
On the ML server side, you can parse the incoming JSON:
import json

async def ml_handler(websocket, path):
    try:
        async for message in websocket:
            # Parse the JSON string received from the client
            input_data = json.loads(message)
            prediction = predict(model, input_data)
            # Return the prediction result as JSON to the client
            await websocket.send(json.dumps(prediction))
    except Exception as e:
        print("Error processing JSON message:", e)
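For purely numeric payloads, the binary route mentioned above can cut message size considerably compared with JSON. As a minimal sketch (the two-float layout and feature names are illustrative assumptions, not part of the protocol above), Python's struct module can pack the features into a fixed-size frame that a client could send as a binary WebSocket message:

```python
import struct

# Hypothetical layout: two little-endian 32-bit floats (feature1, feature2)
FRAME_FORMAT = "<2f"

def pack_features(feature1, feature2):
    """Pack two float features into a compact binary frame."""
    return struct.pack(FRAME_FORMAT, feature1, feature2)

def unpack_features(frame):
    """Recover the features from a binary frame on the server side."""
    feature1, feature2 = struct.unpack(FRAME_FORMAT, frame)
    return {"feature1": feature1, "feature2": feature2}

frame = pack_features(3.14, 2.71)
print(len(frame))  # 8 bytes, versus roughly 40 for the equivalent JSON string
```

The trade-off is that both sides must agree on the exact layout in advance, whereas JSON is self-describing; schemes like Protocol Buffers formalize that agreement for more complex data.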
Both the backend ML server and the web client run asynchronously to enable real-time communication. The ML backend must handle multiple concurrent connections without blocking the main thread. With Python’s asyncio library and JavaScript’s event-driven model, each connection and data transfer can be handled concurrently without waiting for one process to complete before starting another.
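In practice, model inference is usually CPU-bound, so calling it directly inside an async handler would stall every other connection for its duration. One common pattern, sketched below with a stand-in blocking_predict function (an assumption for illustration, not the article's model), is to offload the call to a thread pool with run_in_executor so the event loop stays responsive:

```python
import asyncio
import time

def blocking_predict(input_data):
    """Stand-in for a CPU-bound model call."""
    time.sleep(0.1)  # Simulate inference latency
    return {"score": sum(input_data)}

async def handle_request(input_data):
    loop = asyncio.get_running_loop()
    # Run the blocking call in the default thread pool so the
    # event loop can keep serving other WebSocket connections
    return await loop.run_in_executor(None, blocking_predict, input_data)

async def main():
    # Two "clients" served concurrently rather than back to back
    results = await asyncio.gather(
        handle_request([1.0, 2.0]),
        handle_request([3.0, 4.0]),
    )
    print(results)

asyncio.run(main())
```

Inside the WebSocket handler, the same await loop.run_in_executor(...) call would wrap predict(model, input_data).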
For a production-level connection between the web client and the ML backend, pay careful attention to performance concerns such as efficient serialization/deserialization, connection retries, and graceful error handling.
// Example of handling reconnect logic on the client side
function createWebSocket() {
  var ws = new WebSocket("ws://localhost:6789");
  ws.onopen = function(event) {
    console.log("Connected to ML backend.");
  };
  ws.onerror = function(error) {
    console.error("WebSocket error:", error);
  };
  ws.onclose = function(event) {
    console.log("Connection closed. Attempting to reconnect...");
    // Simple reconnect strategy: try again after a delay,
    // reassigning ws so callers keep a live reference
    setTimeout(function() { ws = createWebSocket(); }, 2000);
  };
  ws.onmessage = function(event) {
    console.log("Received message:", event.data);
  };
  return ws;
}

// Initialize the WebSocket connection
var ws = createWebSocket();
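A fixed 2-second retry works, but during a prolonged outage every disconnected client hammers the server in lockstep. A common refinement is capped exponential backoff with jitter. The helper below (names and constants are illustrative, not from the original example) computes the per-attempt delay and would plug into the reconnect timer on the client, or into a Python-based WebSocket consumer:

```python
import random

def backoff_delay(attempt, base=1.0, cap=30.0):
    """Capped exponential backoff with full jitter, in seconds."""
    exponential = min(cap, base * (2 ** attempt))
    # Full jitter: pick a random delay in [0, exponential] so that
    # many clients reconnecting at once do not arrive in lockstep
    return random.uniform(0, exponential)

for attempt in range(6):
    print(f"attempt {attempt}: wait up to {min(30.0, 2.0 ** attempt):.0f}s")
```

Resetting the attempt counter after a successful connection keeps recoveries fast while still spreading out retries during long outages.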
While WebSockets create a seamless bridge between the ML backend and the web application, their persistent nature makes it important to secure the connection. Implement measures such as authentication tokens, encryption (using the Secure WebSockets wss:// protocol), and origin checking.
# Server-side sketch of a token verification step
async def ml_handler(websocket, path):
    # verify_token is your own function for validating auth tokens
    token = websocket.request_headers.get("Authorization")
    if not verify_token(token):
        await websocket.send("Authentication failed")
        await websocket.close()
        return
    # Continue processing authenticated connections
    async for message in websocket:
        ...  # Process the message and send back a response
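The verify_token function above is application-specific. As one possible shape (the secret and the "user_id.signature" token format here are assumptions, not part of the example above), a stateless implementation can sign a user identifier with HMAC-SHA256 and check it with a constant-time comparison:

```python
import hashlib
import hmac

SECRET_KEY = b"replace-with-a-real-secret"  # Hypothetical server-side secret

def issue_token(user_id):
    """Create a token of the form 'user_id.signature'."""
    signature = hmac.new(SECRET_KEY, user_id.encode(), hashlib.sha256).hexdigest()
    return f"{user_id}.{signature}"

def verify_token(token):
    """Return True only if the token's signature matches its user_id."""
    if not token or "." not in token:
        return False
    user_id, signature = token.rsplit(".", 1)
    expected = hmac.new(SECRET_KEY, user_id.encode(), hashlib.sha256).hexdigest()
    # compare_digest avoids leaking timing information to an attacker
    return hmac.compare_digest(signature, expected)

print(verify_token(issue_token("alice")))   # True
print(verify_token("alice.bad-signature"))  # False
```

In production you would more likely validate a standard format such as a JWT, but the core idea is the same: the server can verify the token without a database lookup.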
By integrating WebSockets with your ML backend, you achieve real-time bidirectional communication between your web application and the predictive engine. This design allows users to experience immediate feedback from the machine learning model as data is transmitted. The emphasis should be on robust asynchronous management, consistent data serialization protocols, and stringent security practices.