Progress Bar for Long ML Inference Tasks

A step-by-step guide to adding a progress bar for long-running ML inference tasks, giving users clear, real-time feedback during slow operations.


Introducing the Concept of a Progress Bar for Long ML Inference Tasks

 

  • Long ML inference tasks often take several seconds or even minutes, especially when processing large datasets or running complex deep learning models.
  • Integrating a progress bar helps provide real-time visual feedback to users about the ongoing process, reducing uncertainty and enhancing usability.
  • This guide explains a technical approach to build, update, and display a progress bar during ML inference tasks.
 

Architecting the Progress Tracking System

 

  • Backend Task Management: Use asynchronous processing techniques. The ML inference job runs in a background worker, periodically updating its progress status.
  • Communication Layer: The backend communicates progress to the frontend using either AJAX polling or real-time WebSocket updates.
  • Frontend Display: The frontend includes a progress bar component that updates its visual state as the backend reports new progress values.
 

Implementing the Backend Progress Updates

 

  • Task Execution and Progress Updates: Divide the ML inference into logical steps and update a progress counter accordingly. This can be stored in a shared data store or in a server-side variable accessible by the progress endpoint.
  • Simulated Code Example (Python): Below is a simplified code snippet showing progress updates during an ML inference process.
 

# Example: Python code for ML inference with progress updates

import time
import random

# Global dictionary simulating a shared store for progress status
# (replace with Redis or a similar store in production)
progress_store = {}

def update_progress(task_id, percentage):
    # Save the new percentage value for the given task_id
    progress_store[task_id] = percentage

def long_ml_inference(task_id, data):
    total_steps = 10  # Number of steps in the inference process
    for step in range(total_steps):
        # Simulate ML processing time for each step
        time.sleep(random.uniform(0.5, 1.5))
        # Calculate the current progress percentage
        percentage = int((step + 1) / total_steps * 100)
        update_progress(task_id, percentage)
    return "Inference Complete"

# Start inference with a unique task ID (in a real deployment, this would
# typically be scheduled through a task queue such as Celery)
task_id = "unique_task_123"
result = long_ml_inference(task_id, data={})
  • In real applications, replace the global dictionary with a production-grade solution like Redis, and consider using asynchronous task queues (e.g., Celery or RQ) to manage the background job.
  • Each update to progress is stored and can be queried by the frontend.
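Before introducing Redis and Celery, the same pattern can be sketched with a thread-based background worker and a lock-protected in-memory store. The `ProgressStore` class and `run_inference` function below are illustrative names, not part of any library; `join()` is only used so the script finishes deterministically, where a real HTTP handler would return immediately after starting the thread.

```python
import threading
import time

class ProgressStore:
    """Thread-safe in-memory progress store (a stand-in for Redis)."""
    def __init__(self):
        self._lock = threading.Lock()
        self._progress = {}

    def set(self, task_id, percentage):
        with self._lock:
            self._progress[task_id] = percentage

    def get(self, task_id):
        with self._lock:
            # Unknown task IDs report 0% rather than raising
            return self._progress.get(task_id, 0)

store = ProgressStore()

def run_inference(task_id, total_steps=5):
    # Each loop iteration stands in for one chunk of model inference
    for step in range(total_steps):
        time.sleep(0.01)  # placeholder for real inference work
        store.set(task_id, int((step + 1) / total_steps * 100))

# Run the job in a background thread so a request handler could return at once
worker = threading.Thread(target=run_inference, args=("task-1",))
worker.start()
worker.join()
print(store.get("task-1"))  # 100
```

The lock matters because the worker thread writes progress while the web server's request threads read it; with Redis, atomic `SET`/`GET` operations play the same role.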
 

Implementing the Frontend Progress Bar

 

  • Polling Mechanism: The simplest method is to have the frontend periodically send an AJAX request to a progress endpoint that returns the current progress percentage.
  • Real-time Communication: Alternatively, set up a WebSocket connection to push progress updates to the client as soon as they occur.
 

Frontend AJAX Polling Example

 

  • HTML and JavaScript Code: The code below illustrates how to poll a backend endpoint for progress updates and update an HTML progress bar dynamically.
 

// HTML structure for the progress bar
<div id="progress-container" style="width: 100%; background-color: #eee;">
  <div id="progress-bar" style="width: 0%; background-color: #5cb85c; height: 30px;"></div>
</div>

// JavaScript code to poll the progress status via AJAX (using fetch API)
function pollProgress(taskId) {
    fetch('/progress?task_id=' + taskId)
        .then(response => response.json())
        .then(data => {
            // Update the progress bar width
            document.getElementById("progress-bar").style.width = data.percentage + "%";
            // Check if the task is complete
            if (data.percentage < 100) {
                // Continue polling every second until complete
                setTimeout(function() {
                    pollProgress(taskId);
                }, 1000);
            } else {
                alert("ML Inference Completed!");
            }
        })
        .catch(error => {
            console.error("Error fetching progress data:", error);
        });
}

// Start polling for a given task ID
const taskId = "unique_task_123";
pollProgress(taskId);
  • The endpoint '/progress' should be implemented on the server to return a JSON object containing the progress value like: { "percentage": 45 }.
  • This approach keeps the client updated even if the job takes a long time to complete.
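A minimal sketch of the matching '/progress' endpoint, assuming a Flask backend and the in-memory `progress_store` dictionary from the earlier Python example (in production this lookup would query Redis or the task queue's result backend):

```python
from flask import Flask, jsonify, request

app = Flask(__name__)

# Populated by the background inference worker; a fixed value is used here
# only so the sketch is self-contained
progress_store = {"unique_task_123": 45}

@app.route("/progress")
def progress():
    task_id = request.args.get("task_id", "")
    # Default to 0 so an unknown task ID does not crash the endpoint
    return jsonify({"percentage": progress_store.get(task_id, 0)})

if __name__ == "__main__":
    app.run(port=5000)
```

A `GET /progress?task_id=unique_task_123` request then returns `{"percentage": 45}`, which is exactly the shape the polling JavaScript above expects.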
 

Using WebSockets for Real-Time Progress Updates

 

  • A more advanced solution uses WebSockets to send updates in real-time.
  • This method involves the server pushing the progress information as soon as it becomes available, reducing the need for constant polling.
  • Frameworks such as Socket.IO (for Node.js) or channels in Django can be used to implement this solution.
 

Example: Simple WebSocket Progress Update (Node.js with Socket.IO)

 

  • Server-side Code: The server emits progress updates for a specific task.
 

// Example: Node.js server using Express and Socket.IO

const express = require('express');
const http = require('http');
const socketIO = require('socket.io');

const app = express();
const server = http.createServer(app);
const io = socketIO(server);

app.get('/', (req, res) => {
    res.sendFile(__dirname + '/index.html'); // Serve homepage with progress bar
});

// Simulate ML inference task with periodic progress updates
function simulateMLTask(taskId, socket) {
    let current = 0;
    const total = 10;
    const interval = setInterval(() => {
        current++;
        const percentage = Math.round((current / total) * 100);
        socket.emit('progressUpdate', { taskId: taskId, percentage: percentage });
        if (current >= total) {
            clearInterval(interval);
            socket.emit('progressUpdate', { taskId: taskId, percentage: 100, message: "Inference Complete" });
        }
    }, 1000);
}

io.on('connection', (socket) => {
    console.log('Client connected');
    // Listen for a start task event from the client
    socket.on('startTask', (data) => {
        simulateMLTask(data.taskId, socket);
    });
});

server.listen(3000, () => {
    console.log('Server is listening on port 3000');
});
  • Client-side Code (index.html): This file sets up the Socket.IO client and updates the progress bar when receiving new progress data.
 

// In the HTML file (index.html), include Socket.IO client script
<script src="/socket.io/socket.io.js"></script>
<div id="progress-container" style="width: 100%; background-color: #eee;">
  <div id="progress-bar" style="width: 0%; background-color: #5cb85c; height: 30px;"></div>
</div>
<script>
    var socket = io(); // Connect to the backend using Socket.IO
    var taskId = "unique_task_123";
    // Start the ML inference task and progress updates
    socket.emit('startTask', { taskId: taskId });
    // Listen for progress updates from the server
    socket.on('progressUpdate', function(data) {
        if (data.taskId === taskId) {
            document.getElementById("progress-bar").style.width = data.percentage + "%";
            if (data.percentage === 100 && data.message) {
                alert(data.message);
            }
        }
    });
</script>
  • This WebSocket approach delivers updates the moment progress changes, avoiding polling overhead and making the user experience smoother.
 

Final Considerations and Best Practices

 

  • User Experience: Consider adding messages or activity spinners alongside the progress bar to reassure users during long processes.
  • Error Handling: Ensure that both your backend and frontend properly handle timeouts, disconnections, or failures in updating progress.
  • Task Identification: Use unique IDs for tasks to avoid conflicts when multiple inference jobs run simultaneously.
  • Security: Safeguard your progress endpoints by authenticating requests and protecting against unauthorized access.
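The error-handling and task-identification points can be combined by storing a status flag alongside the percentage, so the frontend knows to stop polling when a job fails. The field names and the `safe_inference` wrapper below are illustrative choices, not a standard API:

```python
import time

progress_store = {}

def update_progress(task_id, percentage, status="running", error=None):
    # Store a status flag and timestamp so clients can detect failed
    # or stalled tasks, not just the raw percentage
    progress_store[task_id] = {
        "percentage": percentage,
        "status": status,          # "running", "complete", or "failed"
        "error": error,
        "updated_at": time.time(),
    }

def safe_inference(task_id, steps):
    update_progress(task_id, 0)
    try:
        for i, step_fn in enumerate(steps):
            step_fn()
            update_progress(task_id, int((i + 1) / len(steps) * 100))
        update_progress(task_id, 100, status="complete")
    except Exception as exc:
        # Preserve the last known percentage and surface the error to the client
        last = progress_store[task_id]["percentage"]
        update_progress(task_id, last, status="failed", error=str(exc))
```

On the frontend, the polling callback would then check `data.status` and show an error message instead of retrying forever; the `updated_at` timestamp also lets the progress endpoint flag tasks that have stopped reporting.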
 

Conclusion

 

  • This guide detailed how to integrate a progress bar for long ML inference tasks by setting up backend progress tracking and synchronizing updates with the frontend.
  • Whether you choose an AJAX polling method or real-time WebSocket updates, the goal is to provide users with clear feedback during resource-intensive operations.
  • With careful design and robust implementation, the progress bar will significantly improve user engagement and satisfaction during heavy ML tasks.
 

