
Create Custom ML API with Flask Blueprint

Step-by-step guide to create a custom ML API with Flask Blueprint. Build and deploy smart ML endpoints seamlessly.


Introduction

This guide explains how to build a custom machine learning (ML) API using Flask and Flask Blueprints. A Flask Blueprint is a way to organize your Flask application by grouping related routes and logic into separate modules. This separation enhances maintainability and scalability. In this guide, we will create an ML endpoint that loads a pre-trained model, accepts data in JSON format, makes predictions, and returns the results in JSON.

Creating a Flask Blueprint for the ML API

Let's start by setting up a Flask Blueprint dedicated to the ML API. This blueprint will contain the routes responsible for handling prediction requests.

  • Define the Blueprint: Create a new Python file named ml_blueprint.py that initializes the blueprint and sets up its routes.
  • Load or Simulate an ML Model: For demonstration, we will mimic an ML model using a function that produces sample predictions. In a real-world scenario, you could load a serialized model (e.g., with pickle or joblib).
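For illustration, here is one way the dummy function could later be swapped for a real serialized model. This is only a sketch using the standard library's pickle module; the LengthModel class and the model.pkl filename are stand-ins for your actual trained estimator and artifact:

```python
import pickle

class LengthModel:
    """Stand-in for a trained estimator with a scikit-learn-style predict()."""
    def predict(self, data):
        return [len(str(item)) for item in data]

# Serialize the model once (normally done by your training script)
with open('model.pkl', 'wb') as f:
    pickle.dump(LengthModel(), f)

# Load the model at application start-up so every request reuses it
with open('model.pkl', 'rb') as f:
    model = pickle.load(f)

def model_predict(data):
    return model.predict(data)

print(model_predict(['sample', 'test']))  # -> [6, 4]
```

Loading the model once at import time, rather than per request, avoids repeated deserialization overhead on every prediction call.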

# ml_blueprint.py

from flask import Blueprint, request, jsonify

# Initialize the blueprint named "ml_api"
ml_api = Blueprint('ml_api', __name__)

# Simulated ML model function
def dummy_model_predict(data):
    # For each input item, return the length of its string form as a sample prediction
    return [len(str(item)) for item in data]

# Define the prediction route using the POST method
@ml_api.route('/predict', methods=['POST'])
def predict():
    try:
        # Parse JSON from the incoming request
        input_json = request.get_json()

        # Validate that the "data" key is present in the JSON payload
        if input_json is None or 'data' not in input_json:
            return jsonify({'error': 'Missing parameter: data'}), 400

        # Extract data for prediction
        data = input_json['data']

        # Make prediction using the dummy model function
        prediction = dummy_model_predict(data)

        # Return the prediction as JSON
        return jsonify({'prediction': prediction}), 200
    except Exception as e:
        # In case of error, return the error message
        return jsonify({'error': str(e)}), 500

This code defines a Blueprint called "ml_api" with a single route /predict that accepts POST requests containing JSON-formatted input. The dummy model simply returns the length of each input item. Error handling is also included.

Registering the Blueprint with the Flask Application

Once the blueprint is defined, the next step is to register it with the main Flask application. This registration connects the blueprint routes to the main app, allowing them to be accessed as part of your API.

  • Create the main application file: In a file like app.py, import Flask and the blueprint, then register the blueprint with a URL prefix (e.g., /ml).

# app.py

from flask import Flask
from ml_blueprint import ml_api  # Import the blueprint from the module above

# Instantiate the Flask application
app = Flask(__name__)

# Register the blueprint with a URL prefix, e.g., '/ml'
app.register_blueprint(ml_api, url_prefix='/ml')

# Run the application
if __name__ == '__main__':
    # Enable debug mode for development only
    app.run(debug=True)

With the blueprint registered, the prediction endpoint is now accessible at /ml/predict.
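You can sanity-check the wiring without starting a server by using Flask's built-in test client, which issues requests directly against the app in-process. The sketch below redefines a minimal version of the blueprint inline so it runs standalone; in the real project you would simply import ml_api from ml_blueprint.py:

```python
from flask import Flask, Blueprint, request, jsonify

# Minimal inline copy of the blueprint; normally: from ml_blueprint import ml_api
ml_api = Blueprint('ml_api', __name__)

@ml_api.route('/predict', methods=['POST'])
def predict():
    input_json = request.get_json()
    if input_json is None or 'data' not in input_json:
        return jsonify({'error': 'Missing parameter: data'}), 400
    return jsonify({'prediction': [len(str(item)) for item in input_json['data']]}), 200

app = Flask(__name__)
app.register_blueprint(ml_api, url_prefix='/ml')

# Issue a request in-process via the test client
client = app.test_client()
response = client.post('/ml/predict', json={'data': ['sample', 'test']})
print(response.status_code, response.get_json())  # -> 200 {'prediction': [6, 4]}
```

Note that the request path includes the /ml prefix supplied at registration time, not just the route defined inside the blueprint.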

Understanding the Data Flow and Error Handling

This API endpoint works as follows:

  • Data Input: The endpoint expects a JSON payload with a key called "data". This key should contain the data instances for which predictions are required.
  • Prediction Process: The dummy model processes the data and returns simplified predictions. In a real-world application, you would replace the dummy function with a call to your actual ML model.
  • Error Handling: If data is missing or an exception occurs, the endpoint responds with an appropriate error message and HTTP status code. The HTTP status code 400 indicates a client-side error (e.g., missing input), whereas 500 indicates a server-side error.

This design ensures that the API is robust and provides clear feedback to the client on data issues.
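Because this branching is plain Python, it can also be factored into a small helper that is easy to unit-test without the web layer. A sketch; the validate_and_predict name is illustrative and not part of the code above:

```python
def dummy_model_predict(data):
    # Same sample model as in the blueprint: predict the length of each item
    return [len(str(item)) for item in data]

def validate_and_predict(payload):
    """Mirror the endpoint's branching without Flask: return (body, status)."""
    if payload is None or 'data' not in payload:
        return {'error': 'Missing parameter: data'}, 400
    try:
        return {'prediction': dummy_model_predict(payload['data'])}, 200
    except Exception as e:
        return {'error': str(e)}, 500

print(validate_and_predict({'data': ['sample', 'test']}))  # -> ({'prediction': [6, 4]}, 200)
print(validate_and_predict({}))  # -> ({'error': 'Missing parameter: data'}, 400)
```

Keeping the validation and prediction logic separate from the route function also makes it straightforward to reuse the same checks in other endpoints.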

Testing and Validation

To verify that everything works properly, you can test the API using tools such as Postman or curl. Here is an example using curl:


# Command to test the prediction endpoint
curl -X POST http://localhost:5000/ml/predict \
  -H "Content-Type: application/json" \
  -d '{"data": ["sample", "test"]}'

# Expected response:
# {"prediction": [6, 4]}

This command sends a POST request containing a JSON payload. The API processes the data and returns an array where each element represents the "prediction" of the corresponding input.

Conclusion

This guide provided a comprehensive walkthrough for creating a custom ML API with Flask Blueprints. We covered how to define a blueprint, create an endpoint for ML predictions, register the blueprint with the main Flask application, and test the endpoint. By following this approach, you modularize your application, making it more maintainable and scalable. Additionally, key technical details such as error handling, data parsing, and response formatting were explained to ensure your API is robust and user-friendly.
