
ML Model API Using Flask and Python

Build an ML Model API using Flask & Python with our step-by-step guide. Learn expert tips, code examples, and easy deployment.


Initializing Your ML Model and Saving It for Deployment

 
  • Export your trained model: Use your ML library (such as TensorFlow, PyTorch, or scikit-learn) to export your trained model. For example, if you are using scikit-learn, you can save your model using joblib or pickle.
  • Example: Suppose you have a scikit-learn model named my_model; save it as:
    • import joblib
    • joblib.dump(my_model, 'model.pkl')  # writes the model to a file named model.pkl
  • Ensure compatibility: The saved file must be accessible from the directory your Flask application runs in.
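The export step above can be sketched end to end. This example trains a small scikit-learn classifier on the built-in Iris dataset and persists it as model.pkl; the dataset and LogisticRegression are illustrative stand-ins for your own model, and it assumes scikit-learn and joblib are installed.

```python
# Minimal sketch: train a small scikit-learn model and save it for the API.
# Assumes scikit-learn and joblib are installed (pip install scikit-learn).
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
import joblib

X, y = load_iris(return_X_y=True)
model = LogisticRegression(max_iter=200)
model.fit(X, y)

# Persist the trained model next to the Flask app so it can load 'model.pkl'
joblib.dump(model, 'model.pkl')
```

Anything picklable can be saved the same way; joblib is simply more efficient than plain pickle for models holding large NumPy arrays.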

Setting Up a Flask Application for Your ML API

 
  • Create the Flask app: Initialize your Flask application to host API endpoints that interact with the ML model.
  • Import necessary modules: These include Flask, any JSON routines for handling requests/responses, and your model loading libraries.

# Import Flask to create API endpoints
from flask import Flask, request, jsonify

# For model loading
import joblib

# Create the Flask app
app = Flask(__name__)

Loading the Model in Your Flask Application

 
  • Load the model once upon startup: To ensure performance, load your model during initialization rather than loading it on every request.
  • Error handling: Consider handling errors if the model file is missing or corrupted.

# Load the trained model
try:
    model = joblib.load('model.pkl')  # 'model.pkl' contains your saved model
except Exception as e:
    print("Failed to load model:", e)
    # Optionally, exit or set model to None and handle it in the routes
    model = None

Defining API Endpoints for Model Inference

 
  • Create a prediction endpoint: Define a route (e.g., '/predict') that accepts POST requests with JSON data.
  • Data extraction and preprocessing: Extract the input data from the request, preprocess it if necessary, then pass it to your model for prediction.
  • Return results: Format the model’s predictions as a JSON response.

# Define an API endpoint for predictions
@app.route('/predict', methods=['POST'])
def predict():
    # Ensure that the model has been loaded
    if model is None:
        return jsonify({"error": "Model not loaded"}), 500

    # Parse input JSON from the request
    data = request.get_json(force=True)

    # Extract features from the JSON; here we assume an array of features under 'input'
    features = data.get('input', None)
    if features is None:
        return jsonify({"error": "No input data provided"}), 400

    # Optionally preprocess the features here (e.g., scaling, transforming)

    try:
        # Make a prediction using the loaded model
        prediction = model.predict([features])
        # Convert the prediction to a list in case it is a NumPy array
        prediction_result = prediction.tolist()
    except Exception as e:
        return jsonify({"error": "Prediction failed", "message": str(e)}), 500

    # Return the prediction in JSON format
    return jsonify({"prediction": prediction_result})

Running and Testing Your Flask ML API

 
  • Start your Flask application: Run the built-in Flask development server by guarding app.run() behind a check that the module is being executed directly.
  • Batch processing: For greater efficiency, consider processing requests in batches if your model and use case permit.
  • Security considerations: Limit exposure by adding input validation, authentication, or a secure proxy in front of the app before using this API in production.

# Run the Flask app
if __name__ == '__main__':
    # Use debug mode for development; disable it in production
    app.run(host='0.0.0.0', port=5000, debug=True)
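Batching, mentioned in the bullets above, can be sketched as a second endpoint that accepts several feature rows per request and makes a single model call for all of them. The route name '/predict_batch', the 'inputs' key, and the DummyModel class are illustrative assumptions, not part of the app built above.

```python
# Sketch: a batch endpoint that accepts several feature rows per request,
# amortizing per-call overhead across inputs.
from flask import Flask, request, jsonify

app = Flask(__name__)

class DummyModel:                       # stand-in for the joblib-loaded model
    def predict(self, rows):
        return [sum(r) for r in rows]   # toy "prediction": sum of features

model = DummyModel()

@app.route('/predict_batch', methods=['POST'])
def predict_batch():
    data = request.get_json(force=True)
    rows = data.get('inputs')
    if not isinstance(rows, list) or not rows:
        return jsonify({"error": "inputs must be a non-empty array of rows"}), 400
    # One model call for the whole batch instead of one call per row
    predictions = model.predict(rows)
    return jsonify({"predictions": predictions})

client = app.test_client()
resp = client.post('/predict_batch', json={"inputs": [[1, 2], [3, 4]]})
print(resp.get_json())  # {'predictions': [3, 7]}
```

For real models, batching pays off most when prediction is vectorized (e.g., a single NumPy matrix passed to model.predict).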

Testing the API Using Tools Like cURL or Postman

 
  • Using cURL: Below is an example command (replace the array with your actual feature values):
    • curl -X POST -H "Content-Type: application/json" -d '{"input": [5.1, 3.5, 1.4, 0.2]}' http://localhost:5000/predict
  • Using Postman: Create a POST request, set the content type to application/json, and add the JSON body. Send the request to http://localhost:5000/predict to observe the prediction output.
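Besides cURL and Postman, you can exercise the endpoint programmatically with Flask's built-in test client, which needs no running server. The app and DummyModel below are self-contained stand-ins for the app and joblib-loaded model built earlier.

```python
# Sketch: testing a /predict-style endpoint with Flask's test client.
from flask import Flask, request, jsonify

app = Flask(__name__)

class DummyModel:                      # stand-in for the joblib-loaded model
    def predict(self, rows):
        return [sum(r) for r in rows]  # toy "prediction": sum of features

model = DummyModel()

@app.route('/predict', methods=['POST'])
def predict():
    data = request.get_json(force=True)
    features = data.get('input')
    if features is None:
        return jsonify({"error": "No input data provided"}), 400
    return jsonify({"prediction": model.predict([features])})

client = app.test_client()
resp = client.post('/predict', json={"input": [1.0, 2.0, 3.0]})
print(resp.status_code, resp.get_json())  # 200 {'prediction': [6.0]}
```

The same pattern works inside a pytest suite, so the API's request/response contract can be verified on every change.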

Additional Technical Considerations

 
  • Model Scaling: For production applications, consider using a production-grade server (like Gunicorn or uWSGI) to deploy your Flask app, which can handle multiple concurrent requests.
  • Logging: Integrate logging for monitoring API usage and errors—Python’s logging module can be useful for this.
  • Error Handling and Status Codes: Be comprehensive with error checks and return appropriate HTTP status codes (e.g., 400 for bad requests, 500 for server errors).
  • Input Validation: Validate your input thoroughly so that malformed data never reaches your model, where it could cause unexpected failures.
  • Security and Rate Limiting: Consider integrating security measures such as API keys or JWT tokens along with rate limiting to secure your API endpoints.
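The input-validation point above can be sketched as a small helper that checks the 'input' array before it reaches model.predict. The function name, the fixed feature count, and the error strings are illustrative assumptions for the JSON shape used in this guide.

```python
# Sketch: validate the 'input' array from the request body before inference.
def validate_features(features, expected_len=4):
    """Return an error message, or None if the features look valid."""
    if not isinstance(features, list):
        return "input must be a JSON array"
    if len(features) != expected_len:
        return f"expected {expected_len} features, got {len(features)}"
    # bool is a subclass of int in Python, so exclude it explicitly
    if not all(isinstance(x, (int, float)) and not isinstance(x, bool)
               for x in features):
        return "all features must be numbers"
    return None

print(validate_features([5.1, 3.5, 1.4, 0.2]))  # None (valid)
print(validate_features([5.1, "a", 1.4, 0.2]))  # all features must be numbers
```

In the endpoint, a non-None result would be returned as a 400 response so invalid payloads never trigger a model error.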
