Deploy your ML model with Docker using our easy step-by-step guide. Achieve scalable, efficient deployment with expert tips.

First, create a Python script (app.py) that loads your trained ML model (which could be serialized using pickle, joblib, or another format) and exposes an API endpoint that accepts data and returns predictions. In this example, we use Python with the Flask framework to serve the model.
```python
# Import essential libraries
from flask import Flask, request, jsonify
import pickle  # For model deserialization

app = Flask(__name__)

# Load the pre-trained ML model from disk
with open('model.pkl', 'rb') as file:
    model = pickle.load(file)

# Define an API endpoint for predictions
@app.route('/predict', methods=['POST'])
def predict():
    data = request.get_json()  # Expecting JSON data
    features = data.get('features')
    prediction = model.predict([features])
    return jsonify({'prediction': prediction.tolist()})

# Run the server on the specified host and port
if __name__ == '__main__':
    app.run(host='0.0.0.0', port=5000)
```
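The script above assumes a model.pkl file already exists. If you do not have one yet, here is a minimal sketch of producing it, assuming a scikit-learn classifier; the toy dataset is purely illustrative:

```python
# Illustrative sketch: train a tiny scikit-learn model and serialize it
# with pickle so app.py can load it as model.pkl.
import pickle
from sklearn.linear_model import LogisticRegression

# Toy training data (three features per sample, two classes)
X = [[0.0, 0.0, 0.0], [1.0, 1.0, 1.0], [0.1, 0.2, 0.1], [0.9, 1.1, 0.8]]
y = [0, 1, 0, 1]

model = LogisticRegression()
model.fit(X, y)

# Serialize the trained model to disk
with open('model.pkl', 'wb') as f:
    pickle.dump(model, f)
```

Any object with a predict method that pickle can serialize works the same way here.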
Next, write a Dockerfile that copies your application code and model (model.pkl) into the container and installs the dependencies listed in a requirements.txt file.
```dockerfile
# Use an official lightweight Python image as the base image
FROM python:3.9-slim

# Set the working directory inside the container
WORKDIR /app

# Copy the requirements file to the container
COPY requirements.txt /app/

# Install the Python dependencies
RUN pip install --upgrade pip && pip install -r requirements.txt

# Copy the rest of the application code and model
COPY . /app

# Expose the container's port where the application runs
EXPOSE 5000

# Command to run the application
CMD ["python", "app.py"]
```
The EXPOSE instruction informs Docker that the container listens on the specified network port at runtime.
requirements.txt:

```text
# Flask for the web API
Flask==2.1.0
# Any additional libraries required for your ML model
numpy==1.21.0
scikit-learn==1.0.2
```
```shell
# Navigate to your project directory containing the Dockerfile
cd path/to/your/project

# Build the Docker image with a tag, for example, ml-model-deployment
docker build -t ml-model-deployment .

# Run the container, mapping port 5000 of the container to port 5000 on your host machine
docker run -p 5000:5000 ml-model-deployment
```
Once the container is running, the prediction endpoint is available at http://localhost:5000/predict.
When updating your model, version your model files (e.g., model_v1.pkl) and update your code to refer to the correct version.
```dockerfile
# Example snippet in your Dockerfile to avoid cache issues:
COPY requirements.txt /app/
RUN pip install --upgrade pip && pip install -r requirements.txt

# Copy the model file after dependency installation
COPY model_v1.pkl /app/model.pkl
COPY app.py /app/
```
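Another way to decouple the model version from the application code is to read the model path from an environment variable. A hypothetical pattern (MODEL_PATH is an assumed variable name, not part of the original setup):

```python
# Hypothetical pattern: resolve the model path from an environment variable,
# so switching model versions only requires changing the Dockerfile or run command.
import os
import pickle

def load_model(path=None):
    """Load a pickled model from MODEL_PATH, falling back to model.pkl."""
    path = path or os.environ.get('MODEL_PATH', 'model.pkl')
    with open(path, 'rb') as f:
        return pickle.load(f)
```

With this in place, the Dockerfile (or docker run -e MODEL_PATH=...) can point at a new model file without renaming anything in the code.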
```shell
# Test the prediction API via curl
curl -X POST http://localhost:5000/predict -H "Content-Type: application/json" -d '{"features": [1.2, 3.4, 5.6]}'
```
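The same request can be issued from Python using only the standard library. A sketch, using the same illustrative feature vector as the curl example:

```python
# Minimal client for the /predict endpoint using only the standard library.
import json
import urllib.request

def build_request(features, url='http://localhost:5000/predict'):
    """Build a POST request carrying the feature vector as JSON."""
    payload = json.dumps({'features': features}).encode('utf-8')
    return urllib.request.Request(
        url, data=payload, headers={'Content-Type': 'application/json'}
    )

def predict(features):
    """Send the request and decode the JSON response from the server."""
    with urllib.request.urlopen(build_request(features)) as resp:
        return json.loads(resp.read())['prediction']

# Usage (with the container running): predict([1.2, 3.4, 5.6])
```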
You can inspect a running container's output with docker logs [container_id]. In production environments, consider setting resource limits (e.g., --memory and --cpus) to prevent resource exhaustion. For better security, run the application as a non-root user with the USER directive.
```dockerfile
# Example of adding a non-root user
RUN useradd -ms /bin/bash myuser
USER myuser
```
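If you run the container under an orchestrator, a liveness probe endpoint is often useful. A hypothetical /health route added to the same Flask app (a sketch, not part of the original app.py):

```python
# Hypothetical health-check endpoint, usable with Docker's HEALTHCHECK
# instruction or an orchestrator's liveness probe.
from flask import Flask, jsonify

app = Flask(__name__)

@app.route('/health', methods=['GET'])
def health():
    # Report a simple liveness signal; this could be extended to verify
    # that the model loaded successfully.
    return jsonify({'status': 'ok'})
```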