
Host ML Model on AWS Lambda

Host your ML model on AWS Lambda easily with our step-by-step guide. Scale your AI quickly with clear instructions.


Preparing the ML Model and Dependencies

 

  • Ensure your ML model (for example, a pre-trained TensorFlow or PyTorch model) is saved in an accessible format such as .h5 for Keras or a .pth file for PyTorch.
  • List the Python packages your model requires (e.g., tensorflow, torch, numpy, or scikit-learn) along with any other dependencies that are not included in AWS Lambda’s native environment.
 
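For example, a minimal requirements.txt for a Keras model might look like the fragment below (the versions are illustrative; pin the versions you actually trained with):

```text
tensorflow==2.13.0
numpy==1.24.3
```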

Packaging the Model and Dependencies for AWS Lambda

 

  • Create a project directory where you place your model file and a handler file. The handler file (for example, lambda_function.py) contains the code that AWS Lambda will execute.
  • AWS Lambda limits ZIP deployment packages to 50 MB zipped (direct upload) and 250 MB unzipped, including layers. If your libraries exceed these limits, consider using AWS Lambda Layers to separate heavy dependencies.
  • If your ML libraries are large, you may use a Docker image to package your Lambda function. AWS Lambda supports container images up to 10 GB, enabling you to bundle your model and dependencies without issues.
 
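The ZIP-based packaging flow can be sketched as follows. This is a self-contained sketch that writes a stub handler so the commands run as-is; in practice you would copy your real lambda_function.py and model file into the build directory and also run `pip install -r requirements.txt -t build/`. It uses Python's built-in zipfile module so the sketch works even without the `zip` utility installed:

```shell
# Stage the function code in a build directory (stub handler for illustration)
mkdir -p build
printf 'def lambda_handler(event, context):\n    return {"statusCode": 200}\n' > build/lambda_function.py

# Zip the contents so the handler file sits at the root of the archive
cd build && python3 -m zipfile -c ../deployment.zip lambda_function.py && cd ..
```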

Creating the Lambda Handler

 

  • Write a Lambda handler that loads the model during initialization and invokes it during each function call. To avoid re-loading the model on every invocation, load it outside your handler function.
  • Ensure your handler parses incoming events (data for predictions) and returns a response with the inference results.
 

# Example lambda_function.py

import json
import os
import numpy as np

# Import your framework library, e.g., TensorFlow (for a PyTorch model, import torch instead)
import tensorflow as tf

# Load the model once during cold start so subsequent warm invocations reuse it
MODEL_PATH = os.path.join(os.path.dirname(__file__), "model.h5")
model = tf.keras.models.load_model(MODEL_PATH)

def lambda_handler(event, context):
    # Assume the event carries JSON input data in its 'body' field
    try:
        input_data = json.loads(event["body"])
        # Convert the input into a format suitable for your model, for instance a numpy array
        prediction_input = np.array(input_data["features"])
        # Add a batch dimension if necessary
        prediction_input = np.expand_dims(prediction_input, axis=0)
        prediction = model.predict(prediction_input)
        # Return a JSON response with the prediction results
        return {
            "statusCode": 200,
            "body": json.dumps({"prediction": prediction.tolist()})
        }
    except Exception as e:
        return {
            "statusCode": 500,
            "body": json.dumps({"error": str(e)})
        }
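Before deploying, you can exercise the handler's request/response contract locally. The sketch below stands the model out with a hypothetical stub (so it runs without TensorFlow or a model file) and invokes the handler with an API Gateway-style event:

```python
import json

class _StubModel:
    """Hypothetical stand-in for the loaded Keras model."""
    def predict(self, x):
        return [[0.5]]

model = _StubModel()

def lambda_handler(event, context):
    # Same parsing and response shape as the handler above
    try:
        input_data = json.loads(event["body"])
        prediction = model.predict(input_data["features"])
        return {"statusCode": 200, "body": json.dumps({"prediction": prediction})}
    except Exception as e:
        return {"statusCode": 500, "body": json.dumps({"error": str(e)})}

# Simulate an API Gateway proxy event locally
event = {"body": json.dumps({"features": [1.0, 2.0, 3.0]})}
response = lambda_handler(event, None)
print(response["statusCode"])  # 200
```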

Creating an AWS Lambda Layer (Optional for Heavy Dependencies)

 

  • Build a Lambda Layer to include large libraries. To do this, create a directory structure similar to python/lib/python3.8/site-packages/ and install your packages into that folder using pip.
  • After installing, zip the contents (not the directory itself) and upload it as a Lambda Layer through the AWS Management Console or CLI.
  • Modify your Lambda function’s configuration to add the layer. This allows your function to access the packages without bundling them directly in your deployment package.
 

# Example steps locally:
# Create the directory structure
mkdir -p python/lib/python3.8/site-packages/

# Install required packages into the folder using pip
pip install tensorflow -t python/lib/python3.8/site-packages/

# Zip the directory contents
zip -r layer.zip python

Deploying to AWS Lambda

 

  • If using a ZIP archive, package your lambda_function.py, the model file, and any other necessary resources into a ZIP file. The file structure should have the handler file at the root.
  • Upload the ZIP file using the AWS Management Console, AWS CLI, or Infrastructure-as-Code tools like AWS SAM or CloudFormation.
  • If you are using a container image for enhanced customization, create a Dockerfile that copies your application code, installs dependencies, and sets the appropriate AWS Lambda base image. Then build and push the image to Amazon ECR (Elastic Container Registry).
 

# Example Dockerfile for AWS Lambda container image

FROM public.ecr.aws/lambda/python:3.8

# Copy requirements.txt and install packages
COPY requirements.txt .
RUN pip install -r requirements.txt

# Copy the model and lambda_function.py
COPY model.h5 .
COPY lambda_function.py .

# Command for AWS Lambda to run the handler function
CMD ["lambda_function.lambda_handler"]
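The uploads themselves can be done from the AWS CLI. The commands below are a sketch that assumes an existing AWS account and function; the function name `my-ml-inference`, the account ID, and the region are placeholders to replace with your own:

```shell
# ZIP-based deploy: push the archive to an existing function
aws lambda update-function-code \
  --function-name my-ml-inference \
  --zip-file fileb://deployment.zip

# Container-based deploy: build, tag, and push the image to ECR
docker build -t ml-inference .
docker tag ml-inference:latest 123456789012.dkr.ecr.us-east-1.amazonaws.com/ml-inference:latest
docker push 123456789012.dkr.ecr.us-east-1.amazonaws.com/ml-inference:latest
```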

Configuring and Testing the Lambda Function

 

  • Configure your Lambda function’s memory and timeout settings based on the inference time and model size. ML models might require more memory and longer processing, so adjust these parameters accordingly.
  • Set environment variables if your model requires specific configurations.
  • Test the function using sample events in the AWS Lambda console to ensure it loads the model correctly and returns valid predictions.
 
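A sample test event for the console's test feature, matching the handler above (note that `body` is a JSON string, exactly as API Gateway would deliver it):

```json
{
  "body": "{\"features\": [1.0, 2.0, 3.0]}"
}
```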

Best Practices and Considerations

 

  • Mitigate cold starts by caching the model in the global scope so that warm invocations do not reload it.
  • Monitor your Lambda function’s performance using CloudWatch logs and metrics to identify any memory or latency issues.
  • Keep an eye on Lambda's account-level concurrency limits and request an increase if inference traffic grows.
  • For very large models or complex ML operations, evaluate using AWS services like Amazon SageMaker that are specifically designed for hosting and scaling ML models.
 

