
Bolt.new AI and Google Cloud AI Platform integration: Step-by-Step Guide 2025

Learn how to integrate Bolt.new AI with Google Cloud AI Platform in 2025 using this clear step-by-step guide for seamless deployment.

Matt Graham, CEO of Rapid Developers


How to integrate Bolt.new AI with Google Cloud AI Platform?

To integrate a Bolt.new project with Google Cloud AI Platform, you treat Bolt as a normal full-stack runtime that can call Google Cloud services over standard APIs. There is no special “Bolt → Google Cloud” magic. You integrate using Google Cloud’s REST API or one of the official client SDKs (usually via service account credentials stored as environment variables inside Bolt). In practice, you create a Google Cloud service account, download its JSON key, store that JSON safely in Bolt.new environment variables, install the Google Cloud Node.js client libraries inside your Bolt project, and call Vertex AI (the modern name for Google’s AI Platform) endpoints just like any backend server would.

 

What You Actually Do

 

You wire Bolt.new to Google Cloud AI Platform (Vertex AI) using service account JSON credentials and Google Cloud’s REST or Node.js SDK. Bolt.new can run Node.js code on its backend, which lets you install @google-cloud/aiplatform and authenticate with environment variables. From there, you can call model prediction endpoints, embeddings, fine-tuned models, or custom model deployments exactly as you would from any server.

  • You create a Google Cloud project with Vertex AI enabled.
  • You create a service account with the proper Vertex AI permissions.
  • You download the service account JSON key.
  • You store the JSON in Bolt.new environment variables (usually as a single JSON string).
  • Inside Bolt code, you load the credentials and call Google Cloud AI Platform using Node.js SDK or REST.

This is the standard integration path: there is no Bolt-specific connector for Google Cloud, just ordinary credentials and API calls.
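
The bullet points above can be sketched as a small, defensive helper. This is a minimal sketch, assuming the env var is named GCP_SERVICE_ACCOUNT_JSON and the helper is called loadServiceAccount; both names are illustrative, not part of Bolt.new or Google Cloud:

```javascript
// Minimal sketch of the credential-loading step (names are assumptions).
// Bolt stores the service account JSON as a single string, so the first
// thing the backend does is parse it and sanity-check the result.
function loadServiceAccount(env = process.env) {
  const raw = env.GCP_SERVICE_ACCOUNT_JSON; // assumed env var name
  if (!raw) {
    throw new Error('GCP_SERVICE_ACCOUNT_JSON is not set');
  }
  const creds = JSON.parse(raw);
  // A valid key file always carries these fields; fail fast if one is missing.
  for (const field of ['client_email', 'private_key', 'project_id']) {
    if (!creds[field]) {
      throw new Error(`Service account JSON is missing "${field}"`);
    }
  }
  return creds;
}
```

Failing fast here means a mispasted env var surfaces as a clear startup error instead of a confusing authentication failure deep inside a Vertex AI call.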

 

Step-by-Step Deep Explanation

 

Below is the practical, real-world flow you use in Bolt.new, written so a junior developer can implement it.

  • Create a project in Google Cloud. Enable the Vertex AI API. Vertex AI is Google’s ML/AI platform, replacing the older “AI Platform” branding.
  • Create a service account. Go to IAM & Admin → Service Accounts and create a new one with roles such as Vertex AI User (required) and Storage Object Viewer if your models rely on Cloud Storage.
  • Generate a service account key (JSON). Download the JSON file. It contains a private key and must be kept secret.
  • In Bolt.new, store the credentials as an environment variable. In Bolt’s environment variable UI, create a variable such as GCP_SERVICE_ACCOUNT_JSON containing the entire JSON file as a string.
  • Install Google’s Vertex AI SDK. In your Bolt project’s backend, run:
npm install @google-cloud/aiplatform
  • Load the credentials in Node.js. Your server code can read the JSON from the environment variable, parse it, and pass it to Google’s client library, either directly as credentials or via GoogleAuth from google-auth-library.
// backend/vertex.js

import {PredictionServiceClient, helpers} from '@google-cloud/aiplatform';

const credentials = JSON.parse(process.env.GCP_SERVICE_ACCOUNT_JSON); // Load JSON from Bolt env

const projectId = 'YOUR_PROJECT_ID';   // Replace with your real project ID
const location = 'us-central1';        // Or your region

const client = new PredictionServiceClient({
  credentials: credentials,                             // Parsed service account from above
  apiEndpoint: `${location}-aiplatform.googleapis.com`, // Vertex AI endpoints are region-scoped
});

// Example: call a specific Vertex AI endpoint
export async function runPrediction(instance) {
  const endpointId = 'YOUR_ENDPOINT_ID';      // Deployed model endpoint

  const endpoint = `projects/${projectId}/locations/${location}/endpoints/${endpointId}`;

  const request = {
    endpoint: endpoint,
    // predict() expects protobuf Values, not plain JSON; helpers.toValue converts.
    // The instance shape itself must still match your model's input schema.
    instances: [helpers.toValue(instance)],
  };

  const [response] = await client.predict(request);
  return response;
}
  • Call the backend from your Bolt UI, or call runPrediction from other backend routes.
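
Wiring the UI to runPrediction is easiest through a thin backend route. The sketch below assumes an Express-style req/res handler; makePredictHandler and its validation rules are hypothetical names, written as a factory so the prediction function can be injected and the handler tested without real Google Cloud credentials:

```javascript
// Sketch of a backend route that fronts the Vertex AI call (names are assumptions).
// The factory takes the prediction function as a parameter, e.g. runPrediction
// from backend/vertex.js.
function makePredictHandler(runPrediction) {
  return async function predictHandler(req, res) {
    const instance = req.body;
    // Reject anything that is not a plain JSON object before it reaches Vertex AI.
    if (typeof instance !== 'object' || instance === null || Array.isArray(instance)) {
      return res.status(400).json({error: 'instance must be a JSON object'});
    }
    try {
      const prediction = await runPrediction(instance);
      return res.json({prediction});
    } catch (err) {
      // Never leak raw Google Cloud errors; they can include resource names.
      return res.status(502).json({error: 'prediction failed'});
    }
  };
}
```

The UI then POSTs to this route instead of talking to Google Cloud directly, so credentials never leave the backend.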

 

Key Concepts (Explained Simply)

 

  • Service account: a special Google Cloud identity used by servers (not humans) to authenticate.
  • JSON key: a file containing a private key and account metadata. This is your “password” for the service account.
  • Environment variable: a private value stored inside Bolt’s runtime. This prevents you from hardcoding secrets in source code.
  • Vertex AI Endpoint: a deployed model with a stable REST URL. You send prediction requests to it.
  • REST vs SDK: Vertex AI offers both a raw HTTP REST API and Node.js client libraries. The SDK handles authentication and request formatting for you, so it is the safer choice for junior developers.
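
To make the endpoint concept concrete, here is the raw REST URL the SDK's predict() call maps to. vertexPredictUrl is a hypothetical helper shown only for comparison; in practice the SDK builds this for you:

```javascript
// Hypothetical helper: the raw REST predict URL behind the SDK call.
// Vertex AI uses a region-scoped host plus the endpoint resource name.
function vertexPredictUrl(projectId, location, endpointId) {
  const host = `${location}-aiplatform.googleapis.com`;
  const name = `projects/${projectId}/locations/${location}/endpoints/${endpointId}`;
  return `https://${host}/v1/${name}:predict`;
}
```

If you ever switch from the SDK to plain fetch calls, this is the URL you would POST prediction requests to, with an OAuth access token in the Authorization header.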

 

Important Security & Practical Notes

 

  • Never commit the JSON key into your repository. Only store it in Bolt.new environment variables.
  • Ensure the service account has minimal permissions — usually Vertex AI User is enough.
  • Test with logging enabled to inspect responses from Vertex AI.
  • Never call Google Cloud directly from the UI. Route every request through backend endpoints that validate and sanitize inputs.
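
One practical consequence of the logging note above: never print the parsed service account as-is, or the private key ends up in your logs. A minimal sketch, assuming a hypothetical redactCredentials helper:

```javascript
// Hypothetical helper: strip the secret before credentials ever hit a log line.
function redactCredentials(creds) {
  const safe = {...creds};
  if ('private_key' in safe) {
    safe.private_key = '[REDACTED]'; // keep the object shape, drop the secret
  }
  return safe;
}
```

Use it anywhere you debug-log configuration, e.g. console.log(redactCredentials(credentials)), so logs stay useful without leaking the key.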

 

Summary

 

The real integration path is: give Bolt.new valid Google Cloud credentials → install Vertex AI Node.js client library → call prediction endpoints inside Bolt’s backend code. This is the same pattern you’d use in any Node-based backend; Bolt.new is simply a convenient workspace where this can execute immediately.
