Learn how to integrate Bolt.new AI with Google Cloud AI Platform in 2025 using this clear step-by-step guide for seamless deployment.

To integrate a Bolt.new project with Google Cloud AI Platform, you treat Bolt as a normal full-stack runtime that can call Google Cloud services over standard APIs. There is no special “Bolt → Google Cloud” magic. You integrate using Google Cloud’s REST API or one of the official client SDKs (usually via service account credentials stored as environment variables inside Bolt). In practice, you create a Google Cloud service account, download its JSON key, store that JSON safely in Bolt.new environment variables, install the Google Cloud Node.js client libraries inside your Bolt project, and call Vertex AI (the modern name for Google’s AI Platform) endpoints just like any backend server would.
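Because the service account key lives in an environment variable, it pays to fail fast when the key is missing or malformed rather than getting an opaque auth error later. A minimal sketch of that check; the variable name GCP_SERVICE_ACCOUNT_JSON and the helper loadServiceAccount are our own conventions, not part of any Bolt or Google API:

```javascript
// Sketch: load and sanity-check the service account JSON stored in a
// Bolt.new environment variable before handing it to a Google client.
export function loadServiceAccount(raw) {
  if (!raw) {
    throw new Error('GCP_SERVICE_ACCOUNT_JSON is not set in Bolt env vars');
  }
  const creds = JSON.parse(raw);
  // A valid service account key always carries these fields.
  for (const field of ['type', 'project_id', 'private_key', 'client_email']) {
    if (!creds[field]) throw new Error(`Service account JSON missing "${field}"`);
  }
  if (creds.type !== 'service_account') {
    throw new Error(`Expected key type "service_account", got "${creds.type}"`);
  }
  return creds;
}

// Typical call site: loadServiceAccount(process.env.GCP_SERVICE_ACCOUNT_JSON)
```

Throwing on startup makes a misconfigured Bolt environment variable obvious immediately instead of surfacing as a cryptic 401 from Vertex AI.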
You wire Bolt.new to Google Cloud AI Platform (Vertex AI) using service account JSON credentials and Google Cloud’s REST or Node.js SDK. Bolt.new can run Node.js code on its backend, which lets you install @google-cloud/aiplatform and authenticate with environment variables. From there, you can call model prediction endpoints, embeddings, fine-tuned models, or custom model deployments exactly as you would from any server.
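If you prefer the REST route over the Node.js SDK, Vertex AI prediction endpoints follow a fixed regional URL pattern that you can call with plain fetch plus an OAuth access token. A hedged sketch, assuming Node 18+ (built-in fetch) and an access token obtained elsewhere (for example via google-auth-library); predictUrl and predictViaRest are illustrative helpers of our own, not library functions:

```javascript
// Sketch of the REST alternative: build the regional Vertex AI predict URL
// and POST instances to it with a Bearer token.
export function predictUrl(projectId, location, endpointId) {
  // Deployed endpoints are served from the regional API host
  // ({location}-aiplatform.googleapis.com), not a single global host.
  return `https://${location}-aiplatform.googleapis.com/v1/projects/${projectId}` +
    `/locations/${location}/endpoints/${endpointId}:predict`;
}

export async function predictViaRest(url, accessToken, instances) {
  const res = await fetch(url, {
    method: 'POST',
    headers: {
      Authorization: `Bearer ${accessToken}`,
      'Content-Type': 'application/json',
    },
    body: JSON.stringify({ instances }),
  });
  if (!res.ok) throw new Error(`Vertex AI returned ${res.status}`);
  return res.json();
}
```

This is useful when you want zero gRPC dependencies in the Bolt project; the trade-off is that you manage token refresh and error handling yourself.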
There is no Bolt-specific connector or marketplace plugin for Google Cloud; service account credentials plus the REST API or an official SDK is the integration pathway.
Below is the practical, real-world flow you use in Bolt.new, written so a junior developer can implement it.
npm install @google-cloud/aiplatform
// backend/vertex.js
import aiplatform from '@google-cloud/aiplatform';

const {PredictionServiceClient} = aiplatform.v1;
const {helpers} = aiplatform; // Converts plain JS values to protobuf Values

// Load the service account JSON stored in Bolt.new environment variables
const credentials = JSON.parse(process.env.GCP_SERVICE_ACCOUNT_JSON);

const projectId = 'YOUR_PROJECT_ID';   // Replace with real project ID
const location = 'us-central1';        // Or your region
const endpointId = 'YOUR_ENDPOINT_ID'; // Deployed model endpoint

// Vertex AI is regional, so the client must target the region's API host
const client = new PredictionServiceClient({
  credentials,
  scopes: ['https://www.googleapis.com/auth/cloud-platform'], // Required for Vertex AI
  apiEndpoint: `${location}-aiplatform.googleapis.com`,
});

// Example: call a specific Vertex AI endpoint
export async function runPrediction(instance) {
  const endpoint = `projects/${projectId}/locations/${location}/endpoints/${endpointId}`;
  const request = {
    endpoint,
    // instance must match your model schema; helpers.toValue wraps it in the
    // protobuf Value format the predict API expects
    instances: [helpers.toValue(instance)],
  };
  const [response] = await client.predict(request);
  return response;
}
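A usage sketch: expose runPrediction behind a small backend route so the browser never touches the credentials. The /api/predict path and the dependency-injection pattern below are our own choices, using only Node's built-in http module; injecting the predict function lets you unit-test the route without calling Google:

```javascript
// Sketch: wrap a predict function (e.g. runPrediction from backend/vertex.js)
// in a minimal HTTP route that Bolt's frontend can POST to.
import http from 'node:http';

export function createPredictServer(predict) {
  return http.createServer((req, res) => {
    if (req.method !== 'POST' || req.url !== '/api/predict') {
      res.writeHead(404).end();
      return;
    }
    let body = '';
    req.on('data', (chunk) => (body += chunk));
    req.on('end', async () => {
      try {
        const { instance } = JSON.parse(body);
        const prediction = await predict(instance);
        res.writeHead(200, { 'Content-Type': 'application/json' });
        res.end(JSON.stringify(prediction));
      } catch (err) {
        res.writeHead(500, { 'Content-Type': 'application/json' });
        res.end(JSON.stringify({ error: String(err) }));
      }
    });
  });
}

// Real wiring would be: createPredictServer(runPrediction).listen(3000);
```

Keeping the Vertex AI call server-side like this is the whole point of the setup: the service account JSON stays in Bolt's environment variables and never ships to the client.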
The real integration path is: give Bolt.new valid Google Cloud credentials → install Vertex AI Node.js client library → call prediction endpoints inside Bolt’s backend code. This is the same pattern you’d use in any Node-based backend; Bolt.new is simply a convenient workspace where this can execute immediately.