
How to Build a Content Moderation Tool with Lovable?

Learn step-by-step how to build a scalable content moderation tool with Lovable, including setup, policies, automation, and performance best practices.

Book a free consultation

4.9 Clutch rating 🌟
600+ Happy partners
17+ Countries served
190+ Team members
Matt Graham, CEO of Rapid Developers

Book a call with an Expert

Starting a new venture? Need to upgrade your web app? RapidDev builds applications with your growth in mind.

Book a free No-Code consultation

How to Build a Content Moderation Tool with Lovable?

We’ll build a small content-moderation flow inside your Lovable app: a UI to submit user content, a server-side moderation endpoint that calls the OpenAI Moderation API (keeping the API key in Lovable Secrets), and an optional Supabase-backed store for flagged items. All changes are made in Lovable Chat Mode (edits/diffs), Preview, and Publish; no terminal is required. If you need DB migrations, create the Supabase table in the Supabase dashboard (outside Lovable), then store the service key in Lovable Secrets.

 

What we’re building / changing

 

Build a moderation tool with:

  • A frontend page where admins or reviewers can paste user content and submit it for moderation.
  • A server API route that calls the OpenAI Moderation endpoint using a secret API key stored in Lovable Secrets.
  • Optional Supabase persistence to save flagged items (configured via Lovable Secrets and the Supabase dashboard).

 

Lovable-native approach

 

How we’ll work inside Lovable: use Chat Mode to create and modify files (exact paths are given below), Preview to test end-to-end, and Publish to deploy changes. Store sensitive keys via the Lovable Cloud Secrets UI. If you need DB schema work, do it in the Supabase dashboard (outside Lovable). If you need advanced server control or migrations, use GitHub sync/export and run the CLI steps locally (explicitly labeled below).

 

Meta-prompts to paste into Lovable

 

  • Prompt A — Add server moderation API

    Goal: Create a server API that accepts POST {content} and returns moderation result; uses OPENAI_API_KEY from env.

    Files to create: src/pages/api/moderate.ts (or api/moderate.js if project uses JS).

    What to write inside the file (ask Lovable to implement): an API handler that reads req.body.content, calls the OpenAI Moderation API (POST https://api.openai.com/v1/moderations) with Authorization: Bearer process.env.OPENAI_API_KEY, returns JSON { flagged: boolean, categories, scores }, and handles 400/500 errors.

    Acceptance criteria (done when): POST /api/moderate returns a JSON moderation decision for a sample input in Preview and server logs (Preview) show API call succeeded.

    Secrets/setup steps: In Lovable Cloud Secrets UI add OPENAI_API_KEY.

  • Prompt B — Add frontend moderation page

    Goal: Add a simple admin page to submit text to /api/moderate and display results, with a “Save flagged” button that calls /api/moderate?save=true if Supabase is configured.

    Files to create/modify: create src/pages/moderation.tsx (or src/routes/moderation.jsx depending on your framework). Also update top-level navigation if present (e.g., update src/components/Nav.tsx to add a link to /moderation).

    Acceptance criteria (done when): In Preview you can open /moderation, paste text, click “Check”, see a clear flagged/not flagged result, and if flagged a “Save to Reports” button is shown.

    Secrets/setup steps: none new for frontend; it calls your API route so ensure OPENAI_API_KEY secret exists.

  • Prompt C — Optional: save flagged items to Supabase

    Goal: When moderation result is flagged and user clicks Save, server API inserts a row into a Supabase table.

    Files to modify: update src/pages/api/moderate.ts so that, when the query param save=true and result.flagged is true, it inserts a row into the Supabase table moderation_reports with columns (content text, categories json, scores json, created_at default now()).

    Acceptance criteria (done when): Clicking Save inserts a row visible in Supabase dashboard.

    Secrets/setup steps: In Lovable Cloud Secrets add SUPABASE_URL and SUPABASE_SERVICE_ROLE_KEY (or the anon key, depending on security; the service role key is recommended for server inserts). Create the table moderation_reports in the Supabase dashboard (outside Lovable).

  • Prompt D — Add basic UI protections and rate-limit note

    Goal: Add client-side guard to prevent accidental spam (e.g., disable submit for 2s after click) and show helpful error messages from API.

    Files to modify: src/pages/moderation.tsx (the same file) to include simple disable state and error handling.

    Acceptance criteria (done when): Submit button disables briefly and API errors show readable messages in Preview.
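Taken together, Prompts A–D ask Lovable to generate a server route at src/pages/api/moderate.ts. As a rough sketch of what a correct implementation might look like (plain JavaScript with a Next.js-style (req, res) handler assumed; summarizeModeration is an illustrative helper name, not something Lovable will necessarily produce):

```javascript
// Hypothetical sketch of the route described in Prompt A.
// Assumes Node 18+ (global fetch) and a Next.js-style API handler.

// Pure helper: reduce the raw moderation response to the shape the UI needs.
function summarizeModeration(mod) {
  const result = (mod.results && mod.results[0]) || {};
  return {
    flagged: Boolean(result.flagged),
    categories: result.categories || {},
    scores: result.category_scores || {},
  };
}

async function handler(req, res) {
  if (req.method !== 'POST') {
    return res.status(405).json({ error: 'Method not allowed' });
  }
  const { content } = req.body || {};
  if (!content || typeof content !== 'string') {
    return res.status(400).json({ error: 'content (string) is required' });
  }
  try {
    // The key comes from Lovable Secrets at runtime; never hardcode it.
    const resp = await fetch('https://api.openai.com/v1/moderations', {
      method: 'POST',
      headers: {
        Authorization: `Bearer ${process.env.OPENAI_API_KEY}`,
        'Content-Type': 'application/json',
      },
      body: JSON.stringify({ input: content }),
    });
    if (!resp.ok) {
      return res.status(502).json({ error: `Moderation API error ${resp.status}` });
    }
    return res.status(200).json(summarizeModeration(await resp.json()));
  } catch (err) {
    return res.status(500).json({ error: 'Moderation request failed' });
  }
}
```

Keeping the response-shaping logic in a small pure helper makes the route easy to unit-test without hitting the external API.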

 

How to verify in Lovable Preview

 

  • Open Preview, visit /moderation, paste a known violating phrase and click Check — you should see flagged:true and categories.
  • Toggle Save and click Save — then verify a row in Supabase dashboard if you enabled persistence.
  • Check logs in Preview runtime output to confirm external API calls used the secret (no key printed).
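If you prefer to hit the endpoint directly rather than through the /moderation page, a small Node 18+ script along these lines can work (BASE_URL and the helper names here are placeholders to adapt, not part of the Lovable output):

```javascript
// Quick smoke test for the moderation endpoint. Set BASE_URL to your
// Preview URL; the default below is just a local placeholder.
const BASE_URL = process.env.BASE_URL || 'http://localhost:3000';

// Build the request separately so its shape is easy to inspect and test.
function buildModerationRequest(content) {
  return {
    url: `${BASE_URL}/api/moderate`,
    options: {
      method: 'POST',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify({ content }),
    },
  };
}

async function checkModeration(content) {
  const { url, options } = buildModerationRequest(content);
  const resp = await fetch(url, options);
  if (!resp.ok) throw new Error(`HTTP ${resp.status}`);
  return resp.json(); // expected shape: { flagged, categories, scores }
}

// Usage (run manually against your Preview URL):
//   checkModeration('sample text').then(r => console.log('flagged:', r.flagged));
```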

 

How to Publish / re-publish

 

  • In Lovable use Publish to push changes to Lovable Cloud. Make sure your Secrets are set in Lovable Cloud before Publish so the server route has access to OPENAI_API_KEY and SUPABASE keys at runtime.
  • If syncing to GitHub, use Lovable’s GitHub sync/export. If you require DB migrations or package installs that need a terminal, export to GitHub and run those steps locally (labeled “outside Lovable”).

 

Common pitfalls in Lovable (and how to avoid them)

 

  • Forgetting Secrets: API calls fail in Preview/Publish if OPENAI_API_KEY or SUPABASE keys aren’t set in Lovable Secrets — add them via the Secrets UI before testing.
  • DB schema not created: Supabase table must be created in the Supabase dashboard; Lovable can’t run migrations for you.
  • Exposing keys client-side: Don’t call OpenAI directly from the browser — always proxy through the server API so keys remain in Secrets.
  • Dependency changes: If you add packages (e.g., @supabase/supabase-js), Preview will run the build — if you need local testing or migrations, export to GitHub and run npm install/build locally.

 

Validity bar

 

  • Accuracy: All instructions use Lovable-native actions (Chat Mode edits, Preview, Publish, Secrets UI, GitHub export). No terminal required for the core flow. Supabase dashboard changes are outside Lovable and explicitly stated.

Want to explore opportunities to work with us?

Connect with our team to unlock the full potential of no-code solutions with a no-commitment consultation!

Book a Free Consultation

How to add a Moderator Action Audit Log with Lovable

This prompt helps an AI assistant understand your setup and guide you in building the feature.

How to build a Bulk Moderation Processor with Lovable

This prompt helps an AI assistant understand your setup and guide you in building the feature.

How to add a per-moderator rate limiter (enforce + debug) to a Lovable content moderation tool

This prompt helps an AI assistant understand your setup and guide you in building the feature.


Best Practices for Building a Content Moderation Tool with AI Code Generators

 

Build a layered, auditable moderation pipeline that combines lightweight deterministic filters, a model-based classifier (a reliable moderation API such as OpenAI’s moderation endpoint, or a tuned classifier built on embeddings plus a small model), and a clear human-in-the-loop review flow. In Lovable, keep secrets (API keys) in the Secrets UI, develop and iterate inside Chat Mode and Preview, and use Publish or GitHub sync to deploy production services (or run heavier batch jobs off-platform). Always log decisions and allow appeals: don’t rely on a single model decision, and never embed raw keys in code.

 

Design the pipeline

 

Layer defenses so a cheap deterministic layer blocks obvious things (regex, profanity lists), a model layer classifies edge content, and a human review queue handles high-risk or borderline cases.

  • Deterministic filters: pattern checks, token / URL heuristics, rate limits.
  • Model classifier: use an official moderation API or your tuned classifier for categories (hate, sexual, violence, self-harm, spam).
  • Human-in-loop: flagged queue with context, history, and one-click actions.
  • Audit log: store original content, model outputs, reviewer decision, timestamps.
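The deterministic layer can be as simple as a few pattern checks that run before any model call. A minimal sketch (the word list, heuristics, and function name are illustrative, not a production ruleset):

```javascript
// Illustrative first-layer filter: cheap, deterministic checks.
// PROFANITY is a stand-in; use a maintained list in practice.
const PROFANITY = ['badword1', 'badword2'];
const URL_PATTERN = /https?:\/\/\S+/gi;

function deterministicCheck(content) {
  const lower = content.toLowerCase();
  const reasons = [];

  // Exact-substring profanity check (real systems also normalize leetspeak).
  if (PROFANITY.some((w) => lower.includes(w))) reasons.push('profanity');

  // Heuristic: several URLs in one message is a common spam signal.
  const urls = content.match(URL_PATTERN) || [];
  if (urls.length >= 3) reasons.push('url_spam');

  // Heuristic: a character repeated 10+ times suggests flooding.
  if (/(.)\1{9,}/.test(content)) reasons.push('repeated_chars');

  return { blocked: reasons.length > 0, reasons };
}
```

Content that passes this layer proceeds to the model classifier; content it blocks never incurs an API call.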

 

Lovable-specific engineering workflow

 

Develop and iterate inside Lovable using Chat Mode for code edits, Preview to test API calls, and Secrets UI to store API keys. Because Lovable has no terminal, do not expect to run migrations or background daemons there — export to GitHub or Publish to run on your production environment or external worker.

  • Secrets: put OpenAI / provider keys in Lovable Secrets and reference via process.env in code.
  • Preview: quick-test endpoints and UI flows in Preview; inspect logs there.
  • Publish / GitHub sync: push when you need server hosting, background workers, CI, or to run heavy retraining outside Lovable.

 

Prompt & model best practices

 

  • Prefer a moderation API (e.g., OpenAI’s moderation endpoint) for general safety instead of building from scratch.
  • Use structured outputs (scores, categories, rationale) — don’t rely on free-text only.
  • Tune thresholds per category for different actions: block, soft-block (warn), require review, allow.
  • Keep prompts minimal and test adversarial examples frequently.
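Per-category thresholds from the third bullet might look like this in code (every number here is a made-up example to tune against your own labeled data, not a recommendation):

```javascript
// Illustrative per-category thresholds mapping scores to actions.
// Higher-severity actions win when multiple categories fire.
const THRESHOLDS = {
  hate:     { block: 0.9,  review: 0.6, warn: 0.3 },
  violence: { block: 0.85, review: 0.5, warn: 0.25 },
  sexual:   { block: 0.95, review: 0.7, warn: 0.4 },
};

function decideAction(scores) {
  const rank = { allow: 0, warn: 1, review: 2, block: 3 };
  let action = 'allow';
  for (const [category, t] of Object.entries(THRESHOLDS)) {
    const score = scores[category] ?? 0;
    let proposed = 'allow';
    if (score >= t.block) proposed = 'block';
    else if (score >= t.review) proposed = 'review';
    else if (score >= t.warn) proposed = 'warn';
    // Keep the most severe action across all categories.
    if (rank[proposed] > rank[action]) action = proposed;
  }
  return action;
}
```

Keeping thresholds in a single table makes it easy to tune per category as you review false positives and negatives.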

 

Example: server endpoint calling OpenAI moderation + storing to Supabase

 

// Node/Express endpoint (deploy this outside Lovable, or adapt it to your
// framework's API routes; secrets are read from environment variables).
// Install: npm install express @supabase/supabase-js
// Requires Node 18+ (global fetch; node-fetch is no longer needed).
const express = require('express');
const { createClient } = require('@supabase/supabase-js');

const app = express();
app.use(express.json());

// Secrets set in Lovable Secrets UI, exported or set in your deployed env
const OPENAI_API_KEY = process.env.OPENAI_API_KEY;
const SUPABASE_URL = process.env.SUPABASE_URL;
const SUPABASE_SERVICE_ROLE_KEY = process.env.SUPABASE_SERVICE_ROLE_KEY;

const supabase = createClient(SUPABASE_URL, SUPABASE_SERVICE_ROLE_KEY);

app.post('/moderate', async (req, res) => {
  const { content, userId } = req.body;
  if (!content) return res.status(400).json({ error: 'content is required' });

  try {
    // Call the OpenAI moderation API
    const resp = await fetch('https://api.openai.com/v1/moderations', {
      method: 'POST',
      headers: {
        'Authorization': `Bearer ${OPENAI_API_KEY}`,
        'Content-Type': 'application/json'
      },
      body: JSON.stringify({ input: content, model: 'omni-moderation-latest' })
    });
    if (!resp.ok) {
      return res.status(502).json({ error: `Moderation API error ${resp.status}` });
    }
    const mod = await resp.json();

    // Decide: simple example that treats any flagged result as needing review
    const flagged = Boolean(mod.results?.[0]?.flagged);
    const action = flagged ? 'flag' : 'allow';

    // Store audit record in Supabase
    const { error } = await supabase.from('moderation_logs').insert({
      user_id: userId,
      content,
      moderation: mod,
      action
    });
    if (error) console.error('Supabase insert failed:', error.message);

    res.json({ action, moderation: mod });
  } catch (err) {
    res.status(500).json({ error: 'Moderation request failed' });
  }
});

app.listen(3000);

 

Operational and safety practices

 

  • Logging & retention: retain moderation logs for audits and appeals but redact PII where required by privacy rules.
  • Human workflows: expose full context to reviewers, show model rationale, and store reviewer decisions to retrain/tune.
  • Monitoring: track false positive/negative rates, category distribution, latency, and cost per moderation.
  • Rate limits & cost control: batch low-risk content or sample for model checks to control API cost.
  • Appeals & transparency: give users a clear appeals path and record outcomes.
  • Security: keep keys in Lovable Secrets; never commit secrets to GitHub. Use least privilege for service keys.
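The batching/sampling idea in the cost-control bullet above can be sketched as follows (the 0.5 risk cutoff and 10% sample rate are illustrative assumptions; the rng parameter is injectable so the policy is deterministic in tests):

```javascript
// Illustrative cost-control policy: always model-check risky content,
// sample only a fraction of low-risk content.
function shouldModelCheck(riskScore, sampleRate = 0.1, rng = Math.random) {
  if (riskScore >= 0.5) return true;   // risky content is always checked
  return rng() < sampleRate;           // low-risk content is sampled
}
```

The risk score could come from the deterministic layer (e.g., number of heuristic hits) or from user reputation, whichever signal you already have before paying for a model call.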

 

Common pitfalls

 

  • Assuming a single model is perfect — always add human review for high-stakes content.
  • Embedding secrets in code — use Lovable Secrets and validate they’re available in Preview/Publish.
  • Testing only on English — check multilingual performance and local norms.
  • Expecting Lovable to run long background jobs — export to GitHub or external workers for heavy processing.


Recognized by the best

Trusted by 600+ businesses globally

From startups to enterprises and everything in between, see for yourself our incredible impact.

RapidDev was an exceptional project management organization and the best development collaborators I've had the pleasure of working with.

They do complex work on extremely fast timelines and effectively manage the testing and pre-launch process to deliver the best possible product. I'm extremely impressed with their execution ability.

Arkady
CPO, Praction
Working with Matt was comparable to having another co-founder on the team, but without the commitment or cost.

He has a strategic mindset and willing to change the scope of the project in real time based on the needs of the client. A true strategic thought partner!

Donald Muir
Co-Founder, Arc
RapidDev are 10/10, excellent communicators - the best I've ever encountered in the tech dev space.

They always go the extra mile, they genuinely care, they respond quickly, they're flexible, adaptable and their enthusiasm is amazing.

Mat Westergreen-Thorne
Co-CEO, Grantify
RapidDev is an excellent developer for custom-code solutions.

We’ve had great success since launching the platform in November 2023. In a few months, we’ve gained over 1,000 new active users. We’ve also secured several dozen bookings on the platform and seen about 70% new user month-over-month growth since the launch.

Emmanuel Brown
Co-Founder, Church Real Estate Marketplace
Matt’s dedication to executing our vision and his commitment to the project deadline were impressive. 

This was such a specific project, and Matt really delivered. We worked with a really fast turnaround, and he always delivered. The site was a perfect prop for us!

Samantha Fekete
Production Manager, Media Production Company
The pSEO strategy executed by RapidDev is clearly driving meaningful results.

Working with RapidDev has delivered measurable, year-over-year growth. Comparing the same period, clicks increased by 129%, impressions grew by 196%, and average position improved by 14.6%. Most importantly, qualified contact form submissions rose 350%, excluding spam.

Appreciation as well to Matt Graham for championing the collaboration!

Michael W. Hammond
Principal Owner, OCD Tech

We put the rapid in RapidDev

Need a dedicated strategic tech and growth partner? Discover what RapidDev can do for your business! Book a call with our team to schedule a free, no-obligation consultation. We’ll discuss your project and provide a custom quote at no cost.