CSRF and XSS in AI-built web apps: fix patterns fast
CSRF and XSS in AI-built web apps are common when UI code is auto-generated. Learn vulnerable patterns and a checklist to patch holes without rewriting screens.

Why CSRF and XSS show up in AI-built apps
CSRF and XSS show up often in AI-built web apps for a simple reason: many prototypes are built to look right and demo well, not to hold up under real traffic. A login page that “works locally” can still be unsafe once it’s deployed, shared, and used by people who aren’t you.
What “it works locally” hides is the gap between a private dev setup and a live app. Locally, you rarely deal with real cookies across multiple tabs, third-party content, user-generated text at scale, or browser extensions that change what pages do. In production, those things appear fast, and one unsafe spot can become a doorway into many pages.
AI-generated prototypes also tend to skip security basics because prompts focus on features. The model often picks the shortest path: render user content directly, store tokens in unsafe places, or send state-changing requests with no strong protection. It looks fine in a demo, but it leaves gaps like missing CSRF tokens or HTML getting injected into the page.
Small patterns can have big blast radius. One component that uses dangerouslySetInnerHTML to render “formatted notes” can turn one user’s input into a script that runs for every viewer. One “Delete” button that calls an API without CSRF checks can let an attacker trigger actions using a victim’s logged-in session.
“Without rewriting the UI” is realistic, with boundaries. You usually don’t need to redesign screens or rebuild components from scratch. You add protection under the surface: safer rendering defaults, consistent request wrappers, and server-side checks that reject unsafe requests. The UI can look identical while the app becomes much harder to abuse.
If you inherited an AI-built codebase and you’re seeing these patterns, a practical first move is a quick audit to find the few high-impact holes that make everything else risky, then patch them without changing how the app looks to users. FixMyMess typically starts exactly there: codebase diagnosis first, then targeted repairs.
CSRF vs XSS in plain language
CSRF and XSS get mixed up because both can lead to “someone did something in my app that I didn’t expect.” The difference is where the attacker’s control happens.
CSRF (Cross-Site Request Forgery) in one sentence: it tricks a logged-in user’s browser into sending a real request your server will accept.
XSS (Cross-Site Scripting) in one sentence: it lets attacker-controlled code run inside your site in the user’s browser.
A quick way to remember it: CSRF abuses your user’s login (usually cookies). XSS abuses your page.
How they chain together
They’re bad alone, but worse together. A common chain looks like this:
- An XSS bug runs in your app and reads something sensitive (like a CSRF token in the page or a JWT in localStorage).
- The attacker uses that secret to send authenticated requests that look legitimate.
- Those requests perform actions (change email, add an admin, transfer credits) without the user noticing.
That’s why “we’ll add CSRF later” often fails if an XSS hole already exists.
Simple signs you might have each issue
If you use cookie-based auth, CSRF risk is likely when important actions work with a single POST and no per-request token, or when your API accepts requests without checking the Origin or Referer header. Risk also climbs with long-lived sessions (“remember me”) and apps that rely on “it’s a private dashboard” as protection.
XSS risk is likely when you render user content with dangerouslySetInnerHTML (or similar), insert raw HTML from a rich text editor or Markdown without sanitizing, build HTML strings and set them via innerHTML, or echo user input back into the page (comments, names, search terms).
If you inherited an AI-generated prototype, these two issues show up a lot in quick builds. The fastest path is to confirm which risk is real in your app, then patch it without changing the UI layout.
Vulnerable XSS patterns to search for first
XSS usually sneaks in when an AI-built UI takes a shortcut to “make it look right” and ends up treating text as HTML. Start by finding every place the app turns user-controlled content into markup.
The fastest red flags
Generated code often includes a few high-impact shortcuts:
- dangerouslySetInnerHTML used for quick formatting, highlighting, or inserting rich snippets.
- User content rendered as HTML (comments, bios, support tickets, “about me”, markdown previews).
- HTML built by string concatenation (template strings that contain <div>/<a>/<img>).
- Untrusted data placed into attributes (especially href, src, style, or data-*) or inline event handlers like onclick="...".
- Copy-pasted “sanitize” helpers that only remove a few tags, use regex, or only escape < and >.
A realistic failure: a dashboard shows a “Release notes” field pulled from the database. Someone pastes <img src=x onerror=alert(1)> and suddenly every admin who opens the page runs that code. It can also steal session tokens, change UI text, or silently submit actions.
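Those copy-pasted “sanitize” helpers fail in a predictable way. A sketch (naiveSanitize is a made-up stand-in for the regex helpers that show up in generated code, not a real library):

```javascript
// A hypothetical "sanitizer" of the kind often found in generated code:
// it strips <script> tags with a regex and nothing else.
function naiveSanitize(input) {
  return input.replace(/<script\b[^>]*>[\s\S]*?<\/script>/gi, '');
}

// It blocks the obvious payload...
naiveSanitize('<script>alert(1)</script>'); // → ''

// ...but an event-handler payload sails straight through,
// because it never contains a <script> tag at all.
naiveSanitize('<img src=x onerror=alert(1)>'); // unchanged, still executable
```

This is why regex-based helpers should be treated as unsanitized paths until proven otherwise.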
What to search for in code
Use simple searches first. The goal is to build an inventory, not fix it yet.
- dangerouslySetInnerHTML
- innerHTML =
- insertAdjacentHTML
- onclick=" / onerror=" / onload=" (inline event handlers)
- href={user
- `<div` and `</` (inside template strings)
If you inherited a prototype from tools like v0, Replit, or Cursor, these shortcuts show up a lot. Treat any homegrown sanitize function as suspicious until proven safe. FixMyMess often sees “sanitizers” that miss SVG payloads, event attributes, or javascript: URLs.
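The searches above can be scripted if the codebase is large. A rough sketch (the pattern list is illustrative, not exhaustive, and findRiskyLines is a hypothetical helper; pair it with a directory walk over your src tree):

```javascript
// Inventory script: reports risky render patterns, fixes nothing.
const RISKY_PATTERNS = [
  /dangerouslySetInnerHTML/,
  /\.innerHTML\s*=/,
  /insertAdjacentHTML/,
  /\bon(error|load|click)\s*=/i, // inline handlers; expect some benign JSX hits
];

function findRiskyLines(source) {
  return source.split('\n').flatMap((line, i) =>
    RISKY_PATTERNS.filter((p) => p.test(line)).map((p) => ({
      line: i + 1,            // 1-based line number for the report
      pattern: String(p),
      text: line.trim(),
    }))
  );
}
```

Expect false positives; the goal is a reviewable list, not an automated verdict.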
Vulnerable CSRF patterns to search for first
CSRF shows up when your app trusts a browser cookie too much. Many AI-generated prototypes “just work” in dev because you’re always logged in, but the same design becomes risky in production.
Start by hunting for any request that changes data but doesn’t prove the user meant to do it. The fastest way is to look at server routes and frontend API calls side by side.
High-risk patterns worth checking today
Common high-risk CSRF patterns include:
- POST/PUT/DELETE requests that rely on a session cookie but don’t send a CSRF token (or don’t validate one on the server).
- Auth cookies with no clear SameSite policy (or SameSite=None without a strong reason).
- GET endpoints that change state (for example: /api/deleteUser?id=123 or /api/toggle?id=...).
- “We enabled CORS, so we are safe” thinking. CORS controls which sites can read responses, not which sites can send a request.
- Multiple subdomains that share cookies in a confusing way, especially when the cookie Domain is too wide.
A quick example: a prototype admin dashboard has a button that calls GET /api/approveInvoice?id=42. If an attacker can get an admin to load a page that triggers that URL, the browser may send the admin cookie automatically.
Cookie and subdomain gotchas
If your app uses app.example.com and api.example.com, be explicit about cookie scope and what is allowed to send authenticated requests. Wide cookie domains plus missing CSRF checks is a common “it worked locally” trap.
If you want a fast audit, FixMyMess can flag these CSRF patterns quickly (including cookie scope issues) before you touch the UI.
Patch CSRF without changing your UI layout
Most CSRF fixes live in cookies, headers, and server middleware. That’s why you can usually lock down CSRF without redesigning a single page.
Pick a CSRF strategy
Two common approaches work well with AI-generated frontends:
- Synchronizer token: the server creates a token, stores it in the session, and requires it on every state-changing request.
- Double-submit cookie: the server sets a CSRF cookie and requires the same value in a request header (or body). No server session needed.
If your app already uses cookies for auth, double-submit cookie plus a header is often the least disruptive. The UI stays the same. You add one header in your API client.
Backend checks that do not touch the UI
Put CSRF enforcement where requests enter your backend: middleware, a controller base class, or a single request guard. Enforce it for unsafe methods (POST, PUT, PATCH, DELETE) and only for browser cookie-authenticated traffic.
A practical pattern is:
- Set a CSRF cookie on initial page load or after login.
- Require X-CSRF-Token on unsafe requests.
- Compare header token to cookie token (reject if missing or mismatched).
- Skip CSRF checks for endpoints that use Authorization headers (API keys, bearer tokens) instead of cookies.
- Log rejections with route and origin so you catch accidental breaks fast.
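That checklist can be sketched as a framework-agnostic guard function (names like csrfGuard, X-CSRF-Token, and csrf_token are assumptions to match to your app; in Express this logic would live in middleware that runs before your routes):

```javascript
const UNSAFE_METHODS = new Set(['POST', 'PUT', 'PATCH', 'DELETE']);

function csrfGuard({ method, headers, cookies }) {
  // Header-token traffic (API keys, bearer tokens) is exempt: no cookie, no CSRF risk.
  if (headers['authorization']) return { ok: true };

  // Safe methods (GET, HEAD, OPTIONS) pass through.
  if (!UNSAFE_METHODS.has(method.toUpperCase())) return { ok: true };

  // Double-submit check: header token must match the CSRF cookie.
  // A production guard should compare with crypto.timingSafeEqual.
  const headerToken = headers['x-csrf-token'];
  const cookieToken = cookies['csrf_token'];
  if (!headerToken || !cookieToken || headerToken !== cookieToken) {
    return { ok: false, status: 403, reason: 'CSRF token missing or mismatched' };
  }
  return { ok: true };
}
```

Log the `reason` along with route and origin on every rejection so accidental breaks surface fast.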
Confirm cookie settings while you’re there. Use SameSite=Lax by default, Secure in production (HTTPS), and HttpOnly for auth cookies. The CSRF cookie is the one exception: leave HttpOnly off if client-side JavaScript must read it to copy the value into a header.
If you have APIs used by both browser code and server code (SSR, cron jobs, webhooks), separate them: cookie-based routes get CSRF checks; token-based routes do not. Teams often ask FixMyMess to add this split cleanly when an AI-built prototype starts failing in production.
Patch XSS without changing your UI layout
XSS fixes don’t have to mean a redesign. Most of the time, you can keep the same components and routes and only change how text and HTML get rendered. This is common in AI-built apps: the UI looks fine, but the rendering defaults are unsafe.
Make “text-only” the default
Treat every string as untrusted, even if it came from your own database. Comments, profile names, “notes”, support messages, and anything produced by an LLM can carry hidden HTML.
Focus on a few patterns that close most holes:
- Remove or tightly limit dangerouslySetInnerHTML (and similar APIs in other frameworks).
- Render user content as plain text by default (no HTML interpretation).
- If you must allow rich text, sanitize it with a well-known library and a small allowlist.
- Put output encoding in one place (a helper/component) so fixes apply everywhere.
- Treat Markdown as untrusted too (Markdown can produce HTML depending on the parser).
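For the plain-text default, output encoding is a one-function job. A minimal sketch (React and similar frameworks already do this for interpolated values; a helper like this is for template strings or server-rendered HTML):

```javascript
// Encode the five characters that let text break out into markup.
// Ampersand must be replaced first so later entities are not double-encoded.
function escapeHtml(untrusted) {
  return String(untrusted)
    .replace(/&/g, '&amp;')
    .replace(/</g, '&lt;')
    .replace(/>/g, '&gt;')
    .replace(/"/g, '&quot;')
    .replace(/'/g, '&#39;');
}

escapeHtml('<img src=x onerror=alert(1)>');
// → '&lt;img src=x onerror=alert(1)&gt;'
```

Keeping this in one shared helper (per the list above) means a future fix applies everywhere at once.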
If you truly need HTML (for example, a “release notes” editor), sanitize right before rendering and only allow a small set of tags (like b, i, em, strong, a). Avoid inline event handlers, inline styles, and unknown attributes.
```javascript
// Example pattern (React): sanitize before using dangerouslySetInnerHTML
const safeHtml = sanitize(userProvidedHtml, { allowTags: ['b','i','em','strong','a'] });
return <div dangerouslySetInnerHTML={{ __html: safeHtml }} />;
```
Add a CSP as a safety net
A Content Security Policy (CSP) won’t fix bad rendering, but it can limit the damage if something slips through.
Start simple, then test your app and loosen only what you must:
- Block inline scripts when possible.
- Allow scripts only from your own domain.
- Disallow javascript: URLs in links.
- Avoid unsafe-eval unless you have no choice.
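One way to keep the policy reviewable is to build the header value from a directives object. A starting-point sketch (the source lists are assumptions to adjust for your app, not a universal policy):

```javascript
// Without 'unsafe-inline', script-src 'self' blocks inline scripts,
// inline event handlers, and javascript: URLs.
const cspDirectives = {
  'default-src': ["'self'"],
  'script-src': ["'self'"],
  'object-src': ["'none'"],
  'base-uri': ["'self'"],
};

function buildCsp(directives) {
  return Object.entries(directives)
    .map(([name, sources]) => `${name} ${sources.join(' ')}`)
    .join('; ');
}

// Send as the Content-Security-Policy response header:
// Content-Security-Policy: default-src 'self'; script-src 'self'; ...
```

Loosening then becomes an explicit, reviewable diff to one object rather than an edit to an opaque header string.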
If you inherited an AI-generated prototype and you’re not sure where unsafe HTML is coming from, a good workflow is to find every risky render path, replace it with safe defaults, then add CSP to catch the leftovers.
Step-by-step hardening in under a day
You can close most CSRF and XSS holes without touching layout. The trick is to work like a tester first, then patch the smallest surface area.
Start by mapping how data moves. Make a quick inventory of where data enters (forms, query params, cookies, webhooks, rich text fields) and where it’s shown back to a user (tables, toasts, profile pages, admin panels). This often reveals “hidden” paths like an internal settings page that never got reviewed.
Next, make your fixes visible. Turn on server logs for CSRF failures (rejected tokens, missing origin checks). For XSS, add a clear signal when sanitization removes content. If a user reports “my text disappeared,” you want to see what was removed and why.
A fast patch plan
- Add CSRF protection to every state-changing route (POST, PUT, PATCH, DELETE) using shared middleware, and confirm cookies have sensible defaults (HttpOnly, Secure, SameSite).
- Standardize how the client sends the token (header or hidden field) so you don’t fix it in five different ways.
- Search for dangerouslySetInnerHTML and similar patterns. Remove them when you can, or isolate them to a single component that always sanitizes.
- Sanitize untrusted HTML at one boundary (right before render, or right when saving). Pick one, document it, and keep it consistent.
- Add a basic CSP and run your core flows so you catch accidental inline scripts early.
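For the “standardize how the client sends the token” step, one small helper keeps it from being fixed five different ways. A sketch (cookie and header names are illustrative and must match whatever your server enforces):

```javascript
// Pull a named cookie value out of a cookie string (document.cookie in the browser).
function readCookie(cookieString, name) {
  const match = cookieString
    .split('; ')
    .find((pair) => pair.startsWith(name + '='));
  return match ? decodeURIComponent(match.slice(name.length + 1)) : null;
}

// In the browser, a single fetch wrapper would attach it everywhere:
// fetch(url, {
//   method,
//   headers: { 'X-CSRF-Token': readCookie(document.cookie, 'csrf_token') },
//   body,
// });
```

If every state-changing call goes through that wrapper, adding the token is a one-file change.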
After that, retest like a normal user: sign up, log in, update a profile, submit a form, and use any admin actions. Try the same actions from a second browser where you’re not logged in. If something breaks, the logs should tell you whether it was CSRF protection or sanitization.
If you inherited a messy AI-generated codebase, FixMyMess can run a quick audit and apply these patches with expert human verification, so you get security gains without a full rewrite.
Example: prototype dashboard that breaks in production
A common setup is an “admin dashboard” plus a public-facing comment box. The UI looks fine: admins can approve users, issue refunds, and change pricing. Visitors can leave feedback that shows up in a feed on the dashboard.
The XSS slips in when comments are rendered as HTML. A typical pattern is a React component that uses dangerouslySetInnerHTML so line breaks and links “just work.” If a visitor types something that becomes script code, it can execute inside the admin’s browser when the admin opens the dashboard.
CSRF slips in when admin actions rely only on cookies for auth. The buttons call endpoints like /api/admin/refund, and the server assumes “cookie present” means “admin approved this action.” If an admin is logged in and then visits a malicious page in another tab, that page can auto-submit a hidden form or request to your app, and the browser will attach the admin cookie.
A realistic attack path looks like this: the attacker posts a “comment” that runs in the admin’s session, then triggers state-changing requests against CSRF-free endpoints. It’s not magic, it’s the browser doing what it always does.
Minimal fixes that keep the same screens and UX:
- Stop rendering raw comment HTML. Render text by default, or sanitize and allow only a small safe set (bold, italics, links).
- Add CSRF protection for every state-changing request and require it on POST/PUT/PATCH/DELETE.
- Set session cookies to
SameSite=Lax(orStrictwhere possible) andHttpOnly. - Require Origin or Referer checks for sensitive admin actions as a backup.
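The cookie attributes in that list can be set in one helper so every session cookie gets the same defaults. A sketch (the attribute set is a sensible default for a first-party dashboard, not a prescription; frameworks like Express expose the same options via res.cookie):

```javascript
// Build a Set-Cookie header value with hardened defaults.
function sessionCookie(name, value) {
  return [
    `${name}=${encodeURIComponent(value)}`,
    'Path=/',
    'HttpOnly',      // JS cannot read it, which blunts token theft via XSS
    'Secure',        // sent over HTTPS only
    'SameSite=Lax',  // cross-site POSTs will not carry it
  ].join('; ');
}

sessionCookie('session', 'abc123');
// → 'session=abc123; Path=/; HttpOnly; Secure; SameSite=Lax'
```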
Teams often bring this exact dashboard to FixMyMess when it “works locally” but fails a real security review. The good news is you can usually patch it without changing the UI.
Common mistakes and false fixes
Many quick patches feel “secure” because they quiet a warning or stop one obvious exploit. The real problem is usually that the same unsafe pattern exists in two or three places you didn’t think to check.
A common trap is escaping input once (maybe in a form handler) and then rendering the same data through a different path that skips the escape. Example: a “notes” field is escaped when saved, but a preview panel uses dangerouslySetInnerHTML for formatting and brings script execution back.
Another false fix is sanitizing only in the browser and trusting the result on the server. Attackers don’t use your UI. They send requests directly, so the server must validate and the app must encode output no matter what the client does.
People also assume JSON is “safe.” It isn’t. If you take JSON fields and inject them into HTML (templates, tooltips, toast messages, rich text components), you can still get XSS. The response format isn’t protection; how you render it is.
Third-party widgets are often forgotten. Chat widgets, analytics snippets, markdown editors, and embed components can inject HTML or scripts. Even if your code is clean, an unsafe widget config can undo your work.
For CSRF, a frequent mistake is protecting one endpoint and leaving another state-changing route open. The UI might call /settings/update, but there’s also /settings/save or /api/admin/promote that still accepts cookies with no CSRF token check.
Quick reality checks that catch most “fixed but still vulnerable” apps:
- Search for every render path of user content, not just the form that collects it.
- Enforce server-side validation and output encoding, even if the client sanitizes.
- Review state-changing routes (POST, PUT, PATCH, DELETE) and confirm they all require CSRF protection.
- Inventory third-party scripts and components and review how they inject content.
Teams often bring FixMyMess a prototype that “works” in demos but fails these checks in production. The fastest wins usually come from closing the extra render paths and the forgotten endpoints, not rewriting the UI.
Quick checklist before you ship
Before you push an AI-built prototype to real users, do one pass focused on two common escape hatches: state changes without CSRF protection, and untrusted content that can become script.
- Protect every state-changing route (POST/PUT/PATCH/DELETE). Require a CSRF token (or equivalent defense) and reject requests that lack it.
- Lock down cookies: set Secure and HttpOnly where possible, and choose a SameSite value that matches your flows (watch third-party redirects and embedded apps).
- Don’t render untrusted HTML unless it goes through a sanitizer with an allowlist. If you don’t need HTML, render plain text.
- Add a CSP and test your main pages with it enabled. A policy that blocks inline scripts catches many accidental XSS paths.
- Re-test login, signup, password reset, and logout after changes. Security fixes often break auth in small ways (tokens not sent, cookies not set, redirects looping).
Then run a small set of pre-deploy tests:
- Try the app in a private window and confirm cookies behave as expected after login.
- Submit a state-changing request without a CSRF token and confirm it fails.
- Paste a harmless XSS probe (like
\u003cimg src=x onerror=alert(1)\u003e) into any field that later shows up on a page; confirm it renders as text and does not execute. - Open key pages with CSP on and confirm nothing essential breaks (buttons, modals, form submits).
If you inherited an AI-generated codebase and these checks uncover a mess of edge cases, FixMyMess can audit and patch the risky parts quickly without forcing a UI rewrite.
Next steps if you inherited an AI-generated codebase
Start by deciding whether you need quick patches or a short cleanup sprint. If the app is small, has a simple login, and only a few forms, you can often close the biggest CSRF and XSS gaps without touching the UI. If the code has copy-pasted auth logic, lots of ad-hoc API calls, and HTML rendering sprinkled everywhere, patching alone can turn into whack-a-mole.
Quick patches are usually enough when you have a small number of write actions and they all go through one client wrapper, user-generated text is displayed in a few obvious components, and sessions/cookies behave consistently across the app.
Plan for cleanup when you see duplicated fetch calls, unclear ownership of cookies/tokens, or “temporary” admin endpoints that shipped.
To get a fast, useful review, hand your reviewer a simple map of what exists today. The goal is to find the hotspots where these issues hide, not to critique your UI.
Here’s what to prepare (even a rough doc is fine):
- A list of every endpoint that changes data (method + path + who can call it)
- How auth works (cookies vs headers, where session is created, logout behavior)
- Where HTML is rendered or injected (render helpers, markdown, rich text, email previews)
- Any third-party embeds or user content inputs (comments, profile fields, uploads)
- Where secrets live (env files, client bundle, CI logs)
A short audit can usually spot risky patterns quickly: missing CSRF checks on cookie-based sessions, unsafe HTML rendering, and places where untrusted data reaches the DOM.
If the app is already breaking in production, consider a targeted remediation sprint: fix the top vulnerabilities, add guardrails (tokens, sanitization, safe defaults), then refactor only the worst offenders.
If you want an outside set of eyes, FixMyMess (fixmymess.ai) focuses on fixing AI-generated codebases from tools like Lovable, Bolt, v0, Cursor, and Replit, including issues like broken auth flows, exposed secrets, CSRF/XSS, and security hardening, while keeping the UI intact.