Sep 04, 2025·8 min read

AI-built app is slow: fix N+1 queries and missing indexes

AI-built app is slow? Start with the top database issues: N+1 queries, missing indexes, unbounded scans, and chatty ORMs, with quick fixes.

What “slow” usually means (and why the database is a common cause)

“Slow” is rarely a single number. For users, it feels like a page that hangs on a spinner, a login that takes forever, a search that returns late, or a checkout that times out. Even when something eventually loads, the app can feel unreliable because you never know if the next click will stall.

A common surprise: the database often dominates the total time. Your server can render a page in milliseconds, but if it waits on data, everything waits. One slow query can block the whole request. A pile of medium-slow queries can do the same.

If your AI-built app is slow, the database is worth checking early because AI-generated code tends to produce “reasonable-looking” data access that is inefficient in real life. It might fetch the same data over and over, pull far more rows than needed, or run queries that work fine with 50 records but collapse at 50,000.

Why the database wins (for better or worse)

Most pages boil down to a simple pattern: read data, shape it, send it back. The reads are where time disappears.

Here are the usual ways the database takes over:

  • Too many queries per page load (each one adds waiting time).
  • Queries that scan lots of rows because they cannot use an index.
  • Queries that return huge results, then get filtered in code.
  • Locks or long transactions that make other requests queue up.
  • Connection pool limits that cause requests to wait for a free database connection.

The goal is not to “fix everything.” Start by finding the biggest bottleneck you can measure, fix it, and re-check. One good change can cut load time in half, while chasing ten tiny issues often wastes a day.

At FixMyMess, this is a common pattern in AI-generated prototypes: the app feels fine during a demo, then becomes painfully slow once real users and real data arrive. The fastest progress usually comes from isolating the single query (or small set of queries) that sits on the critical path for your slowest page.

Quick signs the database is the bottleneck

When an AI-built app is slow, the database is often the first place to look because it is the shared resource every request depends on. One slow query can block many users at once.

Here are common symptoms that point to database trouble:

  • Specific endpoints are slow, while others feel normal (often list pages, search, dashboards).
  • You see timeouts or “request took too long” errors under real traffic.
  • The app server CPU looks fine, but the database CPU is high or the database is doing lots of reads.
  • Performance gets worse as data grows (more users, more rows, more joins).
  • The same page gets slower week by week, without any new features.

To separate app-server slowness from database slowness, think about where the time is spent. App-server issues usually look like: every route is sluggish, CPU on the web server is high, or memory is climbing until things crash. Database issues usually look like: a few routes are painfully slow, the slowdown is tied to certain tables, and the problem gets much worse when there is more data.

“Fast locally, slow in production” is a big clue. Your laptop database is small, warm (cached), and has no other users competing for it. Production has real data size, real concurrency, and often stricter network and security settings. If an endpoint is instant locally but drags in production, the database is usually doing extra work at scale: missing indexes, N+1 query patterns, or full table scans.

A quick reality check: open the slow page, then try a version with less data (smaller date range, fewer filters, fewer rows). If it suddenly becomes fast, you are likely paying a data-size tax. Teams that bring these cases to FixMyMess often find the same pattern: the app “works” as a prototype, then slows down as soon as real users and real tables arrive.

A simple step-by-step troubleshooting flow

When an AI-built app is slow, it’s tempting to tweak code everywhere. Resist that. Pick one user action and follow it end to end so you can measure what changes actually help.

Start by choosing a single slow action a real user does, like “open Orders” or “search customers.” Then identify the one endpoint behind it (the exact API route or server handler). This keeps your test repeatable.

Next, run that action while capturing database activity. Use whatever you already have: a query log in development, an APM trace in production, or temporary logging around the request. You’re looking for the top queries that happen during that one action, not a general snapshot of the whole app.

Measure two things instead of guessing: how many queries fire, and how much total time the database spends on them. A page that triggers 120 tiny queries can be just as slow as one page with 2 huge queries.

A practical flow that stays focused:

  1. Reproduce the slow action and note the endpoint handling it.
  2. Capture the queries during that action.
  3. Record query count and total DB time for the request.
  4. Sort by impact: the slowest query and the most repeated query usually come first.
  5. Fix one issue, then re-run the same action to confirm the improvement.

Example: if “open Orders” takes 6 seconds, and you find 80 queries with 4.8 seconds of DB time, fix the worst offender first (often an unindexed filter or an N+1 loop). If the re-test drops to 2 seconds, you know you’re on the right track.
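
If you are on PostgreSQL, the pg_stat_statements extension is one low-effort way to do steps 2 through 4. The sketch below assumes the extension is installed and that you run it on a quiet development database; on PostgreSQL 12 and older the column is total_time instead of total_exec_time.

  -- Assumes PostgreSQL with the pg_stat_statements extension enabled.
  -- Run on a quiet dev database so other traffic does not pollute the numbers.
  SELECT pg_stat_statements_reset();       -- clear the counters

  -- ...now reproduce the slow action once (for example, open the Orders page)...

  -- Then rank what ran: repeated query shapes show up as high "calls",
  -- expensive ones as high total time.
  SELECT calls,
         round(total_exec_time) AS total_ms,
         left(query, 80)        AS query_start
  FROM pg_stat_statements
  ORDER BY total_exec_time DESC
  LIMIT 10;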

If you inherited a messy AI-generated codebase and the traces don’t make sense, FixMyMess can do a quick audit and point to the few database issues that will move the needle fastest.

5 quick checks before you change any code

If your AI-built app is slow, don’t start by rewriting features. First, get a clear picture of what the database is doing on a single slow page or request.

Quick checks that usually reveal the problem

Start with these five checks in your logs, APM, or database query history:

  • Count queries per request. If a simple page triggers 50, 100, or 500+ queries, performance will collapse as traffic grows.
  • Scan for repeats. If you see the same query shape run over and over with only the ID changing, you likely have an N+1 pattern.
  • Rank by total cost, not just “slowest”. A 30 ms query run 1,000 times hurts more than a single 2-second query. Look at duration × count.
  • Find accidental “too much data” queries. Watch for SELECT * on big tables, missing WHERE filters, missing LIMIT, or loading large text/blob columns you do not need.
  • Check whether indexes are actually used. A query can look fine, but the plan may show a full scan instead of an index lookup.

A small example: a dashboard loads a list of 50 customers, then fetches “latest invoice” inside a loop. You might see one query for the list, plus 50 near-identical invoice queries. Each query is quick alone, but together they turn one request into a traffic jam.

What to capture before you touch anything

Write down three numbers for the slow request: total query count, top 3 queries by total time, and whether those top queries are using indexes or scanning. That gives you a baseline, so you can confirm each fix actually helps.
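
For the “index or scan” part of that baseline, you do not always need a full query plan for a first read. On PostgreSQL, the built-in statistics views show which tables are being read by sequential scans; treat this as a rough sanity check, not a precise per-query answer:

  -- Assumes PostgreSQL. Tables with high seq_scan / seq_tup_read and low
  -- idx_scan counts are good candidates for a closer look at the query plan.
  SELECT relname, seq_scan, seq_tup_read, idx_scan
  FROM pg_stat_user_tables
  ORDER BY seq_tup_read DESC
  LIMIT 10;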

If you inherited an AI-generated codebase, these issues often hide behind an ORM layer and noisy logs. At FixMyMess, this is the exact data we pull during a free audit so the first change is the right one.

N+1 queries: how to spot them and stop them

An N+1 query happens when your app runs 1 query to load a list, then runs 1 more query for each row in that list. Example: you load 50 users (1 query), then fetch each user’s orders one-by-one (50 more queries).

This is common in ORMs because “lazy loading” feels convenient. You loop over users, access user.orders, and the ORM quietly hits the database every time. AI-generated code often leans on these defaults, so “AI-built app is slow” can mean “the page is making hundreds of tiny queries.”

How to spot it fast

Look for a repeating pattern in your logs or APM: the same SQL shape over and over, with only the ID changing. Another giveaway is a page that gets slower as your data grows, even if the code did not change.

If you can, count queries for one request. If the number jumps with the number of items on the page (20 items -> about 21 queries, 100 items -> about 101 queries), you likely found N+1.
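
In a query log, the pattern is easy to recognize once you know the shape. A simplified sketch (table and column names are illustrative):

  -- 1 query for the list...
  SELECT id, name FROM users LIMIT 50;
  -- ...then one near-identical query per row, with only the id changing
  SELECT * FROM orders WHERE user_id = 1;
  SELECT * FROM orders WHERE user_id = 2;
  SELECT * FROM orders WHERE user_id = 3;
  -- ...and 47 more just like it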

Quick fixes that usually work

Pick the smallest fix that removes per-row fetches:

  • Eager load relationships (preload users with their orders in one go)
  • Batch fetch by IDs (one query for all orders where user_id IN (...), as sketched after this list)
  • Use a join when you truly need fields from both tables
  • Return only the columns you need (avoid loading huge blobs)
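
As a sketch, the batch-fetch and join options might look like this in SQL (the users/orders schema and the specific columns are illustrative; your ORM's eager-loading option typically generates something equivalent):

  -- Batch fetch: one query covers all 50 users instead of 50 separate queries
  SELECT * FROM orders WHERE user_id IN (1, 2, 3);   -- pass all ids from the list query

  -- Join: when the page truly needs fields from both tables at once
  SELECT u.id, u.name, o.id AS order_id, o.created_at
  FROM users u
  JOIN orders o ON o.user_id = u.id
  WHERE u.id IN (1, 2, 3);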

Validate the fix with two checks: query count should drop sharply, and page time should improve in a repeatable test (same data, same request).

One caution: “fixing” N+1 by joining everything can backfire. Over-joining can create large result sets, duplicate rows, and more memory work in your app. Load only what the page actually shows, and keep the query results small and focused.

If you inherited an AI-built prototype, a code audit often finds N+1 hotspots quickly, especially around lists, dashboards, and admin tables.

Missing indexes: the fastest win for many slow apps

An index is like the index at the back of a book. Without it, you flip page by page until you find the topic you want. With it, you jump straight to the right pages. A database works the same way: without an index, it often has to read lots of rows to find a few matches.

The most common misses are boring, and that’s good news because they’re easy to fix. Check columns you use all the time in WHERE filters, JOIN conditions, and ORDER BY sorting. Foreign keys are frequent culprits, especially in AI-generated schemas where relations exist in code but the database constraints (and indexes) were never added.

A simple example: if your app loads orders for a user with WHERE user_id = ? ORDER BY created_at DESC, you usually want an index that matches how you search. That might be a single-column index on user_id, but if you filter and sort together, a composite index like (user_id, created_at) can be much faster.
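
A minimal sketch of that composite index, assuming a PostgreSQL-style database and the orders schema from the example (the index name is arbitrary):

  -- Matches WHERE user_id = ? ORDER BY created_at DESC on the orders table
  CREATE INDEX idx_orders_user_created ON orders (user_id, created_at DESC);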

Quick rules of thumb when choosing an index:

  • Index columns that appear in WHERE, JOIN, or ORDER BY on hot queries.
  • Prefer composite indexes when you frequently filter by multiple columns together.
  • Make sure foreign key columns are indexed when they’re used to join tables.
  • Don’t assume the ORM added the right indexes for you.

A common trap is indexing low-cardinality columns (values repeat a lot), like status with only a few states. Those indexes often don’t help because the database still has to touch a big chunk of the table. They can even slow down writes because every insert and update must also update the index.

To confirm you fixed the right thing, check the query plan before and after. You’re looking for the plan to change from a scan (reading many rows) to an index lookup (jumping to the matching rows). If the plan still scans, the index may be on the wrong column order, the filter isn’t selective enough, or the query is written in a way that prevents index use.
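
On PostgreSQL, that before/after check is one EXPLAIN away. The plan notes below are paraphrased rather than literal output, and the index name is the one from the earlier sketch:

  EXPLAIN ANALYZE
  SELECT * FROM orders WHERE user_id = 42 ORDER BY created_at DESC LIMIT 50;

  -- Before the index, the plan typically shows "Seq Scan on orders" followed by a Sort.
  -- After adding (user_id, created_at DESC), look for
  -- "Index Scan using idx_orders_user_created on orders" and no separate Sort step.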

Unbounded scans: when the database reads far more than you think

An unbounded scan happens when a query is so broad that the database has to read a huge chunk of a table (sometimes all of it) just to return a small result. If your AI-built app is slow and it gets worse as data grows, this is often the reason.

The common pattern is simple: the query does not narrow the search early enough, so the database keeps reading rows until it finds what it needs. That extra work shows up as high CPU on the database, long query times, and pages that feel fine in dev but crawl in production.

Red flags to look for

A few signs usually show up in code reviews and query logs:

  • A query with no WHERE filter on a large table
  • Missing LIMIT on endpoints that list records
  • Offset pagination (page=2000) on a table that keeps growing
  • Selecting lots of columns (or SELECT *) when you only need a few fields
  • “Latest items” queries that do not filter by time or category

If you see any of these on a hot path like a dashboard or feed, treat it as a likely culprit.

Quick fixes that usually work

Start by making the query do less work per request. Small changes can cut load dramatically:

  • Add a real filter (status, user_id, tenant_id, created_at) and make sure it matches how people use the page
  • Switch from offset pagination to keyset pagination (use a last_seen id or timestamp)
  • Return fewer fields (only the columns the UI needs)
  • Add a time window for log-like tables (last 7 days), and archive older data

Be careful with “search everywhere” features. A naive LIKE '%term%' across big text fields often forces scans. If search matters, use proper text search tools your database supports, or restrict search to smaller, indexed fields.
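
As one example of “proper text search tools”, PostgreSQL ships full-text search that can use an index. The table, column, and search term below are illustrative:

  -- A GIN index over the text you search, plus a query that can use it
  CREATE INDEX idx_posts_body_fts ON posts USING gin (to_tsvector('english', body));

  SELECT id, title
  FROM posts
  WHERE to_tsvector('english', body) @@ plainto_tsquery('english', 'refund');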

A realistic example: an activity table grows forever. The homepage asks for “recent activity” but does not filter by account and uses offset pagination. It works at 5,000 rows, then slowly turns into a scan at 5 million rows. Adding an account filter, keyset pagination, and a 30-day window often turns a 3-second query into a 50 ms query.
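
A sketch of that before-and-after, assuming PostgreSQL and an activity table with account_id, id, and created_at columns (the :params are placeholders your code would bind):

  -- Before: no account filter, no time window, offset pagination that scans ever deeper
  SELECT * FROM activity ORDER BY created_at DESC LIMIT 50 OFFSET 100000;

  -- After: filter, 30-day window, keyset pagination, and an index that matches all three
  CREATE INDEX idx_activity_account_created ON activity (account_id, created_at DESC, id DESC);

  SELECT id, account_id, created_at
  FROM activity
  WHERE account_id = :account_id
    AND created_at >= now() - interval '30 days'
    AND (created_at, id) < (:last_seen_created_at, :last_seen_id)
  ORDER BY created_at DESC, id DESC
  LIMIT 50;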

If you inherited AI-generated code, this problem is common because list endpoints are built quickly and left open-ended. FixMyMess often finds a few unbounded scans that account for most of the slowness once real users and data arrive.

Chatty ORMs: too many small queries add up

When an AI-built app is slow, the database is not always “doing hard work”. Sometimes it’s doing lots of tiny work, over and over. A chatty ORM is when your code makes many small database calls instead of a few useful ones.

This often happens without you noticing because each query looks harmless. But 200 “fast” queries can be slower than 5 well-shaped queries, especially once network time and connection pooling are involved.

What chatty looks like

You’ll see patterns like:

  • One query to load a list, then another query per row to fetch a related detail
  • Per-field lookups (for example, loading a user, then querying again for their plan, team, and settings)
  • Computed properties or getters that run a query, called inside a loop
  • Heavy “include everything” loads that pull large related tables you never show
  • Repeated queries for the same row in a single request (no request-level caching)

A common trap in AI-generated code is a neat-looking model method like user.displayName() that quietly queries extra tables. Call it 100 times on a page and you just created 100 extra round trips.

Quick fixes that usually work

Start by making the request “load data in one pass”. Fetch the list and the related data together, or fetch related data in a second query that covers all rows (not one query per row). Then keep the payload small: select only the columns you need, and avoid loading large relations unless the page actually uses them.

If the same lookup happens multiple times in one request (like “current team”, “current plan”, “feature flags”), add a simple request-level cache so you hit the database once.

After any change, measure again. Count queries per request and watch p95 latency, not just average. A good outcome is fewer queries, fewer round trips, and a p95 that drops in a way you can feel.

If you’re working with an inherited AI-generated codebase, FixMyMess often finds chatty ORM hotspots quickly during a code audit because they show up as repeated query patterns tied to a single endpoint.

A realistic example: the page that got slower every week

A common story with an AI-built marketplace is an admin Orders page that felt fine at launch, then slowly became painful. At 200 orders it loads in 1 second. A month later, at 10,000 orders, it takes 12 to 20 seconds and sometimes times out. Nothing “big” changed, but the database is now doing a lot more work.

Here’s what the app is doing on that page:

  • Query 1: load the latest 50 orders for the table (often with filters like status and a date range).
  • Then, for each order row, the UI shows the customer name and a short list of items.

The hidden problem is N+1. You get 1 query for the order list, then N more queries for customers, and N more for items. With 50 rows, that can become 101 queries (or more). Each query is “fast” alone, but the total time stacks up, and the database connection pool gets busy.

At the same time, filtering gets slower because of a missing index. The code filters by status and sorts or filters by created_at, but the database has no helpful index for that combination. As the table grows, the database starts scanning far more rows than needed before it can return the newest 50.

A practical fix order that keeps changes small:

  1. Measure first: capture total query count and total database time for one page load.
  2. Fix the index: add a composite index that matches the filter and sort (for example, status + created_at). Re-test.
  3. Fix N+1: fetch customers in one go, and fetch items in one go (or use an eager-load/include approach). Re-test.
  4. Add pagination limits and guardrails (hard max page size). Re-test.

What you typically see after each step: the index change drops the main list query from seconds to tens of milliseconds, but the page may still feel slow. Removing N+1 often cuts query count from ~100 to under 10, and the page load becomes predictable again.
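
Pulling the earlier sketches together, the page's data access after those steps might boil down to an index plus three queries. The names and values here are illustrative assumptions about this example schema (order_items, the 'paid' status, and the listed columns are not from the original code):

  -- Step 2: composite index matching the filter and sort
  CREATE INDEX idx_orders_status_created ON orders (status, created_at DESC);

  -- Main list query: newest 50 matching orders, with a hard-capped page size (step 4)
  SELECT id, customer_id, status, created_at
  FROM orders
  WHERE status = 'paid'
  ORDER BY created_at DESC
  LIMIT 50;

  -- Step 3: two batch queries replace ~100 per-row lookups
  SELECT id, name FROM customers WHERE id IN (1, 2, 3);          -- customer ids from the 50 rows
  SELECT order_id, description FROM order_items WHERE order_id IN (1, 2, 3);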

Keep constraints in mind: change one thing at a time, run the same request each time, and verify results with real-ish data volume. With AI-generated code, it’s easy to “fix” performance by accidentally changing behavior, so small steps and quick checks matter. If the codebase is messy (chatty ORM calls scattered through view code), teams often use a short audit to map the query paths before making larger refactors.

Common mistakes that waste time (and sometimes make it worse)

When an AI-built app is slow, it is tempting to grab the fastest-looking fix. The problem is that quick-looking fixes often hide the real cause, so the slowdown comes back (or moves somewhere else).

Fixes that feel good, but usually backfire

A common trap is adding caching before you have proven the query is healthy. If a page needs 2 seconds because it runs 120 queries, caching the response might mask it for a while, but the moment the cache expires (or data changes), the spike is still there.

Another time-waster is adding indexes blindly. An index can be a big win, but the wrong index can do nothing, or it can slow writes and increase storage. Always confirm what the database is actually doing, then choose the smallest index that helps the real filter or sort.

Over-fetching is also everywhere in AI-generated code. If the UI needs a user name and plan, but the ORM returns the whole user record plus nested relations, you pay for extra reads, memory, and network transfer on every request.

Finally, the "one mega-query" approach can be just as bad. People merge everything into a giant join to avoid N+1, but end up with a query that is hard to change, hard to debug, and still slow because it returns too many rows.

Skipping verification is the biggest mistake

If you do not set a baseline, you cannot know if you improved anything. Before changing code, capture a simple before/after snapshot: request time, number of queries, and the slowest query.

Here are five red flags that usually mean you are guessing instead of fixing:

  • No baseline metrics (only "it feels faster")
  • No check of the query plan before indexing
  • No limit or pagination on endpoints that can grow over time
  • No regression test for the slow path (so it comes back next week)
  • No re-check in production data volumes

At FixMyMess, we often see teams spend days on caches and refactors, then discover the real issue was one missing index plus an ORM call inside a loop. The fastest path is boring: measure, verify, change one thing, measure again.

Next steps: lock in the gains and get help if you need it

Once you’ve found the big query problems, don’t stop at the one fix that made the graph dip. Performance tends to drift back as features ship and data grows. Treat your first wins as a new baseline to protect.

Start by writing down a small “hit list” you can revisit weekly. Keep it concrete, not theoretical.

  • Top 3 slow user actions (example: login, search, checkout) and their worst queries
  • For each action: average time, p95 time, and query count
  • The exact query patterns causing it (N+1, missing index, unbounded scan, too many small ORM calls)

Next, add a few guardrails so you catch regressions early. These are cheap and pay off fast.

  • Set a query-count budget per request for key pages
  • Turn on slow-query alerts in your database and review them regularly (a sample logging setting is sketched after this list)
  • Run a basic load test on the top 3 actions after each release
  • Add a small performance check to code review ("did we add queries?")
  • Keep a shared place for approved query helpers so everyone uses the same patterns
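
For the slow-query alert guardrail, the database side is often a one-line setting. A sketch for PostgreSQL (requires superuser; the 500 ms threshold is just a starting point):

  -- Log any statement that takes longer than 500 ms, then reload the config
  ALTER SYSTEM SET log_min_duration_statement = '500ms';
  SELECT pg_reload_conf();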

After that, plan the refactors that remove the root cause. Common ones are cleaning up ORM access patterns (fetch related data once, not in loops), standardizing pagination, and creating shared query helpers so the same mistake is not repeated across endpoints.

If your AI-built app is slow and it was generated by tools like Lovable, Bolt, v0, Cursor, or Replit, consider getting a professional diagnosis. These codebases often hide repeated query logic in many files, so you fix one endpoint and three more keep hurting you.

FixMyMess is one option if you want a clear next step: we can run a free code audit to identify both database and app-level issues, then apply targeted fixes with human verification so the improvements stick.