Audit event tracking in AI-generated apps that you can trust
Learn how to audit event tracking end-to-end in AI-generated apps, remove duplicate firing, and make dashboards match real user actions.

Why analytics goes wrong in AI-generated apps
Bad analytics usually isn’t subtle. You’ll see checkout conversion rates over 100%, a random spike at 3 a.m., or a funnel where everyone “views pricing” but almost nobody “starts trial.” Sometimes a key step is missing entirely, so the dashboard tells a neat story that never happened.
AI-generated prototypes often ship with half-finished tracking because the goal was “make it work” for a demo, not “make it measurable” in production. An AI tool might copy a tracking snippet into one page but not others, fire events from multiple places, or mix old and new event names after a refactor. It also might trigger events at the wrong time, like tracking “purchase” when the button is clicked instead of when payment actually succeeds.
That breaks decision-making fast. If signup is overcounted, you might assume onboarding is fine and spend money on ads. If trials are undercounted, you might change pricing or rip out features that weren’t the problem. If attribution is messy, you can pause a good campaign and double down on a bad one.
Trusted analytics is simple to describe: the same user action produces the same event, once, with the same meaning every time. You can explain where it fires in the code, reproduce it in a test account, and see it match what your app logs and database say.
When something looks off, do a quick reality check. Did the event fire only after success (not on click)? Could it be firing twice (client and server, or two listeners)? Is the name consistent across pages and environments? Do counts match a truth source like your orders table? Are bots, retries, or page reloads inflating totals?
If you inherited an AI-built app from tools like Bolt, v0, or Replit, these issues are common. FixMyMess often sees tracking tangled with UI code, duplicated handlers, and events that fire even when requests fail.
Decide what you actually need to measure
Before you touch code or dashboards, decide what “real progress” looks like in your product. Many AI-built apps fail at analytics because they track everything but miss the few actions that explain growth and revenue.
Start with a small set of business-critical actions. For many products, it’s some version of signup (account created successfully), activation (first meaningful task), purchase (money changes hands or a paid plan starts), retention (the user returns and does the key action again), and sometimes referral or sharing.
Define each event in one sentence that answers two questions: when does it fire, and why do you care. Example: “signup_completed fires after the user confirms their email, because this is the earliest point we can trust the account is real.”
Pick a naming style and stick to it. Simple, readable names beat clever ones. Choose one format (like object_action: signup_completed, trial_started), keep tense consistent, and avoid names that sound like UI elements (for example, blueButtonClick).
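One way to keep a naming convention from drifting is a single constants module that every page imports. This is a sketch, not a required pattern, and the event names below are examples only:

```typescript
// A single module of event name constants keeps names from
// drifting across pages after refactors. Example names only.
const EVENTS = {
  SIGNUP_STARTED: "signup_started",
  SIGNUP_COMPLETED: "signup_completed",
  TRIAL_STARTED: "trial_started",
  PROJECT_CREATED: "project_created",
} as const;

type EventName = (typeof EVENTS)[keyof typeof EVENTS];

// Accepting only EventName means a typo like "signup_complete"
// fails at compile time instead of polluting the dashboard.
function track(name: EventName): string {
  return name; // stand-in for the real analytics call
}
```

Because `track` only accepts names from the module, an AI tool that rewrites a component can’t quietly introduce a new spelling of an old event.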
Keep user actions separate from system events. A user action is “created project” or “started trial.” A system event is “email sent” or “webhook received.” When you mix them, funnels stop meaning what you think they mean.
Write down what success means for each event, too. It might be a count, a conversion rate, or a time-to-action target. That makes later audits much easier, especially if you inherit messy AI-generated code and need to prove what users actually did.
Map events to real user actions
Good analytics starts with a simple map: what a real person does, step by step, and what you expect to record at each step. If you skip this and jump straight to dashboards, you end up measuring clicks that don’t mean anything.
Write down your key user flows in plain language. Keep it to the actions that matter, not every button. A typical set is: first visit, sign up, login, your core feature (the one action that proves value), and checkout.
For each step, capture three things: (1) the expected event name, (2) the moment it should fire, and (3) where it should fire (client, server, or both). That turns “tracking is broken” into something you can point to and fix.
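One lightweight way to make this map concrete is a small spec object you keep next to the code. This is an illustrative sketch; the names, flow, and fields are examples, not a required schema:

```typescript
// A minimal event spec per funnel step: expected name, the exact
// moment it should fire, and the layer it should fire from.
type EventLayer = "client" | "server" | "both";

interface EventSpec {
  name: string;       // expected event name
  firesWhen: string;  // the moment it should fire
  layer: EventLayer;  // where it should fire
}

// Example signup flow; replace with your own steps.
const signupFlow: EventSpec[] = [
  { name: "signup_started",   firesWhen: "user submits the signup form",          layer: "client" },
  { name: "signup_completed", firesWhen: "server confirms the user record exists", layer: "server" },
  { name: "project_created",  firesWhen: "first project is saved successfully",    layer: "server" },
];

// During an audit, walk the map and check each row by hand.
for (const step of signupFlow) {
  console.log(`${step.name} — fires when ${step.firesWhen} (${step.layer})`);
}
```

A spec like this turns “tracking is broken” into “row two fires on the client instead of the server,” which is something you can assign and fix.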
Edge cases are where AI-generated apps often drift from reality. A modal can fire an event on open and again on submit. A redirect can double-fire page views. A multi-step form can log a completion event on each step because state resets.
As you map each flow, note the cases you need to test: refresh or back button, retries (double click, resubmit, payment retry), slow network timeouts, flaky connections, and mobile vs desktop differences.
Example: a user signs up, gets redirected to a dashboard, then the app restores session on load. If both the signup success handler and the session restore code send "signup_completed", your funnel shows two signups for one person.
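One common fix for this double-fire is a once-per-key guard that both code paths call. This is a minimal in-memory sketch (it only dedupes within one page load; a refresh-safe version would persist the keys, for example in sessionStorage), and `track` here is a stand-in for whatever analytics call the app uses:

```typescript
// A once-per-key guard: the same logical action can't send the
// same event twice, even if two code paths both try to send it.
const sentEvents = new Set<string>();

function trackOnce(
  name: string,
  key: string, // something stable per action, like the new user's ID
  track: (name: string) => void
): boolean {
  const dedupeKey = `${name}:${key}`;
  if (sentEvents.has(dedupeKey)) return false; // already sent, skip
  sentEvents.add(dedupeKey);
  track(name);
  return true;
}
```

If both the signup success handler and the session-restore code call `trackOnce("signup_completed", userId, track)`, only the first call sends; the funnel shows one signup for one person.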
Understand where events are coming from
When you audit tracking, the first job is simple: find the exact line of code that fires each event. In AI-generated apps, that line can move quickly as the tool rewrites components, adds helpers, or duplicates logic.
Client vs server events
Client-side events (browser or mobile app) are best for UI actions and context, like button clicks, page views, and form steps. They capture what the user saw and did, but they’re easier to block or drop (ad blockers, script errors, flaky networks).
Server-side events (your API, backend jobs, webhooks) are best for things you must trust, like payments completed, subscription changes, or account creation. They’re harder to fake and usually more reliable, but they may miss UI intent (for example, a user tried to pay but abandoned).
In AI-generated code, events often get triggered from a few repeatable places: UI handlers (onClick, onSubmit), hooks and effects (useEffect), API wrappers, state managers, and shared utilities like a generic track() function.
Duplicates usually come from code that runs more than you think: rerenders that rebind handlers, multiple listeners attached to the same action, optimistic UI logging plus a server confirmation event, or requests that retry and log each attempt.
Missing events tend to be less dramatic but just as common: early returns before the tracking call, exceptions that skip the line, scripts blocked by the browser, or race conditions where the app navigates away before an event flushes.
A quick sanity check: if “Signup Completed” is sent from a React effect that depends on user state, a refresh can trigger it again. If it’s sent from the server only after the user record is created, it will fire once, but you’ll still want a separate “Signup Started” to understand dropoff.
Step-by-step: audit one event end-to-end
Pick one user flow you care about and keep it small. A good starter is signup, then email verification (if you have it), then the first key action (like creating a project or saving a draft). Once you trust this flow, everything else gets easier.
Before you test, write down what you expect in plain words: which event should fire, exactly when, and what it should contain. This is the smallest useful event spec.
Here’s an end-to-end routine that works even in messy AI-generated codebases:
- Run the flow like a real user and watch real-time events as you do it.
- Pause at each moment that should trigger an event and confirm it happens at that moment.
- Repeat the same action twice to catch double firing. One click should mean one event.
- Open the event payload and check the basics: user_id (or anonymous_id), session_id, plan or tier, source (web/app/referral), screen/page name, and any error state.
- After processing, confirm the same event appears in reports with the same counts and breakdowns.
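It can help to write the expected payload down as a type you check against while auditing. The field names below are examples; match them to whatever your tracker actually sends:

```typescript
// A minimal payload shape to audit against. Field names are
// examples, not a required schema.
interface EventPayload {
  user_id?: string;      // set after login
  anonymous_id?: string; // set before login
  session_id: string;
  plan?: string;         // plan or tier, if you use it
  source: string;        // web / app / referral
  page: string;          // screen or page name
  error?: string;        // safe error code, only on failure
}

// Every event should carry at least one identity field.
function hasIdentity(p: EventPayload): boolean {
  return Boolean(p.user_id || p.anonymous_id);
}
```

A quick check like `hasIdentity` catches the common case where an event fires before the tracker has assigned any ID at all, producing rows you can never join back to a user.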
Use one concrete “failure” scenario while you test. If you submit the signup form and get a validation error, you should see a failure event (or an error property) without also seeing a success event. If both appear, your funnel will look better than reality.
As you move to the next flow, keep notes: event name, expected moment, actual moment, duplicates, missing properties, and which file or component seems responsible.
Find duplicate firing and missing events
Duplicate events are one of the fastest ways to lose trust in your numbers. Start by comparing what you see in dashboards with what you can reproduce in a quick manual test.
How to spot duplicates (and why they happen)
Duplicates usually show up as the same event name with the same key properties firing within a second or two. Page views that increase even when you didn’t navigate are another tell.
Common red flags include repeated identical events, funnels where later steps have more completions than earlier steps, sudden spikes that don’t match traffic, and metrics that change a lot when you refresh.
Root causes in AI-generated apps are often simple: rerender loops in React, tracking added in both a component and a global handler, event listeners bound twice, or network retries that resend the same event without any idempotency.
How to spot missing events
Missing events show up as drop-offs that don’t match what you see in user testing. If you can complete “Add to cart” in real life but the funnel shows a big gap, you likely have tracking that never fires, fires only sometimes, or fires before the real action finishes.
Fixes that usually work are straightforward: add a guard so an event can only fire once per action, debounce click handlers, move tracking to one layer (often the final submit handler), add an idempotency key (like order_id or request_id) so retries don’t create duplicates, and track after success instead of on button click (especially for auth and payments).
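The idempotency-key fix can be sketched in a few lines. The in-memory Map below is illustrative only; on a real server you would persist this (for example, a unique constraint on the order ID in your database), and `send` stands in for your analytics call:

```typescript
// Idempotent tracking keyed on a business ID (like order_id),
// so retries and webhook re-sends can't create duplicate events.
const processedOrders = new Map<string, true>();

function trackPurchaseOnce(
  orderId: string,
  send: (name: string, props: { order_id: string }) => void
): boolean {
  if (processedOrders.has(orderId)) return false; // retry: already counted
  processedOrders.set(orderId, true);
  send("purchase_completed", { order_id: orderId });
  return true;
}
```

The key point is that the dedupe key is a business identifier, not a timestamp or random ID, so a payment retry and the original attempt map to the same key and count once.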
Confirm the fix by running the same flow again and comparing before vs after. Also retest on a slow network, because retries are where duplicates and missing events hide.
Verify identity, sessions, and attribution
If your charts look “mostly right” but funnels are off, identity and session handling is often the reason. AI-generated apps frequently mix client-side trackers, auth libraries, and quick fixes that don’t agree on who the user is.
Identity: pick one source of truth
Define a single reliable user ID strategy and apply it everywhere. In most apps, that means tracking anonymously until login or signup completes, then switching to a stable internal user ID (not an email, not a display name).
A common bug is incorrect merging of anonymous and logged-in users. Example: a user visits the site anonymously, signs up, logs out, then signs in again in a new tab. If your code reuses the old anonymous ID, you can merge two people into one profile or split one person into many.
When you test identity, focus on a few scenarios: sign up then refresh (ID should stay stable), log out and back in (logged-in ID should be the same, anonymous ID behavior should be consistent), do the same action in a new tab (one user with two sessions, not two users), and compare an incognito window or second device (don’t merge without a real login).
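The strategy above can be sketched as a tiny identity model. Function names and shapes here are illustrative, not any particular tracker's API:

```typescript
// One source of truth for identity: anonymous until login,
// then a stable internal user ID.
interface Identity {
  anonymousId: string;
  userId: string | null;
}

function createIdentity(anonymousId: string): Identity {
  return { anonymousId, userId: null };
}

function login(id: Identity, internalUserId: string): Identity {
  // Switch to the stable internal ID; keep the anonymous ID so
  // pre-login events can be merged once, server-side.
  return { ...id, userId: internalUserId };
}

function logout(id: Identity, freshAnonymousId: string): Identity {
  // Issue a NEW anonymous ID on logout so the next visitor on
  // this device isn't merged into the previous user's profile.
  return { anonymousId: freshAnonymousId, userId: null };
}
```

The logout rule is the one AI-generated apps most often get wrong: reusing the old anonymous ID after logout is exactly how two people end up merged into one profile.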
Sessions and attribution: keep it simple
Session boundaries should match real behavior: new session after a long gap, not after every refresh. If your tracker creates a new session on reload, you inflate “new sessions” and break funnels.
For attribution, capture the basics consistently: source, campaign, and referrer at the first meaningful entry. Store it once per session (or first touch) and reuse it, instead of rereading the URL on every page. That prevents accidental overwrites when users navigate, pay, or return from an external provider.
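A first-touch version of this can be sketched with a small helper. The `Map` stands in for sessionStorage, and the fields are examples:

```typescript
// Capture attribution once per session and reuse it, instead of
// rereading the URL on every page navigation.
interface Attribution {
  source: string;
  campaign: string;
  referrer: string;
}

function getAttribution(
  store: Map<string, string>, // stand-in for sessionStorage
  current: Attribution        // parsed from the current URL/referrer
): Attribution {
  const saved = store.get("attribution");
  if (saved) return JSON.parse(saved) as Attribution; // first touch wins
  store.set("attribution", JSON.stringify(current));
  return current;
}
```

Because the saved value wins, a user who lands from a campaign and then navigates, pays, or returns from an external payment provider keeps the original source instead of being overwritten with “direct.”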
Check event properties and data safety
Good event names are only half the job. If properties are messy or risky, your charts will lie and you can create a privacy problem without noticing.
Start by writing a short, strict must-have set of properties for each event. Keep only what you actually use in reports, funnels, or alerts. Extra fields feel harmless, but they quickly turn into inconsistent junk.
A simple must-have list might include:
- user_id (or anonymous_id when logged out)
- source (where the action started, like "pricing" or "settings")
- plan or product_id (only if you use it)
- value (a number, same unit every time)
- environment (prod vs staging)
Then check types and consistency. One app might send value: "19.99" as a string, another sends value: 19.99 as a number, and a third sends value: null when the UI fails. Pick one format, enforce it, and decide what to do when data is missing (drop the event, set a default, or mark it invalid).
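A small normalizer at the tracking boundary enforces one format. The policy sketched here (return null on missing or junk data and let the caller decide) is a choice, not a rule:

```typescript
// Normalize `value` to a single format (a finite number) and
// surface bad data instead of silently passing it through.
function normalizeValue(raw: unknown): number | null {
  if (typeof raw === "number" && Number.isFinite(raw)) return raw;
  if (typeof raw === "string" && raw.trim() !== "") {
    const parsed = Number(raw);
    if (Number.isFinite(parsed)) return parsed;
  }
  return null; // missing or junk: drop, default, or mark invalid
}
```

Running every outgoing `value` through one function like this means `"19.99"`, `19.99`, and `null` can no longer coexist in the same column of your reports.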
Data safety is non-negotiable. Events get logged, replayed in debugging tools, and stored for a long time. Scan client payloads and server logs for red flags like passwords, one-time codes, reset links, access tokens, API keys, private URLs, full card numbers, CVV, full bank details, raw request bodies that include secrets, or “debug” properties that mirror whole objects.
Make error events useful without leaking data. Instead of dumping stacks and payloads, capture what helps you fix the issue: where it failed (screen, step, endpoint name), what the user saw (short message), and a safe error code.
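One way to enforce this is to build error events through a single constructor that only knows about safe fields. The shape and names below are illustrative:

```typescript
// Build an error event from safe fields only, instead of dumping
// the raw error object, stack trace, or request body.
interface SafeErrorEvent {
  screen: string;       // where it failed
  step: string;         // which step or endpoint name
  error_code: string;   // a safe, stable code
  user_message: string; // the short message the user saw
}

function toSafeErrorEvent(
  screen: string,
  step: string,
  err: { code?: string; message?: string }
): SafeErrorEvent {
  return {
    screen,
    step,
    error_code: err.code ?? "unknown",
    // Truncate; never include the raw stack or payload.
    user_message: (err.message ?? "").slice(0, 80),
  };
}
```

Because the constructor can only emit these four fields, a “helpful” AI edit that tries to attach the whole response object fails type checking instead of leaking data.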
Align dashboards with the truth
A dashboard is only useful if every chart answers one clear question. If a chart mixes goals (like “signups and activations”), it becomes hard to spot when tracking breaks. Name the question first, then make sure the chart’s math matches your event definitions.
A simple habit that keeps dashboards honest is writing a one-line definition under each key metric. Example: “Activated this week = users who completed onboarding AND created their first project.” It forces clarity and avoids “close enough” funnel steps.
Before you trust the numbers, run an alignment check:
- Confirm each chart uses one event (or a clearly defined set) and one time window.
- Check filters like environment, platform, country, and app version for accidental hiding or double-counting.
- Make sure funnel steps map 1:1 to your event taxonomy.
- Add at least one external sanity check (database counts, server logs, payment provider exports).
- Decide how you exclude bots and internal traffic, and write down the rule.
Example: your dashboard shows 1,200 “New signups” yesterday, but the database has only 900 new user records. That often means the signup event fires on both “account created” and “email verified,” or it retries on refresh. Fix the event so it fires once (server-side if possible), then update the chart to count the corrected event only.
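A reconciliation check like that can be automated in a few lines. The 5% tolerance below is just an example threshold, not a standard:

```typescript
// Compare an analytics count against a truth source (database,
// payment provider export) and classify the mismatch.
function reconcile(
  analyticsCount: number,
  truthCount: number,
  tolerance = 0.05 // example threshold; pick your own
): "ok" | "overcounting" | "undercounting" {
  if (truthCount === 0) return analyticsCount === 0 ? "ok" : "overcounting";
  const ratio = analyticsCount / truthCount;
  if (ratio > 1 + tolerance) return "overcounting";
  if (ratio < 1 - tolerance) return "undercounting";
  return "ok";
}
```

Run this daily against your signup and order tables and a duplicate-firing bug shows up as a persistent “overcounting” result instead of a surprise weeks later.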
Example: fixing a broken signup and checkout funnel
An AI-built app shows a 90% signup conversion in the dashboard, but revenue is flat. The founder thinks marketing is the issue. In reality, the numbers are lying because the funnel is stitched together from copied snippets and mixed client and server tracking.
What we found in the data
“Signup Completed” was firing twice for many users: once when the form submitted, and again when the app redirected to the welcome page. Some users triggered it a third time on refresh.
At the same time, “Payment Confirmed” was missing for a big chunk of real purchases. Checkout used a third-party payment page, but the app never recorded the final success webhook as an event, so the dashboard couldn’t connect signups to paid users.
The result was predictable: signup events were inflated by duplicate client firing, payment success was undercounted because the server never logged it, and the funnel looked healthy while the business wasn’t.
How we fixed it
We moved “Signup Completed” to a single handler that runs only after the server confirms the user record was created. Then we made the server-side “Payment Confirmed” event idempotent (the same purchase ID can only be counted once), so retries and webhook re-sends don’t create duplicate revenue events.
To validate, we ran a small set of test users (new browser, returning browser, slow network) and compared real signups in the database vs “Signup Completed” events, successful charges vs “Payment Confirmed” events, and funnel step counts before and after.
After the fix, signup conversion dropped to a more realistic number, and the funnel showed the true drop-off at payment. Decisions got easier: fix checkout friction, not ad spend.
Quick checks and next steps
If you only do one thing this week, audit the handful of actions your business depends on (signup, activation, payment, upgrade). Small mistakes here can make every report feel random.
Use this checklist on any key event:
- Fires once (no double send on refresh, back button, retries, or both client and server)
- Fires at the right moment (after the action truly succeeds, not when a button is clicked)
- Has the right properties (plan, price, currency, screen, error reason) and no junk values
- Has the right user identity (stable IDs, no accidental merging or splitting)
- Shows up in the dashboard the way you expect (counts match real actions you can reproduce)
Once an event is “true,” write down what it means in plain language: the event name, when it triggers, required properties, and what should never be included (passwords, full tokens, raw card data). This one note makes future changes safer, especially when the code gets edited by AI tools.
To keep things from drifting, do a small monthly re-check: retest your top events end-to-end in a staging or test account, compare a small sample of real sessions to dashboard totals, review recent code changes that touched tracking or auth, and remove events no one uses.
If your app was generated by Lovable, Bolt, v0, Cursor, or Replit, a code-level tracking review is worth it. These projects often end up with duplicated handlers, mixed client/server tracking, and identity bugs that only show up under real traffic. If you want a second set of eyes, FixMyMess (fixmymess.ai) can start with a free code audit to pinpoint duplicate firing, missing events, and risky data capture before you scale.
FAQ
What’s the fastest way to tell if my analytics is broken?
Start with one flow that matters to revenue, like signup or checkout. Run it yourself and confirm each event fires exactly once, only after the action truly succeeds, and with the same meaning every time.
Why do I see purchases or signups counted when the action didn’t actually succeed?
Most AI-built apps track the click, not the outcome. A “purchase” event should fire after payment success is confirmed (ideally by the server), not when the user taps the button or lands on a “thank you” page.
What causes the same event to fire twice in AI-generated code?
Duplicate firing usually happens when tracking is wired in two places, like a UI handler plus a React effect, or both client and server for the same action. It also happens on retries, refresh, or redirects that trigger the same code path again.
How do I verify analytics numbers against “the truth”?
Treat the database or payment provider as your truth source, then reconcile events to it. If your orders table says 50 paid orders but analytics shows 80, your tracking is inflating counts and needs a code-level fix.
How many events should I track in a new product?
Define a small set of business-critical events first, then ignore the rest until those are reliable. Most teams get better decisions from five clear events than from fifty noisy ones.
How should I name events so they don’t turn into a mess?
Pick one simple convention like object_action and keep tense consistent. Avoid UI-based names like blueButtonClick and instead name the user intent, like trial_started or project_created.
Should I track user actions and backend system events together?
Track user actions separately from system events so funnels stay meaningful. For example, “started trial” is a user action, while “webhook received” is a system event; mixing them makes conversion charts lie.
When should I track events on the client vs the server?
Use client-side events for UI context and intent, like page views and button clicks, and server-side events for outcomes you must trust, like account creation and payment success. A common pattern is “started” on the client and “completed” on the server.
Why do my funnels look wrong even when events seem to be firing?
Identity issues show up as users being merged or split, which breaks funnels and retention. A good default is anonymous tracking before login, then a stable internal user ID after signup/login, with consistent session handling across tabs and refreshes.
What data should never be sent in analytics events?
Don’t send passwords, one-time codes, access tokens, full card data, or raw request bodies as event properties. Keep properties minimal and consistent, and use safe error codes instead of dumping full stack traces or payloads.