Customer can see someone else's data: what to do first
If a customer can see someone else's data, contain it fast, capture the right evidence, and send clear updates while you fix the root cause.

What it means when a customer sees someone else’s data
If a customer can see someone else’s data, treat it as urgent. Even if it looks like a UI glitch, it can expose private information, break trust quickly, and create legal or contractual risk (privacy laws, NDAs, enterprise terms).
“Someone else’s data” means any information that belongs to a different user, account, or company (tenant). It can show up in obvious places, or in small details that still identify people.
Examples include another user’s:
- Name, email, address, or profile details
- Invoices, payment status, subscription plan, or billing history
- Messages, support tickets, or internal notes
- Files, documents, or exports (CSV/PDF)
- Admin-only views like user lists, audit logs, or API keys
One important distinction:
- A data isolation bug is a product defect that returns or displays data to the wrong tenant.
- A breach means an unauthorized party actually accessed data (you have evidence, or strong signals, that it likely happened).
Early on, you often don’t know which it is. Act as if the exposure is real until you can prove otherwise.
The practical goals stay the same:
- Stop the exposure quickly (containment).
- Learn the scope (who saw what, how often, and since when).
- Fix the root cause safely.
- Communicate clearly while you work, then explain what changed.
Immediate containment in the first 15 minutes
Treat this as a live security incident, not a normal bug ticket. Your first job is to stop further exposure, even if you don’t understand the cause yet.
Start by acknowledging the report and opening an incident log. Write down the exact time you received it, who reported it, what they saw (copy their wording), and any screenshots they provide. From then on, timestamp every action and decision.
Assign one incident owner. They don’t have to write the fix themselves, but they should coordinate decisions, keep notes, and send updates so the team doesn’t pull in different directions.
Reduce the blast radius fast:
- If you suspect a specific feature (admin view, search, exports, recent activity, shared inbox), disable it or block the endpoint.
- If you can’t isolate it quickly, switch to read-only or maintenance mode so customers can’t trigger new requests that might expose data.
While you contain the issue, freeze risky changes. Pause deploys, migrations, and background jobs that rewrite data. Avoid “quick fixes” made directly in production until you have a clearer picture.
A simple containment checklist:
- Open an incident log with timestamps and the reporter’s details
- Name one incident owner (and a backup)
- Disable the suspected feature/endpoint or affected role
- Use read-only or maintenance mode if scope is unclear
- Pause deploys and migrations until containment is confirmed
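The "disable the suspected feature" step is easiest when you already have a kill switch. A minimal sketch of one, with illustrative names (this is not a specific feature-flag library's API), is a central registry that fails safe: flags nobody registered count as off.

```python
# Minimal kill-switch sketch (hypothetical names, not a real library's API).
class KillSwitch:
    def __init__(self):
        self._enabled: dict[str, bool] = {}

    def enable(self, feature: str) -> None:
        self._enabled[feature] = True

    def disable(self, feature: str) -> None:
        # Containment step: flip the suspected feature off for everyone.
        self._enabled[feature] = False

    def is_enabled(self, feature: str) -> bool:
        # Fail closed: a flag nobody registered is off by default.
        return self._enabled.get(feature, False)


flags = KillSwitch()
flags.enable("invoices_page")
flags.disable("invoices_page")            # incident containment
print(flags.is_enabled("invoices_page"))  # False
print(flags.is_enabled("unknown_flag"))   # False -- unknown means disabled
```

The fail-closed default matters here: during an incident you want a missing or misspelled flag to mean "off", never "on".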
Confirm and scope the issue without spreading it
You need to confirm the report quickly without asking the customer to send more sensitive information.
Ask for the smallest set of steps that led to the issue. Ask for actions, not private content. Useful details include the time it happened, which page it appeared on, and which workspace/account they were using.
Questions that usually help:
- What exact page or action showed the other data?
- Did it happen once, or every time they refresh?
- Did they recently log out and back in, or switch accounts/workspaces?
- Does it happen in another browser or on mobile?
- Roughly how much data looked wrong (one item, many rows, a whole account view)?
Reproduce safely. Use a brand-new test tenant and a least-privileged user (not an admin). If you need two tenants, create two clean test accounts so you don’t touch real customer data while testing.
Then check how it behaves:
- Hard refresh
- New session (incognito)
- Second device
If the issue appears only after a deploy, after switching accounts, or only for a specific role, that often points to caching, session handling, or authorization boundaries.
Treat it as confirmed exposure when you can reproduce it, or when logs show cross-tenant reads (even if you can’t reproduce yet). Treat it as suspected if it’s a one-time report with no supporting evidence, but keep urgency until you rule it out.
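Scanning logs for cross-tenant reads can be automated with a simple filter. This sketch assumes a hypothetical log shape where each entry records which tenant made the request and which tenant owns the record the server returned; adapt the field names to whatever your logging actually captures.

```python
# Hypothetical log shape: each entry records the requesting tenant and the
# tenant that owns the record the server returned for that request.
def find_cross_tenant_reads(entries):
    """Return entries where the returned record belongs to another tenant."""
    return [
        e for e in entries
        if e["requesting_tenant"] != e["record_tenant"]
    ]


logs = [
    {"request_id": "r1", "requesting_tenant": "acme", "record_tenant": "acme"},
    {"request_id": "r2", "requesting_tenant": "acme", "record_tenant": "globex"},
]
# r2 is a cross-tenant read: acme received a globex record.
print(find_cross_tenant_reads(logs))
```

Even one match in the incident window is enough to treat the exposure as confirmed, per the rule above.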
What to document so you can fix it and explain it
Your notes become the backbone of the fix and the explanation later. Capture the report exactly before anyone paraphrases it.
Record:
- Who reported it and how to reach them
- When they noticed it
- What screen/feature they were using
- Their exact description of what looked wrong
Ask for evidence carefully. If they share a screenshot, ask them to blur names, emails, addresses, payment details, and anything not needed to verify the bug. Store any files in a restricted folder and note who has access.
The minimum technical details to capture
You want enough identifiers to replay the path without collecting more sensitive data than necessary.
Capture:
- Timestamps (with time zone): report received, first repro, containment, and each change made
- Reporting user ID and tenant/workspace ID (and the other tenant ID if you can identify it)
- Request IDs/correlation IDs and session identifiers
- Affected endpoints/pages, filters, and any unusual query parameters
- App version or commit hash and last deploy time
Pull log snapshots early, because some systems rotate. Save relevant auth logs, gateway/load balancer logs, app logs, and database query logs for the time window. Note where they came from and how long the source system keeps them.
Keep a running timeline of containment actions (feature flags, access disabled, caches purged, rollback started) and who did each step.
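The running timeline above can be as simple as an append-only list of timestamped entries. A minimal sketch (illustrative structure, not a prescribed tool):

```python
# Append-only incident log sketch: every action gets an actor and a UTC
# timestamp the moment it is recorded.
from dataclasses import dataclass, field
from datetime import datetime, timezone


@dataclass
class IncidentEntry:
    actor: str
    action: str
    at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))


@dataclass
class IncidentLog:
    entries: list = field(default_factory=list)

    def record(self, actor: str, action: str) -> IncidentEntry:
        entry = IncidentEntry(actor, action)
        self.entries.append(entry)
        return entry


log = IncidentLog()
log.record("alice", "report received; customer wording copied verbatim")
log.record("bob", "disabled invoices feature flag")
```

Using UTC for every entry avoids the time-zone ambiguity called out earlier, and keeping entries append-only preserves the trail even when a step later turns out to be wrong.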
Fast triage: common root causes to test first
Most cross-tenant leaks come from a small set of failure modes. Start with the simplest and work toward the subtle ones.
A tight order of checks
- Authorization and tenant filtering
Look for reads that load records by id without checking ownership, or endpoints that forget to apply tenant/workspace filters. Watch for IDOR-style paths like /invoices/123 where the server doesn’t verify the record belongs to the current tenant.
- Session mix-ups
Verify cookies and tokens are scoped correctly (domain, path, environment). Watch for shared demo accounts, reused signing keys across environments, or a proxy that strips auth headers.
- Caching mistakes
Check CDN and server-side caching headers. A missing Vary on auth headers, or caching HTML/API responses that should never be shared, can cause user A to receive a response meant for user B. Also inspect client state: stale local storage can display old data after logout.
- Database query bugs
Review recent query changes, joins, and default scopes. Common issues include joins that drop the tenant constraint, soft-deleted records appearing in results, and “fallback” queries when a filter is empty.
- Background jobs and attachments
Confirm exports, PDFs, emails, and webhooks are built from the correct tenant context. Queue workers often run without the same request-scoped checks.
After each check, answer one question: is the wrong data coming from the server, or is the UI showing the wrong thing? Capturing the API response body and headers usually makes this clear.
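The first check above, tenant-scoped loads, is worth a concrete sketch. The names here are illustrative (a toy in-memory store, not a real ORM); the rule it demonstrates is real: never load a record by id alone, always scope the lookup to the current tenant.

```python
# Tenant-ownership check sketch: a record loaded by id must also belong to
# the requesting tenant, or the request fails as if the record didn't exist.
class NotFound(Exception):
    pass


INVOICES = {  # toy store: invoice_id -> record
    123: {"tenant_id": "acme", "amount": 100},
    456: {"tenant_id": "globex", "amount": 250},
}


def get_invoice(current_tenant: str, invoice_id: int) -> dict:
    record = INVOICES.get(invoice_id)
    # Return the same "not found" for missing and not-yours, so valid ids
    # belonging to other tenants can't be enumerated.
    if record is None or record["tenant_id"] != current_tenant:
        raise NotFound(f"invoice {invoice_id} not found")
    return record


get_invoice("acme", 123)    # ok: acme owns invoice 123
# get_invoice("acme", 456)  # raises NotFound -- IDOR-style read blocked
```

Returning an identical error for "missing" and "belongs to someone else" is a deliberate choice: distinct errors leak which ids exist in other tenants.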
Short-term mitigations while the code fix is built
Your goal is to stop new exposure fast, even before you have the full root cause.
Reduce access while you investigate
If you suspect session or token mix-ups, forcing a clean login often stops the bleeding.
Common short-term moves:
- Revoke active sessions and require re-authentication (all users, or at least the affected tenant(s)).
- Temporarily disable account switching, impersonation, and “view as” admin features.
- Pause exports and downloads (CSV, PDF), and restrict admin screens that show many records.
- Put sensitive pages behind a temporary maintenance gate if you can’t trust isolation yet.
- Add rate limits to reduce bulk pulling while you investigate.
Tell support what changed so they can explain why people are being logged out or why exports are paused.
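One common way to implement "revoke active sessions" is an epoch bump: keep a per-tenant session epoch, stamp it into each session at issue time, and reject sessions from older epochs. The sketch below uses illustrative names, not a specific framework's API.

```python
# Session-epoch sketch: bumping a tenant's epoch invalidates every session
# issued before the bump, forcing a clean re-login.
class SessionStore:
    def __init__(self):
        self._epoch: dict[str, int] = {}

    def epoch(self, tenant: str) -> int:
        return self._epoch.get(tenant, 0)

    def issue(self, tenant: str) -> dict:
        return {"tenant": tenant, "epoch": self.epoch(tenant)}

    def revoke_all(self, tenant: str) -> None:
        # Containment: every outstanding session is now stale.
        self._epoch[tenant] = self.epoch(tenant) + 1

    def is_valid(self, session: dict) -> bool:
        return session["epoch"] == self.epoch(session["tenant"])


store = SessionStore()
session = store.issue("acme")
store.revoke_all("acme")          # incident containment for this tenant
print(store.is_valid(session))    # False -- user must log in again
```

The advantage over deleting individual session rows is that one counter update revokes everything at once, per tenant or globally.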
Add temporary guardrails
Add server-side checks that are hard to bypass. Validate tenant ownership for every request, not just the first page load.
Also consider disabling caching for sensitive endpoints and pages (including CDN or reverse-proxy caching). If you can’t fully disable it, shorten cache time and make sure the cache key includes tenant and user context.
Finally, fail closed: if the tenant id is missing, ambiguous, or mismatched, return an error instead of guessing.
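Both guardrails above can be combined in one small function: the cache key must include tenant and user context, and a missing tenant is an error, never a guess. Names are illustrative.

```python
# Fail-closed cache keying sketch: no tenant/user context, no caching.
def cache_key(path, tenant_id, user_id):
    """Build a cache key scoped to one tenant and one user."""
    if not tenant_id or not user_id:
        # Fail closed: refuse rather than risk a shared/ambiguous key.
        raise ValueError("refusing to cache without tenant and user context")
    return f"{tenant_id}:{user_id}:{path}"


cache_key("/invoices", "acme", "u1")   # "acme:u1:/invoices"
# cache_key("/invoices", None, "u1")   # raises ValueError -- fail closed
```

A key built this way makes the user-A-receives-user-B's-response class of cache bug structurally impossible, because two users can never share an entry.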
How to communicate with the reporting customer
Reply quickly, even if you don’t have answers yet. Silence reads like you’re ignoring a serious problem.
Use a calm, direct message: you received the report, you’re treating it as urgent, and you’re taking steps to prevent further exposure while you investigate. Don’t argue, speculate, or blame their setup.
Ask for the minimum details that help you reproduce:
- Which page or feature showed the wrong data (and what they expected)
- The time it happened (with their time zone)
- The workspace/account they were logged into
- Whether it repeats after refresh, logout, or in a private window
- A screenshot with sensitive fields blurred (only if they can do it safely)
Set an update cadence you can keep. A good default is: “We’ll update you every 2 to 4 hours, even if we’re still investigating.”
A tight message you can reuse:
“Thanks for flagging this. We’re investigating urgently and have started containment steps to prevent further exposure. Please share the page/feature name, approximate time, and the workspace you were in. We’ll send an update within 2 hours, and then every few hours until resolved.”
Once you have a mitigation or fix, follow up in plain language:
- What you found
- What you changed (or temporarily disabled)
- What data may have been visible and for how long (if known)
- What you’re doing next (monitoring, additional checks)
How to communicate internally and to other customers
Pick one message owner (often the incident lead). Route updates through them so support, sales, and engineering don’t tell different stories.
Use a simple timeline in every update so people can scan quickly: discovered, contained, fixing, verified. Use one time zone.
Be explicit about what you know vs what you’re still confirming. Don’t guess. Point to evidence (logs, screenshots, specific endpoints) and state what you’re checking next.
When you describe potential impact, name the data types, not vague buckets like “personal data.” Examples: account email, name, company name, invoice PDF, last 4 digits of a card, shipping address, support ticket text, uploaded files. Only say “no passwords were exposed” if you have proof.
A simple internal update template:
- Status: discovered at [time], contained at [time], fix in progress, verification in progress
- What happened: 1-2 sentences
- Potential data involved: specific fields and screens
- What we know / what we’re confirming: two short lines
- Next update time: an actual time
Before you notify other customers, align on a clear trigger (for example: confirmed cross-tenant access, confirmed export/download exposure, or evidence it lasted beyond a single session).
Example scenario: tenant data mixed up after a deploy
A support ticket arrives on a Friday afternoon: a customer says they opened the Invoices page and saw another company’s invoice number, amount, and billing address.
The team treats it as a potential data exposure incident and contains it before debugging:
- They disable the Invoices page with a feature flag.
- They turn off the cache layer for invoice responses to avoid serving mismatched cached data.
- They revoke active sessions for accounts that recently used billing, forcing re-auth.
Once contained, they reproduce the issue using two test tenants and check access logs for the invoice endpoint. The pattern is clear: one API route returns mismatched tenant IDs, and only after the last deploy.
A diff of the recent change shows the root cause. A refactor moved query logic into a helper that no longer required a tenantId, so one endpoint stopped applying the tenant filter.
They ship a hotfix that:
- Adds explicit tenant authorization checks on the endpoint
- Adds tests that run the same request across multiple accounts to confirm isolation
- Fails closed if tenant context is missing
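The "same request across multiple accounts" test from the hotfix list can be sketched like this. The listing function here is a stand-in for the real endpoint handler, with a toy in-memory dataset; the shape of the assertion is the point.

```python
# Isolation regression test sketch: run the same request as two different
# tenants and assert each one sees only its own rows.
RECORDS = [
    {"id": 1, "tenant_id": "acme"},
    {"id": 2, "tenant_id": "globex"},
]


def list_invoices(current_tenant: str) -> list:
    """Stand-in for the real endpoint: must apply the tenant filter."""
    return [r for r in RECORDS if r["tenant_id"] == current_tenant]


def test_tenant_isolation():
    for tenant in ("acme", "globex"):
        rows = list_invoices(tenant)
        assert rows, "each test tenant needs seed data, or the test is vacuous"
        assert all(r["tenant_id"] == tenant for r in rows)


test_tenant_isolation()
```

Note the "seed data" assertion: without it, an endpoint that returns nothing for everyone would pass the isolation check while being completely broken.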
They follow up with the customer using plain language: what happened, what data could have been visible, what stopped it, and what prevents a repeat.
Common mistakes that make data exposure incidents worse
Small choices in the first hour can make the impact bigger and the cleanup harder.
Mistake 1: Making lots of changes without a clear record
Under pressure, it’s tempting to “try a few things” until the problem goes away. Then you lose the trail of what actually helped, and you can destroy evidence you’ll need later.
Write down every change: what changed, who did it, when, and why. Treat rollbacks, feature flag toggles, and config changes as formal steps.
Mistake 2: Letting sensitive data leak during the investigation
Support teams sometimes ask for screenshots, screen recordings, or exported files. That can spread private data further.
Ask for the minimum: timestamps, page name, and steps taken. If a screenshot is unavoidable, instruct the customer to blur names, emails, tokens, and anything not needed to reproduce.
Other mistakes to avoid:
- Posting public updates before you’ve confirmed basics (who is affected, what data, whether access is real).
- Assuming it’s “just the UI” without verifying the backend enforces tenant checks.
- Testing the fix with only one admin account and missing other roles or API paths.
- Treating the first report as the only case instead of checking logs for similar access patterns.
A common trap: a frontend patch stops the UI from showing mixed data, so someone declares it fixed. Later, you learn the API still allows cross-tenant reads via a direct request. Always confirm isolation on the server, across roles and tenants.
Quick checklist and practical next steps
Treat “I can see another customer’s data” as an emergency. Stop further exposure first, then gather enough facts to fix it and explain it clearly.
A practical checklist, in order:
- Contain (now): Disable the feature/endpoint showing the wrong data, turn off risky caching, and pause deploys/config changes until containment is confirmed.
- Scope (next 30-60 minutes): Reproduce safely (use test accounts), identify affected screens/APIs, and estimate the time window (since last deploy/migration/config change).
- Preserve evidence: Save request IDs, timestamps, user IDs, tenant IDs, and relevant logs. Redact screenshots before sharing internally.
- Patch safely: Add explicit tenant checks, fix cache keys, and add automated tests that fail if tenant boundaries are crossed.
- Communicate: Acknowledge the report, keep an update cadence you can meet, and document what happened, who was affected, and what changed.
Keep one incident doc from minute one. Include who noticed it, how you contained it, what logs showed, and the exact commit or config change that fixed it.
If you’re dealing with an AI-generated prototype that’s hard to trust under pressure (especially around auth, tenant checks, and caching), FixMyMess (fixmymess.ai) can run a free code audit to pinpoint the isolation bug and help repair and harden the app quickly, with most projects completed within 48-72 hours.
FAQ
Is seeing another customer’s data always an emergency?
Treat it as a potential security incident immediately. Even if it turns out to be a UI issue, you should assume real exposure until you can prove the data was not accessible cross-tenant.
What’s the difference between a data isolation bug and a breach?
A data isolation bug is your product returning or showing the wrong tenant’s data due to a defect. A breach is when an unauthorized party actually accessed data, based on evidence or strong signals; early on, you often won’t know which one it is.
What should we do in the first 15 minutes?
Containment first: disable the suspected feature or endpoint, and if you can’t isolate it quickly, switch to read-only or maintenance mode. Pause deploys and risky changes until you’re confident the exposure is stopped.
What should we document from the start?
Open an incident log and timestamp everything: when the report arrived, what the user saw (in their words), and every action you take. Assign one incident owner to coordinate decisions and updates so the team doesn’t work at cross purposes.
How do we ask the customer for details without collecting more sensitive data?
Ask for actions and context, not sensitive content: the page/feature, approximate time with timezone, which workspace they were in, and whether it repeats after refresh or re-login. If a screenshot is necessary, ask them to blur names, emails, addresses, billing details, and anything not needed to verify the issue.
How can we reproduce safely without touching real customer data?
Reproduce with clean test tenants and least-privileged test users, not real customer accounts. Compare behavior across a hard refresh, a new session (incognito), and a second device to narrow down whether it’s caching, session handling, or authorization.
What are the most common causes of cross-tenant data leaks?
Start with authorization and tenant filtering: look for endpoints that load by record ID without verifying ownership. Then check session mix-ups, caching headers and cache keys, database query changes that drop tenant constraints, and background jobs that run outside request-scoped checks.
What short-term mitigations can reduce risk while we build a proper fix?
Force a clean login by revoking sessions if you suspect session or token confusion. Temporarily disable account switching, impersonation, exports/downloads, and admin screens that expose many records, and consider turning off caching for sensitive endpoints so responses can’t be shared across users.
What’s the best way to reply to the reporting customer?
Respond quickly and calmly: confirm you’re treating it as urgent and that containment steps are underway. Set an update cadence you can keep, and avoid speculation; share what you know, what you’re checking next, and what changed once you have a mitigation or fix.
When should we bring in outside help like FixMyMess?
Bring in outside help when you can't confidently verify tenant isolation yourself, especially under time pressure. If the incident came from an AI-generated codebase with shaky auth, tenant checks, or caching, FixMyMess can run a free code audit and help repair and harden the app quickly, with most projects finished in 48–72 hours.