Sep 30, 2025 · 7 min read

Client intake checklist for inherited AI code for agencies

A client intake checklist for inherited AI code: ask the right questions, catch risk flags early, and set clear expectations for a 48-72 hour stabilization window.


Why agencies need a different intake for inherited AI code

A lot of AI-built projects look great in a demo. The happy path works, the UI feels polished, and nobody notices the gaps until real users show up. Then the app starts failing in ways that feel random: sign-ins loop, payments time out, and data ends up in the wrong place.

That mismatch is predictable. Many AI-generated prototypes are built to look complete before they are safe or reliable. AI tools can stitch screens together fast, but they often skip the boring work that keeps software standing up in production.

The failure points tend to repeat across projects: authentication is half-finished or patched together, secrets live in the repo or the wrong environment, the data model shifts week to week, and deployment depends on manual steps that only one person remembers. If you run intake like a normal build (feature wishlists, design preferences, timelines), you miss those risks and inherit the surprises.

A strong intake for inherited AI code focuses less on "what do you want to add?" and more on "what could break next week?" Get clear answers to:

  • Where real users get stuck today (not what’s on the roadmap)
  • Who controls the repo, hosting, and third-party accounts
  • What data is stored and what must be protected
  • What must work for the business to run (login, checkout, emails)
  • What has already been "fixed," and by whom

If you’re aiming for a short stabilization window (often 48-72 hours), the first job is stopping the bleeding: make the app reliable, close obvious security holes, and get a repeatable deploy. Improvements come after the basics hold.

A common scenario: a founder says "it’s basically done" because demo signup works. Your intake should confirm what happens when 50 users sign up, passwords reset, and the app deploys from scratch.

Set the goal: stabilize first, then improve

When you inherit AI-generated code, the fastest way to lose trust is to promise new features before the basics work. Set a clear first goal: stabilization. That gives everyone a shared definition of "done" for the first 48-72 hours and makes your estimate defensible.

Define "stabilized" in plain terms your client can test. For most apps, it means:

  • Login and signup work end-to-end (including password reset, if used)
  • The main user flow completes without crashes or confusing errors
  • The app deploys the same way every time (no mystery steps)
  • Data isn’t obviously at risk (no exposed keys, no wide-open admin access)

Then draw a hard line between stabilization, rebuild, and new features.

  • Stabilization stops the bleeding.
  • A rebuild replaces shaky foundations.
  • New features wait until the product is reliable.

Put that language in your intake so your whole team (and the client) uses the same definitions.

Set guardrails early: timeline, budget boundaries, and who makes the final call when you hit tradeoffs like "fix it fast" vs "rewrite it clean." Name one approver and one backup, and agree on response times so you’re not blocked during the stabilization window.

Finally, choose one place for decisions and requirements. It can be a short doc or a ticket board, but it must answer three questions: what the app is supposed to do today, what counts as a defect, and what is out of scope until after stabilization.

Quick project triage questions (10 minutes)

A fast triage call prevents surprises. The goal isn’t to understand every detail. It’s to learn what you’re inheriting, what’s broken right now, and whether a 48-72 hour stabilization window is realistic.

Use these questions to get the basics quickly:

  • Who built it (person or vendor), and which AI tool was used (Lovable, Bolt, v0, Cursor, Replit, etc.)? What changed since it last "worked"?
  • Where does it run today: only on someone’s laptop, a staging site, production, or nowhere? Who can access it?
  • Who is using it now (if anyone), and what are the top 1-2 workflows that must work (sign up, checkout, booking, admin edits)?
  • What is failing right now, in plain words? Ask for one recent example a user hit (error message, blank page, wrong data).
  • Are there deadlines tied to a sales demo, pilot, funding milestone, or compliance review? What happens if it slips by a week?

Listen for contradictions. If they say "it worked last week" but also "no one can log in," you already have a priority: authentication and access.

Example: a client needs a demo on Friday, the app only runs locally, and the last dev "fixed it" by pasting secrets into the frontend. That’s not a feature sprint. That’s a short stabilization job: get a deployable build, lock down secrets, and make the demo path reliable.

Access and ownership: get control before you touch code

Before you open a repo or promise a timeline, make sure the client can actually grant control. With inherited AI code, the mess often isn’t just the code. It’s accounts, tokens, and half-finished deployments that nobody owns.

Start with one blunt question: where does the code live right now? If the answer is "on someone’s laptop" or "in a tool like Replit" with no clear owner, treat that as a real risk. Access first, work second.

Minimum access you need

Ask for these items up front so you can work safely and avoid surprises later:

  • Repo location and one true admin (GitHub, GitLab, or another host)
  • Deployment access (who can push to production, if production exists)
  • Environment list (dev, staging, prod) and whether anything is shared
  • Secret storage method (env files, dashboard variables, secret manager) and who can rotate secrets
  • Domain and email ownership (who controls DNS and transactional email settings)

If the client can’t provide at least an admin and a safe place to store secrets, pause. Touching code without ownership is how you end up with broken logins, lost data, or a production outage.

Expectations around secrets and deployments

AI-generated prototypes often have exposed keys or hardcoded passwords. Even if the app "works," assume secrets need rotation.

Confirm who can change:

  • Database passwords and API keys
  • Auth provider settings (OAuth apps, JWT secrets)
  • Hosting variables and build settings

Example: you inherit a Bolt prototype that deploys from a personal account, with the database password pasted into the code. The right move is to transfer the repo, move deployment to an agency-owned workspace, and rotate secrets before any feature work.
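One piece of that rotation work can be sketched in code: replace the hardcoded credential with a required environment variable that fails fast at startup. This is a minimal illustration, not a prescribed implementation; the variable name DATABASE_URL is an assumption about the app.

```typescript
// Minimal sketch: a hardcoded secret like
//   const DB_PASSWORD = "hunter2";
// becomes a required environment variable. The app refuses to start
// without it, instead of failing mysteriously at request time.

function requireEnv(name: string): string {
  const value = process.env[name];
  if (!value) {
    throw new Error(`Missing required environment variable: ${name}`);
  }
  return value;
}

// Usage at startup (DATABASE_URL is a hypothetical name):
// const databaseUrl = requireEnv("DATABASE_URL");
```

Pair this with rotating the old value: moving a leaked key into an env file doesn't help if the leaked key still works.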

Business-critical flows to confirm (and what to ask)

To avoid surprises, agree on the few flows that must work no matter what. This prevents you from fixing the wrong thing first.

Start with identity and permissions. "Login works" is not enough. Ask who should be able to do what, and where that rule is enforced. If permissions are only hidden in the UI, a user can sometimes bypass them by guessing an ID or calling an endpoint directly.

Payments and billing also need a clear story for failure. Many prototypes only handle the happy path. Confirm what should happen when a card fails, a subscription is canceled, or a refund is needed, and who triggers those actions.

Pin down data sensitivity early. If the app touches personal data, health info, financial details, or data about minors, your security and logging choices change on day one.

Keep the questions practical:

  • Auth: Which roles exist (user, admin, staff) and what can each role do?
  • Payments: What counts as "paid," and what changes when payment fails or is refunded?
  • Data: What sensitive fields exist, and should any data be deleted on request?
  • Integrations: What connects to email, CRM, file storage, AI APIs, or webhooks, and what breaks if it goes down?
  • Admin access: Is there any "temporary" admin panel, shared password, or backdoor still in use?

Example: a founder says "only admins can export customer data." In inherited code, that rule is sometimes a button, not a permission check.
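The fix for that pattern is a check the server runs on every request, regardless of what the UI shows. A hedged sketch, with the role names, User shape, and handler signature all invented for illustration:

```typescript
// Server-side enforcement sketch: the export rule lives in one function
// the endpoint calls, not in whether a button is rendered.

type Role = "admin" | "staff" | "user";

interface User {
  id: string;
  role: Role;
}

// The rule itself, testable in isolation.
function canExportCustomerData(user: User): boolean {
  return user.role === "admin";
}

// A request handler rejects before doing any work (shape is hypothetical).
function handleExport(user: User): { status: number; body: string } {
  if (!canExportCustomerData(user)) {
    return { status: 403, body: "Forbidden" };
  }
  return { status: 200, body: "export started" };
}
```

During intake, asking "where is this rule enforced?" and getting "in the component" as the answer is itself a risk flag.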

Risk flags you can spot in the first call


Some problems show up before you ever see the repo. A good first call is less about details and more about signals: can this code run anywhere, is access clean, and can you trust what you’re being told?

Watch for these red flags:

  • It only runs on one person’s laptop, or they say, "We can’t deploy it anymore." That often means missing env setup, broken build steps, or undocumented services.
  • Secrets are handled casually: API keys pasted in chat, tokens in a shared doc, or "just use this admin password." Assume exposure until proven otherwise.
  • Auth is inconsistent: "Login works sometimes," roles don’t work, or one user can see another user’s data.
  • Bugs seem random and go away on refresh. That can point to flaky state, caching problems, or mismatched frontend and backend expectations.
  • No tests, no logs, and nobody knows what changed last. Without an audit trail, every fix becomes guesswork.

When you hear one or more of these, use calm follow-ups to turn vague pain into clear scope:

  • "Where is it running today, and who can restart it if it breaks?"
  • "How are secrets stored, and who has access right now?"
  • "Which user actions must work every time for the business to function?"
  • "When did it last work, and what happened right before it stopped?"

Example: the app was built in Replit, deployment broke last week, and they’re sharing a Stripe key in Slack. That’s enough to pause feature requests and switch to a stabilization-first intake focused on control, security, and repeatable deployment.

What to check once you see the repo (high level, non-technical)

Once you have access, you don’t need to read every file to learn whether the project is safe to stabilize. A quick scan tells you if this is a small rescue or a deeper rebuild.

Start with the data layer, because surprises there spread everywhere. Look for duplicate concepts (for example, both users and app_users), unclear "source of truth" fields, and missing migration files. If the repo can’t explain how the database changes over time, releases get risky fast.

Then do a basic security sweep, even if you’re not a security expert. Check whether user input is handled carefully, file uploads are restricted, and anything looks like it could accept raw SQL or untrusted commands. Scan for secrets in the repo (API keys, database passwords). If you see them, assume they leaked.
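A rough sweep can be scripted. The patterns below are illustrative and deliberately incomplete; for a real audit, a dedicated scanner like gitleaks or trufflehog is the better tool. This sketch just shows the shape of the check:

```typescript
// Sketch of a secrets sweep: flag file contents that look like credentials.
// The patterns are examples, not an exhaustive list.

const SECRET_PATTERNS: { name: string; pattern: RegExp }[] = [
  { name: "Stripe key", pattern: /sk_(live|test)_[A-Za-z0-9]{10,}/ },
  { name: "AWS access key", pattern: /AKIA[0-9A-Z]{16}/ },
  { name: "Hardcoded password", pattern: /password\s*[:=]\s*["'][^"']+["']/i },
  { name: "DB connection string", pattern: /postgres:\/\/\w+:[^@\s]+@/ },
];

function findSecrets(source: string): string[] {
  return SECRET_PATTERNS
    .filter(({ pattern }) => pattern.test(source))
    .map(({ name }) => name);
}
```

Anything this kind of scan flags should be treated as already leaked: rotate first, clean up the code second.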

Also look at structure. AI-built prototypes often copy the same logic into many places. That makes fixes slow and bugs easy to reintroduce.

A short set of checks that usually surfaces the biggest risks:

  • Is there a clear backend vs frontend split, or is everything mixed together?
  • Do you see the same logic repeated across multiple files?
  • Are auth and permissions handled in one place, or scattered?
  • Is there any error reporting, or does it fail silently?
  • Are there basics like backups, environment configs, and a simple deploy path?

Finally, spot performance traps early. If pages depend on many repeated API calls, or the app fetches the same data over and over, you’ll see slow loads and timeouts once real users arrive.
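One common, low-risk fix for that pattern is sharing a single in-flight request per key instead of firing a duplicate fetch from every component. A minimal sketch, where the key naming and the commented usage are assumptions:

```typescript
// Deduplicate identical concurrent API calls: callers asking for the same
// key while a request is in flight share one promise.

const inFlight = new Map<string, Promise<unknown>>();

function dedupe<T>(key: string, fetcher: () => Promise<T>): Promise<T> {
  const existing = inFlight.get(key);
  if (existing) return existing as Promise<T>;
  const request = fetcher().finally(() => inFlight.delete(key));
  inFlight.set(key, request);
  return request;
}

// Hypothetical usage: three components rendering at once trigger one fetch.
// const user = await dedupe("user:42", () =>
//   fetch("/api/users/42").then((r) => r.json()));
```

This doesn't replace proper caching, but it often removes the worst "same data fetched five times per page" behavior in one small change.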

A simple 48-72 hour stabilization plan (step by step)


A stabilization window isn’t a feature sprint. The goal is to make the app run reliably, stop the bleeding, and give everyone a clear view of what it will take to improve it next.

The 5-step plan

  1. Get access, run it, and reproduce the worst failures. Collect repo access, hosting credentials, and environment variables. Then run the app the same way users do. Pick the top 3 failures and reproduce them with clear notes.

  2. Freeze scope and define "done" for stabilization. Agree that new features wait. "Done" should be specific: sign in works, the main workflow completes, and deployment doesn’t rely on manual hacks.

  3. Fix the blockers first. Prioritize anything that stops users or creates immediate danger: broken auth, exposed secrets, crashing pages, failing builds, and deployments that only work on one machine.

  4. Add basic safeguards so it stays stable. Add simple logging, clear error messages, and a few smoke checks that catch the same break again.

  5. Hand back a short report and the next decision. Summarize what was fixed, what’s still risky, and what you recommend next (refactor, security hardening, or rebuild).
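The smoke checks in step 4 can stay very small. A sketch of a runner, where the check names and the commented endpoints are assumptions about the app under test:

```typescript
// Minimal smoke-check runner: hit the few things that must work and
// report which ones failed. An empty result means the app passed.

interface Check {
  name: string;
  run: () => Promise<boolean>;
}

async function runSmokeChecks(checks: Check[]): Promise<string[]> {
  const failures: string[] = [];
  for (const check of checks) {
    try {
      if (!(await check.run())) failures.push(check.name);
    } catch {
      failures.push(check.name);
    }
  }
  return failures;
}

// Hypothetical checks for a web app (BASE_URL and paths are assumptions):
// runSmokeChecks([
//   { name: "home page loads",
//     run: async () => (await fetch(`${BASE_URL}/`)).ok },
//   { name: "login endpoint responds",
//     run: async () => (await fetch(`${BASE_URL}/api/login`, { method: "POST" })).status !== 500 },
// ]);
```

Run it after every deploy during the stabilization window; the point is catching the same break again, not full test coverage.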

Common intake mistakes that cause blowups later

The fastest way for an inherited AI project to go sideways is to treat it like a normal build. A client might ask for new features on day one, but if the foundation is shaky, every "small change" can break three other things.

Common mistakes include:

  • Promising delivery dates before you’ve run a stabilization window
  • Waiting days for access (repo, hosting, database) and then rushing decisions
  • Defining success as "no bugs" instead of a small list of business flows
  • Fixing surface issues without rotating exposed secrets
  • Shipping changes in production without a rollback path

Example: a client asks for "one more payment option" in a prototype. You add it, payments work, but a leaked API key later causes fraud and the agency gets blamed. A solid intake includes a security reset (rotate keys, review auth) and a rollback plan before feature work.

Copy-paste intake checklist (quick checks)

Use this when you need to confirm basics fast and avoid surprises later. It’s designed for a 10-15 minute handover call plus a short follow-up.

[ ] Access confirmed: repo + hosting + database + domain/DNS (who has admin?)
[ ] Third-party accounts listed: auth/email/payments/storage/analytics (who owns each?)
[ ] Security baseline: where secrets live today, how they will be rotated, and what data is sensitive
[ ] User roles: who can do what (admin, staff, customer) and which role is highest risk
[ ] Top 3 broken flows written down with a "done" definition for each
[ ] Deploy plan: how releases happen today and who can press the button
[ ] Observability: logs exist, error tracking is on, and someone will watch it after release
[ ] Backups: database backup status + restore test (or date of last known good backup)
[ ] 48-72 hour stabilization: what’s included (fix critical breakages, stop data leaks) vs excluded (new features, redesign)
[ ] Sign-off: one decision-maker for tradeoffs, plus a fallback if they are unavailable

A simple way to set expectations: stabilization means the app stops failing in the most important places, and obvious security holes are closed. It does not mean the code is pretty, fast, or ready to scale.

Example: if a client says "checkout is broken," pin it down to one testable flow (product page to payment success) and one owner for the payment account. Without that, you can fix the code and still be blocked by missing access.

Example scenario: inherited prototype to stable release


A client comes to your agency with a Bolt prototype they used to raise interest. The app looks fine in demos, but real users can’t log in. Sometimes the login button spins forever; other times it creates accounts without saving them.

On the first call, use a stabilization-first intake. Keep the tone neutral: you’re not judging the build, you’re figuring out what it takes to make it reliable.

In practice, that usually means:

  • Confirm the one or two flows that must work this week.
  • Get access while everyone is on the call: repo, hosting, database, domain, third-party services.
  • Confirm where secrets live and who owns the accounts.
  • Note risk flags like multiple half-wired auth providers, hardcoded keys, or a shaky data model.
  • Set the 48-72 hour goal: stabilize, add basic guardrails, and ship a small safe release.

In 2-3 days, teams can often stabilize login by fixing session handling, cleaning up environment variables, and adding basic error logging so failures are visible. Obvious security problems (like exposed secrets) can be patched, and the worst "random breakage" often drops once a few tangled parts are simplified.

What usually becomes a rebuild proposal: architecture that blocks changes, unclear ownership of third-party accounts, or a database schema that can’t support real usage. Frame the tradeoff plainly: "We can patch this to work now, but if you want faster changes later, a rebuild will be cheaper than repeated fixes."

Next steps: make intake repeatable and reduce surprises

Treat intake like a small product of its own. When the process is the same every time, you spend less time chasing details and more time fixing what matters.

Send your checklist before kickoff. Ask the client to return it with access in place. If access is "coming soon," the project is already slipping. A simple rule helps: no work starts until you can see the repo and a running environment.

A practical rhythm is to book a short stabilization window before you plan new features. Put 48-72 hours on the calendar to confirm what exists, what’s broken, and what’s risky. After that, you can estimate improvements with fewer surprises.

Write risks in plain language and get sign-off on priorities. This isn’t about paperwork. It’s about preventing confusion later when a "small fix" turns into a security issue or a rebuild.

If you need outside help for inherited AI-generated code, FixMyMess (fixmymess.ai) does codebase diagnosis and repair, including security hardening, refactoring, and deployment preparation. They also offer a free code audit to surface issues before you commit to a plan.

FAQ

Why shouldn’t we use our normal feature-based intake for inherited AI code?

Start by focusing on what can break in production, not what features they want next. Ask where users get stuck, what data is at risk, who owns the accounts, and whether the app can be deployed from scratch without “special steps.”

What does “stabilization” actually mean in a 48–72 hour window?

Treat stabilization as “the app reliably does the few things the business needs.” That usually means login works end-to-end, the main workflow completes without crashes, deployments are repeatable, and obvious security holes (like exposed keys) are closed.

How do we get useful bug reports from a non-technical founder?

Ask for one recent real example, in plain words: what the user did, what they expected, and what happened instead. Get the exact screen, error message, or incorrect data outcome so you can reproduce it quickly.

What access do we need before we start work?

Don’t touch code until the client can grant admin access to the repo, hosting, database, and key third-party services. If access is fragmented or tied to a former contractor’s personal accounts, treat that as a blocker and solve ownership first.

What should we do if we find secrets in the repo or pasted into the frontend?

Assume anything hardcoded or shared casually has been exposed. Move secrets into proper environment settings, rotate keys and passwords, and confirm who can manage future rotations so you’re not stuck if something leaks again.

How do we decide which user flows are business-critical?

Agree on the top one or two workflows that must work for the business to function this week, like signup, checkout, booking, or admin edits. Write a simple “done” definition the client can test, and postpone new features until those flows are reliable.

What’s the fastest way to sanity-check auth and permissions in inherited AI code?

Look for risks that aren’t enforced on the server, like “admin-only” buttons that just hide UI. Confirm roles, permissions, and data isolation, and test whether a user can access someone else’s data by changing an ID or calling an endpoint directly.
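The server-side fix behind that test is a per-object ownership check, not just a role check. A sketch under stated assumptions (the Order shape and role names are invented):

```typescript
// Ownership check that closes the "change an ID in the URL" hole:
// a resource is readable only by its owner or an admin.

interface Order {
  id: string;
  ownerId: string;
}

function authorizeOrderAccess(
  order: Order,
  requesterId: string,
  requesterRole: "admin" | "staff" | "user"
): boolean {
  return requesterRole === "admin" || order.ownerId === requesterId;
}
```

If the inherited code fetches records by ID with no check like this anywhere on the path, treat data isolation as broken until proven otherwise.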

What are the biggest red flags you can spot during the first call?

Expect missing environment setup, undocumented services, and manual steps that only one person knows. Your first goal is to get a clean deploy path and minimal logging so failures are visible and repeatable to fix.

Why should we avoid promising new features before stabilization?

When the foundations are shaky, small changes can break unrelated parts, and you lose trust fast. Promise a stabilization outcome first, then propose a rebuild if the structure or data model makes ongoing fixes slow and risky.

When should we bring in FixMyMess instead of trying to fix it ourselves?

A free audit can quickly map what’s broken, what’s risky, and what a realistic 48–72 hour stabilization plan looks like. If you need help rescuing AI-generated prototypes from tools like Lovable, Bolt, v0, Cursor, or Replit, FixMyMess can diagnose, repair, harden security, and prep deployment so you can ship safely.