Jul 22, 2025 · 8 min read

One-page product spec for AI builds: screens, fields, rules

Learn how to write a one-page product spec that AI builders can follow, using clear screens, data fields, and rules so the build is predictable.

Why AI builds go off track without a clear spec

When you give an AI builder a vague prompt, it has to guess. It fills in missing details with patterns it has seen before, not with what you meant. That’s why AI-built apps can feel random even when the first demo looks fine.

That “random” feeling usually shows up in small, painful ways: a screen is missing a field you assumed was obvious, a form accepts values that should be blocked, or a button goes to the wrong place because the tool imagined a different flow.

Inconsistency is another common symptom. One screen calls it “Company,” another calls it “Organization.” Admins can edit something in one place but not in another. The database ends up with extra columns, missing columns, or the wrong types because the UI and data model were invented separately.

A one-page product spec reduces that guessing. It gives the tool a tight set of decisions it doesn’t have to make, so you get fewer rebuilds and less back-and-forth. When something is wrong, you can point to a single sentence and fix it instead of arguing about what the prompt “really meant.”

Without a clear spec, you’ll often see:

  • Extra screens you didn’t ask for, while key ones are skipped
  • Required fields turned optional (or the opposite)
  • Inconsistent permissions (who can view, create, edit, delete)
  • Missing validation rules, so bad data slips in
  • Multiple versions of the same flow (two ways to do one task)

This approach fits founders, PMs, designers, and agencies using tools like Lovable, Bolt, v0, Cursor, or Replit who want builds that behave predictably. It also helps if you inherited AI-generated code and want the next iteration to be less chaotic and easier to repair.

What “one-page spec” means in practice

A one-page product spec is a short, structured note that removes guesswork before you ask an AI tool to build. It doesn’t try to capture your whole vision. It captures the decisions that builders tend to “fill in,” which is where random features and broken logic come from.

Think of it as a contract: these are the screens, these are the fields, and these are the rules. If a detail changes how the app behaves, it belongs on the page. If it’s just copy or styling, it usually doesn’t.

A practical one-page spec includes:

  • A screen list with one sentence per screen (what the user can do there)
  • The data fields that must exist (name, type, required/optional)
  • The rules that govern behavior (validation, permissions, status changes)
  • A few constraints that prevent bad defaults (auth method, roles, basic security limits)

It does not include a full UI design, long user stories, or pages of edge cases. You can add mockups later, but the spec should make sense on its own.

To keep it short without being vague, write in “must” and “if/then” statements. “Only admins can delete a project” beats “Admins manage projects.” “Email is required and must be unique” beats “Users sign up with email.”

A quick reality check: if you handed this page to someone else, could they build the behavior you expect without asking 20 questions? If not, the page needs clearer rules or missing fields.

Step-by-step: turn an idea into a one-page spec

A good one-page spec isn’t a novel. It’s a set of clear instructions that keeps an AI build from guessing. If you can print it and someone else can explain your app back to you, it’s doing its job.

Start with one plain sentence that describes the app’s job. Avoid buzzwords. Example: “Help a small team track customer requests from inbox to done.”

Next, name your user types. Keep it tight (2 to 4). For each, write what they’re allowed to do using simple verbs: view, create, edit, approve, delete. This prevents permission chaos later.

A reliable order that keeps the spec grounded:

  1. Write the one-sentence job (what success looks like).
  2. Define user types and permissions (who can do what).
  3. Write the happy path in 5 to 8 steps (the journey that should work every time).
  4. List screens before features (what pages exist and what each screen must do).
  5. Add data fields and rules last (once screens are stable).

For the happy path, keep it concrete. For a request-tracker app: user signs in, creates a request, assigns an owner, owner changes status, requester gets notified, manager views a weekly summary.

After the screen list is clear, add the data fields each screen needs (title, description, status, owner, timestamps) and the rules (required fields, allowed status changes, who can edit after approval). This is where AI-generated prototypes often break: rules were implied instead of written.

Define screens in a way an AI tool can execute

If you want a predictable build, your screen list needs to read like instructions, not a mood board. A simple naming rule helps: use verb + noun so each screen implies what happens there.

Examples: “View Orders”, “Create Invoice”, “Edit Profile”, “Reset Password”. Avoid vague names like “Dashboard” unless you say what’s on it.

Treat each screen like a small card with the same fields every time:

  • Purpose: one sentence describing what “success” looks like on that screen.
  • Who can access: roles (Guest, Signed-in user, Admin) plus any special limits.
  • Main actions: 2 to 4 actions written like buttons.
  • Data shown/edited: the key objects involved.
  • Outputs: what changes after the action (record created, email sent, status updated).

Call out entry points so navigation doesn’t get invented. For example: onboarding from first launch, login from “Sign in,” an invite link to “Accept Invite,” or a deep link to a specific item.

Also define the boring states that are easy to skip but expensive later. For every screen, add one-line notes for:

  • Empty state: what the user sees when there’s no data yet.
  • Error state: what the user sees when something fails, plus the recovery action.

Finally, prevent scope creep by naming screens that must not exist. Example: “Must not exist: Admin analytics, public user profiles, in-app chat.” It’s a simple line that avoids extra data, permissions, and bugs.

Make user flows predictable, not imaginative

AI tools are good at filling gaps, which is the problem. If you don’t spell out the main user flows, the tool will invent steps, screens, and navigation that feel plausible but don’t match what you meant.

Pick 2 to 3 core flows and write them end-to-end as plain steps. Keep the first pass linear, then add only the branches that prevent broken UX and broken logic.

A simple way to write flows

Write each flow as a numbered path with clear start and finish. Use the exact screen names you listed elsewhere, and state what happens after each action.

  • Flow 1: Sign up -> Verify email -> Create profile -> Land on Dashboard
  • Flow 2: Create item -> Save -> See item detail -> Return to list with a success message
  • Flow 3: Checkout -> Pay -> Confirmation -> Land on Orders (not back to the cart)

Then add a few critical branches:

  • Cancel: if the user backs out halfway, where do they land?
  • Retry: what happens after a failed payment or failed upload?
  • Permission denied: what does the user see, and where can they go next?
  • Delete: confirm step, and where does the user land after deletion?

Be explicit about navigation. If the app uses tabs, name them. If it uses a sidebar, list sections. If it’s a single linear flow, say that. Also note “return to” expectations like “after editing, return to item detail” or “after saving settings, stay on the same page.”

List data fields so the database and UI match

If you don’t write down data fields, an AI tool will invent them. That’s how you end up with a UI that asks for one thing and a database that stores another.

A small field table is often enough to keep the build consistent.

Use a compact field table

Pick the main objects (for example: User, Project, Item) and list the fields for each. Keep it tight: name, type, required, and an example.

Field           Type       Required  Example
title           text       yes       "Landing page"
status          enum       yes       draft, active, done
due_date        date       no        2026-02-01
owner_user_id   id         yes       usr_123
created_at      timestamp  yes       2026-01-21T10:12Z

Under the table, add one line: which fields are user input vs system-generated. Example: title and due_date are user input; status defaults to draft; owner_user_id is set from the signed-in user; created_at is automatic.

Relationships matter because they drive screens and permissions. Write them plainly: “A user owns many projects. A project has many items. An item belongs to one project.”

Defaults and timestamps prevent “why is this blank?” bugs. State them: status = draft, role = member, updated_at changes on edit, deleted_at is used for soft delete (if you want that).
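To see how the field table, defaults, and the input-vs-system split line up, here's a minimal sketch of the same object as a data model. The field names, types, and defaults come straight from the table above; expressing it as a Python dataclass is just one possible rendering, not a required implementation.

```python
from dataclasses import dataclass, field
from datetime import date, datetime
from enum import Enum
from typing import Optional

class Status(str, Enum):
    DRAFT = "draft"
    ACTIVE = "active"
    DONE = "done"

@dataclass
class Project:
    # User input
    title: str                       # required
    owner_user_id: str               # set from the signed-in user, not typed in
    due_date: Optional[date] = None  # optional

    # System-generated
    status: Status = Status.DRAFT    # defaults to draft
    created_at: datetime = field(default_factory=datetime.utcnow)  # automatic
```

Writing the model this way makes the "why is this blank?" class of bugs visible: every field is either required, has a stated default, or is explicitly optional.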

Sensitive fields need explicit handling:

  • Passwords: store only a hash, never plain text
  • Tokens/keys: store encrypted, never show the full value in the UI
  • Secrets: keep out of the client app and out of logs
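The password rule can be illustrated in a few lines using only Python's standard library. The helper names (`hash_password`, `verify_password`) are hypothetical; the point is that only a salt and a hash are ever stored, and comparison is constant-time.

```python
import hashlib
import hmac
import os

def hash_password(password: str) -> tuple[bytes, bytes]:
    """Return (salt, hash). Only these two values are stored, never the password."""
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 200_000)
    return salt, digest

def verify_password(password: str, salt: bytes, stored: bytes) -> bool:
    """Recompute the hash and compare in constant time."""
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 200_000)
    return hmac.compare_digest(candidate, stored)
```

In practice you'd likely use a dedicated library, but even this sketch is something the spec line "store only a hash, never plain text" can be checked against.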

Write the rules that stop broken logic

A screen list and field list tell an AI what to build. Rules tell it how the app should behave when real people use it. Without rules, you get “mostly works” code: forms accept bad data, users see things they shouldn’t, and status changes happen out of order.

Write rules in plain language, but make them testable. Someone should be able to read a rule and answer “pass or fail?”

The three rule types to include

  • Validation: required vs optional, min/max length, allowed formats, cross-field conditions
  • Permissions: who can view, create, edit, delete, and what happens if they try anyway
  • Business logic: how records change over time (status transitions), limits, uniqueness, deduplication

Example: a “Request a quote” form.

Validation rules: email must match a normal email format; company name is 2 to 80 chars; budget is required only if Project type = Full rebuild.

Permission rules: anyone can create a request; only admins can change status; the requester can view their request only through a magic link.

Business rules: status can move Draft -> Submitted -> Approved/Rejected, but never backwards; one active request per email per day; duplicate submissions merge notes instead of creating a second record.
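A rule like "never backwards" is testable by construction. Here's a minimal sketch of the status rule above as a pass/fail check; the function name and dictionary are illustrative, but the allowed transitions come straight from the rule as written.

```python
# Forward-only transitions: Draft -> Submitted -> Approved/Rejected, never backwards.
ALLOWED_TRANSITIONS = {
    "draft": {"submitted"},
    "submitted": {"approved", "rejected"},
    "approved": set(),   # terminal state
    "rejected": set(),   # terminal state
}

def can_transition(current: str, new: str) -> bool:
    """Answer the rule's pass/fail question: is this status change allowed?"""
    return new in ALLOWED_TRANSITIONS.get(current, set())
```

If a rule in your spec can't be reduced to a check this small, it's probably still too vague for an AI builder to implement consistently.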

Decide what happens when things go wrong

Don’t stop at “show an error.” Specify the behavior so the UI and API match.

Keep it simple:

  • The message is short and specific (no tech words).
  • It appears in a consistent place (inline under the field or a top banner).
  • The user’s input is preserved.
  • Permission failures default to deny.
  • Failures are logged with a reason admins can review.

Edge cases worth calling out: expired links, deleted users referenced by old records, and double-click submissions that send the same request twice.

Add a few non-negotiables (security, auth, scale)

A one-page spec isn’t complete until you name the things the build must not get wrong. These guardrails prevent last-minute rebuilds.

Security and authentication (pick, don’t imply)

Be explicit about how people sign in. If you don’t choose, the AI will.

Decide:

  • Auth method: email + password, magic link, OAuth (Google, etc.), or no login
  • Roles: who can see what (for example: admin vs regular user)
  • Session rules: auto-logout timing, “remember me” on/off

Also write two basic expectations in plain words: “No secrets in the code or client” and “All user input must be validated and safe.” This helps prevent exposed API keys, SQL injection, and forms that accept anything.
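The "validated and safe" expectation can be made concrete with parameterized queries, which is the standard defense against SQL injection. A minimal sketch using Python's built-in sqlite3 (the table and email values are made up for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (email TEXT UNIQUE)")
conn.execute("INSERT INTO users (email) VALUES ('a@example.com')")

user_input = "a@example.com' OR '1'='1"  # a classic injection attempt

# Unsafe: string concatenation would let the input rewrite the query.
# query = f"SELECT * FROM users WHERE email = '{user_input}'"

# Safe: the driver treats the placeholder value strictly as data, never as SQL.
rows = conn.execute(
    "SELECT * FROM users WHERE email = ?", (user_input,)
).fetchall()
# The injection string matches no row, so rows is empty.
```

One plain-words line in the spec ("all queries use parameters, never string concatenation") is enough to rule out an entire bug class.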

Scale, logs, and deployment expectations

You don’t need exact numbers, just rough targets. Example: “launch with about 200 users, could grow to 10,000; typical account has 50 to 5,000 records.”

Add what must be logged for audits and debugging: sign-ins, failed sign-ins, permission changes, and deletes.

Finally, state where it will run. “We need staging and production” is enough. Note that secrets must be stored as environment variables (not hard-coded), and list the few you already know (database URL, email provider key).
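The environment-variable rule can also be sketched in a few lines. `require_env` is a hypothetical helper, and the variable names are examples from the paragraph above, not a fixed convention; the key behavior is failing fast rather than silently falling back to a hard-coded value.

```python
import os

def require_env(name: str) -> str:
    """Read a required secret from the environment; fail fast if it's missing."""
    value = os.environ.get(name)
    if not value:
        raise RuntimeError(f"Missing required environment variable: {name}")
    return value

# Example names only -- use whatever your providers actually need:
# DATABASE_URL = require_env("DATABASE_URL")
# EMAIL_API_KEY = require_env("EMAIL_API_KEY")
```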

Set boundaries so the build stays focused

AI tools are fast, but they also “help” by adding extra features you didn’t ask for. A one-page spec needs guardrails so v1 stays small, testable, and worth shipping.

Start with a simple split: must have vs nice to have. Must haves are the smallest set that makes the product usable end to end. Nice to haves are real ideas, but they can’t block v1.

Define what “done” means for v1 in plain words. This is where you give yourself permission to ignore common scope balloons: custom design polish, advanced filters, roles you don’t need yet, multi-language, deep analytics.

To keep the build objective, add a few acceptance checks per screen:

  • Screen loads without errors and shows the right empty state
  • User can complete the main action in 3 steps or fewer
  • Validation errors show next to the field and block saving
  • Success feedback appears and data is visible after refresh
  • Permission rules are enforced (blocked users can’t see or edit)

Also decide what can be mocked in v1 vs what must be real. Mocks are fine if they’re clearly labeled and safe. For example: fake payment success (no real charge), emails written to a log instead of sent, placeholder file upload URLs, fixed sample responses for external APIs.

Example one-page spec (simple, realistic scenario)

Here’s a one-page spec you could hand to an AI builder for a small clinic appointment app.

App: “BrightClinic Booking”

Users: Patients and clinic admins.

Screens: Landing (what the clinic offers + “Book”), Sign up / Log in, Book appointment, My bookings, Admin schedule.

On Book appointment, the user picks a date, sees available times, chooses a time, adds an optional note, and confirms. On My bookings, the user sees upcoming appointments and can reschedule or cancel.

Data fields (Appointment): Patient name (text), phone (text), appointment time (date/time), status (scheduled, cancelled), notes (optional text). Keep the patient profile minimal for v1: name and phone.

Rules (the logic that must not break)

  • No double booking: only one appointment per time slot.
  • Cancel window: patients can cancel up to 24 hours before the appointment time.
  • Admin-only edits: only admins can change status, move any booking, or edit notes after confirmation.
  • Patients can only view and manage their own bookings.
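Two of these rules can be written as pass/fail checks, which is the level of precision the spec should aim for. A minimal sketch (function names are illustrative; the 24-hour window and one-appointment-per-slot logic come straight from the rules above):

```python
from datetime import datetime, timedelta

CANCEL_WINDOW = timedelta(hours=24)

def can_cancel(appointment_time: datetime, now: datetime) -> bool:
    """Cancel window: patients may cancel up to 24 hours before the slot."""
    return now <= appointment_time - CANCEL_WINDOW

def slot_available(requested: datetime, booked_slots: set[datetime]) -> bool:
    """No double booking: only one appointment per time slot."""
    return requested not in booked_slots
```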

Quick acceptance checks:

  1. A patient can sign up, book a slot, and see it in My bookings.
  2. Rescheduling frees the old slot and reserves the new one.
  3. Cancelling within 24 hours is blocked with a clear message.
  4. Admin schedule shows all appointments, including status.
  5. A patient cannot access the admin screen.

Common mistakes that make AI-generated code messy

Most messy AI-generated projects start with a spec that sounds complete but leaves out the details an AI needs to make consistent choices. The result is random screens, mismatched data, and logic that works in one place but breaks elsewhere.

The patterns that cause the most damage:

  • Feature-only notes with no screens. “Users can manage invoices” isn’t enough. Without naming screens (List, Create, Detail, Edit), the build often skips basics like empty states and confirmation steps.
  • Missing permissions and roles. If you don’t say who can view, create, edit, delete, the default often becomes “everyone can do everything.”
  • No field examples. If you say “phone number” or “status” with no sample values, the AI guesses formats and naming. You end up with phone, phoneNumber, and mobile across different files, plus competing status values.
  • No failure handling. Many builds describe only the happy path. A failed login or empty list then leads to blank screens or endless loading.
  • Mixing v1 and “later.” When future ideas share the same spec, the build tries to include everything at once. You get half-built settings, unused tables, and confusing navigation.

Another reality check: if you handed your spec to a new teammate, could they tell exactly what to build tomorrow, and what to ignore until later? If not, the AI will fill in the gaps.

Quick checklist and next steps

Before you hit “build,” do a fast pass over your one-page spec. If you can answer these items without guessing, your AI tool is far more likely to produce something consistent.

Confirm you have:

  • Clear screen scope (what you can do on each screen, not just the title)
  • Roles and permissions, plus the main flows written end-to-end
  • Data fields listed (name, type, required/optional, default, example value)
  • Validation, permissions, and failure behavior (what error shows, where, what happens next)
  • Boundaries for v1 (must have vs nice to have)

If anything feels fuzzy, fix it now. “Users can edit a profile” is vague. “User can edit name and phone; email is read-only; phone must be 10 to 15 digits; show error under the field; save stays disabled until valid” is something a tool can follow.

Next steps:

  1. Paste the spec into your AI builder and ask for a working v1, not a full product.
  2. Test the happy path and a few failure cases (wrong password, missing required field, no permission, empty state, network error).
  3. If you see odd behavior, update the spec first, then rebuild or regenerate only the affected part.

If you already have an AI-generated prototype that’s broken or unsafe (auth issues, exposed secrets, inconsistent logic), FixMyMess (fixmymess.ai) is built for that exact handoff: diagnose what’s wrong, repair and harden the code, and prep it for deployment when the prototype needs to become production-ready.

FAQ

What is a “one-page product spec” for an AI-built app?

A one-page spec is a single, structured page that tells the builder exactly what to create: the screens that exist, the data that must be stored, and the rules that control behavior. It’s meant to remove guesswork, not to document every future idea.

If a detail changes what the app does, put it in the spec; if it’s just wording or styling, it can wait.

How detailed should the spec be without turning into a long document?

Aim for one page of tight statements, usually 300–700 words plus a small field table if needed. The real constraint is clarity: someone else should be able to describe the app back to you without asking a long list of questions.

If you’re spilling into multiple pages, you’re probably mixing v1 with “later,” or writing explanations instead of rules.

How do I describe screens so the AI doesn’t invent random navigation?

Write screen names as verb + noun, then add one sentence on what success looks like there. Include who can access it, the main actions the user can take, and what changes after those actions.

This prevents the AI tool from inventing extra pages or skipping key ones like empty states, confirmations, and “where do I land after saving?” behavior.

What’s the simplest way to define data fields so the UI and database match?

List your main objects (like User, Project, Item) and write each field with its type and whether it’s required. Add a small example value so naming and formats don’t drift.

Also state what’s user-entered versus system-generated, and any defaults like initial status or automatic timestamps, so the UI and database stay aligned.

How do I prevent permission chaos in an AI-generated build?

Keep user types to 2–4 roles, then write permissions using simple verbs like view, create, edit, delete. Default to deny when you’re unsure, and make “admin-only” actions explicit.

This avoids the common AI-build failure where everyone can access everything because roles weren’t clearly stated.

How do I write rules that stop “mostly works” logic?

Write rules as testable statements using “must” and “if/then.” A good rule lets you answer pass/fail without interpretation.

Focus on three areas: validation (what inputs are allowed), permissions (who can do what), and business logic (how statuses change, uniqueness, limits).

What should I include for error states and empty states?

Define what the user sees and what they can do next when something fails. Keep it consistent: where the error appears, whether input is preserved, and what the recovery action is.

Also call out the states that are easy to forget but expensive later, like empty lists, expired links, permission denied screens, and double-submission behavior.

How do I stop the AI from adding extra features I didn’t ask for?

Draw a hard line between must-have v1 and nice-to-have later, and explicitly name screens or features that must not exist. AI tools tend to “help” by adding extras unless you set boundaries.

A practical v1 definition is: the core flow works end-to-end, data saves correctly, permissions are enforced, and the user gets clear success and failure feedback.

Can a one-page spec help if I already inherited messy AI-generated code?

Start by reverse-engineering what the app is supposed to do into a one-page spec: screens, fields, and rules. Then regenerate or refactor only the parts that violate the spec, instead of repeatedly prompting vague fixes.

If the code already has mismatched fields, inconsistent permissions, or duplicated flows, the spec becomes the reference point for cleaning it up without breaking more things.

When should I bring FixMyMess in instead of trying to prompt my way out of bugs?

Use FixMyMess when the prototype works in demos but breaks in real use, or when you see red flags like broken authentication, exposed secrets, spaghetti architecture, or security holes like SQL injection risk. They specialize in diagnosing and repairing AI-generated code from tools like Lovable, Bolt, v0, Cursor, and Replit.

A common next step is a free code audit to map all issues before you commit; most fixes land in 48–72 hours with expert human verification and a reported 99% success rate.