Sep 19, 2025 · 7 min read

Prompts for maintainable code: constraints for folders and naming

Write prompts for maintainable code by adding clear rules for folder structure, naming, and configuration so AI output stays easy to debug and ship.

Why AI output becomes hard to maintain

AI-generated code feels great on day one because it runs fast and shows a result quickly. The trouble starts a few days later, when you need to add a feature, fix a bug, or hand the project to someone else. Without clear rules, the model optimizes for “make it work now,” not “make it easy to change later.”

Vague prompts are the usual cause. If you don’t specify folder structure, naming, and configuration rules, the model fills in the blanks differently each time. That’s how you end up with files scattered across random directories, three ways to name the same thing, and settings hidden in places nobody thinks to check. The code can still run, but it becomes fragile.

Common signs you’re heading toward a maintenance mess include inconsistent styles for similar features, unpredictable file placement, drifting names (userService, users_service, UserSvc), and configuration mixed into code with unclear defaults. Small changes start causing surprising breakages elsewhere.

For a solo founder or small team, “maintainable” usually means you can find things quickly, change one behavior without breaking five others, and onboard a new person without a guided tour. It also means fewer late-night surprises like broken authentication, exposed secrets, or a confusing tangle of files.

Add constraints whenever the project will live longer than a demo, or when you expect more than one iteration. Let the model improvise only for throwaway experiments, quick spike tests, or one-off scripts. If you might ship it, maintain it, or pay someone to fix it later, set constraints from the start.

The three constraints that matter most

If you want prompts for maintainable code, don’t start by asking for more features. Start by adding three constraints that decide how the code is organized, named, and configured. These are painful to change later.

1) Folder structure (where things go)

A folder rule makes output predictable and prevents the “everything in root” mess.

Set rules like: keep a single entry point, group code by responsibility, and separate app code from config, scripts, and tests. Also say what must never happen (for example, no business logic in UI components, no database code inside routes).

2) Naming (how you’ll recognize things later)

Naming rules reduce duplicate work and confusion. Without them, you get two files that do the same thing, or a “utils.ts” that becomes a junk drawer.

Be specific: file names, component names, functions, routes, and database tables should follow one style and reflect their purpose. Use consistent verbs for actions (create, update, delete), and avoid vague names like data, helper, or thing.

3) Configuration (how settings and secrets are handled)

Most AI prototypes break when you move from local runs to real deployment. Clear configuration constraints prevent that.

Define these rules up front:

  • One config location (env vars + one config module), not scattered constants
  • Separate dev and prod defaults, with safe fallbacks
  • Never hardcode secrets, tokens, or API keys in code or sample files
  • Provide a minimal example env file that contains placeholders only
  • Fail fast on missing required settings with a clear error
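
The rules above can be sketched as a small config module. This is a minimal sketch, not a definitive implementation: names like DATABASE_URL, PORT, and FEATURE_FLAG are illustrative, not required by any framework.

```typescript
// Sketch of a fail-fast config loader. Required values throw a clear error
// when missing; optional values get safe dev defaults.
type AppConfig = {
  databaseUrl: string; // required: no fallback, fail fast if missing
  port: number;        // optional: safe dev default
  featureFlag: boolean;
};

type Env = Record<string, string | undefined>;

function requireEnv(env: Env, key: string): string {
  const value = env[key];
  if (!value) {
    // Fail fast with a clear error instead of a silent fallback.
    throw new Error(`Missing required environment variable: ${key}`);
  }
  return value;
}

function loadConfig(env: Env): AppConfig {
  return {
    databaseUrl: requireEnv(env, "DATABASE_URL"),
    port: Number(env.PORT ?? "3000"),
    featureFlag: (env.FEATURE_FLAG ?? "false") === "true",
  };
}
```

In a real app you would call loadConfig(process.env) once at startup and export the result from that single module, so there is exactly one config location.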

If you inherited an AI-generated app that ignores these constraints, this cleanup usually comes first because it makes every later fix simpler.

A reusable prompt template you can copy

The fastest win is a small, reusable constraints block. Paste it into any request, then tweak only what matters for the project.

Here’s a template you can copy as-is:

CONSTRAINTS
1) Before coding, output a brief file tree (max 15 lines). Ask 2-4 questions only if needed.
2) Folder structure:
   - Keep source in /src
   - Keep shared utilities in /src/lib
   - Keep UI components in /src/components
   - Keep config in /config
3) Naming:
   - Files: kebab-case (user-profile.ts)
   - React components: PascalCase
   - Functions/vars: camelCase
   - No single-letter names (except i/j in loops)
4) Configuration and secrets:
   - Read all secrets from environment variables
   - Provide a sample .env.example (no real secrets)
   - Safe defaults for dev, strict checks for prod
5) If a rule conflicts with the framework:
   - Follow the framework default
   - Explain the conflict in 2-3 sentences
   - Suggest the closest alternative that keeps the spirit of the rule
6) Output:
   - Generate code file-by-file with clear filenames
   - Add short comments only where the intent is not obvious

Keep rules specific, but not over-detailed. Good constraints describe outcomes you can verify (where files live, how names look, where secrets come from). Skip tiny style rules that create busywork.

One habit that prevents messy output: ask for the file tree first. It forces the model to plan, and it gives you a quick chance to say “move auth into /src/lib” before it writes 10 files.

If you’re fixing AI-generated code later, constraints also make repair faster. When everything has a predictable home, it’s easier to spot what’s missing and patch it without breaking unrelated parts.

Step-by-step: how to write your constraints

Good prompts for maintainable code start with clarity, then add only the rules you’ll actually enforce. If you add too many, the model will ignore them or produce filler.

A simple workflow that works

Write your prompt in five parts, in this order:

  • Start with one sentence that states the job to be done (who it’s for and the main action).
  • Lock the essentials of the stack (framework, language, and any version that affects syntax or config). Skip the rest.
  • Set project rules: how folders are organized and how files, components, and functions are named.
  • Define how configuration works: what goes in env vars, what defaults are safe, and what must never be committed.
  • Require an output sequence: first a file plan, then the code, then a short self-check of what it verified.

This order matters. If you start with folders and naming before the goal, you often get a neat structure that solves the wrong problem.

Mini example (same idea, smaller words)

Imagine you want a simple invoice tracker for a freelancer. If you pin React + TypeScript and say “keep server code in /api, UI in /web, shared types in /shared,” you prevent a lot of chaos. Add naming like “components are PascalCase, hooks start with use,” and you avoid the later mystery of where logic lives.

Be strict about config: “read secrets from env vars only, provide .env.example, and fail fast with a clear error if a variable is missing.” That single line prevents the classic broken-auth and exposed-secret problems.

End with a required self-check summary. It nudges the model to catch missing files, mismatched names, and unsafe defaults before you do.

Folder structure constraints that keep projects tidy

A clean folder tree is one of the easiest constraints to add. The goal isn’t perfection. It’s predictability, so you (or someone else) can find things fast and fix them without breaking unrelated parts.

A simple structure that fits most small web apps looks like this:

  • src/ui/ for screens and reusable components
  • src/domain/ for business rules (what the app does)
  • src/data/ for database and external APIs (how data is stored/fetched)
  • src/shared/ for truly shared helpers (small, boring, reused)
  • src/config/ for app setting defaults (not secrets)

Add one more rule: separate UI from business logic from data access. In plain terms, UI shouldn’t talk to the database directly, and database code shouldn’t decide what the user is allowed to do.

To keep files from turning into 800-line monsters, define “one responsibility per file” in a practical way. One file should have one main purpose (one component, one service, one repository). If it needs two different headings, split it. If it’s reused in three places, move it to shared/. If it’s only reused inside one feature, keep it inside that feature folder. Avoid a generic utils/ dumping ground: require specific folder names like shared/date/ or shared/format/.

A helpful rule for when to create a new module vs add to an existing one: create a new folder only when there’s a new concept in the app. A billing app might add domain/invoices/ and ui/invoices/ once invoices become a real feature, not just a single screen.

If you’re inheriting messy AI output, these constraints make remediation faster. A lot of the work is moving code into clear boundaries before you can safely repair logic, auth, or security issues.

Naming rules that prevent confusion later

Naming is where AI-generated code often goes sideways. If your prompt is strict here, you spend less time hunting imports, guessing what a function does, or untangling “almost the same” models.

Pick one style per thing, and don’t mix. A simple mapping you can paste into prompts is:

  • Files and folders: kebab-case (e.g., user-profile.ts)
  • React components / classes: PascalCase (e.g., UserProfile)
  • Functions and variables: camelCase (e.g., fetchUserProfile)
  • Constants: SCREAMING_SNAKE_CASE (e.g., MAX_RETRIES)
  • API routes: kebab-case (e.g., /user-profile)

File names should match what they export. If a file exports UserService, name it user-service.ts (or UserService.ts if that’s your rule). Don’t export five unrelated things from one file. One main export per file keeps imports predictable and makes refactors safer.

For functions, use verb-first names that say what happens: getUserById, validateSignupForm, saveInvoice, sendPasswordResetEmail. Avoid vague names like process, handleThing, or doWork. If it’s async, be consistent: either add an Async suffix everywhere or rely on context, but don’t mix styles.

Data models should use singular nouns (User, Invoice). Database fields should follow one convention (snake_case is common in SQL; camelCase is common in app code). Pick one and stick to it. Align related names so createdAt maps cleanly to created_at, not create_date in one place and created in another.
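
One way to enforce that alignment is a single mapper at the data boundary. A hypothetical sketch, assuming snake_case in the database and camelCase in app code:

```typescript
// Keeps database snake_case aligned with app camelCase,
// so created_at always becomes createdAt (never create_date or created).
function snakeToCamel(key: string): string {
  return key.replace(/_([a-z])/g, (_match, letter: string) => letter.toUpperCase());
}

function mapRowToModel(row: Record<string, unknown>): Record<string, unknown> {
  const model: Record<string, unknown> = {};
  for (const [key, value] of Object.entries(row)) {
    model[snakeToCamel(key)] = value;
  }
  return model;
}
```

Because every row passes through one function, the convention can't drift file by file.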

A small “do not use” list saves time later: temp, misc, helper2, final_v3, newNew. If you inherited a codebase full of names like these, renaming and reorganizing often pays off immediately because unclear names hide bugs.

Configuration constraints: environments, secrets, and defaults

Be strict about what is configuration and what is code. Configuration is anything that changes between dev, staging, and prod: database URL, API keys, allowed origins, cookie settings, feature flags. Code is the logic that stays the same: routes, validation, business rules.

A good constraint is to separate runtime config from build-time config. Runtime config should come from environment variables, because it can change without a rebuild. Build-time config can live in files committed to the repo, like a typed config module with safe defaults, or config/*.json used only for non-secret values.

Secrets need explicit rules because AI-generated apps often leak them by accident:

  • Never hardcode secrets in code or config files
  • Never log secrets (including debug logs)
  • Never commit secrets (only commit a sample file)
  • Fail fast if a required secret is missing
  • Use clear names like DATABASE_URL and JWT_SECRET

Defaults matter too. Ask for safe, boring defaults so the app behaves predictably even before you tune it. For example: CORS should be restricted (allow only a known list), auth cookies should be HttpOnly and Secure in production, and sessions should have a short idle timeout with a clear max age.

Also require one place where configuration is documented. The simplest pattern is a single .env.example with comments, plus a short “Configuration” section in the README listing each variable, what it does, and an example value. That constraint saves hours later, especially when a prototype worked on one laptop but breaks on deployment.
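
A placeholder-only .env.example following this pattern might look like this (variable names are examples, not a required set):

```
# Database connection (required; the app fails fast if this is missing)
DATABASE_URL=postgres://user:password@localhost:5432/myapp_dev

# Auth secret (required; generate your own value, never commit a real one)
JWT_SECRET=replace-with-a-long-random-string

# Optional, with safe defaults
PORT=3000
FEATURE_FLAG=false
```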

Quality constraints: errors, validation, and small tests

Most AI-generated apps fail in production for boring reasons: unclear errors, missing validation, and no tests around the risky parts. A few quality constraints make the output easier to debug now and easier to fix later.

Start by forcing consistent error handling. Ask for one pattern everywhere, not a mix of thrown strings, ad hoc responses, and silent failures. Require errors to be explicit, shaped the same way, and safe to show to users.

Then require input validation at boundaries, where bad data enters: API routes, forms, webhooks, background jobs. When validation is missing, you get weird bugs like “it works on my machine” or “it fails only for some users.”

Constraints you can copy:

  • Use a single error format (for example: { code, message, details? }) and never throw plain strings.
  • Validate all external inputs at the boundary, return clear validation errors, and never trust client data.
  • Keep functions small (about 20-40 lines max) and make return values predictable (no mixed types).
  • Log unexpected failures with enough context to reproduce (request id, user id if available, action).
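
The first two constraints combine naturally: boundary validation that returns the single error format. A minimal sketch, with hypothetical field names and error codes:

```typescript
// The single error format from the constraint: { code, message, details? }.
type AppError = {
  code: string;
  message: string;
  details?: Record<string, unknown>;
};

// Validation at the boundary: never trust client data.
// Returns null when the input is valid, an AppError otherwise.
function validateSignupForm(input: unknown): AppError | null {
  if (typeof input !== "object" || input === null) {
    return { code: "INVALID_BODY", message: "Request body must be an object" };
  }
  const body = input as Record<string, unknown>;
  if (typeof body.email !== "string" || !body.email.includes("@")) {
    return {
      code: "INVALID_EMAIL",
      message: "A valid email address is required",
      details: { field: "email" },
    };
  }
  return null;
}
```

Because every failure has the same shape, routes can return it directly and the UI can render it without guessing.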

Tests don’t need to be big to help. Ask for a minimal set that matches the risk. If you have login, a payment flow, or a create/update action, those are the first to cover. A lightweight minimum could include auth rejection for invalid credentials, a core flow success path plus one common failure, and one validation test proving bad input returns a clear error.

Finally, require a short “how to run locally” section in the output, including env vars needed and safe defaults. This prevents a lot of “it doesn’t start” projects.

Example prompt: building a small app that stays maintainable

Imagine you want a small web app with login, a profile page, and a settings form. Without constraints, an AI often scatters auth logic across pages, mixes database calls into UI code, and invents file names as it goes. With clear rules, you get clean boundaries and a file plan you can debug later.

Here’s a copyable example prompt you can use (and tweak):

Build a small app with:
- Login page
- Profile page (shows name + email)
- Settings page (update display name, toggle a feature flag)

Hard constraints (do not violate):
1) Folder structure:
- /src/routes = route definitions only
- /src/handlers = request/response logic only (no database queries)
- /src/services = business rules (auth, profile, settings)
- /src/data = database access only (queries, repositories)
- /src/config = configuration loader and typed config

2) Naming:
- Files: kebab-case (profile-handler.ts)
- Exports: camelCase functions, PascalCase types
- One main export per file

3) Code boundaries:
- Routes call handlers
- Handlers call services
- Services call data layer
- Data layer is the only place allowed to import the DB client

4) Config constraints:
- Read DATABASE_URL and AUTH_SECRET from environment variables
- Never hardcode secrets or sample keys
- Add FEATURE_SETTINGS_ENABLED default=false when missing
- Provide a single config object from /src/config

Deliverables:
- Start with a file tree
- Then output code file-by-file
- Finish with a summary table: file created/changed + 1 sentence why

If you want even tighter output, add: “If you need to add a new file, explain why it belongs in that folder.” It pushes the model to respect boundaries instead of dumping everything into one place.
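
The code-boundaries rule from the prompt can be sketched like this, collapsed into one listing for illustration (in the real layout each section is its own file, and all names plus the in-memory "database" are hypothetical):

```typescript
// /src/data/user-repository.ts — the only layer allowed to touch the DB
const usersTable: Record<string, { name: string; email: string }> = {
  u1: { name: "Ada", email: "ada@example.com" },
};
function findUserById(id: string) {
  return usersTable[id] ?? null;
}

// /src/services/profile-service.ts — business rules, calls the data layer
function getProfile(id: string): { name: string; email: string } {
  const user = findUserById(id);
  if (!user) throw new Error("User not found");
  return { name: user.name, email: user.email };
}

// /src/handlers/profile-handler.ts — request/response logic, calls the service
function handleGetProfile(id: string): { status: number; body: unknown } {
  try {
    return { status: 200, body: getProfile(id) };
  } catch {
    return { status: 404, body: { code: "NOT_FOUND", message: "No such user" } };
  }
}
```

Notice the direction of calls: handler → service → data. The handler never imports the database, so swapping the storage layer later touches exactly one folder.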

Common mistakes when adding constraints

Constraints are meant to make output easier to read, change, and fix later. Most problems happen when rules are either too vague to guide the model, or so strict they collide with the framework you’re using.

Mistake 1: Over-constraining the project

It’s easy to write rules that sound clean but force unnatural choices. For example, demanding a custom folder layout that fights Next.js conventions, or banning a framework’s standard config file because it feels messy. The model then spends effort working around your rules instead of building correct code.

A safer approach: keep framework defaults unless you have a real reason to change them. Add constraints only where you repeatedly see confusion later (like where API routes live, or where shared types go).

Mistake 2: Under-constraining with fuzzy words

“Clean code,” “best practices,” and “enterprise-ready” don’t tell the model what to do. Replace them with checks it can follow.

Examples of constraints that actually work:

  • “Output a file tree first, then code files in that order.”
  • “Use one naming style: camelCase for variables, PascalCase for components.”
  • “Put env vars in .env.example and read them only via a config module.”
  • “No new libraries unless you ask first and explain why.”
  • “If you move files, include migration steps.”

Another common issue is letting the model invent libraries or config files you didn’t request. You ask for auth, it adds three packages, two configs, and a new build tool.

Don’t skip migration steps when changing folder structure. A small rename can break imports, tests, and deployment. Ask for a short “what changed” section and any commands needed to update paths. And always require a file tree; without it, the output becomes scattered and hard to assemble.

Quick checklist and next steps

Before you accept AI-generated code (or merge it), do a 5-minute pass. These checks catch most “looks fine now, painful later” problems:

  • The file tree matches your rules (folders, layers, shared code) and there are no random one-off directories.
  • Names are consistent: the same concept isn’t called three different things across files, routes, and variables.
  • No secrets in code: no API keys, tokens, private URLs, or real credentials in files, comments, or example configs.
  • Configuration is clear: there’s a documented place to set env vars, defaults are safe, and behavior doesn’t change silently between dev and prod.
  • It’s runnable with copy-paste steps: someone new can install, configure, and run it without guessing.

If you can’t confidently say yes to most of that, pause. A small cleanup now is cheaper than debugging a month from now, especially when issues hide inside naming confusion, scattered config, or a folder structure that encourages spaghetti growth.

If you already have a messy AI-generated codebase, make it understandable before adding features. A practical order:

  • Freeze scope for 24 hours.
  • Map the current structure: folders, entry points, where config and secrets live.
  • Pick one naming standard and apply it to the hot path first.
  • Pull configuration into one place with a safe example file and clear startup steps.
  • Add a few small checks (validation, basic error handling, one or two smoke tests) to prevent regressions.

If you’re stuck with a prototype from tools like Lovable, Bolt, v0, Cursor, or Replit that won’t behave in production, FixMyMess (fixmymess.ai) can run a free code audit to flag structure, naming, config, and security issues before any fixes. That way you get a clear plan, not another round of guesswork.

FAQ

Why does AI-generated code get messy so fast?

AI usually optimizes for “works right now,” so it may take shortcuts that feel fine until you need to change something. Without clear rules for structure, naming, and config, each new prompt can produce a different pattern, which makes the code fragile to edits.

What are the easiest signs that my project is becoming hard to maintain?

Look for inconsistent naming of the same concept, files appearing in random folders, and configuration values hardcoded in multiple places. Another common sign is that a small change causes unrelated breakage, which usually means responsibilities are mixed together.

When should I add constraints vs letting the model improvise?

Add constraints whenever the project might be shipped, iterated on more than once, or handed to another person. If it’s truly a throwaway demo or a one-off script, you can be looser and accept some mess.

Why are folder structure, naming, and configuration the “big three” constraints?

Folder structure determines where future code will go, naming determines how you’ll find and reuse things, and configuration determines whether the app survives deployment. Those three are painful to fix later because they affect every file and every change.

What’s the fastest way to keep AI output organized from the start?

Ask for a brief file tree before any code and cap it to something readable, like 15 lines. Then require code to be output file-by-file with filenames, so you can quickly see if things landed in the wrong place before the model generates dozens of files.

What naming rules prevent duplicate or confusing files?

Pick one convention per category and stick to it: files in kebab-case, components in PascalCase, functions in camelCase, and consistent route naming. Also require that filenames match their main export and avoid vague “utils” junk drawers by using specific names.

What configuration rules stop the usual deployment breakages?

Require one config loader/module, read secrets only from environment variables, and include a placeholder-only .env.example. Also require “fail fast” checks for missing required values, because silent fallbacks are a common reason prototypes break in production.

How do I avoid over-constraining a project with too many rules?

Over-constraining happens when your rules fight the framework’s defaults, forcing awkward workarounds. A good rule is: follow the framework when there’s a conflict, and only add constraints where you consistently see confusion later (like boundaries between UI, services, and data access).

What quality constraints help without slowing everything down?

Ask for consistent error shapes across the app, validation at boundaries (routes/forms/webhooks), and a minimal set of tests around the risky flows like auth or create/update actions. These constraints don’t add much code, but they make failures easier to debug and prevent regressions.

I inherited a messy AI-generated codebase—what should I do first?

Start by mapping the current file tree and identifying where config and secrets live, then choose one naming standard and apply it to the critical path first. If you’re stuck with a broken AI-generated app from tools like Lovable, Bolt, v0, Cursor, or Replit, FixMyMess can run a free code audit and deliver a clear remediation plan, often with fixes completed in 48–72 hours.