Password storage audit: hashing, salting, and safe migration
Learn how to run a password storage audit on inherited code, confirm hashing, salting, and peppering, and upgrade hashes safely on next login.

What a password storage audit checks (and why you should care)
Good password storage means a simple thing: if someone steals your user database, they still can’t use those passwords.
Your app should never store passwords as readable text. It should store a one-way hash that’s slow to guess, unique per user, and set up so you can upgrade it over time.
A password storage audit checks the whole chain, not just the login screen. In inherited code (especially code generated quickly by AI tools), logins can appear to work while storage is unsafe. A prototype might hash passwords with a fast function, reuse the same salt, or accidentally log secrets during debugging. None of that breaks the happy path, but it can turn a data leak into a wave of account takeovers.
If attackers get weak password hashes, they can run offline guessing attacks. They don’t need to hit your servers or trigger rate limits. They just keep trying guesses on their own machines until they find matches. Once they crack one password, they often try it elsewhere because people reuse passwords.
Good password storage usually includes:
- A modern, slow password hashing algorithm (not a general-purpose hash like MD5 or SHA-1).
- A unique random salt per password, stored alongside the hash.
- Optional peppering: a separate secret kept outside the database.
- Clear versioning so you can change settings later without breaking logins.
- No accidental exposure (logs, analytics events, error messages, backups).
The safest sequence is audit first, migrate second. The audit identifies what you have today (algorithm, settings, where the code lives, how resets work), then you choose an upgrade plan.
A common plan is “rehash on next login”: users keep signing in normally, and when a password is verified successfully, you upgrade their stored hash to the new standard.
Red flags you can spot quickly in inherited code
You don’t need a full rewrite to find the biggest risks. A quick audit often starts with simple searches that reveal whether passwords are stored or handled unsafely.
The most urgent red flag is anything that suggests passwords can be recovered. If you see passwords saved directly in a database column, logged to a console, sent to analytics, or “encrypted” with something reversible (like AES with a stored key), treat it as an incident. Passwords should be stored only as one-way hashes.
Another common problem is the use of fast hashes. You might see MD5, SHA-1, or plain SHA-256 used directly on the password. Even if it “looks hashed,” fast hashes are built for speed, which makes them easy to crack at scale. If the code says hash(password) and nothing else, that’s a strong sign it isn’t using a password hashing function correctly.
Quick checks that catch many of the worst issues:
- Mentions of md5, sha1, sha256(password), or "encryptPassword" helper methods.
- Hash outputs that are always the same length and format, with no sign of a per-user salt.
- Passwords or reset tokens showing up in logs, error reports, or database dumps.
- Secrets (JWT keys, database passwords, PEPPER=...) hardcoded in the repo or committed config.
- Login code that assumes only one hash format and has no upgrade path.
Missing or broken salting is subtle but easy to spot once you know what to look for. If two users with the same password end up with the same stored value, the system is likely using no salt, a global salt, or a predictable salt. A healthy setup produces different stored results even for identical passwords.
A practical example: you inherit a Node app and find a users.password column full of 64-character hex strings. The login handler does sha256(req.password) and compares it to the database value. It “works,” but it’s vulnerable, and it gives you no safe way to improve security without planning for multiple hash formats.
Finally, check whether the login flow can handle change. If the code can’t verify legacy hashes and then rehash with a stronger method on the next successful login, upgrades get risky and can lead to lockouts.
Map where passwords and reset flows are handled
An audit goes faster when you start with a simple map: every place a password or reset token is created, sent, processed, or stored. In inherited code, the risky parts are often not in an “auth” folder. They’re scattered across UI forms, API routes, background jobs, and admin tools.
Start by listing every entry point where a password can enter the system, then confirm each one in the code and in production configs. Common places include signup flows, login (including SSO fallback paths), password reset and recovery, admin/support tools, and imports or migrations (CSV uploads, CRM syncs, seed scripts).
Next, locate where password-related data lives in the database. Check obvious columns like password, password_hash, and hashed_password, but also look for legacy tables and shadow copies (for example, an old users_legacy table still read by a background job). If you find more than one password hash field, note which one is actually used at login.
Logging is another common leak. Search your code and monitoring config for anything that might capture sensitive values: request logs, analytics events, error reports, and debug prints. A realistic failure mode: a login failure handler logs the full request body “for troubleshooting,” quietly shipping plaintext passwords into logs.
Password resets deserve their own mini-map because tokens are easy to mishandle. Identify how reset tokens are generated (randomness source), where they’re stored (database row, cache, email link payload), and how they expire. Also check whether tokens are single-use, and what happens if a token is replayed after the password was changed.
Finally, draw the service boundary. Note every component that touches auth: frontend client, API gateway, auth service, background workers (email, SMS), and any third-party identity provider. In AI-generated projects, it’s also common to find extra auth endpoints left behind during iterations, so include old routes and disabled feature flags in your scan.
Verify the hashing algorithm and its settings
A password storage audit starts with one blunt question: are you using a password hashing function, or a general-purpose hash?
If you see MD5, SHA-1, SHA-256, or anything framed as “encrypt password,” treat it as a serious issue. Those tools aren’t designed to slow attackers down.
Prefer dedicated password hashes like Argon2id, bcrypt, or scrypt. They’re built to be expensive to crack, so leaked hashes are harder to turn into real passwords.
How to tell what you have from the stored hash
Most systems store the algorithm and settings inside the hash string, so a quick glance at one database value can tell you a lot.
Common patterns:
- Argon2id often starts with $argon2id$ and includes memory and iteration settings.
- bcrypt often starts with $2a$, $2b$, or $2y$ and includes a cost like 10 or 12.
- scrypt may show $scrypt$ or parameters like N, r, and p depending on the library.
If you see a fixed-length hex string (for example, 32 or 64 hex characters) with no $ separators, it may be a general hash, or a custom scheme that needs deeper review.
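The prefix patterns above can be turned into a rough triage helper. This sketch only inspects string shape; a bare hex string could still be a salted custom scheme, so treat "investigate" results as a prompt for deeper review, not a verdict.

```javascript
// Rough triage of a stored password value by its format markers.
// Prefix checks only: it cannot prove the settings are strong.
function classifyStoredHash(value) {
  if (value.startsWith("$argon2id$")) return "argon2id";
  if (/^\$2[aby]\$\d{2}\$/.test(value)) return "bcrypt";
  if (value.startsWith("$scrypt$")) return "scrypt";
  if (/^[0-9a-f]{32}$|^[0-9a-f]{64}$/i.test(value)) return "bare-hex (investigate)";
  return "unknown (investigate)";
}

console.log(classifyStoredHash("$2b$12$R9h/cIPz0gi.URNNX3kh2OPST9/PgBkqquzi.Ss7KIUgO2t0jWMUW"));
// → "bcrypt"
console.log(classifyStoredHash("5e884898da28047151d0e56f8dc6292773603d0d6aabbdd62a11ef721d1542d8"));
// → "bare-hex (investigate)"
```

Run something like this over a small sample of rows and you'll quickly learn whether the table holds one format or a mix that needs a versioned migration plan.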
Check the settings, not just the name
Algorithm choice is only half the story. Argon2id can be weak if memory is tiny. bcrypt can be weak if the cost is too low.
Look for where the work factor is set, and whether you can change it without a redeploy. Healthier setups keep cost settings in config, use the library’s built-in verify method (not a manual string compare), and compare in constant time.
Also confirm the login endpoint has basic defenses: rate limiting per account and per IP, plus sensible lockout rules. Short, temporary lockouts are usually safer than permanent ones.
Check salting: uniqueness, storage, and randomness
An audit should confirm one rule: every password hash must have its own unique, random salt.
If two users pick the same password, their stored hashes should still look different. If they match, something’s wrong.
Uniqueness: one salt per password, no exceptions
Salts stop attackers from using precomputed tables and make bulk cracking much harder. That only works when salts aren’t reused.
A common inherited-code problem is a single hardcoded salt in a config file or a “default” salt reused for every user. A quick sanity check is to pull a small sample of stored password hashes (even 20 to 50) and see if they share an identical salt segment or a repeating prefix that suggests reuse. If you find repetition, treat it as a security bug.
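The sampling step above can be sketched as a small script. The $-delimited field position assumed here is the Argon2-style "$algo$version$params$salt$hash" layout; adjust the index for other formats, and treat this as a spot check rather than a proof.

```javascript
// Spot-check a sample of stored hashes for salt reuse.
// - Exact duplicate rows mean identical passwords collided: no effective salt.
// - Identical salt segments in self-describing formats suggest a shared salt.
function findSaltReuse(storedHashes) {
  const duplicates = storedHashes.length !== new Set(storedHashes).size;
  const salts = storedHashes
    .filter((h) => h.startsWith("$"))
    .map((h) => h.split("$")[4]) // salt field in $argon2id$v=19$m=...,t=...,p=...$salt$hash
    .filter(Boolean);
  const sharedSalt = salts.length > 1 && new Set(salts).size === 1;
  return { duplicates, sharedSalt };
}

console.log(findSaltReuse([
  "$argon2id$v=19$m=65536,t=3,p=4$c2FtZXNhbHQ$abc",
  "$argon2id$v=19$m=65536,t=3,p=4$c2FtZXNhbHQ$def",
]));
// → { duplicates: false, sharedSalt: true }
```

Either flag coming back true on a real sample is worth treating as the security bug described above.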
Storage: embedded in the hash vs a separate column
Many modern formats store the salt inside the hash string itself. For example, bcrypt and Argon2 hash strings typically include the algorithm, cost parameters, the salt, and the hash in one field. That’s normal.
Some systems store the salt in a separate database column next to the hash. That can also be fine, as long as it’s truly per-user and not nullable or defaulted to a shared value. The risk with separate columns is accidental reuse through migrations, ORM defaults, or seed scripts.
Practical checks that catch most issues:
- Confirm each user has a different salt value (or a different embedded salt inside the hash string).
- Confirm salts are generated during password set or reset, not at app boot.
- Make sure salt length matches the algorithm’s expectations.
- Ensure the code uses a cryptographically secure random source.
- Avoid custom “salt formats” that manually concatenate strings.
Randomness matters as much as uniqueness. If the salt comes from predictable sources (timestamps, usernames, incremental IDs), attackers can guess it.
A realistic failure mode in AI-generated prototypes is a helper like salt = Math.random().toString(36) or a fixed SALT="abc" copied across files. It looks random, but it isn’t secure.
If you need to change how salts are generated or stored, do it in a way that keeps existing users logging in normally, then upgrade their hashes safely on the next successful login.
Decide on peppering and how to store the secret safely
A pepper is a secret value added to the password before hashing. Unlike a salt (unique per user and stored with the hash), the pepper is shared across many users and should be kept only on the server.
Peppering helps most when you’re worried about database leaks and offline cracking. It’s especially useful in inherited apps where you don’t fully trust what was generated or where secrets may already have been exposed.
Peppering can backfire if you treat it like a normal config string. If the pepper leaks (hardcoded in the repo, printed in logs, copied into a client app), you gain little and add risk. It can also cause an outage if a deploy removes or changes the pepper: suddenly nobody can log in.
Store the pepper like a real secret:
- Keep it out of the database and out of source control.
- Load it from environment variables or a secrets manager used by your hosting setup.
- Limit access to the small set of people and services that must have it.
- Never send it to the browser or mobile app.
- Avoid logging anything that could reveal it (even partial values).
Plan for rotation before you ship. Rotating a pepper is harder than rotating an API key because it affects every password check.
The safest approach is dual-pepper support for a window of time: accept the old pepper and the new pepper, and upgrade users gradually. On login, verify with the new pepper first. If that fails, verify with the old pepper. If the old pepper works, rehash and save using the new pepper. This lets rotation happen without forcing password resets.
Write down who can view or change the pepper, where it’s set in each environment, and what the rollback plan is if a deploy breaks logins.
Step by step: migrate hashes safely on next login
A safe “rehash on next login” plan lets you accept existing passwords, then upgrade storage without forcing a mass reset. It’s one of the highest-impact fixes because it reduces risk quickly without interrupting users.
1) Detect whether a stored hash is legacy or modern
Make the database value self-describing. Most password hash formats already are. For example, bcrypt hashes often start with $2a$ or $2b$, and Argon2 hashes start with $argon2id$.
If your legacy system used something custom (like sha1(salt+password)), add an explicit hash_version column so you can tell what you’re verifying.
2) Verify using the legacy method only when needed
On login, check the stored format/version first. If it’s modern, verify normally. If it’s legacy, run only the legacy verifier for that user.
Watch for “double hashing” (hashing the incoming password before handing it to the verifier). Make sure the verifier gets the raw password string exactly once.
3) Rehash with the new algorithm and overwrite on success
If legacy verification succeeds, immediately rehash with your new choice and current settings (for example, Argon2id or bcrypt with a stronger cost) and write it back in the modern format.
Keep the update atomic and tied to the user ID so two logins don’t race. A simple approach is “verify first, then update hash and hash_version in one write.”
if verify(password, stored_hash, version):
    new_hash = hash_new(password)
    update users set password_hash = new_hash, hash_version = "v2" where id = user_id
    allow_login()
else:
    deny_login()
Concrete example: you inherit a prototype that used unsalted SHA-1 plus a global secret. You keep that verifier only to validate old accounts. After the first successful login, the row is upgraded to Argon2id, and future logins never touch SHA-1 again.
4) Add observability without leaking secrets
Track progress and problems, but never log passwords or full hashes. Log only safe counters and outcomes, such as legacy logins succeeded (rehash happened), legacy logins failed, modern logins succeeded/failed, number of users still on legacy format, and rehash errors (write failed, version mismatch).
Common migration mistakes that lock users out
A hash upgrade should feel invisible to users. Most lockouts happen when the migration changes behavior, not security.
One big mistake is forcing a password reset without a plan. If you invalidate all existing hashes, people who no longer have access to their old email (or who use SSO some of the time) get stuck. If you must force resets after a breach, you still need a fallback path, clear messaging, and support for edge cases like unverified emails.
Another common trap is upgrading hashes on failed login attempts. On a wrong password, you don’t know the correct plaintext, so you can’t safely rehash anything. Worse, some code overwrites the stored hash with garbage derived from the wrong input, locking the user out even if they later type the correct password.
Password normalization changes can also quietly break logins. Small differences like trimming spaces, changing case rules, or shifting Unicode handling can make the same typed password hash differently than before. A realistic example: an inherited app trimmed trailing spaces during signup but not during login. After a rewrite, both sides started trimming, and some users who intentionally used a trailing space could never sign in again.
Finally, watch for concurrent logins and race conditions during rehash. If two devices log in at the same time, both might try to upgrade the hash. If your update isn’t atomic, one request can overwrite the other, or fail in a way that leaves the account inconsistent.
A short checklist to avoid lockouts:
- Only rehash after a successful password check.
- Keep normalization rules exactly the same during migration.
- Store the pepper server-side only, never in client code.
- Make the hash update atomic (one write, guarded by expected current hash).
- Log and alert on migration failures without blocking valid logins.
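The "guarded by expected current hash" item above is a compare-and-swap. In SQL it typically looks like UPDATE users SET password_hash = $new WHERE id = $id AND password_hash = $expected, checking the affected-row count; this sketch shows the same idea against a hypothetical in-memory store.

```javascript
// Compare-and-swap hash upgrade: only write if the row still holds the
// hash this request verified against. A losing racer changes nothing.
function upgradeHashAtomically(store, userId, expectedHash, newHash) {
  if (store.get(userId) !== expectedHash) return false; // another login won the race
  store.set(userId, newHash);
  return true;
}

const store = new Map([["u1", "legacy-hash"]]);
// Two concurrent logins both verified against "legacy-hash":
console.log(upgradeHashAtomically(store, "u1", "legacy-hash", "modern-A")); // true
console.log(upgradeHashAtomically(store, "u1", "legacy-hash", "modern-B")); // false
console.log(store.get("u1")); // "modern-A": one winner, no inconsistent overwrite
```

Either outcome is safe: both requests verified the password, and the stored hash ends up as exactly one valid modern value.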
Quick checks before you ship the changes
Before you release any password work, do a practical pass that catches the mistakes users feel immediately: failed logins, broken resets, and accidental leaks in logs.
Test the login and rehash flow with messy, real inputs. Don’t rely on one happy-path account. Try very long passwords (200+ characters), Unicode, old accounts that haven’t logged in for months, and users created through different paths (signup, admin import, OAuth then password set). Confirm “wrong password” errors are generic and consistent and don’t reveal whether an account exists.
Password resets deserve their own check because inherited code often stores tokens in plain text. Treat reset tokens like passwords: store only a hash of the token, and compare by hashing what the user submits. Verify tokens expire when they should, are single-use, and can’t be replayed. One simple scenario: request two resets in a row, use one token to change the password, then confirm that token is rejected on reuse and that the other outstanding token no longer works.
Don’t ship without a rollback plan. Take a fresh backup, and write down exactly how you’ll revert if logins spike or support tickets flood in. Also confirm your “rehash on next login” update behaves safely if it fails halfway through (for example, the password verifies but the database write fails). Users should still be able to log in on the next attempt.
Finally, run a basic abuse check. Add rate limits on login and reset endpoints, and make sure suspicious patterns get flagged (many attempts, many reset requests, repeated failures from one IP). Review logs and error tracking, too. They must never include raw passwords, reset tokens, or full authorization headers, even in debug mode.
Next steps: get the audit done and harden inherited AI-built code
If you inherited an AI-generated app, treat authentication like the thing that can hurt you fastest. A password storage audit is usually the quickest way to find problems that lead to account takeovers.
A common pattern is a prototype that ships with patchwork auth: weak hashing settings, hardcoded secrets, and a reset flow that can be abused. The app looks fine in a demo, but it’s not safe in production.
Prioritize in a sensible order. First, remove exposed secrets, block any unsafe fallback login logic, and fix obvious injection paths around auth-related endpoints. Then migrate users gradually so you don’t force mass password resets.
If you want an expert to review and fix it quickly, it helps to prepare repo access (or a clean export), the branch you plan to deploy, a way to run the app locally (env vars, test DB), a couple of test accounts (including a legacy account if you have old hashes), and notes on where secrets live today (hosting config, env files, CI settings).
If you’re dealing with an inherited AI-generated codebase and you’re not sure what’s actually stored, FixMyMess (fixmymess.ai) focuses on diagnosing and repairing AI-built apps, including authentication logic, secret handling, and safe “rehash on next login” migrations. They offer a free code audit to identify issues before you commit, with most projects completed within 48–72 hours.
A clear “done” state helps keep the work tight:
- New passwords always use the approved hash and settings.
- Successful logins upgrade legacy hashes automatically.
- Reset and change-password flows are tested and rate-limited.
- No plaintext passwords, no reversible encryption, no exposed secrets.
- Legacy hash count trends toward zero as active users sign in.
If you can name your current hashing method but not its settings, or you’re unsure how many users are still on legacy hashes, that’s a sign to pause and audit before the next release.
FAQ
What is a password storage audit actually trying to prove?
You’re checking whether a stolen user database would let an attacker recover real passwords. Good storage uses a slow, one-way password hash with a unique salt per user, and it avoids leaking secrets through logs, backups, or reset flows.
It also checks whether you can upgrade safely over time, so you don’t get stuck with a weak scheme forever.
Why is using MD5 or SHA-256 for passwords considered unsafe?
Fast hashes like MD5, SHA-1, or plain SHA-256 are built to be quick, which is exactly what attackers want for offline guessing. If a database leaks, they can try billions of guesses on their own machines without touching your servers.
A dedicated password hash (like Argon2id, bcrypt, or scrypt) is designed to be expensive to crack, so leaked hashes are much harder to turn into real passwords.
How can I tell what hashing method my app is using?
Look for the stored password value format and the code that verifies it. Many modern hashes are self-describing and start with markers like $argon2id$ or $2b$ (bcrypt).
If you see fixed-length hex strings (like 32 or 64 hex chars) and the login code does something like sha256(password) before comparing, that’s a strong sign it’s not using a proper password hashing function.
What does “unique salt per user” mean, and how do I spot bad salting?
A salt should be unique and random for every password. Its job is to make identical passwords produce different stored values, so attackers can’t crack many users at once with the same work.
If two users with the same password end up with the same stored value, you likely have no salt, a shared “global salt,” or a predictable salt, and that’s a serious risk.
Should I use a pepper, or is salting enough?
A pepper is a secret added to the password before hashing, kept only on the server (not in the database). It can help if your database leaks, because attackers still need the pepper to verify guesses.
Only add a pepper if you can store it safely and keep it stable; losing or changing it can lock everyone out unless you build a careful rotation plan.
Where do password leaks happen besides the database column?
Leaks usually hide in places you wouldn’t expect: request logs that capture full bodies, debug prints in error handlers, analytics events, or support/admin tools that log inputs for troubleshooting.
A practical audit includes searching your code and monitoring settings for any place that might record login requests, reset tokens, or authorization headers.
What should I check in the password reset flow?
Reset tokens are often treated too casually and end up stored in plaintext. If someone gets access to the database or logs, they can use those tokens to take over accounts.
A safer pattern is to store only a hash of the reset token, enforce short expiry, make tokens single-use, and ensure older tokens stop working after a password change.
What is “rehash on next login,” and why is it the safest migration?
Keep existing logins working, but upgrade storage the moment you can. On a successful login, verify using the legacy method if needed, then immediately rehash the same password using the new algorithm and overwrite the stored value.
This avoids a mass reset while steadily moving active users to the stronger format.
What are the most common mistakes that cause lockouts during a hash upgrade?
Rehashing on failed attempts is a common bug that can overwrite good hashes with garbage and lock people out. Another trap is changing password normalization rules (like trimming spaces or changing Unicode handling), which makes the “same” password hash differently than before.
Race conditions can also bite you if two logins try to upgrade the hash at once; make the update atomic so the account doesn’t end up in an inconsistent state.
How does this differ when the codebase was generated quickly by AI tools?
AI-generated prototypes often have auth that “works in a demo” but is unsafe in storage: fast hashing, hardcoded secrets, accidental logging, or leftover endpoints from iterations. The quickest win is an audit that maps every entry point where passwords and tokens are handled, then fixes the highest-risk pieces first.
If you’re not sure what’s stored or where secrets live, FixMyMess can do a free code audit and typically turn fixes around in 48–72 hours, including safe migrations that don’t break logins.