Performance budgets keep web apps fast by setting clear limits on load time, JavaScript size, and Core Web Vitals, backed by quick audits and a simple fix-first order when a limit is broken.

A performance budget is a set of limits you agree on before you build. It can be a time limit (how fast the page feels), a size limit (how much code you ship), or a simple cap (requests, images, third-party scripts). If you go over the limit, it’s treated like a broken requirement, not a “nice to fix later” task.
Speed usually gets worse because shipping is additive. Each new widget adds JavaScript, CSS, fonts, images, API calls, and more work for the browser. Even small changes stack up until the app feels heavy, especially on mid-range phones and slower networks where most real users are.
Opinions don’t protect you here. One person says “it feels fine on my laptop,” another says “it’s slow,” and the team debates. A budget ends the debate by turning performance into a product constraint you can measure and enforce.
This is where Addy Osmani’s thinking fits: treat performance like design constraints and security rules. You don’t “try” to stay secure or “hope” the layout looks good. You set standards, you check them continuously, and you block changes that break them.
Budgets solve several practical problems at once. They make tradeoffs explicit (adding a feature means paying for it somewhere else), they catch regressions early (when fixes are cheaper), and they give everyone the same definition of “fast enough.” They also reduce the late panic that tends to show up right before a launch.
Here’s the kind of scenario budgets are built for: you add a rich charting library for one dashboard view. It ships to everyone, grows the main bundle, and pushes the first meaningful screen later. Without a budget, this slips through because the feature “works.” With a budget, the team has to choose: lazy-load the chart, replace the library, or simplify the view.
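If the team picks the lazy-load option, the change is usually small. Here is a minimal sketch in React (TypeScript), assuming a hypothetical AnalyticsChart component that wraps the heavy charting library; the names and paths are illustrative, not from a specific codebase:

```tsx
import { lazy, Suspense } from "react";

// Hypothetical wrapper around the heavy charting library.
// The chart code is only downloaded when this view actually renders.
const AnalyticsChart = lazy(() => import("./AnalyticsChart"));

export function DashboardPage() {
  return (
    <Suspense fallback={<p>Loading chart…</p>}>
      <AnalyticsChart />
    </Suspense>
  );
}
```

The main bundle stays within budget because the chart ships only to users who open the dashboard view.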
This matters even more when teams can generate and iterate on apps quickly, including with chat-driven build workflows like Koder.ai. Speed is great, but it also makes it easy to ship extra dependencies and UI flourishes without noticing. Budgets keep fast iteration from turning into slow products.
Performance work fails when you measure everything and own nothing. Pick one page flow that matters to real users, and treat it as the anchor for your budgets.
A good starting point is a primary journey where speed affects conversion or daily work, like “home to signup,” “dashboard first load after login,” or “checkout and payment confirmation.” Choose something representative and frequent, not an edge case.
Your app doesn’t run on your laptop. A budget that looks fine on a fast machine can feel slow on a mid-range phone.
Decide on one target device class and one network profile to start. Keep it simple and write it down as a sentence everyone can repeat.
For example: a mid-range Android phone from the last 2 to 3 years, on 4G while moving around (not office Wi‑Fi), measuring a cold load and then one key navigation, in the same region where most users are.
This isn’t about picking the worst case. It’s about choosing a common case you can actually optimize for.
Numbers only matter if they’re comparable. If one run is “Chrome with extensions on a MacBook” and the next is “throttled mobile,” your trend line is noise.
Pick one baseline environment and stick to it for budget checks: same browser version, same throttling settings, same test path, and the same cache state (cold or warm). If you use real devices, use the same device model.
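One way to keep the baseline from drifting is to write it down as data that both people and scripts can read. A small sketch with illustrative values only; use whatever matches your real users:

```ts
// Shared baseline profile for every budget check.
// All values are placeholders; tune them to your actual audience.
export const baselineProfile = {
  device: "mid-range Android phone, 2022 or newer",
  browser: "Chrome (stable), no extensions",
  network: { label: "4G", rttMs: 150, downloadKbps: 1600, uploadKbps: 750 },
  cpuSlowdownMultiplier: 4, // emulate a mid-range phone on a desktop CPU
  cacheState: "cold",
  journey: "login -> dashboard first load",
  region: "where most users are",
} as const;
```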
Now define what “fast enough” means in terms of behavior, not perfect demos. For example: “users can start reading content quickly” or “the dashboard feels responsive after login.” Translate that into one or two metrics for this journey, then set budgets around them.
Budgets work best when they cover both what users feel and what teams can control. A good set mixes experience metrics (the “did it feel fast?” part) with resource and CPU limits (the “why did it get slow?” part).
These track how the page behaves for real people. The most useful ones map directly to Core Web Vitals: Largest Contentful Paint (LCP) for how quickly the main content renders, Interaction to Next Paint (INP) for how quickly the page responds to input, and Cumulative Layout Shift (CLS) for visual stability.
Timing budgets are your north star because they match user frustration. But they don’t always tell you what to fix, so you also need the budget types below.
These are easier to enforce in builds and reviews because they’re concrete.
Weight budgets cap things like total JavaScript, total CSS, image weight, and font weight. Request budgets cap total request count and third-party scripts, reducing network overhead and “surprise” work from tags, widgets, and trackers. Runtime budgets limit long tasks, main-thread time, and hydration time (especially for React), which often explains why a page “feels” slow on mid-range phones.
A practical React example: bundle size might look fine, but a new carousel adds heavy client-side rendering. The page loads, yet tapping filters feels sticky because hydration blocks the main thread. A runtime budget like “no long tasks over X ms during startup” or “hydration completes within Y seconds on a mid-tier device” can catch this even when weight budgets don’t.
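Long tasks are observable directly in the browser, so a runtime budget can be checked without special tooling. A minimal sketch using the Long Tasks API, assuming a hypothetical budget of 200 ms per task during the first ten seconds of startup:

```ts
// Runtime budget sketch: flag long tasks during startup.
// Thresholds are illustrative; report to your RUM endpoint instead of the console in production.
const LONG_TASK_BUDGET_MS = 200;
const STARTUP_WINDOW_MS = 10_000;

const observer = new PerformanceObserver((list) => {
  for (const entry of list.getEntries()) {
    if (entry.startTime < STARTUP_WINDOW_MS && entry.duration > LONG_TASK_BUDGET_MS) {
      console.warn(`Long task of ${Math.round(entry.duration)} ms during startup`);
    }
  }
});

// "longtask" entries only cover work of 50 ms or more, per the Long Tasks API.
observer.observe({ type: "longtask", buffered: true });
```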
The strongest approach treats these as one system: experience budgets define success, and size, request, and runtime budgets keep releases honest and make “what changed?” easy to answer.
If you set too many limits, people stop paying attention. Pick 3 to 5 budgets that match what users feel most, and that you can measure on every pull request or release.
A practical starter set (tune the numbers later): an LCP budget, an INP budget, a cap on total JavaScript, a cap on total image weight, and a request-count limit.
Two thresholds keep things sane. “Warn” tells you you’re drifting. “Fail” blocks a release or requires explicit approval. That makes the limit real without creating constant fire drills.
Write the budget down in one shared place so nobody debates it during a busy release. Keep it short and specific: which pages or flows are covered, where measurements run (local audit, CI, staged build), which device and network profile you use, and exactly how metrics are defined (field vs lab, gzip vs raw, route-level vs whole app).
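A small, versioned config file works well as that shared place, because both reviewers and release scripts can read it. A sketch with placeholder numbers to tune against your own baseline:

```ts
// budgets.ts - one shared definition of "fast enough" for the anchor flow.
// Numbers are placeholders; set them from your measured baseline.
export const budgets = {
  flow: "login -> dashboard first load",
  profile: "mid-range Android, 4G, cold cache",
  metrics: {
    lcpMs:        { warn: 2500, fail: 3000 },
    inpMs:        { warn: 200,  fail: 300 },
    totalJsKb:    { warn: 300,  fail: 350 }, // gzip, route-level
    totalImageKb: { warn: 500,  fail: 700 },
    requestCount: { warn: 50,   fail: 70 },
  },
} as const;
```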
Start with a baseline you can repeat. Pick one or two key pages and test them on the same device profile and network each time. Run the test at least three times and record the median so one weird run doesn’t set your direction.
Use a simple baseline sheet that includes both a user metric and a build metric. For example: LCP and INP for the page, plus total JavaScript size and total image bytes for the build. This makes budgets feel real because you can see what the app shipped, not just what a lab run guessed.
Set budgets slightly better than today, not fantasy numbers. A solid rule is 5 to 10 percent improvement from your current median on each metric you care about. If your LCP is 3.2s on your baseline setup, don’t jump straight to 2.0s. Start with 3.0s, then tighten after you prove you can hold it.
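If you want the arithmetic written down, it is one line. A hypothetical helper, not a library API:

```ts
// Derive a starter budget from the current median: slightly better than today, not a fantasy target.
// improvement = 0.95 means "5 percent better than the baseline median".
function starterBudget(currentMedianMs: number, improvement = 0.95): number {
  return Math.round(currentMedianMs * improvement);
}

starterBudget(3200); // 3200 ms median -> 3040 ms, close to the 3.0s starting point above
```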
Add a quick check to every release before users see it. Keep it fast enough that people don’t skip it. A simple version is: run a single-page audit on the agreed page, fail the build if JavaScript or images exceed the budget, store results per commit so you can see when it changed, and always test the same URL pattern (no random data).
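Here is one way that check could look, as a sketch only: it assumes you generate a Lighthouse JSON report for the agreed URL (for example with `lighthouse <url> --output=json --output-path=report.json`) and that the thresholds below are placeholders, not recommendations:

```ts
// check-budgets.ts - fail the build when the agreed page is over budget.
import { readFileSync } from "node:fs";

const report = JSON.parse(readFileSync("report.json", "utf8"));

// Audit IDs as used by recent Lighthouse versions.
const lcpMs = report.audits["largest-contentful-paint"].numericValue as number;
const pageBytes = report.audits["total-byte-weight"].numericValue as number;

const failures: string[] = [];
if (lcpMs > 3000) failures.push(`LCP ${Math.round(lcpMs)} ms > 3000 ms`);
if (pageBytes > 1_500_000) failures.push(`Page weight ${pageBytes} B > 1.5 MB`);

if (failures.length > 0) {
  console.error("Budget check failed:\n" + failures.join("\n"));
  process.exit(1); // block the release, or require explicit approval
} else {
  console.log("Budget check passed");
}
```

Storing the report (or just the key numbers) per commit gives you the "when did it change?" history mentioned above.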
Review breaches weekly, not only when someone complains. Treat a breach like a bug: identify the change that caused it, decide what to fix now, and what to schedule. Tighten slowly, only after you’ve held the line for a few releases.
When product scope changes, update budgets deliberately. If you add a new analytics tool or a heavy feature, write down what grew (size, requests, runtime), what you’ll do to pay it back later, and when the budget should return.
A budget only helps if you can check it quickly. The goal of a 10-minute audit isn’t to prove a perfect number. It’s to spot what changed since the last good build and decide what to fix first.
Start with one page that represents real usage, then run the same quick checks on it every time.
Two views usually give you answers in minutes: the network waterfall and the main-thread timeline.
In the waterfall, look for one request that dominates the critical path: a giant script, a blocking font, or an image that starts late. If the LCP resource isn’t requested early, the page can’t hit an LCP budget no matter how fast the server is.
In the timeline, look for long tasks (50 ms or more). A cluster of long tasks around startup often means too much JavaScript on first load. One massive chunk is usually a routing issue or a shared bundle that grew over time.
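If it is not obvious which element counts as the LCP element, the browser will tell you. A small console snippet (paste it into DevTools on the page you are auditing) that logs the element and when it rendered:

```ts
// Log the current LCP candidate element and its render time.
new PerformanceObserver((list) => {
  const entries = list.getEntries();
  const last = entries[entries.length - 1] as PerformanceEntry & { element?: Element };
  console.log("LCP element:", last.element, "at", Math.round(last.startTime), "ms");
}).observe({ type: "largest-contentful-paint", buffered: true });
```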
Quick audits fail when each run is different. Capture a few basics so changes are clear: page URL and build/version, test device and network profile, LCP element description, the key numbers you track (for example LCP, total JS bytes, request count), and a short note on the biggest offender.
Desktop testing is fine for quick feedback and PR checks. Use a real device when you’re close to the budget, when the page feels janky, or when your users skew mobile. Mobile CPUs make long tasks obvious, and that’s where many “looks fine on my laptop” releases fall apart.
When a budget fails, the worst move is to “optimize everything.” Use a repeatable triage order so each fix has a clear payoff.
Start with what users notice most, then work down toward finer tuning: fix the LCP resource first (oversized images, late-requested hero content, blocking fonts), then cut initial JavaScript (remove or lazy-load heavy libraries, split routes), then defer third-party scripts until after the first interaction, and only then move on to interaction smoothness and caching details.
A team ships a new dashboard and suddenly misses the LCP budget. Instead of tweaking cache headers first, they find the LCP element is a full-width chart image. They resize it, serve a lighter format, and load only what matters early. Next, they notice a large charting library loads on every route. They load it only on the analytics page and delay a third-party support widget until after the first interaction. Within a day, the dashboard is back within budget, and the next release has clear “what changed” answers.
The biggest failure mode is treating budgets like a one-time document. Budgets only work when they’re easy to check, hard to ignore, and tied to how you ship.
Most teams get stuck in a few traps: setting too many budgets that nobody tracks, measuring in a different environment every run, picking fantasy numbers that everyone learns to ignore, and treating the budget as documentation instead of a gate on the release path.
A common pattern is a “small” feature that pulls in a new library. The bundle grows, LCP slows by a second on slower networks, and nobody notices until support tickets arrive. Budgets exist to make that change visible at review time.
Start simple and keep checks consistent. Pick 2 to 4 budgets that map to user experience and tighten them gradually. Lock your test setup and write it down. Track at least one real-user signal if you can, and use lab tests to explain the “why,” not to win arguments. When a dependency adds meaningful weight, require a short note: what it costs, what it replaces, and why it’s worth it. Most importantly, put the budget check on the normal release path.
If budgets feel like constant friction, they’re usually unrealistic for today, or they aren’t tied to real decisions. Fix those two things first.
A small team shipped a React analytics dashboard in a week. It felt fast at first, but every Friday release made it a bit heavier. After a month, users started saying the first screen “hangs” and filters feel laggy.
They stopped arguing about “fast enough” and wrote down budgets tied to what users notice: an LCP cap and an INP cap for the dashboard, plus a limit on the initial JavaScript bundle, all measured on the same mid-range device and network profile.
The first failure showed up in two places. The initial JavaScript bundle crept up as charts, date libraries, and a UI kit were added. At the same time, the dashboard header image was swapped for a bigger file “just for now,” pushing LCP over the limit. INP got worse because each filter change triggered heavy rerenders and expensive calculations on the main thread.
They fixed it in an order that produced quick wins and prevented repeat regressions:
Get LCP back under the line by resizing and compressing images, setting explicit image dimensions, and avoiding blocking font loads.
Cut initial JS by removing unused libraries, splitting non-critical routes, and lazy-loading charts.
Improve INP by memoizing expensive components, debouncing typing filters, and moving heavy work off the hot path (a small sketch follows this list).
Add a budget check to every release so if a metric breaks, the release waits.
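For the INP step, a minimal React sketch of the memoize-plus-defer pattern; the row type, filter logic, and names are illustrative:

```tsx
import { useDeferredValue, useMemo, useState } from "react";

// Hypothetical data shape for the dashboard table.
type Row = { name: string; value: number };

export function FilteredTable({ rows }: { rows: Row[] }) {
  const [query, setQuery] = useState("");
  // Keep typing responsive: the deferred value lags behind rapid keystrokes.
  const deferredQuery = useDeferredValue(query);

  // Rerun the expensive filtering only when its inputs actually change.
  const visible = useMemo(
    () =>
      rows.filter((r) =>
        r.name.toLowerCase().includes(deferredQuery.toLowerCase())
      ),
    [rows, deferredQuery]
  );

  return (
    <>
      <input value={query} onChange={(e) => setQuery(e.target.value)} />
      <ul>
        {visible.map((r) => (
          <li key={r.name}>
            {r.name}: {r.value}
          </li>
        ))}
      </ul>
    </>
  );
}
```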
After two releases, LCP dropped from 3.4s to 2.3s, and INP improved from around 350ms to under 180ms on the same test device.
A budget only helps if people can follow it the same way every time. Keep it small, write it down, and make it part of shipping.
Pick a handful of metrics that fit your app, set “warn vs fail” thresholds, and document exactly how you test (device, browser, network, page/flow). Save a baseline report from the current best release and label it clearly. Decide what counts as a valid exception and what doesn’t.
Before each release, run the same audit and compare it to the baseline. If something regresses, log it where you track bugs and treat it like a broken checkout step, not a “later” task. If you ship with an exception, record an owner and an expiry date (often 1 to 2 sprints). If the exception keeps getting renewed, the budget needs a real discussion.
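If it helps to keep exceptions honest, give them a shape that forces an owner and an expiry. A hypothetical record type, not a prescribed format:

```ts
// One entry per shipped budget breach.
interface BudgetException {
  metric: "LCP" | "INP" | "totalJsKb" | "totalImageKb" | "requestCount";
  reason: string;  // what grew, and why it was accepted
  owner: string;   // who pays it back
  expires: string; // ISO date, typically one to two sprints out
}
```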
Move budgets earlier into planning and estimates: “This screen adds a chart library, so we need to remove something else or lazy-load it.” If you’re building with Koder.ai (koder.ai), you can also write these constraints up front in Planning Mode, then iterate in smaller slices and use snapshots and rollback when a change pushes the app over a cap. The point isn’t the tool, it’s the habit: every new feature has to pay for its weight, or it doesn’t ship.
A performance budget is a set of hard limits (time, size, requests, CPU work) that your team agrees to before building.
If a change exceeds the limit, treat it like a broken requirement: fix it, reduce scope, or explicitly approve an exception with an owner and an expiry date.
Because performance gets worse gradually. Each feature adds JavaScript, CSS, images, fonts, API calls, and third‑party tags.
Budgets stop the slow creep by forcing a tradeoff: if you add weight or work, you must pay it back (lazy-load, split a route, simplify UI, remove a dependency).
Pick one real user journey and one consistent test setup.
A good starter is something frequent and business-critical, like “home to signup,” “dashboard first load after login,” or “checkout and payment confirmation.”
Avoid edge cases at first; you want a flow you can measure every release.
Start with one target that matches typical users, for example a mid-range Android phone from the last 2 to 3 years, on 4G, measuring a cold load and one key navigation in the region where most of your users are.
Write it down and keep it stable. If you change device, network, cache state, or the path you test, your trend becomes noise.
Use a small set that covers both what users feel and what teams can control: experience metrics like LCP and INP, plus size, request, and runtime limits that explain why things got slow.
A practical starter set is an LCP budget, an INP budget, a total JavaScript cap, an image-weight cap, and a request-count limit.
Use two thresholds: “warn” when you’re drifting, and “fail” when a release is blocked or needs explicit approval.
This avoids constant fire drills while still making the limits real when you cross the line.
Do this in order: measure a repeatable baseline (median of at least three runs on the same setup), set budgets 5 to 10 percent better than today’s median, add a budget check to every release, review breaches weekly, and tighten only after you’ve held the line for a few releases.
Not always. Bundle size can be fine while the page still feels slow because the main thread is busy.
Common React causes: hydration blocking the main thread, heavy client-side rendering from components like carousels and charts, and expensive rerenders that show up as long tasks.
Add a runtime budget (for example, limit long tasks during startup or set a hydration time cap) to catch this class of issues.
Fast generation and iteration can quietly add dependencies, UI flourishes, and third-party scripts that ship to everyone.
The fix is to make budgets part of the workflow: state the constraints up front during planning, check them on every release, and require a short cost note whenever a new dependency or feature adds meaningful weight.
This keeps fast iteration from turning into a slow product.
Timing metrics show the pain; size and runtime limits help you quickly find what caused it.
Pick 3–5 budgets first. Tune later based on your baseline and release history.
Treat the breach like a bug: identify the commit, fix or scope-reduce, and prevent repeats.