Brakit
8 min read
Developer A builds the dashboard. Queries the users table.
Developer B builds settings. Queries the users table.
Developer C builds checkout. Queries the users table.
Each developer knows their part. Nobody knows all three hit the same table on every page load.
Three files. Three developers. Three code reviews. One table getting hammered, and nobody can see it.
This isn't a skill problem. These are good developers writing good code. Each file passes code review. Each endpoint returns 200.
The problem isn't in any file. It's in the space between files. The connections that exist at runtime but are invisible in code.
Every sprint adds endpoints. No sprint adds understanding.
Architecture diagrams go stale the week they're drawn. Knowledge-sharing meetings cover what people remember, not what they've forgotten. The gap between what the codebase does and what the team knows grows silently, sprint after sprint. Then something breaks and everyone discovers the complexity that was always there.
This isn't a failure of process. No amount of documentation fixes it. The codebase grows structurally. Understanding doesn't.
A SELECT * FROM users WHERE id = $1 runs on every page load. The nav bar fetches it. The dashboard fetches it. The notification badge fetches it. The activity feed fetches it.
Four endpoints. Four round trips. Same row every time.
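The pattern is easy to see once it's laid out in one place, which is exactly what never happens in practice. A minimal sketch (handler names and the `getUser` helper are illustrative, not real application code):

```javascript
// Four handlers, written independently, each fetching the
// same user row. Nobody sees all four in one file.
let queryCount = 0;

function getUser(id) {
  queryCount++; // stand-in for: SELECT * FROM users WHERE id = $1
  return { id, name: "Ada" };
}

// Each handler was built by a different developer and reviewed separately.
const navBar = (req) => getUser(req.userId);
const dashboard = (req) => getUser(req.userId);
const notificationBadge = (req) => getUser(req.userId);
const activityFeed = (req) => getUser(req.userId);

// One page load hits all four endpoints.
const req = { userId: 1 };
[navBar, dashboard, notificationBadge, activityFeed].forEach((h) => h(req));

console.log(queryCount); // 4 round trips for the same row
```

Each handler is correct on its own. The waste only exists in aggregate, at runtime, across files.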
The pricing page fetches exchange rates from a third-party service. So does checkout. Built independently, months apart, by different people. Separate error handling. Separate caching. Double the rate limit exposure.
An endpoint that responded in 120ms three sprints ago now takes 900ms. A refactor added an eager-load. Tests still pass. Response is still correct. Nobody tracks endpoint performance across sprints. They track whether the code works, not how it performs.
A single page load triggers three component mounts. Each one fetches /api/user/me on its own. Three identical queries. Three identical serializations. Three times the load. For one page view.
None of these show up in error logs. None fail tests. None get caught in code review. All of them are real.
The reviewer reads the diff. Variable names look good. Logic is correct. Tests pass. Approved.
But the reviewer doesn't know that the query in this PR already runs from three other routes. They don't know the external API call duplicates one in another service. They can't know. The PR shows one file. It doesn't show the rest of the system.
Code review is file-shaped. Systems are graph-shaped.
That's the fundamental limit of file-based review in a system-level world. The tool shows exactly what changed. It can't show what that change means in the context of everything else that's already running.
AI coding tools write correct code fast. Ask your AI assistant to build a settings page. It generates clean endpoints, proper validation, solid error handling. The code works. It passes review.
But it wrote each endpoint in isolation. It didn't check whether those endpoints duplicate queries that already run from three other routes. It can't. It has file context. It doesn't have runtime context.
AI reads your codebase. It doesn't see your system.
The faster AI writes code, the faster the codebase outgrows the team's understanding. Every generated endpoint is another node in a system graph nobody is looking at. Code quality goes up. System awareness goes down.
We're accelerating the exact problem that was already compounding. More code, faster, with less understanding of how it all connects.
Here's the thing. Your application already has this information. Every request that hits your server triggers a chain of queries, fetches, logs, and responses. The runtime knows which tables get hit together, which services depend on each other, which endpoints share data sources.
It just doesn't tell anyone.
The knowledge exists. It lives in the execution path of every request. But it disappears the moment the response is sent. No record. No correlation. No visibility.
The same query appearing from four different routes. The same external API called by two unrelated endpoints. The same table hit on every page load. Not found by searching. Found by watching the system run.
Brakit runs inside your Node.js process. It hooks into the HTTP layer, fetch, console, and your database client (Prisma, pg, or mysql2) automatically.
Every incoming request gets a unique ID. Every query, fetch, and log that happens during that request is correlated back to it through async context. That's how a single request timeline can show you 47 queries you didn't know existed. That's how the graph knows which endpoints hit which tables.
Brakit never runs in production. It checks NODE_ENV, detects CI environments, and disables itself. It never throws errors into your application. Every hook is wrapped in safety layers. If anything goes wrong, it fails silently and your app continues untouched.
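The fail-silent guarantee comes down to a simple wrapping pattern: every hook runs inside a try/catch that swallows instrumentation errors. A sketch of the pattern, not Brakit's actual internals:

```javascript
// Fail-silent hook wrapping: an instrumentation bug must never
// become an application error.
function safeHook(hook) {
  return (...args) => {
    try {
      return hook(...args);
    } catch {
      // Swallow the failure; the app continues untouched.
      return undefined;
    }
  };
}

// Even a hook that always throws becomes harmless once wrapped.
const brokenHook = safeHook(() => {
  throw new Error("instrumentation bug");
});

brokenHook(); // does not throw
console.log("app still running");
```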
It works with Express, Fastify, Koa, or raw http.createServer. No agents to install. No environment variables. No dashboard to configure. One import.
Brakit exposes its findings through MCP, the Model Context Protocol. That means AI assistants like Claude and Cursor can query your running application's runtime data directly.
Your AI assistant can already read your files. Now it can see your system. It can look at open issues, inspect endpoint performance, verify whether a fix actually worked. All from live telemetry, not static code.
This is the piece that starts to close the AI gap. Give the AI the same system-level context that developers are missing, and it stops generating endpoints in isolation.
Your codebase isn't broken. Your understanding of it is incomplete. And it gets more incomplete every sprint.
The fix isn't writing better code. Your code is fine. The fix is seeing the code you already have as a connected system. The queries, the dependencies, the patterns, the waste. All of it visible, all of it correlated, all of it automatic.
Your codebase has always known more than your team. Brakit just makes that knowledge visible.
It's open source, runs locally, and your data never leaves your machine.