Breaking down knowledge silos
Today I'm sharing the vision behind review.ai, where we're tackling one of software development's most persistent challenges: knowledge silos and context loss. While we're just getting started with our first beta testers, we're building tools to help development teams preserve and surface the critical context behind their code decisions.
To get started, head over to review.ai/install to join the beta.
If you've worked on any software team of meaningful size, you know the scenario: It starts when you need to make changes to an unfamiliar part of the codebase. You dig in, only to find yourself stuck on questions that the code itself can't answer. Why was this particular approach chosen? What alternatives were considered and rejected? What's the broader context that informed these decisions?
You may start out with high hopes: You check the docs (if they exist), scroll through old GitHub issues, and dig through git history to piece together some context. When those trails run cold — and they almost always do — you resort to the final fallback: tracking down the person who wrote the code. This works, sort of. But it creates its own set of problems. The original author might be busy, out of office, or gone from the company entirely. Even when available, they need to context-switch away from their current work, reconstruct their thought process from months ago, and explain it all over again. Even when it works, it's lossy at best and costly for everyone involved.
We believe there's a better way.
The cost of missing context: accumulating knowledge debt
We're launching with a product called review, a programming companion that analyzes code and provides actionable suggestions to help developers produce better output. It's like having a thoughtful teammate pairing next to you, pointing out ways to improve your code before it even gets to review. We've written more about how review works and why we built it this way.
We think of review as a step towards addressing an issue I call "knowledge debt."
Once you're looking for it, you can spot the knowledge debt problem in many typical software development processes. Let's say someone — a product manager, a customer success rep, or an engineer — realizes something needs to be built or changed. They write up a ticket or doc, and it makes its way to an engineer for implementation. That engineer then has to do their own research to understand the problem space, looking through existing code, talking to other teams, and exploring potential approaches. Only then can they actually write the code.
From there, they submit it for review, explain their reasoning to reviewers, make adjustments, and get it merged. And after it gets merged? The code lives on, but all that valuable context — from the initial product discussions to the technical research to the implementation debates — effectively becomes another needle in the haystack. We know there's knowledge and context connected to those lines of code, but retrieving it requires knowing whom to ask.
This pattern repeats across teams and organizations. Just like technical debt, knowledge debt accumulates over time and makes everything harder. New team members take longer to get up to speed, asking the same questions and getting the same answers over and over. Experienced developers hesitate to touch unfamiliar parts of the codebase. Overall progress slows.
Beyond code: Building the future of development
For now, we're approaching the knowledge debt problem from the knowledge user's side. review is always available to look at your proposed changes, catch issues, and improve the quality of code you bring to human reviews. We think that's important, because code review is such a critical moment — it's when developers are actively thinking about and discussing not just what the code does, but why they made specific decisions and chose specific approaches. Tools that raise the quality of code reaching those reviews boost developers' confidence, and free up more time and human energy for those more complex questions.
But that's just one piece of the puzzle. Our larger vision is to build tools that can incorporate even more context. It's a challenge, because context in software development is scattered all over the map. Some of it is structured (like commit messages and pull requests), but much of it is unstructured (like Slack discussions and "hallway" conversations). Some context is captured somewhere in your tools, but a vast amount goes completely uncaptured. This missing context is what forces developers to interrupt colleagues, slows down development, and creates knowledge silos.
Our mission is to methodically tackle this matrix of structured/unstructured and captured/uncaptured context. We're building tools that not only capture more of this valuable context but also structure it in ways that make it discoverable and useful.
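To make that matrix concrete, here's a rough sketch in Python. The names and classifications are purely illustrative (this isn't a published data model), but they show how familiar context sources fall along those two axes:

```python
from dataclasses import dataclass
from enum import Enum

class Structure(Enum):
    STRUCTURED = "structured"      # commit messages, pull requests, tickets
    UNSTRUCTURED = "unstructured"  # Slack threads, hallway conversations

class Capture(Enum):
    CAPTURED = "captured"          # recorded somewhere in your tools
    UNCAPTURED = "uncaptured"      # exists only in someone's head

@dataclass
class ContextSource:
    name: str
    structure: Structure
    capture: Capture

# Placing some familiar sources onto the matrix.
sources = [
    ContextSource("commit message", Structure.STRUCTURED, Capture.CAPTURED),
    ContextSource("pull request discussion", Structure.STRUCTURED, Capture.CAPTURED),
    ContextSource("Slack thread", Structure.UNSTRUCTURED, Capture.CAPTURED),
    ContextSource("hallway conversation", Structure.UNSTRUCTURED, Capture.UNCAPTURED),
]

for s in sources:
    print(f"{s.name}: {s.structure.value} / {s.capture.value}")
```

The structured-and-captured corner is where existing tools already live; the hard and valuable work is pulling context out of the unstructured and uncaptured corners.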
Just as test coverage helps teams understand their technical risks, we envision a notion of knowledge coverage — a way to measure and improve how well your codebase's context is preserved and accessible. In turn, we could help teams identify areas where knowledge debt is accumulating and guide them in systematically reducing it. review is our first step in this journey, but we're already working on more tools that will help development teams preserve and surface the critical "why" behind their code.
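As a back-of-the-envelope illustration of the analogy (not a spec for any real metric), imagine counting the fraction of code units that have at least one piece of linked context:

```python
from dataclasses import dataclass, field

@dataclass
class CodeUnit:
    path: str
    # Links to preserved context: design docs, PR threads, recorded decisions, etc.
    context_links: list[str] = field(default_factory=list)

def knowledge_coverage(units: list[CodeUnit]) -> float:
    """Fraction of code units with at least one piece of linked context.

    Deliberately naive, by analogy with line-based test coverage.
    """
    if not units:
        return 0.0
    covered = sum(1 for u in units if u.context_links)
    return covered / len(units)

units = [
    CodeUnit("billing/invoice.py", ["design doc", "PR discussion"]),
    CodeUnit("billing/tax.py"),  # no preserved context: knowledge debt lives here
    CodeUnit("auth/session.py", ["Slack thread"]),
]
print(f"knowledge coverage: {knowledge_coverage(units):.0%}")  # prints 67%
```

The interesting engineering work is everything this sketch waves away: what counts as a code unit, what counts as context, and how links get created without manual bookkeeping.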
We think such an understanding can be a superpower for developers. Because the best software isn't just code that works — it's code that can be understood, maintained, and improved by the entire team. With robust understanding, engineers can work confidently across any part of the codebase, ship higher quality code faster, and spend more time building rather than getting stuck. By capturing, structuring, and surfacing the context behind code decisions, we're making it possible for engineers to level up both the speed and quality of their work.
We're currently bringing on our first beta testers and looking forward to expanding to more design partners who can help shape the future of the product. If you're interested in being part of this journey with review.ai, reach out to [email protected].
We're not interested in creating another "eat your broccoli" developer tool — the kind you use because you have to, not because you want to. Instead, we see AI as an opportunity to create frictionless experiences that make developers' lives better. Our approach is about augmenting human capabilities, not replacing them. We believe AI opens up new possibilities for tools that feel less like mandatory processes and more like superpowers that naturally extend how developers work.