There's a persistent belief in enterprise technology that the size of the team should match the size of the problem. Big challenge? Big team. Strategic initiative? Staff it up. This thinking made sense in the era of large-scale software projects where the bottleneck was typing code into editors.
AI has changed this equation. The projects we see succeed—consistently, across industries—are the ones run by small, focused teams. Not because companies can't afford larger ones, but because smaller teams are structurally better suited to how AI projects actually work.
Adding people to an AI project doesn't make it go faster. It makes it go slower in more expensive ways.
Fred Brooks wrote about this in The Mythical Man-Month (1975): the number of communication channels in a team grows quadratically with team size, as n(n−1)/2. A team of three has three channels. A team of ten has forty-five. A team of twenty has one hundred and ninety.
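The channel counts above follow directly from counting pairs: every pair of people is a potential communication channel. A quick sketch of the arithmetic:

```python
def channels(team_size: int) -> int:
    """Number of pairwise communication channels in a team:
    each of the n people can pair with n - 1 others, and each
    pair is counted once, giving n * (n - 1) / 2."""
    return team_size * (team_size - 1) // 2

for size in (3, 10, 20):
    print(f"team of {size}: {channels(size)} channels")
# team of 3: 3 channels
# team of 10: 45 channels
# team of 20: 190 channels
```

Doubling the team roughly quadruples the coordination surface, which is why the overhead dominates long before headcount pays off.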
In traditional software projects, you can mitigate this with clear interfaces, well-defined APIs, and modular architecture. Break the work into independent pieces and let teams work in parallel.
AI projects resist this decomposition. The model architecture affects the data pipeline, which affects the evaluation criteria, which affects the model architecture. Everything is interconnected. You can't cleanly separate "the data team" from "the model team" from "the integration team" because every decision in one domain has cascading effects on the others.
A small team of three experienced practitioners doesn't need formal communication channels. They talk across the table. They make decisions in minutes instead of scheduling meetings. They see the whole system because they're building the whole system.
Large teams dilute expertise. When you staff an AI project with twenty people, some are genuinely experienced. Others are learning. In the best case, the experienced people spend a significant fraction of their time teaching and reviewing rather than building. In the worst case, the junior members make architectural decisions that have to be undone later.
Small teams have no room for passengers. Every person must be capable of working independently across the full stack of the problem. This constraint is actually an advantage: it ensures that every decision is made by someone who understands its implications.
AI projects in particular reward practitioners who can think across domains. The best AI solutions come from people who understand the business problem, the data landscape, the model architecture, and the deployment constraints—simultaneously. You can't get this by assembling specialists and hoping they communicate well enough.
AI projects live and die by iteration speed. The first approach rarely works perfectly. You try something, evaluate the results, adjust, and try again. The team that can run ten experiments in the time it takes another team to run three will find better solutions.
Large teams are inherently slower to iterate. Every change requires coordination. Every experiment needs sign-off. Every result needs to be communicated across groups. The overhead compounds until the team is spending more time on process than on building.
Small teams iterate faster not because they cut corners, but because they eliminate the coordination overhead that doesn't add value. When the person who wrote the model also wrote the evaluation pipeline, they can spot issues immediately and fix them without filing a ticket.
This is the factor that's changed most dramatically in recent years. AI tools—code generation, automated testing, intelligent debugging—amplify the output of individual practitioners enormously. But they amplify skilled practitioners more than unskilled ones.
An experienced AI engineer using modern tools can produce in a day what would have taken a week three years ago. But the leverage requires deep understanding: knowing which patterns to apply, which architectural decisions will scale, which shortcuts are safe and which will create technical debt.
This amplification effect means that three expert practitioners with AI tools can outproduce a team of fifteen working with traditional methods. The math is counterintuitive but the results are consistent. Smaller, more experienced teams with better tools deliver faster, cheaper, and at higher quality.
Small teams aren't always the answer. If you're building a foundational model from scratch, training on billions of tokens, you need infrastructure engineers, data pipeline specialists, and ML researchers working in parallel. If you're deploying across hundreds of markets with regulatory complexity, you need local expertise at scale.
But for the vast majority of enterprise AI projects—building custom solutions for specific business problems—the sweet spot is two to five experienced practitioners. They can handle everything from problem definition through production deployment, and they'll do it faster and better than a team three times their size.
The lesson isn't "always use small teams." It's "match the team structure to the nature of the work." And for applied AI—solving real business problems with intelligent systems—the nature of the work favors small, skilled, tightly integrated teams.
If you're planning an AI initiative, resist the instinct to staff up. Instead, invest in finding the right three to five people—or the right partner who brings that team ready-made. Prioritize experience and breadth of skill over headcount. Create an environment where that small team has direct access to stakeholders and data, without bureaucratic barriers.
The organizations that will win the AI era aren't the ones with the largest AI teams. They're the ones with the most effective small teams, working on the right problems, with the autonomy to move fast and the expertise to move wisely.
We're a small team that builds AI solutions fast. Let's talk about what a focused engagement could accomplish for you.
Start a Conversation →