A new study from researchers at Google Research, Google DeepMind, and MIT challenges one of the most widely held beliefs in artificial intelligence development: that adding more AI agents automatically improves performance.
For the past few years, multi-agent systems—where multiple AI models collaborate on a task—have been promoted as a path toward more powerful, human-like reasoning. The new research shows the reality is far more nuanced: adding agents helps only under specific conditions, and in others it significantly degrades results.
The paper, published on December 9, 2025, represents one of the most systematic efforts to understand how AI agent systems scale. Rather than relying on anecdotal demonstrations, the researchers ran 180 controlled experiments to test when collaboration helps and when it backfires.
Large-Scale Experiments Reveal Extreme Performance Swings
The research team tested five agent architectures across three major families of large language models: OpenAI’s GPT series, Google’s Gemini models, and Anthropic’s Claude models. The goal was to isolate the effects of coordination itself, rather than differences in model capability.
The results were striking. Depending on task design and coordination strategy, multi-agent systems produced outcomes ranging from an 81 percent performance improvement to a 70 percent decline. In other words, adding agents could either dramatically boost results or severely undermine them. These swings demonstrate that agent collaboration is not inherently beneficial—it must be carefully matched to the problem being solved.
This variability explains why previous research has produced conflicting conclusions. Some high-profile demonstrations showed impressive gains with agent teams, while others quietly failed in more realistic settings.
The 45 Percent Accuracy Threshold That Changes Everything
One of the study’s most important findings is what the researchers call a “critical performance threshold.” When a single AI agent already achieves around 45 percent accuracy on a task, adding more agents usually leads to diminishing or negative returns. Beyond this point, coordination overhead—extra communication, conflict resolution, and validation—starts to outweigh any benefit from parallel reasoning.
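As a rough illustration (not code from the paper), the threshold can be read as a simple decision rule. The function name and return labels below are hypothetical:

```python
def choose_architecture(single_agent_accuracy: float,
                        threshold: float = 0.45) -> str:
    """Illustrative decision rule based on the study's reported ~45% threshold.

    Past the threshold, coordination overhead (extra communication,
    conflict resolution, validation) tends to outweigh the gains
    from parallel reasoning.
    """
    if single_agent_accuracy >= threshold:
        return "single agent"          # adding agents risks negative returns
    return "consider multi-agent"      # collaboration may still pay off

print(choose_architecture(0.52))  # -> "single agent"
```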
Statistical analysis confirmed this effect was not random. The negative relationship between added agents and performance past this threshold was both strong and consistent. This finding directly contradicts last year’s influential “More Agents Is All You Need” narrative, showing that scaling agent count without understanding task structure can actively harm outcomes.
Why Some Tasks Benefit While Others Collapse
The study highlights that task structure is the key factor determining success. Financial analysis problems, where work can be split into independent components, performed exceptionally well with centralized multi-agent coordination. In these cases, different agents examined sales data, costs, and market trends simultaneously, then merged their insights. This parallelism led to performance improvements of over 80 percent.
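To make the pattern concrete, here is a minimal Python sketch of that centralized fan-out-and-merge structure. The sub-agents are stand-ins for LLM calls, and all names are hypothetical rather than taken from the paper:

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical sub-agents standing in for LLM calls; in the study,
# separate agents examined sales data, costs, and market trends.
def sales_agent(data: dict) -> str:
    return f"sales insight for {data['company']}"

def cost_agent(data: dict) -> str:
    return f"cost insight for {data['company']}"

def trend_agent(data: dict) -> str:
    return f"market-trend insight for {data['company']}"

def centralized_analysis(data: dict) -> str:
    """Fan out independent sub-analyses in parallel, then merge them
    in one place: the coordination shape that suits decomposable tasks."""
    agents = [sales_agent, cost_agent, trend_agent]
    with ThreadPoolExecutor(max_workers=len(agents)) as pool:
        partials = list(pool.map(lambda agent: agent(data), agents))
    # A single coordinator combines the partial insights into one answer.
    return " | ".join(partials)

print(centralized_analysis({"company": "ACME"}))
```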
By contrast, tasks with strong sequential dependencies fared poorly. In Minecraft planning experiments, where each action changes the environment and affects future decisions, multi-agent systems consistently underperformed. Performance dropped between 39 and 70 percent across all multi-agent configurations. The reason is simple: when context changes step by step, dividing reasoning across agents fragments the shared state, making it harder to maintain a coherent plan.
Error Amplification and Token Inefficiency Exposed
The research also uncovered serious efficiency and reliability issues. In decentralized multi-agent systems, errors spread rapidly, compounding more than 17 times faster than in single-agent setups. Centralized coordination reduced this effect but still amplified errors over four times faster than a single agent.
Token efficiency suffered as well. A single agent completed an average of 67 successful tasks per 1,000 tokens. Centralized multi-agent systems managed only 21, while hybrid systems dropped to just 14. Much of this loss came from agents “talking to each other” rather than solving the task itself, revealing a hidden cost of collaboration that many benchmarks overlook.
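Inverting those figures makes the cost gap easier to see. The back-of-the-envelope calculation below uses only the numbers reported above:

```python
# Reported efficiency: successful tasks per 1,000 tokens.
tasks_per_1k = {
    "single agent": 67,
    "centralized multi-agent": 21,
    "hybrid multi-agent": 14,
}

for setup, rate in tasks_per_1k.items():
    tokens_per_success = 1000 / rate
    print(f"{setup}: ~{tokens_per_success:.0f} tokens per successful task")

# single agent: ~15 tokens per successful task
# centralized multi-agent: ~48 tokens per successful task
# hybrid multi-agent: ~71 tokens per successful task
```

On these numbers, a centralized multi-agent system spends roughly three times as many tokens per success as a single agent, and a hybrid system nearly five times as many.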
A Predictive Framework for Smarter Agent Design
Rather than dismissing multi-agent systems entirely, the researchers developed a predictive framework to determine the optimal coordination strategy for a given task. By analyzing measurable task properties—such as tool usage, dependency depth, and error sensitivity—the framework correctly identified the best agent setup for 87 percent of new scenarios.
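The paper does not publish its framework as code, but the idea can be sketched as a function from measurable task properties to a coordination strategy. The feature names and thresholds below are illustrative assumptions, except the 45 percent accuracy cutoff reported by the study:

```python
from dataclasses import dataclass

@dataclass
class TaskProfile:
    single_agent_accuracy: float  # accuracy a lone agent achieves on the task
    dependency_depth: int         # how many steps depend on earlier results
    parallel_fraction: float      # share of work splittable into independent parts
    error_sensitivity: float      # 0..1, how costly a propagated mistake is

def recommend_strategy(task: TaskProfile) -> str:
    """Toy stand-in for the study's predictive framework. The real
    framework is fit to 180 experiments; these hand-set rules only
    illustrate how task properties could map to a coordination choice."""
    if task.single_agent_accuracy >= 0.45:
        return "single agent"             # past the threshold, coordination rarely pays
    if task.dependency_depth > 5 or task.error_sensitivity > 0.7:
        return "single agent"             # sequential or fragile tasks fragment badly
    if task.parallel_fraction > 0.6:
        return "centralized multi-agent"  # decomposable work benefits from fan-out
    return "hybrid multi-agent"

print(recommend_strategy(TaskProfile(0.30, 2, 0.8, 0.2)))  # -> "centralized multi-agent"
```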
The study establishes the first quantitative scaling principles for agent systems, offering practical guidance for AI engineers. The message is clear: more agents are not inherently better. Effective AI design depends on knowing when to collaborate, when to centralize control, and when a single, well-designed agent is the smarter choice.