AI Models Assume Humans Are Too Rational, Studies Find

New research is challenging a core assumption behind today’s AI “behavior simulators”: leading language models often expect people to act more logically than they really do, which can distort predictions in economics-style games and real-world decision tasks.

What the studies found

A peer-reviewed study in the Journal of Economic Behavior & Organization tested how large language models perform in “Keynesian beauty contest”-style strategic games and found they frequently play “too smart” because they overestimate how rational their human opponents will be.
The researchers replicated results from classic beauty-contest experiments and reported that while the models can adjust to opponents of different sophistication levels, they still misread how people actually reason in the game.
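
To see why “too smart” play loses, consider the best-known version of the game, “guess 2/3 of the average.” Below is a minimal simulation sketch; the level-k opponent model and the guess distribution are standard textbook constructions, not code from the study:

```python
import statistics

def level_k_guess(k: int, anchor: float = 50.0, factor: float = 2 / 3) -> float:
    """Level-0 guesses the anchor; each higher level best-responds
    by multiplying the level below by the target factor."""
    guess = anchor
    for _ in range(k):
        guess *= factor
    return guess

# A fully rational (infinitely iterated) player converges to 0.
rational_guess = 0.0

# Humans in lab experiments typically reason only a few steps deep.
human_guesses = [level_k_guess(k) for k in (0, 1, 1, 2, 2, 3)]

# The winning guess is the one closest to 2/3 of the group average.
pool = human_guesses + [rational_guess]
target = (2 / 3) * statistics.mean(pool)
winner = min(pool, key=lambda g: abs(g - target))

print(f"target = {target:.1f}, winning guess = {winner:.1f}")
# The equilibrium guess of 0 loses: the target lands near the
# shallow-depth human guesses, not at the rational fixed point.
```

In this toy run, the perfectly rational guess of 0 loses to a shallow level-3 guess, which mirrors the failure mode the study describes.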

A separate research paper by researchers affiliated with Princeton University, Boston University, and New York University evaluated several leading models (including GPT-4o, GPT-4-Turbo, Llama 3 8B/70B, and Claude 3 Opus) against large datasets of human decisions.
Across both “forward modeling” (predicting choices) and “inverse modeling” (inferring preferences from choices), the authors found the models systematically drift toward expected-value (EV) logic, closer to a textbook rational-choice rule than to how people actually decide.
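
To make that contrast concrete, here is a minimal sketch of the gap between the textbook EV rule and a heuristic-driven chooser, using a prospect-theory-style probability-weighting function (the Tversky-Kahneman form; the gamma value and the gamble itself are illustrative, not taken from the paper):

```python
# Two-outcome gamble: win `payoff` with probability p, else 0.
def expected_value(p: float, payoff: float) -> float:
    return p * payoff

def weighted_value(p: float, payoff: float, gamma: float = 0.61) -> float:
    """Prospect-theory-style probability weighting: small probabilities
    are overweighted, large ones underweighted (Tversky-Kahneman form).
    gamma = 0.61 is a commonly cited estimate, used here illustratively."""
    w = p**gamma / (p**gamma + (1 - p)**gamma) ** (1 / gamma)
    return w * payoff

# Gamble A: 1% chance of 100. Gamble B: a certain 2.
p, payoff, sure_thing = 0.01, 100.0, 2.0

print("EV rule picks:", "A" if expected_value(p, payoff) > sure_thing else "B")
print("Weighted rule picks:", "A" if weighted_value(p, payoff) > sure_thing else "B")
# EV says B (1.0 < 2.0); the overweighted 1% chance makes A look
# worth about 5.5, matching the human taste for lottery-like gambles.
```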

Key evidence (numbers)

In the risky-choice tests, the paper reports that with chain-of-thought prompting, GPT-4o’s predictions correlate very strongly with maximizing expected value (Spearman ρ = 0.94), while human choices correlate far less with that rational benchmark (ρ = 0.48).
The same paper reports that zero-shot prompts can be noisy and may even lead models to sometimes underuse probability information, while chain-of-thought pushes models toward more “rational” patterns than humans show.
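
For intuition, a rank correlation of this kind can be computed as below. The numbers are toy values, hypothetical in shape; the paper’s exact evaluation pipeline may differ:

```python
from scipy.stats import spearmanr

# For each of five toy problems: does option A maximize EV (1/0),
# what choice rate does the model predict, and what do humans do?
ev_optimal  = [1.0, 0.0, 1.0, 1.0, 0.0]       # 1 if A maximizes EV
model_pred  = [0.95, 0.05, 0.90, 0.97, 0.10]  # LLM-predicted share choosing A
human_rates = [0.60, 0.52, 0.45, 0.75, 0.58]  # observed share choosing A

rho_model, _ = spearmanr(model_pred, ev_optimal)
rho_human, _ = spearmanr(human_rates, ev_optimal)
print(f"model vs EV rule: {rho_model:.2f}, humans vs EV rule: {rho_human:.2f}")
# The EV-locked predictor ranks problems almost exactly as the EV rule
# does (high rho); the heuristic-shaped human rates track it far less.
```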

Where the “too rational” bias shows up

In strategic “beauty contest” games, success requires predicting what others will actually choose, not what a perfectly rational agent should choose, so overestimating rationality produces systematic misses.
The study also links the beauty-contest framework to market behavior, where participants try to anticipate other participants’ expectations rather than intrinsic value alone.

In the risky-choice study, the researchers used the choices13k risky-decision dataset, evaluating a subset of 9,831 non-ambiguous problems drawn from the full collection of 13,006 risky-choice problems.
They tested three forward-modeling tasks (predicting an individual’s choice, predicting the proportion of people choosing an option, and simulating choices), then compared model outputs with human response proportions and with rational benchmarks such as expected value.
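
Here is a minimal sketch of what the “predict the proportion” task looks like as an evaluation loop, with a stand-in predictor in place of an actual LLM call (the toy problems and the EV-locked rule are illustrative assumptions):

```python
# Minimal sketch of a "predict the proportion" evaluation loop.
# `ask_model` is a hypothetical stand-in for an LLM call; it applies
# a toy EV-based rule so the script runs without any API access.

def ask_model(p_a, payoff_a, p_b, payoff_b):
    """Stand-in predictor: returns the predicted share choosing A."""
    ev_a, ev_b = p_a * payoff_a, p_b * payoff_b
    return 0.9 if ev_a > ev_b else 0.1  # near-deterministic EV rule

problems = [  # (p_a, payoff_a, p_b, payoff_b, observed human share for A)
    (0.50, 10.0, 1.00, 4.0, 0.62),
    (0.05, 90.0, 1.00, 5.0, 0.48),
    (0.80, 6.0, 0.25, 20.0, 0.55),
]

errors = []
for p_a, x_a, p_b, x_b, human_share in problems:
    pred = ask_model(p_a, x_a, p_b, x_b)
    errors.append((pred - human_share) ** 2)

print(f"mean squared error vs human proportions: {sum(errors)/len(errors):.3f}")
# An EV-locked predictor pushes every prediction toward the extremes,
# while real choice proportions hover much closer to 0.5.
```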

Two-study snapshot

Study 1: Strategic games (Journal of Economic Behavior & Organization)
What was tested: “Keynesian beauty contest”-style strategic reasoning, including “Guess the Number” variants, with models playing against different opponent types.
Models referenced: AI models including ChatGPT and Claude are referenced in reporting of the findings.
Main result: Models tend to assume opponents behave more rationally than humans do, leading to “too smart” play and losses.
Why it matters: Predicting people in markets, negotiations, and policy settings can fail if AI assumes unrealistic rationality.

Study 2: Decision datasets (arXiv paper, June 2024)
What was tested: Risky-choice decisions (forward modeling) plus preference inference (inverse modeling) using established psychology datasets.
Models referenced: GPT-4o, GPT-4-Turbo, Llama 3 8B/70B, Claude 3 Opus.
Main result: With chain-of-thought prompting, models align more with expected-value theory than with human choices (e.g., GPT-4o EV correlation 0.94 vs. humans 0.48).
Why it matters: Using LLMs as “human proxies” in experiments or forecasts may produce biased conclusions.

Why this matters beyond labs

The authors of the risky-choice and inference study argue that AI systems need accurate internal models of human decision-making to communicate effectively and to support safe, helpful interactions.
They also warn that if LLMs are used to simulate people for policy design, experimentation, or decision support, an overly rational “implicit human model” can mislead downstream conclusions.

A separate peer-reviewed PNAS study suggests LLMs can also show systematic decision biases that differ from humans’, including a stronger-than-human omission bias in moral dilemmas (a tendency to prefer inaction over action).
That study also reports that some biases may be linked to fine-tuning for chatbot behavior, raising questions about how alignment methods reshape decision tendencies.

Timeline of the idea

1936: Keynes introduces the “beauty contest” idea to explain markets as expectation-forecasting problems, showing why predicting others can matter more than “true” value.
1979: Kahneman and Tversky formalize core patterns of human deviation from strict rational choice in risky decisions (referenced in the modern study’s framing), establishing why “perfect rationality” is an unreliable human baseline.
2024: Researchers quantify that LLMs often assume humans are more rational than they are, especially with chain-of-thought prompting, demonstrating a measurable “rationality gap” between model predictions and human choices.
2025: Beauty-contest experiments show the same pattern in strategic interaction: models play too rationally and mispredict opponents, highlighting practical failure modes in strategic forecasting settings.

What researchers say could help

One implication from the risky-choice study is that prompting style matters: chain-of-thought can increase internal consistency and rational structure, but that may move predictions away from human behavior in domains where people use heuristics.
The same paper suggests that training data and evaluation practices may over-reward “perfectly reasoned” outputs, potentially teaching models an unrealistic picture of everyday human decision-making.
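
For concreteness, the two prompting styles might look roughly like this for a single risky-choice problem (the wording is a hypothetical reconstruction, not the paper’s exact prompts):

```python
problem = ("Option A: 5% chance of $90, otherwise $0.\n"
           "Option B: $5 for sure.")

zero_shot_prompt = (
    f"{problem}\n"
    "Which option would a typical person choose? Answer A or B."
)

chain_of_thought_prompt = (
    f"{problem}\n"
    "Think step by step: work out the value of each option, "
    "then state which option a typical person would choose."
)
# The CoT phrasing nudges the model to compute expected values
# (A: 0.05 * 90 = 4.5 < 5), so it tends to predict B even though
# many people favor the lottery-like Option A.
print(zero_shot_prompt, chain_of_thought_prompt, sep="\n\n")
```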

In the inverse-modeling experiments, the researchers report that model inferences about other people’s preferences can correlate strongly with how humans themselves interpret others—even if humans do not behave that rationally when choosing for themselves.
That split helps explain why LLM behavior can feel “human-like” in explanation mode while still failing at predicting real human choice frequencies.
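
“Inverse modeling” here means recovering a preference parameter from observed choices. A minimal sketch, assuming a standard power-utility-plus-softmax model fitted by grid search (a common textbook approach, not necessarily the paper’s method):

```python
import math

# Observed choices over (gamble vs. sure amount) pairs. Toy data:
# each row is (p, payoff, sure amount, chose gamble?).
data = [
    (0.50, 20.0, 8.0, False),
    (0.50, 20.0, 6.0, True),
    (0.25, 40.0, 7.0, False),
    (0.90, 12.0, 9.0, True),
]

def log_likelihood(alpha: float, tau: float = 1.0) -> float:
    """Softmax choice over power utility u(x) = x**alpha."""
    ll = 0.0
    for p, payoff, sure, chose_gamble in data:
        u_gamble = p * payoff ** alpha
        u_sure = sure ** alpha
        p_gamble = 1.0 / (1.0 + math.exp(-(u_gamble - u_sure) / tau))
        p_gamble = min(max(p_gamble, 1e-9), 1 - 1e-9)  # avoid log(0)
        ll += math.log(p_gamble if chose_gamble else 1.0 - p_gamble)
    return ll

# Grid-search the risk-attitude parameter that best explains the choices.
best_alpha = max((a / 100 for a in range(10, 151)), key=log_likelihood)
print(f"inferred risk attitude alpha = {best_alpha:.2f} (alpha < 1: risk averse)")
```

The design point is the asymmetry the researchers describe: a model can be good at this inference step, reading preferences the way people read each other, while still mispredicting the choices those same people make.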

Final thoughts

Taken together, the two lines of evidence point to a consistent risk: when AI systems are asked to predict, simulate, or strategically respond to humans, they may assume a level of logic and consistency that people often do not display.
For publishers, businesses, and policymakers using LLMs for forecasting or experiment design, these results strengthen the case for validating outputs against real behavioral data rather than relying on plausibility or fluent reasoning alone.

