The Wikimedia Foundation, steward of the world’s largest online encyclopedia, has delivered a firm message to artificial intelligence developers: cease unauthorized data scraping and subscribe to its paid API for ethical, sustainable use of Wikipedia’s content. This directive, outlined in a November 10, 2025, blog post, addresses surging server costs and an 8% decline in human traffic amid the rise of AI tools that siphon knowledge without contributing back.
As AI models increasingly rely on Wikipedia’s vast, volunteer-curated repository for training, the nonprofit organization is pushing for reciprocity to preserve the platform’s integrity and financial health.
The Surge in AI Scraping and Its Hidden Costs
Wikipedia has long been a goldmine for AI developers, offering millions of articles in over 300 languages, meticulously edited by volunteers for accuracy and neutrality. However, recent audits revealed that AI bots now account for up to 65% of the site's traffic, driving steep increases in server operating expenses.
These bots, often designed to mimic human behavior and evade detection, caused unusual traffic spikes in May and June 2025, as confirmed by Wikimedia's upgraded bot detection systems. The result is a concerning 8% year-over-year drop in genuine human page views, meaning fewer readers encounter the donation prompts that fund the site's $179 million annual operations.
Without these visits, fewer volunteers contribute to content enrichment, and the cycle of declining engagement threatens Wikipedia’s role as a trusted, ad-free knowledge hub. AI summaries in tools like ChatGPT are diverting users, allowing companies to extract value without supporting the human labor behind it.
Wikimedia Enterprise: A Paid Path to Responsible AI Access
In response, the Wikimedia Foundation is promoting Wikimedia Enterprise, its opt-in paid API platform tailored for large-scale users. This service provides structured data feeds, revision metadata, and attribution tools, enabling AI firms to access content efficiently without overloading public servers.
The enterprise product not only offloads traffic strain but also generates revenue to sustain Wikipedia’s nonprofit mission, emphasizing that high-quality datasets like its own deserve financial reciprocity. Companies benefit from reliable uptime, provenance tracking to credit human editors, and avoidance of outdated or incomplete scraped data that could propagate errors in AI outputs.
Wikimedia’s guidelines stress proper attribution in AI-generated responses, urging developers to link back to original sources and encourage user participation, thereby fostering trust in online information ecosystems. This approach mirrors deals struck by platforms like Reddit with OpenAI and Google, signaling a broader industry shift toward compensated data use.
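The contrast between covert scraping and sanctioned API access can be sketched in a few lines. The Python example below builds an authenticated, self-identifying request for structured article data instead of rendered HTML. The base URL, endpoint path, and token handling here are illustrative assumptions for the sake of the sketch, not Wikimedia Enterprise's actual interface; developers should consult the Enterprise documentation for the real contract.

```python
import urllib.parse
import urllib.request

# Illustrative base URL -- NOT the real Wikimedia Enterprise endpoint.
API_BASE = "https://api.example.org/v2"


def build_article_request(title: str, token: str) -> urllib.request.Request:
    """Build an authenticated request for structured article data.

    Unlike an evasive scraper, the client here authenticates with a
    paid-tier credential, identifies itself honestly with a contactable
    User-Agent, and asks for machine-readable JSON rather than HTML.
    """
    url = f"{API_BASE}/articles/{urllib.parse.quote(title)}"
    return urllib.request.Request(
        url,
        headers={
            "Authorization": f"Bearer {token}",  # paid API credential
            # Honest self-identification, per common bot etiquette:
            "User-Agent": "ExampleAIBot/1.0 (contact@example.org)",
            "Accept": "application/json",  # structured data, not rendered pages
        },
    )


# The request carries credentials and identification; nothing is hidden
# from the server operator, and nothing is actually sent in this sketch.
req = build_article_request("Artificial intelligence", "TOKEN")
```

The point of the sketch is the access pattern, not the specific library: an authenticated, attributed request lets the platform meter usage and route it away from the public servers that human readers depend on.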
Broader Implications for AI Ethics and Open Knowledge
The clash underscores a pivotal tension in the AI era: balancing open access with sustainability as tech giants build billion-dollar models on public resources. Scraping not only risks misinformation from unverified snapshots but also erodes the collaborative spirit that powers Wikipedia, potentially weakening editorial quality over time.
While the Foundation stops short of legal threats, it highlights how evasive bot practices skew analytics and inflate costs, and urges industry-wide standards for data sourcing. As AI evolves, platforms like Wikipedia are positioning themselves as essential partners, not free-for-alls, to keep knowledge accessible and attributable for all.
This move could inspire similar actions from other open-data stewards, reshaping how AI companies navigate the web’s nonprofit treasures.