OpenAI is reporting a sharp rise in enterprise use of its AI tools, backed by new data on millions of paying business users and a dedicated State of Enterprise AI 2025 report, even as Google’s Gemini push forces OpenAI to prove it can stay ahead in the workplace AI race.
The numbers suggest that OpenAI is deeply embedded in corporate workflows worldwide, but faces intensifying pressure from Google’s rapidly expanding Gemini Enterprise platform and broader AI ecosystem.
OpenAI report overview
OpenAI has released its first State of Enterprise AI 2025 report, drawing on internal usage data from more than 1 million business customers and a survey of around 9,000 AI users at 100 organizations worldwide. The report concludes that enterprise AI adoption is accelerating across industries and geographies, with workers reporting measurable value from tools such as ChatGPT and custom GPT-based assistants in their daily tasks. OpenAI’s analysis highlights a pronounced divide between so‑called frontier workers who use AI deeply and the median employee, estimating roughly a sixfold productivity gap between heavy and typical users.
The same research indicates that, despite the hype, AI is currently saving most employees less than an hour per workday on average, suggesting that real gains are meaningful but still incremental rather than transformational for many teams. OpenAI frames this as evidence that enterprises are still early in the adoption curve, with the biggest efficiency and revenue impacts expected as organizations move from experimentation to more systematic, workflow‑level deployment of AI agents and integrated tools.
Adoption metrics and growth
Alongside the report, OpenAI and external trackers say the company has surpassed 3 million paying business users for ChatGPT across its Enterprise, Team, and Edu offerings by early June 2025, up sharply from earlier in the year. One analytics site estimated 1.5 million enterprise customers in March 2025, underlining how quickly organizations have moved from trials to paid deployments in just a few months. Another analysis in November 2025 reported that OpenAI had more than 1 million business customers overall, and that ChatGPT for Work alone had grown to about 7 million seats, a roughly 40 percent increase in only two months.
OpenAI’s own usage data shows that organizations using its API consumed roughly 320 times more reasoning tokens than they did a year earlier, a sign that companies are applying models to more complex problem‑solving rather than just simple queries. The company also reports that use of custom GPTs inside enterprises has jumped about 19‑fold in a year, with these tailored assistants now accounting for roughly 20 percent of all enterprise ChatGPT messages.
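For context on that 320× figure: "reasoning tokens" are the hidden thinking tokens that OpenAI's reasoning models generate (and bill) before producing a visible answer, and the API reports them in each response's usage object. The snippet below is a minimal sketch of reading that number with the openai Python SDK; the model name "o3-mini" and the prompt are illustrative assumptions, not details taken from OpenAI's report.

```python
# Minimal sketch: inspect reasoning-token usage on a single API call.
# Assumes the openai Python SDK and OPENAI_API_KEY set in the environment.
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="o3-mini",  # illustrative reasoning model; substitute your org's choice
    messages=[
        {"role": "user", "content": "Outline the key risks in this supplier contract."}
    ],
)

usage = response.usage
print("prompt tokens:    ", usage.prompt_tokens)
print("completion tokens:", usage.completion_tokens)

# Reasoning models report their hidden reasoning tokens separately; this is the
# quantity behind the report's 320x year-on-year growth figure.
details = usage.completion_tokens_details
if details and details.reasoning_tokens is not None:
    print("reasoning tokens: ", details.reasoning_tokens)
```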
Key OpenAI enterprise metrics (2024–2025)
| Metric | Latest figure (2025) | Earlier baseline | Notes |
| --- | --- | --- | --- |
| Paying business users (ChatGPT Enterprise/Team/Edu) | About 3 million paying business users as of June 2025. | Around 1.5 million enterprise customers estimated in March 2025. | Reflects rapid growth in paid workplace adoption. |
| Business customers in OpenAI’s 2025 report dataset | More than 1 million business customers contributing usage data. | Not publicly disclosed for prior years. | Used to analyze trends across sectors and geographies. |
| Reasoning token usage via API | Roughly 320× increase year‑on‑year. | Baseline set at 1× usage a year earlier. | Indicates shift toward deeper, more complex AI workloads. |
| Share of enterprise messages from custom GPTs | About 20% of enterprise ChatGPT messages now involve custom GPTs. | Custom GPT usage was roughly 19× lower a year earlier. | Signals growing use of internal, domain‑specific assistants. |
| Seats for ChatGPT for Work | Roughly 7 million seats reported in late 2025. | Roughly 5 million seats about two months earlier (implied by the reported 40% jump). | Shows expansion from pilots to broad seat‑based rollouts. |
How enterprises use OpenAI tools
According to OpenAI and coverage of its enterprise products, paying business customers span highly regulated sectors such as finance and healthcare, as well as retail, technology, and transportation, with firms like Lowe’s, Morgan Stanley, and Uber cited as examples. Common use cases include drafting and summarizing documents, generating code, analyzing internal data, assisting customer‑support staff, and building internal knowledge assistants that can search across corporate repositories.
New features for ChatGPT Enterprise and Team, such as connectors to services like Google Drive, OneDrive, Box, Dropbox, and SharePoint, allow employees to query and synthesize information from multiple systems without leaving the chat interface. OpenAI’s report and independent commentary also point to case studies like Spanish banking group BBVA, which reportedly operates more than 4,000 internal custom GPTs to handle specialized workflows, from compliance checks to customer‑facing support templates. In surveys summarized by OpenAI and third‑party analysts, many business users report sizable perceived productivity gains, sometimes on the order of 30 percent improvements in selected tasks, though results vary widely depending on how deeply teams embed AI in their processes.
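To make the pattern concrete, the sketch below shows one common shape of such an internal knowledge assistant, assuming the openai Python SDK: a retrieval step pulls snippets from corporate repositories, and the model answers strictly from that context. The search_company_docs helper is a hypothetical stand‑in for an organization's own connector or search layer, and the model name is illustrative; neither is taken from OpenAI's report.

```python
# Rough sketch of an internal knowledge assistant: retrieve snippets from
# company repositories, then answer only from that retrieved context.
# Assumes the openai Python SDK; search_company_docs() is a hypothetical
# placeholder for a connector or search service, not an OpenAI API.
from openai import OpenAI

client = OpenAI()


def search_company_docs(query: str) -> list[str]:
    """Hypothetical helper: return relevant text snippets from internal systems."""
    # A real deployment would call a connector (Drive, SharePoint, Box, ...)
    # or an internal search/vector index here.
    return ["Travel policy v3: flights over $500 require manager pre-approval."]


def answer_with_context(question: str) -> str:
    snippets = search_company_docs(question)
    context = "\n\n".join(snippets)
    response = client.chat.completions.create(
        model="gpt-4o",  # illustrative model choice
        messages=[
            {
                "role": "system",
                "content": (
                    "Answer using only the provided company documents. "
                    "If the answer is not in them, say you do not know."
                ),
            },
            {"role": "user", "content": f"Documents:\n{context}\n\nQuestion: {question}"},
        ],
    )
    return response.choices[0].message.content


if __name__ == "__main__":
    print(answer_with_context("Do I need approval for an $800 flight?"))
```

In practice, the retrieval step is where connectors to Drive, SharePoint, and similar systems plug in, and it is also where access controls and audit logging typically sit, which is why governance and data‑privacy questions tend to center on that layer rather than on the model call itself.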
Google’s Gemini pressure
OpenAI’s enterprise wins are unfolding against a backdrop of escalating competition from Google, which has launched its Gemini Enterprise suite to target the same workplace AI budgets. Google’s offering, priced starting in the low $20s per user per month for business plans and about $30 per user per month for enterprise tiers, integrates Gemini models across Google Workspace and Google Cloud, promising unified access to AI agents for document analysis, data work, and task automation. Industry newsletters describe Gemini Enterprise as a "front door" for AI in the workplace, emphasizing agent orchestration and deep integration with existing Google workflows.
Coverage of the AI race notes that Google’s broader Gemini roadmap, including the launch of its Gemini 3 model, is explicitly aimed at challenging OpenAI’s technical lead and mindshare in generative AI. One TechCrunch report said OpenAI’s decision to publicize new enterprise usage data came just days after an internal "code red" meeting on the Google threat, underscoring how seriously the company views Gemini’s combination of model quality and distribution through products like Search, Gmail, Android, and Chrome. Analysts argue that while OpenAI currently benefits from brand recognition and early enterprise relationships, Google’s installed base and aggressive product bundling could erode that advantage if OpenAI fails to keep innovating on reliability, governance, and integration.
Selected 2025 Google moves in enterprise AI
| Date (2025) | Google move | Target | Implications for OpenAI |
| --- | --- | --- | --- |
| October 9 | Launch of Gemini Enterprise, a full‑scale AI platform for businesses with per‑user pricing and deep Workspace integration. | Microsoft 365 Copilot and OpenAI’s ChatGPT Enterprise. | Directly competes for seat‑based workplace AI contracts and embedded agents. |
| October | Positioning Gemini Enterprise as a central hub for internal AI agents and automation across Google Cloud. | Enterprise automation and developer platforms. | Raises expectations that AI suites include robust agent orchestration, not just chatbots. |
| December | Rollout of the Gemini 3 model framed as raising the stakes in the global AI race. | OpenAI’s flagship models and ecosystem. | Increases pressure on OpenAI to demonstrate superior performance, safety, and enterprise‑specific capabilities. |
To reinforce its position, OpenAI is not only expanding its enterprise product line but also deepening strategic partnerships, such as taking an ownership stake in Thrive Holdings in December 2025 with the stated goal of accelerating enterprise AI adoption. Market analyses based on OpenAI’s 2025 report forecast rapid growth in enterprise AI spending over the next several years, but also warn that issues like data privacy, regulatory compliance, and the widening gap between advanced and average users will shape which vendors ultimately dominate. For businesses, the near‑term priority is likely to be moving from scattered pilots to governed, organization‑wide deployments that show clear return on investment while maintaining control over data and model behavior.