Over the past year, artificial intelligence has been touted as revolutionizing productivity, helping us write emails, generate code, and summarize documents. But what if the reality of how people actually use AI is completely different from what we have been led to believe?
A data-driven study by OpenRouter pulled back the curtain on real-world AI use by analyzing over 100 trillion tokens (essentially billions of conversations and interactions with large language models like ChatGPT, Claude, and dozens of others). The findings challenge many assumptions about the AI revolution.
OpenRouter is a multi-model AI inference platform that routes requests across 300+ models from 60+ providers, from OpenAI and Anthropic to open source alternatives like DeepSeek and Meta’s LLaMA.
With more than 50% of usage occurring outside the US and serving millions of developers worldwide, the platform provides a unique cross-section of how AI is actually deployed across different geographies, use cases, and user types.
Importantly, the study analyzed metadata from billions of interactions without accessing the actual conversation text to uncover patterns of behavior while protecting user privacy.

A role-playing revolution that no one expected
Perhaps the most surprising finding is that more than half of open source AI model usage is not for productivity purposes at all, but for role play and creative storytelling.
Yes, that’s right. While tech executives tout the potential of AI to transform business, users are spending the majority of their time engaging with character-driven conversations, interactive fiction, and game scenarios.
More than 50% of interactions in open source models fall into this category, dwarfing even programming assistance.

“This contradicts the assumption that LLMs are primarily used for writing code, emails, and summaries,” the report states. “Many users actually use these models for companionship and exploration.”
This isn’t just idle chatter. The data shows that users treat AI models as structured role-playing engines, with 60% of role-play tokens falling into specific game scenarios or creative writing contexts. It is a large-scale, largely invisible use case that is reshaping how AI companies think about their products.
The rapid rise of programming
While role-playing dominates open source usage, programming has become the fastest growing category across all AI models. At the beginning of 2025, coding-related queries accounted for only 11% of total AI usage. By the end of the year, that figure had jumped to more than 50%.
This growth reflects the deepening integration of AI into software development. The average prompt length for programming tasks has quadrupled from about 1,500 tokens to over 6,000 tokens, with some code-related requests exceeding 20,000 tokens. This is roughly equivalent to inputting the entire codebase into an AI model for analysis.
For context, programming queries now generate the longest and most complex interactions in the entire AI ecosystem. Developers are no longer just looking for simple code snippets. They conduct advanced debugging sessions, architectural reviews, and multi-step problem solving.
Anthropic’s Claude model dominates the space, accounting for more than 60% of programming-related usage for most of 2025, but competition is increasing as Google, OpenAI, and open source alternatives emerge.

China’s AI surge
Another important fact has become clear. China’s AI models now account for about 30% of global usage, nearly triple the 13% share at the beginning of 2025.
Models from DeepSeek, Qwen (Alibaba), and Moonshot AI are quickly gaining traction, with DeepSeek alone processing 14.37 trillion tokens during the study period. This represents a fundamental shift in the global AI landscape, with Western companies no longer holding an unshakeable advantage.
Simplified Chinese is currently the second most common language for AI interactions globally, accounting for 5% of total usage, behind English at 83%. Asia’s overall share of AI spending more than doubled from 13% to 31%, with Singapore emerging as the second-largest user after the US.

The rise of “agent” AI
The study introduces agent inference, a concept the researchers see as defining the next stage of AI: models no longer just answer single questions, but perform multi-step tasks, call external tools, and reason across extended conversations.
The percentage of AI interactions classified as “inference-optimized” jumped from nearly zero at the beginning of 2025 to more than 50% by the end of 2025. This reflects a fundamental shift from AI as a text generator to AI as an autonomous agent capable of planning and execution.
“The median LLM request is no longer a simple question or personalized instruction,” the researchers explain. “Instead, it’s part of a structured agent-like loop that calls external tools, infers state, and persists over longer contexts.”
Think of it this way. Instead of asking AI to “write a function,” you’re asking it to “debug this codebase, identify performance bottlenecks, and implement a solution,” and it can actually do that.
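The agent-style loop the researchers describe can be sketched roughly as follows. This is a minimal illustration, not any provider’s actual API: `fake_model` stands in for a real LLM call, and `profile_code` is a hypothetical tool. The shape of the loop, though, is the one the report points to: the model is queried repeatedly, may request a tool call, and the tool’s result is persisted back into the conversation until the model produces a final answer.

```python
# Minimal sketch of an agent-style inference loop.
# fake_model() and profile_code are stand-ins: a real system would call
# an LLM API and wire up genuine tools at these points.

def fake_model(messages):
    # Stub LLM: if no tool result is present yet, request the tool;
    # otherwise, produce a final answer from the tool's output.
    if not any(m["role"] == "tool" for m in messages):
        return {"tool": "profile_code", "args": {"path": "app.py"}}
    return {"answer": "Bottleneck found in app.py; patch applied."}

def run_tool(name, args):
    # Stub registry of external tools the agent can call.
    tools = {"profile_code": lambda a: f"hotspot in {a['path']}: slow loop"}
    return tools[name](args)

def agent_loop(task, max_steps=5):
    messages = [{"role": "user", "content": task}]
    for _ in range(max_steps):               # bounded multi-step loop
        reply = fake_model(messages)
        if "answer" in reply:                # model is done
            return reply["answer"]
        result = run_tool(reply["tool"], reply["args"])   # call external tool
        messages.append({"role": "tool", "content": result})  # persist state
    return "step limit reached"

print(agent_loop("Debug this codebase and fix the performance bottleneck"))
```

The key difference from single-shot prompting is that state (the tool result) is carried forward across iterations, which is what makes the extended, multi-thousand-token contexts in the study possible.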
“Glass slipper effect”
One of the most interesting insights from the study relates to user retention. The researchers identified a phenomenon they call the Cinderella “glass slipper” effect, where AI models that solve an important problem on the first try create lasting user loyalty.
If a newly released model perfectly matches a previously unmet need (a figurative “glass slipper”), early users will stick with it much longer than later adopters. For example, Google’s June 2025 cohort of Gemini 2.5 Pro retained about 40% of its users in its fifth month, which is significantly higher than subsequent cohorts.
This challenges conventional wisdom about AI competition. Being first matters, but being the first to solve a high-value problem is what creates a lasting competitive advantage. Users incorporate these models into their workflows, making switching expensive both technically and operationally.
Cost isn’t an issue (as much as you think)
Perhaps counterintuitively, the study reveals that AI usage is relatively price inelastic: a 10% decrease in price increases usage by only about 0.5-0.7%.
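In economic terms, those figures imply a price elasticity of demand of roughly -0.05 to -0.07, far below the -1.0 threshold that would mark elastic demand. The arithmetic, using only the numbers quoted above:

```python
# Price elasticity of demand = (% change in usage) / (% change in price).
# Study figures: a 10% price cut lifts usage by only ~0.5-0.7%.
price_change = -0.10                     # 10% price decrease
usage_change_low, usage_change_high = 0.005, 0.007

elasticity_low = usage_change_low / price_change    # -0.05
elasticity_high = usage_change_high / price_change  # -0.07

print(round(elasticity_low, 3), round(elasticity_high, 3))  # -0.05 -0.07
```

An elasticity this close to zero means revenue falls almost one-for-one with price cuts, which is consistent with the report's finding that providers are not competing in a race to the bottom.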
Premium models from Anthropic and OpenAI cost between $2 and $35 per million tokens while maintaining high usage rates, while budget options like DeepSeek and Google’s Gemini Flash achieve similar scale for less than $0.40 per million tokens. Both coexist successfully.
“The LLM market does not yet appear to behave like a commodity,” the report concludes. “Users balance cost with inference quality, reliability, and breadth of functionality.”
This means AI is not in a race to the bottom in pricing. Quality, reliability, and features still come at a premium, at least for now.
What does this mean going forward?
OpenRouter’s research paints a much more nuanced picture of real-world AI usage than the industry narrative suggests. Yes, AI is transforming programming and professional work. But it is also creating a whole new category of human-computer interaction through role-play and creative applications.
The market is diversifying geographically, with China emerging as a major power. This technology has evolved from simple text generation to complex multi-step inference. And user loyalty depends more on being the first to truly solve a problem than on being first to market.
As the report points out, “The ways in which people use LLMs do not always match expectations and vary widely from country to country, state to state, and use case to use case.”
As AI becomes more integrated into daily life, it will be important to understand these real-world patterns, not just benchmark scores and marketing claims. The gap between how we think AI is being used and how it is actually used is wider than most people realize. This study helps fill that gap.
See also: Deep Cogito v2: Open source AI to hone your inference skills


