American companies can't stop buying Chinese AI
The U.S. keeps wrapping AI in stars and stripes. But developers and startups keep buying into Chinese models that are cheap, open — and everywhere

American AI has started speaking in the booming baritone of national purpose. But it’s doing a lot of flag-waving for an industry that keeps letting Chinese models into the building.
The U.S.’ patriotic sales pitch is everywhere now — “global AI dominance,” “national mission,” “strategic race,” “democratic” values, and all the usual chest-thumping language that the AI industry has started borrowing from Washington. But behind the red, white, and blue branding, developers and platforms keep making a different calculation: Chinese models are good, cheap, open, and increasingly hard to avoid.
While the public face of AI in the U.S. still looks comfortably domestic, more Chinese technology keeps slipping into the guts of the machine — the coding tools, the cloud marketplaces, and the parts of the stack most people never see. The stars-and-stripes rhetoric is getting harder to square. Patriotic branding is easy. Patriotic procurement is where things can get ugly.
Washington has already been told that this growing migration isn’t some niche side plot for engineers with tabs open on Hugging Face. In mid-March, the U.S.-China Economic and Security Review Commission warned that Chinese open-weight models have become hard to wave away. The report said that China has gone “all in” on open-source AI, that widespread adoption is feeding faster iteration, and that the result is creating “alternative pathways to AI leadership.” The open ecosystem, the report said, “enables China to innovate close to the frontier despite significant compute constraints” — and now “Chinese labs have narrowed performance gaps with top Western large language models.”
That’s a lot of fancy bureaucrat language for a very simple problem: The U.S. keeps grandstanding about a national mission while China keeps shipping a product that travels well.
China’s open approach has essentially created a feedback loop where adoption drives iteration and then more adoption — a “self-reinforcing competitive advantage,” as the USCC put it. Some estimates now put Chinese open-source models inside around 80% of U.S. AI startups. Stanford HAI’s DigiChina brief says that Chinese-made open-weight models are now “unavoidable” in the competitive AI landscape and are increasingly being adopted in the U.S. Washington is selling sovereignty. The market is buying whatever works.
Chinese models are already getting into the stack
The easiest way to miss what’s happening is to stare at the consumer apps and congratulate yourself on spotting the obvious. On that surface, the U.S. still gets to feel nice and sovereign. SSRS said this month that 52% of Americans use AI platforms weekly, with ChatGPT at 36%, Gemini at 26%, and Copilot at 14%. Similarweb’s U.S. rankings still lean heavily American, too, putting ChatGPT, Gemini, Claude, Grok, and OpenAI in the top five. The storefront looks domestic enough to keep the branding neat and the nerves calm.
The more consequential shift is happening backstage, where engineers pick base models, companies choose tooling, and procurement decisions turn into architecture before anybody bothers to call them strategy. According to Hugging Face, China has surpassed the U.S. in both monthly and overall downloads on its platform, with Chinese models accounting for 41% of downloads over the past year. Stanford HAI’s DigiChina brief says that between August 2024 and August 2025, Chinese open-model developers made up 17.1% of all Hugging Face downloads, slightly ahead of U.S. developers at 15.8%. Last week, seven of the 10 most popular models on OpenRouter were Chinese.
OpenRouter’s 100 trillion-token study found that Chinese open-source models rose from a negligible base in late 2024 to nearly 30% of total usage in some weeks, averaging about 13% of weekly token volume over the year it studied. DeepSeek was the single largest open-source contributor by volume on the platform, with Qwen ranked second. The work itself is changing, too. OpenRouter says Chinese open models are no longer mainly for roleplay and hobbyist messing around; programming and technology together now make up a combined 39% of Chinese open-source use on the platform.
Cursor, one of the hottest American AI companies around, admitted this month that its Composer 2 coding model was built, under a licensed partnership, on top of Moonshot AI’s Kimi K2.5, with Cursor’s own training layered on top. Moonshot, one of China’s most promising AI startups, is based in Beijing — and valued at around $18 billion, more than quadrupling its value in three months. “Seeing our model integrated effectively through Cursor’s continued pretraining & high-compute RL training is the open model ecosystem we love to support,” Moonshot wrote on X. Cursor executives said that Kimi performed best in the company’s evaluations, and Business Insider reported that the resulting product came in at about one-tenth the cost of Anthropic’s Opus 4.6.
Companies ranging from Airbnb $ABNB to Siemens have openly used Chinese models. So AI startup darlings and established companies alike are increasingly passing over expensive proprietary U.S. models in favor of lower-cost Chinese ones that have closed much of the performance gap. The market has started treating model nationality as secondary — often irrelevant — next to whether the thing works well, ships fast, and costs less.
“Open” has become a geopolitical business model
The White House itself has said that open-source and open-weight systems matter because startups need flexibility and because companies with sensitive data can’t always send that data to a closed-model vendor. That’s true. That’s also exactly why Chinese open models have become such a headache for the American AI nationalism story. The U.S. government’s recognition arrives after years where American AI prestige became bound up with closed APIs, elite model subscriptions, and the idea that the best systems should be tightly controlled by a handful of companies. That approach may still win at the very frontier, but it’s less obviously suited to winning the layer underneath, where developers pick and choose what they can actually afford to use.
Beijing has increasingly framed open-weight AI as part of a broader diplomatic and commercial pitch — a model of shared technological development contrasted against U.S. export controls, supply-chain restrictions, and closed systems. Open models as a soft-power product. They tell countries that Chinese AI is modifiable and not locked behind an American API tollbooth. Stanford researchers have warned that broad adoption of Chinese open-weight models could reshape global “reliance patterns,” creating new technological dependencies even when the model weights themselves are downloadable.
Alibaba’s Qwen family has built the largest model ecosystem on Hugging Face, with more than 113,000 derivative models, or more than 200,000 if you count everything tagged Qwen — surpassing Meta $META’s Llama in cumulative downloads on the platform. RAND found in January that traffic to China-based LLMs had jumped 460% in two months and that Chinese models’ global market share had risen from 3% to 13% over that stretch. RAND also said Chinese models — such as DeepSeek, Qwen, and Zhipu’s ChatGLM — can run at about one-sixth to one-fourth the cost of U.S. rivals. That’s a nasty combination for any American company trying to sell patriotic virtue at premium pricing.
The old story had America building the tools and the rest of the world renting access. The newer one has Chinese labs becoming the substrate for tools that may still wear American branding on the surface.
More than a dozen Chinese organizations are openly releasing powerful models. Hugging Face says the number of repositories from popular Chinese organizations exploded in 2025, with ByteDance and Tencent sharply increasing releases and firms that once leaned closed moving toward open releases. China has been shipping a coherent theory of spread. The U.S. has been shipping a mixed economy: premium closed models, open-weight branding, genuinely open research, lightweight portable families, agent-focused stacks, and internal arguments about what “open” even means — see Meta’s open-weight-but-restricted Llama, Ai2’s genuinely open OLMo line, Google $GOOGL’s lighter Gemma family, and NVIDIA’s agentic stack. That makes the ecosystem stronger in spots but less unified as a doctrine.
Even China’s own market has started treating openness less as an ideology than as a go-to-market plan. In February, Baidu — long one of the loudest defenders of closed models — said it would make its next-generation Ernie model open-source, a major strategic reversal. DeepSeek had upended the sector, and Baidu’s CEO said opening things up would help the technology spread faster. “Open” in this race increasingly means scalable distribution, faster adoption, and broader developer lock-in.
U.S. cloud giants are normalizing Chinese models
It would be one thing if Chinese open models were still living out on the internet as vaguely exotic artifacts for hobbyists. In that case, the patriotism problem would be manageable. But they aren’t. The hyperscalers have brought them inside.
Amazon $AMZN Bedrock says it supports more than 100 foundation models, including DeepSeek, Moonshot AI, MiniMax, and OpenAI. AWS has also rolled out specific DeepSeek and Qwen offerings, and its marketing around DeepSeek emphasizes enterprise-grade security, unified infrastructure, and customer data that “is not shared with model providers.” Microsoft $MSFT is doing the same thing in a tidier corporate dialect. Azure Foundry’s catalog includes DeepSeek and Moonshot’s Kimi among the models sold directly by Azure, and Microsoft’s own Foundry updates have touted Kimi’s reasoning chops as part of the platform’s expanding lineup. Foreign model in, respectable enterprise product out. The geopolitical edge gets sanded down by procurement convenience, unified billing, and the general corporate desire to pretend every uncomfortable choice is merely a feature.
A Chinese open model inside an American cloud, billed on an American invoice, wrapped in American enterprise controls, stops looking like a geopolitical event and starts looking like procurement.
Google Cloud’s Vertex AI has gone down the same road. Its DeepSeek docs say the models are available as fully managed, serverless APIs, and Google explicitly recommends pairing DeepSeek R1 with Model Armor for production safety. Elsewhere in Vertex AI, Google lists open models with global endpoint support that include DeepSeek, Kimi, MiniMax, Qwen, and GLM right alongside OpenAI’s gpt-oss models. Here the product design itself does the flattening: same console, same endpoint logic, same managed-service vocabulary, same enterprise reassurances.
Nvidia $NVDA lists DeepSeek in its model catalog. Databricks has joined the party, too. This month, it put Qwen3-Embedding-0.6B into public preview for retrieval and agent workloads, pitching it as a state-of-the-art multilingual embedding model optimized for vector search and AI agents. That’s how dependencies settle in. One team adopts it for search. Another team plugs it into agents. A few quarters later, the strategic problem has release notes and a renewal cycle.
There are two different China problems hiding in the AI story. One is the Chinese-hosted app problem. DeepSeek’s privacy policy says it directly collects, processes, and stores personal data in the People’s Republic of China. The other is the Chinese-origin model problem — weights and model families that get pulled into U.S. clouds, U.S. products, and U.S. workflows. A “national” project starts looking a lot less national when its most useful parts keep showing up from somewhere else. American AI wants the pageantry of sovereignty and the convenience of a global shopping aisle. It wants Washington to treat it like a national champion and developers to treat every foreign model like a harmless bargain. But markets are funny that way. They keep buying what works.
Running an open model locally or on trusted infrastructure can mitigate some data and governance risks. That’s why the hyperscalers matter here: They turn a politically fraught dependency into something that feels manageable and corporate. The result is that many enterprise buyers get Chinese model performance without the unnerving sense that they are leaving the American stack.
That leaves the U.S. in a strange position. It still has enormous advantages in chips, cloud infrastructure, capital markets, and top-end frontier labs. But the country’s political language around AI keeps assuming that technical leadership will naturally translate into downstream loyalty. It won’t. Not in open models — and not in software generally. Developers are promiscuous. Procurement teams are unsentimental. Cloud platforms are agnostic right up until the invoice clears. If Washington wants “American values” to matter in AI purchasing, it’ll need more than speeches about bias and dominance. It’ll need American models that are open enough, cheap enough, and ubiquitous enough that choosing them doesn’t feel like a patriotic sacrifice. Right now, the market seems increasingly unwilling to pay that premium.