OpenAI's embrace of 'erotica' is causing ripples in the porn world
It's not direct competition, but as AI technologies become more advanced, some performers are getting nervous — and studios are setting limits

Sam Altman said it plainly: ChatGPT is going to support "erotica for verified adults." No hedging, no soft-launch language. The adult entertainment industry took note — not because AI showing up in its space is new, but because OpenAI showing up changes the scale of it entirely.
The policy shift is part of OpenAI's broader push to "treat adult users like adults," an approach other tech companies, including Elon Musk's xAI, have already taken with chatbots that hold explicit conversations. OpenAI hasn't shared details about its plans for erotica, but the impact on the adult entertainment industry could be substantial, especially given the industry's long history of adapting to new technology.
The industry saw this coming well in advance, as it often does. This is the same group that adopted VHS before Hollywood, built streaming before Netflix, and made live cam technology a billion-dollar business before webcams were common. New technology doesn't show up in adult entertainment by chance. It arrives because there's immediate demand and a strong willingness to try new things.
Two years ago, Mia Malkova and a Los Angeles startup called Synthetic Turing Experience Technologies (STXT) launched an AI girlfriend chatbot built around her persona. It wasn't a novelty project. It was a business. And since then, AI-generated images, videos, and virtual companions have been spreading steadily across Reddit, Pornhub, and similar platforms, mostly without much fanfare and with very little oversight.
Pornhub at least tried to draw a line. Any photorealistic depiction of a real person requires verified identity, documented consent, and a live third-party test matching the person to their ID. An AI character has no ID. There's nothing to verify. It simply can't meet the standard.
OnlyFans went a different direction — AI content is fine, just label it as AI. 404 Media went looking to see how well that was working and found deepfakes and face swaps throughout the platform. So, not well.
Billions of dollars at stake
The numbers behind all of this are worth sitting with for a moment. Last year, global adult entertainment revenue hit $65.95 billion. By 2029, The Business Research Company expects it to cross $100 billion. OnlyFans has paid out $25 billion to creators since it launched in 2016. This industry isn't operating in the shadows — it's a legitimate economic force, and AI is coming for it the same way it's coming for every other content-driven business.
The difference is OpenAI's footprint. A $500 billion company announcing a new web browser was enough to knock $100 billion off Google's market cap the same day. That's real gravity. A deliberate push into adult content from a company that size doesn't just affect one platform — it reshapes expectations across the entire space.
A potential Pandora's box
Text erotica is the version of this problem that's relatively manageable. The legal landscape is reasonably familiar, the risks are contained, and it's been around in various forms forever. But lifelike videos — including the kind Sora 2 can now generate? That's where the ground gets soft fast. Age verification laws are on the books in several U.S. states and across the U.K., but every single one of them was written with real human performers in mind. They break down almost immediately when applied to content where the people depicted were generated by a computer. Lawmakers haven't sorted that out yet. Most haven't seriously tried.
If OpenAI builds this carefully — genuine age verification, real enforcement, a framework other platforms can actually use — there's a path where this goes reasonably well. The industry has needed workable standards for AI content for a while now. If OpenAI doesn't build it carefully, what you get is a volume of content that buries any moderation system, AI-assisted or otherwise.
For human performers, the economics are genuinely complicated. Lower production costs mean a solo creator with a laptop can now produce content that used to require a full studio setup. That's not nothing. But that same shift floods the market with cheap AI-generated content competing for the same audience, and when supply goes up that fast, prices go down. In most industries that's an abstract problem. Here it's personal.
And the harm isn't waiting for policy to catch up. Deepfakes of real people — built from scraped images, shared without consent, and nearly impossible to fully remove once they're out — have already wrecked reputations and caused genuine harm. Platforms know. The content keeps spreading anyway. The legal tools to hold anyone accountable are still being assembled, which means most people who create and distribute this stuff face little to no consequence.
How this plays out depends on decisions being made at the platform level, inside legislative bodies, and within OpenAI itself. The industry has a track record of adapting and surviving. But this particular adaptation has more ways to go badly than most, and the people most at risk from it going badly aren't the companies at the table. They're the ones who never got a seat.