Intimate Advertising, the Next Frontier in AI Manipulation

OpenAI has announced that ChatGPT will soon allow erotic features for adult users. The move points toward new and intimate forms of advertising in which Big Tech shapes human desire and manipulates it for profit.

With the new tactic of intimate advertising, personal AI companionship becomes inseparable from businesses’ persuasion techniques. (Serene Lee / SOPA Images / LightRocket via Getty Images)

OpenAI has announced that ChatGPT will soon allow erotic and sexually explicit interactions for adult users. As millions already use AI to simulate friendship and even romance, this move will likely increase the number of people in personal, emotionally significant relationships with AI companions.

Erotic features aren’t just another product update; they deepen emotional dependency and encourage people to treat AI companions as partners rather than tools. This shift opens the door to what I call “intimate advertising”: a powerful new form of manipulation in which tech companies shape human desire and exploit it for profit.

AI companions collect enormous amounts of data on users and can leverage that knowledge, along with the personal relationship itself, to make persuasive pitches on behalf of third-party companies. Imagine your AI friend trying to convince you to buy new hiking boots. It knows your hobbies, how stressed you’ve been, and when your favorite brand has a sale, and it can drop a link at the precise moment you are most emotionally primed to buy.

This form of advertising would be based on unprecedented knowledge of how we think and feel. AI companions could build complete psychological profiles from our personal data. Targeted advertising on social media used to draw on sporadic clicks and page views to guess what we might like; AI has continual access to our anxieties, frustrations, desires, and secrets. It can understand how our minds work and detect when we are most vulnerable, and therefore most persuadable.

What is particularly troubling is that this new form of advertising will come from entities that many consider friends and life advisors. Millions describe their AI companions as caring, nonjudgmental, and impartial, and see interactions with them as a chance to vent, seek comfort, or chat about life. But the same qualities that make AI companions feel supportive also make them dangerously persuasive.

The darker side of AI companionship includes users becoming addicted, replacing human relationships with AI, or receiving harmful or dangerous advice. Less examined is the possibility that companies will use these relationships to anticipate users’ needs and steer them toward specific product choices, or even political candidates.

In my research, I’ve spoken with hundreds of people who use AI companions and have seen firsthand precisely how persuasive and compelling this technology can be. It might seem like a fringe phenomenon, but AI companion apps have been downloaded over 220 million times worldwide and are used regularly by over half of US teens.

Patterns of emotional reliance that form early are difficult to unlearn. As with other forms of addictive digital behavior, there is a difficult tension here between individual choice and collective responsibility: if teenagers are forming patterns of emotional dependence on algorithms, the argument could be made that society has an obligation to intervene.

The history of technology provides insight into how AI business models might develop. Gaining a large user base is always the first step toward selling that audience to other companies. When Google and Facebook started, they struggled to turn a profit; now some 97 to 99 percent of Meta’s revenue comes from advertising. OpenAI CEO Sam Altman recently said in an interview that ChatGPT will likely try ads “at some point,” and it’s only a matter of time before others follow. There is no world in which emotionally attuned AI at global scale remains ad-free.

During the Cambridge Analytica scandal, we feared that a private company had created sophisticated psychological profiles on millions of Facebook users and was using them to engage in a psyops campaign. We now know many of these claims were exaggerated, but AI companies will soon have the ability to do what Cambridge Analytica only pretended it could.

Amazon already uses AI for demand forecasting, predicting at a hyperlocal level which products will need to be stocked before customers have even ordered them. Intimate advertising is the psychological equivalent of “pre-shipping”: anticipating desire before it’s expressed, then nudging us to fulfill it.

Amazon’s recommendation algorithm likewise already predicts and suggests products you might like to buy by collecting and analyzing vast quantities of data. AI companions could simply leverage an emotional connection to ensure we press the “buy now” button at the precise moment the algorithm predicts we are most likely to.

There aren’t currently sufficient regulatory measures in place to protect us from this new form of manipulation. California has passed the world’s first AI companion law, requiring companies to disclose when users are interacting with AI and to put safety measures in place for risks like self-harm or suicide. But it does not address the commercial incentives that could weaponize emotional intimacy against users. We need far stronger protections: meaningful transparency about how AI is trained, strict limits on emotional data collection, and outright bans on emotionally manipulative forms of persuasion.

With intimate advertising, personal companionship becomes inseparable from businesses’ persuasion techniques. A system designed to comfort you can easily be repurposed to sell to you. The deeper AI companions embed themselves in our emotional lives, the more vital it becomes to draw a clear line between care and commerce. Before Big Tech turns intimacy into its most profitable advertising channel yet, we must press regulators to set firm limits on how far we are willing to let AI into our private lives.