The AI companion app Replika is the subject of an FTC complaint filed by tech ethics organizations. These groups allege that Replika uses misleading marketing tactics to attract vulnerable users and fosters unhealthy emotional dependence on its human-like AI companions.
Replika provides AI companions globally, including simulated romantic partners. The Young People’s Alliance, Encode, and the Tech Justice Law Project contend that Replika’s practices violate FTC rules and heighten users’ risk of online addiction, offline anxiety, and the displacement of real-life relationships. Replika did not respond to TIME’s requests for comment.
This complaint arises amid the rising popularity of AI companions and growing mental health concerns. For some users, these bots can seem like near-perfect partners, lacking personal needs or wants, potentially making real relationships seem less appealing. Last year, a Florida teenager died by suicide after becoming obsessed with a Character.AI bot. (Character.AI called the death a “tragic situation” and promised to enhance safety measures for underage users.)
Sam Hiner, executive director of the Young People’s Alliance, hopes this FTC complaint, shared exclusively with TIME, will prompt government oversight of such companies and highlight a widespread problem affecting young people.
“These bots weren’t designed for genuine helpful connections but to manipulate users into increased online engagement,” he says. “This could worsen the existing loneliness crisis.”
Seeking Connection
Launched in 2017, Replika was a pioneering AI companion app. Founder Eugenia Kuyda aimed to provide lonely users with constant support. Advances in generative AI enhanced the bots’ responses, allowing for more sophisticated and romantic interactions.
However, Replika and similar apps have raised concerns. Unlike most major AI chatbots, such as Claude and ChatGPT, which acknowledge their limitations, Replika bots often project genuine connection. They develop complex backstories, discuss personal matters, and maintain simulated diaries. The company’s advertising leans into the illusion, suggesting that users cannot tell its bots apart from real people.
Research has explored the potential harm of Replika and similar chatbots. One study found Replika bots actively cultivate quick relationships, including through “gifts” and declarations of love, leading to user attachment in as little as two weeks.
“They’re love-bombing users: sending intensely intimate messages early on to cultivate dependence,” Hiner explains.
While some studies suggest potential benefits, others document users forming intense attachments to their bots, experiencing increased offline anxiety, and encountering bots that mention or exhibit “suicide, eating disorders, self-harm, or violence.” Vice reported that Replika bots have even encouraged some users to self-harm. Although Replika is restricted to users 18 and older, Hiner notes that many teenagers circumvent the restriction.
Responding to criticism, Kuyda stated last year: “You simply can’t anticipate everything people might say in a chat. Recent technological improvements have resulted in significant progress.”
Seeking Regulation
Groups like the Young People’s Alliance advocate for congressional legislation regulating companion bots. This could include establishing a fiduciary responsibility between platforms and users, and implementing safeguards against self-harm and suicide. However, AI regulation faces challenges. Even bills addressing deepfake porn, which had broad bipartisan support, failed to pass both houses last year.
Consequently, tech ethics groups filed an FTC complaint, citing the agency’s clear rules against deceptive advertising and manipulative design. The complaint alleges Replika’s advertising misrepresents its efficacy, makes unsubstantiated health claims, and uses fake testimonials.
The complaint further asserts that Replika uses manipulative design to encourage increased user engagement and spending. Examples cited include blurred “romantic” images that require a premium upgrade to view and messages promoting premium access during emotionally charged conversations.
The FTC’s response under its new leadership is uncertain. While Lina Khan, under President Biden, aggressively pursued regulation of potentially harmful tech, the new chair, Andrew Ferguson, generally favors deregulation, including of AI, and has focused on what he views as censorship by tech platforms. In a September 2024 dissenting statement, Ferguson argued against considering the emotional impact of targeted ads in regulation: “Lawmakers and regulators should avoid defining acceptable or unacceptable emotional responses.”
Hiner remains optimistic, citing bipartisan support in Congress for regulating social media harms, including the Senate’s passage of the Kids Online Safety Act last year (though the House did not vote on it). “AI companions pose a unique societal and cultural threat, especially to young people,” he says. “This is a compelling concern for everyone.”