For several months in early 2025, an AI secretly infiltrated a close-knit online community on Reddit. This particular section of the social platform was dedicated to civil debate, where users shared views and encouraged others to challenge them. It was in this space that, as The Atlantic reported, the AI aimed to generate arguments compelling enough to alter human perspectives. It proved successful.
This felt particularly intrusive because the AI was sometimes granted access to individuals’ online histories, allowing it to tailor messages to their unique identities. Behavioral scientists call this communication method “personalized persuasion,” and at times a tailored approach can be appealing. Who wouldn’t prefer content relevant to their specific interests over a flood of irrelevant material?
However, AI is on the verge of something far more concerning than merely adapting messages to easily identifiable traits, as the AI accounts on Reddit did. If it can master what we term “deep tailoring,” it could imperceptibly integrate into our online environments, learn our fundamental selves, and use that personal information to manipulate our beliefs and opinions in ways that might be unwelcome, and harmful.
As professors specializing in the psychology of persuasion, we recently contributed to compiling the latest research from leading global experts in a comprehensive report on personalized persuasion. Our assessment is that while communicators can benefit from tailoring messages based on basic audience information, deep tailoring extends significantly beyond such readily available data. It leverages a person’s core psychology—their foundational beliefs, identities, and needs—to customize the message.
For example, messages are more persuasive when they resonate with a person’s most important moral foundations. Something can be deemed ethical or unethical for various reasons, but individuals differ in which reasons are most important to their personal moral compasses. People with more politically liberal viewpoints, for instance, tend to prioritize fairness, making them more convinced by arguments that a policy is equitable. More politically conservative individuals, on the other hand, typically value loyalty to their community more, so they are more persuaded when a message argues that a policy upholds their group identity.
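To make that mechanism concrete, here is a minimal sketch in Python of how a message could be matched to a reader’s dominant moral foundation. The foundation scores, frames, and wording are hypothetical illustrations we invented, not materials from any study.

```python
# A minimal sketch of moral-foundations tailoring. The frames and scores
# below are hypothetical illustrations, not taken from any real study.

FRAMES = {
    "fairness": "This policy guarantees everyone is treated equally under the same rules.",
    "loyalty": "This policy protects our community and honors the people who built it.",
    "care": "This policy shields the most vulnerable among us from harm.",
}

def tailored_pitch(foundation_scores: dict[str, float]) -> str:
    """Return the frame matching the reader's highest-weighted moral foundation."""
    dominant = max(foundation_scores, key=foundation_scores.get)
    return FRAMES[dominant]

# A reader who weights loyalty most heavily gets the in-group framing:
print(tailored_pitch({"fairness": 0.2, "loyalty": 0.7, "care": 0.1}))
```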
Although it might seem like a novel concept, computer scientists have been developing AI-powered persuasion for decades. One of us recently wrote about IBM’s “Project Debater,” a system trained over several years to engage in debates and continually refined against expert human debaters. In a live 2019 event, it held its own against a human world-champion debater, though the audience ultimately judged the human more persuasive.
With the proliferation of accessible AI tools, such as the user-friendly ChatGPT mobile app, anyone can now harness AI for their own persuasive objectives. Studies indicate that generic AI-generated messages can be as compelling as those created by humans.
But can it achieve “deep tailoring”?
For AI to implement autonomous deep tailoring on a mass scale, it will need to accomplish two concurrent tasks, which it seems poised to do. First, it must learn an individual’s core psychological profile to understand which levers to pull. Already, new research demonstrates that AI can detect people’s personalities from their Facebook posts with reasonable accuracy. And it won’t stop there. Dr. Sandra Matz, a Columbia Business School professor and author of The Persona Principle, told us in an interview: “Pretty much everything that you’re trying to predict can be predicted with some degree of accuracy” based on people’s digital footprints.
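As a rough illustration of this first step, the sketch below prompts a large language model to estimate Big Five traits from a handful of posts. The model name, prompt wording, and function are our assumptions for illustration, not the method used in the research described above.

```python
# A hypothetical sketch of trait inference from a digital footprint using
# the OpenAI chat API. Model choice and prompt are assumptions, chosen
# only to illustrate the idea.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def estimate_big_five(posts: list[str]) -> str:
    """Ask the model to rate the author's Big Five traits from sample posts."""
    sample = "\n".join(posts[:20])  # even a small footprint carries signal
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model; any capable chat model would do
        messages=[{
            "role": "user",
            "content": (
                "Rate the author of these posts on each Big Five personality "
                "trait from 1 to 5, with a one-line justification per trait:\n\n"
                + sample
            ),
        }],
    )
    return response.choices[0].message.content
```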
The second step is developing messages that resonate with these essential psychological profiles. In fact, research is already finding that GPT can develop advertisements tailored to people’s personalities, values, and motivations, which are especially persuasive to the people for whom they were designed. For example, simply asking it to produce an ad “for someone who is down-to-earth and traditional” resulted in the argument that the product “won’t break the bank and will still get the job done,” which was reliably more persuasive to the people whose personalities were targeted.
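Here is a minimal sketch of that second step, echoing the study’s “down-to-earth and traditional” prompt and assuming the same kind of chat-model access as above; the product, persona wording, and model name are illustrative, not the study’s actual materials.

```python
# A hypothetical sketch of persona-targeted ad generation via prompting.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def tailored_ad(product: str, persona: str) -> str:
    """Generate a short ad for `product` aimed at someone matching `persona`."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model
        messages=[{
            "role": "user",
            "content": f"Write a one-sentence ad for {product} "
                       f"for someone who is {persona}.",
        }],
    )
    return response.choices[0].message.content

# Mirroring the example prompt described above:
print(tailored_ad("a budget vacuum cleaner", "down-to-earth and traditional"))
```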
These systems are expected to become increasingly sophisticated, applying deep tailoring to political campaigns, marketing, and public health. So, what can be done to shield individuals from the influence of personalization?
On the consumer side, it helps to know that personalized online communication is happening. When something feels specifically crafted for you, it very well might be. And even if you believe you reveal little of yourself online, you still leave subtle indicators through your clicks, visits, and searches. You may even have inadvertently granted advertisers permission to use that information when agreeing to terms of service you didn’t read carefully. Taking stock of your online behavior and using tools like a VPN can help protect you from messages tailored to your unique psychology.
But the responsibility does not rest solely on consumers. Platforms and policymakers should consider regulations that mandate personalized content be labeled and provide information about why a particular message was delivered to a specific individual. Evidence shows that people can resist influence more effectively when they are aware of the tactics being employed. There should also be clear safeguards on the types of data that can be used for personalized content, limiting the depth of tailoring possible. Although Americans are often receptive to personalized online content, they are concerned about data privacy, and the boundary between these two attitudes should be respected.
Even with such protections, the slightest communication advantage is worrying in the wrong hands, especially when deployed at a mass scale. It’s one thing for a marketplace to recommend products purchased by people with a similar shopping history, but quite another to encounter a disguised computer that has deconstructed your soul without your knowledge and woven it into disinformation. Any communication tool can be used for good or for evil, but now is the time to start seriously discussing policy on the ethical use of AI in communication, before these tools become too sophisticated to rein in.