[Image: a young man on his phone, a red and black vortex swirling around him.]

Sophisticated technology isn’t required to deceive people online. We showed more than 3,000 high school students a grainy video that purported to depict election workers discarding ballots to rig an election. A simple caption, displayed prominently in red, capitalized text, was enough to convince students they were watching U.S. voter fraud, even though the video originated in Russia. Just three students saw through the deception.

We’ve long argued that cheaply produced fakes pose a greater threat than deepfakes: they are nearly as persuasive and far easier to make. In the most recent election, despite widespread access to AI tools, it was crudely stitched-together footage that fueled debate over President Joe Biden’s fitness for office.

The era of simple, easily manufactured fakes is now drawing to a close. Content created with advanced video tools signals an even more perilous information landscape. With tools such as Google’s Veo 3, OpenAI’s Sora 2, and Meta’s Vibes, generating AI content has become so effortless that it is flooding our feeds, aided by platforms’ broad retreat from accountability. To navigate today’s internet, we need insight from ancient traditions: the enduring emphasis on reputation found in Muslim, Jewish, Buddhist, and other spiritual practices.

Pious Muslims verify the sayings of Muhammad by following an “isnad,” or chain of narration. Observant Jews interpret Talmudic teachings in light of the rabbi who delivered them. Tibetan Buddhists transmit their teachings orally, maintaining an unbroken lineage from the Buddha to the present day. Each of these traditions asks us to evaluate information critically, but only after confirming where it came from and weighing the credibility of the sages who endorsed it.

Reputation matters in secular settings, too: it is our primary tool for making decisions when we lack sufficient information or expertise. We rely on reputation when choosing a therapist or a plumber, picking a restaurant, or booking a place to stay. We ask people we trust and read reviews, knowing that sources are unlikely to advertise their own shortcomings or hidden agendas.

Given reputation’s vital role in so many areas of our lives, why do most people ignore it on the internet?

Our research has assessed thousands of young people’s ability to evaluate online information. Again and again, we watched them judge content without considering where it came from. A student from rural Ohio accepted the voter fraud video as genuine, believing the naked eye could discern “fraud in multiple different states.” Another student, from Pennsylvania, said the video clearly “showed people entering fake votes into boxes.”

The same pattern intensifies with AI. One teacher recalled asking a student whether information from ChatGPT was accurate. The student thrust their phone toward the teacher and exclaimed, “Look, it says it right here!” Our preliminary studies in high school and college settings reveal a similar tendency: many students put their faith in AI chatbots even when the tools fail to provide crucial context about where their information comes from.

Many internet users never assess reputation at all, or they mistake superficial signals and popular platforms for vetted sources rather than recognizing them as conduits for unreliable information. When people do try to gauge reputation, they are often swayed by cues the source itself controls: a professional-looking domain, formal language on an “about” page, sheer volume of data regardless of its merit, or a gut impression of how something looks.

These traits have a deceptive allure. Anyone can buy a professional-looking website, including organizations that promote hatred. Holocaust-denial websites claim on their “about us” pages that they “provide factual information.” Content full of elaborate charts can still push harmful misinformation. And evidence shows that AI is persuasive enough to make us doubt our own perceptions: from audio clips that perfectly mimic our parents’ voices to eerily lifelike visuals of a fire engulfing Seattle’s Space Needle.

The current information environment presents a grim dilemma: passively accept falsehoods or cynically believe nothing. The first leaves us at the mercy of bad actors who exploit authentic-looking media. The second deprives us of valuable information. Both erode the capacity for informed civic engagement at a moment when it is urgently needed.

Here’s a possible approach: rather than focusing solely on the content, first ask who produced it, much as religious traditions evaluate teachings in light of who spoke them.

What’s more, used judiciously, the same technologies that can deceive us can also help solve this problem. This isn’t about outsourcing our critical thinking to technology but about using technology to verify credibility and sharpen our analytical skills.

The three students who identified the voter fraud video’s Russian origin used no complex technical methods. They simply opened a new browser tab, typed in relevant keywords, and found articles from reputable outlets such as the BBC and Snopes debunking the footage. Likewise, with some shrewd guidance on how large language models (LLMs) work and how to construct effective prompts, AI can genuinely help verify social media posts and supply missing context.

Leading AI platforms include brief disclaimers urging users to validate information. Google states, “Gemini can make mistakes, so double-check it.” OpenAI advises, “ChatGPT can make mistakes. Check important info.” Yet across all age groups, from Generation Alpha to older adults, nearly everyone struggles to verify the information they encounter.

The encouraging news is that everyone can improve. Even a brief course of instruction in evaluating credibility yields significant progress, as our research has shown in settings ranging from high school classrooms to college courses. Before instruction, students judged reliability by how a site looked; afterward, they had learned to establish a source’s reputation. Studies elsewhere have found similarly positive results.

In an age when distinguishing authentic content from AI-fabricated material verges on impossible, trying to determine trustworthiness can feel futile. Yet we can navigate the modern information landscape more effectively by reinforcing an age-old principle: the fundamental value of reputation.