Los Angeles

As people protest Immigration and Customs Enforcement actions in Los Angeles County, a surge of false information is circulating online.

The demonstrations, along with President Trump’s consideration of deploying the National Guard and Marines, mark a significant moment in which AI tools are deeply woven into online discourse. Users have employed AI both to create deceptive content and spread false narratives, and to verify information and push back against misinformation.

Here’s how AI has been used during the L.A. protests.

Deepfakes

Compelling images from the protests have garnered global attention, including a protester waving a Mexican flag and a journalist injured by a police officer’s rubber bullet. Simultaneously, several AI-generated fake videos have been shared.

The technology for creating these videos has advanced rapidly, enabling users to produce convincing deepfakes in minutes. Earlier this month, for example, such tools were used to demonstrate how easily misleading videos tied to news events can be fabricated.

One video that gained traction depicted a National Guard soldier named “Bob” claiming to be “on duty” in Los Angeles and preparing to use gas on protesters. The video reportedly reached over a million views before being removed from TikTok. Many users thanked “Bob” for his service, unaware that the character was fabricated.

Many other misleading images have been spread through less sophisticated methods. Senator Ted Cruz, for example, shared a video on X that appeared to show a violent protest with burning cars but turned out to be old footage. Another post displayed a pallet of bricks, falsely alleging they were intended for use by “Democrat militants”; the photo’s origin was different from what was claimed.

Fact checking

In response to these posts, X users turned to Elon Musk’s AI chatbot, Grok, to check the claims. Grok has become a key fact-checking resource during the protests, with many people relying on it and other AI models, sometimes more than on journalists, to verify information, including the extent of looting during the demonstrations.

Grok refuted both Cruz’s post and the brick post. Regarding the senator’s post, the AI stated that the footage was likely from May 30, 2020, and cautioned that using old footage could be misleading. In response to the bricks, it identified the photo’s origin as a Malaysian building supply company and debunked the claim that it was related to Soros-funded organizations or protests.

However, Grok and other AI tools have made errors of their own, making them unreliable news sources. Grok incorrectly suggested that a photo of National Guard troops sleeping on floors in L.A., shared by Governor Newsom, was actually from Afghanistan in 2021; in fact, the San Francisco Chronicle had exclusively obtained and published the photo. ChatGPT made similar inaccurate claims, and right-wing influencers such as Laura Loomer amplified the errors.

Grok later corrected itself and apologized.

Grok stated that it aims to pursue truth and attributed the error to training data scraped from the internet.

Kate Ruane, director of the Center for Democracy and Technology’s Free Expression Program, says the current information environment exacerbates the public’s challenges in understanding the protests and the government’s actions.

Nina Brown, a professor at Syracuse University, expresses concern about people relying on AI for fact-checking instead of reputable sources like journalists, as AI is not currently a reliable source of information.

Brown acknowledges AI’s potential but emphasizes that it cannot replace human fact-checkers and journalists who serve as reliable sources of information for the public.

Brown is increasingly worried about misinformation spread through AI. She points to people’s willingness to accept information without scrutiny, combined with advances in AI that make it possible to create realistic but deceptive videos.
