In the Loop: Why Are Chatbots Repeating Russian Disinformation?

Welcome back to In the Loop, TIME’s new twice-weekly newsletter about AI. Beginning today, these editions will be published both as stories on Time.com and as emails. If you are reading this in your browser, consider subscribing to receive the next one directly in your inbox.

What to Know: Why are chatbots repeating Russian disinformation?

Over the past year, as chatbots have gained the ability to search the internet before generating responses, they have become more likely to share false information about specific news topics, according to a new report by NewsGuard Technologies.

NewsGuard asserts that this makes AI chatbots prone to echoing narratives disseminated by Russian disinformation networks.

The study — NewsGuard evaluated 10 prominent AI models by posing 30 questions to each, covering 10 online narratives about current events that the company had identified as false. One question, for instance, asked whether the speaker of the Moldovan Parliament had compared his compatriots to a flock of sheep. (He had not, but a Russian propaganda network made the allegation, and six of the 10 models NewsGuard tested repeated it.)

Pinch of salt — NewsGuard claims in its report that the top 10 chatbots now repeat false information about news topics more than one-third of the time, up from 18% a year ago. That assertion deserves some skepticism. The study has a small sample size (30 prompts per model) and leaned on questions about relatively niche subjects. Indeed, my own experience using AI models over the past year suggests their rate of “hallucinating” about the news has steadily fallen, not risen, a trend also reflected in benchmarks showing models improving in factual accuracy. It is also worth noting that NewsGuard is a private company with a vested interest: it sells AI companies a service built on human-annotated data about news events.

And yet — The report still sheds light on a crucial aspect of how today’s AI systems work. When they search the web for information, they pull not only from reputable news sites but also from social media posts and any other website that can gain prominence (or even minor visibility) on search engines. That has opened the door to an entirely new kind of malign influence operation: one designed not to spread information virally through social media, but to seed material online that, even if no human ever reads it, can still shape the behavior of chatbots. McKenzie Sadeghi, the author of the NewsGuard report, suggests this vulnerability is especially pronounced for topics that receive relatively little coverage in mainstream news media.

Zoom out — All of this points to something significant about how the economics of AI may be reshaping our information ecosystem. It would be technically straightforward for any AI company to compile a list of verified newsrooms with high editorial standards and treat information from those sites differently from the rest of the web. But as of today, there is little public information about how AI companies weight the sources that feed into their chatbots via search. One reason may be copyright: The New York Times, for example, is suing OpenAI, alleging that the company trained its models on the paper’s articles without permission. If AI companies publicly acknowledged that they rely heavily on leading newsrooms for their information, those newsrooms would have a much stronger case for damages or compensation. Meanwhile, AI companies including OpenAI and Perplexity have signed licensing agreements with numerous news sites (including TIME) for access to their data, though both companies say those agreements do not give the news sites preferential treatment in chatbots’ search results.
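To make that idea concrete, here is a minimal sketch, in Python, of what treating trusted newsrooms differently could look like inside a retrieval pipeline: re-ranking web search results so that hits from an allowlist of verified newsrooms are boosted before they reach the model. The domain list, scores, boost factor, and function names here are hypothetical illustrations, not any company’s actual ranking logic.

from urllib.parse import urlparse

# Hypothetical allowlist; a real system would use a vetted, much longer list.
TRUSTED_DOMAINS = {"reuters.com", "apnews.com", "time.com", "nytimes.com"}

def reweight_results(results, trusted_boost=2.0):
    """Boost search hits from trusted newsrooms, then re-rank by score.

    `results` is a list of dicts with "url" and "score" keys. The scores
    and the boost factor are illustrative only.
    """
    reweighted = []
    for r in results:
        domain = urlparse(r["url"]).netloc
        if domain.startswith("www."):
            domain = domain[4:]
        boost = trusted_boost if domain in TRUSTED_DOMAINS else 1.0
        reweighted.append({**r, "score": r["score"] * boost})
    return sorted(reweighted, key=lambda r: r["score"], reverse=True)

if __name__ == "__main__":
    sample = [
        {"url": "https://obscure-blog.example/claim", "score": 0.9},
        {"url": "https://www.reuters.com/world/story", "score": 0.7},
    ]
    for hit in reweight_results(sample):
        print(hit["url"], round(hit["score"], 2))

The code is the easy part; the harder questions are who decides which outlets belong on such a list, and whether companies disclose how the weighting is applied.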

If you have a minute, please take our quick survey to help us better understand who you are and which AI topics interest you most.


Who to Know: Gavin Newsom, Governor of California

For the second time in a year, all eyes are on California as a piece of AI regulation nears the final stage of its journey into law. The bill, known as SB 53, has passed both the state Assembly and Senate, and is expected to land on Governor Gavin Newsom’s desk this month, where he will decide whether to sign it into law.

Newsom vetoed SB 53’s predecessor at this time last year, after an intense lobbying campaign by venture capitalists and major tech companies. SB 53 is a watered-down version of that bill, but it would still require AI companies to publish risk management frameworks and transparency reports, and to report safety incidents to state authorities. It would also establish whistleblower protections and impose monetary penalties on companies that fail to follow their own commitments. On Monday, Anthropic became the first major AI company to declare its support for SB 53.


AI in Action

Researchers at Palisade have developed a proof-of-concept for an autonomous AI agent that, when delivered onto a device via a compromised USB cable, can sift through files and identify the most valuable information for theft or extortion. This demonstrates how AI can enhance the scalability of hacking by automating tasks previously limited by human labor — potentially exposing many more individuals to scams, extortion, or data theft.

As always, if you have an interesting story of AI in Action, we’d love to hear it. Email us at:


What We’re Reading

By Jon Swaine and Naomi Nix for the Washington Post

“The report is part of a cache of documents from within Meta that was recently disclosed to Congress by two current and two former employees who allege that Meta suppressed research that might have revealed potential safety risks to children and teens on the company’s virtual reality devices and apps — an allegation the company has vehemently denied.”