Advances in artificial intelligence may have made a human-caused pandemic five times more likely than it was just a year ago, according to a survey of leading experts’ projections shared exclusively with TIME.
The findings echo concerns raised recently by the AI companies OpenAI and Anthropic, both of which have warned that today’s AI tools are reaching the point where they could significantly assist malicious actors attempting to build bioweapons.
Biologists have long been able to modify viruses using laboratory technology. What is new is the ability of AI chatbots such as ChatGPT or Claude to give accurate troubleshooting advice to amateur biologists trying to create a deadly bioweapon in a lab. Safety experts have long viewed the difficulty of that troubleshooting process as a major check on terrorist groups’ ability to create a bioweapon, says Seth Donoughe, a co-author of the research. Now, he says, because of AI, the expertise required to deliberately cause a new pandemic “could become accessible to a significantly larger population.”
Between December 2024 and February 2025, the Forecasting Research Institute surveyed 46 biosecurity experts and 22 “superforecasters” (individuals with a strong track record of predicting future events) to estimate the risk of a human-caused pandemic. The average respondent put the annual probability of such an event at 0.3%.
Crucially, the researchers then asked a follow-up question: how much would that risk grow if AI tools could match the performance of a team of experts on a difficult virology troubleshooting test? If AI could do that, the average expert estimated, the annual risk would jump to 1.5%, a fivefold increase.
Unknown to the forecasters, Donoughe, a research scientist at SecureBio, a nonprofit focused on pandemic prevention, was testing AI systems for exactly that capability. In April, Donoughe’s team released the results of those tests: today’s leading AI systems can outperform PhD-level virologists on a difficult troubleshooting test.
In other words, AI can now do the very thing that forecasters warned would drive a fivefold increase in the risk of a human-caused pandemic. (The Forecasting Research Institute plans to re-survey the same experts to track whether their view of the risk has risen as they anticipated, though it noted that the follow-up study would take several months to complete.)
To be sure, there are several reasons to treat these results with skepticism. Forecasting is not an exact science, and it is especially difficult to accurately estimate the likelihood of very rare events. Forecasters in the study also underestimated the pace of AI progress. (For instance, when asked, most did not expect AI to surpass human performance on the virology test until after 2030, yet Donoughe’s evaluation showed that bar had already been cleared.) But even if the numbers themselves are taken with a grain of salt, the paper’s authors argue that the results still point in a worrying direction. “It appears that immediate AI capabilities possess the potential to significantly elevate the risk of a human-triggered epidemic,” says Josh Rosenberg, CEO of the Forecasting Research Institute.
The study also laid out ways to reduce the bioweapon risks posed by AI. Those mitigations broadly fell into two categories.
The first category is safeguards built into the AI models themselves. In interviews, researchers credited efforts by companies such as OpenAI and Anthropic to prevent their AIs from responding to prompts aimed at building a bioweapon. The paper also identifies limiting the distribution of “open-source” models and guarding against models being “jailbroken” as measures likely to reduce the chance of AI being used to start a pandemic.
The second category of safeguards involves placing restrictions on companies that synthesize nucleic acids. Currently, it is possible to send one of these companies a genetic sequence and receive biological material matching that code, and the companies are not legally required to screen the sequences they receive before synthesizing them. That is potentially dangerous, because the synthesized material could be used to create mail-order pathogens. The paper’s authors recommend that these companies screen incoming genetic sequences for potential harm and adopt “know your customer” procedures.
Taken together, these safeguards, if implemented, could bring the risk of an AI-enabled pandemic down to 0.4%, according to the average forecaster. (That is only slightly above the 0.3% baseline at which they believed the world stood before learning of AI’s current ability to help create a bioweapon.)
“Broadly, this appears to be an emerging area of risk that warrants attention,” Rosenberg says. “However, effective policy solutions are available.”