Three AI Misconceptions to Set Aside in the New Year

In 2025, misunderstandings about AI were widespread as people tried to grasp the fast-paced development and uptake of the technology. Below are three common ones to set aside in the upcoming year.

AI models are hitting a wall

When GPT-5 launched in August, people questioned (not for the first time) whether AI was reaching a plateau. Despite the milestone version number, the improvements felt incremental. The New Yorker published a piece titled “What if A.I. Doesn’t Get Much Better Than This?” that called GPT-5 “the newest offering indicating that progress on large language models has slowed.”

It quickly emerged that, despite the naming milestone, GPT-5 was aimed mainly at delivering strong performance at a lower cost. Three months later, OpenAI, Google, and Anthropic all released models with significant improvements on economically useful tasks. “Contrary to the popular belief that scaling is done,” the performance leap in Google’s latest model was “as big as we’ve ever seen,” said Oriol Vinyals, who leads deep-learning research at Google DeepMind, following the release of Gemini 3. “No barriers ahead.”

There are real questions about exactly how AI models will advance. In areas where training data is costly to gather, such as using AI agents as personal shoppers, progress may be slower. “AI might keep getting better and still struggle in key areas,” said Helen Toner, interim executive director at the Center for Security and Emerging Technology. But the claim that progress is slowing is hard to support.

Self-driving cars are more dangerous than human drivers

When a chatbot’s AI fails, the result is usually an error in someone’s work or a miscounted number of “r”s in “strawberry.” When a self-driving car’s AI fails, people can be injured. It’s no surprise that many are wary of the technology.
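As an aside, the counting question that famously tripped up chatbots is trivial for ordinary code. The snippet below is a minimal Python illustration, not anything the labs themselves use:

```python
# "strawberry" contains three "r"s -- the count several chatbots
# notoriously got wrong.
print("strawberry".count("r"))  # prints 3
```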

A U.K. survey of 2,000 adults found that only 22% would be comfortable riding in a driverless car; in the U.S., the figure was 13%. In October, a Waymo vehicle struck and killed a cat in San Francisco, sparking public outrage.

Yet autonomous cars appear to be considerably safer than human drivers, according to a Waymo analysis covering 100 million driverless miles. Waymo’s vehicles were involved in roughly one-fifth as many injury-causing crashes, and about one-eleventh as many crashes involving “serious injury or worse,” as comparable human drivers.
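A quick way to interpret those multiples: “N times fewer” amounts to a reduction of 1 - 1/N. The short Python sketch below makes the conversion explicit; the multiples are the ones from the Waymo analysis cited above, and the code is only illustrative arithmetic:

```python
# Convert "N times fewer" into a percentage reduction: (1 - 1/N) * 100.
def percent_reduction(times_fewer: float) -> float:
    return (1 - 1 / times_fewer) * 100

# Multiples reported in the Waymo analysis cited above.
print(f"{percent_reduction(5):.0f}% fewer injury-causing crashes")              # 80%
print(f"{percent_reduction(11):.0f}% fewer 'serious injury or worse' crashes")  # 91%
```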

AI can’t create new knowledge

In 2013, the mathematician Sébastien Bubeck published a graph-theory paper in a top journal. “We had some open questions, and I worked on them with Princeton graduate students,” says Bubeck, now an OpenAI researcher. “We solved most, but one remained.” More than a decade later, Bubeck gave the problem to a system based on GPT-5.

“We let it process for two days,” he says. “The model found a remarkable mathematical identity and solved the problem.”

Critics argue that large language models like GPT-5 can’t create original work, only recombine their training data, a view captured in the dismissive label “stochastic parrots.” In June, Apple released a paper arguing that LLMs’ apparent reasoning ability is an “illusion.”

To be clear, LLMs arrive at their responses through processes quite different from human reasoning. They can struggle with simple diagrams even as they win top math and coding competitions and “independently” create “new mathematical structures.” But stumbling on easy tasks doesn’t stop them from generating useful, complex ideas.

“LLMs can definitely follow logical steps to solve problems needing deduction and induction,” Dan Hendrycks, executive director of the Center for AI Safety, told TIME. “Whether you call that ‘reasoning’ or something else is up to you and your dictionary.”