Amazon and Microsoft have, so far, stood slightly apart from the others. While Google and Meta made developing their own AI models a top priority, Microsoft and Amazon have invested in smaller technology companies, receiving in return access to those companies’ AI models, which they then incorporate into their products and services.

Microsoft has invested at least $13 billion in OpenAI, the company behind ChatGPT. As part of this agreement, OpenAI gives Microsoft access to the AI systems it develops, while Microsoft provides OpenAI with the computational power it needs. Anthropic has deals with both Amazon and Google, receiving funding from each in exchange for making its models available through Amazon’s and Google’s cloud services platforms. (Investors in Anthropic also include Salesforce, where TIME co-chair and owner Marc Benioff is CEO.)

Now, there are signs that the two technology giants are wading deeper into the fray. In March, The Verge reported that Amazon has tasked its AGI team with building a model that outperforms Anthropic’s most capable AI model, Claude 3, by the middle of this year. Earlier this month, The Information reported that Microsoft is training a foundation model large enough to compete with frontier model developers such as OpenAI.

While there are many types of AI systems that are used in a multitude of ways, the big trend of the last couple of years is language models—the AI systems that can generate coherent prose and usable code, and that power chatbots such as ChatGPT. While younger companies OpenAI and Anthropic, alongside the more established Google DeepMind, are in the lead for now, their new big tech rivals have advantages that will be hard to offset. And if the tech giants come to dominate the AI market, the implications—for corporate concentration of power and for whether the most powerful AI systems are being developed safely—could be troubling.

Over the course of the 2010s, AI researchers began to realize that training their AI systems with more computational power would reliably make them more capable. Over the same period, the computational power used to train AI models increased rapidly, doubling every six months according to researchers at Epoch, an AI-focused research institute.

The specialized semiconductor chips required to do that much computational work are expensive, as is employing the engineers who know how to make use of them. OpenAI CEO Sam Altman has said that GPT-4 cost over $100 million to train. This ever-growing need for capital is why OpenAI, which was founded in 2015 as a nonprofit, changed its structure and went on to ink multibillion-dollar deals with Microsoft, and why Anthropic has signed similar agreements with Amazon and Google. Google DeepMind—the AI team within Google that develops Google’s most powerful AI systems—was formed last year when Google merged its elite AI group, Google Brain, with DeepMind. Much like OpenAI and Anthropic, DeepMind started out as a startup before it was acquired by Google in 2014.

These partnerships have paid off for all parties involved. OpenAI and Anthropic have been able to access the computational power they need to train state-of-the-art AI models—most experts agree that OpenAI’s GPT-4 and Anthropic’s Claude 3 Opus, along with Google DeepMind’s Gemini Ultra, are the three most capable models currently available. Companies behind the frontier have so far tried alternative business strategies. Meta, for example, gives much more thorough access to its AI models in order to benefit from developers outside the company improving them, and to attract talented researchers who prefer to be able to openly publish their work.

In their quarterly earnings reports in April, Microsoft and Amazon reported bumper quarters, which they both partly credited to AI. Both companies also benefit from the agreements in that a large proportion of the money flows back to them, as it’s used to purchase computational power from their cloud computing services units.

However, as the technical feasibility and commercial utility of training larger models has become apparent, it has become more attractive for Microsoft and Amazon to build their own large models, says Neil Thompson, who researches the economics of AI as the director of the FutureTech research project at the Massachusetts Institute of Technology. Building their own models should, if successful, be cheaper than licensing the models from their smaller partners and give the big tech companies more control over how they use the models, he says.

The encroachment runs in both directions. OpenAI’s Altman has reportedly pitched his company’s products to a range of large firms that include Microsoft customers.

The good news for OpenAI and Anthropic is that they have a head start. GPT-4 and Claude 3 Opus, alongside Google’s Gemini Ultra, are still in a different class from other language models such as Meta’s Llama 3, according to a recent analysis. OpenAI notably released GPT-4 back in March 2023.

But maintaining this lead will be “a constant struggle,” writes Nathan Benaich, founder and general partner at venture capital firm Air Street Capital, in an email to TIME. “Labs are in the challenging position of being in constant fundraising mode to pay for talent and hardware, while lacking a plan to translate this model release arms race into a sustainable long-term business. As the sums of money involved become too high for US investors, they’ll also start having to navigate tricky questions around foreign sovereign wealth.” In February, the Wall Street Journal reported that Altman was in talks with investors including the U.A.E. government to raise up to $7 trillion for AI chip manufacturing projects.

Big technology companies, on the other hand, have ready access to computational resources—Amazon, Microsoft, and Google account for 31%, 24%, and 11% of the global cloud infrastructure market, respectively, according to data from market intelligence firm Synergy Research Group. This makes it cheaper for them to train large models. It also means that, even if further development of language models doesn’t pay off commercially for any company, the tech companies selling access to computational power via the cloud can still profit.

“The cloud providers are the shovel salesmen during the gold rush. Whether frontier model builders make money or lose it, cloud providers win,” writes Benaich. “Companies like Microsoft and Amazon sit in an enviable position in the value chain, combining both the resources to build their own powerful models with the scale that makes them an essential distribution partner for newer entrants.”

But while the big technology companies may have certain advantages, the smaller companies have their own strengths, such as greater experience training the largest models, and the ability to attract the most talented researchers, says Thompson.

Anthropic is betting that its talent density and proprietary algorithms will allow it to stay at the frontier while using fewer computational resources than many of its competitors, says Jack Clark, one of the company’s co-founders and its head of policy. “We’re going to be on the frontier surprisingly efficiently relative to others,” he says. “For the next few years, I don’t have concerns about this.”

While it could be argued that more companies entering the foundation model market would increase competition, it is more likely that this vertical integration will serve to increase the power of already powerful technology companies, argues Amba Kak, co-executive director of the AI Now Institute, a research institute that studies the social implications of artificial intelligence.

“Viewing this as ‘more competition’ would be the most inventive corporate spin that obscures the reality that all the versions of this world serve to consolidate power in the hands of a few large firms,” Kak says.