Microsoft has officially introduced the Maia 200 AI chip, the second generation in the Maia line, which the company first announced in 2023.
The chip goes into operation this week at a Microsoft data center in Iowa (USA), with deployment at a second facility in Arizona to follow.
The move comes as cloud computing giants such as Microsoft, Google (Alphabet), and Amazon Web Services, which are among Nvidia's largest customers, accelerate their in-house AI chip efforts to reduce dependence on the market-dominating supplier.
Unlike earlier in-house chip generations that focused mainly on hardware, Microsoft is this time emphasizing software, the piece widely considered Nvidia's biggest competitive advantage.
Along with the Maia 200, Microsoft provides a programming toolkit that includes Triton, open-source software to which OpenAI, the developer of ChatGPT, has contributed significantly.
Triton is designed to play a role similar to CUDA, the software platform that helped Nvidia build a near-exclusive position in AI chips.
According to Wall Street analysts, Nvidia's biggest advantage lies not only in hardware but in the CUDA software ecosystem, a barrier competitors have found difficult to overcome.
Microsoft's serious investment in software shows that the company is taking aim at this core strength.
On the manufacturing side, the Maia 200 is produced by Taiwan Semiconductor Manufacturing Company (TSMC) on a 3-nanometer process, the same node as the high-end Vera Rubin AI chips Nvidia introduced earlier this month.
Microsoft's chip also uses high-bandwidth memory (HBM), though an older, slower generation than what Nvidia's upcoming products will carry.
However, the Maia 200 integrates a large amount of SRAM, a type of memory with very fast access speeds.
This design is considered well suited to chatbots and AI systems that serve many users simultaneously, helping reduce latency when processing queries. It is a direction many emerging Nvidia competitors are pursuing.
Cerebras Systems, which recently signed a $10 billion deal with OpenAI to supply computing capacity, also relies heavily on similar memory technology.
Groq, another AI chip startup, has even had its technology licensed by Nvidia in a deal reportedly worth up to $20 billion.
Microsoft's chip development and software ecosystem building show that the AI race is entering a new phase, in which technology giants no longer accept complete dependence on Nvidia.