
The decision underscores how heavily Google is prioritizing computing infrastructure: the company plans to spend up to $93 billion on data centers and equipment by the end of 2025.
Amin Vahdat has spent 15 years quietly building Google's AI platform. He earned a PhD from UC Berkeley and taught at Duke and UC San Diego before joining Google in 2010. Over his research career he has published nearly 400 papers, focused on how to run large-scale computer systems more effectively.
At Google Cloud Next earlier this year, Vahdat introduced the seventh-generation TPU. Each TPU cluster contains more than 9,000 chips and delivers 42.5 exaflops of computing power. He said demand for AI computing has grown 100 million-fold in eight years, forcing Google to continuously upgrade its infrastructure.
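To put those figures in perspective, the short sketch below works out the implied per-chip throughput and the annual growth rate behind a 100 million-fold increase over eight years. It is a rough back-of-envelope estimate only: the chip count is rounded from the "more than 9,000" mentioned above, and the exact per-chip number depends on the numeric precision the 42.5-exaflop figure refers to.

    # Back-of-envelope check of the quoted figures (assumed, rounded values;
    # not official specifications).

    cluster_exaflops = 42.5      # reported compute per TPU cluster, in exaflops
    chips_per_cluster = 9_200    # rounded assumption for "more than 9,000" chips

    # Implied per-chip throughput: 42.5e18 FLOP/s spread over ~9,200 chips.
    per_chip_petaflops = cluster_exaflops * 1e18 / chips_per_cluster / 1e15
    print(f"~{per_chip_petaflops:.1f} petaflops per chip")  # ~4.6 petaflops

    # A 100-million-fold rise over eight years corresponds to roughly
    # 10x growth per year, since (1e8) ** (1/8) = 10.
    annual_growth = 1e8 ** (1 / 8)
    print(f"~{annual_growth:.0f}x growth per year")         # ~10x

In other words, the demand curve Vahdat describes amounts to roughly a tenfold increase every year, which is the pace the new hardware generations are meant to keep up with.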
Vahdat also leads many of Google's technical backbone projects, including the TPU chips for AI, the Jupiter network that lets data centers move enormous volumes of data, and the Borg system that schedules workloads across hundreds of thousands of servers. He also oversees the development of Axion, Google's first Arm-based CPU for data centers.
As technology companies compete fiercely for AI talent, elevating Vahdat to senior leadership also helps Google retain core personnel. With 15 years at the company and a string of platform projects behind him, he is considered one of the most important figures in Google's AI strategy.