Chart #1 —
Google’s hidden weapon in the AI infrastructure race
Google’s AI chips, called Tensor Processing Units (TPUs), are getting a lot of attention. They were used to train Gemini 3, Google’s newest generative-AI model, which has been widely praised, and they are cheaper to run than Nvidia’s Graphics Processing Units (GPUs).
The real reason Google created the TPU goes back to 2013. The company ran a forecast showing that if every Android user used voice search for just three minutes a day, Google would need to double its global data-centre footprint. Not because of video or storage, but because running AI on conventional chips was too expensive. So, Google built its own AI-focused processor. Fifteen months later, TPUs were already powering Google Maps, Photos, and Translate, long before the public knew the chip existed.
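The logic of that 2013 forecast can be sketched as a back-of-envelope calculation. Every number below is an illustrative assumption chosen for the sketch, not a figure from the source:

```python
# Back-of-envelope sketch of the kind of capacity forecast described above.
# All constants are illustrative assumptions, not Google's actual figures.

ANDROID_USERS = 1e9             # assumed active Android users circa 2013
VOICE_SECONDS_PER_DAY = 3 * 60  # three minutes of voice search per user per day
OPS_PER_SECOND_OF_AUDIO = 1e10  # assumed neural-net ops to process 1 s of audio
SERVER_OPS_PER_SEC = 1e11       # assumed sustained throughput of one CPU server

# Total daily inference work, amortised over the 86,400 seconds in a day.
total_ops_per_day = ANDROID_USERS * VOICE_SECONDS_PER_DAY * OPS_PER_SECOND_OF_AUDIO
required_servers = total_ops_per_day / (SERVER_OPS_PER_SEC * 86_400)

print(f"Servers needed for voice search alone: {required_servers:,.0f}")
```

Under these toy numbers, a single three-minute-a-day feature demands hundreds of thousands of additional servers, which is the shape of the problem that made a purpose-built chip cheaper than more data centres.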
TPUs matter because GPUs were originally designed for graphics rendering, not AI workloads. TPUs are application-specific chips built around the matrix operations at the heart of neural networks, with none of the general-purpose overhead. The result is better performance per dollar, lower energy use, and faster execution for many AI tasks. Each new generation has also brought a major performance jump. Even Nvidia’s CEO, Jensen Huang, has acknowledged the quality of Google’s TPU program.
So why don’t more companies use TPUs? Most engineers are trained on Nvidia’s CUDA ecosystem, and TPUs are available only through Google Cloud. Switching ecosystems is costly and disruptive.
From Google’s perspective, TPUs give its cloud business a major advantage. While AI workloads are pressuring cloud margins across the industry because of reliance on Nvidia hardware, Google controls both the chip and the software stack. That means lower costs, better margins, faster development cycles, and a defensible position competitors can’t easily replicate. Some experts argue TPUs now match or exceed Nvidia’s top chips.
In short, Google didn’t create TPUs to sell hardware. It built them to handle its own AI growth. Today, TPUs may be Google Cloud’s strongest competitive asset, and if Google opens them more widely to external developers, the AI infrastructure landscape could shift quickly.

Source: zerohedge, uncoveralpha
Source: FT
Source: Jim Bianco