The TPU computing-power frenzy is sweeping in! Three keywords run through the new wave of AI investment: ASIC, optical interconnect, and storage.

16:33 27/11/2025
GMT Eight
The TPU AI computing-power feast has arrived: from Google partnering with Broadcom to build AI ASICs, to TPU capacity spilling over to Meta, to optical-interconnect company Lumentum and storage giant Micron lining up to absorb the entire flow of TPU AI computing power.
Since Alphabet Inc.'s Google launched the Gemini 3 AI application ecosystem last week, the model has quickly caught on worldwide, driving a sudden surge in demand for Google's AI computing power. Judging from recent feedback from paid Gemini users and discussion on social media, both enterprise (B-end) and individual (C-end) users have marveled at what some call "the most powerful multimodal large model to date," which is expected to dramatically improve enterprise operational efficiency and individual users' software-collaboration efficiency.

Major Wall Street institutions such as Morgan Stanley and Mizuho have therefore grown increasingly vocal in their bullishness on Alphabet's stock and on the broader "Google AI ecosystem." In their view, Broadcom Inc., the key partner that co-develops Google's TPU chips, stands to benefit comprehensively from the unprecedented surge in demand for Google's AI computing power and enter a new "super bull market." Other ecosystem participants tied to the "optical interconnect" high-performance networking layer, along with leaders in enterprise-grade high-performance storage for data centers, are likewise expected to ride the aggressive expansion of Google's AI computing infrastructure and application ecosystem into an unprecedented "AI prosperity cycle."

According to reports on Tuesday Eastern Time, Meta Platforms, the parent company of Facebook, is in talks to purchase billions of dollars' worth of TPU AI computing clusters from Google.
The move has ignited a global investment frenzy around Google's TPU AI computing clusters, sending the stock prices of Google-supply-chain companies soaring across global markets, while the standard-bearers of the rival AI GPU route, NVIDIA Corporation and AMD, have seen their share prices fall sharply. Insiders say Meta is considering spending billions of dollars on Google TPU AI computing infrastructure in 2027, including the construction of a massive AI data center for Meta. In addition, Salesforce, Inc. CEO Marc Benioff recently announced that the company will move off OpenAI's large models and switch to Google's latest model, Gemini 3. Together with recent news that Anthropic, OpenAI's competitor, plans to invest billions of dollars to purchase one million TPU chips, the "Google AI ecosystem" trade is heating up, with the stock prices of almost all ecosystem participants soaring in recent days.

According to analysis by Semianalysis, Google's latest TPU v7 (Ironwood) represents a remarkable generational leap: its BF16 throughput reaches 4,614 TFLOPS, versus only 459 TFLOPS for the previously widespread TPU v5p, roughly a tenfold increase. TPU v7's memory puts it in direct competition with NVIDIA's Blackwell-architecture B200, and for specific applications an AI ASIC with architectural advantages in cost-effectiveness and efficiency can more easily handle mainstream inference workloads. Semianalysis estimates, for example, that Google's latest TPU cluster delivers 1.4 times the performance per dollar of NVIDIA's Blackwell.
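As a quick back-of-the-envelope check, the figures cited above can be put together in a few lines (all numbers are taken from the Semianalysis claims quoted in this article, not independently verified):

```python
# Cited per-chip BF16 throughput figures (TFLOPS)
TPU_V7_BF16_TFLOPS = 4614   # TPU v7 "Ironwood"
TPU_V5P_BF16_TFLOPS = 459   # previous widely used generation

generational_leap = TPU_V7_BF16_TFLOPS / TPU_V5P_BF16_TFLOPS
print(f"TPU v7 vs v5p BF16 throughput: {generational_leap:.1f}x")  # ~10.1x

# If a TPU cluster delivers 1.4x the performance per dollar of Blackwell,
# the same hardware budget buys 40% more effective compute.
PERF_PER_DOLLAR_RATIO = 1.4
extra_compute_pct = (PERF_PER_DOLLAR_RATIO - 1) * 100
print(f"Extra compute per dollar vs Blackwell: {extra_compute_pct:.0f}%")
```

This is throughput per chip only; real cluster economics also depend on interconnect, memory, utilization, and power, which the per-dollar figure is meant to fold in.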
The series of AI products Google has released on top of Gemini 3 has generated an immense volume of AI token processing, further confirming Wall Street's thesis that the "AI heatwave is still in the early stage of infrastructure demand." Together with the news of Meta's potential TPU purchase and the latest moves by Amazon.com, Inc.'s cloud-computing giant Amazon Web Services (AWS), this has driven an extraordinary surge in AI semiconductor stocks and global AI application-software names closely tied to Google.

Mizuho names the key stocks benefiting from the "explosive demand for Google AI": Broadcom Inc., Lumentum, and Micron Technology, Inc. In a new research report, the Wall Street firm is not only optimistic about the future share price of global AI ASIC leader Broadcom, but also significantly raised its target price on Lumentum (LITE.US), a leader in OCS (optical circuit switches) and high-speed optical modules, from $290 to $325. As large-model architectures gradually converge into a handful of mature paradigms, more cost-effective and energy-efficient AI ASICs can more easily handle mainstream inference workloads. Some cloud providers and industry giants will deeply integrate their software stacks to make ASICs compatible with common neural-network operators and ship excellent developer tools, accelerating the normalization and mass adoption of ASIC clusters in AI inference scenarios. NVIDIA's AI GPUs, by contrast, may focus more on large-scale exploratory training, fast-moving multimodal or novel-architecture experiments, and general computing tasks such as HPC, graphics rendering, and visual analytics.

Mizuho singles out Lumentum as a crucial beneficiary of the "explosive demand for Google AI," profiting greatly from the rapid expansion of AI infrastructure. With the AI build-out in full swing, hyperscale cloud providers such as Google are actively deploying OCS plus high-speed optical modules as the "base of high-performance network infrastructure" to comprehensively support TPU and AI GPU computing clusters. Mizuho's analysts also believe US storage giant Micron Technology, Inc. (MU.US) will be one of the biggest beneficiaries of the accelerating expansion of Google's AI computing clusters: Google's massive TPU clusters, its large purchases of NVIDIA AI GPU clusters, and the urgent need for high-performance DDR5 memory and enterprise-grade high-performance SSDs in newly built or expanded AI data centers all play directly to Micron's strengths.

Mizuho's stock-analysis team, led by senior analyst Vijay Rakesh, said Lumentum will be one of the biggest winners of the "explosive outbreak of Google AI" because it excels in the essential optical-interconnect components deeply integrated with Google's TPU computing clusters. Such optical components, including optical modules, benefit the entire optical-communications supply chain, because training and inference of massive AI models is essentially "weaving hundreds of thousands of compute chips into a single machine" with optical fiber: growth in network bandwidth and port counts matters as much as the AI chips themselves, which is why optical-module names in the A-share market have recently surged across the board. In Google's Jupiter AI data-center network, OCS (optical circuit switch) clusters have been embedded into the architecture at massive scale to support TPU systems and large-scale training/inference workloads.
Products like Lumentum's R300/R64 OCS are tailored for large-scale cloud-computing and AI/ML data-center networks: they use MEMS optics to establish optical connections directly between endpoints, bypassing intermediate electrical switching and optical-electrical-optical (OEO) conversion, with a focus on high port counts, low latency, and low power consumption. The company is also a significant supplier of 400G/800G high-speed optical modules and optical-interconnect chips, positioning these products as core components for scalable interconnect bandwidth in AI and large-scale cloud data centers.

Mizuho sees Micron benefiting from the "explosive demand for Google AI computing power" primarily through the storage "super cycle." Google, Microsoft Corporation, and Meta are leading the construction of massive AI data centers whose HBM storage systems are built from 3D-stacked DRAM, complemented by large-scale procurement of server-grade DDR5, a core storage resource for AI data-center construction. AI servers supporting the enormous training/inference demand typically carry 4-8 times the DRAM capacity of traditional CPU servers, with many single machines already exceeding 1 TB of DRAM and migrating to DDR5, whose roughly 50% higher bandwidth than DDR4 makes it better suited to massive AI workloads.
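The memory arithmetic above can be sketched in a few lines; note the 256 GB CPU-server baseline is an assumed illustrative figure (not from the report), while the 4-8x multiplier and the DDR4/DDR5 data rates reflect the article's claims and standard base speed grades:

```python
# Assumed typical DRAM in a traditional CPU server (illustrative baseline)
CPU_SERVER_DRAM_GB = 256
AI_MULTIPLIER_LOW, AI_MULTIPLIER_HIGH = 4, 8   # 4-8x, per the article

low_gb = CPU_SERVER_DRAM_GB * AI_MULTIPLIER_LOW
high_gb = CPU_SERVER_DRAM_GB * AI_MULTIPLIER_HIGH
print(f"AI server DRAM: {low_gb}-{high_gb} GB")  # 1024-2048 GB, i.e. >1 TB

# DDR5 vs DDR4 bandwidth at common base data rates (MT/s)
DDR4_MT_S = 3200   # DDR4-3200
DDR5_MT_S = 4800   # DDR5-4800
uplift = DDR5_MT_S / DDR4_MT_S - 1
print(f"DDR5 bandwidth uplift over DDR4: {uplift:.0%}")  # 50%
```

With a different CPU-server baseline the absolute capacities shift, but the point stands: a 4-8x DRAM multiplier across fleet-scale AI deployments translates into enormous incremental demand for DDR5.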
Robust earnings from global storage giants Samsung Electronics, SK Hynix, Western Digital Corporation, and Seagate, together with surging demand for DRAM/NAND products driven by AI training/inference computing power and an AI-fueled rebound in consumer electronics at the edge, especially in DRAM and SSDs, and particularly in HBM and server-grade high-performance DDR5 where Micron is strong, underline what Morgan Stanley and other Wall Street powerhouses have called the arrival of a "storage super cycle."

Is the bull market for the Google AI ecosystem just beginning? The launch of Gemini 3 has shaken the world and, coupled with NVIDIA Corporation's explosive earnings growth, global investors are once again feeling the force of "AI faith" among AI-focused investment funds, driving significant gains in AI semiconductor and AI application-software stocks closely tied to Google. Mizuho's bottom line remains the same: the critical beneficiaries of the "explosive demand for Google AI" are Broadcom Inc., whose TPU work epitomizes the AI ASIC technology route, along with Lumentum and Micron Technology, Inc.