NVIDIA's Position Unshaken Amid Overheated ASIC Speculation

The past six months have witnessed a notable shift in the financial markets, particularly in the trade around artificial intelligence (AI). This shift has been marked by a growing interest in customized Application-Specific Integrated Circuits (ASICs), as many market participants speculate that the potential of ASICs may far surpass that of commercially available Graphics Processing Units (GPUs). However, this sentiment has been met with skepticism by market analysts, including those at Morgan Stanley, who caution against overestimating the long-term ability of ASICs to decisively challenge the established dominance of GPUs.

On February 12, Morgan Stanley analyst Joseph Moore and his team published an in-depth report positing that ASICs, as a category of chips, are neither inherently superior nor inferior to commercial GPUs. Instead, the two types of chips simply represent distinct routes to similar results.

According to Morgan Stanley, while ASICs may excel in certain specific application scenarios, their effectiveness is often heavily contingent on the customization needs of particular clients. Although the development costs associated with ASICs are generally lower, the overall system costs, as well as the expenses linked to software deployment, can be significantly higher than for commercially available GPUs. This raises concerns about the total cost of ownership (TCO) for customers, who may find themselves investing considerable time and resources in adapting software to work with ASICs, ultimately leading to greater expenditure overall.

Furthermore, the CUDA ecosystem developed by NVIDIA is already well established, allowing clients to deploy and execute varied workloads seamlessly. This maturity works in GPU customers' favor on TCO, since the ecosystem streamlines deployment rather than demanding lengthy adaptation. Morgan Stanley anticipates that NVIDIA, as the current leader in the chip market, will continue to hold a dominant position, barring any unforeseen developments.
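To make the portability point concrete, here is a minimal sketch (not taken from the report) of a generic PyTorch workload running on CUDA without any vendor-specific changes; the model and tensor shapes are arbitrary placeholders, and the closing comment notes where an ASIC path would typically diverge.

import torch
import torch.nn as nn

# A generic workload: an off-the-shelf PyTorch model runs unmodified on CUDA,
# because the framework ships with mature, pre-built CUDA kernels.
model = nn.Sequential(nn.Linear(1024, 4096), nn.GELU(), nn.Linear(4096, 1024))

# Device selection is the only CUDA-specific line; everything else is portable.
device = "cuda" if torch.cuda.is_available() else "cpu"
model = model.to(device)

x = torch.randn(8, 1024, device=device)
with torch.no_grad():
    y = model(x)
print(y.shape)

# On a custom ASIC, the same model typically has to be retraced or recompiled
# through a vendor-specific toolchain and runtime before it can execute, which
# is the adaptation effort the report identifies as a hidden cost.

The snippet is only meant to illustrate why a mature, widely supported software stack shortens deployment; it is not a statement about any particular ASIC toolchain.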

When weighing the customization benefits of ASICs against the broader applicability of GPUs, Morgan Stanley also highlights that ASICs might possess limited versatility.

They argue that in more specialized application scenarios, ASICs can indeed provide a competitive edge. ASICs are designed to cater specifically to particular cloud service providers or enterprise customers, delivering enhanced performance and efficiency that can stand out in the market.

A prime example is Google's Tensor Processing Unit (TPU). The TPU's success can be attributed to Google's pioneering work on the Transformer architecture, which underpins many of today's large language models (LLMs). Under Google's guidance, Broadcom built a chip specifically optimized for this architecture, generating over $8 billion in revenue for Broadcom.

Nonetheless, NVIDIA isn't standing idle. The company is actively optimizing its GPUs to keep pace with the demands of Transformer models, thereby reclaiming share in this competitive landscape. In cloud computing, NVIDIA's GPUs have frequently proven more competitive than ASICs, supporting the notion that the future landscape might tilt further toward GPUs as a versatile tool for broader workloads.

In light of these insights, Morgan Stanley suggests that the advantages of customized ASICs may be more pronounced in traditional workloads. NVIDIA's focus on training multi-modal Artificial General Intelligence (AGI) models involves capabilities that may be considered excessive for certain legacy applications.

Additionally, Morgan Stanley acknowledges that while ASICs can entail lower development costs, their overall system costs can be more burdensome than those of GPUs. The report underscores that ASICs may appear cost-effective at first glance: some ASIC hardware can be priced as low as $3,000, against NVIDIA's H100 GPU at around $20,000. However, this seemingly attractive hardware pricing may mask higher overall costs.

For instance, when comparing cluster costs, building an ASIC cluster can be significantly pricier than an NVIDIA setup, particularly because NVIDIA has designed a copper-based NVLink domain connecting 72 GPUs, while ASICs may require pricier optical technologies to build comparable infrastructure.

Moreover, although high-bandwidth memory (HBM) is nominally priced the same for all buyers, NVIDIA often enjoys an advantage owing to its preferential procurement of the newest HBM technologies and of CoWoS (Chip-on-Wafer-on-Substrate) packaging capacity.

On the software front, the NVIDIA CUDA ecosystem is robust and mature, enabling clients to deploy and run varied workloads with ease. In contrast, when opting for ASICs or other substitutes, customers often face lengthy adaptation processes that drain resources and escalate TCO. For example, clients using Trainium are expected to invest several weeks or even months in system deployment, a reality echoed by a cloud service executive who recently remarked that his ASIC team regularly lags behind NVIDIA's advancements by two to three years. This discrepancy highlights the economic challenges faced by companies that pursue ASIC solutions.
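As a rough illustration of the cost argument above, the back-of-envelope sketch below combines the chip prices cited in the report with placeholder figures for cluster size, interconnect, and porting effort; apart from the roughly $20,000 GPU price, the $3,000 ASIC price, and the 72-GPU NVLink domain, every number is a hypothetical assumption chosen for illustration, not a figure from Morgan Stanley.

# Back-of-envelope TCO comparison (illustrative only).
GPU_PRICE = 20_000          # per H100-class GPU (cited in the report)
ASIC_PRICE = 3_000          # per custom ASIC (cited in the report)

GPUS = 72                   # one copper-based NVLink domain (cited in the report)
ASICS = 288                 # assumption: more ASICs needed for similar throughput

GPU_INTERCONNECT = 2_000    # assumption: copper NVLink cost per GPU
ASIC_INTERCONNECT = 8_000   # assumption: optical networking cost per ASIC

ENGINEER_WEEK = 10_000      # assumption: fully loaded cost per engineer-week
GPU_PORT_WEEKS = 2          # assumption: CUDA software works almost out of the box
ASIC_PORT_WEEKS = 12        # assumption: months of porting, per the report's anecdote

def cluster_tco(chip_price, count, interconnect_per_chip, port_weeks):
    # Hardware plus interconnect plus one-off software adaptation cost.
    return count * (chip_price + interconnect_per_chip) + port_weeks * ENGINEER_WEEK

gpu_tco = cluster_tco(GPU_PRICE, GPUS, GPU_INTERCONNECT, GPU_PORT_WEEKS)
asic_tco = cluster_tco(ASIC_PRICE, ASICS, ASIC_INTERCONNECT, ASIC_PORT_WEEKS)

print(f"GPU cluster TCO:  ${gpu_tco:,}")
print(f"ASIC cluster TCO: ${asic_tco:,}")

Under these assumed figures the cheaper chip ends up costing more at the cluster level, which is the TCO dynamic the report describes; with different assumptions the comparison could, of course, tilt the other way.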

Consequently, even as NVIDIA introduces cost-effective chips like the L4 and L40, the market sentiment leans heavily towards the adoption of high-performance GPUs, due to their substantial advantages in terms of performance and ecosystem support.

In summary, Morgan Stanley emphasizes that while lower-priced processors may attract some initial interest during their launch phase, the lack of a mature ecosystem and long-term support often drives customers back toward NVIDIA. Exceptions such as the TPU, Trainium, and AMD's MI300 remain outliers, suggesting that cheaper processors do hold some inherent value. However, the anticipated market share for these lower-cost alternatives frequently falls short of initial expectations.

Ultimately, NVIDIA's dominance in the market appears to be unassailable. Morgan Stanley argues that NVIDIA's leadership in the AI chip domain stems not only from its impressive technological capabilities but also from its deeply integrated ecosystem and ongoing commitment to research and development.

Remarkably, the report notes that NVIDIA plans to allocate approximately $16 billion to R&D this year, a stark contrast to the budgets for customized ASIC development, which are typically less than $1 billion and sometimes far lower.

This financial prowess enables NVIDIA to maintain development cycles of four to five years while supporting three sequential design teams, allowing for a continuous stream of high-performance, market-leading chips.
