Nvidia replaces Intel in Dow Jones Industrial Average
How Intel got left behind in the AI chip boom
Nvidia’s H200 blew rivals out of the water
***
Nvidia replaces Intel in Dow Jones Industrial Average
Nvidia is set to take Intel's place on the prestigious Dow Jones Industrial Average index, marking the end of a 25-year reign for the chipmaking giant. The change takes effect on November 8, as Nvidia has solidified its position as a leader in the global semiconductor industry, driven by surging demand for its graphics processing units (GPUs) that power generative artificial intelligence technologies.
Intel, once the dominant force in chipmaking, has struggled to keep pace with rival TSMC and missed out on the AI boom after passing on an investment in OpenAI, the owner of ChatGPT. In contrast, Nvidia's shares have doubled over the past year, with the company's H100 GPUs being snapped up by tech giants such as Microsoft, Meta, Google, and Amazon to build AI-powered computer clusters.
The company's next-generation AI GPU, Blackwell, is already generating "insane" demand, according to Nvidia. Intel, on the other hand, has found it challenging to gain traction in the AI chip market, dominated by Nvidia's technologically advanced processors.
The Dow Jones Industrial Average index, which is weighted by share price rather than market capitalization, is set to welcome Nvidia as its newest member. This means that companies with higher share prices, like Nvidia, will have a greater impact on index movements, regardless of their total market value….
Read on The Wall Street Journal
***
How Intel got left behind in the AI chip boom
In 2005, Intel faced a pivotal decision that could have altered the course of the artificial intelligence (AI) revolution. Then-CEO Paul Otellini proposed that Intel acquire Nvidia, a Silicon Valley upstart known for its graphics chips, for as much as $20 billion. Some Intel executives believed Nvidia's chip design could eventually be used for data centers, a key approach for future AI systems.
However, the Intel board resisted the idea, citing the company's poor track record of absorbing acquisitions. Confronted with skepticism, Otellini backed away, a decision that, in hindsight, was described as a "fateful moment."
Today, Nvidia is the dominant AI chip maker, with a market value of over $3 trillion, roughly 30 times that of the struggling Intel, which has fallen below $100 billion. Intel's failure to capitalize on the AI boom is representative of the broader challenges the company now faces, including missed opportunities, wayward decisions, and poor execution.
Intel's insular corporate culture, focused on its lucrative x86 chip business, worked against the company as it repeatedly tried and failed to become a leader in AI chips. Projects like the Larrabee graphics chip, led by current CEO Patrick Gelsinger, were ultimately abandoned, allowing Nvidia to pull ahead…
Read on The New York Times
***
Nvidia’s H200 blew rivals out of the water
MLCommons, a nonprofit that evaluates artificial intelligence software and hardware, released test results showing that Nvidia’s most advanced broadly available chip, the H200, blew rivals out of the water when it came to powering large language models. We’re talking about what’s known in AI-land as inference, which covers generating or summarizing text, rather than the computing power required to train the models.
For instance, Nvidia’s H200s handled a large language model developed by Meta Platforms 44% faster than Advanced Micro Devices’ most advanced chip, the Instinct MI300X, according to the test. An AMD spokesperson pointed out that the MI300X was neck-and-neck with Nvidia’s H100 in the same test and noted that this is AMD’s first time submitting its MI300X chip. We’re not sure why that makes a difference, although AMD may learn how to better optimize its chips for these kinds of tests over time.
This result is important because if Nvidia had any potential chinks in its armor, they’d be in the realm of AI inference. Lots of younger chip makers are betting that AI customers will prefer using cheaper chips rather than paying for the Rolls Royce of chips from Nvidia. Would-be Nvidia rival Cerebras, for instance, this week said it could power AI at one-fifth the price of Nvidia’s chips.
Even aside from these test results, it’s hard to definitively prove that businesses are getting more bang for their buck if they use non-Nvidia chips for inference. That’s because the price to rent Nvidia’s chips varies by cloud provider.
In the separate “open” division, which allows chip firms to use more hacks and tricks to improve performance, Nvidia used two techniques, pruning and sparsity, to speed up its handling of Meta’s LLM, Llama 2 70B, Nvidia product marketing director Dave Salvator said. Both techniques make inference more efficient by removing parts of a model, or skipping them, during computation.
These techniques aren’t necessarily easy to execute. Sparsity, for instance, even tripped up OpenAI last year….
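To give a rough sense of the idea, here is a minimal sketch of magnitude pruning, one common flavor of the technique: the smallest-magnitude weights are zeroed out so that hardware or kernels with sparsity support can skip them. This is an illustration only, not Nvidia's actual implementation; the function name and the toy weight matrix are made up for the example.

```python
import numpy as np

def magnitude_prune(weights, sparsity=0.5):
    """Zero out the given fraction of weights with the smallest magnitudes."""
    flat = np.abs(weights).ravel()
    k = int(len(flat) * sparsity)  # number of weights to drop
    if k == 0:
        return weights.copy()
    # Threshold = k-th smallest absolute value; everything at or below it is cut.
    threshold = np.partition(flat, k - 1)[k - 1]
    return np.where(np.abs(weights) <= threshold, 0.0, weights)

# Toy 2x2 weight matrix: pruning at 50% sparsity keeps the two largest weights.
w = np.array([[0.9, -0.05], [0.02, -0.7]])
print(magnitude_prune(w, sparsity=0.5))
```

The catch, as the article notes, is that zeroing weights can degrade model quality, so pruned models typically need careful tuning before the speedup comes for free.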