A very ChatGPT Christmas: Savvy Black Friday shoppers use AI to find deals
Has AI progress really slowed down?
Jensen Huang says the 3 elements of AI scaling are all advancing
200+ hours of research on AI tools & hacks packed in 3 hours
This free 3-hour Mini Course on AI & ChatGPT (worth $399) will help you become a master of 20+ AI tools & prompting techniques and save 16 hours/week.
Get it now, absolutely free! (first 100 users only) 🎁
This course will teach you how to:
* Build a business that makes $10,000 using just AI tools
* Make quicker & smarter decisions using AI-led data insights
* Write emails, content & more in seconds using AI
* Solve complex problems, research 10x faster & save 16 hours every week
Register & save your seat now! (100 free seats only)
***
A very ChatGPT Christmas: Savvy Black Friday shoppers use AI to find deals
Securing the best Black Friday deals once meant camping out for hours at Best Buy or Walmart and rushing into stores at dawn. Today, savvy shoppers are turning to AI tools like ChatGPT to find the best bargains.
Jim Malervy, a 46-year-old from Philadelphia, is using ChatGPT and other AI tools for his Christmas shopping for the second year in a row. Last year, an AI-powered pricing app helped him save $50 on an iPad for his older daughter. This year, he's hunting for a Margot Robbie rollerblading Barbie doll for his younger daughter, aiming for a 30% discount using apps like PayPal's Honey, which scours the web for the best prices.
Malervy appreciates how AI reduces the stress of Black Friday shopping. "Sometimes you get anxiety," he said. "You want to make sure you’re getting the right price." According to a survey by research group Attest, Malervy is among the 44% of likely Black Friday shoppers planning to use AI this year. Companies like OpenAI and Perplexity AI have expanded their search and shopping features to attract holiday shoppers.
Retailers such as Amazon and Walmart are also leveraging AI to simplify shopping and deal-finding. "I do think this is a glimpse of the future," said Walmart CFO John David Rainey, noting that the retailer is developing AI tools to enhance product searches.
In today's era of dynamic pricing and rapidly changing deals, shoppers need to be more strategic. Earlier this year, one shopper, Akibu, used ChatGPT to find a bracelet deal she had seen on Instagram but couldn't locate again. By feeding the chatbot details such as promo codes and dates, she tracked down the deal and discovered a new way to shop. She plans to use the bot for her Black Friday shopping and beyond....
***
Has AI progress really slowed down?
For over a decade, companies have banked on a compelling principle: artificial intelligence systems would continue to improve if they were made larger. This wasn't just a hopeful notion. In 2017, researchers at Baidu, a Chinese tech giant, showed that increasing data and computing power in machine learning algorithms led to predictable improvements, whether the system was for image recognition, speech, or language generation. OpenAI recognized this trend in 2020, coining the term "scaling laws," which has since become a cornerstone of the industry.
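To make the idea concrete: a scaling law is a power-law fit, where test loss falls predictably as compute, data, or parameters grow, but each additional increment buys a smaller improvement than the last. Here is a minimal Python sketch of that shape; the constants are illustrative placeholders, not the coefficients from any published fit.

```python
# Illustrative sketch of a Kaplan-style scaling law (constants are made up).
def predicted_loss(compute_flops: float, loss_floor: float = 1.7,
                   c0: float = 1e8, alpha: float = 0.05) -> float:
    """Loss approaches an irreducible floor; the reducible part shrinks
    as (c0 / C)^alpha as compute C grows."""
    return loss_floor + (c0 / compute_flops) ** alpha

# Each 10x jump in compute buys a smaller (but predictable) loss reduction.
for c in (1e18, 1e19, 1e20, 1e21):
    print(f"compute={c:.0e} FLOPs -> predicted loss={predicted_loss(c):.3f}")
```

The diminishing-but-predictable gains this curve shows are exactly what made scaling a bet worth hundreds of millions of dollars, and what makes any flattening of the curve newsworthy.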
This belief led AI firms to invest hundreds of millions in expanding computing clusters and datasets. The investment paid off, turning rudimentary text generators into today's sophisticated chatbots.
However, recent reports from Reuters and Bloomberg indicate that leading AI companies are now seeing diminishing returns from scaling their systems. Days earlier, The Information reported that OpenAI's unreleased Orion model had fallen short of internal expectations, raising doubts about further advancements. The co-founders of Andreessen Horowitz, a prominent Silicon Valley venture capital firm, have likewise observed that adding computing power no longer yields the same "intelligence improvements."
Critics have declared the end of scaling before. "At every stage of scaling, there are always arguments," Anthropic CEO Dario Amodei said last week. "The latest one is, 'we're going to run out of data, or the data isn't high quality enough, or models can't reason.'" He remains optimistic that scaling will continue. Reflecting on OpenAI's early days on Y Combinator's podcast, CEO Sam Altman credited the company's success to a "religious level of belief" in scaling, a concept once considered "heretical." Responding to a recent post on X in which AI critic Gary Marcus claimed his predictions of diminishing returns were proving accurate, Altman asserted, "there is no wall."....
Read on Time
***
Jensen Huang says the 3 elements of AI scaling are all advancing
If the foundation models driving the rush toward generative AI stop improving, Nvidia could face significant challenges: the Silicon Valley chipmaker's value proposition hinges on relentless demand for ever more computing power. During Nvidia's third-quarter earnings call on Wednesday, CEO Jensen Huang addressed concerns about potential stagnation in AI progress and whether the company's Blackwell chips could reignite advancements.
"Foundation model pre-training scaling is intact and continuing," Huang assured. He explained that scaling isn't as limited as some might think. While it was once true that models improved primarily through more data and pre-training, AI now has the capability to generate synthetic data and self-validate its outputs. However, the availability of new data is dwindling, and the effectiveness of synthetic data for pre-training remains uncertain.
As the AI ecosystem evolves, tools for enhancing models are becoming increasingly important. Initially, post-training improvements relied on large teams of humans manually verifying AI responses. Huang pointed to OpenAI's o1 model (previously codenamed Strawberry), which employs strategies such as "chain of thought reasoning" and "multi-path planning." These methods push the model to work through a problem more deliberately, producing higher-quality responses. "The longer it thinks, the better and higher quality answer it produces," Huang noted.
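A rough way to picture "multi-path" reasoning is to sample several chain-of-thought completions for the same question and keep the answer they most often agree on (an approach often called self-consistency sampling). The sketch below assumes a generic `ask_model` callable; it is not OpenAI's o1 implementation, which has not been published.

```python
from collections import Counter
from typing import Callable

def multi_path_answer(ask_model: Callable[[str], str],
                      question: str, n_paths: int = 8) -> str:
    """Sample several reasoning paths and return the majority final answer."""
    prompt = (f"{question}\n"
              "Think step by step, then give a final answer after 'ANSWER:'.")
    finals = []
    for _ in range(n_paths):                     # more paths = more inference compute
        completion = ask_model(prompt)
        finals.append(completion.rsplit("ANSWER:", 1)[-1].strip())
    return Counter(finals).most_common(1)[0][0]  # majority vote across paths
```

The relevance to Huang's argument is that every extra reasoning path is extra inference-time compute, which feeds directly into demand for faster chips.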
Huang emphasized that pre-training, post-training improvements, and new reasoning strategies all contribute to model enhancement. However, more complex computations require more powerful hardware, especially since users expect rapid responses. This drives the demand for Nvidia's Blackwell chips, he concluded....