OpenAI declares Code Red
Inside Silicon Valley’s ‘soup wars’
To advance AI: ‘It’s back to the age of research again’
Create a Branded Coaching Ecosystem - All In One Platform
Join 7-figure business coach and former IBM marketing executive, Julie Ciardi, for a free workshop on how she scaled her coaching business to $3M by prioritizing a premium client experience on Kajabi.
Julie will show how combining courses, coaching and community into one branded hub creates a cohesive experience that boosts engagement and drives long-term revenue, all powered by Kajabi.
***
OpenAI declares Code Red
OpenAI CEO Sam Altman declared a “code red” on Monday—around 3 years after ChatGPT was released to the public—urging staff to prioritize improving ChatGPT as competitors like Google and Anthropic close the gap. The startup once dominated the AI market but now faces growing pressure to maintain its position.
In an internal announcement reviewed by The Information, Altman said OpenAI will delay several initiatives to focus on improving ChatGPT. The company is pausing work on advertising, shopping agents, health agents, and a personal assistant called Pulse. This reallocation of resources means teams will concentrate on core improvements such as speed, reliability, enhanced personalization, and the ability to answer more questions accurately.
The company will now hold daily calls with teams working on ChatGPT, and Altman is encouraging temporary team transfers to accelerate development. Google is the main concern: the search giant’s AI user base is growing steadily, and its latest model, Gemini 3, has outperformed competitors on many industry benchmarks and popular metrics. OpenAI reportedly called an internal “code orange” in October as competition increased; this new “code red” marks a move to its highest urgency level. It comes after Google declared its own “code red” when ChatGPT first arrived, a full-circle moment in the AI race....
***
Inside Silicon Valley’s ‘soup wars’
In the high-stakes arms race between Meta and OpenAI for AI dominance, the weapon of choice has evolved. First, it was unlimited compute, then $100 million signing bonuses. Now, the battle has entered a new, bizarrely intimate phase: soup wars.
Mark Chen, chief research officer at OpenAI, said on tech podcaster Ashlee Vance’s show that the recruitment war has shifted. According to Chen, Meta has aggressively pursued half of his direct reports—backed by a $10 billion war chest for talent—but CEO Mark Zuckerberg has added a personal touch to the poaching attempts. Zuckerberg, Chen said, has personally “hand-cooked” and “hand-delivered” soup to researchers he wanted to recruit away from OpenAI. And it wasn’t a joke, the executive insisted.
“It was shocking to me at the time,” Chen admitted. But in Silicon Valley, if the enemy brings broth, you must respond in kind. Chen confessed he has now adopted the tactic himself, delivering soup to the researchers he hopes to poach from Meta. However, he draws the line at manual labor. “No, no, no … It’s better if you get, like, Michelin-star soup,” Chen said.
The cozy theatrics disguise a harsher reality: The pool of people who can design and train cutting-edge large language models is microscopic. Industry insiders estimate there are fewer than 1,000 researchers globally with the expertise to push the frontier on their own. The soup tactic is somewhat reminiscent of earlier tech talent wars, when Google and Facebook tried to outbid each other with free sushi, in-house baristas, and on-campus gyms....
***
To advance AI: ‘It’s back to the age of research again’
OpenAI cofounder Ilya Sutskever believes the AI industry will have to shift back into a research phase. On an episode of the “Dwarkesh Podcast” published Tuesday, Sutskever, who is widely seen as a pioneer of modern artificial intelligence, challenged the conventional wisdom that scaling is the key path to AI progress.
That wisdom holds that the more compute or training data you have, the smarter your AI will be. Sutskever said in the interview that, for around the past half-decade, this “recipe” has produced impactful results. The approach also appeals to companies because it offers a simple and “very low-risk way” of investing resources, compared with pouring money into research that could lead nowhere. However, Sutskever, who now runs Safe Superintelligence Inc., believes that method is running out of runway; data is finite, and organizations already have access to a massive amount of compute, he said.
“Is the belief really: ‘Oh, it’s so big, but if you had 100x more, everything would be so different?’ It would be different, for sure. But is the belief that if you just 100x the scale, everything would be transformed? I don’t think that’s true,” Sutskever said. “So it’s back to the age of research again, just with big computers.”
One area that will require more research, according to Sutskever, is getting models to generalize — essentially learn using small amounts of information or examples — as well as humans do. “The thing, which I think is the most fundamental, is that these models somehow just generalize dramatically worse than people,” he said. “It’s super obvious. That seems like a very fundamental thing.”….

