OpenAI co-founder Ilya Sutskever’s SSI valued at $32bn
The secrets behind Sam Altman’s firing from OpenAI
Facebook’s AI research lab ‘dying or a new beginning’
AI TRAINING PRESENTED BY BLOTATO
Making viral posts 10x faster with AI
The Rundown: Blotato is an all-in-one AI content tool that helps you create, remix, and distribute AI-optimized content across multiple platforms, growing your brand presence 10x faster.
Step-by-step:
1. Sign up for a free account at blotato.com
2. Import content from any source (e.g., YouTube, TikTok, PDF, podcast), or type in your own topic.
3. Select your target social platforms to generate viral-optimized posts tailored to each one.
4. Customize the drafts with your insights, add AI images or faceless videos, and then publish to all social platforms seamlessly!
Pro-tip:
You can schedule your content calendar in advance to maintain consistent posting across all platforms while saving hours of manual work each week.
Get Blotato. It’s great for business!
***
OpenAI co-founder Ilya Sutskever’s SSI valued at $32bn
OpenAI co-founder Ilya Sutskever has raised $2bn for his artificial intelligence start-up in a deal that values the year-old company at $32bn, even though it currently has no product. Sutskever, who left OpenAI last year after a failed coup against chief executive Sam Altman, launched Safe Superintelligence last June with Daniel Gross, who led Apple’s AI efforts, and Daniel Levy, an AI researcher. The funding round underscores investors’ keen appetite for bankrolling AI start-ups led by prominent researchers or talented engineers.
SSI has set out to create AI models that are dramatically more powerful and more intelligent than current cutting-edge models from rivals such as OpenAI, Anthropic, and Google. The company has given few details about how it intends to beat those better-funded rivals, but Sutskever told the Financial Times last year that he and the team had “identified a new mountain to climb that’s a bit different from what I was working on previously”. The group has been tight-lipped even with its investors, multiple people familiar with the matter said. However, three people close to the company said it was working on unique ways of developing and scaling AI models, and that SSI is focused on surpassing human intelligence. Sutskever co-founded OpenAI and served as the San Francisco group’s chief scientist when it launched AI models and products, including the chatbot ChatGPT, which kicked off the recent AI investment boom.
Google and Nvidia have joined prominent venture capital investors to back Safe Superintelligence (SSI). It is increasingly common for major cloud providers to invest heavily in AI startups that not only build foundational models but also serve as significant customers of their infrastructure. For instance, Amazon and Google have both invested in Anthropic, while Microsoft has placed substantial bets on OpenAI. Nvidia has also backed OpenAI, as well as Elon Musk's xAI….
***
The secrets behind Sam Altman’s firing from OpenAI
Thiel had backed Altman’s first venture fund more than a decade before, and remained a mentor to the younger investor when Altman became the face of the artificial-intelligence revolution as the chief executive of OpenAI. OpenAI’s instantly viral launch of ChatGPT in November 2022 had propelled tech stocks to one of their best years in decades. Yet Thiel was worried. Years before he met Altman, Thiel had taken another AI-obsessed prodigy named Eliezer Yudkowsky under his wing, funding his institute, which pushed to make sure that any AI smarter than humans would be friendly to its maker. That March, Yudkowsky had argued in Time magazine that unless the current wave of AI research was halted, “literally everyone on Earth will die.”
“You don’t understand how Eliezer has programmed half the people in your company to believe in that stuff,” Thiel warned Altman. “You need to take this more seriously.” Altman picked at his vegetarian dish and tried not to roll his eyes. This was not the first dinner where Thiel had warned him that the company had been taken over by “the EAs,” by which he meant people who subscribed to effective altruism. EA had lately pivoted from trying to end global poverty to trying to prevent runaway AI from murdering humanity. Thiel had repeatedly predicted that “the AI safety people” would “destroy” OpenAI. “Well, it was kind of true of Elon, but we got rid of Elon,” Altman responded at the dinner, referring to the messy 2018 split with his co-founder, Elon Musk, who once referred to the attempt to create artificial intelligence as “summoning the demon.”
Nearly 800 OpenAI employees had been riding a rocket ship and were about to have the chance to buy beachfront second homes with the imminent close of a tender offer valuing the company at $86 billion. There was no need to panic. Altman, at 38 years old, was wrapping up the best year of a charmed career, a year in which he became a household name, met with presidents and prime ministers around the world, and—most important within the value system of Silicon Valley—delivered a new technology that seemed like it was very possibly going to change everything. But as the two investing partners celebrated beneath the exposed rafters of L.A.’s hottest new restaurant, four members of OpenAI’s six-person board, including two with direct ties to the EA community, were holding secret video meetings. And they were deciding whether they should fire Sam Altman—though not because of EA….
***
Facebook’s AI research lab ‘dying or a new beginning’
FAIR, an acronym for Fundamental AI Research, was once the crown jewel of AI development at Meta (formerly Facebook). But as Mark Zuckerberg has pivoted the company towards generative AI products over the past two years, the vaunted lab has become something of an orphan inside the organization, increasingly shoved out of the limelight by more commercially focused AI groups within the company. The newest Llama model, for instance, was the product of Meta’s separate GenAI team, not FAIR. Meanwhile, FAIR has languished, with talented researchers departing for rival companies and startups: more than half of the 14 authors of the original Llama research paper published in February 2023 had left the company six months later, and at least eight top researchers have left over the past year.
Yann LeCun, Meta’s chief scientist, who is considered one of the “godfathers” of deep learning and who founded FAIR, is now leading the team. “It’s more like a new beginning in which FAIR is refocusing on the ambitious and long-term goal of what we call AMI (Advanced Machine Intelligence),” LeCun said. The 64-year-old French-born computer scientist has long argued that the term artificial general intelligence (AGI) is misleading and coined the term AMI, which he says is about helping machines understand the world, reason, plan, and learn as efficiently as animals and humans. He has also gone on record as skeptical that current LLM-based approaches to AI will ever reach human-level intelligence.
When OpenAI’s ChatGPT launched in late November 2022, Meta was perceived as having fallen far behind OpenAI, Anthropic and Google in generative AI. But FAIR helped get Meta back in the game by developing the first freely-available generative AI model that could rival the models being offered by those other companies. It was called Llama—a play on the acronym for large language models, or LLMs, the type of AI that powered innovations like ChatGPT—and it took the AI world by storm. Since the consolidation of FAIR into the product organization, ex-employees say Meta has steadily deprioritized the kind of open-ended, exploratory research that FAIR was known for, shifting resources instead toward product-driven initiatives under GenAI.
He sees FAIR, along with the AI research labs at other companies like Microsoft and Google, becoming less supportive of an academic mindset. “This is happening industry-wide,” he said. “More and more people are being forced to move into generative AI.”
William Falcon, founder and CEO of Lightning AI, did some of his PhD research while at FAIR in 2019. “You could do what you do as a professor with a lot more compute and [were] paid a lot of money,” he said. However, he also said that was essentially a “honeymoon period” and that it’s a “natural evolution” to get back to product building.
Erik Meijer, a former Meta engineer and researcher whose team was laid off in 2022, said that he was “never a fan” of research divisions inside companies like Meta. “If companies want to do fundamental research, they should give money with no strings attached to universities,” he said. “Industrial labs should work closely with the product teams to create a pipeline of future innovations via a tight feedback loop between production and research.”
“It’s a cycle,” Meijer said of the current product focus across the AI industry, which he acknowledged could be frustrating for researchers. “It’s time for exploitation, but exploration will come back very soon; that’s why these companies keep their blue sky team,” he said. “As Yann says, it’s probably the perfect timing to start fresh, so they are prepared to be on top of the next wave of innovations.”….