IBM: the enterprise AI
OpenAI o1: the thinking AI
Google's steps into reasoning AI
Transform Holiday Customer Support with AI
"It's 3 AM on Christmas Eve. Support queues exploding. Customers need answers about last-minute gifts, travel changes, and urgent returns. And your morning team starts in 4 hours..."
Today's customers expect instant answers, but scaling support teams isn't always the answer. Leading retailers and travel companies are taking a different approach - deploying Freddy AI Agents that:
Handle 85% of routine inquiries automatically
Convert midnight product questions into morning sales
Turn returns into repeat customers
Manage urgent travel changes in minutes
Work across all channels - web, mobile, social
And the best part? They deploy in minutes, not months.
Get the free guide that's helping businesses deliver exceptional 24/7 customer service without breaking their budget or burning out their teams.
Get the Free Holiday Support Guide
Powered by Freddy AI Agent | Freshworks
Real Results: "It was truly a game-changer for the team" - Simon Birch, Customer Service Manager, Hobbycraft
***
IBM: the enterprise AI
IBM is asserting its dominance in the open-source AI arena with the launch of its Granite 3.1 series. This update focuses on enhancing the capabilities of smaller models, which are more manageable and cost-effective for enterprises. "We've boosted all the numbers — the performance of pretty much everything across the board has improved," said David Cox, VP for AI models at IBM Research.
Cox highlighted that IBM's models are optimized for enterprise use cases, where performance is measured not just by speed but by efficiency. One key aspect of this efficiency is reducing the time users spend to achieve desired results. "You should spend less time fiddling with prompts," Cox noted. "The stronger a model is in an area, the less time you have to spend engineering prompts." Additionally, smaller models require less compute and fewer GPU resources, lowering operational costs.
Agentic AI systems, which often need to process and reason over longer sequences of information, benefit from the 128k context length in Granite 3.1. This extended context allows these systems to better understand and respond to complex queries or tasks. The performance improvements in Granite 3.1 are the result of several process and technical innovations. Rather than merely increasing the quantity of training data, IBM has focused on enhancing the quality of the data used to train the models, Cox explained…
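If you want a feel for what that 128k window means in practice, here is a minimal sketch using the Hugging Face transformers library; the model id and the input file are assumptions for illustration, not details from IBM's announcement.

```python
# Minimal sketch: loading a Granite 3.1 model and checking how much of its
# ~128k-token context a long document consumes. The model id below is an
# assumption (Granite releases are typically published on the Hugging Face Hub).
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "ibm-granite/granite-3.1-8b-instruct"  # assumed id, verify on the Hub

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(MODEL_ID, device_map="auto")

# A long document that an agentic workflow might need to reason over.
with open("quarterly_report.txt") as f:  # hypothetical input file
    document = f.read()

prompt = f"Summarize the key risks mentioned in this report:\n\n{document}"
tokens = tokenizer(prompt, return_tensors="pt").to(model.device)
print(f"Prompt uses {tokens['input_ids'].shape[1]} of ~131,072 context tokens")

output = model.generate(**tokens, max_new_tokens=256)
print(tokenizer.decode(output[0][tokens["input_ids"].shape[1]:], skip_special_tokens=True))
```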
***
OpenAI o1: the thinking AI
OpenAI's latest model, o1, sidesteps common reasoning pitfalls that typically challenge generative AI by effectively fact-checking itself. This is achieved by allowing the model more time to consider all aspects of a question, according to OpenAI. What sets o1 apart from other generative AI models is its ability to "think" before responding, enhancing its qualitative performance.
When given additional time, o1 can approach tasks holistically, planning ahead and executing a series of actions to arrive at an answer. This capability makes o1 particularly adept at complex tasks that require synthesizing multiple subtasks, such as identifying privileged emails in an attorney's inbox or brainstorming a product marketing strategy.
Noam Brown, a research scientist at OpenAI, explained that o1 is trained with reinforcement learning. This training method encourages the model to "think" before responding by rewarding correct answers and penalizing incorrect ones. Brown highlighted that OpenAI used a new optimization algorithm and a specialized training dataset containing reasoning data and scientific literature tailored for reasoning tasks. "The longer [o1] thinks, the better it does," he noted.
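To see what the "more thinking time" trade-off looks like from the application side, here is a minimal sketch that calls an o1-class model through the OpenAI Python SDK; it illustrates the usage pattern only and says nothing about how the model was actually trained.

```python
# Minimal sketch: asking an o1-class reasoning model to plan through a
# multi-step task before answering. The model name "o1-preview" follows
# OpenAI's public API naming; adjust to whatever reasoning model you can access.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="o1-preview",
    messages=[
        {
            "role": "user",
            "content": (
                "An attorney's inbox contains 200 emails. Outline, step by step, "
                "how you would identify which ones are likely privileged, then "
                "give the three highest-signal criteria."
            ),
        }
    ],
    # o1-style models spend hidden reasoning tokens out of this budget.
    max_completion_tokens=2048,
)
print(response.choices[0].message.content)
```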
Pablo Arredondo, VP at Thomson Reuters, shared his insights. According to Arredondo, o1 outperforms OpenAI's previous models, such as GPT-4o, in tasks like analyzing legal briefs and solving LSAT logic games. "We saw it tackling more substantive, multi-faceted analysis," Arredondo told TechCrunch. "Our automated testing also showed gains across a wide range of simple tasks."…
***
Google's steps into reasoning AI
Google has unveiled a new experimental AI model, Gemini 2.0 Flash Thinking Experimental, which is designed for advanced reasoning tasks. According to its model card, it excels in multimodal understanding, reasoning, and coding, and can tackle complex problems in fields like programming, math, and physics.
Logan Kilpatrick, who leads product for AI Studio, described Gemini 2.0 Flash Thinking Experimental as "the first step in [Google's] reasoning journey." Jeff Dean, chief scientist for Google DeepMind, noted that the model is "trained to use thoughts to strengthen its reasoning."
Dean highlighted that increasing inference time computation — the amount of computing power used to run the model as it processes a question — yields promising results. This approach allows the model to pause, consider related prompts, and explain its reasoning before summarizing the most accurate answer.
Built on the recently announced Gemini 2.0 Flash model, Gemini 2.0 Flash Thinking Experimental is similar to other reasoning models like OpenAI's o1. These models can effectively fact-check themselves, avoiding common pitfalls of AI, though they typically take longer to generate solutions…
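For a concrete starting point, the sketch below sends a reasoning-heavy prompt to the experimental model through the google-generativeai Python SDK; the model id string is an assumption based on the announced name and may require access through AI Studio.

```python
# Minimal sketch: sending a reasoning-heavy prompt to the experimental
# "thinking" model via the google-generativeai SDK. The model id below is an
# assumption based on the announced name; check AI Studio for the exact string.
import os

import google.generativeai as genai

genai.configure(api_key=os.environ["GOOGLE_API_KEY"])

model = genai.GenerativeModel("gemini-2.0-flash-thinking-exp")  # assumed id
response = model.generate_content(
    "A train leaves at 9:40 and travels 210 km at 84 km/h, stopping twice for "
    "7 minutes each. When does it arrive? Show your reasoning before the final answer."
)
print(response.text)
```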
Cheers! SBalley Team