Here is the news of the week.
MiniMax Releases Groundbreaking M1 AI Model with a 1-Million-Token Context Window
Shanghai-based MiniMax has launched MiniMax-M1, billed as the first open-weight, hybrid-attention reasoning model supporting context windows of up to 1 million tokens, built on a Mixture-of-Experts (MoE) architecture with lightning attention. MiniMax claims that M1, trained with its new CISPO reinforcement-learning algorithm, matches or exceeds rivals such as DeepSeek-R1 on reasoning, coding, and long-context benchmarks.
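Because the weights are released openly, the quickest way to experiment is through the standard Hugging Face stack. The sketch below is illustrative only: the repo ID, chat-template usage, and hardware setup are assumptions, and the full MoE checkpoint is far too large for a single GPU, so production deployments would use a multi-GPU serving engine such as vLLM.

```python
# Minimal sketch of querying an open-weight MiniMax-M1 checkpoint with
# Hugging Face transformers. The repo ID below is an assumption, not an
# official instruction from MiniMax.
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "MiniMaxAI/MiniMax-M1-80k"  # assumed Hugging Face repo ID

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    MODEL_ID,
    trust_remote_code=True,  # load the model's custom lightning-attention/MoE code
    device_map="auto",       # shard the checkpoint across available GPUs
    torch_dtype="auto",
)

messages = [{"role": "user", "content": "Explain lightning attention in two sentences."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(input_ids, max_new_tokens=256)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```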
Baidu Unveils AI Avatar in E-commerce Livestream
Luo Yonghao’s AI-powered avatar debuted on Baidu’s e-commerce livestream, where two synchronized digital hosts powered by Baidu’s ERNIE foundation model interacted with each other, chatted with viewers, and presented 133 products over six hours. The broadcast drew more than 13 million viewers, underscoring how quickly AI-generated hosts are moving into China’s livestream commerce.
Google Introduces Search Live and Expands Gemini 2.5
Google has enhanced its search experience with Search Live in AI Mode, offering real-time voice interactions with multimodal responses directly within the Google app.
Additionally, Google expanded its Gemini 2.5 family with Gemini 2.5 Flash-Lite, a lightweight model designed for fast, cost-efficient tasks such as translation and summarization. Google also previewed Deep Think, an enhanced reasoning mode for Gemini 2.5 Pro that spends more inference-time compute on step-by-step reasoning, significantly boosting performance on coding, STEM, and multimodal benchmarks.
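For developers, Flash-Lite is reachable through the same Gemini API as the larger 2.5 models. Below is a minimal sketch of a summarization call using the google-genai Python SDK; the exact model identifier is an assumption based on Google's naming convention and should be checked against the current model list.

```python
# Minimal sketch of a summarization call to Gemini 2.5 Flash-Lite with the
# google-genai Python SDK. The model string is an assumption, not a confirmed
# identifier.
from google import genai

client = genai.Client(api_key="YOUR_API_KEY")

article = (
    "MiniMax released M1, an open-weight reasoning model with a "
    "1-million-token context window built on lightning attention and MoE."
)

response = client.models.generate_content(
    model="gemini-2.5-flash-lite",  # assumed model identifier
    contents=f"Summarize this in one sentence:\n{article}",
)
print(response.text)
```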