Luma AI’s Ray3: Lights, Camera, AI!
Startup Luma AI unveiled Ray3, an AI toolkit that brings Hollywood-level wizardry to your phone. Integrated with Adobe, Ray3 can generate HDR video (10-, 12-, and 16-bit color) and even turn boring SDR footage into vivid HDR. Its built-in reasoning engine lets creators sketch out camera movements or scene edits, and the AI dutifully follows multi-step instructions. It’s like having a tiny James Cameron in your pocket, minus the ego.
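Luma hasn’t published how its SDR-to-HDR conversion works, but if you’re wondering what extra bit depth even buys you, here’s a purely illustrative sketch (NumPy, a toy gamma-expansion curve, definitely not Luma’s pipeline) that stretches 8-bit SDR pixels into a 16-bit range:

```python
import numpy as np

def sdr_to_hdr16(frame_8bit: np.ndarray, expansion_gamma: float = 2.4) -> np.ndarray:
    """Naively expand an 8-bit SDR frame into 16-bit values.

    This is only a toy inverse-tone-mapping curve: normalize to [0, 1],
    apply a power curve to stretch highlight contrast, then rescale to
    16-bit. Real SDR-to-HDR conversion also handles color gamut and
    transfer functions (PQ/HLG), which are omitted here.
    """
    normalized = frame_8bit.astype(np.float32) / 255.0
    expanded = normalized ** expansion_gamma            # darken midtones relative to highlights
    return (expanded * 65535.0).round().astype(np.uint16)

# Example: a tiny 2x2 "frame" with dark, mid, and bright pixels
frame = np.array([[16, 128], [200, 255]], dtype=np.uint8)
print(sdr_to_hdr16(frame))
```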
Meta’s smart glasses
At Meta’s Connect 2025 event, Zuck & Co. pivoted from metaverse musings to real hardware. The glasses stumbled during the live demo thanks to a race condition bug, but the future isn’t as bleak as that sounds. The Ray-Ban Meta smart glasses are priced at $799 and rock a microLED display that projects messages, maps, and more right into your field of view. You control these AR specs with a neural wristband (bye-bye, clunky controllers). It’s the closest thing yet to wearing Tony Stark’s tech.
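For the curious: a “race condition” is the classic bug where two threads touch shared state in an unlucky order. The toy Python below (a generic illustration, nothing to do with Meta’s actual demo code) shows the usual shape of it: an unsynchronized read-modify-write that can lose updates.

```python
import threading

counter = 0  # shared state both threads mutate

def bump(times: int) -> None:
    global counter
    for _ in range(times):
        current = counter       # read
        # another thread can update `counter` right here...
        counter = current + 1   # ...so this write clobbers its update

threads = [threading.Thread(target=bump, args=(100_000,)) for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(counter)  # can come out well below the expected 200000
```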
OpenAI’s Manhattan project for AI
Sam Altman is going big. OpenAI is teaming up with Oracle, SoftBank, and Nvidia to build out an AI super-infrastructure that makes current data centers look like Lego blocks. They’re planning five new U.S. data centers (together drawing roughly as much power as seven nuclear reactors!) and exploring a bold new “GPU leasing” deal worth $100B with Nvidia. In short, OpenAI wants endless computing power on tap, betting that in the AI race, bigger is better (and necessary).
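That “seven reactors” line is easier to feel with back-of-envelope math. Assuming a typical large reactor puts out roughly 1 GW (an illustrative figure, not an OpenAI disclosure), the sketch below works out the combined draw and the average per site:

```python
# Back-of-envelope: what "seven reactors' worth" of data-center power looks like.
# All numbers are rough assumptions for illustration only.
reactor_output_gw = 1.0            # a typical large reactor is on the order of 1 GW electrical
total_power_gw = 7 * reactor_output_gw
new_sites = 5
avg_per_site_gw = total_power_gw / new_sites

print(f"Combined draw: ~{total_power_gw:.0f} GW")
print(f"Average per new site: ~{avg_per_site_gw:.1f} GW")
```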
Oracle bets billions on cloud AI
Larry Ellison must be feeling lucky. Why? Well, Oracle is reportedly close to a $20 billion deal with Meta to host and train Meta’s AI models. This comes right after Oracle’s whopping $300B contract with OpenAI and a new partnership with Elon Musk’s xAI. The strategy? Offer faster, cheaper cloud infrastructure to undercut Amazon and Microsoft. If this pays off, Oracle’s cloud might go from underdog to top dog in the AI era. Bold move, Larry.
Musk’s xAI drops a game-changer
Elon Musk’s new AI venture, xAI, just launched a model called Grok 4 Fast that claims GPT-5 level smarts at a fraction of the cost. We’re talking near top-tier reasoning benchmarks with 98% lower token costs. It achieves this by cutting out “thinking overhead” and streamlining how it chews through data. Translation: powerful AI answers, cheap enough to deploy en masse. It’s Musk’s way of saying “Competition, bring it on.”
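What does a 98% price cut mean once you’re serving real traffic? The quick comparison below uses a made-up baseline price per million tokens and a made-up monthly volume (neither is xAI’s or anyone’s published pricing), just to show how the bill scales:

```python
# Hypothetical comparison: what "98% lower token costs" means at scale.
baseline_price_per_1m_tokens = 10.00   # placeholder USD figure, not real pricing
discount = 0.98
cheap_price_per_1m_tokens = baseline_price_per_1m_tokens * (1 - discount)

monthly_tokens = 50_000_000_000        # placeholder: a large consumer app's monthly volume
baseline_bill = monthly_tokens / 1_000_000 * baseline_price_per_1m_tokens
cheap_bill = monthly_tokens / 1_000_000 * cheap_price_per_1m_tokens

print(f"Baseline: ${baseline_bill:,.0f}/month vs. discounted: ${cheap_bill:,.0f}/month")
```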
Brain implants: Neuralink’s next step
According to a Bloomberg report, Elon Musk’s neurotech venture Neuralink is gearing up for a new round of human trials this October after getting the FDA’s nod. The company’s implantable chip can translate thoughts to text, initially aimed at helping paralyzed patients communicate. Long term, Musk envisions people using thoughts to control computers and even converse with AI (because typing is so 2020, right?). It’s equal parts exciting and sci-fi-level eerie.
Alibaba’s model mega-mix
Not to be outdone, Alibaba unveiled its Qwen3 AI stack with a twist: Mixture-of-Experts (MoE) models at trillion-parameter scale. The system can tap into 512 experts but activates just a handful per query for super efficiency. End result? Over 10× throughput improvement and support for ridiculously long context (think entire novels in one prompt). Two 80B-parameter models lead the charge: one tuned for chatty assistants, another for complex reasoning. In the AI model arms race, Alibaba just loudly entered the chat.
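That “activate just a handful per query” trick is top-k routing. Here’s a bare-bones NumPy sketch of the idea with toy sizes (512 experts, 8 active); this is the general MoE pattern, not Qwen3’s actual implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

NUM_EXPERTS, TOP_K, DIM = 512, 8, 64               # toy sizes; Qwen3's real config differs

router_weights = rng.normal(size=(DIM, NUM_EXPERTS))
experts = rng.normal(size=(NUM_EXPERTS, DIM, DIM)) * 0.01  # one tiny FFN per expert

def moe_layer(token: np.ndarray) -> np.ndarray:
    """Route one token through only TOP_K of NUM_EXPERTS experts."""
    logits = token @ router_weights                           # score every expert
    top = np.argsort(logits)[-TOP_K:]                         # keep the best-scoring few
    gates = np.exp(logits[top]) / np.exp(logits[top]).sum()   # softmax over chosen experts
    # Only TOP_K experts actually run, so compute scales with TOP_K, not NUM_EXPERTS.
    return sum(g * (token @ experts[i]) for g, i in zip(gates, top))

out = moe_layer(rng.normal(size=DIM))
print(out.shape)  # (64,)
```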
Microsoft’s developer boost (and cool chips)
Redmond had a productive week too. Microsoft is hunting down pesky legacy code with new Copilot-powered agents that not only find problems in old .NET/Java code but also auto-generate fixes, write unit tests, and containerize apps. Early trials showed dramatic wins: an Xbox team cut migration effort by 88%, and Ford saw a 70% reduction in update time. On another front, Windows 11 now ships with built-in support for running AI models (via the ONNX runtime) across CPUs, GPUs, and specialized NPUs from various vendors. And about “cooling chips from the inside out”? Microsoft researchers are exploring liquid cooling inside chips to solve overheating as AI silicon gets (literally) hotter. The future: faster chips that keep their chill.
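If you want to poke at the ONNX side yourself, the cross-hardware story comes down to execution providers. Below is a minimal sketch with the standard onnxruntime Python package; the model path, input shape, and provider preferences are placeholders, and Windows’ built-in integration may surface this differently:

```python
import numpy as np
import onnxruntime as ort

# Prefer hardware-specific execution providers when present, fall back to CPU.
# DmlExecutionProvider (DirectML) and CUDAExecutionProvider only show up if the
# matching onnxruntime build and drivers are installed.
preferred = ("DmlExecutionProvider", "CUDAExecutionProvider", "CPUExecutionProvider")
providers = [p for p in preferred if p in ort.get_available_providers()]

session = ort.InferenceSession("model.onnx", providers=providers)  # placeholder model path

input_name = session.get_inputs()[0].name
dummy = np.zeros((1, 3, 224, 224), dtype=np.float32)  # placeholder shape for an image model
outputs = session.run(None, {input_name: dummy})
print([out.shape for out in outputs])
```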