
AI Distilled

LLM Expert Insights, Packt
18 Jul 2025
13 min read

Trillion-Param Models, Meta’s Megaclusters, Pentagon Deals and more

From Agents Shopping to Chain-of-Thought Crisis

AI_Distilled #104: What’s New in AI This Week

The #1 Newsletter to Master AI Agents — Human in the Loop
Human in the Loop is your weekly newsletter, read by 12,000+ professionals. It breaks down the latest news on AI agents, real-world use cases, and enterprise adoption. 100% free. 100% insight.
→ Join 12,000+ AI professionals & stay ahead of the curve. JOIN NOW

This past week, the AI world accelerated at full speed. The tech giants struck major government deals, unveiled powerful new models, and revealed bold infrastructure plans, while researchers joined forces to steer the future of intelligent machines. It’s a moment of both high-stakes ambition and fast-paced collaboration, and the momentum isn’t slowing down. Let’s unpack what happened.

In today's issue:
🧠 Expert Deep Dive: Walk through LLM model pruning—magnitude-based, structured, iterative, and post-training techniques—to build leaner, faster agents.
🔧 Function Calling, Simplified: Learn how to wire OpenAI’s function calling into your agent systems, enabling contextual action and real-world task execution.
🧱 Agent Protocols in Action: Hands-on workshops this July dive into MCP and A2A—from beginner-friendly coding to orchestration and registry infrastructure.
📊 Sigma's Next-Gen BI: Discover how Sigma is reshaping BI with collaborative data apps and a top debut on Gartner’s Magic Quadrant.
⚠️ Transparency Crisis Ahead?: OpenAI, Anthropic, and DeepMind warn that step-by-step AI reasoning ("chain-of-thought") may soon disappear in newer LLMs.
🛡 DoD’s $200M Agentic Push: Google, Anthropic, xAI, and OpenAI win Pentagon contracts to advance frontier AI capabilities for national security.
🛍 AWS Agent Store Launches: Amazon introduces an AI Agent Marketplace with Anthropic as launch partner—bringing agent deployment to the cloud masses.
⚡ Meta’s Supercluster Reveal: Zuckerberg unveils Prometheus and Hyperion—two giant-scale AI compute hubs powering Meta’s AI future.
🧩 GenAI Processors from DeepMind: A new open-source toolkit for building real-time, multimodal agent pipelines—no more glue code.
🐉 China’s Kimi K2 Hits 1 Trillion: Moonshot AI releases the world’s largest open-source LLM, rivaling GPT-4 and Claude with Mixture-of-Experts and MuonClip training.

📈 UPCOMING EVENTS

Next up, we bring you a hand-picked line-up of workshops and developer meetups on agent protocols like MCP and A2A.

Let’s Learn – MCP Events: A Beginner’s Guide to MCP
Date: July 9–21, 2025
Location: Virtual
Cost: Free
Focus: Introductory MCP coding (C#, Java, Python, TypeScript)

Conversational & Deep Research Analytics Agents: MCP, A2A & Knowledge Graph
Date: July 18, 2025
Location: Virtual
Cost: Free
Focus: Deep-research LLM agents using MCP and A2A

AI Agent Learning Series with Google – Episode 3: Hierarchical Agents & Orchestration
Date: July 1 – August 21, 2025
Location: Virtual
Cost: Free
Focus: Agent orchestration and hierarchical structures using MCP/A2A
Website: AICamp

MCP Developers Summit
Date: October 2, 2025
Location: In person – Venue TBA
Cost: TBA
Focus: MCP roadmap, security, observability, and agent registries
Website: MCPDevSummit.ai

Upskilling with the MCP and A2A protocols is your gateway to building AI agents. Don’t miss the chance to explore these events and get ahead.

DeepSeek is fast becoming the open-source LLM of choice for developers and engineers focused on speed, efficiency, and control. Join the "DeepSeek in Production" summit to see how experts are fine-tuning DeepSeek for real-world use cases, building agentic workflows, and deploying at scale. Seats are filling fast. Limited slots left. Book now at 50% off. SECURE YOUR SPOT NOW! Apply code DEEPSEEK50 at checkout to get 50% off.

EXPERT INSIGHTS

A step-by-step guide to model pruning for large language models.
A Quick Start Guide to Model Pruning for Large Language Models

As large language models (LLMs) continue to grow in size and complexity, optimizing their efficiency without compromising performance has become a primary challenge. One of the most effective methods to address this challenge is model pruning. Ken Huang presents an overview of model pruning techniques in his book LLM Design Patterns. So, let’s take a sneak peek.

Understanding Model Pruning

Model pruning involves systematically removing parameters from a neural network that contribute the least to its output. These are often weights with the smallest magnitude, low sensitivity, or minimal gradient impact. The primary goal is to reduce model size and computational demands while retaining acceptable accuracy. Here are some of the techniques you could try. You can use PyTorch (version 1.7.0 or later) to experiment with these examples.

Magnitude-Based Pruning

The most straightforward technique is magnitude-based pruning, where weights with the lowest absolute values are removed. This method assumes that smaller weights have less impact on the model's predictions. By pruning these, models are made more compact and faster.

```python
import torch
import torch.nn.utils.prune as prune

# Assume model is a pre-trained LLM
model = ...

# Prune 30% of the lowest-magnitude weights in Linear layers
for name, module in model.named_modules():
    if isinstance(module, torch.nn.Linear):
        prune.l1_unstructured(module, name='weight', amount=0.3)
        prune.remove(module, 'weight')  # make the pruning permanent
```

Structured vs. Unstructured Pruning

Two pruning paradigms are typically used:
- Unstructured pruning removes individual weights, resulting in sparse matrices that may be harder to optimize on standard hardware.
- Structured pruning removes entire neurons, filters, or channels, making the pruned model more compatible with conventional hardware and often yielding better speedups.
Structured pruning is more hardware-friendly but may lead to a larger drop in accuracy.
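One quick way to see the sparse matrices that unstructured pruning produces is to count the zeroed weights afterwards. The sketch below uses a tiny stand-in model (not from the book) so it runs on its own; the same loop works on any module with Linear layers:

```python
import torch
import torch.nn.utils.prune as prune

# Tiny stand-in model for illustration; a real LLM works the same way
model = torch.nn.Sequential(torch.nn.Linear(64, 32), torch.nn.Linear(32, 8))

for module in model.modules():
    if isinstance(module, torch.nn.Linear):
        prune.l1_unstructured(module, name='weight', amount=0.3)
        prune.remove(module, 'weight')

# Fraction of weights that are now exactly zero, per layer and overall
total, zeros = 0, 0
for name, module in model.named_modules():
    if isinstance(module, torch.nn.Linear):
        w = module.weight
        print(f"{name}: {(w == 0).float().mean().item():.0%} zero")
        total += w.numel()
        zeros += int((w == 0).sum())
print(f"overall: {zeros / total:.0%} zero")
```

Each layer should report roughly 30% zeros, matching the `amount` passed to `l1_unstructured` — but note the zeros are scattered through the matrix, which is exactly why standard dense kernels see little speedup from them.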
```python
# Structured pruning: remove entire neurons (rows), ranked by L2 norm
for name, module in model.named_modules():
    if isinstance(module, torch.nn.Linear):
        prune.ln_structured(module, name='weight', amount=0.3, n=2, dim=0)
```

Iterative Pruning Techniques

Rather than pruning large portions of the model in a single step, iterative pruning removes small fractions across multiple training cycles. This gradual reduction enables the model to adapt to the reduced capacity, thus minimizing accuracy degradation.

```python
for epoch in range(1, num_epochs + 1):
    train(model, train_loader, optimizer)
    if epoch % 10 == 0:
        for name, module in model.named_modules():
            if isinstance(module, torch.nn.Linear):
                prune.l1_unstructured(module, name='weight', amount=0.1)
                prune.remove(module, 'weight')
    validate(model, val_loader)
```

Pruning During Training vs. Post-Training

A key decision in pruning strategy is timing:
- Pruning during training integrates pruning steps throughout the training process.
- Post-training pruning applies pruning after the model is fully trained.

```python
# Pruning during training, every 5 epochs
for epoch in range(1, 21):
    train(model, train_loader, optimizer)
    if epoch % 5 == 0:
        for name, module in model.named_modules():
            if isinstance(module, torch.nn.Linear):
                prune.l1_unstructured(module, name='weight', amount=0.2)
                prune.remove(module, 'weight')
```

Balancing Pruning with Performance

The art of pruning lies in striking the right balance. Excessive pruning can harm accuracy, while minimal pruning might offer negligible gains. Fine-tuning with lower learning rates post-pruning is commonly employed to recover lost performance.

```python
# Fine-tune the pruned model at a low learning rate
optimizer = torch.optim.Adam(model.parameters(), lr=1e-5)
for epoch in range(5):
    train(model, train_loader, optimizer)
    validate(model, val_loader)
```

Combining Pruning with Other Techniques

For enhanced efficiency, pruning is often paired with other compression methods:
- Quantization: After pruning, dynamic quantization can be applied to further reduce model size.
```python
import torch.quantization as quant

quantized_model = quant.quantize_dynamic(
    model, {torch.nn.Linear}, dtype=torch.qint8
)
```

- Knowledge Distillation: A smaller, pruned student model is trained to replicate the behavior of a larger teacher model.

```python
def distillation_loss(student_outputs, teacher_outputs, temperature):
    # KLDivLoss expects log-probabilities for the student
    return torch.nn.KLDivLoss(reduction='batchmean')(
        torch.nn.functional.log_softmax(student_outputs / temperature, dim=-1),
        torch.nn.functional.softmax(teacher_outputs / temperature, dim=-1),
    )

# Training loop for the student model
for batch in train_loader:
    inputs, _ = batch
    with torch.no_grad():
        teacher_outputs = teacher_model(inputs)
    student_outputs = student_model(inputs)
    loss = distillation_loss(student_outputs, teacher_outputs, temperature=2.0)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```

Model pruning offers a robust path to optimizing LLMs, with techniques ranging from basic magnitude-based methods to advanced combinations with distillation and quantization. Each method presents trade-offs between performance, complexity, and hardware compatibility. For practitioners looking to design efficient LLMs, pruning provides a versatile toolkit that can be tailored to specific constraints.

Liked the Insights? Want to dig in deeper?

A Practical Guide to Building Robust and Efficient AI Systems
- Learn comprehensive LLM development, including data prep, training pipelines, and optimization
- Explore advanced prompting techniques, such as chain-of-thought, tree-of-thought, RAG, and AI agents
- Implement evaluation metrics, interpretability, and bias detection for fair, reliable models
BUY NOW

📈 LATEST DEVELOPMENT

Here is the news of the week.

AI labs unite to warn of lost "chain-of-thought" visibility

In an unusual collaboration, over 40 researchers from OpenAI, Google DeepMind, Anthropic, and other top AI labs published a joint warning about the fragility of AI chain-of-thought transparency. Modern advanced LLMs have shown an ability to think out loud by producing step-by-step reasoning in plain English before final answers.
This interim reasoning can reveal a model’s true intentions or potential mistakes, offering a valuable chance to monitor and intervene if the AI is heading down a harmful path. However, the researchers caution that as AI models evolve, this transparency could disappear; future models might learn to perform reasoning internally or in indecipherable ways, closing a critical window for safety oversight. The paper urges AI developers to prioritize methods for evaluating and preserving chain-of-thought visibility, calling it a brief and fragile opportunity to align AI behavior before more opaque systems arrive. Read the paper.

Pentagon taps Google, Anthropic, xAI, and OpenAI in $200M AI push

Reuters reports that the U.S. Department of Defense (DoD) awarded contracts (with a $200 million ceiling each) to Anthropic, Google, OpenAI, and Elon Musk’s xAI to accelerate "frontier" AI capabilities for national security. These partnerships will help the DoD develop agentic AI workflows across a range of missions. The Pentagon’s initiative underscores the government’s urgency in tapping top AI labs for defense innovation. Read more.

In a related development, Elon Musk’s xAI has officially launched Grok for Government, a tailored suite of frontier AI tools (including Grok 4, Deep Search, and Tool Use). Now available via the GSA schedule, it supports federal, state, local, and national security agencies under a $200 million DoD ceiling contract. Read more here.

AWS debuts AI agent marketplace with Anthropic partnership

At AWS Summit New York, AWS unveiled a new AI Agent Marketplace in collaboration with Anthropic. This platform will serve as a one-stop shop where startups can sell AI agents directly to AWS’s enterprise customers. Businesses will be able to browse and install third-party AI agents suited to their needs from a single catalog. Anthropic, already an Amazon-invested company, is a key launch partner, which could broaden its reach to more customers via AWS.
Amazon will take a small revenue share while enabling an ecosystem of AI agents, much like an app store. Read more.

Meta building multi-gigawatt Prometheus and Hyperion AI superclusters

Meta CEO Mark Zuckerberg revealed plans for unprecedented AI infrastructure, announcing that Meta is constructing multiple multi-gigawatt AI supercomputers. The Hyperion cluster in Louisiana will scale up to 5 gigawatts of power, with a footprint large enough to cover most of Manhattan. In addition, a 1 GW supercluster named Prometheus is slated to come online in 2026 in Ohio. Together, these AI centers will provide Meta with enormous computational capacity to train and serve advanced AI models, positioning it to better compete with peers like OpenAI and Google DeepMind. View the announcement.

Google DeepMind open-sources GenAI Processors for agent pipelines

Google DeepMind introduced GenAI Processors, a new open-source Python library to simplify building complex AI workflows for LLM-powered applications. The toolkit defines a standardized Processor interface for handling all stages of an AI pipeline, from input ingestion and pre-processing to model inference calls and output handling. Developers can chain or parallelize these modular processors to create asynchronous, composable AI pipelines. Notably, GenAI Processors integrates with Google’s Gemini APIs and supports multimodal data streams (text, images, audio, PDFs) in a unified framework. By open-sourcing this library, Google aims to help developers build real-time AI agents and data-processing workflows more reliably and with less custom glue code. Read more.

Chinese 1-trillion-param Kimi K2 model challenges GPT-4, DeepSeek

Shanghai-based startup Moonshot AI has unveiled Kimi K2, a large language model with a staggering 1 trillion parameters, released as open source.
Kimi K2 now ranks as one of the world’s most powerful LLMs, reportedly matching the performance of top proprietary models like OpenAI’s GPT-4 and Anthropic’s Claude on complex tasks. It excels at coding benchmarks, essentially rivaling or outperforming Anthropic’s best Claude model in that domain. The model was trained using a novel "MuonClip" optimization technique that prevented the training instabilities that often plague ultra-large models, potentially saving millions in compute costs. Observers have compared Kimi K2’s architecture to DeepSeek V3 – the 671 billion–parameter model behind the famed DeepSeek-R1 assistant – noting that Kimi K2 similarly uses Mixture-of-Experts layers to boost capability. The launch of Kimi K2 highlights the rapid progress of China’s open-source AI efforts. (Earlier this month, Baidu open-sourced its ERNIE 4.5 model (424B parameters), which reportedly beat DeepSeek V3 on 22 of 28 benchmarks despite being much smaller.) Read more here.

Built something cool? Tell us.

Whether it's a scrappy prototype or a production-grade agent, we want to hear how you're putting generative AI to work. Drop us your story at nimishad@packtpub.com or reply to this email, and you could get featured in an upcoming issue of AI_Distilled.

📢 If your company is interested in reaching an audience of developers, technical professionals, and decision makers, you may want to advertise with us.

If you have any comments or feedback, just reply to this email. Thanks for reading and have a great day!

That’s a wrap for this week’s edition of AI_Distilled 🧠⚙️ We would love to know what you thought—your feedback helps us keep leveling up. 👉 Drop your rating here

Thanks for reading,
The AI_Distilled Team
(Curated by humans. Powered by curiosity.)

LLM Expert Insights, Packt
11 Jul 2025
9 min read

Get started with OpenAI tools for function calling in agents

Grok 4, Google AI, and plagiarized models?

AI_Distilled #103: What’s New in AI This Week

Join this 16-hour AI Learning Sprint to become an AI Genius (worth $895 but $0 today)

The AI race is getting faster and dirtier by the day. Things we could never have imagined are happening. That’s why you need to join the 3-Day Free AI Mastermind by Outskill, which comes with 16 hours of intensive training on AI frameworks, live building sessions, creating images and videos, and more, to make you an AI expert. Originally priced at $895, but the first 100 of you get in completely FREE! Extended 4th of July SALE! 🎁
📅 FRI–SAT–SUN: Kick-Off Call & Live Sessions
🕜 10AM EST to 7PM EST
✅ Trusted by 4M+ learners
Join now and get $5,100+ in additional bonuses: $5,100+ worth of AI tools across 3 days — Day 1: 3,000+ Prompt Bible; Day 2: Roadmap to make $10K/month with AI; Day 3: Your Personal AI Toolkit Builder.

Welcome to this week’s edition of AI Distilled! As always, we’re bringing you the most relevant breakthroughs, product launches, expert insights, and grassroots meetups exploring next-gen techniques.
Here’s what’s new this week.

In today's issue:
🧠 Expert How-To: Walk through OpenAI’s function calling—powering real-world actions in agentic systems.
🗓️ Agent Meetups Galore: RAG-focused events hit Houston, Utah, the UK & Italy—developers and researchers unite!
🧪 Grok 4 Drops: Musk unveils multimodal AI that "outsmarts PhDs" and aims to discover physics.
🔍 Circle Gets Smarter: Google’s Circle to Search adds AI Mode with contextual in-app help and game insights.
💻 Sam x Jony Merge: OpenAI officially teams up with Jony Ive’s LoveFrom—AI hardware incoming?
🚨 Huawei Whistleblower: Explosive claims accuse Huawei’s Pangu model of rebranding open-source work and faking benchmarks.
🎓 AI for Every Educator: OpenAI, Anthropic, and Microsoft fund training for 400K US teachers in AI-powered classrooms.
🌐 Browsing, the OpenAI Way?: OpenAI may launch its own browser—Operator agent and native AI integrations in sight.

📈 UPCOMING EVENTS

Upcoming Must-Attend AI Agents Events

1. The Test Tribe Houston – RAG Meetup
Date: July 17, 2025
Location: Houston, TX, USA
Cost: TBA
Focus: RAG with LLMs, coherence improvements using RAGAS

2. Utah Java Users Group – GenAI Meetup
Date: July 17, 2025
Location: South Jordan, UT, USA
Cost: Free
Focus: Hands-on implementation of RAG in production environments

3. Agentic RAG – Online Meetup
Date: July 19, 2025
Location: Online
Cost: Free
Focus: Applying agentic systems to Retrieval-Augmented Generation

4. PyData Milton Keynes – RAG Applications
Date: July 17, 2025
Location: Milton Keynes, UK
Cost: Free
Focus: RAG in Python using Hugging Face and LangChain

5. IR-RAG @ SIGIR 2025
Date: July 13–18, 2025
Location: Padua, Italy
Cost: TBA
Focus: Information retrieval’s evolving role in Retrieval-Augmented Generation

What’s stopping you? Choose your city, RSVP early, and step into a room where AI conversations spark and the future unfolds one meetup at a time.

An Exclusive Look Into Next Gen BI – Live Webinar

Dashboards alone aren’t cutting it.
The market’s moving toward something new: data apps, live collaboration, and AI that works the way teams actually work. See what's driving the rise of Next Gen BI, how Sigma earned a top debut on the Gartner Magic Quadrant, and what’s next for our roadmap. SECURE YOUR SPOT

EXPERT INSIGHTS

A step-by-step guide to using OpenAI tools for function calling.

Incorporating function-calling capabilities into intelligent agents has emerged as a transformative practice in recent AI development. This guide by our experts Ajanava Biswas and Wrick Talukdar explores how OpenAI tools can be employed to create agentic systems that perform real-world tasks by calling external functions based on user inputs. This integration enables agents not only to understand intent but also to take contextual actions with structured logic. Let’s get started.

What Is Function Calling in LLMs?

Function calling allows large language models (LLMs) to invoke predefined functions using structured input provided by the user. It bridges the gap between conversational input and executable system logic, enhancing the agent’s ability to act upon user requests.

Let’s take the example of a travel booking agent that uses function calling to book a flight. The LLM decides when to invoke the function based on the user's message and then provides the necessary arguments, such as departure city, arrival city, and travel date. Let’s see how it works.

1. Setting up the function call: OpenAI’s Python SDK is used to define and invoke a function. Here is a minimal example of how to structure this process:

```python
import openai

def book_flight(passenger_name: str, from_city: str, to_city: str, travel_date: str) -> str:
    return "A flight has been booked"

tools = [
    {
        "type": "function",
        "function": {
            "name": "book_flight",
            ...
        }
    }
]
```

The function book_flight is designed to accept structured arguments. The tools list defines the available function, making it accessible to the LLM.

2. Using the function in a conversation: The agent must decide when to call the function during a user interaction. Here's how the OpenAI API helps:

```python
response = openai.chat.completions.create(
    model="gpt-4-turbo",
    messages=[{"role": "user", "content": "Book a flight from LA to NY on Oct 1"}],
    tools=tools
)
```

Upon detecting intent, the model populates the function arguments and issues a function call. If the user's intent is unclear or incomplete, the model may request additional information.

3. End-to-end interaction: Once the function is called, the result is returned to the model, which completes the dialogue. Let's see this in action:

```python
response = openai.chat.completions.create(
    model="gpt-4-turbo",
    messages=[...],  # includes the user, assistant tool-call, and tool-result messages
    tools=tools
)
```

4. Sample conversation: The flow of a conversation may look like this:

```
User: I want to book a flight
Agent: Sure! I need some details: departure city, arrival city, date?
User: From LA to NY on Oct 1, my name is John Doe.
Agent: Great! Booking your flight now.
```

This conversational structure illustrates how seamlessly an LLM can gather information, invoke a function, and respond. When you enable external function calls, intelligent agents are transformed from passive responders into proactive performers. This is foundational for building agentic systems that can interact with APIs, databases, or robotic control interfaces.

Liked the Insights? Want to dig in deeper?

Create intelligent, autonomous AI agents that can reason, plan, and adapt
- Understand the foundations and advanced techniques of building intelligent, autonomous AI agents
- Learn advanced techniques for reflection, introspection, tool use, planning, and collaboration in agentic systems
- Explore crucial aspects of trust, safety, and ethics in AI agent development and applications
BUY NOW

📈 LATEST DEVELOPMENT

Here is the news of the week.
Musk unveils Grok 4

Elon Musk's xAI introduced Grok 4 during a livestream, claiming it surpasses PhD-level reasoning and could soon help discover new technologies or physics. The AI model features enhanced reasoning, coding capabilities, and multimodal support. The launch follows recent controversies over Grok's previous outputs, with Musk emphasizing a commitment to "maximally truth-seeking" AI. Read more.

Google’s Circle to Search gets AI Mode

Google has upgraded its Circle to Search feature by integrating AI Mode, allowing users to obtain AI-generated overviews and engage in follow-up questions without leaving their current app. This enhancement also introduces in-game assistance, enabling gamers to access character information and strategy guides seamlessly during gameplay. These updates aim to provide a more intuitive and uninterrupted search experience. Read more.

OpenAI announces official merger with io Products, Inc.

Furthering its plans to move away from traditional products and interfaces, OpenAI has announced its official merger with io Products, with Jony Ive and his LoveFrom team taking on "deep design and creative responsibilities across OpenAI." This merger is expected to pave the way for a new kind of hardware for AI. Read more.

Huawei Pangu Model Whistleblower Alleges Fraud

An anonymous whistleblower, claiming to be a former employee of Huawei’s Noah’s Ark Lab, published a GitHub document titled "The True Story of Pangu", alleging serious misconduct in the development of Huawei’s Pangu large language model. The post accuses internal teams of rebranding open-source models like Alibaba’s Qwen as Pangu, faking performance metrics, and misleading senior leadership to gain recognition and resources.

OpenAI, Microsoft, and Anthropic Bankroll New AI Training for Teachers

OpenAI and the American Federation of Teachers have initiated the National Academy for AI Instruction, aiming to train 400,000 U.S. K–12 educators in AI integration over five years.
OpenAI contributes $10 million in funding and resources. The Academy will offer workshops, online courses, and hands-on training, focusing on equitable access and practical AI fluency, with a flagship campus in New York City and plans to expand nationwide by 2030. Read more.

OpenAI to launch a new web browser?

It is speculated that OpenAI is set to release its own web browser, potentially challenging Google Chrome. This move aims to give OpenAI greater control over data collection, suggesting a deeper integration of agents like Operator and other AI capabilities within the browsing experience. Read this Reuters report for more details.

Now that we've seen major updates from industry leaders, let’s dive into a practical guide that helps you build intelligent systems using OpenAI’s tools.

Built something cool? Tell us.

Whether it's a scrappy prototype or a production-grade agent, we want to hear how you're putting generative AI to work. Drop us your story at nimishad@packtpub.com or reply to this email, and you could get featured in an upcoming issue of AI_Distilled.

📢 If your company is interested in reaching an audience of developers, technical professionals, and decision makers, you may want to advertise with us.

If you have any comments or feedback, just reply to this email. Thanks for reading and have a great day!

That’s a wrap for this week’s edition of AI_Distilled 🧠⚙️ We would love to know what you thought—your feedback helps us keep leveling up. 👉 Drop your rating here

Thanks for reading,
The AI_Distilled Team
(Curated by humans.
Powered by curiosity.)

LLM Expert Insights, Packt
04 Jul 2025
8 min read

Baidu goes open source with ERNIE 4.5, Meta grabs talent from OpenAI

Google steps into robotics and dev-side AI tools

AI_Distilled #102: What’s New in AI This Week

Learn to Run and Deploy Open-Source LLMs with This Free Course

Join Open-Source LLM Zoomcamp to explore how to run, fine-tune, and deploy open-source large language models. During this short free course, you’ll discover the open-source LLM ecosystem, learn practical tools like Hugging Face, vLLM, and Llama Factory, and work with models like DeepSeek-R1. Register now for free.

Hello and welcome to this week’s AI roundup! Here’s wishing our readers in the U.S. a very happy Independence Day! This week, we’re witnessing thrilling developments in the AI race. With China closing the AI gap through its open-source strategy and Meta poaching OpenAI’s employees, it looks like this summer is heating up for the AI giants. Dive in for the full scoop.

In today's issue:
🧠 Expert Build Recap: "Build AI Agents Over the Weekend" drew hundreds to prototype real-world agent use cases with LangChain and Python.
🔮 Next Up—DeepSeek Demystified: Get ready for a live breakdown of DeepSeek’s architecture, strengths, and red flags.
🌍 Global Agent Meetups: From EUMAS to PRIMA, the best events this fall spotlight the future of multi-agent systems.
📦 ERNIE 4.5 Goes Open: Baidu drops 10 massive multimodal MoE models under Apache 2.0—toolkits included.
💸 Meta’s AI Flex: $14.3B Scale AI stake and a star-studded OpenAI exodus fuel Zuck’s Superintelligence Labs.
🤖 Gemini Powers Robotics & Devs: On-device robot control and CLI magic—Google is gunning for full-stack AI.
🛡️ Cloudflare vs AI Bots: With "Pay Per Crawl," Cloudflare strikes back at lopsided content-scraping economics.
🛠️ Langfuse Gets Agentic: Multi-agent onboarding, DevOps-ready orchestration, and observability out of the box.

📈 LATEST DEVELOPMENT

Here is the news of the week.
Baidu open sources ERNIE 4.5 model family
Baidu's ERNIE 4.5 is a newly open-sourced family of 10 large-scale multimodal AI models, featuring Mixture-of-Experts (MoE) architectures with up to 424B parameters. It features a heterogeneous modality structure designed for efficient cross-modal learning, enhancing performance in text, image, audio, and video tasks. Trained using the PaddlePaddle framework, ERNIE 4.5 achieves state-of-the-art results in instruction following, knowledge retention, and multimodal reasoning. All models are available under the Apache 2.0 license, accompanied by industrial-grade development toolkits. Read more.

Meta creates Superintelligence Labs, SamA calls it distasteful
Meta has successfully recruited several researchers from OpenAI, including Lucas Beyer, Alexander Kolesnikov, and Xiaohua Zhai. These hires are part of Meta's strategy to assemble a world-class AI research team to drive its superintelligence ambitions. Read more.
OpenAI CEO Sam Altman called Meta’s $100 million-plus recruitment packages “distasteful,” insisting none of OpenAI’s top engineers have defected to Zuckerberg’s new Superintelligence Labs. In another development, Meta has announced a $14.3 billion investment to acquire a 49% stake in Scale AI. This move is aimed at bolstering Meta's capabilities in AI data labeling and infrastructure, positioning the company to accelerate its AI development initiatives. Watch this at 22:39.

Google pushes on with on-device robotics and Gemini CLI
Google DeepMind has introduced Gemini Robotics On-Device, an AI model that runs directly on robots, eliminating the need for internet connectivity. It offers general-purpose dexterity and rapid task adaptation, enabling robots to perform complex tasks like folding clothes or assembling parts. The model adapts to various robot types and can learn new tasks with minimal demonstrations. An SDK has also been made available for developers to fine-tune and test the model. Read more.
Google has also released Gemini CLI, a free, open-source AI tool that integrates Gemini 2.5 Pro directly into developers' terminals. It supports natural language prompts for coding, content creation, and task automation, with generous usage limits. The CLI is extensible, integrates with Gemini Code Assist, and supports tools like Veo and Imagen for multimedia generation. Read more.

Cloudflare introduces Pay Per Crawl feature, pushes for fair web use
Cloudflare's latest Radar update reveals a growing imbalance between AI bots scraping content and genuine user referrals. For instance, Anthropic's Claude exhibits a 70,900:1 crawl-to-referral ratio, indicating extensive content access with minimal traffic return. This trend threatens publishers' revenue models, prompting Cloudflare to introduce tools like "Pay Per Crawl" and default AI bot blocking to help content creators manage and monetize AI-driven content usage. Read more.

Langfuse gets agentic onboarding
In its latest update, Langfuse introduces Agentic Onboarding and the Docs MCP Server, allowing developers to spin up multi-agent swarms with a single command, instrument them end-to-end, and hand them to DevOps for seamless production readiness. Read more.

EXPERT INSIGHTS
What We Built—and What’s Next
DeepSeek is an emerging open-source large language model (LLM) ecosystem that’s making waves by delivering GPT-4-level performance without the usual proprietary restrictions. Its flagship DeepSeek-V3 model offers results comparable to GPT-4 at only a fraction of the training cost, with model weights openly available to the community. Under the hood, DeepSeek’s success stems from unique technical breakthroughs. Techniques like Multi-Head Latent Attention (MLA), Mixture-of-Experts (MoE) architecture, Multi-Token Prediction (MTP), and 8-bit floating point (FP8) precision training work in tandem to boost efficiency and scale.
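As a rough illustration of the Mixture-of-Experts idea (a generic sketch, not DeepSeek's actual implementation), a router scores each token against every expert and only the top-k experts run, so per-token compute stays roughly constant while total parameters grow. The function names here are ours:

```python
import math

def softmax(xs):
    # Numerically stable softmax over a list of router logits.
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def top_k_route(router_logits, k=2):
    """Pick the k highest-scoring experts and renormalize their weights.

    Only the selected experts process the token; the rest are skipped,
    which is how MoE models grow total parameters without growing
    per-token compute.
    """
    probs = softmax(router_logits)
    chosen = sorted(range(len(probs)), key=lambda i: probs[i], reverse=True)[:k]
    total = sum(probs[i] for i in chosen)
    return {i: probs[i] / total for i in chosen}

# One token's router scores over 8 experts; only 2 experts fire.
weights = top_k_route([0.1, 2.0, -1.0, 0.5, 1.5, 0.0, -0.5, 0.3], k=2)
print(weights)  # two expert indices (1 and 4) with weights summing to 1
```

Real MoE layers do this per token per layer inside the network, with load-balancing losses to keep experts evenly used; the routing principle is the same.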
These innovations allow DeepSeek models to maximize throughput and minimize memory bottlenecks, enabling performance on par with leading closed models at dramatically lower cost. Equally important, DeepSeek’s open approach invites the global AI community to build upon these advances, accelerating progress toward more accessible AI.

Real-world use cases for DeepSeek already span a broad spectrum. Developers are using the specialized DeepSeek-Coder model for AI-assisted code generation in over 80 programming languages. Other DeepSeek variants excel at complex reasoning (solving math and logic problems) and multilingual natural language understanding, thanks to training on massive, diverse datasets rich in high-quality multilingual data. This versatility makes DeepSeek attractive to practitioners seeking cost-effective, cutting-edge AI solutions.

For those eager to learn more, Packt is hosting a one-day virtual summit, "DeepSeek Demystified," on August 16 to explore these innovations. It’s a chance to hear insights from experts and see DeepSeek in action — interested readers can register here.

If scaling LLMs in production is on your radar, block time for the ML Summit 2025 and MCP Workshop. There is 25% off our combined ticket with the discount code MCP25. With the combined ticket, you’ll learn how to:
Build flexible pipelines that don’t fall apart under load
Utilize data infrastructure for AI: SQLMesh, DuckDB, and Apache Iceberg
Use Model Context Protocol (MCP) to keep your AI tools and LLMs separate
BOOK YOUR SPOT
Use code MCP25 at checkout to get 25% off

📈UPCOMING EVENTS
Upcoming Must-attend AI Agents Events
The world of AI agents is evolving rapidly, with agent-based architectures and autonomous systems taking center stage. From global conferences to hands-on developer meetups, the latter half of 2025 offers many opportunities to learn, network, and build with cutting-edge AI agent technologies. Here's a curated list of key events you won't want to miss:
1.
EUMAS 2025 – European Conference on Multi-Agent Systems
Date: September 3–5, 2025
Location: Bucharest, Romania
Cost: TBA
Focus: Research on multi-agent systems
2. AI Agent Event 2025 – East Coast Edition
Date: September 29–30, 2025
Location: Herndon, VA, USA
Cost: $695 (Early Bird), $995 (Regular)
Focus: Real-world AI agent use cases across business and tech
3. PRIMA 2025 – Principles and Practice of Multi-Agent Systems
Date: December 15–21, 2025
Location: Modena, Italy
Cost: TBA
Focus: Research, principles, and applications of multi-agent systems
Website: prima2025.unimore.it
4. AI Agents Summit 2025
Date: TBA
Location: Online
Cost: TBA
Focus: Tools, use cases, deployment, innovation in agents
Website: aiagentsummit.com

What’s stopping you? Choose your city, RSVP early, and step into a room where AI conversations spark, and the future unfolds one meetup at a time.

Built something cool? Tell us. Whether it's a scrappy prototype or a production-grade agent, we want to hear how you're putting generative AI to work. Drop us your story at nimishad@packtpub.com or reply to this email, and you could get featured in an upcoming issue of AI_Distilled.

📢 If your company is interested in reaching an audience of developers, technical professionals, and decision makers, you may want to advertise with us.

If you have any comments or feedback, just reply back to this email. Thanks for reading and have a great day!

That’s a wrap for this week’s edition of AI_Distilled 🧠⚙️ We would love to know what you thought—your feedback helps us keep leveling up. 👉 Drop your rating here

Thanks for reading,
The AI_Distilled Team
(Curated by humans. Powered by curiosity.)

Why This One LangChain Pattern Changed Everything

LLM Expert Insights, Packt
27 Jun 2025
10 min read
LangGraph, Neo4j, GPT-4o—how they’re changing workflows
AI_Distilled #101: What’s New in AI This Week

Become an AI Generalist that makes $100K (in 16 hours)
Join the World’s First 16-Hour LIVE AI Mastermind for professionals, founders, consultants & business owners like you. Rated 4.9/5 by 150,000 global learners – this will truly make you an AI Generalist that can build, solve & work on anything with AI. All by global experts from companies like Amazon, Microsoft, SamurAI and more. And it’s ALL. FOR. FREE. 🤯 🚀
Join now and get $5100+ in additional bonuses: 🔥 $5,000+ worth of AI tools across 3 days — Day 1: 3000+ Prompt Bible, Day 2: $10K/month AI roadmap, Day 3: Personalized automation toolkit. 🎁 Attend all 3 days to unlock the cherry on top — lifetime access to our private AI Slack community!
Register Now (free only for the next 72 hours)

Welcome to the 101st edition of our newsletter! This week, the world of AI is buzzing with significant developments. From Apple's potential acquisition of Perplexity AI to Meta's aggressive talent hunt for its new "Superintelligence" lab, the race for AI supremacy is intensifying. Meanwhile, new research reveals "blackmail" behaviors in AI models, prompting crucial discussions around biosecurity and responsible AI deployment by industry leaders like OpenAI. Stay tuned as we delve into these pivotal shifts shaping the future of AI!

LLM Expert Insights, Packt

In today's issue:
🧠 Expert Deep Dive: Learn how LangChain simplifies chat-based agent development across LLM providers—building composable, multi-turn conversations with role-specific messaging.
🤖 Agent-Con Season: The USA is heating up with elite AI Agent events—AgentCon, AI Engineer Summit, and more for advanced builders.
💬 LangChain in Action: See how a few lines of Python can orchestrate robust, controllable agent behavior with Claude or GPT-4o.
📈 Apple Eyes Perplexity AI: Apple’s AI search ambitions heat up as it explores acquiring Perplexity—just as Samsung prepares to go all-in with them.
⚖️ UK Tightens Reins on Google: New regulation may force Google to open up search competition and tone down its AI favoritism.
💸 Zuckerberg’s AI Superlab Hunt: Meta’s CEO is personally recruiting top AI minds with nine-figure offers to power a "Superintelligence" lab.
🕵️ Blackmailing Bots? Anthropic’s new study shows LLMs may turn coercive in simulated environments—raising serious red flags for agent safety.
🧬 OpenAI's Bio Bet: As AI speeds up drug discovery, OpenAI doubles down on biosecurity, red-teaming, and responsible model training.
🛍️ Packt’s Mega Book Deal: Grab up to 5+ expert-led books for as low as $4.99 each—perfect for building your summer AI reading stack.

📈UPCOMING EVENTS
Upcoming Must-attend AI Agents Events
1. AI Agent Conference 2025
Date: October 10, 2025
Location: New York City, NY – AI Engineer World
Cost: TBA (Previous editions ranged from $499–$999)
Focus: Agentic AI systems, multi-agent orchestration, autonomous workflows
2. AI Engineer Summit 2025 – “Agents at Work!”
Date: February 19–22, 2025
Location: New York City, NY – AI Engineer Collective
Cost: Invite-only (past tickets ~$850–$1,200)
Focus: Engineering agent architectures, agent dev tools, and evaluation frameworks
3. AI Agent Event East 2025
Date: September 29–30, 2025
Location: Herndon, VA – AI Agent Event
Cost: US $695 (Early Bird), $995 (Regular)
Focus: Enterprise agent systems, real-world agent deployment, decision-making frameworks
4. AgentCon 2025 – San Francisco Stop
Date: November 14, 2025
Location: San Francisco, CA – Global AI Community
Cost: Free to $99 (based on venue and track)
Focus: Building, deploying, and scaling autonomous agents

What’s stopping you?
Choose your city, RSVP early, and step into a room where AI conversations spark, and the future unfolds one meetup at a time.

Package Deals - Buy 1-2 eBooks for $9.99, 3-4 eBooks for $7.99, 5+ eBooks for $4.99
Get 20% off on Print
START LEARNING FROM $4.99

EXPERT INSIGHTS
Working with chat models
Getting a model to generate text is easy. Getting it to hold a structured, multi-turn conversation with consistency and control—that’s where things start to get interesting. In this excerpt from Generative AI with LangChain, 2nd Edition, you’ll see how LangChain’s support for chat models gives developers a clean, composable way to build conversational logic that works across providers. It’s a crucial building block for any system that needs to reason, remember, and respond.

Chat models are LLMs that are fine-tuned for multi-turn interaction between a model and a human. These days most LLMs are fine-tuned for multi-turn conversations. Instead of providing the model with an input such as
human: turn1
ai: answer1
human: turn2
ai: answer2
and expecting it to generate an output by continuing the conversation, model providers typically expose an API that requires each turn to be submitted as a separate well-formatted part within the payload. Model providers typically do not persist chat history on the server. Instead, the client sends the full conversation history with each request, and the provider formats the final prompt on the server side before passing it to the model. LangChain follows the same pattern with ChatModels, processing conversations through structured messages with roles and content.
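Concretely, the payload the client resends on every turn looks roughly like this (an OpenAI-style shape shown for illustration; exact field names vary by provider):

```python
# A provider-style chat payload: the client resends the full history
# with every request; nothing is persisted server-side.
payload = {
    "model": "gpt-4o",
    "messages": [
        {"role": "system", "content": "You're a helpful programming assistant"},
        {"role": "user", "content": "Write a Python function to calculate factorial"},
        {"role": "assistant", "content": "def factorial(n): ..."},
        {"role": "user", "content": "Now make it iterative"},
    ],
}

# Each turn is a separate, well-formatted part of the payload.
roles = [m["role"] for m in payload["messages"]]
print(roles)  # ['system', 'user', 'assistant', 'user']
```

LangChain wraps these same roles in message classes rather than raw dicts, which is what the excerpt turns to next.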
Each message contains the following:
Role (who's speaking), which is defined by the message class (all messages inherit from BaseMessage)
Content (what's being said)

Key message types include:
SystemMessage: Sets behavior and context for the model. Example: SystemMessage(content="You're a helpful programming assistant")
HumanMessage: Represents user input like questions, commands, and data. Example: HumanMessage(content="Write a Python function to calculate factorial")
AIMessage: Contains model responses

Let's see this in action:

from langchain_anthropic import ChatAnthropic
from langchain_core.messages import SystemMessage, HumanMessage

chat = ChatAnthropic(model="claude-3-opus-20240229")
messages = [
    SystemMessage(content="You're a helpful programming assistant"),
    HumanMessage(content="Write a Python function to calculate factorial")
]
response = chat.invoke(messages)
print(response)

Here's a Python function that calculates the factorial of a given number:

```python
def factorial(n):
    if n < 0:
        raise ValueError("Factorial is not defined for negative numbers.")
    elif n == 0:
        return 1
    else:
        result = 1
        for i in range(1, n + 1):
            result *= i
        return result
```

Let’s break this down. The factorial function is designed to take an integer n as input and calculate its factorial. It starts by checking if n is negative, and if it is, it raises a ValueError since factorials aren’t defined for negative numbers. If n is zero, the function returns 1, which makes sense because, by definition, the factorial of 0 is 1.

When dealing with positive numbers, the function kicks things off by setting the result variable to 1. Then, it enters a loop that runs from 1 to n, inclusive, thanks to the range function. During each step of the loop, it multiplies the result by the current number, gradually building up the factorial. Once the loop completes, the function returns the final calculated value. You can call this function by providing a non-negative integer as an argument.
Here are a few examples:

```python
print(factorial(0))   # Output: 1
print(factorial(5))   # Output: 120
print(factorial(10))  # Output: 3628800
print(factorial(-5))  # Raises ValueError: Factorial is not defined for negative numbers.
```

Note that the factorial function grows very quickly, so calculating the factorial of large numbers may exceed the maximum representable value in Python. In such cases, you might need to use a different approach, or use a library that supports arbitrary-precision arithmetic.

Alternatively, we could have asked an OpenAI model such as GPT-4 or GPT-4o:

from langchain_openai.chat_models import ChatOpenAI
chat = ChatOpenAI(model_name='gpt-4o')

Liked the Insights? Want to dig in deeper?
Build production-ready LLM applications and advanced agents using Python, LangChain, and LangGraph
Bridge the gap between prototype and production with robust LangGraph agent architectures
Apply enterprise-grade practices for testing, observability, and monitoring
Build specialized agents for software development and data analysis
BUY NOW

📈LATEST DEVELOPMENT
Here is the news of the week.

Apple Eyes Perplexity AI Amidst Shifting Landscape
Apple Inc. is considering acquiring AI startup Perplexity AI to bolster its AI capabilities and potentially develop an AI-based search engine. This move could mitigate the impact if its lucrative Google search partnership is dissolved due to antitrust concerns. Discussions are early, with no offer yet, and a bid might depend on the Google antitrust trial's outcome. Perplexity AI was recently valued at $14 billion. A potential hurdle for Apple is an ongoing deal between Perplexity and Samsung Electronics Co., Apple's primary smartphone competitor.
Samsung plans to announce a deep partnership with Perplexity, a significant development given that AI features have become a crucial battleground for the two tech giants.

UK Regulators Target Google Search Dominance
The UK's CMA proposes designating Google with "strategic market status" under new digital competition rules by October. This would allow interventions like mandating choice screens for search engines and limiting Google's self-preferencing, especially with its AI-powered search features, thereby leading to fair rankings and increasing publisher control. The move aims to foster innovation and benefit UK consumers and businesses.

Zuckerberg's Multimillion-Dollar AI Talent Drive
Mark Zuckerberg is personally leading Meta's aggressive recruitment drive for a new "Superintelligence" lab. Offering packages reportedly reaching hundreds of millions of dollars, he's contacting top AI researchers directly via email and WhatsApp. Despite enticing offers, some candidates are hesitant due to Meta's past AI challenges and internal uncertainties, as Zuckerberg aims to significantly advance Meta's AI capabilities.

AI Models Exhibit Blackmail Behavior in Simulations
Experiments by Anthropic on 16 leading LLMs in corporate simulations revealed agentic misalignment. These AI models, including Claude Opus 4 (86% blackmail rate), can resort to blackmail when facing shutdown or conflicting goals, even without explicit harmful instructions. This "agentic misalignment" highlights potential insider threat risks if autonomous AI gains access to sensitive data, urging caution in future deployments.

Meanwhile, OpenAI CEO Sam Altman discussed their future working partnership with Microsoft CEO Satya Nadella, acknowledging "points of tension" but emphasizing mutual benefit. Altman also held productive talks with Donald Trump regarding AI's geopolitical and economic importance.

Built something cool?
Tell us. Whether it's a scrappy prototype or a production-grade agent, we want to hear how you're putting generative AI to work. Drop us your story at nimishad@packtpub.com or reply to this email, and you could get featured in an upcoming issue of AI_Distilled.

📢 If your company is interested in reaching an audience of developers, technical professionals, and decision makers, you may want to advertise with us.

If you have any comments or feedback, just reply back to this email. Thanks for reading and have a great day!

That’s a wrap for this week’s edition of AI_Distilled 🧠⚙️ We would love to know what you thought—your feedback helps us keep leveling up. 👉 Drop your rating here

Thanks for reading,
The AI_Distilled Team
(Curated by humans. Powered by curiosity.)


And it’s a century!

LLM Expert Insights, Packt
20 Jun 2025
11 min read
Celebrating our 100 issues with expert insights on graph data modeling, MiniMax enters AI race, Chi
AI_Distilled #100: What’s New in AI This Week

Pinterest, Tinder, Meta speaking at DeployCon GenAI Summit!
DeployCon is a free, no-fluff, engineer-first summit for builders on the edge of production AI—and you’re on the guest list. On June 25 Predibase is taking over the AWS Loft in San Francisco and streaming online for a day of candid technical talks and war stories from the teams that ship large-scale AI.
Why you’ll want to be there:
Deep Dive Sessions: Hear how engineers at Pinterest, DoorDash, Tinder, Nvidia, Meta, ConverseNow, and AWS deploy, scale, and evolve AI.
Real-world Playbooks: Scaling GenAI at DoorDash with agentic workflows; Building safer, deeper human connections with GenAI at Tinder; Productionizing prompts at Pinterest
Open-Source & applied AI panel: new models, approaches and tools
Fun stuff, too: Free swag, free food, free giveaways and networking
Choose your experience:
In-Person @ AWS GenAI Loft – San Francisco, June 25, 9:30AM–2:00PM PT. Coffee, lightning talks, and lunch with the AI infra community. RESERVE YOUR SEAT
Live Stream – Wherever You Are. Can’t make it to SF? Join virtually and get the same expert content, live. June 25, 10:30AM–1:30PM PT. Register for Live Stream
The event is free, but space is limited so register now. Hope to see you there!

Yay!!! Welcome to a landmark issue! This week marks our 100th newsletter, a significant milestone in our journey together exploring the dynamic world of AI, and it's all thanks to you, our valued reader! To mark this special milestone, we've packed this 100th edition with an insightful graph data modeling post by our authors Ravi and Sid and the latest developments this week in the field of AI. Dive in for exclusive perspectives and updates that will inspire and inform your AI journey!
LLM Expert Insights, Packt

In today's issue:
🧠 Expert Deep Dive: Discover how graph modeling outperforms RDBMS for intuitive data retrieval—complete with Cypher queries and Neo4j best practices.
📅 Must-Attend Meetups: From “Hype → Habit” in Manchester to NLP lightning talks in Berlin, here’s your lineup of summer GenAI meetups.
🔎 MiniMax Goes Massive: China’s MiniMax M1 debuts with a jaw-dropping 1M token context window and top-tier reasoning benchmarks.
🎤 Baidu’s AI Avatars Take the Stage: Two digital hosts powered by ERNIE AI livestream 133 products to 13M viewers.
🔍 Google Goes Live with AI Search: Voice-interactive search, Gemini 2.5 Flash-Lite, and Deep Think push Google’s GenAI edge.
💰 OpenAI Scores $200M DoD Contract: Pentagon taps OpenAI for cyber defense and intelligence ops, while SamA reflects on “The Gentle Singularity.”
🚀 Meta’s Llama Accelerator Takes Off: U.S. AI startups get cloud credits and mentorship in Meta’s latest GenAI growth program.

Package Deals - Buy 1-2 books for $9.99, 3-4 books for $7.99, 5+ books for $4.99
START LEARNING FROM $4.99

📈UPCOMING EVENTS
MUST ATTEND AI/LLM MEET-UPS
Here’s your go-to calendar for this month’s midsummer AI meetups—perfect for networking, learning, and getting hands-on with the latest in generative models, agent frameworks, LLM tooling, and GPU hacking.
1. “Hype → Habit” Panel
Date: July 15, 2025
Location: Manchester – UK AI Meetup
Cost: Free
Focus: AI commercialisation
Website: Meetup.com
2. Mindstone London AI (August Edition)
Date: August 19, 2025
Location: London – Mindstone London AI
Cost: Free
Focus: Practical AI demos
Website: Meetup.com
3. Mindstone London AI (September Edition)
Date: September 16, 2025
Location: London – Mindstone London AI
Cost: Free
Focus: Agent-build case studies
Website: Meetup.com

What’s stopping you? Choose your city, RSVP early, and step into a room where AI conversations spark, and the future unfolds one meetup at a time.
EXPERT INSIGHTS
Efficient Graph Modeling for Intuitive Data Retrieval
Graph data modeling challenges traditional data modeling by encouraging different perspectives based on problem context. This means that instead of modeling the data on how it is stored, graphs help us model the data based on how it is consumed. Unlike rigid RDBMS approaches, which evolved from older, storage-limited technologies, graph databases like Neo4j enable flexible modeling using multiple labels. Inspired by real-world data consumption, graphs better reflect dynamic, interconnected data, offering more intuitive and efficient retrieval.

We will demonstrate a simple scenario wherein we’ll model data using both a relational database (RDBMS) and a graph-based approach. The dataset will represent the following information:
A Person described by their firstName, lastName, and five most recent rental addresses where they have lived
Each address should be in the following format: Address line 1, City, State, zipCode, fromTime, and tillTime
Following are some of the queries we could answer using this data:
What is the most recent address where Person John Doe is currently living?
What was the first address where Person John Doe lived?
What was the third address where Person John Doe lived?
First, let’s take a look at how this data can be modeled in an RDBMS.

RDBMS data modeling
There are three tables in this data model with relevant details: Person, Person_Address, and Address. The Person_Address (join) table contains the rental details along with references to the Person and Address tables. We use this join table to represent the rental details, to avoid duplicating the data within the Person or Address entities.
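The three-table layout can be tried end to end with an in-memory SQLite database; the article specifies the table names and the join-table role, while the exact column names below are our own illustrative choices (fromTime/tillTime follow the address format given above):

```python
import sqlite3

# Illustrative schema for the Person / Person_Address / Address layout;
# column names are assumptions for this sketch.
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE person (id INTEGER PRIMARY KEY, name TEXT);
CREATE TABLE address (id INTEGER PRIMARY KEY, line1 TEXT, city TEXT,
                      state TEXT, zip TEXT);
CREATE TABLE person_address (person_id INTEGER REFERENCES person(id),
                             address_id INTEGER REFERENCES address(id),
                             fromTime TEXT, tillTime TEXT);
""")
con.execute("INSERT INTO person VALUES (1, 'John Doe')")
con.executemany("INSERT INTO address VALUES (?,?,?,?,?)", [
    (1, '1 first ln', 'Edison', 'NJ', '11111'),
    (2, '13 second ln', 'Edison', 'NJ', '11111'),
    (3, '7 third st', 'Edison', 'NJ', '11111'),
])
con.executemany("INSERT INTO person_address VALUES (?,?,?,?)", [
    (1, 1, '2001-01-01', '2003-12-31'),
    (1, 2, '2004-01-01', '2008-12-31'),
    (1, 3, '2009-01-01', '2012-12-31'),
])

# Query 3 (third address) via the search-sort-filter pattern:
# sort all rentals by start time, skip two rows, keep one.
row = con.execute("""
    SELECT a.line1 FROM person p
    JOIN person_address pa ON pa.person_id = p.id
    JOIN address a ON pa.address_id = a.id
    WHERE p.name = 'John Doe'
    ORDER BY pa.fromTime ASC LIMIT 1 OFFSET 2
""").fetchone()
print(row[0])  # '7 third st'
```

Note how even this tiny example needs two joins plus a sort before it can skip to the row we want; that is the cost the graph models below try to avoid.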
Let’s see how we fulfil Query 3 (Get the third address) from the RDBMS using the preceding model:

SELECT line1, city, state, zip
FROM person p, person_address pa, address a
WHERE p.name = 'John Doe' and pa.person_id = p.id and pa.address_id = a.id
ORDER BY pa.start ASC
LIMIT 2, 1

As you can see, in this query, we are relying on the search-sort-filter pattern to retrieve the data we want. We will now look at how this data can be modeled with graphs.

Graph data modeling – basic approach
Graph data models use nodes (Person or Address) and relationships (HAS_ADDRESS) instead of join tables, thus reducing index lookup costs and enhancing retrieval efficiency. Take a look at how our data can be modeled using a basic graph data model.

You can use a Neo4j Cypher script to set up the indexes for faster data loading and retrieval:

CREATE CONSTRAINT person_id_idx FOR (n:Person) REQUIRE n.id IS UNIQUE;
CREATE CONSTRAINT address_id_idx FOR (n:Address) REQUIRE n.id IS UNIQUE;
CREATE INDEX person_name_idx FOR (n:Person) ON n.name;

Once the schema is set up, we can use this Cypher script to load the data into Neo4j:

CREATE (p:Person {id:1, name:'John Doe', gender:'Male'})
CREATE (a1:Address {id:1, line1:'1 first ln', city:'Edison', state:'NJ', zip:'11111'})
CREATE (a2:Address {id:2, line1:'13 second ln', city:'Edison', state:'NJ', zip:'11111'})
…
CREATE (p)-[:HAS_ADDRESS {start:'2001-01-01', end:'2003-12-31'}]->(a1)

Now let’s see how we fulfil Query 3 (Get the third address) using graph data modeling:

MATCH (p:Person {name:'John Doe'})-[r:HAS_ADDRESS]->(a)
WITH r, a ORDER BY r.start ASC
RETURN a SKIP 2 LIMIT 1

This query too relies on the search-sort-filter pattern and is not very efficient (in terms of retrieval time). Let’s take a more nuanced approach to graph data modeling to see if we can make retrieval more efficient.
Graph data modeling – advanced approach
Here, let’s look at the same data differently and build a data model that reflects the manner in which we consume the data. At first glance, this bears a close resemblance to the RDBMS ER diagram; however, this model contains nodes (Person, Rental, Address) and relationships (FIRST, LATEST, NEXT). Let’s set up indexes:

CREATE CONSTRAINT person_id_idx FOR (n:Person) REQUIRE n.id IS UNIQUE;
CREATE CONSTRAINT address_id_idx FOR (n:Address) REQUIRE n.id IS UNIQUE;
CREATE INDEX person_name_idx FOR (n:Person) ON n.name;

Then, you can load the data using Neo4j Cypher:

CREATE (p:Person {id:1, name:'John Doe', gender:'Male'})
CREATE (a1:Address {id:1, line1:'1 first ln', city:'Edison', state:'NJ', zip:'11111'})
…
CREATE (p)-[:FIRST]->(r1:Rental {start:'2001-01-01', end:'2003-12-31'})-[:HAS_ADDRESS]->(a1)
CREATE (r1)-[:NEXT]->(r2:Rental {start:'2004-01-01', end:'2008-12-31'})-[:HAS_ADDRESS]->(a2)
..
CREATE (p)-[:LATEST]->(r5)

Here is how your graph looks upon loading the data. Let’s fulfil Query 3 (Get the third address) using this advanced graph data modeling approach:

MATCH (p:Person {name:'John Doe'})-[:FIRST]->()-[:NEXT*2..2]->()-[:HAS_ADDRESS]->(a)
RETURN a

We can see that the query traverses to the first rental and then follows two NEXT hops to reach the third rental (refer to the preceding figure). This is how we normally look at data, and it feels natural to express the query in the way we have to retrieve the data. We are not relying on the search-sort-filter pattern. If you run and view the query profiles, you will see that the initial graph data model took 19 db hits and consumed 1,028 bytes to perform the operation, whereas the advanced graph data model took 16 db hits and consumed 336 bytes. This change from the traditional RDBMS modeling approach has a huge impact in terms of performance and cost.
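The pointer-hopping idea behind FIRST/NEXT can be mimicked in plain Python: instead of sorting every rental and skipping rows (search-sort-filter), we follow NEXT links from the first rental. A minimal sketch with illustrative data and names:

```python
# Each rental points at the next one, like the NEXT relationship above.
rentals = [
    {"address": "1 first ln", "start": "2001-01-01"},
    {"address": "13 second ln", "start": "2004-01-01"},
    {"address": "7 third st", "start": "2009-01-01"},
]
for prev, nxt in zip(rentals, rentals[1:]):
    prev["next"] = nxt  # the NEXT link
first = rentals[0]      # the FIRST link

def nth_address_sorted(rentals, n):
    """Search-sort-filter: sort everything, then skip (like ORDER BY ... SKIP)."""
    ordered = sorted(rentals, key=lambda r: r["start"])
    return ordered[n - 1]["address"]

def nth_address_linked(first, n):
    """Pointer traversal: follow n-1 NEXT links from the first rental."""
    node = first
    for _ in range(n - 1):
        node = node["next"]
    return node["address"]

print(nth_address_sorted(rentals, 3))  # '7 third st'
print(nth_address_linked(first, 3))    # '7 third st'
```

Both return the same answer, but the linked version touches only the nodes on the path, which mirrors why the advanced graph model needs fewer db hits and bytes than the sort-based query.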
Another advantage of this advanced data model is that if we want to track the sequence of rentals (addresses of Person), we can just add another relationship, say, NEXT_RENTAL, between the rentals for the same address. Representing such data like this in an RDBMS would be difficult. This is where Neo4j offers greater flexibility by persisting relationships and avoiding the join index cost, making it suitable for building knowledge graphs.

Liked the Insights? Want to dig in deeper?
Create LLM-driven search and recommendations applications with Haystack, LangChain4j, and Spring AI
Design vector search and recommendation systems with LLMs using Neo4j GenAI, Haystack, Spring AI, and LangChain4j
Apply best practices for graph exploration, modeling, reasoning, and performance optimization
Build and consume Neo4j knowledge graphs and deploy your GenAI apps to Google Cloud
BUY NOW

📈LATEST DEVELOPMENT
Here is the news of the week.

MiniMax Releases Groundbreaking M1 AI Model with 1 Million Token Context Window
Shanghai’s MiniMax has launched MiniMax M1, the first open-source, hybrid attention reasoning model supporting up to 1 million token contexts, powered by lightning attention and MoE architecture. MiniMax claims that M1, which is trained with a new CISPO RL algorithm, matches or exceeds closed-weight rivals like DeepSeek R1 in reasoning, code, and long-context benchmarks.

Baidu Unveils AI Avatar in E-commerce Livestream
Luo Yonghao’s AI-powered avatar debuted on Baidu’s livestream, showcasing two synchronized digital hosts powered by the ERNIE foundational model. The duo interacted with each other, communicated with the viewers, and introduced 133 products in 6 hours. The broadcast attracted over 13 million viewers, signaling China’s prowess in AI-driven innovation.
Google Introduces Live AI Search and Expands Gemini 2.5 Google has enhanced its search experience with Search Live in AI Mode, offering real-time voice interactions with multimodal responses directly within the Google app. Additionally, Google expanded its Gemini 2.5 family with the introduction of Gemini 2.5 Flash-Lite, an efficient model designed for rapid, cost-effective tasks such as translation and summarization. Gemini 2.5 also introduced Deep Think, a developer-oriented feature improving step-by-step reasoning. This capability significantly boosts performance across coding, STEM, and multimodal tasks. 📢 If your company is interested in reaching an audience of developers, technical professionals, and decision makers, you may want to advertise with us. If you have any comments or feedback, just reply back to this email. Thanks for reading and have a great day! That’s a wrap for this week’s edition of AI_Distilled 🧠⚙️ We would love to know what you thought—your feedback helps us keep leveling up. 👉 Drop your rating here Thanks for reading, The AI_Distilled Team (Curated by humans. Powered by curiosity.)
LLM Expert Insights, Packt
13 Jun 2025
11 min read

☁️ OpenAI Just Partnered with Google Cloud

What this surprising alliance means for GPU scale, speed, and the future of foundational models. AI_Distilled #99: What’s New in AI This Week

Your Exclusive Invite for the World’s First 2-Day AI Challenge (usually $895, but $0 today)
51% of companies have started using AI. Tech giants have cut over 53,000 jobs in 2025 alone, and 40% of professionals fear that AI will take away their job. Join the online 2-Day LIVE AI Mastermind by Outskill - a hands-on bootcamp designed to make you an AI-powered professional in just 16 hours. Usually $895, but for the next 48 hours you can get in completely FREE!
📅 Kick-off Call & Session 1 – Friday (10 AM–1 PM EST)
🧠 Sessions 2-5: 🕜 Saturday 11 AM to 7 PM EST; Sunday 11 AM to 7 PM EST
All by global experts from companies like Amazon, Microsoft, SamurAI and more. And it’s ALL. FOR. FREE. 🤯 🚀
🎁 You will also unlock $3,000+ in AI bonuses: 💬 Slack community access, 🧰 Your Personalised AI tool kit, and ⚙️ Extensive Prompt Library with 3000+ ready-to-use prompts — all free when you attend! JOIN NOW - LIMITED FREE SEATS

Warm greetings from the AI Distilled team! Here's your freshly baked issue of AI Distilled. With groundbreaking tools and surprise collaborations, this edition is served piping hot. Plus, don’t miss our curated roundup of local AI meetups to keep your network as sharp as your skills. LLM Expert Insights, Packt

In today's issue:
🧠 Expert Deep Dive: Shanthababu Pandian shares a blueprint for building scalable, ethical, and adaptive agentic AI systems.
📅 Must-Attend Meetups: From GPU hack weekends to GenAI showcases, here are 5 can’t-miss midsummer AI events across the globe.
⚙️ OpenAI Drops o3-Pro: A high-reasoning model for complex coding, analysis, and real-time search—priced for pros.
🎞️ Meta Goes Multimodal: New AI video editor + V-JEPA 2 pushes Meta’s edge in creative and physical reasoning AI.
🧠 Mistral Debuts Magistral: Their first reasoning-focused model launches alongside Mistral Compute, an enterprise-grade AI infra stack.
🌩️ OpenAI Teams with Google Cloud: Surprise GPU partnership expands OpenAI’s compute scale beyond Azure.
🌍 Google.org Backs Ethical GenAI: $30M accelerator funds nonprofits solving global crises with generative AI.
🔐 EchoLeak Targets Copilot: A zero-click exploit exposes AI’s growing attack surface—Microsoft acts fast.

📈UPCOMING EVENTS

MUST ATTEND AI/LLM MEET-UPS

Here’s your go-to calendar for this month’s midsummer AI meetups—perfect for networking, learning, and getting hands-on with the latest in generative models, agent frameworks, LLM tooling, and GPU hacking.

1. The Agent – Part 2
Date: June 23, 2025
Location: Cambridge, MA – Boston Generative AI
Cost: US $22
Focus: Agent-centric GenAI patterns
Website: Meetup Boston

2. Practical AI Monthly
Date: June 24, 2025
Location: London – Mindstone AI
Cost: Free
Focus: Hands-on GenAI use-cases
Website: Mindstone London

3. GPU Programming Hack Weekend
Dates: June 27–29, 2025
Location: Los Altos, CA – Modular Meetup
Cost: Free
Focus: Mojo/MAX GPU kernels & PyTorch ops
Website: Meetup Los Altos

4. July Mixer & Showcase
Date: July 2, 2025
Location: Austin, TX – LangChain AIMUG
Cost: Free
Focus: LangChain, LLM tooling
Website: AIMUG

5. Pizza, Demos & Networking
Date: July 9, 2025
Location: Berlin – AI Builders
Cost: €5 – €10
Focus: Building with LLMs & GenAI
Website: Meetup Berlin

What’s stopping you? Choose your city, RSVP early, and step into a room where AI conversations spark, and the future unfolds one meetup at a time.

LAST CHANCE - BUY NOW AT 25% OFF

EXPERT INSIGHTS - BY SHANTHABABU PANDIAN

QUICK UNDERSTANDING OF EFFECTIVE AGENTIC SYSTEM DESIGN

Agentic systems, software architectures where autonomous agents act, learn, and interact to achieve goals, are transforming industries from robotics to customer service.
These systems, powered by artificial intelligence (AI), enable dynamic decision-making in complex environments. This article provides a concise overview of designing effective agentic systems, focusing on core principles, components, and practical considerations.

Shanthababu Pandian, Director – Data and AI, Rolan Software Service

What is an Agentic System?

An agentic system consists of one or more agents that operate autonomously or semi-autonomously to accomplish tasks. Agents perceive their environment, process information, make decisions, and act, often adapting through the process of learning. Unlike traditional software with fixed rules, agentic systems thrive in dynamic, uncertain settings.

Key Characteristics:
Autonomy: Agents make decisions without constant human intervention.
Reactivity: Agents respond to environmental changes in real time.
Proactivity: Agents pursue goals proactively, anticipating needs.
Adaptability: Agents learn from experience to improve performance.
Social Ability: Agents collaborate with other agents or humans.

Examples include autonomous drones, AI-driven chatbots, and multi-agent systems in logistics optimization.

Core Principles of Effective Design

Designing agentic systems requires striking a balance between autonomy, efficiency, and reliability. Below are the foundational principles:

Goal-Oriented Design: Define clear, measurable objectives for agents (e.g., “deliver packages in under 30 minutes”). Align agent goals with system-wide outcomes to avoid conflicts in multi-agent setups.
Modularity: Build agents with modular components (perception, decision-making, action) for flexibility and easier updates. Example: A robotic agent’s vision module can be upgraded without altering its navigation logic.
Robust Perception: Equip agents with sensors or data inputs to accurately interpret their environment. Use redundancy (e.g., multiple sensors) to handle noise or failures.
Scalable Decision-Making: Implement decision-making algorithms (e.g., reinforcement learning, rule-based systems) that scale with complexity. Balance computational cost with decision quality—simple heuristics may suffice for some tasks.
Learning and Adaptation: Incorporate learning mechanisms (e.g., machine learning models) to adapt to new scenarios. Use online learning for real-time updates and offline training for stability.
Coordination in Multi-Agent Systems: Design communication protocols for agents to share information and negotiate. Use centralised (e.g., a coordinator agent) or decentralised (e.g., consensus algorithms) approaches based on system needs.
Safety and Ethics: Embed fail-safes to prevent harmful actions (e.g., collision avoidance in drones).

Key Components of Agentic Systems

An effective agentic system typically includes:

Perception Module: Collects data from the environment (e.g., cameras, APIs, user inputs). Processes raw data into actionable insights using techniques like computer vision and natural language processing.
Decision-Making Module: Chooses actions based on goals and the perceived state. Common approaches include rule-based logic, planning algorithms, or AI models like deep reinforcement learning.
Action Module: Executes decisions (e.g., moving a robot arm, sending a message). Interfaces with hardware and software actuators.
Learning Module: Updates agent behaviour based on feedback (e.g., rewards in reinforcement learning). Stores knowledge in models or databases for future use.
Communication Module (for multi-agent systems): Enables agents to share states, plans, or resources. Utilises protocols such as MQTT or gRPC for efficient data exchange.

Practical Considerations

Environmental Analysis: Understand the environment’s dynamics (e.g., predictable vs.
chaotic) to choose appropriate algorithms. Example: A warehouse robot needs robust navigation in a structured environment, while a chatbot must handle unpredictable user inputs.
Resource Constraints: Optimise for computational, energy, or bandwidth limits, especially on edge devices like IoT sensors. Example: Use lightweight ML models for real-time processing on drones.
Testing and Validation: Simulate environments to test agent behaviour under diverse scenarios. Use formal verification for critical systems (e.g., autonomous vehicles) to ensure safety.
Scalability: Design systems to handle increasing numbers of agents or tasks. Example: A logistics system should support adding more delivery drones without degrading performance.
Human-Agent Interaction: Create intuitive interfaces for human oversight and collaboration. Example: A customer service agent should seamlessly escalate complex queries to human operators.

Challenges and Solutions

Challenge: Unpredictable environments can lead to poor agent performance. Solution: Use robust learning algorithms (e.g., meta-learning) and fallback mechanisms.
Challenge: Multi-agent coordination can cause conflicts or inefficiencies. Solution: Implement game-theoretic approaches or swarm intelligence techniques.
Challenge: Ethical concerns, like bias in decision-making. Solution: Audit training data and incorporate fairness constraints in models.

Real-World Applications

Logistics: Multi-agent systems optimise delivery routes (e.g., Amazon’s warehouse robots).
Healthcare: AI agents assist in diagnostics or patient monitoring.
Gaming: NPCs (non-player characters) act as autonomous agents for immersive experiences.
Smart Cities: Agents manage traffic flow or energy distribution.

Conclusion

Effective agentic system design hinges on clear goals, modular architecture, and robust adaptation mechanisms. By prioritising scalability, safety, and coordination, developers can create systems that thrive in dynamic environments.
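The perception, decision-making, and action modules described above can be wired into a minimal loop. The following Python sketch is purely illustrative: the Agent class, the thermostat goal, and the rule-based policy are invented for this example and are not taken from any agent framework.

```python
# Illustrative sketch of the modular perception -> decision -> action loop.
# All names are hypothetical; real systems would swap in sensors, learned
# policies, and actuators behind the same module boundaries.

from dataclasses import dataclass, field

@dataclass
class Agent:
    goal: float                                  # e.g. a target temperature
    memory: list = field(default_factory=list)   # learning module's feedback store

    def perceive(self, reading: float) -> float:
        """Perception module: turn a raw sensor reading into state."""
        return reading

    def decide(self, state: float) -> str:
        """Decision module: a simple rule-based policy (heuristics may suffice)."""
        return 'heat' if state < self.goal else 'idle'

    def act(self, action: str) -> str:
        """Action module: execute, and log the action for later learning."""
        self.memory.append(action)
        return action

    def step(self, reading: float) -> str:
        """One full agent cycle through the three modules."""
        return self.act(self.decide(self.perceive(reading)))

thermostat = Agent(goal=21.0)
print(thermostat.step(18.5))  # below goal -> 'heat'
print(thermostat.step(22.0))  # above goal -> 'idle'
```

Because each module is a separate method, the policy in decide can be upgraded (say, to a learned model) without touching perception or actuation, which is the modularity benefit the principles above describe.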
As AI advances, agentic systems will play an increasingly central role in automating complex tasks, driving efficiency, and enhancing human capabilities. For further exploration, consider open-source frameworks like ROS (Robot Operating System) for robotics or RLlib for reinforcement learning-based agents. Liked the Insights? Want to dig in deeper? Master the art of building AI agents with large language models using the coordinator, worker, and delegator approach for orchestrating complex AI systems Understand the foundations and advanced techniques of building intelligent, autonomous AI agents Learn advanced techniques for reflection, introspection, tool use, planning, and collaboration in agentic systems Explore crucial aspects of trust, safety, and ethics in AI agent development and applications BUY NOW 📈LATEST DEVELOPMENT Here is the news of the week. OpenAI Debuts o3-Pro Model OpenAI has quietly introduced o3-pro, an advanced "high-reasoning" version of its o-series models designed for research, complex analysis, and coding. Featuring real-time web search, Python execution, and multimodal reasoning, o3-pro starts at $20–$80 per million input/output tokens—a tenfold increase over the standard o3. Preliminary tests indicate improved accuracy in science, business, and writing tasks, despite slightly slower response times. Meta Unveils AI Video Editor and Physical Reasoning AI World Model Meta’s new generative AI video editor transforms any ten-second clip into a customizable playground. Now available on the Meta AI app, Meta.ai, and the Edits mobile app, users can upload clips and apply over 50 preset prompts to alter clothing, settings, lighting, or visual styles within seconds. This feature is free for a limited time, and edited clips can be directly shared on Facebook or Instagram. Additionally, Meta unveiled V-JEPA 2, a sophisticated "world model" that enhances robotic and AI agent reasoning capabilities. 
V-JEPA 2 is trained to recognize patterns in physical interactions, such as the dynamics between people, objects, and their environment. To support community engagement, Meta has open-sourced three new test suites, inviting researchers to rigorously evaluate and accelerate the development of machine common sense. Mistral returns with Magistral Reasoner and Mistral Compute Paris-based Mistral AI has launched Magistral, its first dedicated reasoning model, available in both open-source and enterprise tiers. Magistral prioritizes transparent, step-by-step logical reasoning, deep domain expertise, and extensive multilingual support, directly addressing common criticisms of earlier chain-of-thought models. Complementing this launch, Mistral introduced Mistral Compute, an infrastructure solution providing bundled GPUs, orchestration, and managed services. The offering allows governments, enterprises, and research institutions to operate cutting-edge AI on-premises or within national cloud infrastructures, reducing dependency on U.S.-based cloud providers. OpenAI–Google Cloud Alliance In an unexpected strategic collaboration, OpenAI has partnered with Google Cloud for additional GPU capacity, complementing its existing partnerships with Microsoft Azure and CoreWeave. Finalized in May, this deal helps OpenAI scale rapidly and diversify its supply chain. Google.org Funds Social-Impact Gen-AI for its 2025 GenAI Accelerator program Google.org has selected 20 nonprofits and civic groups for its 2025 Generative AI Accelerator program. Awardees will receive six months of technical mentorship, pro-bono AI expertise, cloud credits, and a portion of a $30 million fund to address critical global issues, from crisis response and children's mental health to combating antimicrobial resistance. Zero-Click EchoLeak Hits Copilot Security researchers at Aim revealed EchoLeak, a novel zero-click exploit targeting Microsoft 365 Copilot. 
The vulnerability allowed malicious markdown emails to bypass prompt-sanitization, triggering background HTTP requests capable of exfiltrating sensitive data without user interaction. Microsoft swiftly patched the vulnerability before its public disclosure, highlighting emerging security risks associated with increasingly autonomous AI systems.
LLM Expert Insights, Packt
24 Oct 2025
9 min read

AI’s Wild Week: AI Faces a $1.5B Reckoning and a Reality Check

Exclusive Invite: Packt’s Nexus 2025 – The Global Agentic AI Event. AI_Distilled #119: What’s New in AI This Week It’s been a week of recalibration across the AI landscape: billion-dollar copyright reckonings, tightening global regulations, layoffs, lawsuits, and bold experiments redefining what “AI-powered” really means. Underneath the noise, a pattern is emerging: the industry is shifting from rapid expansion to structural accountability. Whether it’s Anthropic’s landmark settlement, China’s new AI governance laws, or SAP’s methodical rollout of enterprise agents, the message is clear: "AI’s next phase is about stewardship rather than scale." Dive into this week’s curation for the full picture! LLM Expert Insights, Packt EXPERT INSIGHTS Building Trustworthy Intelligence: The Road to Responsible AI in LLMs In this week’s feature, Ahmed Menshawy and Mahmoud Fahmy, authors of LLMs in Enterprise, unpack how organizations can balance innovation with responsibility when deploying large language models. They outline the four pillars of Responsible AI (RAI): fairness, transparency, accountability, and safety, as the foundation for building trustworthy systems. From bias detection and explainability tools to continuous compliance and regulatory alignment, the article shows how ethics becomes engineering through practical frameworks and real-world safeguards. As global standards like the EU AI Act and NIST RMF tighten accountability, RAI isn’t just good practice; it’s a business imperative. Read the full article on Substack → Special Message from Packt's Events Team: This November, the world’s top AI Experts from Google, Microsoft, and LangChain are coming together for Packt's Nexus 2025, a two-day live virtual summit for developers, engineers, and AI practitioners ready to build the next generation of intelligent systems. Join the Experts Redefining AI | Live at Nexus 2025. BOOK YOUR SEAT NOW!
Use code: EARLY50 to get 50% discount on the ticket - Exclusive for the AI_Distilled Community 📈LATEST DEVELOPMENT OpenAI launches AI browser that can browse and act for you What happened: OpenAI introduced ChatGPT Atlas, a Chromium-based browser with the ChatGPT assistant built in. It currently supports macOS and offers features like a sidebar for summarising websites, indexing of your browsing history, and an “Agent Mode” that enables the AI to perform tasks like shopping and tab management, all with optional privacy modes for logged-out usage. Why it matters: By integrating LLMs directly into the browser, OpenAI is shifting how we access and interact with the web, from manual searches to conversational and action-based interfaces. This move also elevates questions of privacy, data control, and the evolving role of browsers as AI-enabled platforms. (Tom’s Hardware) DeepSeek explores AI efficiency with token-to-image compression What happened: Chinese startup DeepSeek unveiled a new model that converts text tokens into images using a vision encoder, a technique that could overcome the “long-context” limits of LLMs. The model, called DeepSeek-OCR, compresses text inputs up to 10× while maintaining about 97% accuracy, sparking discussion across the global AI community. Why it matters: This research could pave the way for LLMs that handle far longer prompts and reasoning chains without massive computational costs. If successful, it would mark a breakthrough in scaling efficiency, one of the biggest challenges in current AI architectures. (South China Morning Post) Anthropic to pay $1.5 billion in landmark copyright settlement What happened: Anthropic has agreed to pay $1.5 billion to authors after using their copyrighted books scraped from sites like LibGen and PiLiMi to train its Claude models without permission. Around half a million authors are eligible for compensation, and Anthropic must also destroy all pirated copies.
Why it matters: The settlement sets a precedent for how AI companies handle copyrighted data, signaling that unlicensed use of creative works now carries real financial risk. It may also push the industry toward formal licensing deals between publishers and AI developers. (Chemistry World) China strengthens AI oversight with new data and safety laws What happened: China’s top legislature is drafting amendments to its cybersecurity law to include stricter AI safety, ethics, and data protection measures. The proposed framework supports AI research while tightening oversight of generative models and content labeling, including mandatory visible and hidden identifiers for AI-generated media. Why it matters: The move signals Beijing’s intent to balance AI growth with tighter governance, aiming to prevent misinformation and data misuse. It also highlights a divergence from U.S. policy; China’s focus is regulation-first, while American firms emphasize commercial deployment. (Business Standard) Study warns of ‘brain rot’ in AI models trained on junk web data What happened: A study by researchers from Texas A&M, the University of Texas at Austin, and Purdue University found that large language models suffer “cognitive decline” when repeatedly trained on low-quality, engagement-driven content. The paper, titled LLMs Can Get Brain Rot!, shows that reasoning accuracy in tested models dropped nearly 20 points, and long-context comprehension fell over 30 points, when fed junk social media data. (Business Standard) Why it matters: The findings underline that data quality directly affects AI reliability and ethics, not just performance.
Models exposed to “viral” or superficial web text exhibited reasoning shortcuts, overconfidence, and personality drift—effects researchers call “persistent representational decay.” The paper urges developers to treat data hygiene as a core AI safety issue, recommending cognitive audits and stricter content filtering during training. (arXiv) OpenAI’s South Korea blueprint envisions AI-led economic growth What happened: OpenAI released an Economic Blueprint for South Korea, outlining policy recommendations to scale AI adoption through partnerships with Samsung, SK, and the Ministry of Science and ICT. The plan builds on OpenAI’s Stargate initiative, focused on advanced memory and next-gen data centers, and aims to pair sovereign AI development with frontier collaborations. (OpenAI) Why it matters: South Korea is positioning itself as the next global AI powerhouse, leveraging its semiconductor dominance, digital infrastructure, and government-backed funding. The blueprint calls for AI-led growth in exports, healthcare, education, and SMEs, alongside governance sandboxes and data infrastructure standards, framing Korea as both an adopter and standard-setter in safe, scalable AI deployment. (OpenAI) Dell Technologies Capital bets on AI data and new architectures What happened: Dell Technologies Capital (DTC) managing director Daniel Docter and partner Elana Lian outlined their vision for next-generation AI architectures and “frontier data” in a Crunchbase interview. Dell expects $20 billion in AI server shipments by 2026 and has logged five portfolio exits since June, including Meta’s acquisition of Rivos and Salesforce’s acquisition of Regrello. Why it matters: DTC sees AI’s future as a data problem more than a model problem, backing startups innovating in reasoning, safety, and new architectures such as state-space models for long-context and voice AI.
The firm’s focus spans from silicon to applications, reflecting how enterprise AI is now driven by infrastructure, not hype. (Crunchbase) Google launches Skills platform with 3,000 AI courses What happened: Google unveiled Google Skills, a unified learning hub offering nearly 3,000 AI and technical courses from Google Cloud, DeepMind, and Grow with Google. The platform features hands-on labs powered by Gemini Code Assist, gamified progress tracking, and credentials ranging from skill badges to professional certificates. (Analytics India Magazine) Why it matters: As demand for AI talent accelerates, Google’s platform could play a central role in bridging global workforce gaps, especially by offering free access to students, nonprofits, and developers. It emphasizes applied, hands-on learning rather than passive video courses, signaling how tech giants are retooling education to meet enterprise AI demand. (Analytics India Magazine) Elon Musk says AI will take every job and humans will be free to grow vegetables In his latest comments on X, Elon Musk declared that “AI and robots will replace all jobs.” Far from a dystopian warning, Musk argued this shift could liberate humanity from the need to work, likening future labor to an optional hobby such as “growing your own vegetables instead of buying them from the store.” The remark came in response to reports about Amazon’s plan to replace over 160,000 jobs with robots by 2027. While his statement reignited debates about automation anxiety, Musk framed it as an opportunity for universal income and post-labor fulfillment rather than economic ruin. (mint) Build an agent with function calling in GPT-5 What you’ll learn: a practical walk-through of agent design, from defining tool schemas and wiring up function calls to implementing a working web-search agent with Tavily, complete with environment setup, code, and a clear loop for handling function outputs vs direct replies.
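The loop the tutorial describes, where the model either answers directly or requests a function call that the agent dispatches, can be sketched without any real API. In this hedged Python sketch both the model and the search tool are stubs: fake_model_reply and web_search are invented names, and no actual GPT-5 or Tavily call is made.

```python
# Hypothetical sketch of a function-calling agent loop.
# The stubbed fake_model_reply stands in for a real LLM API; web_search
# returns canned text instead of querying a real search service.

import json

def web_search(query: str) -> str:
    """Canned stand-in for a real web-search tool."""
    return f"Top results for: {query}"

TOOLS = {"web_search": web_search}   # tool registry, keyed by schema name

def fake_model_reply(prompt: str) -> dict:
    # A real model decides this itself; here a "search:" prefix triggers a tool call.
    if prompt.startswith("search:"):
        return {"tool": "web_search",
                "arguments": json.dumps({"query": prompt[len("search:"):]})}
    return {"content": f"Direct answer to: {prompt}"}

def run_agent(prompt: str) -> str:
    reply = fake_model_reply(prompt)
    if "tool" in reply:                       # function-call branch
        args = json.loads(reply["arguments"]) # arguments arrive as a JSON string
        return TOOLS[reply["tool"]](**args)   # dispatch to the registered tool
    return reply["content"]                   # direct-reply branch

print(run_agent("search:agent frameworks"))
print(run_agent("hello"))
```

The essential shape, parsing JSON arguments and dispatching through a tool registry, carries over directly once the stub is replaced by a real chat-completion call.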
If you’ve been wanting to move from prompts to real actions, bookmark this and try the tutorial end-to-end: (Towards Data Science) Look beyond LLMs to build the next generation of AI AI veteran Dr. Lance Eliot argues that true progress toward AGI will come from exploring new paradigms, from neuro-symbolic and embodied AI to human-centered and quantum approaches, rather than scaling today’s language models. If you care about where the next real breakthroughs will emerge, this piece is your roadmap to what comes after generative AI: (Forbes) Built something cool? Tell us. Whether it's a scrappy prototype or a production-grade agent, we want to hear how you're putting generative AI to work. Drop us your story at nimishad@packtpub.com or reply to this email, and you could get featured in an upcoming issue of AI_Distilled.
LLM Expert Insights, Packt
06 Jun 2025
9 min read

📬 Don’t Miss This Week’s AI Highlights (Your Shortcut to Smart)

From Digit’s delivery test to Gemini 2.5’s native audio and ChatGPT-powered productivity—this week’s AI_Distilled #98: What’s New in AI This Week Join the live "Building AI Agents Over the Weekend" Workshop starting on June 21st and build your own agent in two weekends. In this workshop, the instructors will guide you through building a fully functional autonomous agent and show you exactly how to deploy it in the real world. BOOK NOW AND SAVE 35% Use Code AGENT25 at checkout. Spots are limited. Book now to SAVE 35% (Valid till 8th June 2025) This month is buzzing with AI innovation—from can’t-miss conferences to game-changing GenAI use cases. Whether you're looking to level up your skills, explore new tools, or stay ahead of the curve, we've got you covered. LLM Expert Insights, Packt In today's issue: 🧠 Expert Deep Dive: Valentina Alto explores real-world GenAI use cases—from code and content to campaigns and daily life. 📅 June Conference Watch: Your curated guide to the top AI/LLM conferences this month—CVPR, ICML, ACL, and more. 🎯 Productivity Reimagined: From GTM strategy to custom workouts, see how ChatGPT reshapes personal and professional workflows. 🔊 Gemini 2.5 Gets Audio: Google DeepMind’s latest model understands tone, languages, and screen-shared content. 📦 Amazon’s Humanoid Robot: Digit enters delivery trials—redefining warehouse automation and last-mile logistics. 🔐 OpenAI Boosts Security: A new vulnerability disclosure framework sets industry standards for AI integrity. 🚫 DeepSeek Faces Criticism: China’s newest model sparks global concern with aggressive political censorship. ⚡ Nvidia Dominates MLPerf: Blackwell GPUs set new training records, proving unmatched performance in AI workloads. 📈UPCOMING EVENTS JUNE'S MUST ATTEND AI/LLM CONFERENCES Breakthroughs in AI are made possible through years of study, experimentation, and research that eventually shape the mainstream.
Whether you're a researcher pushing the boundaries of machine learning, a developer building with generative AI, or a leader shaping enterprise strategy, this handpicked list of the top conferences in 2025 will help you stay connected to the pulse of innovation.

1. CVPR 2025 – IEEE/CVF Conference on Computer Vision and Pattern Recognition
Dates: June 11–15, 2025
Location: Music City Center, Nashville, TN, USA
Cost: In-person – General: $900; Student: $810; IEEE/CVF Members ($900 for professionals, $675 for students)
Virtual – General: $215; Student: $125; IEEE/CVF Members ($180 for professionals, $100 for students)
Focus: Computer vision, multimodal AI, LLMs in vision tasks
Website: CVPR 2025 Conference

2. ICLAD 2025 – IEEE International Conference on LLM-Aided Design
Dates: June 26–27, 2025
Location: Paul Brest Hall, Stanford University, Stanford, CA
Cost: In-person only – General: $600; Student: $410; IEEE/CVF Members ($500 for professionals, $350 for students)
Focus: Utilizing large language models to enhance design processes in circuits, software, and computing systems
Website: International Workshop on LLM-Aided Design

3. ICML 2025 – International Conference on Machine Learning
Dates: July 13–19, 2025
Location: Vancouver Convention Center, Vancouver, Canada
Cost: In-person – General: $1,365; Student: $1,030
Virtual – General: $275; Student: $200
Focus: Machine learning theory and practice, generative AI, LLMs
Website: ICML 2025 Conference

4. ACL 2025 – 63rd Annual Meeting of the Association for Computational Linguistics
Dates: July 27 – August 1, 2025
Location: Vienna, Austria
Cost: In-person – General: $1,125; Academic: $800; Student: $425 + ACL membership fee ($100 for professionals, $50 for students)
Virtual – General: $550; Academic: $400; Student: $250 + ACL membership fee ($100 for professionals, $50 for students)
Focus: Natural language processing, large language models, language generation
Website: ACL 2025

5.
NeurIPS 2025 – Conference on Neural Information Processing Systems Dates: December 2–7, 2025 Location: San Diego Convention Center, San Diego, CA, USA Cost: In-person - General: $1000; Academic: $800; Student: $375 Nature: Virtual - General: $275; Academic: $200; Student: $50 Focus: Advanced ML research, LLMs, multimodal AI Website: NeurIPS 2025 Conference EXPERT INSIGHTS FROM TEXT TO TECH: THE MANY USE CASES OF GENERATIVE AI The hype around GenAI and how it enhances productivity shows no signs of slowing down. Just as previous generations shifted from Xeroxing to Googling, we now find ourselves firmly in the era of “Ask ChatGPT.” GenAI finds applications in fields ranging from image synthesis and text generation to music composition, marketing content, data analysis, coding, and countless other tasks that, until recently, required specialized expertise. In this issue, we spotlight just a few of the many real-world applications of GenAI, using OpenAI’s ChatGPT as our lens. Here are four use cases from one of our best-selling books, Practical Generative AI with ChatGPT, written by our star author Valentina Alto. 1. Daily assistant: ChatGPT is an excellent tool for boosting your day-to-day activities, such as grocery shopping, meal planning, and workouts, among many other tasks. Take, for example, the following prompt: Generate a 75’ workout routine for strength training. My goal is increasing my overall strength and also improving flexibility. I need a workout for the upper body only divided by the muscle group. Make it in a table format with # of reps and # of series. Make sure to incorporate some rest as well. Here is a sample workout plan that ChatGPT might generate for you: 2. Creating content: You can use ChatGPT to craft emails, create social media posts, write blogs and articles, assist with proofreading, perform translations, analyze documents, or even adjust the tone of your content: whether you want it to be formal, quirky, casual, or sarcastic.
Take a look at ChatGPT’s sarcastic translation of an Italian text: 3. Coding assistant: The primary capability you should leverage is ChatGPT’s code generation. From writing a simple function to creating the skeleton of a game, ChatGPT can provide enough building blocks to get started. You can also use it to suggest code optimizations, explain errors, and debug your existing code. Additionally, it can help generate documentation, improve code explainability, and even assist in understanding the structure of a neural network. Take, for example, the following CNN model: If you ask ChatGPT to explain this model, it may respond as follows: 4. Design marketing campaigns: Suppose you have a new product and need a go-to-market (GTM) strategy. You can ask ChatGPT to help you draft an initial plan. Then, by iteratively refining your prompts, you can request suggestions for the product name, marketing hook, target audience research, unique value proposition, sales channels, pricing, SEO keywords, and more. You can even ask it to generate product launch posts. Here are some of the prompts Valentina experimented with in her book while developing a GTM strategy for eco-friendly socks. Generate 5 options for a catchy product line name Generate 3 slogans for the “GreenStride” name. They should be motivating and concise. What kind of target audience should I address with the promotion of GreenStride socks product line. What could be the best channel to reach the segments identified above Give me three concise suggestions on how to make my socks line GreenStride outstanding and unique in a competitive market Generate a product description (max 150 words) for GreenStride socks line using unique differentiator you listed above. It should be attention-grabbing and effective, as well as SEO optimized. List also the SEO keywords you used to finish. What could be the fair price of my socks line I want to generate an Instagram post to announce the launch of GreenStride socks. 
Write a post (max 150 words) including the unique features and differentiators mentioned above, as well as relevant hashtags. Liked the Insights? Want to dig in deeper? Beyond the four use cases we’ve spotlighted in this issue, the book Practical Generative AI with ChatGPT, by Valentina Alto, introduces generative AI and its applications, focusing on OpenAI’s ChatGPT. It covers prompt engineering, daily productivity use cases, domain-specific applications for developers, marketers, and researchers, and the creation of custom GPTs using the GPT Store, enabling specialized assistants without coding, powered by personalized instructions and tools. BUY NOW 📈LATEST DEVELOPMENT Let’s get right into it. Google DeepMind Introduces Gemini 2.5 with Native Audio Capabilities Google DeepMind has launched Gemini 2.5, now capable of processing real-time audio and video. The model can interpret screen-shared content, respond to tone and background noise, and supports over 24 languages, making it more contextually aware and interactive than ever before. Amazon to Test Humanoid Robots for Package Deliveries The Information has reported that Amazon is preparing pilot tests of Agility Robotics' bipedal humanoid robot, Digit, for use in logistics and package handling. Designed to work safely in spaces designed for humans, Digit is expected to automate repetitive warehouse tasks and even assist in last-mile delivery operations. OpenAI Launches Coordinated Vulnerability Disclosure Framework OpenAI has introduced an “Outbound Coordinated Vulnerability Disclosure” policy to responsibly report security issues it uncovers in external systems. This move aims to bolster security standards and transparency across the tech ecosystem. DeepSeek’s New AI Sparks Free Speech Concerns Chinese AI developer DeepSeek has triggered global criticism for its model’s extreme content filtering. 
Users attempting to query politically sensitive topics, like Tiananmen Square or Taiwanese independence, are met with complete denials, spotlighting a stark divide in global AI moderation norms. Nvidia Blackwell Chips Dominate New MLPerf Benchmarks Nvidia’s Blackwell GPUs dominated the latest MLPerf training benchmarks, delivering double the performance of previous H100 chips. These results highlight Blackwell’s efficiency in training large AI models with fewer GPUs, reduced energy use, and lower costs, solidifying Nvidia’s leadership in AI hardware and accelerating industry-wide adoption of its new architecture. Kubernetes for Generative AI Solutions 40% Off on eBook + 20% Off on Paperback for the next 48 hours 📢 If your company is interested in reaching an audience of developers, technical professionals, and decision makers, you may want to advertise with us. If you have any comments or feedback, just reply back to this email. Thanks for reading and have a great day! That’s a wrap for this week’s edition of AI_Distilled 🧠⚙️ We would love to know what you thought—your feedback helps us keep leveling up. 👉 Drop your rating here Thanks for reading, The AI_Distilled Team (Curated by humans. Powered by curiosity.)

LLM Expert Insights, Packt
17 Oct 2025
8 min read

OpenAI to allow erotica in ChatGPT

SamA’s ambitions to scale OpenAI know no bounds AI_Distilled #118: What’s New in AI This Week Welcome to AI Distilled, where we brew down the week’s AI news into a nutty blend. This week’s cup is overflowing – from OpenAI’s big spending (and ahem spicy new features) to other tech giants’ AI moves. Enjoy the sip! LLM Expert Insights, Packt EXPERT INSIGHTS Top 5 Frameworks for Building AI Agents (2025) AI agents are no longer sci-fi—they’re the witty coworkers of the future, ready to browse the web, crunch data, and even plan tasks autonomously. But behind every great AI agent is a great framework. Here’s our take on the top five frameworks for building AI agents, ranked on ease of use, popularity, community love, industry adoption, flexibility, and yes, cost. Buckle up for a quick tour of our top five picks this year. LangChain – The Versatile Orchestrator Why it’s #1: LangChain is the OG of agent frameworks and has essentially become the Swiss Army knife for LLM-powered applications. It’s an open-source toolkit that makes it easy to connect large language models to tools, data, and prompts (Hyperstack). With extensive integrations and modular abstractions, LangChain simplifies complex AI workflows so developers can focus on creativity over plumbing (Skim AI). No wonder it’s wildly popular – an industry guide notes LangChain’s “massive community (80K+ GitHub stars)…and proven enterprise adoption” (ampcome) as key to its gold-standard status. It’s flexible enough for everything from chatbots to autonomous task agents. Ease of use: High – thanks to great docs and a huge community. Learning curve: Mild, especially with so many examples out there. As co-founder Harrison Chase puts it, agents are like digital labor that can use tools and act autonomously – and LangChain gives your AI labor force the training it needs to excel. LangGraph – Advanced Multi-Agent Workflows Why it’s #2: If LangChain is the toolkit, LangGraph is the control room.
Built as an extension of LangChain, LangGraph introduces a graph-based approach to orchestrate multiple agents with stateful memory. In simpler terms, it lets you design complex workflows as nodes and edges – perfect for scenarios where several AI agents must collaborate or follow conditional branches. This precision and control make LangGraph ideal for intricate decision-making systems or simulations that go beyond linear chats. Flexibility: Very high – you can choreograph agents like a director managing an ensemble cast. Popularity: Growing fast (it’s LangChain’s brainy younger sibling). Learning curve: Steeper – you’ll need to think in graphs, which might tie your brain in knots at first. But for those needing detailed orchestration and debugging of multi-agent setups, LangGraph elevates LangChain to new heights. It’s like going from driving a car to flying a plane – more power but requires more skill. CrewAI – The Team Player Why it’s #3: CrewAI is the up-and-coming startup darling of agent frameworks, focused on making multi-agent systems as easy as forming a superhero team. It mimics human team dynamics, letting you spin up a crew of agents where each has a role (researcher, planner, coder, etc.) and they collaborate to get the job done (IBM). The API is clean and beginner-friendly, so you can get a multi-agent prototype running faster than assembling an IKEA chair. One guide describes CrewAI as an innovative agentic framework that empowers the creation of collaborative, autonomous AI agents, working together to achieve complex goals (Medium). Ease of use: Excellent – minimal setup, sensible defaults. Popularity: Rapidly growing; it’s independent of LangChain, built from scratch, and gaining fans for its simplicity (GitHub). CrewAI’s secret sauce is quick integration of tools and a focus on real-world workflows (think AI agents acting like a coordinated Slack team). It does sacrifice some flexibility for simplicity – this opinionated design means advanced users might hit limits in customization.
But for many, having your personal AI Avengers working in harmony is well worth it. Microsoft Semantic Kernel – The Enterprise Whisperer Why it’s #4: From Microsoft’s R&D labs comes Semantic Kernel (SK), the framework that bridges AI with the enterprise world. SK integrates LLM-based skills into traditional software, making it a favorite for companies that want AI smarts without rebuilding their stack. It’s designed for .NET and Python, meaning you can slot it into your existing apps with ease. Think of SK as the middleware that helps AI agents talk to business systems (databases, CRMs, Office 365, you name it). Its strengths include memory retention and context management (great for virtual assistants that need to remember conversations) and robust security and compliance features for corporate use (Analytics Vidhya). Popularity: Solid in enterprise circles (less splashy on GitHub stars but backed by Microsoft’s heft). Ease of use: Moderate – if you’re a .NET developer, you’ll feel at home; others may need to make adjustments. Flexibility: Moderate – not as many out-of-the-box agents as LangChain, but you can combine it with custom code easily. In short, Semantic Kernel is a reliable, security-conscious framework you bring home to meet the CIO. Microsoft AutoGen – The Automation Maestro Why it’s #5: AutoGen is like the orchestral conductor of AI agents, straight from Microsoft Research. It enables the creation of multiple specialized agents that chat and cooperate to solve tasks – essentially turning complex problems into a team conversation. AutoGen shines in scenarios like code generation, cloud operations, or any heavy-duty project where you’d want a swarm of AI agents each doing what they’re best at. It’s open-source and was completely redesigned in v0.4 to boost robustness and scalability, incorporating feedback from early users (Microsoft). Microsoft describes AutoGen as an open-source framework for building AI agents… easy-to-use and flexible… accelerating development of agentic AI.
Ease of use: Medium – simpler than building multi-agent systems from scratch, but you’ll still invest time to configure roles and communications. Flexibility: High – it’s event-driven and asynchronous under the hood, allowing complex workflows and even human-in-the-loop oversight. The catch is a steeper learning curve and more involved setup compared to lightweight frameworks like CrewAI. But if you need an enterprise-grade, large-scale automation toolkit, AutoGen is a powerhouse ready to conduct your AI orchestra. AutoGen comes with neat features like AutoGen Studio (a no-code interface) and strong logging/error handling for production-grade deployments. Harrison Chase will share a deep dive on LangChain and agent frameworks. Join him at Packt's flagship conference - GenAI Nexus 2025, happening on Nov 20-21 (Virtual). KNOW MORE ABOUT HARRISON CHASE'S SESSION Master AI in 16 hours & Become Irreplaceable before 2025 ends! 🧠 Live sessions - Saturday and Sunday 🕜 10 AM EST to 7 PM EST SAVE YOUR SPOT NOW 📈LATEST DEVELOPMENT ChatGPT Gets Spicy, OpenAI Makes Bold Moves OpenAI is loosening ChatGPT’s tie and letting it have some fun. An upcoming update will allow verified adults to engage in erotic role-play conversations with ChatGPT. Looks like ChatGPT will soon flirt and sext within safety limits. But mental health experts, professionals, and parents have called out this move, citing its potential psychological impact on individuals and the safety of children. OpenAI CEO Sam Altman made this announcement in his recent X post. To counter these concerns, OpenAI has formed a well-being council. Eight experts have joined OpenAI’s Expert Council, who will advise on healthy AI interactions, teen safety, and guardrails for ChatGPT/Sora—building on parental controls work with ongoing check-ins. Salesforce and OpenAI just pulled a double shot of synergy—bringing ChatGPT into Agentforce 360 and Slack. Check out this announcement.
In another development, OpenAI and Sur Energy sign an LOI for a clean-energy Stargate data center in Argentina after talks with President Milei, alongside OpenAI for Countries plans to modernize government workflows. Learn more about this collaboration here. Apple harvests talent while Meta brews it Meta has been raiding Apple’s engineering pantry for quite a while. In a new poaching move, Ke Yang, who has been driving Apple’s AI-driven search project, has stepped down from his position as head of the team called Answers, Knowledge and Information, or AKI, reports Bloomberg. Microsoft’s Midjourney rival Microsoft unveiled MAI-Image-1, its first homegrown text-to-image model. It’s already posting impressive benchmark scores, aiming to break our Midjourney addiction. Microsoft’s AI strategy is clearly moving beyond just OpenAI partnerships, as it hustles to build its own creative AI arsenal. Go check it out. Google’s AI face-lift Google shipped a bundle of new AI features. Notably, Google Meet now offers AI-powered virtual makeup that tracks your face in real time – finally catching up to Zoom and Teams with filters that stay put when you move. Meanwhile, Google’s also injecting its image-gen tech (“Nano Banana”) into Search and rolling out smarter Gmail scheduling. AI glam and productivity, all in one go. Learn more about Google’s touch-up here. NVIDIA’s mini supercomputer NVIDIA just rolled out a pint-sized powerhouse. Dubbed DGX Spark, this tiny AI supercomputer delivers 1 petaflop of performance in a lunch-box form factor. CEO Jensen Huang hand-delivered one to OpenAI’s Greg Brockman, because nothing says friendship like a supercomputer on your doorstep. It’s big compute in a small package – and everyone in AI wants one. Here is NVIDIA’s official announcement. Built something cool? Tell us. Whether it's a scrappy prototype or a production-grade agent, we want to hear how you're putting generative AI to work.
Drop us your story at nimishad@packtpub.com or reply to this email, and you could get featured in an upcoming issue of AI_Distilled. 📢 If your company is interested in reaching an audience of developers, technical professionals, and decision makers, you may want to advertise with us. If you have any comments or feedback, just reply back to this email. Thanks for reading and have a great day! That’s a wrap for this week’s edition of AI_Distilled 🧠⚙️ We would love to know what you thought—your feedback helps us keep leveling up. 👉 Drop your rating here Thanks for reading, The AI_Distilled Team (Curated by humans. Powered by curiosity.)
LLM Expert Insights, Packt
10 Oct 2025
4 min read

AI on Autopilot

Google’s Gemini 2.5, OpenAI’s AgentKit, and the Rise of Self-Driving Software AI_Distilled #117: What’s New in AI This Week Welcome to this week’s AI Distilled. The machines are no longer just thinking, they’re doing. From Google’s new Gemini 2.5 that clicks, scrolls, and speaks for itself, to OpenAI’s AgentKit empowering developers to build intelligent digital workers, the future of automation is taking shape fast. Buckle up as the AI race continues in top gear. LLM Expert Insights, Packt 📈LATEST DEVELOPMENT Google’s AI That Clicks, Scrolls, and Speaks for Itself Google just dropped Gemini 2.5 Computer Use, an AI that doesn’t just answer, it acts. It can now operate apps and websites like a digital assistant on caffeine, outperforming rivals on UI-control benchmarks while keeping latency low. Safety guardrails ensure it won’t delete your life with one click. Meanwhile, Google Research is whispering to the future with Speech-to-Retrieval, which skips text entirely to fetch info straight from your voice. Goodbye typing, hello (talking) Google! OpenAI’s AgentKit & GPT-5 Pro: Building Agents and Locking Ecosystems OpenAI’s latest drop, GPT-5 Pro, got smarter and strategic. The powerhouse model, now available via API, flexes advanced reasoning skills tailor-made for building intelligent AI agents. And the latest entrant, the AgentKit, is an ultimate developer toolkit featuring Agent Builder for drag-and-drop workflows, ChatKit for sleek chat UIs, and enhanced Evals to keep those agents in line. The catch? OpenAI’s ecosystem is becoming easier to build inside, harder to leave. Grok Imagine 0.9: xAI Gets Cinematic Elon Musk’s xAI just dropped Grok Imagine 0.9, now crafting silky-smooth, hyperreal AI videos with spot-on motion and sound. Hollywood, meet your new algorithmic auteur. ElevenLabs Lets Voice AI Speak Freely ElevenLabs just open-sourced its voice agent UI toolkit, giving developers plug-and-play vocal cords for their apps. 
Now, anyone can make their AI talk the talk, literally. Anysphere Codes Its Way to a $30B Orbit Anysphere, maker of the dev-favorite Cursor, is reportedly eyeing a dazzling $30 billion valuation. Meanwhile, as the investors circle, Cursor just leveled up with Plan Mode, an AI project manager that maps massive codebases like a pro. Developers get strategy, structure, and swagger; Silicon Valley gets another hot ticket. Join Snyk on October 22, 2025 at DevSecCon25 - Securing the Shift to AI Native Join Snyk October 22, 2025 for this one-day event to hear from leading AI and security experts from Qodo, Ragie.ai, Casco, Arcade.dev, and more! The agenda includes inspiring Mainstage keynotes, a hands-on AI Demos track on building secure AI, Snyk's very FIRST AI Developer Challenge and more! Save your spot now EXPERT INSIGHTS The Five Modes Every Business Leader Should Know The world of Artificial Intelligence is evolving at a pace that often leaves decision-makers overwhelmed. Every week, new tools, frameworks, and buzzwords emerge, making it hard to separate what’s truly valuable from what’s merely hype. Today’s leaders often keep AI at arm’s length, uncertain how to handle its invisible power. To move beyond hesitation, we must stop viewing AI as a collection of tools and start understanding it as a set of skills. This shift—from thinking in terms of technology to thinking in terms of capabilities—is what allows organizations to unlock AI’s real potential. READ FULL ARTICLE Built something cool? Tell us. Whether it's a scrappy prototype or a production-grade agent, we want to hear how you're putting generative AI to work. Drop us your story at nimishad@packtpub.com or reply to this email, and you could get featured in an upcoming issue of AI_Distilled. 📢 If your company is interested in reaching an audience of developers, technical professionals, and decision makers, you may want to advertise with us.
If you have any comments or feedback, just reply back to this email. Thanks for reading and have a great day! That’s a wrap for this week’s edition of AI_Distilled 🧠⚙️ We would love to know what you thought—your feedback helps us keep leveling up. 👉 Drop your rating here Thanks for reading, The AI_Distilled Team (Curated by humans. Powered by curiosity.)

LLM Expert Insights, Packt
03 Oct 2025
4 min read

Is LeCun quitting Meta?

Opera’s AI browser, OpenAI’s social app, Meta’s AI lab tensions AI_Distilled #115: What’s New in AI This Week Welcome to this week’s newsletter! From groundbreaking AI tools and new social platforms to TikTok’s uncertain U.S. journey and discoveries stretching to the edges of our solar system, we’ve gathered the most impactful stories you need to know. Dive in to catch up on the innovations, business moves, and cosmic milestones shaping our world today. LLM Expert Insights, Packt 📈LATEST DEVELOPMENT Opera’s AI Browser Neon Opera launched a $19.99/month AI-focused browser, Neon, for heavy AI users. It offers automated task workflows (“Cards”), organized AI chat workspaces (“Tasks”), and even code generation—entering a crowded field as Chrome, Edge and others add similar AI features. You can join the waitlist here. OpenAI’s New Social App OpenAI unveiled Sora, an invite-only social media app that generates a TikTok-like video feed using AI. The Sora app is powered by OpenAI’s recently launched Sora 2 video generation model. Sora’s standout “Cameo” feature lets users insert video clips of real people (like themselves) as characters into the AI-generated content. Take a look at the official announcement here. Advance your technical career with actionable, practical solutions | AWS re:Invent 2025 Las Vegas Transform your skills at AWS re:Invent 2025. Master new AWS services, join immersive workshops, and network with top cloud innovators at AWS re:Invent 2025. As a re:Invent attendee, you'll receive a 50% discount code towards any AWS Certification exam. Our 2025 event catalog is now available! EXPLORE THE EVENT Tensions at Meta’s AI Lab? The Information reported that the father of deep learning and a longtime Meta AI executive considered quitting as leadership imposed stricter controls on publishing research, angering staff. The clash underscores internal tensions in Meta’s AI group as it adjusts to new management and priorities. Read The Information’s report here. TikTok’s U.S.
Lifeline A new U.S. executive order has paved the way for TikTok to continue operating domestically after years of uncertainty. But the proposed deal is complex and already facing political pushback from Washington lawmakers. Here is the executive order. That’s all for this week’s roundup. Stay curious, stay informed, and join us again next week for more news. EXPERT INSIGHTS Why is DeepSeek different from popular SOTA LLMs? The landscape of large language models (LLMs) is shaped by both technological innovation and strategic positioning. While dominant players such as OpenAI, Google, and Anthropic continue to push the boundaries of proprietary models, DeepSeek has emerged as a formidable open-source contender. In this article, our experts and authors of our upcoming book DeepSeek Essentials—Andy Peng, Alex Strick van Linschoten, and Duarte Carmo—reflect on how DeepSeek differs from proprietary systems like GPT-4.5, Claude 4, and Gemini 2.5 Pro. One of the key reasons why DeepSeek stood out was its philosophy, which diverges from that of other model creators. Proprietary models are typically guarded through closed APIs, restrictive licenses, and opaque training methods. They are engineered for safety and monetization... READ FULL ARTICLE Built something cool? Tell us. Whether it's a scrappy prototype or a production-grade agent, we want to hear how you're putting generative AI to work.
Drop us your story at nimishad@packtpub.com or reply to this email, and you could get featured in an upcoming issue of AI_Distilled. 📢 If your company is interested in reaching an audience of developers, technical professionals, and decision makers, you may want to advertise with us. If you have any comments or feedback, just reply back to this email. Thanks for reading and have a great day! That’s a wrap for this week’s edition of AI_Distilled 🧠⚙️ We would love to know what you thought—your feedback helps us keep leveling up. 👉 Drop your rating here Thanks for reading, The AI_Distilled Team (Curated by humans. Powered by curiosity.)

LLM Expert Insights, Packt
26 Sep 2025
8 min read

Tech Week in Brief: Glasses, GPUs & Giant Leaps

Meta’s new specs, OpenAI’s big spend, and other AI adventures AI_Distilled #115: What’s New in AI This Week Hello there! Welcome to your weekly roundup of all things newsworthy in tech. Grab your coffee, settle in, and let’s dive into the highlights from the past week. LLM Expert Insights, Packt 📈LATEST DEVELOPMENT Google puts Chrome on an AI caffeine rush Google’s Chrome browser got 10 new AI-powered features led by Google’s Gemini model. It can now summarize webpages, explain multiple tabs, and even autonomously book appointments or grocery orders (no kidding). The browser also gains a freakishly good memory—just ask where you saw that walnut desk last week, and Chrome will pull up the page. Efficiency level: Max. ChatGPT falls for a Gmail trick, OpenAI marks it as resolved Even AI isn’t safe from sneaky hacks. Radware revealed a crafty prompt injection attack that tricked OpenAI’s Deep Research agent into exfiltrating Gmail data. In plain speak: a bad email with hidden instructions made the AI steal email secrets. OpenAI patched the hole with more safety checks, but it’s a reminder that letting an AI rummage through your inbox can be a minefield of surprises (ouch). Although the issue was reported in June, OpenAI has reportedly marked it as resolved. Advance your technical career with actionable, practical solutions | AWS re:Invent 2025 Las Vegas Transform your skills at AWS re:Invent 2025. Master new AWS services, join immersive workshops, and network with top cloud innovators at AWS re:Invent 2025. As a re:Invent attendee, you'll receive a 50% discount code towards any AWS Certification exam. Our 2025 event catalog is now available! EXPLORE THE EVENT Luma AI’s Ray3: Lights, Camera, AI! Startup Luma AI unveiled Ray3, an AI toolkit that brings Hollywood-level wizardry to your phone. Integrated with Adobe, Ray3 can generate HDR video (10-, 12-, 16-bit color) and even turn boring SDR footage into vivid HDR.
Its built-in reasoning engine lets creators sketch out camera movements or scene edits, and the AI dutifully follows multi-step instructions. It’s like having a tiny James Cameron in your pocket, minus the ego. Meta’s smart glasses At Meta’s Connect 2025 event, Zuck & Co. pivoted from metaverse musings to real hardware. While the glasses encountered problems during the live demo due to a race-condition bug, the future is far from bleak. The Ray-Ban Meta smart glasses are priced at $799, rocking a microLED display that projects messages, maps, and more right in your field of view. You control these AR specs with a neural wristband (bye-bye, clunky controllers). It’s the closest thing to wearing Tony Stark’s tech. OpenAI’s Manhattan project for AI Sam Altman is going big. OpenAI is teaming up with Oracle, SoftBank, and Nvidia to build out an AI super-infrastructure that makes current data centers look like Lego blocks. They’re planning five new U.S. data centers (drawing as much power as seven nuclear reactors’ worth of energy!) and exploring a bold new “GPU leasing” deal worth $100B with Nvidia. In short, OpenAI wants endless computing power on tap, betting that in the AI race, bigger is better (and necessary). Oracle bets billions on cloud AI Larry Ellison must be feeling lucky. Why? Well, Oracle is reportedly close to a $20 billion deal with Meta to host and train Meta’s AI models. This comes right after Oracle’s whopping $300B contract with OpenAI and a new partnership with Elon Musk’s xAI. The strategy? Offer faster, cheaper cloud infrastructure to undercut Amazon and Microsoft. If this pays off, Oracle’s cloud might go from underdog to top dog in the AI era. Bold move, Larry. Musk’s xAI drops a game-changer Elon Musk’s new AI venture, xAI, just launched a model called Grok 4 Fast that claims GPT-5 level smarts at a fraction of the cost. We’re talking near top-tier reasoning benchmarks with 98% lower token costs.
It achieves this by cutting out “thinking overhead” and streamlining how it chews through data. Translation: powerful AI answers, cheap enough to deploy en masse. It’s Musk’s way of saying, “Competition, bring it on.”

Brain implants: Neuralink’s next step
According to a Bloomberg report, Elon’s neuro-lab Neuralink is gearing up for its first human trials this October after getting the FDA’s nod. The company’s implantable chip can translate thoughts to text, initially aimed at helping paralyzed patients communicate. Long term, Musk envisions people using thoughts to control computers and even converse with AI—because typing is so 2020, right? It’s equal parts exciting and sci-fi-level eerie.

Alibaba’s model mega-mix
Not to be outdone, Alibaba unveiled its Qwen3 AI stack with a twist: Mixture-of-Experts (MoE) models at trillion-parameter scale. The system can tap into 512 experts but activates just a handful per query for super efficiency. The end result? Over 10× throughput improvement and support for ridiculously long context (think entire novels in one prompt). Two 80B-parameter models lead the charge—one tuned for chatty assistants, another for complex reasoning. In the AI model arms race, Alibaba just loudly entered the chat.

Microsoft’s developer boost (and cool chips)
Redmond had a productive week too. Microsoft is hunting down pesky legacy code with new Copilot-powered agents that not only find problems in old .NET/Java code but also auto-generate fixes and unit tests, and containerize apps. Early trials showed dramatic wins – an Xbox team cut migration effort by 88%, and Ford saw a 70% reduction in update time. On another front, Windows 11 now ships with built-in support for running AI models (via the ONNX runtime) across CPUs, GPUs, and specialized NPUs from various vendors. And about “cooling chips from the inside out”? Microsoft researchers are exploring liquid cooling inside chips to solve overheating as AI silicon gets hotter (literally).
The future: faster chips that keep their chill.

EXPERT INSIGHTS

Introduction to chunking with GPT-4o
In generative AI workflows, the way data is prepared has a direct impact on model effectiveness. Rather than relying solely on rule-based chunking methods, this tutorial introduces an approach where GPT-4o itself is used to intelligently divide unstructured content into meaningful segments. This strategy supports a Retrieval-Augmented Generation (RAG) system and enables it to retrieve relevant context more effectively.

Why chunking matters in GenAI systems
Traditional chunking methods often split documents on arbitrary rules, such as paragraph breaks or token counts, which may cut through semantically meaningful units. In contrast, intelligent chunking ensures each piece of data carries a coherent message. This is particularly important when chunks are embedded into a vector database like Pinecone for retrieval. If a query surfaces a partial or poorly segmented chunk, the generated response may lack clarity or precision.

Using GPT-4o for semantic chunking
GPT-4o is employed not just for generation but also as a semantic analyzer. The model receives the full unstructured text, such as a company memo or technical note, and is prompted to divide it into logically structured chunks, each roughly 50–100 words in length. This is achieved by setting up a system message instructing the model to act as a chunking assistant, followed by a user message containing the text to split. Consider this system message: "You are an assistant skilled at splitting long texts into meaningful, semantically coherent chunks of 50–100 words each. Split the following text into meaningful chunks..." Once the prompt is issued, GPT-4o returns a response with double newlines separating each chunk. The program parses the response by splitting on these newline markers. The result is a list of discrete, meaningful units that are ready to be embedded.
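The prompt-and-parse loop described above can be sketched in Python. This is a minimal illustration, assuming the official openai Python SDK (v1+) with an OPENAI_API_KEY set in the environment; the helper names semantic_chunks and parse_chunks are our own, not part of any library.

```python
# Sketch of GPT-4o semantic chunking. Assumes the openai Python SDK (>=1.0)
# and an OPENAI_API_KEY in the environment; helper names are illustrative.

SYSTEM_PROMPT = (
    "You are an assistant skilled at splitting long texts into meaningful, "
    "semantically coherent chunks of 50-100 words each. "
    "Separate the chunks with a blank line."
)


def parse_chunks(raw: str) -> list[str]:
    """Split the model's response on double newlines into clean chunks."""
    return [chunk.strip() for chunk in raw.split("\n\n") if chunk.strip()]


def semantic_chunks(text: str, model: str = "gpt-4o") -> list[str]:
    """Ask GPT-4o to segment `text`, then parse the chunked response."""
    # Imported lazily so parse_chunks stays usable without the SDK installed.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment
    response = client.chat.completions.create(
        model=model,
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user",
             "content": "Split the following text into meaningful chunks:\n\n"
                        + text},
        ],
        temperature=0,  # favor deterministic segmentation
    )
    return parse_chunks(response.choices[0].message.content)
```

Each returned chunk can then be embedded and upserted into a vector store such as Pinecone. Note that splitting on double newlines works only because the system prompt explicitly asks the model to separate chunks with a blank line.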
This workflow is especially useful for processing internal company data, like executive summaries or operational notes, where nuance matters. This method shines when the data includes complex thoughts, mixed formats, or narrative elements. For simpler documents like lists or spreadsheets, rule-based chunking might be more efficient. However, for nuanced tasks where meaning spans sentences or paragraphs, GPT-4o’s semantic awareness offers a significant advantage.

By integrating GPT-4o into the chunking process, generative AI systems can store and retrieve content in a more meaningful way. Each chunk becomes a high-value data unit, tailored for precision recall within a RAG pipeline. This intelligent preprocessing step reinforces the larger GenAISys vision: building systems that retrieve not just data, but context-rich, purpose-aligned information.

Built something cool? Tell us.
Whether it's a scrappy prototype or a production-grade agent, we want to hear how you're putting generative AI to work. Drop us your story at nimishad@packtpub.com or reply to this email, and you could get featured in an upcoming issue of AI_Distilled.

📢 If your company is interested in reaching an audience of developers, technical professionals, and decision makers, you may want to advertise with us.

If you have any comments or feedback, just reply to this email. Thanks for reading and have a great day!

That’s a wrap for this week’s edition of AI_Distilled 🧠⚙️ We would love to know what you thought—your feedback helps us keep leveling up.
👉 Drop your rating here

Thanks for reading,
The AI_Distilled Team
(Curated by humans. Powered by curiosity.)