
AI Distilled

LLM Expert Insights, Packt
12 Dec 2025
6 min read

Trump centralizes AI laws, GPT-5.2 launches, Anthropic places a $21B chip bet.

AI regulation, model wars, and massive hardware moves collide this week.

AI_Distilled #126: What’s New in AI This Week

This week, AI felt less like a breakthrough and more like a business plan. Washington flexed, OpenAI fine-tuned, and Anthropic spent like a nation-state. The frontier is getting more expensive to cross.

LLM Expert Insights, Packt

LATEST DEVELOPMENT

Trump moves to block state-level AI laws, centralizing power in Washington

President Donald Trump has signed an executive order designed to stop U.S. states from enacting their own AI regulations, calling local laws a threat to national competitiveness. The order creates an “AI Litigation Task Force” to challenge state rules through lawsuits and allows the Commerce Department to restrict funding for states with conflicting AI policies. Backed by Silicon Valley heavyweights, the move effectively hands AI oversight to federal authorities and marks one of Trump’s most aggressive pushes to consolidate tech governance. For a deeper look at how this reshapes the AI policy battleground, read the full Business Standard story here.

OpenAI pushes ahead with GPT-5.2 as its sharpest model upgrade yet

OpenAI has officially rolled out GPT-5.2, a major upgrade to its ChatGPT model family that arrives after an internal “code red” drive to sharpen performance amid intense competition from Google’s Gemini 3. The new release includes enhanced reasoning, improved coding and long-context handling, and multiple tiers (Instant, Thinking, Pro) aimed at balancing speed, depth, and accuracy across everyday and professional tasks. Early reports suggest the update will roll out first to paid users and is designed to push ChatGPT further into productivity workflows and complex work automation. For the full breakdown of what’s new and how OpenAI is positioning GPT-5.2 against rivals, check out the original report at Reuters.
Anthropic’s massive $21 billion Google TPU order shakes up the AI hardware race

Broadcom disclosed that AI startup Anthropic has placed a $21 billion order for Google’s custom Tensor Processing Units (TPUs), chips designed specifically to accelerate large-model training and inference, signaling one of the largest single compute commitments yet in the AI infrastructure sphere. The deal is tied to Anthropic’s plan to deploy up to one million TPUs, bringing more than 1 gigawatt of AI compute capacity online by 2026, and highlights the shifting dynamics in how next-gen AI labs secure hardware beyond traditional GPUs. It also underscores the rising influence of TPU-optimized systems in challenging Nvidia’s long-standing dominance in AI silicon. Get the full breakdown of what this means here.

Time’s “Architects of AI” take Person of the Year crown

Time magazine has anointed the so-called “architects of AI” (a group of leading technologists and company bosses who built the platforms and infrastructure that defined this era) as its 2025 Person of the Year, spotlighting both their transformative impact and the ethical, social, and economic questions that come with it. The cover story frames AI’s ascendancy as one of the defining global forces, with interviews and context on how these innovators shaped everyday life and industry. For the full perspective on who made the list and why this choice is stirring discussion, check out the full coverage.

Oracle’s CDS spike signals growing investor anxiety about AI debt

Oracle’s credit-default swaps (the cost of insuring its debt against default) have climbed to multi-year highs amid concerns over the company’s heavy borrowing to fund massive AI and cloud infrastructure projects, reflecting growing unease among investors about the sustainability of such debt-fuelled growth. This spike is being read as a broader market signal that confidence in AI-led expansion may be becoming fragile as spending outstrips near-term profit traction.
For a deeper look at why this matters for broader tech credit markets, check out BusinessLine.

AI toys raise safety alarms for kids this holiday season

As AI-enabled toys flood the market for the holidays, children’s safety advocates are increasingly warning parents to think twice before buying them, citing reports that some models can provide inappropriate, unsafe, or harmful content (including instructions for dangerous objects or explicit topics) when interacting with kids. These concerns are backed by new testing and advisory notices highlighting risks to privacy, development, and emotional well-being that come from unregulated chatbot behaviour inside seemingly harmless playthings. To understand the specific toys under scrutiny and what experts recommend, the full NBC News report offers a rundown.

Learn AI tools, agents & automations in just 16 hours (End of Year offer)

Best part? They’re running their Holiday Season Giveaway, and the first 100 people get in absolutely free (it usually costs $395 to attend).

🧠 Live sessions: Saturday and Sunday
🕜 10 AM EST to 7 PM EST

REGISTER HERE FOR $0 (For the first 100 people only)

📈 EXPERT INSIGHTS

Step-by-step architectural walkthrough of our glass-box system

This week’s Expert Insight is drawn from Context Engineering for Multi-Agent Systems by Denis Rothman, a practical guide for architects building transparent, agent-driven AI systems. Rothman, an AI practitioner with decades of experience designing real-world intelligent systems, unpacks how to think beyond prompts and build robust multi-agent context engines from first principles. The following excerpt gives you a step-by-step look inside a glass-box architecture that turns complexity into clarity for AI engineers and system designers.

Step-by-step architectural walkthrough of our glass-box system

Our guide for this walkthrough is Figure 6.2. Each node in the diagram is color-coded to match the corresponding sections in our code.
The new components introduced in this chapter are clearly marked with a [NEW] label. The legend of Figure 6.2 contains a color code for each component of the context engine:

Blue (section 7): The final execution script that the user runs
White (section 6): The engine’s core, containing the main orchestrator, planner, and tracer
Purple (section 5): The agent registry, which acts as the system’s toolkit
Green (section 4): The specialist agents that perform the actual work
Orange (section 3): The helper functions that provide common utilities

READ FULL ARTICLE

Built something cool? Tell us.

Whether it’s a scrappy prototype or a production-grade agent, we want to hear how you’re putting generative AI to work. Drop us your story at nimishad@packtpub.com or reply to this email, and you could get featured in an upcoming issue of AI_Distilled.

📢 If your company is interested in reaching an audience of developers, technical professionals, and decision makers, you may want to advertise with us.

If you have any comments or feedback, just reply to this email. Thanks for reading and have a great day!

That’s a wrap for this week’s edition of AI_Distilled 🧠⚙️ We would love to know what you thought; your feedback helps us keep leveling up. 👉 Drop your rating here

Thanks for reading,
The AI_Distilled Team
(Curated by humans. Powered by curiosity.)
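To make the layering concrete, here is a minimal, hypothetical Python sketch of how those color-coded layers typically fit together: helpers at the bottom, specialist agents above them, a registry as the toolkit, an orchestrator core with a tracer (the "glass box"), and a final run script on top. All names here are illustrative assumptions, not code from Rothman's book.

```python
# Hypothetical sketch of the layered glass-box structure (illustrative names only).

def log(trace, event):               # helper utility (the "orange" layer)
    trace.append(event)

def summarizer_agent(task):          # specialist agents (the "green" layer)
    return f"summary of {task!r}"

def translator_agent(task):
    return f"translation of {task!r}"

# Agent registry: the system's toolkit (the "purple" layer)
REGISTRY = {
    "summarize": summarizer_agent,
    "translate": translator_agent,
}

class Orchestrator:                  # engine core: orchestrator, planner, tracer (the "white" layer)
    def __init__(self, registry):
        self.registry = registry
        self.trace = []              # tracer: every step is recorded, hence "glass box"

    def plan(self, request):
        # Trivial planner: pick agents whose name appears in the request.
        return [name for name in self.registry if name in request]

    def run(self, request):
        results = []
        for step in self.plan(request):
            log(self.trace, f"dispatch -> {step}")
            results.append(self.registry[step](request))
        return results

# Final execution script the user runs (the "blue" layer)
if __name__ == "__main__":
    engine = Orchestrator(REGISTRY)
    print(engine.run("please summarize this report"))
    print(engine.trace)
```

Because the tracer records every dispatch, the system's behavior can be audited after the fact, which is the property that distinguishes a glass-box design from an opaque one.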

LLM Expert Insights, Packt
06 Feb 2026
6 min read

Your Backend Is Go — So Why Isn’t Your LLM Stack?

A Multi-Agent Framework for Go

AI_Distilled #127: What’s New in AI This Week

This week’s edition goes deep where it matters most: how AI agents are actually built and shipped. Our Expert Insight spotlights Go + Eino ADK, a production-hardened multi-agent framework developed at ByteDance, showing how Go developers can design robust, stateful, and collaborative agents without drowning in complexity. Alongside this hands-on deep dive, we track the latest moves reshaping the agent economy, from enterprise platform deals to the escalating compute arms race.

LLM Expert Insights, Packt

EXPERT INSIGHTS

Go + Eino ADK Quickstart: Master Core AI Agent Design Patterns

With thanks to AI Engineer Gerald Parker for his technical review.

Eino ADK, pronounced “I know”, is a multi-agent development framework designed for Go, developed and hardened in real-world use at ByteDance. Its design philosophy is “keep simple things simple, and make complex things possible”. Open-sourced at the start of 2025, Eino’s promise to Go developers is that they can focus on implementing business logic without worrying about underlying technical complexity. In this article, co-written with the team behind Eino, we will discuss:

What Eino ADK is
Core agent patterns in Eino, along with real use cases
Example code for building a simple project manager agent

Introduction to Eino ADK

Agents are quickly becoming the mainstream way to deploy LLMs, from intelligent customer service to automated office work. With them, the following pain points are emerging:

LLMs are not bridged well with business systems, resulting in agents that can only engage in “empty talk.”
Lack of state management causes agents to frequently “forget” when performing tasks.
Complex interactive processes increase development difficulty even further.

Eino ADK was created to provide Go developers with a complete, flexible, and powerful agent development framework that addresses these core challenges head-on.

Recap: What is an Agent?
You can think of an agent as an independent, intelligent entity that can understand instructions, perform tasks, and provide responses, capable of autonomous learning, adaptation, and decision-making. Its main functions include:

Reasoning: An agent can analyze data, identify patterns, and use logic and available information to draw conclusions, make inferences, and solve problems.
Action: An agent takes actions or executes tasks to achieve goals based on decisions, plans, or external inputs.
Observation: An agent autonomously collects relevant information (for example, through computer vision, natural language processing, or sensor data analysis) to understand the context and lay the foundation for informed decision-making.
Planning: An agent can determine necessary steps, evaluate potential actions, and select the best course of action based on available information and expected outcomes.
Collaboration: An agent can work together effectively with others (human or other agents) in complex and dynamic environments.

READ FULL ARTICLE

Packt and Go1 invite you to take a survey on how developers learn

As AI generates more learning content, it is becoming harder to see where expert input really makes a difference. Packt has recently partnered with Go1 to create a short study looking at how developers actually learn today, and when structured courses still matter alongside AI tools. If you work with learning or rely on it to build skills, your perspective would be useful. The survey takes under 5 minutes to complete, and the results will be shared in a study published in March.

TAKE THE SURVEY

📈 LATEST DEVELOPMENT

Snowflake and OpenAI strike $200M deal to power enterprise AI agents - Snowflake has entered a $200 million partnership with OpenAI to embed advanced generative models directly into its Data Cloud, accelerating the rollout of enterprise-grade AI agents.
The collaboration enables customers to build, deploy, and govern AI agents that operate on proprietary data while maintaining security, compliance, and performance guarantees. The deal underscores how data platforms are becoming the control plane for agentic AI inside enterprises.

ServiceNow deepens AI platform strategy with Anthropic partnership - ServiceNow has expanded its AI ambitions through a deeper partnership with Anthropic, integrating Claude models into its workflow automation platform. The goal is to enable more autonomous, reasoning-driven agents across IT operations, customer service, and enterprise workflows. The move positions ServiceNow as a serious contender in the AI-native enterprise software category.

Positron raises $230M to challenge Nvidia’s AI chip dominance - AI hardware startup Positron has raised a massive $230 million Series B to build alternative accelerators optimized for large-scale inference. Backed by major investors, Positron aims to offer lower-cost, energy-efficient chips for data centers overwhelmed by Nvidia’s pricing power. The funding highlights growing investor appetite for breaking Nvidia’s grip on AI compute.

Intel re-enters the GPU arena to take on Nvidia - Intel has confirmed plans to manufacture its own GPUs, signaling a renewed push into a market long dominated by Nvidia. While Intel faces steep competition, the move reflects rising demand for diversified AI hardware supply chains as enterprises seek alternatives amid soaring GPU costs and supply constraints.

Xcode embraces agentic coding with deeper OpenAI and Anthropic integrations - Apple’s Xcode is evolving beyond autocomplete, introducing deeper integrations with OpenAI and Anthropic to support agentic coding workflows. The update enables developers to delegate multi-step coding tasks, refactoring, and reasoning-heavy operations to AI agents directly within the IDE, signaling a shift from assistive AI to collaborative software agents.
SpaceX officially acquires xAI, eyes data centers in space - Elon Musk’s SpaceX has formally acquired xAI, unifying Musk’s AI and aerospace ambitions. The combined entity plans to explore space-based data centers powered by solar energy, positioning orbital infrastructure as a future solution to Earth-bound energy and cooling limits for AI compute. The move blurs the lines between frontier AI, infrastructure, and geopolitics.

How Cisco is building smart systems for the AI age - Cisco outlined its approach to designing AI-ready infrastructure, emphasizing observability, security, and distributed intelligence across networks. Rather than chasing models, Cisco is positioning itself as a foundational layer for AI systems, handling traffic, trust, and orchestration as enterprises deploy agents at scale.

LLM Expert Insights, Packt
13 Feb 2026
5 min read

Frontier Launches. Claude Evolves. Agents Mature.

The real AI competition is now happening in enterprise stacks.

AI_Distilled #128: What’s New in AI This Week

Join the Next Batch of AI Agent Builders (Cohort 3)

Over 120 engineers have already built AI agents through Cohorts 1 & 2. In Cohort 3, you could be one of them. It’s a live, hands-on cohort built for engineers who want to stop watching demos and start shipping real AI agents. In just one weekend, you’ll build systems that Reason → Call Tools → Take Actions → Run Workflows, using LangChain, AutoGen & CrewAI, with live coding and mentor support from experts at Microsoft & Google.

Exclusive for our newsletter readers: as part of our NL community, you get 40% off your seat. Use code AGENT40 (valid for a limited time). If AI agents are on your roadmap, this is your moment. Secure Your Discounted Seat Here. Miss this cohort, and you will have to wait for the next one.

This week, AI moved decisively from experimentation to execution. OpenAI introduced Frontier, signaling a shift toward managed enterprise agent fleets. Anthropic expanded Claude’s real-world capabilities. Governments doubled down on sovereign compute. Meanwhile, new benchmarks like AIRS-Bench are pushing scientific evaluation of autonomous systems.
The race is no longer about model size; it’s about infrastructure, orchestration, and who can operationalize intelligence at scale.

LLM Expert Insights, Packt

LATEST DEVELOPMENT

🧠 Enterprise AI & Agent Ops

OpenAI launches Frontier, the enterprise AI agent platform

OpenAI unveiled Frontier, an enterprise-grade platform for building, deploying, and managing autonomous AI agents that operate across business systems with shared context, onboarding, and governance, shifting the battleground from standalone models to managed agent fleets integrated with internal data sources and workflows.

🔁 Competitive Model Wars

Anthropic’s Claude Opus 4.6 expands enterprise capabilities

Anthropic released Claude Opus 4.6, targeting complex enterprise tasks like deep reasoning, analytics, and coding, claiming superior benchmarks in knowledge-intensive work, a move that underscores how model evolution is increasingly tied to real-world utility rather than raw size alone.

⚙️ Tools, Frameworks & Agentic Software

OpenAI & Anthropic rivalry spills over into tooling and deployment

Beyond models, competition is playing out in agentic tool ecosystems, with dueling releases and strategic positioning that aim to blur the lines between productivity coding, agent workflows, and software lifecycle support, a sign that “agent stacks” are now core to enterprise AI competitiveness.

📈 AI Infrastructure & Compute Strategy

Canada pushes sovereign compute with multi-billion strategy

Canada reaffirmed its Sovereign AI Compute Strategy, committing major public and commercial infrastructure investments to ensure domestic access to AI compute, supercomputing resources, and affordable capacity for innovators, a move poised to shape North American research and commercialization landscapes.

🔬 Research & Benchmarks

AIRS-Bench accelerates scientific research agent evaluation

A new benchmark suite, AIRS-Bench, has been introduced to assess AI agents’ scientific reasoning across interdisciplinary research tasks, revealing where
current agents outperform humans and where significant gaps remain, a useful tool for rigorous evaluation of agentic systems in research workflows.

📈 EXPERT INSIGHTS

Unlocking Data with Generative AI and RAG

This week’s excerpt comes from Unlocking Data with Generative AI and RAG (2nd Edition) by Keith Bourne, an AI engineer and founder of Memriq AI. Drawing on a decade of experience building production-scale ML systems for companies like Johnson & Johnson, Bourne unpacks how Retrieval-Augmented Generation (RAG) is reshaping the way organizations use data. The book goes beyond theory, offering hands-on guidance for integrating RAG with generative AI to build faster, smarter, and more adaptive systems.

RAG for automated reporting

Companies that combine RAG with data analysis and reporting through its automated reporting capabilities are seeing significant improvements in their capabilities and in the time it takes to perform the analysis. This innovative application of RAG serves as a bridge between vast lakes of unstructured data and the actionable insights that businesses need daily to drive key decisions and innovation. By utilizing RAG for automated reporting, companies can significantly streamline their reporting processes, enhance accuracy, and uncover valuable insights hidden within their data. Let’s start with how it can be utilized in this environment.

READ FULL ARTICLE
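The retrieve-then-generate loop behind automated reporting can be sketched in a few lines of plain Python. This is a stdlib-only toy, not code from Bourne's book: word-overlap scoring stands in for the embedding-based retrieval a real system would use, and a string template stands in for the LLM generation step. All data and function names are made up for illustration.

```python
# Toy RAG-style automated reporting: retrieve relevant facts, then "generate" a report.

DOCS = [
    "Q3 revenue grew 12% driven by cloud subscriptions.",
    "Support tickets dropped 8% after the chatbot rollout.",
    "Hiring slowed in Q3 due to budget freezes.",
]

def retrieve(query, docs, k=2):
    """Rank documents by word overlap with the query (stand-in for vector search)."""
    q = set(query.lower().split())
    scored = sorted(docs, key=lambda d: len(q & set(d.lower().split())), reverse=True)
    return scored[:k]

def generate_report(query, docs):
    """Stand-in for the LLM step: stitch the retrieved facts into a report."""
    facts = retrieve(query, docs)
    bullets = "\n".join(f"- {f}" for f in facts)
    return f"Report: {query}\n{bullets}"

print(generate_report("Q3 revenue growth", DOCS))
```

The key design point survives the simplification: the report is grounded in retrieved source documents rather than generated from the model's parameters alone, which is what makes RAG reporting auditable.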

LLM Expert Insights, Packt
20 Feb 2026
5 min read

Frontier vs Claude: The Enterprise Battle Deepens

Frontier launches, global AI strategy evolves, and RAG matures.

AI_Distilled #129: What’s New in AI This Week

This week, AI’s center of gravity shifted decisively toward systems, not just models. OpenAI introduced Frontier to operationalize enterprise agent fleets. Anthropic pushed Claude deeper into production-grade reasoning. Global leaders debated sovereign AI strategies at scale. Meanwhile, on the ground, real engineering challenges like RAG indexing pipelines are defining whether these ambitions hold up in production. The race is no longer about who builds the smartest model; it’s about who builds the most durable stack.

LLM Expert Insights, Packt

LATEST DEVELOPMENT

🚀 Enterprise AI & Strategic Infrastructure

OpenAI launches Frontier, its enterprise AI agent platform - OpenAI unveiled Frontier, a comprehensive enterprise platform for building, deploying, and managing autonomous AI agents across internal systems with shared context, governance, and security, marking a shift from isolated models to “AI coworkers” that can execute complex business workflows at scale.

🧠 Anthropic upgrades Claude with Opus 4.6 - Anthropic released Claude Opus 4.6, its most advanced model yet, featuring longer context windows and agent-oriented workflows capable of deeper reasoning, coding, and enterprise-grade tasks, intensifying the rivalry with OpenAI in agent-centric deployments.

📊 Global AI Policy & Collaboration

India hosts AI Impact Summit with global tech leaders - The India AI Impact Summit 2026 brought together policymakers and AI executives, including Prime Minister Narendra Modi, Google’s Sundar Pichai, and UN officials, to discuss AI sovereignty, open-source models, and infrastructure investment commitments potentially exceeding $200B, while also spotlighting governance, equity, and inclusion.
🛡️ Competitive Dynamics & Industry Tension

OpenAI vs Anthropic: Summit rivalry surfaces - At the India AI Impact Summit, live optics underscored competitive tension between OpenAI and Anthropic leadership, reflecting deeper strategic and cultural divides as both pursue distinct visions for AI deployment and risk management.

📡 Open Source & Community Moves

OpenClaw’s creator joins OpenAI - The creator of OpenClaw, a popular open-source autonomous assistant, joined OpenAI, with the project entering an independent foundation, signaling broader industry interest in community-driven agent frameworks integrated into major AI ecosystems.

🌍 AI Sovereignty & Global Strategy

Summit highlights “third way” AI leadership - Leaders at India’s summit emphasized a “third way” for AI development that balances open-source models, sovereign compute, and international collaboration, distinct from the dominant US/China dynamics and geared toward Global South innovation.

🔁 Agentic AI Ecosystem Shift

Enterprise agents replace standalone models - Industry consensus is coalescing around agentic architectures, where systems act autonomously across heterogeneous corporate stacks, rather than isolated LLM deployments, shaping long-term enterprise AI strategy.

Join the Machine Learning & Generative AI System Design Workshop and learn how to design AI systems that survive production. Get 35% off with code FLASH35. JOIN NOW

📈 EXPERT INSIGHTS

Building Natural Language and LLM Pipelines

This week’s Expert Insight comes from Building Natural Language and LLM Pipelines by Laura Funderburk, a leading voice in production-grade AI systems and developer relations lead at AI Makerspace. In this excerpt, Funderburk breaks down one of the most overlooked parts of Retrieval-Augmented Generation (RAG): the indexing pipeline.
She illustrates how Haystack’s flexible routing, preprocessing, and unification workflows turn scattered, messy data into structured knowledge ready for intelligent querying: the invisible architecture that keeps RAG systems from collapsing in production.

Building pipelines with Haystack: indexing, naive RAG, and hybrid RAG

At the heart of any effective RAG system are two distinct yet co-dependent workflows: an offline indexing pipeline responsible for preparing the knowledge base, and an online query pipeline that leverages this prepared data to answer user questions in real time. This section provides the blueprints for constructing these two foundational pillars. We will first build a versatile indexing pipeline capable of ingesting data from multiple sources and formats, and then construct our first query pipeline (a naive RAG system) that serves as a functional baseline for everything that follows (a hybrid RAG system with ranking).

Indexing pipelines: preparing your knowledge base

The indexing pipeline is a critical offline process. Its primary objective is to take web addresses and unstructured or semi-structured data from various sources, convert it into a standardized format, and load it into a DocumentStore, where it can be efficiently searched. A well-designed indexing pipeline is the bedrock of a high-performing RAG system, as the quality of the data ingested directly impacts the quality of the retrieval and, ultimately, the final generated answer.

We will build an indexing pipeline that can handle a diverse mix of data sources simultaneously: live web pages, local text and PDF files, and structured tabular data from CSV files. This is achieved by using FileTypeRouter, a component that directs different data types to the appropriate converters, allowing for a unified yet specialized ingestion workflow. The key steps are depicted in Figure 4.3:

READ FULL ARTICLE
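The route-by-type idea behind the pipeline can be sketched in plain Python. This is a stdlib-only toy, not Haystack's actual API: Haystack's FileTypeRouter dispatches by MIME type to dedicated converter components, while here a simple dict of handlers keyed by file extension plays that role, and a list stands in for the DocumentStore.

```python
import csv
import io
import pathlib

# Stdlib-only toy of a route-by-file-type indexing flow (not Haystack's API).

def convert_text(path):
    """Converter for plain-text files."""
    return path.read_text()

def convert_csv(path):
    """Converter for CSV files: flatten rows into readable lines."""
    rows = list(csv.reader(io.StringIO(path.read_text())))
    return "\n".join(", ".join(r) for r in rows)

# The "router" table: each file type goes to its specialized converter.
CONVERTERS = {".txt": convert_text, ".csv": convert_csv}

def index_files(paths, store):
    """Route each file to its converter and write the result to the store."""
    for p in map(pathlib.Path, paths):
        handler = CONVERTERS.get(p.suffix.lower())
        if handler is None:
            continue                      # unsupported type: skip, don't crash
        store.append({"source": p.name, "content": handler(p)})
```

Adding support for a new format means registering one more converter in the table; the routing loop itself never changes, which is the same extensibility property the Haystack pipeline provides.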

LLM Expert Insights, Packt
27 Feb 2026
6 min read

The AI Race Just Got Riskier

Anthropic softens safety, Meta secures chips, and agents reshape enterprise stacks.

AI_Distilled #130: What’s New in AI This Week

CLICK HERE TO AVAIL THE OFFER

This week, AI’s evolution came with sharper trade-offs. As agents move deeper into enterprise systems, security teams are confronting new vulnerabilities. Anthropic recalibrated its safety posture under competitive pressure. Meta locked in a $60B compute strategy, while sovereign AI initiatives gained momentum globally. At the same time, researchers are refining how we monitor and reason about LLM behavior. The message is clear: scaling intelligence now requires architectural discipline, not just bigger models.

LLM Expert Insights, Packt

LATEST DEVELOPMENT

🚨 Enterprise Security Risk from AI Agents - AI agents are quietly reshaping enterprise systems, and not just in productivity. Autonomous agents with deep access to internal tools, APIs, and memory are now creating new security vulnerabilities, including prompt injections and unauthorized execution paths. Security teams must rethink access control, audit trails, and risk models to manage these emergent threats.

⚖️ Anthropic Scales Back Safety Pledge in Heated AI Race - Anthropic, long regarded as a safety-first AI lab, has revised its Responsible Scaling Policy, dropping key commitments to delay deployment when safety controls lag. While introducing periodic public risk reports, the shift reflects competitive pressures in an environment with limited regulation, raising questions about risk trade-offs at top AI labs.

🔌 Meta’s Strategic $60B Chip Deal with AMD - Meta has agreed to a $60 billion multi-year deal with AMD for AI chips and a 10% equity stake, diversifying beyond Nvidia and scaling infrastructure for large-scale training and inference. This reflects a broader trend of major AI players securing specialized compute capacity amid supply constraints and performance demands.
📈 India AI Impact Summit Unveils Local Models & Strategy - The India AI Impact Summit revealed new Indian AI models (e.g., Sarvam AI variants and BharatGen Param2) and national AI infrastructure commitments, supported by plans to add thousands of GPUs and expand sovereign compute capacity. Microsoft also committed large-scale investments to expand access in emerging markets.

🚀 MIT Develops Better Reasoning & LLM Monitoring Techniques - Researchers at MIT introduced new techniques for probing LLM behavior, exposing how context and long conversations can bias outputs and affect reliability. Such findings inform safer and more robust AI systems by highlighting architectural weaknesses and opportunities for refining reasoning engines.

📊 AI Agents Transition from Theory to Integrated Systems - A growing body of analysis affirms the structural shift from isolated generative models to agentic systems that act, plan, and orchestrate workflows across applications, radically changing how AI is used in production and enterprise environments.

📈 EXPERT INSIGHTS

Agentic Architectural Patterns for Building Multi-Agent Systems

This week's Expert Insight comes from Agentic Architectural Patterns for Building Multi-Agent Systems by Dr. Ali Arsanjani and Juan Pablo Bustos. Dr. Arsanjani, long known for his work in enterprise architecture and now Director of Applied AI Engineering at Google Cloud, brings decades of large-scale systems thinking to the agentic AI conversation. In this excerpt, the authors introduce the Agent Router pattern, a practical way to map user intent to the right specialized agent without relying on brittle keyword rules or guesswork. It may look simple on the surface, but once your system grows beyond a single assistant, this pattern quickly becomes essential.

The Agent Router pattern (intent-based routing)

Agent Router is the foundational pattern for decoupling the user's intent from the specific agent that executes it.
In early or simple systems, developers often relied on hardcoded conditional logic (e.g., if "sales" in query: call_sales_agent). However, at enterprise scale with dozens of specialized agents, this approach becomes brittle and unmanageable. Agent Router solves this by introducing a dedicated architectural layer that acts as a sophisticated switchboard. The pattern combines two distinct mechanisms: semantic intent extraction (understanding the "what") and graph-constrained routing (deciding the "who"). By separating these concerns, the system can scale to support new agents and capabilities without requiring a rewrite of the core orchestration logic. It serves as the "Hello World" of agentic coordination: the minimal viable core required for intelligent dispatch.

Context

A system possesses a suite of specialized agents, each with distinct capabilities. Users interact with the system via natural language, which is often ambiguous, varying in phrasing, or containing irrelevant noise.

Problem

How can the system accurately map an unstructured, variable natural language request to the specific agent best suited to handle it, without "hallucinating" capabilities or relying on fragile keyword matching? Forces in the problem space include the following:

Ambiguity versus precision: User inputs are vague and unstructured, but agent execution requires precise, structured commands.

Scalability versus maintenance: Adding a new agent should not require rewriting the central routing logic. The system must accommodate growing capabilities dynamically.

Safety versus hallucination: The system must ensure that a request is never routed to an agent that cannot handle it, avoiding the risk of an agent attempting to perform a task outside its guardrails.

READ FULL ARTICLE

Built something cool? Tell us.
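To make the excerpt's dispatch idea concrete, here is a minimal sketch of a routing layer. The toy keyword scorer below is only a placeholder for the semantic intent extraction an LLM would perform in a real system, and all names (classify_intent, AgentRouter, the cue lists) are illustrative, not from the book. What matters is the shape: a classifier, a registry that new agents plug into, and a safe fallback so no request is routed to an agent that cannot handle it.

```python
from typing import Callable, Dict, List, Optional

def classify_intent(query: str, intents: Dict[str, List[str]]) -> Optional[str]:
    # Placeholder for semantic intent extraction: in production this would
    # be an LLM call, not a brittle keyword count.
    scores = {name: sum(1 for cue in cues if cue in query.lower())
              for name, cues in intents.items()}
    best = max(scores, key=scores.get)
    return best if scores[best] > 0 else None

class AgentRouter:
    def __init__(self) -> None:
        self._agents: Dict[str, Callable[[str], str]] = {}
        self._intents: Dict[str, List[str]] = {}

    def register(self, intent: str, cues: List[str],
                 agent: Callable[[str], str]) -> None:
        # New agents plug in here; the routing core below never changes.
        self._agents[intent] = agent
        self._intents[intent] = cues

    def route(self, query: str) -> str:
        intent = classify_intent(query, self._intents)
        if intent is None:
            # Safety over hallucination: never guess an agent.
            return "fallback: please rephrase your request"
        return self._agents[intent](query)

router = AgentRouter()
router.register("sales", ["pricing", "quote", "buy"],
                lambda q: "sales agent handling: " + q)
router.register("support", ["error", "crash", "broken"],
                lambda q: "support agent handling: " + q)
print(router.route("Can I get a quote for 50 seats?"))
```

Swapping the scorer for an LLM classifier changes nothing else in the router, which is exactly the decoupling the pattern is after.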
Whether it's a scrappy prototype or a production-grade agent, we want to hear how you're putting generative AI to work. Drop us your story at nimishad@packtpub.com or reply to this email, and you could get featured in an upcoming issue of AI_Distilled. 📢 If your company is interested in reaching an audience of developers, technical professionals, and decision makers, you may want to advertise with us. If you have any comments or feedback, just reply back to this email. Thanks for reading and have a great day!

That's a wrap for this week's edition of AI_Distilled 🧠⚙️ We would love to know what you thought; your feedback helps us keep leveling up. 👉 Drop your rating here

Thanks for reading,
The AI_Distilled Team
(Curated by humans. Powered by curiosity.)

The Quiet Shifts Shaping AI This Week

LLM Expert Insights, Packt
06 Mar 2026
6 min read
Meta reorganizes, Apple explores new compute, and policy catches up. AI_Distilled #131: What's New in AI This Week

A pattern is emerging in AI this week. Governments are trying to get ahead of the technology, companies are reorganizing their engineering teams around it, and the infrastructure race keeps intensifying. The UN is moving toward global coordination on AI governance, Meta is restructuring how it builds models, and Apple may even lean on Google's servers to power the next generation of Siri. At the same time, AI keeps finding its way into places that matter: helping doctors choose cancer treatments and shaping policy decisions in agriculture. The pace is fast, but underneath it all, the field still rests on surprisingly fundamental ideas about language and structure. That's exactly where today's Expert Insight takes us.

LLM Expert Insights, Packt

LATEST DEVELOPMENT

🌐 UN launches global science panel to steer AI governance - The UN General Assembly has approved the creation of an independent international scientific panel to study the impacts of artificial intelligence and guide global policy. The initiative aims to give governments shared, evidence-based insight into AI's economic and societal effects while helping bridge the knowledge gap between advanced AI nations and developing countries.

🧠 Meta sets up new engineering unit to speed up AI model development - Meta is creating a new applied AI engineering organization to accelerate the development and refinement of its next-generation models. The team will focus on building tools, generating training data, and running evaluations to help models improve more quickly, working closely with the company's Superintelligence Lab.
☁️ Apple may rely on Google servers to power its next-generation AI Siri - Apple is reportedly exploring the use of Google's data centers to run parts of its upcoming AI-powered Siri, as the company prepares a major upgrade to the assistant using Google's Gemini models. The move highlights how Apple may lean on external infrastructure to handle the growing compute demands of advanced AI, especially as usage of its own Private Cloud Compute servers remains relatively low.

🧬 AI model could help doctors tailor treatment for pancreatic cancer patients - Researchers have developed an AI tool designed to analyze clinical and molecular data to help doctors choose more effective treatment strategies for pancreatic cancer. Because the disease is often diagnosed late and responds differently to therapies, the model aims to identify patterns that can guide more personalized treatment decisions.

🌾 Researchers explore co-designing AI agents to support agricultural policymaking - Researchers and policymakers are exploring how AI agents can be co-designed with stakeholders to support agricultural policy decisions, rather than being built solely by technologists. The approach emphasizes collaboration with farmers so that AI systems reflect real-world agricultural needs and governance priorities. Advocates say this participatory design model could make AI tools more aligned with food-system challenges such as climate resilience.

📈 EXPERT INSIGHTS

Mastering NLP From Foundations to Agents

This week's Expert Insight comes from Mastering NLP From Foundations to Agents by Lior Gazit and Meysam Ghaffari, an in-depth guide that traces the evolution of natural language processing from classical machine learning techniques to modern agentic AI systems. Gazit, a seasoned machine learning leader in the financial sector, and Ghaffari, a senior data scientist specializing in NLP and deep learning, combine practical engineering experience with academic depth.
In this excerpt, they unpack part-of-speech tagging, a foundational NLP technique that still underpins many modern language systems, from traditional pipelines to today's LLM-powered applications.

POS tagging

Part-of-speech (POS) tagging is the practice of attributing grammatical labels, such as nouns, verbs, adjectives, and others, to individual words within a sentence. This tagging process holds significance as a foundational step in various NLP tasks, including text classification, sentiment analysis, and machine translation. POS tagging can be performed using different approaches, such as rule-based methods, statistical methods, and deep learning-based methods. In this section, we'll provide a brief overview of each approach.

Rule-based methods

Rule-based methods for POS tagging involve defining a set of rules or patterns that can be used to automatically tag words in a text with their corresponding parts of speech, such as nouns, verbs, adjectives, and so on. The process involves defining a set of rules or patterns for identifying the different parts of speech in a sentence. For example, a rule may state that any word ending in "-ing" is a gerund (a verb acting as a noun), while another rule may state that any word preceded by an article such as "a" or "an" is likely a noun. These rules are typically based on linguistic knowledge, such as knowledge of grammar and syntax, and are often specific to a particular language. They can also be supplemented with lexicons or dictionaries that provide additional information about the meanings and usage of words. The process of rule-based tagging involves applying these rules to a given text and identifying the parts of speech for each word. This can be done manually, but it is typically automated using software tools and programming languages that support regular expressions and pattern matching.

READ FULL ARTICLE

Senior engineers, tech leads, and developers serious about maintainable code: this live session is for you.
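As a quick sketch of the regex-driven tagging the excerpt describes, here is a toy rule-based tagger. The ordered rules and the Penn Treebank-style tags are illustrative and deliberately incomplete (real rule-based taggers use far richer rule sets plus lexicons); the point is only the mechanism of matching each token against patterns in priority order.

```python
import re

# Ordered (pattern, tag) rules: the first matching pattern wins.
# Tags are Penn Treebank-style; the rules are toy examples, not a grammar.
RULES = [
    (r".*ing$", "VBG"),       # gerund/present participle: "running"
    (r".*ed$", "VBD"),        # past tense: "walked"
    (r".*ly$", "RB"),         # adverb: "quickly"
    (r"^(a|an|the)$", "DT"),  # determiner/article
    (r".*s$", "NNS"),         # plural noun (very rough)
    (r".*", "NN"),            # default: treat anything else as a noun
]

def tag(tokens):
    out = []
    for tok in tokens:
        for pattern, label in RULES:
            if re.match(pattern, tok.lower()):
                out.append((tok, label))
                break
    return out

print(tag("the dogs ran quickly".split()))
```

The default catch-all rule guarantees every token gets some tag, which is also why rule-based taggers degrade gracefully but never beat statistical or neural methods on ambiguous words.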
Learn pragmatic techniques for large-scale refactoring using structured code search (ast-grep) and Claude Code-assisted workflows, combining determinism with speed for safer production changes. Join us March 14th at 10 AM ET. 🎟️ Packt community members save 50% with code AIDISTILLED50. Includes the Mastering ast-grep ebook and Claude Code prompt templates. REGISTER NOW AT 50%

AI’s Wild Week: AI Faces a $1.5B Reckoning and a Reality Check

LLM Expert Insights, Packt
24 Oct 2025
9 min read
Exclusive Invite: Packt's Nexus 2025 – The Global Agentic AI Event. AI_Distilled #119: What's New in AI This Week

It's been a week of recalibration across the AI landscape: billion-dollar copyright reckonings, tightening global regulations, layoffs, lawsuits, and bold experiments redefining what "AI-powered" really means. Underneath the noise, a pattern is emerging: the industry is shifting from rapid expansion to structural accountability. Whether it's Anthropic's landmark settlement, China's new AI governance laws, or SAP's methodical rollout of enterprise agents, the message is clear: AI's next phase is about stewardship rather than scale. Dive into this week's curation for the full picture!

LLM Expert Insights, Packt

EXPERT INSIGHTS

Building Trustworthy Intelligence: The Road to Responsible AI in LLMs

In this week's feature, Ahmed Menshawy and Mahmoud Fahmy, authors of LLMs in Enterprise, unpack how organizations can balance innovation with responsibility when deploying large language models. They outline the four pillars of Responsible AI (RAI), fairness, transparency, accountability, and safety, as the foundation for building trustworthy systems. From bias detection and explainability tools to continuous compliance and regulatory alignment, the article shows how ethics becomes engineering through practical frameworks and real-world safeguards. As global standards like the EU AI Act and NIST RMF tighten accountability, RAI isn't just good practice; it's a business imperative. Read the full article on Substack →

Special Message from Packt's Events Team: This November, the world's top AI experts from Google, Microsoft, and LangChain are coming together for Packt's Nexus 2025, a two-day live virtual summit for developers, engineers, and AI practitioners ready to build the next generation of intelligent systems. Join the Experts Redefining AI | Live at Nexus 2025. BOOK YOUR SEAT NOW!
Use code EARLY50 to get a 50% discount on the ticket - Exclusive for the AI_Distilled Community

📈 LATEST DEVELOPMENT

OpenAI launches AI browser that can browse and act for you

What happened: OpenAI introduced ChatGPT Atlas, a Chromium-based browser with the ChatGPT assistant built in. It currently supports macOS and offers features like a sidebar for summarising websites, indexing of your browsing history, and an "Agent Mode" that enables the AI to perform tasks like shopping and tab management, all with optional privacy modes for logged-out usage.

Why it matters: By integrating LLMs directly into the browser, OpenAI is shifting how we access and interact with the web, from manual searches to conversational and action-based interfaces. This move also elevates questions of privacy, data control, and the evolving role of browsers as AI-enabled platforms. (Tom's Hardware)

DeepSeek explores AI efficiency with token-to-image compression

What happened: Chinese startup DeepSeek unveiled a new model that converts text tokens into images using a vision encoder, a technique that could overcome the "long-context" limits of LLMs. The model, called DeepSeek-OCR, compresses text inputs up to 10× while maintaining about 97% accuracy, sparking discussion across the global AI community.

Why it matters: This research could pave the way for LLMs that handle far longer prompts and reasoning chains without massive computational costs. If successful, it would mark a breakthrough in scaling efficiency, one of the biggest challenges in current AI architectures. (South China Morning Post)

Anthropic to pay $1.5 billion in landmark copyright settlement

What happened: Anthropic has agreed to pay $1.5 billion to authors after using their copyrighted books, scraped from sites like LibGen and PiLiMi, to train its Claude models without permission. Around half a million authors are eligible for compensation, and Anthropic must also destroy all pirated copies.
Why it matters: The settlement sets a precedent for how AI companies handle copyrighted data, signaling that unlicensed use of creative works now carries real financial risk. It may also push the industry toward formal licensing deals between publishers and AI developers. (Chemistry World)

China strengthens AI oversight with new data and safety laws

What happened: China's top legislature is drafting amendments to its cybersecurity law to include stricter AI safety, ethics, and data protection measures. The proposed framework supports AI research while tightening oversight of generative models and content labeling, including mandatory visible and hidden identifiers for AI-generated media.

Why it matters: The move signals Beijing's intent to balance AI growth with tighter governance, aiming to prevent misinformation and data misuse. It also highlights a divergence from U.S. policy; China's focus is regulation-first, while American firms emphasize commercial deployment. (Business Standard)

Study warns of 'brain rot' in AI models trained on junk web data

What happened: A study by researchers from Texas A&M, the University of Texas at Austin, and Purdue University found that large language models suffer "cognitive decline" when repeatedly trained on low-quality, engagement-driven content. The paper, titled LLMs Can Get Brain Rot!, shows that reasoning accuracy in tested models dropped nearly 20 points, and long-context comprehension fell over 30 points, when the models were fed junk social media data. (Business Standard)

Why it matters: The findings underline that data quality directly affects AI reliability and ethics, not just performance.
Models exposed to "viral" or superficial web text exhibited reasoning shortcuts, overconfidence, and personality drift, effects the researchers call "persistent representational decay." The paper urges developers to treat data hygiene as a core AI safety issue, recommending cognitive audits and stricter content filtering during training. (arXiv)

OpenAI's South Korea blueprint envisions AI-led economic growth

What happened: OpenAI released an Economic Blueprint for South Korea, outlining policy recommendations to scale AI adoption through partnerships with Samsung, SK, and the Ministry of Science and ICT. The plan builds on OpenAI's Stargate initiative, focused on advanced memory and next-gen data centers, and aims to pair sovereign AI development with frontier collaborations. (OpenAI)

Why it matters: South Korea is positioning itself as the next global AI powerhouse, leveraging its semiconductor dominance, digital infrastructure, and government-backed funding. The blueprint calls for AI-led growth in exports, healthcare, education, and SMEs, alongside governance sandboxes and data infrastructure standards, framing Korea as both an adopter and a standard-setter in safe, scalable AI deployment. (OpenAI)

Dell Technologies Capital bets on AI data and new architectures

What happened: Dell Technologies Capital (DTC) managing director Daniel Docter and partner Elana Lian outlined their vision for next-generation AI architectures and "frontier data" in a Crunchbase interview. Dell expects $20 billion in AI server shipments by 2026 and has logged five portfolio exits since June, including Meta's acquisition of Rivos and Salesforce's acquisition of Regrello.

Why it matters: DTC sees AI's future as a data problem more than a model problem, backing startups innovating in reasoning, safety, and new architectures such as state-space models for long-context and voice AI.
The firm's focus spans from silicon to applications, reflecting how enterprise AI is now driven by infrastructure, not hype. (Crunchbase)

Google launches Skills platform with 3,000 AI courses

What happened: Google unveiled Google Skills, a unified learning hub offering nearly 3,000 AI and technical courses from Google Cloud, DeepMind, and Grow with Google. The platform features hands-on labs powered by Gemini Code Assist, gamified progress tracking, and credentials ranging from skill badges to professional certificates. (Analytics India Magazine)

Why it matters: As demand for AI talent accelerates, Google's platform could play a central role in bridging global workforce gaps, especially by offering free access to students, nonprofits, and developers. It emphasizes applied, hands-on learning rather than passive video courses, signaling how tech giants are retooling education to meet enterprise AI demand. (Analytics India Magazine)

Elon Musk says AI will take every job and humans will be free to grow vegetables

In his latest comments on X, Elon Musk declared that "AI and robots will replace all jobs." Far from a dystopian warning, Musk argued this shift could liberate humanity from the need to work, likening future labor to an optional hobby such as "growing your own vegetables instead of buying them from the store." The remark came in response to reports about Amazon's plan to replace over 160,000 jobs with robots by 2027. While his statement reignited debates about automation anxiety, Musk framed it as an opportunity for universal income and post-labor fulfillment rather than economic ruin. (mint)

Build an agent with function calling in GPT-5

What you'll learn: a practical walk-through of agent design, from defining tool schemas and wiring up function calls to implementing a working web-search agent with Tavily, complete with environment setup, code, and a clear loop for handling function outputs versus direct replies.
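The "function outputs versus direct replies" loop that tutorial centers on can be sketched with stubbed components. Everything here is a hypothetical stand-in: fake_model mimics a chat-completions call that either requests a tool or answers directly, and web_search fakes a Tavily-style tool; neither is the real OpenAI or Tavily API. Only the control flow is the point.

```python
import json

def web_search(query: str) -> str:
    # Hypothetical stand-in for a real web-search tool (e.g., Tavily).
    return json.dumps({"results": [f"top hit for: {query}"]})

TOOLS = {"web_search": web_search}

def fake_model(messages):
    # Stand-in for a chat-completions call: request the tool once,
    # then produce a direct reply grounded in the tool output.
    if not any(m["role"] == "tool" for m in messages):
        return {"tool_call": {"name": "web_search",
                              "arguments": {"query": messages[-1]["content"]}}}
    return {"content": "Answer based on: " + messages[-1]["content"]}

def run_agent(user_input: str) -> str:
    messages = [{"role": "user", "content": user_input}]
    while True:
        reply = fake_model(messages)
        call = reply.get("tool_call")
        if call is None:          # direct reply: the loop terminates
            return reply["content"]
        # Function call requested: execute it and feed the result back.
        result = TOOLS[call["name"]](**call["arguments"])
        messages.append({"role": "tool", "content": result})

print(run_agent("latest GPT-5 news"))
```

Swap the two stubs for real model and tool calls and this is the same loop the tutorial builds: execute requested functions, append their outputs to the conversation, and stop when the model answers directly.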
If you've been wanting to move from prompts to real actions, bookmark this and try the tutorial end-to-end. (Towards Data Science)

Look beyond LLMs to build the next generation of AI

AI veteran Dr. Lance Eliot argues that true progress toward AGI will come from exploring new paradigms, from neuro-symbolic and embodied AI to human-centered and quantum approaches, rather than from scaling today's language models. If you care about where the next real breakthroughs will emerge, this piece is your roadmap to what comes after generative AI. (Forbes)


OpenAI to allow erotica in ChatGPT

LLM Expert Insights, Packt
17 Oct 2025
8 min read
Sam Altman's ambitions to scale OpenAI know no bounds. AI_Distilled #118: What's New in AI This Week

Welcome to AI Distilled, where we brew down the week's AI news into a nutty blend. This week's cup is overflowing, from OpenAI's big spending (and, ahem, spicy new features) to other tech giants' AI moves. Enjoy the sip!

LLM Expert Insights, Packt

EXPERT INSIGHTS

Top 5 Frameworks for Building AI Agents (2025)

AI agents are no longer sci-fi; they're the witty coworkers of the future, ready to browse the web, crunch data, and even plan tasks autonomously. But behind every great AI agent is a great framework. Here's our take on the top five frameworks for building AI agents, ranked on ease of use, popularity, community love, industry adoption, flexibility, and yes, cost. Buckle up for a quick tour of our top five picks this year.

LangChain – The Versatile Orchestrator

Why it's #1: LangChain is the OG of agent frameworks and has essentially become the Swiss Army knife for LLM-powered applications. It's an open-source toolkit that makes it easy to connect large language models to tools, data, and prompts (Hyperstack). With extensive integrations and modular abstractions, LangChain simplifies complex AI workflows so developers can focus on creativity over plumbing (Skim AI). No wonder it's wildly popular: an industry guide cites LangChain's "massive community (80K+ GitHub stars) … and proven enterprise adoption" (ampcome) as key to its gold-standard status. It's flexible enough for everything from chatbots to autonomous task agents.

Ease of use: High, thanks to great docs and a huge community.
Learning curve: Mild, especially with so many examples out there.

As co-founder Harrison Chase puts it, agents are like digital labor that can use tools and act autonomously, and LangChain gives your AI labor force the training it needs to excel.

LangGraph – Advanced Multi-Agent Workflows

Why it's #2: If LangChain is the toolkit, LangGraph is the control room.
Built as an extension of LangChain, LangGraph introduces a graph-based approach to orchestrating multiple agents with stateful memory. In simpler terms, it lets you design complex workflows as nodes and edges, perfect for scenarios where several AI agents must collaborate or follow conditional branches. This precision and control make LangGraph ideal for intricate decision-making systems or simulations that go beyond linear chats.

Flexibility: Very high; you can choreograph agents like a director managing an ensemble cast.
Popularity: Growing fast (it's LangChain's brainy younger sibling).
Learning curve: Steeper; you'll need to think in graphs, which might tie your brain in knots at first.

But for those needing detailed orchestration and debugging of multi-agent setups, LangGraph elevates LangChain to new heights. It's like going from driving a car to flying a plane: more power, but it requires more skill.

CrewAI – The Team Player

Why it's #3: CrewAI is the up-and-coming startup darling of agent frameworks, focused on making multi-agent systems as easy as forming a superhero team. It mimics human team dynamics, letting you spin up a crew of agents where each has a role (researcher, planner, coder, etc.) and they collaborate to get the job done (IBM). The API is clean and beginner-friendly, so you can get a multi-agent prototype running faster than assembling an IKEA chair. One guide describes CrewAI as an innovative agentic framework that empowers the creation of collaborative, autonomous AI agents working together to achieve complex goals (Medium).

Ease of use: Excellent; minimal setup, sensible defaults.
Popularity: Rapidly growing; it's independent of LangChain, built from scratch, and gaining fans for its simplicity (GitHub).

CrewAI's secret sauce is quick integration of tools and a focus on real-world workflows (think AI agents acting like a coordinated Slack team). It does sacrifice some flexibility for simplicity; this opinionated design means advanced users might hit limits in customization.
But for many, having your personal AI Avengers working in harmony is well worth it.

Microsoft Semantic Kernel – The Enterprise Whisperer

Why it's #4: From Microsoft's R&D labs comes Semantic Kernel (SK), the framework that bridges AI with the enterprise world. SK integrates LLM-based skills into traditional software, making it a favorite for companies that want AI smarts without rebuilding their stack. It's designed for .NET and Python, meaning you can slot it into your existing apps with ease. Think of SK as the middleware that helps AI agents talk to business systems (databases, CRMs, Office 365, you name it). Its strengths include memory retention and context management (great for virtual assistants that need to remember conversations) and robust security and compliance features for corporate use (Analytics Vidhya).

Popularity: Solid in enterprise circles (less splashy on GitHub stars, but backed by Microsoft's heft).
Ease of use: Moderate; if you're a .NET developer, you'll feel at home; others may need to make adjustments.
Flexibility: Moderate; not as many out-of-the-box agents as LangChain, but you can combine it with custom code easily.

In short, Semantic Kernel is the reliable, security-conscious framework you bring home to meet the CIO.

Microsoft AutoGen – The Automation Maestro

Why it's #5: AutoGen is like the orchestral conductor of AI agents, straight from Microsoft Research. It enables the creation of multiple specialized agents that chat and cooperate to solve tasks, essentially turning complex problems into a team conversation. AutoGen shines in scenarios like code generation, cloud operations, or any heavy-duty project where you'd want a swarm of AI agents, each doing what they're best at. It's open-source and was completely redesigned in v0.4 to boost robustness and scalability, incorporating feedback from early users (Microsoft). Microsoft describes AutoGen as an open-source framework for building AI agents: easy to use and flexible, accelerating the development of agentic AI.
Ease of use: Medium; simpler than building multi-agent systems from scratch, but you'll still invest time configuring roles and communications.
Flexibility: High; it's event-driven and asynchronous under the hood, allowing complex workflows and even human-in-the-loop oversight.

The catch is a steeper learning curve and a more involved setup compared to lightweight frameworks like CrewAI. But if you need an enterprise-grade, large-scale automation toolkit, AutoGen is a powerhouse ready to conduct your AI orchestra. AutoGen comes with neat features like AutoGen Studio (a no-code interface) and strong logging and error handling for production-grade deployments.

Harrison Chase is sharing a deep dive on LangChain and frameworks. Join him at Packt's flagship conference, GenAI Nexus 2025, happening Nov 20-21 (virtual). KNOW MORE ABOUT HARRISON CHASE'S SESSION

Master AI in 16 hours and become irreplaceable before 2025 ends! 🧠 Live sessions: Saturday and Sunday, 🕜 10 AM EST to 7 PM EST. SAVE YOUR SPOT NOW

📈 LATEST DEVELOPMENT

ChatGPT gets spicy, OpenAI makes bold moves

OpenAI is loosening ChatGPT's tie and letting it have some fun. An upcoming update will allow verified adults to engage in erotic role-play conversations with ChatGPT; it seems ChatGPT will soon flirt and sext within safety limits. But mental health experts, professionals, and parents have called out this move, citing its potential psychological impact on individuals and risks to the safety of children. OpenAI CEO Sam Altman made the announcement in a recent X post. To counter these concerns, OpenAI has formed a well-being council: eight experts have joined OpenAI's Expert Council and will advise on healthy AI interactions, teen safety, and guardrails for ChatGPT/Sora, building on parental-controls work with ongoing check-ins.

Salesforce and OpenAI just pulled a double shot of synergy, bringing ChatGPT into Agentforce 360 and Slack. Check out this announcement.
In another development, OpenAI and Sur Energy signed an LOI for a clean-energy Stargate data center in Argentina after talks with President Milei, alongside OpenAI for Countries plans to modernize government workflows. Learn more about this collaboration here.

Meta harvests talent while Apple brews it

Meta has been raiding Apple's engineering pantry for quite a while. In a new poaching move, Ke Yang, who has been driving Apple's AI-driven search project, has stepped down from his position as head of the team called Answers, Knowledge and Information, or AKI, reports Bloomberg.

Microsoft's Midjourney rival

Microsoft unveiled MAI-Image-1, its first homegrown text-to-image model. It's already posting impressive benchmark scores, aiming to break our Midjourney addiction. Microsoft's AI strategy is clearly moving beyond just OpenAI partnerships, as it hustles to build its own creative AI arsenal. Go check it out.

Google's AI face-lift

Google shipped a bundle of new AI features. Notably, Google Meet now offers AI-powered virtual makeup that tracks your face in real time – finally catching up to Zoom and Teams with filters that stay put when you move. Meanwhile, Google's also injecting its image-gen tech ("Nano Banana") into Search and rolling out smarter Gmail scheduling. AI glam and productivity, all in one go. Learn more about Google's touch-up here.

NVIDIA's mini supercomputer

NVIDIA just rolled out a pint-sized powerhouse. Dubbed DGX Spark, this tiny AI supercomputer delivers 1 petaflop of performance in a lunch-box form factor. CEO Jensen Huang hand-delivered one to OpenAI's Greg Brockman, because nothing says friendship like a supercomputer on your doorstep. It's big compute in a small package – and everyone in AI wants one. Here is NVIDIA's official announcement.

Built something cool? Tell us.

Whether it's a scrappy prototype or a production-grade agent, we want to hear how you're putting generative AI to work.
Drop us your story at nimishad@packtpub.com or reply to this email, and you could get featured in an upcoming issue of AI_Distilled.

📢 If your company is interested in reaching an audience of developers, technical professionals, and decision makers, you may want to advertise with us.

If you have any comments or feedback, just reply to this email. Thanks for reading and have a great day!

That's a wrap for this week's edition of AI_Distilled 🧠⚙️

We would love to know what you thought—your feedback helps us keep leveling up.
👉 Drop your rating here

Thanks for reading,
The AI_Distilled Team
(Curated by humans. Powered by curiosity.)