
AI Distilled

61 Articles
Shreyans from Packt
03 Oct 2024
10 min read

OpenAI raises $6.6 billion funding, valuation at $157 billion

98% cost reduction for GPT-4o mini

AI_Distilled #70: OpenAI raises $6.6 billion funding, valuation at $157 billion

[Sponsored] A 3-hour, power-packed workshop that will teach you 25+ AI tools, make you a master of prompting, and cover the hacks, strategies, and secrets that only the top 1% know. Here's a sneak peek at what's inside:

- Making money using AI
- The latest AI developments, like GPT o1
- Creating an AI clone of yourself that functions exactly like YOU
- 10 brand-new AI tools to automate your work and cut work time by 50%

Best of all? It's usually $399, but it's absolutely free for the first 100 readers. Save your seat now (offer valid for 24 hours only).

Welcome to AI_Distilled. Before we get to the newsletter, I have one quick message: next week, we are hosting an AMA with Supreet Kaur: Navigating LLMs & AI Innovation. You should check it out.

Today, we'll talk about:

Techwave:
- [Sponsored] Free 3-hour AI and ChatGPT workshop for professionals
- OpenAI raises $6.6 billion funding, valuation at $157 billion
- OpenAI makes 4 major announcements at DevDay; 98% cost reduction from GPT-4 to GPT-4o mini
- Microsoft launches redesigned Copilot with Voice, Vision, and Chain of Thought capabilities
- Meta unveils open-source Llama Stack
- NotebookLM now summarizes YouTube videos; Andrej Karpathy's NotebookLM tweet goes viral

Awesome AI:
- Pika 1.5
- Graphite Code Reviewer
- Helicone: LLM observability for developers
- Magic Patterns: prototype your product ideas with AI
- Rows: the new way to spreadsheet

Masterclass:
- Anthropic reduces the error rate of RAG by 67% using this simple method
- LangChain shows off new tool: controllable agent
- Open-source NotebookLM alternative using Llama 3.1 405B
- Andrew Ng announces course on Meta's Llama 3.2, launching October 9
- Using task-specific models from AI21 Labs on AWS

HackHub:
- o1-engineer: AI-powered code generation and editing
- Crawl4AI: LLM-friendly web crawler and scraper
- Llama Stack: model components of the Llama Stack APIs
- exo: run your own AI cluster at home with everyday devices
- TTS: a deep learning toolkit for text-to-speech

Cheers!
Shreyans Singh
Editor-in-Chief, Packt

[Sponsored] Last chance! For the next 48 hours only, save $150 on your full event pass with code LASTCHANCE40 at checkout. Imagine being part of 10+ power talks, 12+ hands-on workshops, and 3 interactive roundtables while networking with 30+ top industry leaders and hundreds of tech professionals from across the globe. This is your opportunity to dive into cutting-edge AI solutions at the Generative AI in Action 2024 conference, happening November 11-13 (virtual). Book now at $239.99 (regularly $399.99).

⚡ TechWave: AI/GPT News & Analysis

OpenAI raises $6.6 billion funding, valuation at $157 billion
OpenAI has raised $6.6 billion in funding from investors including Microsoft, Nvidia, Thrive Capital, and Khosla Ventures, valuing the company at $157 billion. The investment comes as OpenAI restructures and undergoes leadership changes, including the departure of its CTO. Despite losses, OpenAI is projected to make $3.6 billion in revenue this year, with expectations of a major revenue increase next year. Investors are betting on the company's future growth, especially as it continues to pursue advanced AI goals like artificial general intelligence (AGI).

OpenAI makes 4 major announcements at DevDay; 98% cost reduction from GPT-4 to GPT-4o mini
At OpenAI's 2024 DevDay, several key developer-focused features and tools were announced. One major update was prompt caching, which automatically applies a 50% discount on repeated prompts over 1,024 tokens, lowering costs for developers. Another significant launch was the WebSocket Realtime API, enabling real-time audio input and output for GPT-4 models and letting developers stream audio, text, and tool calls with low latency. OpenAI also simplified model distillation, making fine-tuning easier by allowing smaller models to learn from larger ones. Additionally, OpenAI extended free fine-tuning offers for GPT-4 models and hinted at future support for image input through the Realtime API.

Microsoft launches redesigned Copilot with Voice, Vision, and Chain of Thought capabilities
Microsoft's October 2024 announcement highlights the evolution of Copilot. The updated Copilot integrates voice and vision capabilities, making interactions feel more natural and personalized. It offers practical help like summarizing news, taking notes at appointments, and assisting with life's complexities, aiming to reduce information overload and provide a supportive, adaptive experience.

Meta unveils open-source Llama Stack
Meta has introduced Llama Stack distributions to simplify the development of generative AI applications using its Llama large language models (LLMs). These distributions bundle multiple Llama Stack API providers into a single endpoint, allowing developers to work seamlessly with Llama models across different environments, including on-premises, cloud, and mobile devices. The Llama Stack provides essential building blocks for the entire AI development process, from model training to running AI agents.

NotebookLM now summarizes YouTube videos; Andrej Karpathy's NotebookLM tweet goes viral
Users can now upload videos or audio recordings, and NotebookLM will summarize key concepts and generate insights from these sources. It can transcribe and analyze audio or video content, creating helpful study guides or summaries. Users can also share Audio Overviews with a public link, making it easier to distribute content summaries.

💻 Awesome AI: Tools for Work

Pika 1.5
Create stunning, cinematic video clips with advanced visual effects and longer scenes. It introduces features like "Unreal Pikaffects," which let users manipulate objects in ways that go beyond real-life capture, such as exploding or inflating them. It also offers cinematic camera moves like Bullet Time and Crane Down, along with lifelike character actions like running or skateboarding.

Graphite Code Reviewer
Graphite Reviewer is an AI-powered tool that provides immediate, actionable feedback on pull requests, helping teams catch bugs and logical errors and enforce best practices before human review. It integrates seamlessly with your codebase, offering code-aware suggestions without storing your team's data or using it for training.

Helicone: LLM observability for developers
Helicone is an open-source platform for developers to log, monitor, and debug large language models (LLMs). It provides tools for instant analytics, prompt management, and cost tracking, allowing users to filter, segment, and analyze their requests efficiently.

Magic Patterns: Prototype your product ideas with AI
Magic Patterns is an AI-powered design tool that allows users to quickly prototype product ideas by generating user interfaces (UIs) from prompts or images. It features an AI-native editor for iterating on components and designs, which can be exported to React or Figma.

Rows: The new way to spreadsheet
Rows features an AI-powered assistant that helps users with tasks like data entry, classification, and translation, making it easier to work with complex information.

🔛 Masterclass: AI/LLM Tutorials

Anthropic reduces the error rate of RAG by 67% using this simple method
Contextual Retrieval is an enhancement of traditional Retrieval-Augmented Generation (RAG) that improves the accuracy of retrieving relevant information from large knowledge bases. Standard RAG uses embeddings to break a knowledge base into chunks and retrieves relevant information by semantic similarity, but this method can lose important context, leading to retrieval errors. Contextual Retrieval addresses this by prepending chunk-specific context before generating embeddings and BM25 indexes (BM25 is a ranking method based on exact matches), reducing retrieval errors by up to 67% when combined with reranking.

LangChain shows off new tool: controllable agent
The Controllable-RAG-Agent is a sophisticated AI tool designed to answer complex questions using Retrieval-Augmented Generation (RAG) techniques. It employs a structured graph for reasoning and breaks queries down into smaller, manageable tasks. The agent ensures that answers are based solely on the provided data, preventing hallucinated or incorrect content. It features multi-step reasoning, adapts its plan as new information is processed, and evaluates performance using metrics like answer correctness and relevance.

Open-source NotebookLM alternative using Llama 3.1 405B
Convert your PDFs into podcasts with open-source AI models (Llama 3.1 405B, MeloTTS, Bark). Note: only the text content of the PDFs is processed; images and tables are not included, and the total content should be no more than 100,000 characters due to the context length of Llama 3.1 405B.

Andrew Ng announces course on Meta's Llama 3.2, launching October 9
The course "Introducing Llama 3.2," taught by Amit Sangani, Senior Director of AI Partner Engineering at Meta, focuses on building multimodal applications using the Llama 3.2 family of models, which range from 1B to 405B parameters. It covers essential concepts from tokenization to tool calling, as well as Llama's new stack, which simplifies application development.

Using task-specific models from AI21 Labs on AWS
In this blog post, you'll learn how to use AI21 Labs' Task-Specific Models (TSMs) on AWS to streamline tasks like summarization, paraphrasing, and answering questions based on specific contexts. By subscribing to AI21 Labs in AWS Marketplace, setting up a SageMaker domain, and accessing these models through SageMaker JumpStart, you can easily deploy and customize them for your business. Unlike general foundation models, these TSMs are pre-trained for specific commercial tasks, offering greater accuracy and cost-efficiency with less need for complex prompt engineering.

🚀 HackHub: AI Tools

o1-engineer: AI-powered code generation and editing
The `o1-engineer` tool is a command-line utility that helps developers manage and interact with their projects more efficiently. It leverages OpenAI's API to automate tasks like code generation, file and folder management, project planning, and code review. Using commands like `/add`, `/edit`, and `/planning`, users can modify project structures, plan tasks, and streamline workflows directly from the terminal.

Crawl4AI: LLM-friendly web crawler and scraper
Crawl4AI is an open-source, asynchronous web crawler designed to efficiently extract data for large language models (LLMs) and AI applications. It supports crawling multiple URLs simultaneously, extracting media and links, executing custom JavaScript, and managing sessions for dynamic web content. The tool allows structured data extraction using CSS selectors or JSON strategies and offers advanced techniques for clustering and chunking content.

Llama Stack: Model components of the Llama Stack APIs
The Llama Stack provides a set of APIs that cover the entire AI development lifecycle, including model training, inference, safety, memory management, and evaluation. Developers can mix and match local or cloud-based providers to implement these APIs, making it flexible for different use cases.

exo: Run your own AI cluster at home with everyday devices
Exo allows you to run AI models across multiple devices, like phones, laptops, or Raspberry Pis, forming a distributed AI cluster. It automatically discovers devices and splits model computations across them based on their resources. Unlike traditional systems with a master-worker architecture, Exo uses peer-to-peer connections, allowing all devices to contribute equally.

TTS: A deep learning toolkit for text-to-speech
Coqui TTS is a deep learning toolkit for advanced text-to-speech (TTS) generation, designed for research and production use. It supports over 1,100 languages with pre-trained models and offers tools for training new models and fine-tuning existing ones. Coqui TTS includes various TTS models like Tacotron and Glow-TTS, speaker encoders for multi-speaker synthesis, and vocoders like MelGAN for high-quality audio output.

📢 If your company is interested in reaching an audience of developers, technical professionals, and decision makers, you may want to advertise with us. If you have any comments or feedback, just reply to this email. Thanks for reading and have a great day!
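The Contextual Retrieval method covered in the Masterclass section above can be sketched in a few lines. This is an illustrative, dependency-free reconstruction, not Anthropic's implementation: a real system pairs an embedding index with a full BM25 index (with IDF and length normalization) and uses an LLM to write each chunk's context, whereas here the "context" is a hand-written document prefix and the scorer is a simplified BM25-style term-saturation function.

```python
from collections import Counter

def contextualize(chunk: str, doc_context: str) -> str:
    """Prepend document-level context to a chunk before indexing.

    This is the core move in Contextual Retrieval: the stored text
    carries document-level terms (company name, period, source) that
    the bare chunk lacks, so lexical and embedding search can match them.
    """
    return f"{doc_context} | {chunk}"

def lexical_score(query: str, text: str, k1: float = 1.5) -> float:
    # Simplified BM25-style scoring: term-frequency saturation only
    # (IDF and length normalization omitted to keep the sketch short).
    tf = Counter(text.lower().split())
    return sum(tf[t] * (k1 + 1) / (tf[t] + k1)
               for t in set(query.lower().split()) if tf[t])

def retrieve(query: str, chunks: list[tuple[str, str]]) -> str:
    # chunks: (chunk_text, document_context) pairs
    indexed = [contextualize(text, ctx) for text, ctx in chunks]
    return max(indexed, key=lambda t: lexical_score(query, t))

chunks = [
    ("The company's revenue grew by 3% over the previous quarter.",
     "Excerpt from ACME Corp's Q2 2023 SEC filing"),
    ("The new office opened in Austin last spring.",
     "Excerpt from BetaCo's internal newsletter"),
]
best = retrieve("ACME revenue growth in Q2 2023", chunks)
print(best)  # the ACME chunk wins: its prefix supplies 'ACME' and 'Q2'
```

Without the prefix, the query term "ACME" appears in neither chunk and retrieval has to rely on "revenue" alone; the prepended context is what makes the first chunk an unambiguous match.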

Shreyans Singh
05 Sep 2024
9 min read

OpenAI co-founder Sutskever's new safety-focused AI startup SSI raises $1 billion

xAI Colossus supercomputer with 100K H100 GPUs comes online

AI_Distilled #66: OpenAI co-founder Sutskever's new safety-focused AI startup SSI raises $1 billion

[Sponsored] 200+ hours of research on AI-led career growth strategies and hacks, packed into 3 hours: the only AI crash course you need to master 20+ AI tools, multiple hacks, and prompting techniques. You'll save 16 hours every week and find remote jobs using AI that will pay you up to $10,000/month. Get it here for free (valid for the next 24 hours only).

Welcome to AI_Distilled. Today, we'll talk about:

Techwave:
- [Sponsored] 3-hour mini course on AI (worth $399) for free
- OpenAI co-founder Sutskever's new safety-focused AI startup SSI raises $1 billion
- xAI Colossus supercomputer with 100K H100 GPUs comes online
- OpenAI Japan announces next-generation model 'GPT Next'
- 100M-token context windows are here
- 350M downloads of Llama since 2023

Awesome AI:
- Build web applications quickly by generating front-end code
- Powerful APIs for speech-to-text, text-to-speech, and language understanding
- v0 by Vercel
- Revolutionize your storyboarding process
- Measure developer shipping velocity, accurately

Masterclass:
- Natural language processing and machine learning for developers
- Build a generative AI image description application
- Visualizing and interpreting decision trees
- Rethinking the role of PPO in RLHF
- Enhancing paragraph generation with a latent language diffusion model
- Transparency is often lacking in datasets used to train large language models

HackHub:
- A natural language interface for computers
- LLM app development platform
- 2^x image super-resolution
- Video generation platform based on diffusion models
- Pop-audio-based piano cover generation

Cheers!
Shreyans Singh
Editor-in-Chief, Packt

[Sponsored] Live webinar: The Power of Data Storytelling in Driving Business Decisions (September 10, 2024 at 9 AM CST). Data doesn't have to be overwhelming. Join our webinar to learn about data storytelling and turn complex information into actionable insights for faster decision-making. Click below to check the schedule in your time zone and secure your spot. Can't make it? Register to get the recording instead. Register for free.

⚡ TechWave: AI/GPT News & Analysis

OpenAI co-founder Sutskever's new safety-focused AI startup SSI raises $1 billion
Safe Superintelligence (SSI) was co-founded by Ilya Sutskever, previously the chief scientist at OpenAI. SSI has raised $1 billion in funding to develop safe AI systems that surpass human abilities. The company, valued at $5 billion, plans to use the money for computing power and hiring top talent. Sutskever, along with Daniel Gross and Daniel Levy, started SSI in June 2024.

xAI Colossus supercomputer with 100K H100 GPUs comes online
Elon Musk's X (formerly Twitter) has brought online the world's most powerful AI training system, called Colossus, using 100,000 Nvidia H100 GPUs. The supercomputer will soon expand with an additional 50,000 H100 and H200 GPUs, bringing the total to 200,000. Developed with Dell in just 122 days, Colossus will be used for training advanced AI models, such as xAI's Grok version 2.

OpenAI Japan announces next-generation model 'GPT Next'
Tadao Nagasaki, CEO of OpenAI Japan, announced that ChatGPT reached over 200 million active users by the end of August, making it the fastest software in history to reach this milestone. He highlighted the growing adoption of ChatGPT Enterprise among companies like Apple, Coca-Cola, and Moderna. Nagasaki also discussed OpenAI's future plans, introducing the next-generation AI model "GPT Next," which he claims will be 100 times more powerful than previous models like GPT-4, supporting advanced capabilities across various data formats.

100M-token context windows are here
Magic has developed ultra-long-context AI models capable of processing up to 100 million tokens of context during inference, which could revolutionize tasks like code synthesis. To improve testing, Magic introduced HashHop, an evaluation method built on random hashes, forcing models to genuinely store and retrieve complex information rather than exploit superficial cues. Magic also announced new partnerships with Google Cloud and NVIDIA to scale its AI infrastructure and raised $465M to support the work.

350M downloads of Llama since 2023
Meta's Llama models have rapidly become one of the most widely used open-source AI model families, with over 350 million downloads, driven by availability on platforms like Hugging Face and partnerships with major cloud providers like AWS and Azure. Llama 3.1 has expanded the family's capabilities, offering longer context lengths, multilingual support, and new safety tools. Its open-source nature encourages innovation, with companies like AT&T, DoorDash, and Accenture using Llama to enhance customer experiences, streamline operations, and drive AI-powered solutions across industries.

💻 Awesome AI: Tools for Work

GPT Engineer
Build web applications quickly by generating front-end code using technologies like React, Tailwind, and Vite. Users can describe their app ideas, sync them with GitHub, and deploy them with a single click.

OpenHome
An AI-powered voice interface that enables natural, seamless conversations with devices using its Voice SDK, allowing any platform to integrate smart voice control. It offers powerful APIs for speech-to-text, text-to-speech, and language understanding, making it ideal for applications like medical transcription and smart home automation. Its 500 features include instant translation, emotion detection, and media control.

v0 by Vercel
Generate web development components and full interfaces quickly using chat-based prompts. It helps developers create UI elements like buttons, modals, and pages by simply describing what they need, enabling faster development workflows.

Storyboarder
Rapidly transform ideas into detailed storyboards, animatics, and screenplays. With features like Image-To-Video, the platform can turn static images into dynamic videos, enhancing storytelling and saving time. It supports various media projects, including commercials, films, and social media content, and offers integrated scriptwriting, consistent art styles, and expert support to streamline the creative process.

Maxium AI
Accurately measure developer efficiency by tracking shipping velocity and performance, going beyond lines of code or commit counts. It integrates with GitHub to provide a standardized evaluation mechanism across different tech stacks and programming languages.

🔛 Masterclass: AI/LLM Tutorials

Build a generative AI image description application
This guide explains how to build an application for generating image descriptions using Anthropic's Claude 3.5 Sonnet model on Amazon Bedrock and the AWS CDK. By integrating Amazon Bedrock's multimodal models with AWS services like Lambda, AppSync, and Step Functions, you can quickly develop a solution that processes images and generates descriptions in multiple languages. The Generative AI CDK Constructs streamline infrastructure setup, making the application easier to deploy and manage.

Visualizing and interpreting decision trees
TensorFlow recently introduced a tutorial on dtreeviz, a leading visualization tool that helps users visualize and interpret decision trees. dtreeviz shows how decision nodes split features and how training data is distributed across different leaves. For example, a decision tree might use features like the number of legs and eyes to classify animals; by visualizing the tree with dtreeviz, you can see how each feature influences the model's predictions and understand why a particular decision was made.

Rethinking the role of PPO in RLHF
In Reinforcement Learning from Human Feedback (RLHF), there is a mismatch: the reward model is trained on comparative feedback (comparing multiple responses), while the RL fine-tuning phase uses absolute rewards (evaluating responses individually). This discrepancy can lead to issues in training. To address it, researchers introduced Pairwise Proximal Policy Optimization (P3O), a new method that carries comparative feedback through the entire RL process. By using a pairwise policy gradient, P3O aligns the reward-modeling and fine-tuning stages, improving the consistency and effectiveness of training. The approach has shown better performance in terms of reward and alignment with human preferences than previous methods.

Enhancing paragraph generation with a latent language diffusion model
The PLANNER model, introduced in 2023, enhances paragraph generation by combining latent semantic diffusion with autoregressive techniques. Traditional models like GPT often produce repetitive or low-quality text due to "exposure bias," where the training and inference processes differ. PLANNER addresses this with a latent diffusion approach that refines text iteratively, improving coherence and diversity: it encodes paragraphs into latent codes, processes them through a diffusion model, and then decodes them into high-quality text, reducing repetition and enhancing quality.

Transparency is often lacking in datasets used to train large language models
A recent study highlights the lack of transparency in datasets used to train large language models (LLMs). As these datasets are combined from various sources, crucial information about their origins and usage restrictions often gets lost. This raises legal and ethical concerns and can also impact model performance by introducing biases or errors when data is miscategorized. To address this, researchers developed the Data Provenance Explorer, a tool that provides clear summaries of a dataset's origins, licenses, and usage rights.

🚀 HackHub: AI Tools

OpenInterpreter/open-interpreter
Open Interpreter is a tool that allows language models (like GPT-4) to execute code locally on your machine, supporting languages like Python, JavaScript, and shell scripts. It works like ChatGPT but with the ability to interact with your system's resources.

langgenius/dify
Dify is an open-source platform for developing AI applications using large language models (LLMs). It provides an intuitive interface for building AI workflows, managing models, and integrating tools like Google Search or DALL·E. Dify supports a wide variety of LLMs and offers features like a prompt IDE, document retrieval (RAG), agent-based automation, and detailed observability for monitoring performance.

Tohrusky/Final2x
Final2x is a cross-platform tool designed to enhance image resolution and quality using advanced super-resolution models such as RealCUGAN, RealESRGAN, and Waifu2x. It's ideal for anyone looking to improve image resolution efficiently across various platforms.

ali-vilab/VGen
VGen is an open-source video generation platform from Alibaba's Tongyi Lab that offers a wide range of tools for generating videos from various inputs like text, images, and motion instructions. It features state-of-the-art models like I2VGen-xl for image-to-video synthesis and DreamVideo for custom subject and motion generation. VGen supports tasks like video generation from human feedback and video latent consistency modeling.

sweetcocoa/pop2piano
Pop2Piano is a deep learning model that automatically generates piano covers from pop music audio. Traditionally, creating a piano cover involves understanding the song's melody, chords, and mood, which is challenging even for humans. Prior methods used melody and chord extraction, but Pop2Piano skips these steps, directly converting pop music waveforms into piano covers using a Transformer-based approach. The model was trained on a large dataset of synchronized pop songs and piano covers (300 hours), enabling it to generate plausible piano performances without explicit musical extraction modules.

📢 If your company is interested in reaching an audience of developers, technical professionals, and decision makers, you may want to advertise with us. If you have any comments or feedback, just reply to this email. Thanks for reading and have a great day!
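The HashHop evaluation mentioned in the TechWave section above is easy to picture with a toy generator. Magic has described the idea (chains of random hashes a model must follow, with no semantic cues to exploit) but the exact harness is theirs, so the sketch below is an illustrative reconstruction: it buries one hash chain among distractor pairs and asks for the final hop.

```python
import random
import string

HEX = "0123456789abcdef"

def rand_hash(rng: random.Random, n: int = 8) -> str:
    # Random hex strings are incompressible, so a model cannot rely on
    # semantic shortcuts; it must actually store and retrieve each pair.
    return "".join(rng.choices(HEX, k=n))

def make_hashhop_eval(rng: random.Random, chain_len: int = 4, distractors: int = 6):
    """Build a toy HashHop-style eval: shuffled 'A -> B' pairs hiding one
    chain, plus the answer a model should reach by following the hops."""
    chain = [rand_hash(rng) for _ in range(chain_len + 1)]
    pairs = [(chain[i], chain[i + 1]) for i in range(chain_len)]
    pairs += [(rand_hash(rng), rand_hash(rng)) for _ in range(distractors)]
    rng.shuffle(pairs)
    context = "\n".join(f"{a} -> {b}" for a, b in pairs)
    question = f"Start at {chain[0]} and follow {chain_len} hops. Final hash?"
    return context, question, chain[0], chain_len, chain[-1]

rng = random.Random(42)
context, question, start, hops, answer = make_hashhop_eval(rng)

# A reference solver just walks the mapping; a long-context model has to
# do the same thing implicitly, at Magic's scale across up to 100M tokens.
mapping = dict(line.split(" -> ") for line in context.splitlines())
node = start
for _ in range(hops):
    node = mapping[node]
print(node == answer)
```

Scoring a real model is then a string comparison between its reply and `answer`; scaling `distractors` and `chain_len` up is what turns this from a toy into a stress test of the full context window.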

LLM Expert Insights, Packt
23 May 2025
10 min read

AI Breakthroughs: Code, Communication, and Recruitment Redefined!

Miss this week’s AI news and you might just fall behind.AI_Distilled #96: What’s New in AI This WeekYou can now run and fine-tune Qwen3 and Meta's new Llama 4 models with 128K context length & superior accuracy. Unsloth is an open-source project that allows easy fine-tuning of LLMs and that also uploads accurately quantized models to Hugging Face. GitHub repo: https://github.com/unslothai/unslothUnsloth's new Dynamic 2.0 quants outperform other quantization methods on 5-shot MMLU & KL Divergence benchmarks, meaning you can now run + fine-tune quantized LLMs while preserving as much precision as possible. Read more here . Tutorial for running Qwen3 here.Tutorial for running Llama 4 here.Welcome to another exciting edition of our AI_Distilled! This week, we're witnessing a surge in innovative AI solutions, with companies like OpenAI and Microsoft rolling out tools that streamline development and enhance user interaction. From Apple opening its models to developers to the fierce competition for AI's top talent, join us as we explore the latest breakthroughs shaping our digital world.LLM Expert Insights,PacktIn today's issue:📅 June’s AI Must-Attends: From AI Engineer World’s Fair to Packt’s Agent Bootcamp—here are 6 events you don’t want to miss this month.🔌 MCP, Explained: Paul Singh breaks down the Model Context Protocol—your plug-and-play solution for seamless AI tool integration.💻 Codex Arrives: OpenAI rolls out Codex, a powerful AI coding agent for writing features, fixing bugs, and navigating codebases.🧠 Windows Gets Smarter: Microsoft integrates native MCP into Windows and launches AI Foundry for seamless agent automation.🎟️ Google AI Ultra Drops: A new $249.99/mo subscription offers Gemini upgrades, cinematic video tools, and 30TB of storage.🍏 Apple Opens Up: Developers may soon build apps with Apple’s AI models—announcement expected at WWDC 2025.🏁 AI Talent Wars: OpenAI, Google & more compete for elite researchers—offering private jets and millions in 
perks.👨‍💻 Copilot’s New AI Agent: GitHub's upgraded Copilot now tackles coding issues with draft PRs, vision models, and full MCP support.🎧 On-Device Audio AI: Stability AI & Arm launch a mobile-ready model for text-to-audio generation—11 seconds of sound in 8.📈EXPERT INSIGHTSJUNE'S MUST ATTEND AI/LLM EVENTSIn June 2025, a number of exciting AI conferences are already generating buzz. Here are the Top 5 not-to-miss events in the next month (for more information and registration details, please visit the links):1. AI Engineer World’s FairDate: June 3–5, 2025Location: San Francisco, California, USACost: $299–1,799 in-personThe AI Engineer World's Fair, from June 3-5, 2025, in San Francisco, is the largest technical conference for AI engineers. It would host approximately 3,000 attendees, featuring 150 talks and 100 practical workshops. Topics include Generative AI, AI agents, LLMs, infrastructure, and AI in Fortune 500 companies, offering unparalleled networking and learning opportunities for industry professionals.2. Data + AI SummitDate: June 9–12, 2025Location (Hybrid): San Francisco, California, US, and available online.Cost: $1,395–1,895 in-person. Free for virtual admission. Discounted tickets are available with group-rate pricing.The Data + AI Summit is a four-day event hosted by Databricks. It includes panel discussions, networking opportunities, and training workshops on topics such as data engineering, data governance, and machine learning.3. The AI Summit LondonDate: June 11–12, 2025Location: Tobacco Dock, London, UKCost: £125–2,499AI Summit London, spanning over two days, will cover a wide range of topics including agentic AI in action and ethical use of AI. With a strong lineup of sponsors and thousands of guests, the summit offers great opportunities for networking with leading AI practitioners.4. 
Packt’s AI Agent Bootcamp (Build AI Agents Over the Weekend)Date: June 21–22 and 28–29, 2025Location: Live Virtual WorkshopCost: Our AI Agent Bootcamp aims to equip developers, ML engineers, data scientists, technical professionals, and software architects with the practical skills to design, build, and deploy AI agents using frameworks like LangChain, AutoGen, and CrewAI, moving from theoretical understanding of LLMs to practical application.5. CDAO GovernmentDate: June 25–26, 2025Location: Washington, D.C., USCost: $499 in-person; Free for VP and C-level government executives.The CDAO Government conference in Washington, D.C., is unique as it unites U.S. government data leaders to explore AI, governance, and ethical data use in public services. Celebrating its 13th anniversary, this event offers an excellent opportunity to learn how to securely leverage AI's capabilities for government data challenges.This was just a quick peek into spaCy pipelines — but there’s much more to explore.For instance, the spacy-transformers extension integrates pretrained transformer models directly into your spaCy pipelines, enabling state-of-the-art performance. Additionally, the spacy-llm plugin allows you to incorporate LLMs like GPT, Cohere, etc. for inference and prompt-based NLP tasks.Master AI Tools, Set Automations & Build Agents – all in 16 hours (for free)AI is no longer just a buzzword — it’s the most valuable skill of this decade– to make money, to get hired and to be future-paced.That’s why, you need to join the 2-Day Free AI Upskilling Sprint by Outskill which comes with 16 hours of intensive training on AI frameworks, tools and tactics that will make you an AI expert.Originally priced at $499, but the first 100 of you get in for completely FREE! Claim your spot now for $0! 🎁📅23rd May- Kick Off Call & Session 1✅Live sessions- 24th & 25th May🕜11AM EST to 7PM ESTJOIN NOW(Limited Free Seats! 
🚨)

EXPERT INSIGHTS BY PAUL SINGH
Model Context Protocol (MCP) and what it means for you

If you're working on AI design or tool integration, the Model Context Protocol (MCP) offers a seamless, standardized way to connect AI tools, data sources, and LLM applications. Developed by Anthropic, MCP is an open protocol designed to simplify the often complex and time-consuming process of integrating rapidly evolving AI models with tools and services. Think of it as the USB-C of the AI world: plug-and-play, regardless of the LLMs or tools you're working with, and without diving into the intricate technicalities of each integration.

MCP operates on a client-server model, where your LLM application runs a local MCP client that communicates with one or more MCP servers. A service provider only needs to implement a single MCP server, which can then front APIs, databases, and other services, without requiring constant code adjustments for each new integration. In a typical deployment, several different MCP servers each expose their own APIs and services behind the same protocol.

MCP leverages the lightweight JSON-RPC message format (a simple remote procedure call protocol), stateful connections, server-client capability negotiation, and reflection. Reflection allows the client to query the server about its capabilities, which can then be surfaced to the LLM automatically via the orchestrating application’s prompt.

When designing with MCP, it's important to keep your architecture modular, test each component thoroughly, document your iterations, and ensure security by validating inputs and controlling access.

MCP is gaining traction with large organizations like Microsoft, which is integrating it into key products such as Semantic Kernel, Copilot Studio, and GitHub Copilot. I envision a near future where MCP-as-a-Service becomes the de facto standard, eliminating deployment overhead and enabling seamless AI-to-AI or agent-to-agent communication.
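Reflection is easiest to see in message form. Below is a rough Python sketch of the JSON-RPC 2.0 exchange an MCP client and server might have when the client asks for available tools. The `tools/list` method name follows the MCP spec, but the example tool (`get_weather`) and the exact payload fields are illustrative only; consult modelcontextprotocol.io for the authoritative schema.

```python
import json

# A JSON-RPC 2.0 request an MCP client could send to discover a server's
# tools (the "reflection" step). The method name "tools/list" follows the
# MCP spec; everything else here is an illustrative, hand-written payload.
def make_request(request_id, method, params=None):
    req = {"jsonrpc": "2.0", "id": request_id, "method": method}
    if params is not None:
        req["params"] = params
    return req

list_tools = make_request(1, "tools/list")

# A hypothetical server response advertising one tool. The tool itself
# ("get_weather") is invented for this sketch.
response = {
    "jsonrpc": "2.0",
    "id": 1,
    "result": {
        "tools": [
            {
                "name": "get_weather",
                "description": "Look up current weather for a city",
                "inputSchema": {
                    "type": "object",
                    "properties": {"city": {"type": "string"}},
                    "required": ["city"],
                },
            }
        ]
    },
}

# The orchestrating application can surface these descriptions to the LLM
# in its prompt, with no hard-coded knowledge of the server's API.
print(json.dumps(list_tools))
for tool in response["result"]["tools"]:
    print(f'{tool["name"]}: {tool["description"]}')
```

The point of the shape above is that the client never hard-codes the server's API: whatever the server advertises in `result.tools` is what gets surfaced to the model.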
For example, MCP endpoints could allow straightforward integration without server management, while internal repositories of MCP clients could democratize standardized tool access across organizations.

To read more about MCP, you can check out these resources: https://modelcontextprotocol.io and https://aka.ms/mcp. I’ll continue to share how our customers and various industries are adopting MCP and the lessons we’re learning along the way. Stay tuned for more.

Join Packt’s Accelerated Agentic AI Bootcamp this June and learn to design, build, and deploy autonomous agents using LangChain, AutoGen, and CrewAI. Hands-on training, expert guidance, and a portfolio-worthy project, delivered live, fast, and with purpose. This is it. 35% off this workshop for a limited time. If you’re in, move now.
Code: AGENT35
RESERVE YOUR SEAT NOW!

📈 LATEST DEVELOPMENT

OpenAI Introduces Codex for Enhanced Code Generation
OpenAI has released Codex, a cloud-based AI agent for software engineering. Available in ChatGPT Pro, Enterprise, and Team, Codex (powered by codex-1) can write features, fix bugs, and answer codebase questions, operating in isolated environments. It learns from real-world tasks, producing human-like code and iteratively running tests. Developers can monitor progress, review changes with verifiable evidence, and guide Codex with AGENTS.md files.

Microsoft Unveils Windows AI Foundry and Native MCP for Future AI Agents
Microsoft is advancing its AI vision with native Model Context Protocol (MCP) support in Windows and the Windows AI Foundry. This crucial groundwork, leveraging Anthropic's "USB-C of AI" protocol, aims to enable automated AI agents to seamlessly interact with apps, web services, and Windows functions.
This initiative will empower features like natural language file searches and AI-powered system controls, reshaping how users engage with their devices.

Google Launches AI Ultra: A VIP Pass to Advanced AI
Google is launching Google AI Ultra, a new $249.99/month subscription (with an initial discount) offering the highest usage limits and access to its most capable AI models and premium features. Tailored for creative professionals, developers, and researchers, it includes Gemini with enhanced reasoning, Flow for cinematic video creation, Whisk for animated image generation, and advanced NotebookLM. Subscribers also get Gemini integration in Google apps (Gmail, Docs, Chrome), Project Mariner for multi-task management, YouTube Premium, and 30 TB of storage.

Apple to Open AI Models for Developers
Apple is reportedly preparing to allow third-party developers to build software using its AI models, aiming to boost new application creation. This move, expected to be unveiled at WWDC on June 9th, would let developers integrate Apple's underlying AI technology into their apps, starting with on-device models. This could help Apple compete in the AI landscape and enhance Apple Intelligence's appeal.

GitHub Copilot Launches New AI Coding Agent
GitHub Copilot now features an AI coding agent that tackles low-to-medium-complexity tasks when developers simply assign it issues. It operates in secure, customizable environments, pushing commits to draft pull requests with transparent session logs.
This agent, enhanced by Model Context Protocol (MCP) support and vision models, allows developers to offload routine work, ensuring security through human approval of pull requests and adherence to existing policies.

📢 If your company is interested in reaching an audience of developers, technical professionals, and decision makers, you may want to advertise with us.

If you have any comments or feedback, just reply back to this email. Thanks for reading and have a great day!

That’s a wrap for this week’s edition of AI_Distilled 🧠⚙️ We would love to know what you thought—your feedback helps us keep leveling up. 👉 Drop your rating here

Thanks for reading,
The AI_Distilled Team
(Curated by humans. Powered by curiosity.)

Shreyans from Packt
12 Sep 2024
9 min read

Apple Intelligence comes to iPhone, iPad, and Mac starting next month

Replit Agent early access
AI_Distilled #67: Apple Intelligence comes to iPhone, iPad, and Mac starting next month

Grow your business & career by 10x using AI Strategies in 4 hrs! 🤯
Imagine a future where your business runs like a well-oiled machine, effortlessly growing and thriving while you focus on what truly matters. This isn't a dream: it's the power of AI, and it's within your reach. Join our AI Business Growth & Strategy Crash Course and discover how to revolutionize your approach to business on 12th September at 10 AM EST. In just 4 hours, you’ll gain the tools, insights, and strategies to not just survive, but dominate your market.
Sign up here to save your seat! 👈

Welcome to AI_Distilled. Today, we’ll talk about:

Techwave:
[Sponsored] Grow your career by 10x using AI Strategies in 4 hrs!
Apple Intelligence comes to iPhone, iPad, and Mac starting next month
Replit Agent early access
AI system developed by Google DeepMind that designs novel proteins
Introducing LLaVA V1.5 7B on GroqCloud
Function Calling in Google AI Studio

Awesome AI:
Polymet - Idea to prototype within seconds
ClipAnything - Choppity
fal.ai
Earkick - Your Personal AI Chatbot
Outerbase | The interface for your database

Masterclass:
Voice Trigger System for Siri
Align Meta Llama 3 to human preferences with DPO
An Intuitive Intro to RL
Enhancing LLMs with Structured Outputs and Function Calling
Safely repairing broken builds with ML

HackHub:
Agents for software development
Open-source LLM app development platform
Build, manage & run useful autonomous agents
Understand Human Behavior to Align True Needs
Generative models for conditional audio generation

Cheers!
Shreyans Singh
Editor-in-Chief, Packt

💡 Recommended Reading: Essential Concepts of Vector Databases
Understand why vector databases are important in modern data management and how to use them effectively. The course is about 4 hours long and is aimed at people interested in advanced data management techniques. The course includes hands-on sessions for setting up and using
these databases, as well as integrating them with Large Language Models and frameworks like LangChain.
Get it for $84.99

⚡ TechWave: AI/GPT News & Analysis

Apple Intelligence comes to iPhone, iPad, and Mac starting next month
Apple announced the launch of "Apple Intelligence," a personal intelligence system integrated with iOS 18, iPadOS 18, and macOS Sequoia, starting in October 2024. This system uses advanced generative models and personal context to enhance everyday tasks, like writing assistance, smarter notifications, and a more flexible Siri. Features like a photo Clean Up tool, transcription in Notes and Phone apps, and AI-powered email prioritization will debut first in the U.S., with expanded language and feature support in the following months.

Replit Agent early access
Replit Agent is an AI tool that helps users create software projects by understanding natural language prompts. Currently in early access for Replit Core and Teams subscribers, it assists in building web-based applications by guiding users through each step, from selecting technologies to deploying the final product. The agent is designed for prototyping and works closely with users to refine and develop their applications.

AI system developed by Google DeepMind that designs novel proteins
AlphaProteo is an AI system developed by Google DeepMind that designs novel proteins to bind to specific target molecules. This technology can accelerate biological research by creating protein binders that aid in drug development, disease understanding, and more. AlphaProteo builds on the success of AlphaFold but goes further by generating new proteins, not just predicting their structures.
It has shown high success rates in binding to key targets, such as proteins involved in cancer and viral infections like SARS-CoV-2.

Introducing LLaVA V1.5 7B on GroqCloud
LLaVA v1.5 7B is a new multimodal AI model available on GroqCloud, enabling developers and businesses to create applications that integrate image, audio, and text inputs. Built from a combination of OpenAI’s CLIP and Meta’s Llama 2, LLaVA v1.5 excels in tasks like visual question answering, image captioning, and multimodal dialogue.

Function Calling in Google AI Studio
Google AI Studio now supports function calling, allowing users to easily test the model's capabilities directly in the interface. This new feature makes it more convenient to experiment with the AI without leaving the UI. Google AI Studio offers free fine-tuning.

💻 Awesome AI: Tools for Work

Polymet - Idea to prototype within seconds
Polymet is an AI-powered tool that helps users quickly turn ideas into prototypes by generating designs and production-ready code in seconds. Users can describe what they need, iterate on the design with their team, and then export the code and designs, which can easily integrate with tools like Figma and existing codebases.

ClipAnything - Choppity
Choppity is an AI-powered video editing tool that allows users to quickly find and clip moments from any video using visual, audio, and sentiment analysis. With its "ClipAnything" feature, users can search for specific parts of a video, such as key events, people, or emotions, without having to manually review hours of footage.

fal.ai
Fal.ai is a generative media platform designed for developers to create and deploy AI-powered applications, particularly focused on text-to-image models.
It offers fast, cost-effective inference with models like FLUX.1 and Stable Diffusion, optimized for various creative tasks.

Earkick - Your Personal AI Chatbot
Earkick is an AI-powered mental health app that helps users track and improve their emotional well-being in real time through a personal chatbot named Panda. Earkick tracks mental readiness, mood, and calmness, while providing daily insights, breathing techniques, and guided self-care sessions.

Outerbase | The interface for your database
Outerbase is an AI-powered platform that simplifies working with databases for engineers, researchers, and analysts. It supports SQL and NoSQL databases, allowing users to manage data securely while using AI tools to write queries, fix mistakes, and generate charts and visualizations instantly. Outerbase's table editor, dashboards, and data catalog help users organize, analyze, and share insights efficiently.

🔛 Masterclass: AI/LLM Tutorials

Voice Trigger System for Siri
Apple's voice trigger system for Siri includes a first-stage low-power detector to identify potential triggers, and a second-stage, high-precision model to confirm the trigger. It also incorporates speaker identification to ensure the device responds only to its primary user. This sophisticated setup addresses challenges like background noise and phonetically similar words while maintaining power efficiency and privacy.

Align Meta Llama 3 to human preferences with DPO
DPO involves fine-tuning a large language model (LLM) based on feedback from human annotators who rate or rank the model's responses according to desired values, such as helpfulness and honesty. SageMaker Studio provides the computational environment to fine-tune the model using Jupyter notebooks with powerful GPU instances, while SageMaker Ground Truth simplifies the process of gathering human feedback by managing workflows for data annotation.
Together, they allow you to align the Llama 3 model’s responses with specific organizational values efficiently.

An Intuitive Intro to RL
Reinforcement learning (RL) is a type of machine learning where an agent learns by interacting with its environment, making decisions, and receiving feedback in the form of rewards or penalties. The goal is to maximize cumulative rewards over time. The agent starts with little to no knowledge and improves through trial and error, learning from past experiences. In RL, actions taken by the agent change the state of the environment, and based on the rewards received, the agent adjusts its future actions. A key concept in RL is balancing exploration (trying new things) and exploitation (using known strategies for rewards).

Enhancing LLMs with Structured Outputs and Function Calling
Enhancing LLMs with structured outputs and function calling improves their ability to provide accurate and useful responses. Structured outputs ensure consistency and clarity by organizing information in a logical format, reducing ambiguity. Function calling allows LLMs to perform specific tasks, such as retrieving real-time data or executing external functions, making them more interactive and versatile. Combined with techniques like Retrieval-Augmented Generation (RAG), which integrates relevant external information into the model’s responses, these enhancements lead to more reliable, accurate, and contextually rich conversations with LLMs.

Safely repairing broken builds with ML
Google's engineers have developed a machine learning model called DIDACT to automatically repair broken code builds by analyzing historical data of build errors and their fixes. This model suggests potential fixes to developers directly within their Integrated Development Environment (IDE).
In a controlled experiment, the use of these machine learning-suggested fixes improved productivity by reducing active coding and feedback time, and increasing the number of completed code changes.

🚀 HackHub: AI Tools

All-Hands-AI/OpenHands
OpenHands is an AI-powered platform designed to assist with software development, allowing agents to perform tasks similar to human developers. These agents can modify code, run commands, browse the web, call APIs, and even use resources like StackOverflow. OpenHands is easy to set up using Docker and can be run in various modes, including scriptable or interactive CLI.

langgenius/dify
Dify is an open-source platform for developing AI applications, offering an intuitive interface that integrates workflows, agent capabilities, model management, and observability features. Dify's core features include a visual AI workflow builder, integration with numerous LLMs, agent tools, and a retrieval-augmented generation (RAG) pipeline for document handling.

TransformerOptimus/SuperAGI
SuperAGI is an open-source framework designed for developers to create, manage, and run autonomous AI agents. It allows seamless operation of multiple agents simultaneously and provides tools to extend their capabilities. With features like graphical interfaces, performance telemetry, and integration with multiple vector databases, SuperAGI enables AI agents to efficiently handle tasks, learn from experience, and optimize token usage.

lllyasviel/Paints-UNDO
Paints-Undo is an open-source project that provides AI models designed to simulate the drawing process in digital art. By inputting a completed image, users can generate a sequence of steps showing how that image might have been created, mimicking the "undo" function in digital painting software.

Stability-AI/stable-audio-tools
Stable-Audio-Tools is an open-source library for working with audio generation models.
It provides tools for training and running models that generate audio, including a Gradio interface for testing. Users can install the library via PyPI, and the repository includes scripts for both training models and performing inference.

If you have any comments or feedback, just reply back to this email. Thanks for reading and have a great day!

LLM Expert Insights, Packt
30 May 2025
10 min read

Ready to dive into this week’s top five?

How to boost LLM performance during pre-training: A preview
AI_Distilled #97: What’s New in AI This Week

Build Your AI Chatbot with the Free LLM Zoomcamp
Join LLM Zoomcamp, a free online course starting on June 2, and build an end-to-end AI chatbot tailored to your use case. In 10 weeks, you’ll learn key skills like working with LLMs and RAG, vector search for indexing and retrieval, how to evaluate and monitor performance, and key best practices for building robust, real-world applications.
REGISTER NOW FOR FREE

It’s time for the final issue of May 2025. In this edition, we bring you the top five news highlights of the week, upcoming events shaping the AI and LLM landscape, and a sneak peek into techniques for optimizing LLM performance.
LLM Expert Insights, Packt

In today's issue:
🧠 Expert Deep Dive: This week, we explore pre-training optimization techniques—from quantization to flash attention—for building faster, smarter LLMs.
📅 Webinar Watchlist: June’s top AI/LLM webinars cover automation, cybersecurity, healthcare, legal AI, and multimodal fine-tuning.
🔌 Build AI Agents This Weekend: Join Packt’s Accelerated Agentic AI Bootcamp—hands-on, fast-paced, and 35% off.
📚 Optimize Your LLM Stack: Learn more from Generative AI with Python and PyTorch—a guide to efficient training and deployment.
🚀 DeepSeek V3 Debuts: China’s latest open-source model steps up with better reasoning and dev capabilities.
📰 Publishers vs. AI Search: Google CEO Sundar Pichai defends AI-powered results amid growing backlash from content creators.
📱 Apple Rebrands for 2026: WWDC will unveil iOS 26 and align all platforms under a unified OS naming strategy.
🎨 Sam Altman x Jony Ive: OpenAI teams up with the design legend to build magical, AI-first consumer products.
🧠 Anthropic Traces Thoughts: Claude’s internal reasoning gets visualized through groundbreaking interpretability research.
📈 UPCOMING EVENTS
JUNE'S MUST-ATTEND AI/LLM WEBINARS

In June 2025, a number of exciting AI webinars are already generating buzz. Here are the top five not-to-miss events in the next month (for more information and registration details, please visit the links):

1. AI-Enhanced Motion Control: Innovations Driving Automation Forward
Date: June 5, 2025
Time: 12:00 PM – 1:00 PM ET
Location: Online
Cost: Free
Hosted by the Association for Advancing Automation, this webinar explores how AI is revolutionizing motion control systems, enhancing precision, efficiency, and adaptability across various industries.

2. AI Security Webinar – Practical Measures to Mitigate AI and Cybersecurity Risks
Date: June 11, 2025
Time: 11:00 AM – 12:30 PM BST
Location: Online
Cost: Free
Presented by The Alan Turing Institute, this interactive webinar brings together industry experts and SMEs to share practical, cost-efficient, and high-impact security measures that deliver maximum AI and cybersecurity protection for businesses.

3. Clinical Large Language Models in Healthcare – Applications, Challenges, and Opportunities
Date: June 12, 2025
Time: 10:00 AM – 11:00 AM CEST
Location: Online
Cost: Free
Organized by the Helmholtz Information & Data Science Academy in collaboration with NORA, this webinar features Anne Torill Nordsletta discussing the role of large language models in healthcare, exploring applications, challenges, and future opportunities in the clinical setting.

4. Inside the TBI Playbook: How I Use AI to Win the Hardest Cases
Date: June 17, 2025
Time: 1:00 PM – 2:30 PM EST
Location: Online
Cost: Free
Hosted by Anytime AI™, this CLE-accredited webinar features attorney Taylor Ernst sharing insights on leveraging AI in traumatic brain injury litigation. Attendees will learn about practical applications of AI tools in complex legal cases.

5.
Multi-Modal LLM Fine-Tuning of Unstructured Data with Dataloop & SingleStore
Date: June 18, 2025
Time: 10:00 AM – 11:00 AM PST
Location: Online
Cost: Free
Presented by SingleStore, this webinar explores techniques for fine-tuning multi-modal large language models on unstructured data, covering integration strategies with Dataloop and SingleStore platforms.

Machine Learning Summit 2025
JULY 16–18 | LIVE (VIRTUAL)
20+ ML Experts | 20+ Sessions | 3 Days of Practical Machine Learning and 35% OFF
BOOK NOW AND SAVE 35%
Use code EMAIL35 at checkout when purchasing the 3-day ticket. Limited to the first 50 customers.

EXPERT INSIGHTS
PRE-TRAINING OPTIMIZATION TECHNIQUES FOR LLMs

The scale of data and computation required for large language models (LLMs), along with the significant capital investment needed to train and deploy them, necessitates the exploration of optimization techniques throughout the LLM lifecycle. In this issue, we focus on potential improvements during the pre-training phase, as this is the most resource-intensive step, involving a vast amount of data and sensitivity to architectural design. Here are some techniques you can employ to improve LLM performance and efficiency:

1. Quantization: Quantization aims to reduce the number of bits needed to store model weights by binning floating-point values into lower-precision buckets. This reduces memory usage with minimal impact on performance. Small precision losses are acceptable as long as the model’s performance stays within the required levels. For instance, a weight value like 3.1457898 could be quantized to 3.1458 using a scheme that retains four decimal places. Such a scheme can introduce small errors while computing the loss or updating weights (for example, a slightly higher margin of error during the backward pass of a training step). Take, for instance, 4-bit quantization, which uses small bins where the density of weights is higher and fewer, larger bins for weights away from the mean.
The 4-bit float representation employs an intelligent approach based on the distribution of model weights. Most weights tend to cluster near zero, with minor differences requiring higher precision, while fewer weights have larger values. To accommodate this, asymmetric binning is used: smaller bins are allocated for values near the mean to maintain precision, while fewer, larger bins handle outliers further from the mean.

2. Mixed precision: This is another technique to reduce memory and computational demands without sacrificing significant accuracy. These methods combine different numerical formats, such as float16, int8, and more, to optimize efficiency and performance during training or inference.

3. Data efficiency: Large datasets are costly to process, and redundant or noisy data can negatively impact model performance. Therefore, data efficiency techniques can be applied to achieve high model accuracy and generalization with a reduced or optimized dataset. This process includes filtering data for quality, reducing redundancy, and applying sampling techniques to emphasize high-value samples.

4. Sparse attention: Instead of computing attention weights for every pair of tokens in the input sequence, sparse attention focuses only on a subset of tokens, exploiting patterns in the data or task-specific properties. To put things into perspective, think about decoder-only architectures like GPT trained with an auto-regressive language objective. Such an objective constrains the attention layer to be causal, so only the lower-triangular attention matrix is useful (but a naive implementation still computes the whole matrix). Different architectures leverage specific patterns, like local or strided attention mechanisms, to bring efficiency to computation time.

5. Flash attention: Flash attention takes the route of hardware-based improvements and efficiencies to compute attention scores.
Two techniques are central to flash attention: kernel fusion and tiling. Kernel fusion reduces the number of I/O operations by combining all steps (elementwise operations, matrix multiplication, softmax, etc.) into a single read-write operation; this technique is particularly effective during inference. Tiling, on the other hand, breaks the overall attention calculation into smaller, manageable groups of operations that fit into fast, low-latency GPU memory. For instance, instead of computing softmax across the entire attention matrix at once, FlashAttention computes it over smaller chunks in a numerically stable, tiled fashion, thus making use of faster memory without needing to store a large matrix.

6. Mixture of Experts (MoE) architecture: MoE is an advanced architecture designed to activate only a subset of components (or experts) rather than the whole network, thereby achieving higher scalability and efficiency. The experts in this architecture are independent modules or blocks of the network, each of which can be trained to specialize in a specific task, while the router is a module that learns to select which experts to activate for a given input based on different criteria. The router itself can be a neural network.

7. Efficient architectures: A number of different patterns and techniques have been developed and leveraged by different architectural improvements over the years. Some of the popular architectures are Linformer, Reformer, and Big Bird.

Apart from pre-training optimizations, there are other techniques as well, such as fine-tuning and inference-time improvements. More recently, the availability and popularity of small language models, along with specialized hardware and frameworks, have also contributed to significant improvements in overall efficiency in resource-constrained environments.

Liked the Insights? Want to dig in deeper?
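Two of the ideas above are small enough to sketch in plain Python: a toy uniform binning quantizer (real 4-bit schemes use non-uniform, distribution-aware bins, as described in technique 1) and a streaming, numerically stable softmax in the spirit of FlashAttention's tiling (real kernels operate on GPU tiles, not Python lists).

```python
import math

# 1) Toy uniform binning quantizer: round each weight to a fixed grid of
# 2**bits - 1 levels over [-max_abs, max_abs]. This only illustrates the
# store-fewer-bits / bounded-error trade-off, not any production format.
def quantize(w, bits=4, max_abs=1.0):
    levels = 2 ** bits - 1          # 15 levels for 4 bits
    step = 2 * max_abs / levels     # width of one bin
    code = round(w / step)          # the small integer you would store
    return code * step              # dequantized approximation

w = 0.7311
w_hat = quantize(w)
assert abs(w - w_hat) <= (2 * 1.0 / 15) / 2 + 1e-12  # error <= half a bin

# 2) Streaming softmax over tiles: one pass, numerically stable, never
# holding exp() of the full row at once (the "online softmax" trick that
# FlashAttention's tiling builds on).
def tiled_softmax(scores, tile=4):
    running_max, running_sum = float("-inf"), 0.0
    for i in range(0, len(scores), tile):
        chunk = scores[i:i + tile]
        m = max(chunk)
        if m > running_max:
            # Rescale the partial sum to the new max; math.exp of -inf is
            # 0.0, so the first tile works out naturally.
            running_sum *= math.exp(running_max - m)
            running_max = m
        running_sum += sum(math.exp(s - running_max) for s in chunk)
    return [math.exp(s - running_max) / running_sum for s in scores]

probs = tiled_softmax([2.0, 1.0, 0.5, 3.0, -1.0, 0.0], tile=2)
assert abs(sum(probs) - 1.0) < 1e-9
```

The second function gives the same result as an all-at-once softmax; the point is that no step ever needs the whole row in fast memory at the same time.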
If you wish to learn more about these techniques or wish to dive deep into foundational aspects of the LLM ecosystem, you can check out the book Generative AI with Python and PyTorch, Second Edition, by Joseph Babcock and Raghav Bali.
BUY NOW

📈 LATEST DEVELOPMENT

Let’s kick things off with the top stories of the week.

China is aiming for the top spot in the AI race with DeepSeek V3's latest release
DeepSeek just released DeepSeek-V3-0324, claiming a major boost in reasoning, front-end development capabilities, and smarter tool use. The release positions DeepSeek as a serious contender to models like Code Llama and Codex. You can try out the open-source weights from this HuggingFace card.

Publishers claim AI search is an internet takeover; Pichai defends it as innovation
In a podcast with Nilay Patel (Editor-in-Chief of The Verge), Google CEO Sundar Pichai shared candid thoughts on AI’s impact on the internet. He defended AI-generated search results amid backlash, insisting they won’t kill the open web. As Google walks a tightrope between innovation and publisher outrage, Pichai expressed confidence that AI will ultimately “enhance,” not erase, human content. He dodged revenue concerns but acknowledged the risks of unchecked AI growth. Catch the full conversation here.

Apple’s branding power move with iOS 26
A Bloomberg report says that Apple is set to revamp its OS branding at WWDC 2025. The rebranding will sync all platforms with the upcoming 2026 launch year, setting the stage for a unified, modernized software identity with iOS 26, macOS 26, and watchOS 26.

Sam Altman and Ive team up for AI-first products
OpenAI is collaborating with design icon Jony Ive and his firm LoveFrom to craft AI-powered products. The io team, led by Jony Ive, Scott Cannon, Evans Hankey, and Tang Tan, will collaborate closely with OpenAI’s research and engineering teams, with LoveFrom leading design and creative responsibilities.
Their goal: to recapture the magic, creativity, and wonder of early Apple-era technology. Hear more about their vision in this video.

Anthropic inching towards interpretable AI?
Anthropic just cracked open the black box of AI thinking with its latest research, Tracing Thoughts. Using a novel method called dictionary learning, researchers mapped how language models like Claude internally form and organize thoughts. They uncovered thousands of hidden features that resemble abstract concepts and reasoning steps. This breakthrough gives us a glimpse into not just what AI predicts, but how it thinks. Dive into this investigative research here.

If you have any comments or feedback, just reply back to this email. Thanks for reading and have a great day!

LLM Expert Insights, Packt
31 Oct 2025
5 min read

🚀 Your Next LLM May Be Trained on 4 Bits—Thanks to Nvidia

NVFP4 could redefine efficiency in model training—check out the most important new benchmarks this week.

AI_Distilled #120: What’s New in AI This Week

Welcome to this week’s edition of AI Distilled! This week’s roundup spans the cutting edge of model optimization, on-device AI, and enterprise infrastructure. Nvidia advances efficiency with 4-bit precision training, while MiniMax’s open-weight M2 model sets new performance benchmarks. Meta, Intel, Qualcomm, and AWS unveil major infrastructure updates, and Arize AI’s partnership with Infogain strengthens agent observability. We also spotlight Microsoft’s coding patent and Forrester’s surprising insights on AI-driven rehiring trends.

LLM Expert Insights, Packt

📈 LATEST DEVELOPMENT

New Models & Training Techniques

Nvidia unlocks 4-bit precision training
Nvidia’s new NVFP4 quantization method allows large language models to train in 4-bit precision while maintaining 8-bit FP8 accuracy. By mixing precision to manage numerical outliers, NVFP4 dramatically reduces memory and compute costs—making custom model training more affordable. (VentureBeat)

MiniMax-M2 sets benchmark for open-weight LLMs
MiniMax has launched MiniMax-M2, an open-weight mixture-of-experts language model featuring 230 billion parameters with 10 billion active. Despite its relatively small active footprint, M2 topped the Artificial Analysis Intelligence Index and excelled in coding and agent benchmarks. Released under an MIT license, it offers enterprises a flexible, cost-efficient option for experimentation and deployment without restrictive licensing barriers. (VentureBeat)

Build, Deploy and Scale AI Agents - Nexus 2025
Packt’s Annual AI Workshop
Build and fine-tune your own LLMs and Agents and deploy them in production with workshops on MCP, A2A, Context Engineering, and many more.
🕜 Nov 20-21 BOOK NOW - 50% OFF

Tools, Agents, & Platforms

Framework for agentic help desks
InfoWorld has outlined a six-step roadmap for deploying AI help-desk agents—from defining measurable goals to embedding the agent in real user channels. The guide emphasizes that effective agents must act, not merely chat, combining governance, tool access, and human oversight. (InfoWorld)

ExecuTorch 1.0 powers on-device AI
Meta has launched ExecuTorch 1.0, an open-source inference framework that runs any PyTorch model directly on mobile and edge devices. Supporting CPU, GPU, and NPU acceleration, ExecuTorch enables low-latency AI for vision, speech, and language—while safeguarding user privacy. (InfoWorld)

Partnerships & Investments

Arize AI + Infogain boost agent observability
Arize AI has joined forces with Infogain’s Ignis platform to unify LLM evaluation and monitoring. The integration adds tracing, prompt optimization, and real-time compliance checks, giving enterprises a clearer view of agent performance across lifecycles. (PR Newswire)

Maincode commits $30M to Melbourne AI factory
Australian developer Maincode is investing $30 million in its new MC-2 AI Factory, due January 2026. Equipped with AMD Instinct GPUs and EPYC CPUs, MC-2 will specialize in precise, client-specific LLMs—powering the next generation of its Matilda models. (ARN Net)

Infrastructure & Hardware

Intel shifts focus to data-center chips
Amid constrained 10/7-node capacity, Intel is prioritizing wafer supply for server processors over consumer chips. The company noted surging AI demand and plans to adjust pricing and mix toward data-center workloads. (Network World)

AWS & Anthropic complete Project Rainier
Amazon Web Services has finished Project Rainier, an $8 billion supercomputing cluster for Anthropic’s Claude models. Built with over 500,000 Trainium2 chips (scaling to 1 million), the system boosts sustainability through hybrid cooling and vertical power delivery—enabling faster, greener LLM training.
(DataCenter Knowledge)

Qualcomm unveils AI200 and AI250 accelerators
Qualcomm has unveiled its AI200 and AI250 accelerators, built for rack-scale inference of large language and generative models. The AI200 offers 768 GB LPDDR memory, while the AI250 adds near-memory computing for up to 10× higher bandwidth. Both feature liquid cooling, confidential computing, and high efficiency—marking Qualcomm’s bold expansion into data-center-grade AI infrastructure. (Qualcomm)

Market & Predictions

Microsoft patents symbolic-guided code generation
Microsoft is seeking a patent for a system that enhances how large language models generate source code. The proposed method involves identifying “symbolic properties” from high-quality code examples, training a model to recognize those properties from natural language prompts, and then guiding the LLM to produce code that aligns with those patterns. The goal: more accurate, reliable AI-generated code. (The Daily Upside)

EXPERT INSIGHTS

Designing Multi-Agent Systems with OpenAI Agents SDK
In Building Agents with OpenAI Agents SDK, author Henry Habib explores one of the most powerful frontiers in applied AI – the ability for multiple agents to collaborate seamlessly to solve complex tasks. Just as organizations rely on specialized teams to achieve ambitious goals, intelligent systems can distribute work among multiple AI agents, each with a specific role, expertise, and responsibility. This week’s Expert Insight breaks down how multi-agent systems function, how “handoffs” make them work, and how OpenAI Agents SDK simplifies building these architectures for real-world applications. READ MORE

Built something cool? Tell us.
Whether it's a scrappy prototype or a production-grade agent, we want to hear how you're putting generative AI to work. Drop us your story at nimishad@packtpub.com or reply to this email, and you could get featured in an upcoming issue of AI_Distilled.
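The NVFP4 story earlier in this issue rests on block-wise low-precision quantization: groups of weights share one scale factor, and each weight is stored in only 4 bits. As a rough illustration of that general idea only (this is not Nvidia’s actual NVFP4 format, which uses FP4 values with FP8 block scales and hardware support), here is a toy per-block signed 4-bit integer quantizer in plain Python:

```python
# Toy block-wise 4-bit quantization: each block of values shares one
# float scale, and individual values are stored as signed 4-bit ints
# in the range -8..7. Illustrative only -- the real NVFP4 format uses
# FP4 values with FP8 block scales, not int4 with float scales.

def quantize_block(block):
    """Quantize a list of floats to (scale, list of 4-bit ints)."""
    scale = max(abs(x) for x in block) / 7.0 or 1.0  # avoid zero scale
    q = [max(-8, min(7, round(x / scale))) for x in block]
    return scale, q

def dequantize_block(scale, q):
    return [scale * v for v in q]

def quantize(values, block_size=4):
    """Split values into blocks and quantize each independently."""
    blocks = [values[i:i + block_size]
              for i in range(0, len(values), block_size)]
    return [quantize_block(b) for b in blocks]

def dequantize(qblocks):
    out = []
    for scale, q in qblocks:
        out.extend(dequantize_block(scale, q))
    return out

weights = [0.12, -0.54, 0.33, 0.91, -1.2, 0.05, 0.7, -0.4]
restored = dequantize(quantize(weights))
max_err = max(abs(a - b) for a, b in zip(weights, restored))
print(max_err)
```

Because each block gets its own scale, a single outlier only degrades precision within its own block rather than across the whole tensor, which is the intuition behind mixing precision to “manage numerical outliers.”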
LLM Expert Insights, Packt
24 Oct 2025
9 min read

AI’s Wild Week: AI Faces a $1.5B Reckoning and a Reality Check

Exclusive Invite: Packt’s Nexus 2025 – The Global Agentic AI Event.

AI_Distilled #119: What’s New in AI This Week

It’s been a week of recalibration across the AI landscape: billion-dollar copyright reckonings, tightening global regulations, layoffs, lawsuits, and bold experiments redefining what “AI-powered” really means. Underneath the noise, a pattern is emerging: the industry is shifting from rapid expansion to structural accountability. Whether it’s Anthropic’s landmark settlement, China’s new AI governance laws, or SAP’s methodical rollout of enterprise agents, the message is clear: AI’s next phase is about stewardship rather than scale. Dive into this week’s curation for the full picture!

LLM Expert Insights, Packt

EXPERT INSIGHTS

Building Trustworthy Intelligence: The Road to Responsible AI in LLMs
In this week’s feature, Ahmed Menshawy and Mahmoud Fahmy, authors of LLMs in Enterprise, unpack how organizations can balance innovation with responsibility when deploying large language models. They outline the four pillars of Responsible AI (RAI): fairness, transparency, accountability, and safety, as the foundation for building trustworthy systems. From bias detection and explainability tools to continuous compliance and regulatory alignment, the article shows how ethics becomes engineering through practical frameworks and real-world safeguards. As global standards like the EU AI Act and NIST RMF tighten accountability, RAI isn’t just good practice; it’s a business imperative. Read the full article on Substack →

Special Message from Packt's Events Team:
This November, the world’s top AI experts from Google, Microsoft, and LangChain are coming together for Packt's Nexus 2025, a two-day live virtual summit for developers, engineers, and AI practitioners ready to build the next generation of intelligent systems. Join the Experts Redefining AI | Live at Nexus 2025. BOOK YOUR SEAT NOW!
Use code EARLY50 to get a 50% discount on the ticket - Exclusive for the AI_Distilled Community

📈 LATEST DEVELOPMENT

OpenAI launches AI browser that can browse and act for you
What happened: OpenAI introduced ChatGPT Atlas, a Chromium-based browser with the ChatGPT assistant built in. It currently supports macOS and offers features like a sidebar for summarising websites, indexing of your browsing history, and an “Agent Mode” that enables the AI to perform tasks like shopping and tab management, all with optional privacy modes for logged-out usage.
Why it matters: By integrating LLMs directly into the browser, OpenAI is shifting how we access and interact with the web, from manual searches to conversational and action-based interfaces. This move also elevates questions of privacy, data control, and the evolving role of browsers as AI-enabled platforms. (Tom’s Hardware)

DeepSeek explores AI efficiency with token-to-image compression
What happened: Chinese startup DeepSeek unveiled a new model that converts text tokens into images using a vision encoder, a technique that could overcome the “long-context” limits of LLMs. The model, called DeepSeek-OCR, compresses text inputs up to 10× while maintaining about 97% accuracy, sparking discussion across the global AI community.
Why it matters: This research could pave the way for LLMs that handle far longer prompts and reasoning chains without massive computational costs. If successful, it would mark a breakthrough in scaling efficiency, one of the biggest challenges in current AI architectures. (South China Morning Post)

Anthropic to pay $1.5 billion in landmark copyright settlement
What happened: Anthropic has agreed to pay $1.5 billion to authors after using their copyrighted books, scraped from sites like LibGen and PiLiMi, to train its Claude models without permission. Around half a million authors are eligible for compensation, and Anthropic must also destroy all pirated copies.
Why it matters: The settlement sets a precedent for how AI companies handle copyrighted data, signaling that unlicensed use of creative works now carries real financial risk. It may also push the industry toward formal licensing deals between publishers and AI developers. (Chemistry World)

China strengthens AI oversight with new data and safety laws
What happened: China’s top legislature is drafting amendments to its cybersecurity law to include stricter AI safety, ethics, and data protection measures. The proposed framework supports AI research while tightening oversight of generative models and content labeling, including mandatory visible and hidden identifiers for AI-generated media.
Why it matters: The move signals Beijing’s intent to balance AI growth with tighter governance, aiming to prevent misinformation and data misuse. It also highlights a divergence from U.S. policy; China’s focus is regulation-first, while American firms emphasize commercial deployment. (Business Standard)

Study warns of ‘brain rot’ in AI models trained on junk web data
What happened: A study by researchers from Texas A&M, the University of Texas at Austin, and Purdue University found that large language models suffer “cognitive decline” when repeatedly trained on low-quality, engagement-driven content. The paper, titled LLMs Can Get Brain Rot!, shows that reasoning accuracy in tested models dropped nearly 20 points, and long-context comprehension fell over 30 points, when fed junk social media data. (Business Standard)
Why it matters: The findings underline that data quality directly affects AI reliability and ethics, not just performance.
Models exposed to “viral” or superficial web text exhibited reasoning shortcuts, overconfidence, and personality drift—effects researchers call “persistent representational decay.” The paper urges developers to treat data hygiene as a core AI safety issue, recommending cognitive audits and stricter content filtering during training. (arXiv)

OpenAI’s South Korea blueprint envisions AI-led economic growth
What happened: OpenAI released an Economic Blueprint for South Korea, outlining policy recommendations to scale AI adoption through partnerships with Samsung, SK, and the Ministry of Science and ICT. The plan builds on OpenAI’s Stargate initiative, focused on advanced memory and next-gen data centers, and aims to pair sovereign AI development with frontier collaborations. (OpenAI)
Why it matters: South Korea is positioning itself as the next global AI powerhouse, leveraging its semiconductor dominance, digital infrastructure, and government-backed funding. The blueprint calls for AI-led growth in exports, healthcare, education, and SMEs, alongside governance sandboxes and data infrastructure standards, framing Korea as both an adopter and standard-setter in safe, scalable AI deployment. (OpenAI)

Dell Technologies Capital bets on AI data and new architectures
What happened: Dell Technologies Capital (DTC) managing director Daniel Docter and partner Elana Lian outlined their vision for next-generation AI architectures and “frontier data” in a Crunchbase interview. Dell expects $20 billion in AI server shipments by 2026 and has logged five portfolio exits since June, including Meta’s acquisition of Rivos and Salesforce’s acquisition of Regrello.
Why it matters: DTC sees AI’s future as a data problem more than a model problem, backing startups innovating in reasoning, safety, and new architectures such as state-space models for long-context and voice AI.
The firm’s focus spans from silicon to applications, reflecting how enterprise AI is now driven by infrastructure, not hype. (Crunchbase)

Google launches Skills platform with 3,000 AI courses
What happened: Google unveiled Google Skills, a unified learning hub offering nearly 3,000 AI and technical courses from Google Cloud, DeepMind, and Grow with Google. The platform features hands-on labs powered by Gemini Code Assist, gamified progress tracking, and credentials ranging from skill badges to professional certificates. (Analytics India Magazine)
Why it matters: As demand for AI talent accelerates, Google’s platform could play a central role in bridging global workforce gaps, especially by offering free access to students, nonprofits, and developers. It emphasizes applied, hands-on learning rather than passive video courses, signaling how tech giants are retooling education to meet enterprise AI demand. (Analytics India Magazine)

Elon Musk says AI will take every job and humans will be free to grow vegetables
In his latest comments on X, Elon Musk declared that “AI and robots will replace all jobs.” Far from a dystopian warning, Musk argued this shift could liberate humanity from the need to work, likening future labor to an optional hobby such as “growing your own vegetables instead of buying them from the store.” The remark came in response to reports about Amazon’s plan to replace over 160,000 jobs with robots by 2027. While his statement reignited debates about automation anxiety, Musk framed it as an opportunity for universal income and post-labor fulfillment rather than economic ruin. (mint)

Build an agent with function calling in GPT-5
What you’ll learn: a practical walk-through of agent design, from defining tool schemas and wiring up function calls to implementing a working web-search agent with Tavily, complete with environment setup, code, and a clear loop for handling function outputs vs direct replies.
If you’ve been wanting to move from prompts to real actions, bookmark this and try the tutorial end-to-end. (Towards Data Science)

Look beyond LLMs to build the next generation of AI
AI veteran Dr. Lance Eliot argues that true progress toward AGI will come from exploring new paradigms, from neuro-symbolic and embodied AI to human-centered and quantum approaches, rather than scaling today’s language models. If you care about where the next real breakthroughs will emerge, this piece is your roadmap to what comes after generative AI. (Forbes)

Built something cool? Tell us.
Whether it's a scrappy prototype or a production-grade agent, we want to hear how you're putting generative AI to work. Drop us your story at nimishad@packtpub.com or reply to this email, and you could get featured in an upcoming issue of AI_Distilled.
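The function-calling tutorial above centers on one control loop: the model either replies directly or requests a tool call, whose result is appended to the conversation and fed back in. Here is a framework-agnostic sketch of that loop in plain Python; the model and the search tool are both mocked here (the real tutorial uses a GPT-5 chat call and the Tavily API), so only the loop structure is illustrated:

```python
# Minimal function-calling agent loop with a mocked model.
# `mock_model` stands in for a real LLM chat call and `web_search`
# stands in for a real search API; the point is the control flow:
# tool call -> tool result -> final direct reply.

def web_search(query):
    # Stub for a real search tool (e.g. Tavily in the tutorial).
    return f"Top result for {query!r}"

TOOLS = {"web_search": web_search}

def mock_model(messages):
    """Pretend model: requests one search, then answers directly."""
    if not any(m["role"] == "tool" for m in messages):
        return {"tool_call": {"name": "web_search",
                              "arguments": {"query": messages[0]["content"]}}}
    tool_result = [m for m in messages if m["role"] == "tool"][-1]["content"]
    return {"content": f"Based on my search: {tool_result}"}

def run_agent(question, model=mock_model, max_steps=5):
    messages = [{"role": "user", "content": question}]
    for _ in range(max_steps):
        reply = model(messages)
        call = reply.get("tool_call")
        if call is None:                 # direct reply: we're done
            return reply["content"]
        result = TOOLS[call["name"]](**call["arguments"])
        messages.append({"role": "tool", "content": result})
    raise RuntimeError("agent did not terminate")

print(run_agent("latest LLM benchmarks"))
```

Swapping `mock_model` for a real chat-completion call (and `web_search` for a real API client) turns this skeleton into the kind of working web-search agent the tutorial builds.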

LLM Expert Insights, Packt
06 Jun 2025
9 min read

📬 Don’t Miss This Week’s AI Highlights (Your Shortcut to Smart)

From Digit’s delivery test to Gemini 2.5’s native audio and ChatGPT-powered productivity—this week’s AI_Distilled #98: What’s New in AI This Week

Join the live "Building AI Agents Over the Weekend" workshop starting on June 21st and build your own agent in two weekends. In this workshop, the instructors will guide you through building a fully functional autonomous agent and show you exactly how to deploy it in the real world. BOOK NOW AND SAVE 35% Use Code AGENT25 at checkout. Spots are limited. Book now to SAVE 35% (valid till 8th June 2025).

This month is buzzing with AI innovation—from can’t-miss conferences to game-changing GenAI use cases. Whether you're looking to level up your skills, explore new tools, or stay ahead of the curve, we've got you covered.

LLM Expert Insights, Packt

In today's issue:
🧠 Expert Deep Dive: Valentina Alto explores real-world GenAI use cases—from code and content to campaigns and daily life.
📅 June Conference Watch: Your curated guide to the top AI/LLM conferences this month—CVPR, ICML, ACL, and more.
🎯 Productivity Reimagined: From GTM strategy to custom workouts, see how ChatGPT reshapes personal and professional workflows.
🔊 Gemini 2.5 Gets Audio: Google DeepMind’s latest model understands tone, languages, and screen-shared content.
📦 Amazon’s Humanoid Robot: Digit enters delivery trials—redefining warehouse automation and last-mile logistics.
🔐 OpenAI Boosts Security: A new vulnerability disclosure framework sets industry standards for AI integrity.
🚫 DeepSeek Faces Criticism: China’s newest model sparks global concern with aggressive political censorship.
⚡ Nvidia Dominates MLPerf: Blackwell GPUs set new training records, proving unmatched performance in AI workloads.

📈 UPCOMING EVENTS

JUNE'S MUST-ATTEND AI/LLM CONFERENCES
Breakthroughs in AI are made possible through years of study, experimentation, and research that eventually shape the mainstream.
Whether you're a researcher pushing the boundaries of machine learning, a developer building with generative AI, or a leader shaping enterprise strategy, this handpicked list of the top conferences in 2025 will help you stay connected to the pulse of innovation.

1. CVPR 2025 – IEEE/CVF Conference on Computer Vision and Pattern Recognition
Dates: June 11–15, 2025
Location: Music City Center, Nashville, TN, USA
Cost: In-person – General: $900; Student: $810; IEEE/CVF Members: $900 for professionals, $675 for students. Virtual – General: $215; Student: $125; IEEE/CVF Members: $180 for professionals, $100 for students
Focus: Computer vision, multimodal AI, LLMs in vision tasks
Website: CVPR 2025 Conference

2. ICLAD 2025 – IEEE International Conference on LLM-Aided Design
Dates: June 26–27, 2025
Location: Paul Brest Hall, Stanford University, Stanford, CA
Cost: In-person only – General: $600; Student: $410; IEEE/CVF Members: $500 for professionals, $350 for students
Focus: Utilizing large language models to enhance design processes in circuits, software, and computing systems
Website: International Workshop on LLM-Aided Design

3. ICML 2025 – International Conference on Machine Learning
Dates: July 13–19, 2025
Location: Vancouver Convention Center, Vancouver, Canada
Cost: In-person – General: $1365; Student: $1030. Virtual – General: $275; Student: $200
Focus: Machine learning theory and practice, generative AI, LLMs
Website: ICML 2025 Conference

4. ACL 2025 – 63rd Annual Meeting of the Association for Computational Linguistics
Dates: July 27 – August 1, 2025
Location: Vienna, Austria
Cost: In-person – General: $1125; Academic: $800; Student: $425, plus ACL membership fee ($100 for professionals, $50 for students). Virtual – General: $550; Academic: $400; Student: $250, plus ACL membership fee ($100 for professionals, $50 for students)
Focus: Natural language processing, large language models, language generation
Website: ACL 2025

5.
NeurIPS 2025 – Conference on Neural Information Processing Systems
Dates: December 2–7, 2025
Location: San Diego Convention Center, San Diego, CA, USA
Cost: In-person – General: $1000; Academic: $800; Student: $375. Virtual – General: $275; Academic: $200; Student: $50
Focus: Advanced ML research, LLMs, multimodal AI
Website: NeurIPS 2025 Conference

EXPERT INSIGHTS

FROM TEXT TO TECH: THE MANY USE CASES OF GENERATIVE AI
The hype around GenAI and how it enhances productivity shows no signs of slowing down. Just as previous generations shifted from Xeroxing to Googling, we now find ourselves firmly in the era of “Ask ChatGPT.” GenAI finds applications in many fields, from image synthesis and text generation to music composition, marketing content, data analysis, coding, and countless other tasks that, until recently, required specialized expertise. In this issue, we spotlight just a few of the many real-world applications of GenAI, using OpenAI’s ChatGPT as our lens. Here are four use cases from one of our best-selling books, Practical Generative AI with ChatGPT, written by our star author Valentina Alto.

1. Daily assistant: ChatGPT is an excellent tool for boosting your day-to-day activities, such as grocery shopping, meal planning, and workouts, among many other tasks. Take, for example, the following prompt:

Generate a 75’ workout routine for strength training. My goal is increasing my overall strength and also improving flexibility. I need a workout for the upper body only divided by the muscle group. Make it in a table format with # of reps and # of series. Make sure to incorporate some rest as well.

Here is a sample workout plan that ChatGPT might generate for you:

2. Creating content: You can use ChatGPT to craft emails, create social media posts, write blogs and articles, assist with proofreading, perform translations, analyze documents, or even adjust the tone of your content: whether you want it to be formal, quirky, casual, or sarcastic.
Take a look at ChatGPT’s sarcastic translation of an Italian text:

3. Coding assistant: The primary capability you should leverage is ChatGPT’s code generation. From writing a simple function to creating the skeleton of a game, ChatGPT can provide enough building blocks to get started. You can also use it to suggest code optimizations, explain errors, and debug your existing code. Additionally, it can help generate documentation, improve code explainability, and even assist in understanding the structure of a neural network. Take, for example, the following CNN model: If you ask ChatGPT to explain this model, it may respond as follows:

4. Design marketing campaigns: Suppose you have a new product and need a go-to-market (GTM) strategy. You can ask ChatGPT to help you draft an initial plan. Then, by iteratively refining your prompts, you can request suggestions for the product name, marketing hook, target audience research, unique value proposition, sales channels, pricing, SEO keywords, and more. You can even ask it to generate product launch posts. Here are some of the prompts Valentina experimented with in her book while developing a GTM strategy for eco-friendly socks:

Generate 5 options for a catchy product line name
Generate 3 slogans for the “GreenStride” name. They should be motivating and concise.
What kind of target audience should I address with the promotion of GreenStride socks product line
What could be the best channel to reach the segments identified above
Give me three concise suggestions on how to make my socks line GreenStride outstanding and unique in a competitive market
Generate a product description (max 150 words) for GreenStride socks line using unique differentiator you listed above. It should be attention-grabbing and effective, as well as SEO optimized. List also the SEO keywords you used to finish.
What could be the fair price of my socks line
I want to generate an Instagram post to announce the launch of GreenStride socks.
Write a post (max 150 words) including the unique features and differentiators mentioned above, as well as relevant hashtags.

Liked the insights? Want to dig in deeper? Beyond the four use cases we’ve spotlighted in this issue, the book Practical Generative AI with ChatGPT, by Valentina Alto, introduces generative AI and its applications, focusing on OpenAI’s ChatGPT. It covers prompt engineering, daily productivity use cases, domain-specific applications for developers, marketers, and researchers, and the creation of custom GPTs using the GPT Store, enabling specialized assistants without coding, powered by personalized instructions and tools. BUY NOW

📈 LATEST DEVELOPMENT

Let’s get right into it.

Google DeepMind Introduces Gemini 2.5 with Native Audio Capabilities
Google DeepMind has launched Gemini 2.5, now capable of processing real-time audio and video. The model can interpret screen-shared content, respond to tone and background noise, and supports over 24 languages, making it more contextually aware and interactive than ever before.

Amazon to Test Humanoid Robots for Package Deliveries
The Information has reported that Amazon is preparing pilot tests of Agility Robotics' bipedal humanoid robot, Digit, for use in logistics and package handling. Designed to work safely in spaces built for humans, Digit is expected to automate repetitive warehouse tasks and even assist in last-mile delivery operations.

OpenAI Launches Coordinated Vulnerability Disclosure Framework
OpenAI has introduced an “Outbound Coordinated Vulnerability Disclosure” policy to responsibly report security issues it uncovers in external systems. This move aims to bolster security standards and transparency across the tech ecosystem.

DeepSeek’s New AI Sparks Free Speech Concerns
Chinese AI developer DeepSeek has triggered global criticism for its model’s extreme content filtering.
Users attempting to query politically sensitive topics, like Tiananmen Square or Taiwanese independence, are met with complete denials, spotlighting a stark divide in global AI moderation norms.

Nvidia Blackwell Chips Dominate New MLPerf Benchmarks
Nvidia’s Blackwell GPUs dominated the latest MLPerf training benchmarks, delivering double the performance of the previous H100 chips. These results highlight Blackwell’s efficiency in training large AI models with fewer GPUs, reduced energy use, and lower costs, solidifying Nvidia’s leadership in AI hardware and accelerating industry-wide adoption of its new architecture.

Kubernetes for Generative AI Solutions
40% Off on eBook + 20% Off on Paperback for the next 48 hours

LLM Expert Insights, Packt
17 Oct 2025
8 min read

OpenAI to allow erotica in ChatGPT

SamA’s ambitions to scale OpenAI know no bounds

AI_Distilled #118: What’s New in AI This Week

Welcome to AI Distilled, where we brew down the week’s AI news into a nutty blend. This week’s cup is overflowing – from OpenAI’s big spending (and, ahem, spicy new features) to other tech giants’ AI moves. Enjoy the sip!

LLM Expert Insights, Packt

EXPERT INSIGHTS

Top 5 Frameworks for Building AI Agents (2025)
AI agents are no longer sci-fi—they’re the witty coworkers of the future, ready to browse the web, crunch data, and even plan tasks autonomously. But behind every great AI agent is a great framework. Here’s our take on the top five frameworks for building AI agents, ranked on ease of use, popularity, community love, industry adoption, flexibility, and yes, cost. Buckle up for a quick tour of our top five picks this year.

LangChain – The Versatile Orchestrator
Why it’s #1: LangChain is the OG of agent frameworks and has essentially become the Swiss Army knife for LLM-powered applications. It’s an open-source toolkit that makes it easy to connect large language models to tools, data, and prompts (Hyperstack). With extensive integrations and modular abstractions, LangChain simplifies complex AI workflows so developers can focus on creativity over plumbing (Skim AI). No wonder it’s wildly popular – an industry guide notes LangChain’s “massive community (80K+ GitHub stars)…and proven enterprise adoption” (ampcome) as key to its gold-standard status. It’s flexible enough for everything from chatbots to autonomous task agents.
Ease of use: High – thanks to great docs and a huge community.
Learning curve: Mild, especially with so many examples out there.
As co-founder Harrison Chase puts it, agents are like digital labor that can use tools and act autonomously – and LangChain gives your AI labor force the training it needs to excel.

LangGraph – Advanced Multi-Agent Workflows
Why it’s #2: If LangChain is the toolkit, LangGraph is the control room.
Built as an extension of LangChain, LangGraph introduces a graph-based approach to orchestrate multiple agents with stateful memory. In simpler terms, it lets you design complex workflows as nodes and edges – perfect for scenarios where several AI agents must collaborate or follow conditional branches. This precision and control make LangGraph ideal for intricate decision-making systems or simulations that go beyond linear chats.

Flexibility: Very high – you can choreograph agents like a director managing an ensemble cast.
Popularity: Growing fast (it's LangChain's brainy younger sibling).
Learning curve: Steeper – you'll need to think in graphs, which might tie your brain in knots at first.

But for those needing detailed orchestration and debugging of multi-agent setups, LangGraph elevates LangChain to new heights. It's like going from driving a car to flying a plane – more power, but it requires more skill.

CrewAI – The Team Player

Why it's #3: CrewAI is the up-and-coming startup darling of agent frameworks, focused on making multi-agent systems as easy as forming a superhero team. It mimics human team dynamics, letting you spin up a crew of agents where each has a role (researcher, planner, coder, etc.) and they collaborate to get the job done (IBM). The API is clean and beginner-friendly, so you can get a multi-agent prototype running faster than assembling an IKEA chair. One guide describes CrewAI as an innovative agentic framework that empowers the creation of collaborative, autonomous AI agents working together to achieve complex goals (Medium).

Ease of use: Excellent – minimal setup, sensible defaults.
Popularity: Rapidly growing; it's independent of LangChain, built from scratch, and gaining fans for its simplicity (GitHub).

CrewAI's secret sauce is quick integration of tools and a focus on real-world workflows (think AI agents acting like a coordinated Slack team). It does sacrifice some flexibility for simplicity – this opinionated design means advanced users might hit limits in customization.
But for many, having your personal AI Avengers working in harmony is well worth it.

Microsoft Semantic Kernel – The Enterprise Whisperer

Why it's #4: From Microsoft's R&D labs comes Semantic Kernel (SK), the framework that bridges AI with the enterprise world. SK integrates LLM-based skills into traditional software, making it a favorite for companies that want AI smarts without rebuilding their stack. It's designed for .NET and Python, meaning you can slot it into your existing apps with ease. Think of SK as the middleware that helps AI agents talk to business systems (databases, CRMs, Office 365, you name it). Its strengths include memory retention and context management (great for virtual assistants that need to remember conversations) and robust security and compliance features for corporate use (Analytics Vidhya).

Popularity: Solid in enterprise circles (less splashy on GitHub stars, but backed by Microsoft's heft).
Ease of use: Moderate – if you're a .NET developer, you'll feel at home; others may need to make adjustments.
Flexibility: Moderate – not as many out-of-the-box agents as LangChain, but you can combine it with custom code easily.

In short, Semantic Kernel is the reliable, security-conscious framework you bring home to meet the CIO.

Microsoft AutoGen – The Automation Maestro

Why it's #5: AutoGen is like the orchestral conductor of AI agents, straight from Microsoft Research. It enables the creation of multiple specialized agents that chat and cooperate to solve tasks – essentially turning complex problems into a team conversation. AutoGen shines in scenarios like code generation, cloud operations, or any heavy-duty project where you'd want a swarm of AI agents, each doing what it's best at. It's open-source and was completely redesigned in v0.4 to boost robustness and scalability, incorporating feedback from early users (Microsoft). Microsoft describes AutoGen as an open-source framework for building AI agents… easy-to-use and flexible… accelerating development of agentic AI.
Ease of use: Medium – simpler than building multi-agent systems from scratch, but you'll still invest time to configure roles and communications.
Flexibility: High – it's event-driven and asynchronous under the hood, allowing complex workflows and even human-in-the-loop oversight.

The catch is a steeper learning curve and more involved setup compared to lightweight frameworks like CrewAI. But if you need an enterprise-grade, large-scale automation toolkit, AutoGen is a powerhouse ready to conduct your AI orchestra. AutoGen comes with neat features like AutoGen Studio (a no-code interface) and strong logging/error handling for production-grade deployments.

Harrison Chase is sharing a deeper dive on LangChain and agent frameworks. Join him at Packt's flagship conference, GenAI Nexus 2025, happening Nov 20-21 (virtual).

KNOW MORE ABOUT HARRISON CHASE'S SESSION

Master AI in 16 hours & Become Irreplaceable before 2025 ends! 🧠 Live sessions – Saturday and Sunday 🕜 10 AM EST to 7 PM EST. SAVE YOUR SPOT NOW

📈LATEST DEVELOPMENT

ChatGPT Gets Spicy, OpenAI makes bold moves

OpenAI is loosening ChatGPT's tie and letting it have some fun. An upcoming update will allow verified adults to engage in erotic role-play conversations with ChatGPT. Looks like ChatGPT will soon flirt and sext within safety limits. But mental health experts, professionals, and parents have called out this move, citing its potential psychological impact on individuals and the safety of children. OpenAI CEO Sam Altman made this announcement in his recent X post.

To counter these concerns, OpenAI has formed a well-being council. Eight experts have joined OpenAI's Expert Council, who will advise on healthy AI interactions, teen safety, and guardrails for ChatGPT/Sora—building on parental-controls work with ongoing check-ins.

Salesforce and OpenAI just pulled a double shot of synergy—bringing ChatGPT into Agentforce 360 and Slack. Check out this announcement.
In another development, OpenAI and Sur Energy signed an LOI for a clean-energy Stargate data center in Argentina after talks with President Milei, alongside OpenAI for Countries plans to modernize government workflows. Learn more about this collaboration here.

Meta harvests talent while Apple brews it

Meta has been raiding Apple's engineering pantry for quite a while. In a new poaching move, Ke Yang, who has been driving Apple's AI-driven search project, has stepped down from his position as head of the team called Answers, Knowledge and Information, or AKI, reports Bloomberg.

Microsoft's Midjourney rival

Microsoft unveiled MAI-Image-1, its first homegrown text-to-image model. It's already posting impressive benchmark scores, aiming to break our Midjourney addiction. Microsoft's AI strategy is clearly moving beyond just OpenAI partnerships as it hustles to build its own creative AI arsenal. Go check it out.

Google's AI face-lift

Google shipped a bundle of new AI features. Notably, Google Meet now offers AI-powered virtual makeup that tracks your face in real time – finally catching up to Zoom and Teams with filters that stay put when you move. Meanwhile, Google's also injecting its image-gen tech ("Nano Banana") into Search and rolling out smarter Gmail scheduling. AI glam and productivity, all in one go. Learn more about Google's touch-up here.

NVIDIA's mini supercomputer

NVIDIA just rolled out a pint-sized powerhouse. Dubbed DGX Spark, this tiny AI supercomputer delivers 1 petaflop of performance in a lunch-box form factor. CEO Jensen Huang hand-delivered one to OpenAI's Greg Brockman, because nothing says friendship like a supercomputer on your doorstep. It's big compute in a small package – and everyone in AI wants one. Here is NVIDIA's official announcement.

Built something cool? Tell us. Whether it's a scrappy prototype or a production-grade agent, we want to hear how you're putting generative AI to work.
Drop us your story at nimishad@packtpub.com or reply to this email, and you could get featured in an upcoming issue of AI_Distilled. 📢 If your company is interested in reaching an audience of developers, technical professionals, and decision makers, you may want to advertise with us. If you have any comments or feedback, just reply back to this email. Thanks for reading and have a great day! That's a wrap for this week's edition of AI_Distilled 🧠⚙️ We would love to know what you thought—your feedback helps us keep leveling up. 👉 Drop your rating here Thanks for reading, The AI_Distilled Team (Curated by humans. Powered by curiosity.)

AI on Autopilot

LLM Expert Insights, Packt
10 Oct 2025
4 min read
Google's Gemini 2.5, OpenAI's AgentKit, and the Rise of Self-Driving Software AI_Distilled #117: What's New in AI This Week

Welcome to this week's AI Distilled. The machines are no longer just thinking, they're doing. From Google's new Gemini 2.5 that clicks, scrolls, and speaks for itself, to OpenAI's AgentKit empowering developers to build intelligent digital workers, the future of automation is taking shape fast. Buckle up as the AI race continues in top gear.

LLM Expert Insights, Packt

📈LATEST DEVELOPMENT

Google's AI That Clicks, Scrolls, and Speaks for Itself

Google just dropped Gemini 2.5 Computer Use, an AI that doesn't just answer, it acts. It can now operate apps and websites like a digital assistant on caffeine, outperforming rivals on UI-control benchmarks while keeping latency low. Safety guardrails ensure it won't delete your life with one click. Meanwhile, Google Research is whispering to the future with Speech-to-Retrieval, which skips text entirely to fetch info straight from your voice. Goodbye typing, hello (talking) Google!

OpenAI's AgentKit & GPT-5 Pro: Building Agents and Locking Ecosystems

OpenAI's latest drop, GPT-5 Pro, got smarter and strategic. The powerhouse model, now available via API, flexes advanced reasoning skills tailor-made for building intelligent AI agents. And the latest entrant, AgentKit, is an ultimate developer toolkit featuring Agent Builder for drag-and-drop workflows, ChatKit for sleek chat UIs, and enhanced Evals to keep those agents in line. The catch? OpenAI's ecosystem is becoming easier to build inside, harder to leave.

Grok Imagine 0.9: xAI Gets Cinematic

Elon Musk's xAI just dropped Grok Imagine 0.9, now crafting silky-smooth, hyperreal AI videos with spot-on motion and sound. Hollywood, meet your new algorithmic auteur.

ElevenLabs Lets Voice AI Speak Freely

ElevenLabs just open-sourced its voice agent UI toolkit, giving developers plug-and-play vocal cords for their apps.
Now, anyone can make their AI talk the talk, literally.

Anysphere Codes Its Way to a $30B Orbit

Anysphere, maker of the dev-favorite Cursor, is reportedly eyeing a dazzling $30 billion valuation. Meanwhile, as the investors circle, Cursor just leveled up with Plan Mode, an AI project manager that maps massive codebases like a pro. Developers get strategy, structure, and swagger; Silicon Valley gets another hot ticket.

Join Snyk on October 22, 2025 at DevSecCon25 – Securing the Shift to AI Native

Join Snyk October 22, 2025 for this one-day event to hear from leading AI and security experts from Qodo, Ragie.ai, Casco, Arcade.dev, and more! The agenda includes inspiring Mainstage keynotes, a hands-on AI Demos track on building secure AI, Snyk's very FIRST AI Developer Challenge, and more! Save your spot now

EXPERT INSIGHTS

The Five Modes Every Business Leader Should Know

The world of Artificial Intelligence is evolving at a pace that often leaves decision-makers overwhelmed. Every week, new tools, frameworks, and buzzwords emerge, making it hard to separate what's truly valuable from what's merely hype. Today's leaders often keep AI at arm's length, uncertain how to handle its invisible power. To move beyond hesitation, we must stop viewing AI as a collection of tools and start understanding it as a set of skills. This shift—from thinking in terms of technology to thinking in terms of capabilities—is what allows organizations to unlock AI's real potential.

READ FULL ARTICLE

Is LeCun quitting Meta?

LLM Expert Insights, Packt
03 Oct 2025
4 min read
Opera's AI browser, OpenAI's social app, Meta's AI lab tensions. AI_Distilled #115: What's New in AI This Week

Welcome to this week's newsletter! From groundbreaking AI tools and new social platforms to TikTok's uncertain U.S. journey and discoveries stretching to the edges of our solar system, we've gathered the most impactful stories you need to know. Dive in to catch up on the innovations, business moves, and cosmic milestones shaping our world today.

LLM Expert Insights, Packt

📈LATEST DEVELOPMENT

Opera's AI Browser Neon

Opera launched a $19.99/month AI-focused browser, Neon, for heavy AI users. It offers automated task workflows ("Cards"), organized AI chat workspaces ("Tasks"), and even code generation—entering a crowded field as Chrome, Edge, and others add similar AI features. You can join the waitlist here.

OpenAI's New Social App

OpenAI unveiled Sora, an invite-only social media app that generates a TikTok-like video feed using AI. The Sora app is powered by OpenAI's recently launched Sora 2 video generation model. Sora's standout "Cameo" feature lets users insert video clips of real people (like themselves) as characters into the AI-generated content. Take a look at the official announcement here.

Advance your technical career with actionable, practical solutions | AWS re:Invent 2025 Las Vegas

Transform your skills at AWS re:Invent 2025. Master new AWS services, join immersive workshops, and network with top cloud innovators at AWS re:Invent 2025. As a re:Invent attendee, you'll receive a 50% discount code towards any AWS Certification exam. Our 2025 event catalog is now available! EXPLORE THE EVENT

Tensions at Meta's AI Lab?

The Information reported that the father of deep learning and a longtime Meta AI executive considered quitting as leadership imposed stricter controls on publishing research, angering staff. The clash underscores internal tensions in Meta's AI group as it adjusts to new management and priorities. Read The Information's report here.

TikTok's U.S. Lifeline

A new U.S. executive order has paved the way for TikTok to continue operating domestically after years of uncertainty. But the proposed deal is complex and already facing political pushback from Washington lawmakers. Here is the executive order.

That's all for this week's roundup. Stay curious, stay informed, and join us again next week for more news.

EXPERT INSIGHTS

Why is DeepSeek different from popular SOTA LLMs?

The landscape of large language models (LLMs) is shaped by both technological innovation and strategic positioning. While dominant players such as OpenAI, Google, and Anthropic continue to push the boundaries of proprietary models, DeepSeek has emerged as a formidable open-source contender. In this article, the authors of our upcoming book DeepSeek Essentials—Andy Peng, Alex Strick van Linschoten, and Duarte Carmo—reflect on how DeepSeek differs from proprietary systems like GPT-4.5, Claude 4, and Gemini 2.5 Pro.

One of the key reasons why DeepSeek stood out was its divergent philosophy from other model creators. Proprietary models are typically guarded through closed APIs, restrictive licenses, and opaque training methods. They are engineered for safety and monetization...

READ FULL ARTICLE

Built something cool? Tell us. Whether it's a scrappy prototype or a production-grade agent, we want to hear how you're putting generative AI to work.

Tech Week in Brief: Glasses, GPUs & Giant Leaps

LLM Expert Insights, Packt
26 Sep 2025
8 min read
Meta's new specs, OpenAI's big spend, and other AI adventures. AI_Distilled #115: What's New in AI This Week

Hello there! Welcome to your weekly roundup of all things newsworthy in tech. Grab your coffee, settle in, and let's dive into the highlights from the past week.

LLM Expert Insights, Packt

📈LATEST DEVELOPMENT

Google puts Chrome on an AI caffeine rush

Google's Chrome browser got 10 new AI-powered features led by Google's Gemini model. It can now summarize webpages, explain multiple tabs, and even autonomously book appointments or grocery orders (no kidding). The browser also gains a freakishly good memory—just ask where you saw that walnut desk last week, and Chrome will pull up the page. Efficiency level: Max.

ChatGPT falls for a Gmail trick, OpenAI marks it as resolved

Even AI isn't safe from sneaky hacks. Radware revealed a crafty prompt injection attack that tricked OpenAI's Deep Research agent into exfiltrating Gmail data. In plain speak: a bad email with hidden instructions made the AI steal email secrets. OpenAI patched the hole with more safety checks, but it's a reminder that letting an AI rummage through your inbox can be a minefield of surprises (ouch). Although the issue was reported in June, OpenAI has reportedly marked it as resolved.

Advance your technical career with actionable, practical solutions | AWS re:Invent 2025 Las Vegas

Transform your skills at AWS re:Invent 2025. Master new AWS services, join immersive workshops, and network with top cloud innovators at AWS re:Invent 2025. As a re:Invent attendee, you'll receive a 50% discount code towards any AWS Certification exam. Our 2025 event catalog is now available! EXPLORE THE EVENT

Luma AI's Ray3: Lights, Camera, AI!

Startup Luma AI unveiled Ray3, an AI toolkit that brings Hollywood-level wizardry to your phone. Integrated with Adobe, Ray3 can generate HDR video (10-, 12-, 16-bit color) and even turn boring SDR footage into vivid HDR.
Its built-in reasoning engine lets creators sketch out camera movements or scene edits, and the AI dutifully follows multi-step instructions. It's like having a tiny James Cameron in your pocket, minus the ego.

Meta's smart glasses

At Meta's Connect 2025 event, Zuck & Co. pivoted from metaverse musings to real hardware. While the glasses encountered problems during the live demo due to a race condition bug, the future is not as bleak. The Ray-Ban Meta smart glasses are priced at $799, rocking a microLED display that projects messages, maps, and more right into your field of view. You control these AR specs with a neural wristband (bye-bye, clunky controllers). It's the closest thing to wearing Tony Stark's tech.

OpenAI's Manhattan project for AI

Sam Altman is going big. OpenAI is teaming up with Oracle, SoftBank, and Nvidia to build out an AI super-infrastructure that makes current data centers look like Lego blocks. They're planning five new U.S. data centers (drawing as much power as 7 nuclear reactors' worth of energy!) and exploring a bold new "GPU leasing" deal worth $100B with Nvidia. In short, OpenAI wants endless computing power on tap, betting that in the AI race, bigger is better (and necessary).

Oracle bets billions on cloud AI

Larry Ellison must be feeling lucky. Why? Well, Oracle is reportedly close to a $20 billion deal with Meta to host and train Meta's AI models. This comes right after Oracle's whopping $300B contract with OpenAI and a new partnership with Elon Musk's xAI. The strategy? Offer faster, cheaper cloud infrastructure to undercut Amazon and Microsoft. If this pays off, Oracle's cloud might go from underdog to top dog in the AI era. Bold move, Larry.

Musk's xAI drops a game-changer

Elon Musk's AI venture, xAI, just launched a model called Grok 4 Fast that claims GPT-5-level smarts at a fraction of the cost. We're talking near top-tier reasoning benchmarks with 98% lower token costs.
It achieves this by cutting out "thinking overhead" and streamlining how it chews through data. Translation: powerful AI answers, cheap enough to deploy en masse. It's Musk's way of saying "Competition, bring it on."

Brain implants: Neuralink's next step

As per a Bloomberg report, Elon's neuro-lab Neuralink is gearing up for its first human trials this October after getting the FDA's nod. The company's implantable chip can translate thoughts to text, initially aimed at helping paralyzed patients communicate. Long term, Musk envisions people using thoughts to control computers and even converse with AI—because typing is so 2020, right? It's equal parts exciting and sci-fi-level eerie.

Alibaba's model mega-mix

Not to be outdone, Alibaba unveiled its Qwen3 AI stack with a twist: Mixture-of-Experts (MoE) models at trillion-parameter scale. The system can tap into 512 expert models but activates just a handful per query for super efficiency. End result? Over 10× throughput improvement and support for ridiculously long context (think entire novels in one prompt). Two 80B models lead the charge—one tuned for chatty assistants, another for complex reasoning. In the AI model arms race, Alibaba just loudly entered the chat.

Microsoft's developer boost (and cool chips)

Redmond had a productive week too. Microsoft is hunting down pesky legacy code with new Copilot-powered agents that not only find problems in old .NET/Java code, but also auto-generate fixes and unit tests and containerize apps. Early trials showed dramatic wins – an Xbox team cut migration effort by 88% and Ford saw a 70% reduction in update time. On another front, Windows 11 now comes with built-in support for running AI models (ONNX Runtime) across CPUs, GPUs, and specialized NPUs from various vendors. And about "cooling chips from the inside out"? Microsoft researchers are exploring liquid cooling inside chips to solve overheating as AI silicon gets hotter (literally).
The future: faster chips that keep their chill.

START YOUR FREE TRIAL

EXPERT INSIGHTS

Introduction to chunking with GPT-4o

In generative AI workflows, the way data is prepared has a direct impact on model effectiveness. Rather than relying solely on rule-based chunking methods, this tutorial introduces an approach where GPT-4o itself is used to intelligently divide unstructured content into meaningful segments. This strategy supports a Retrieval-Augmented Generation (RAG) system and enables it to more effectively retrieve relevant context.

Why chunking matters in GenAI systems

Traditional chunking methods often split documents based on arbitrary rules, such as paragraph breaks or token counts, which may cut through semantically meaningful units. In contrast, intelligent chunking enables each piece of data to carry a coherent message. This is particularly important when chunks are embedded into a vector database like Pinecone for retrieval. If a query surfaces a partial or poorly segmented chunk, the generated response may lack clarity or precision.

Using GPT-4o for semantic chunking

GPT-4o is employed not just for generation but also as a semantic analyzer. The model receives the full unstructured text, such as a company memo or technical note, and is prompted to divide it into logically structured chunks, each roughly 50–100 words in length. This is achieved by setting up a system message instructing the model to act as a chunking assistant, followed by a user message containing the text to split. Consider this system message: "You are an assistant skilled at splitting long texts into meaningful, semantically coherent chunks of 50–100 words each. Split the following text into meaningful chunks..." Once the prompt is issued, GPT-4o returns a response with double newlines separating each chunk. The program parses the response by splitting on these newline markers. The result is a list of discrete, meaningful units that are ready to be embedded.
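Here is a minimal, self-contained sketch of that workflow. The GPT-4o call itself is mocked with a stand-in reply so the example runs offline; in a real pipeline you would send the same `messages` payload to the chat completions endpoint (for instance via the official openai Python client) and parse the reply identically. All function and variable names here are illustrative, not from any library.

```python
# Sketch of the GPT-4o semantic-chunking workflow described above.
# The model call is mocked so the parsing logic is runnable as-is.

SYSTEM_MESSAGE = (
    "You are an assistant skilled at splitting long texts into meaningful, "
    "semantically coherent chunks of 50-100 words each. "
    "Separate the chunks with blank lines."
)

def build_messages(text: str) -> list[dict]:
    """Assemble the chat payload: chunking instructions plus the raw text."""
    return [
        {"role": "system", "content": SYSTEM_MESSAGE},
        {"role": "user",
         "content": f"Split the following text into meaningful chunks:\n\n{text}"},
    ]

def parse_chunks(reply: str) -> list[str]:
    """GPT-4o separates chunks with double newlines; split and tidy them."""
    return [chunk.strip() for chunk in reply.split("\n\n") if chunk.strip()]

# Stand-in for a real GPT-4o response (two chunks, blank-line separated):
mock_reply = "The Q3 memo covers revenue.\n\nHeadcount grew by 12 people."
chunks = parse_chunks(mock_reply)
print(chunks)  # ['The Q3 memo covers revenue.', 'Headcount grew by 12 people.']
```

Each string in `chunks` is then a discrete unit ready to be embedded and upserted into the vector store.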
This workflow is especially useful for processing internal company data, like executive summaries or operational notes, where nuance matters. This method shines when the data includes complex thoughts, mixed formats, or narrative elements. For simpler documents like lists or spreadsheets, rule-based chunking might be more efficient. However, for nuanced tasks where meaning spans sentences or paragraphs, GPT-4o's semantic awareness offers a significant advantage.

By integrating GPT-4o into the chunking process, generative AI systems can store and retrieve content in a more meaningful way. Each chunk becomes a high-value data unit, tailored for precision recall within a RAG pipeline. This intelligent preprocessing step reinforces the larger GenAISys vision: building systems that retrieve not just data, but context-rich, purpose-aligned information.

Apple Just Raised the Bar for AI - Are You Ready?

LLM Expert Insights, Packt
20 Sep 2025
4 min read
Special Dispatch: The Tools & Trends Defining AI's Next Chapter. AI_Distilled #114: Only for You!

Thanks for sticking with us - here's a special weekend drop, just for you. This weekend, we bring you a curated blend of high-impact developments, practical tools, and hands-on learning. From industry-shaping updates in AI hardware and on-device intelligence, to a free chapter from our bestseller catalogue - there's something here for every AI enthusiast and practitioner.

LLM Expert Insights, Packt

📈DID YOU KNOW?

Google's Gemini Model Was Trained Using YouTube Transcripts

While not officially disclosed, multiple leaks suggest YouTube's massive subtitle/transcript corpus was a key source of conversational data for Google's Gemini family - enabling better multimodal grounding.

📈APPLE'S LATEST LAUNCH & ITS IMPACT ON THE AI WORLD

More On-Device AI Compute Becomes Standard

As Apple pushes powerful chips into thinner devices with AI-heavy features, competitors will be under pressure to match that hardware-software integration. Expect more OEMs putting beefy AI accelerators, optimized NPU/ML subsystems, or even dedicated AI cores into phones, earbuds, watches, etc.

Rise of Low-Latency, Privacy-Focused AI Features

Live translation, health monitoring, gesture or movement inference: these need low latency and privacy. Apple's move to local processing (or an edge + private compute hybrid) will push the industry to balance performance and user data protection more carefully. Read more

Your Exclusive Invite for the World's first 2-day AI Challenge (usually $895, but $0 today) - Live sessions - Saturday and Sunday - 10 AM EST to 7 PM EST. Register Now for Free

📈ACCESS YOUR FREE CHAPTER HERE

Ready to go hands-on? We're unlocking a free chapter from one of our most popular books - Generative AI on Google Cloud with LangChain.
Solve real-world business problems with hands-on examples of GenAI applications on Google Cloud. Learn repeatable design patterns for GenAI on Google Cloud with a focus on architecture and AI ethics. Build and implement GenAI agents and workflows, such as RAG and NL2SQL, using LangChain and Vertex AI.

📈ACCESS YOUR FREE CHAPTER HERE

Click here to unlock your free eBook of Mastering NLP from Foundations to LLMs. Learn how to build Python-driven solutions with a focus on NLP, LLMs, RAGs, and GPT. Master embedding techniques and machine learning principles for real-world applications. Understand the mathematical foundations of NLP and deep learning designs.
LLM Expert Insights, Packt
19 Sep 2025
4 min read
Anthropic settles copyright lawsuit, OpenAI and Google strike gold at ICPC

TikTok sale near completion, frontier models are scheming

AI_Distilled #113: What’s New in AI This Week

The past week has brought both reckoning and recognition for the AI industry. Anthropic’s billion-dollar copyright settlement is making publishers and creators wonder how far accountability will go, even as the company reaches a confidential deal with The New York Times. Meanwhile, the competitive spirit is alive and well: OpenAI and Google claimed gold-standard performances at the ICPC World Finals, cementing their models’ ability to tackle some of the toughest programming challenges on the planet. And as regulators circle, TikTok’s U.S. sale edges closer to completion.

LLM Expert Insights, Packt

📈 LATEST DEVELOPMENT

OpenAI strikes gold at ICPC, releases GPT-5-Codex, and works to train scheming out of frontier models

OpenAI achieved gold-medal-level performance at the 2025 ICPC World Finals in Baku, Azerbaijan, solving all 12 problems (ICPC Finals). GPT-5-Codex is optimized for real-world software engineering: quicker interactions and better performance on long tasks, refactoring, and complex code review. Codex now integrates seamlessly via CLI, IDE, cloud, and GitHub, promising more reliable, context-aware coding. Check it out here!

OpenAI, with Apollo Research, found frontier models showing “scheming” (pretending alignment while pursuing hidden agendas). To tackle this, the team has introduced deliberative alignment training, reducing covert actions ~30× in tests. You can read more about it here.

Google DeepMind’s AI breakthroughs for the future of AGI and fluid dynamics

Gemini 2.5 Deep Think also achieved gold-medal-level performance at the 2025 ICPC World Finals, solving 10 of 12 problems under competition conditions. Read more here!

In another development reported by Google, DeepMind has developed new AI methods using Physics-Informed Neural Networks (PINNs) to discover previously unknown unstable singularities in major fluid dynamics equations.
These findings advance mathematical understanding of “blow-ups” in systems like the Navier-Stokes and Euler equations. Read more here!

From MongoDB's Director of Engineering: Build AI-Powered Platforms
Last Few Tickets Left: 30% off for 72 Hours

EXPERT INSIGHTS

Mastering Text Data Augmentation Techniques for LLMs

Text data augmentation is a foundational technique for enhancing the robustness and generalization of large language models (LLMs). By artificially increasing the size and variability of training data, models can be exposed to more diverse linguistic patterns, reducing overfitting and improving their ability to handle unseen inputs. This tutorial walks through key augmentation techniques that are particularly effective when working with text-based LLMs.

Synonym replacement: This classic approach replaces words with their synonyms to produce semantically equivalent sentences. Using lexical databases like WordNet, each word in a sentence is checked for synonyms, one of which is then substituted at random. This introduces variation without drastically altering the meaning, while preserving syntactic and semantic consistency - particularly useful for simpler augmentation tasks. The code below replaces words in a sentence with synonyms from WordNet to create varied yet semantically consistent text:

    import random
    from nltk.corpus import wordnet  # requires a one-time nltk.download('wordnet')

    def synonym_replacement(text):
        words = text.split()
        new_words = []
        for word in words:
            synonyms = wordnet.synsets(word)
            if synonyms:
                # Collect all lemma names across synsets, excluding the word itself
                synonym_words = [lemma.name() for s in synonyms
                                 for lemma in s.lemmas() if lemma.name() != word]
                if synonym_words:
                    new_words.append(random.choice(synonym_words))
                else:
                    new_words.append(word)
            else:
                new_words.append(word)
        return ' '.join(new_words)

    print(synonym_replacement("The quick brown fox jumps over the lazy dog."))

Read the full article on Substack →

LLM Expert Insights, Packt
12 Sep 2025
4 min read
AI Agents. $300B Deals. Nvidia vs Google. All in This Week.

TPU vs GPU, Rubin CPX vs everything. Here’s what it means.

AI_Distilled #109: What’s New in AI This Week

Welcome to another edition of AI Distilled! This week, AI moves deeper into infrastructure, hardware, and hands-on learning. From Nvidia's next-gen chips to OpenAI's $300B Oracle deal and practical workshops for builders - this is your moment to upskill and stay ahead.

LLM Expert Insights, Packt

📈 UPCOMING PACKT VIRTUAL EVENTS

PAID EVENTS

Build AI Agents Over The Weekend (Cohort 2)
When: This Weekend - Sat-Sun, Sept 13-14
Participate in our hands-on workshop and gain practical skills to build AI agents efficiently.
BOOK NOW AT 30% OFF

Generative AI for Finance - Certificate Course
When: Oct 4, 5, 18, 19
Practical GenAI Skills for the World’s Most Demanding Financial Applications
BOOK NOW AT 40% OFF

📈 PACKT'S FREE LEARNING RESOURCES

MCP By Hand (Mini Course)
When: Tues, Sept 16, 2025
Learn how MCP enables secure and efficient interaction between LLMs and external systems.
BOOK NOW

GenAI for Finance: Beyond Agent-Washing - From Idea to Infrastructure
When: Sat, Sept 20, 2025
Practical GenAI Skills for the World’s Most Demanding Financial Applications
BOOK NOW

A2A by Hand (Mini Course)
When: Tues, Sept 23, 2025
Learn how A2A enables secure and efficient interaction between agents.
BOOK NOW

📈 LATEST DEVELOPMENT

Chip Wars: Google vs. Nvidia

Google Intensifies AI Chip Race with Nvidia
Google is escalating its battle with Nvidia in the AI hardware arena by deploying its Tensor Processing Units (TPUs) in more data centers run by smaller cloud providers - a market traditionally dominated by Nvidia’s GPUs. TPUs are optimized for inference workloads and lower latency, offering Google potential cost advantages. While Nvidia remains dominant, the race is intensifying, with performance, efficiency, and scale at the heart of the fight. (Forbes)

Nvidia Reveals Rubin CPX AI Chip
Complementing this rivalry, Nvidia introduced its Rubin CPX AI chip, designed for complex workloads like video and software generation as well as handling contexts of up to a million tokens. With a $100 million launch investment expected to yield billions in AI revenue, Nvidia is reinforcing its leadership in specialized compute hardware. (Reuters)

OpenAI in Focus

OpenAI Plans Custom Chips and Job Platform
OpenAI confirmed plans to launch its first custom AI chip by 2026 in partnership with Broadcom, with the aim of reducing its reliance on Nvidia. Simultaneously, the company is building a job-matching platform powered by its models, potentially rivaling LinkedIn. Together, these moves signal diversification into hardware and enterprise services. (Reuters, CNBC)

OpenAI Signs $300 Billion Oracle Deal
To support its compute-hungry ambitions, OpenAI struck a $300 billion deal with Oracle for cloud and data-center capacity - one of the largest tech contracts in history. The five-year agreement underscores the immense scale of infrastructure required to train next-generation models and cements Oracle’s position as a core partner. (NYT)