#1: CYBER_AI Today

We're back—and completely different.

Welcome to CYBER_AI, a new newsletter from the Packt team focusing on exactly what it says on the tin: cybersecurity in the age of AI.

The world of cybersecurity is changing fast, and artificial intelligence is leading the charge. Every day, new AI-powered tools are helping defenders spot threats faster, protect data smarter, and stay one step ahead of attackers. But the same technology that helps protect us can also be used by hackers to launch more advanced, more convincing attacks.

That's why understanding the mix of AI and security is more important than ever. From detecting phishing emails in seconds to predicting weaknesses before they're exploited, AI is reshaping what it means to stay safe online. At the same time, trusted ideas like Zero Trust, the principle that no one and nothing should be trusted by default, are becoming even more critical. In a world where AI can fake voices, write code, or slip past simple security checks, Zero Trust provides a steady foundation: always verify, always question, always protect.

Join us on Substack to find our bonus articles!

In this newsletter, we'll explore how AI is transforming cybersecurity: what's new, what's next, and what you can do to stay secure in the age of intelligent threats.

Welcome aboard! The future of cyber defence starts here.

Cheers!
Austin Miller
Editor-in-Chief

LLMs and Agentic AI in Production - Nexus 2025
Build and fine-tune your own LLMs and agents and deploy them in production, with workshops on MCP, A2A, Context Engineering, and many more. Book now at 50% off with the code CYBER50.

News Wipe

"Beware of double agents: How AI can fortify — or fracture — your cybersecurity": This article explores how autonomous "agentic" AI systems can both strengthen and undermine cybersecurity.
Microsoft emphasises that organisations must manage AI identities using Zero Trust principles: continuous verification, least privilege, and micro-segmentation. The piece highlights practical ways to secure AI agents as part of enterprise defence.

"Zscaler Acquires AI Security Company SPLX": Zscaler announced its acquisition of SPLX, an AI security firm, to integrate AI asset discovery, red-teaming, and governance into the Zscaler Zero Trust Exchange platform. The move marks a concrete step toward extending Zero Trust security models to cover AI systems and workflows.

"Trend Micro Launches End-to-End Protection for Agentic AI Systems": Trend Micro, in collaboration with NVIDIA, unveiled a new security framework that combines Zero Trust enforcement with AI-native threat detection for what it calls "AI factories." The launch represents a practical evolution of Zero Trust from human and device access control to full AI system protection.

"How low code can give agentic AI guide rails for the enterprise": This feature examines how enterprises are using low-code and no-code platforms to deploy AI securely. It discusses how organisations can establish governance and Zero Trust-inspired guardrails for AI agents, ensuring safe interaction with data and systems.

"AI Security: Defining and Defending Cybersecurity's Next Frontier": SentinelOne provides a deep dive into how organisations are adapting cybersecurity frameworks to AI-driven environments. The article focuses on embedding threat modelling, securing AI workflows, and applying Zero Trust strategies to protect AI infrastructures from both misuse and attack.

Culture, You, and AI

Faking Receipts with AI: Over the past few decades, it's become easier and easier to create fake receipts. Decades ago, it required special paper and printers; I remember a company in the UK advertising its services to people trying to cover up their affairs.
Then receipts became computerized, and faking them required some artistic skill to make the page look realistic. Now, AI can do it all.

Rigged Poker Games: The Department of Justice has indicted thirty-one people over the high-tech rigging of high-stakes poker games. In a typical legitimate poker game, a dealer uses a shuffling machine to shuffle the cards randomly before dealing them to all the players in a particular order. As set forth in the indictment, the rigged games used altered shuffling machines containing hidden technology that allowed the machines to read every card in the deck. Because the cards were always dealt to the players in a particular order, the machines could determine which player would have the winning hand. This information was transmitted to an off-site member of the conspiracy, who relayed it via cellphone to a co-conspirator playing at the table, referred to as the "Quarterback" or "Driver." The Quarterback then secretly signaled this information to the other co-conspirators at the table, usually by prearranged signals such as touching certain chips or other items on the table.

Scientists Need a Positive Vision for AI: For many in the research community, it's gotten harder to be optimistic about the impacts of artificial intelligence. Authoritarianism is rising around the world, AI-generated "slop" is overwhelming legitimate media, and AI-generated deepfakes are spreading misinformation and parroting extremist messages. AI is making warfare more precise and deadly amid intransigent conflicts. AI companies are exploiting people in the Global South who work as data labelers, and profiting from content creators worldwide by using their work without license or compensation.
The industry is also straining an already-roiling climate with its enormous energy demands.

AI Summarization Optimization: These days, the most important meeting attendee isn't a person: it's the AI notetaker. This system assigns action items and determines the importance of what is said. If it becomes necessary to revisit the facts of a meeting, its summary is treated as impartial evidence. But clever attendees can manipulate this record by speaking more to what the underlying AI weights for summarization and importance than to their colleagues. As a result, you can expect some attendees to use language more likely to be captured in summaries, time their interventions strategically, repeat key points, and employ formulaic phrasing that AI models are more likely to pick up on. Welcome to the world of AI summarization optimization (AISO).

From the cutting edge

"Artificial intelligence and machine learning in cybersecurity: a deep dive into state-of-the-art techniques and future paradigms" (Knowledge and Information Systems, Vol. 67): This open-access review surveys the integration of AI/ML into cybersecurity, covering intrusion detection, malware classification, behavioural analysis, and threat intelligence. It highlights the shift from traditional defence mechanisms to AI-driven ones, discusses categories of techniques, and outlines future directions and research gaps (such as adversarial robustness and real-time deployment).

"Generative AI revolution in cybersecurity: a comprehensive review of threat intelligence and operations" (Artificial Intelligence Review, Vol. 58): This review focuses specifically on how generative AI (GenAI) is both a tool and a threat in cybersecurity operations. It explores how GenAI is being used to generate threat intelligence and automate response operations, as well as how adversaries may use it to automate attacks.
The paper provides a detailed taxonomy of use cases, implications for security operations centres (SOCs), and open issues (e.g., model abuse and data integrity).

"Strategic Management of AI-Powered Cybersecurity Systems: A Systematic Review" by A. Wairagade (Journal of Engineering Research and Reports, Vol. 27(8)): This systematic review synthesises 87 peer-reviewed papers (2015–2024) on how organisations strategically manage AI-based cybersecurity systems. It identifies key themes, including AI algorithms for threat detection, governance and risk management, organisational integration issues, ethical and legal concerns, and scalability. The paper argues for proactive strategies (human-AI collaboration, governance frameworks, continual learning) to get the maximum benefit from AI in cyber defence.

"Organizational Adaptation to Generative AI in Cybersecurity: A Systematic Review" by C. Nott: This qualitative study examines how cybersecurity organisations are adapting to the integration of generative AI. Based on an analysis of 25 studies (2022–2025), it identifies three adaptation patterns: (1) LLMs integrated into security applications, (2) GenAI frameworks for automated detection and response, and (3) AI/ML-based threat-hunting workflows. The study highlights factors influencing readiness (maturity, regulation, workforce) and persistent challenges (data quality, bias, adversarial threats).
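To see why the AISO tactics described under "Culture, You, and AI" work, consider a toy extractive summarizer that scores each sentence by how many "action" keywords it contains and keeps only the top-scoring ones. This is a minimal sketch, not how any real notetaking product works; the keyword list and the meeting lines are entirely illustrative. Still, it shows the mechanism: a speaker who stacks the phrases the scorer rewards gets into the summary, regardless of whether what they said mattered most.

```python
import re

# Illustrative keyword list; a real summarizer would use a learned model,
# but the incentive to game it is the same.
ACTION_KEYWORDS = {"action", "decision", "deadline", "owner", "next", "key", "takeaway"}

def score_sentence(sentence: str) -> int:
    """Count how many scoring keywords a sentence contains."""
    words = re.findall(r"[a-z]+", sentence.lower())
    return sum(1 for w in words if w in ACTION_KEYWORDS)

def summarize(transcript: list[str], top_n: int = 2) -> list[str]:
    """Keep the top_n highest-scoring sentences (a crude extractive summary)."""
    ranked = sorted(transcript, key=score_sentence, reverse=True)
    return ranked[:top_n]

meeting = [
    "I think the migration mostly went fine last week.",
    "Key takeaway: the next action is mine, deadline Friday.",  # AISO-style phrasing
    "We chatted about lunch options for the offsite.",
]

print(summarize(meeting, top_n=1))
```

The formulaic second line wins the summary slot with five keyword hits against zero for the others, which is exactly the behaviour an AISO-savvy attendee exploits.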