Keeping the Private Private

#9: Privacy Concerns in the Age of AI

Welcome to CYBER_AI, a new newsletter from the Packt team focusing on—well, exactly what it says on the tin: cybersecurity in the age of AI. Here we take another step into the future, into a world where cybersecurity brims with the confidence that AI can bring to our practice. Of course, this goal—like all goals—requires us to lay the foundations properly and understand where we stand on them. That means, for everyone struggling to make these ambitious leaps forward, establishing the "101" topics and making sure they are widely understood. For a look into the future, here's our plan:

1. What "Cybersecurity AI" Actually Means
2. Machine Learning 101 for Security Professionals
3. Threat Detection with AI: From Rules to Models
4. Adversarial Machine Learning Basics
5. LLMs in Cybersecurity: Capabilities and Limitations
6. Securing AI Models and Pipelines
7. AI-Enhanced Offensive Techniques
8. Privacy and Data Protection in AI Systems
9. AI Governance, Ethics, and Risk Management
10. Building a Security-Aware AI Workflow

Sound good? Head over to Substack and sign up there! Join us on Substack to find our bonus articles!

In this newsletter, we'll explore how AI is transforming cybersecurity—what's new, what's next, and what you can do to stay secure in the age of intelligent threats. Welcome aboard! The future of cyber defence starts here.

Cheers!
Austin Miller
Editor-in-Chief

Who is Cyber_AI?

To keep providing high-quality content that meets your needs, we thought we would reach out and find out a little about our audience.
Take the survey below and get your copy of AI and Cybersecurity: What Everyone Should Know, a short fact file to help non-specialists get up to speed.

Get your copy with this short survey!

Head over to Substack to check out this week's article!

News Wipe

AI as Tradecraft: How Threat Actors Operationalize AI (Microsoft Security Blog): This research article examines how cybercriminals are integrating generative AI into the entire attack lifecycle, from reconnaissance and vulnerability discovery to phishing and malware development. Researchers found that attackers increasingly use AI models to automate social engineering, generate exploit code, and refine attacks through iterative learning. The report frames AI as a "force multiplier" for adversaries because it reduces skill barriers and accelerates operational tempo. It also discusses defensive countermeasures, including AI-driven anomaly detection and security copilots.

AI-Enabled Cybercrime Is Costing Americans Billions (Vox): This analysis explores the rapid economic impact of AI-enhanced cybercrime. Experts estimate AI-driven scams caused $16.6 billion in losses in 2024, with generative AI enabling more convincing phishing, deepfake fraud, and identity manipulation campaigns. The article highlights emerging tactics such as AI-generated identities used by foreign operatives to infiltrate companies, voice-cloned financial scams, and automated fraud campaigns. Security researchers warn that AI is not creating entirely new crimes but dramatically scaling existing social-engineering attacks.

AI Enabling New Cyber Risks, National Defense Report Says (National Defense Magazine): A newly released cybersecurity report warns that agentic AI systems—AI capable of performing multi-step tasks autonomously—could significantly expand the capabilities of state-sponsored cyber operations.
Researchers argue these systems can automate vulnerability discovery, adapt attacks after failed attempts, and reduce the operational cost of large-scale campaigns. The report specifically notes the potential for nation-state actors to leverage AI as a force multiplier in cyber espionage and infrastructure targeting.

Anthropic and the Pentagon (Schneier): OpenAI is in and Anthropic is out as a supplier of AI technology for the US defense department. This news caps a week of bluster by the highest officials in the US government towards some of the wealthiest titans of the big tech industry, and the overhanging specter of the existential risks posed by a new technology powerful enough that the Pentagon claims it is essential to national security. At issue is Anthropic's insistence that the US Department of Defense (DoD) could not use its models to facilitate "mass surveillance" or "fully autonomous weapons," provisions the defense secretary Pete Hegseth derided as "woke."

Culture, You, and AI

Canada Needs Nationalized, Public AI (Schneier): Canada has a choice to make about its artificial intelligence future. The Carney administration is investing $2-billion over five years in its Sovereign AI Compute Strategy. Will any value generated by "sovereign AI" be captured in Canada, making a difference in the lives of Canadians, or is this just a passthrough to investment in American Big Tech?

How AI Assistants Are Moving the Security Goalposts (Krebs): AI-based assistants or "agents" — autonomous programs that have access to the user's computer, files, and online services and can automate virtually any task — are growing in popularity with developers and IT workers. But as so many eyebrow-raising headlines over the past few weeks have shown, these powerful and assertive new tools are rapidly shifting the security priorities for organizations, while blurring the lines between data and code, trusted co-worker and insider threat, ninja hacker and novice code jockey.
The new hotness in AI-based assistants — OpenClaw (formerly known as ClawdBot and Moltbot) — has seen rapid adoption since its release in November 2025. OpenClaw is an open-source autonomous AI agent designed to run locally on your computer and proactively take actions on your behalf without needing to be prompted.

North Korean Operatives Use AI Tools to Infiltrate Western Tech Companies: Investigators report that North Korean state-backed operatives are increasingly using AI tools to secure remote jobs at Western technology companies. According to Microsoft researchers, the actors rely on AI voice-changing software, face-swap tools, and generative AI-assisted résumé creation to pose as legitimate job applicants. Once hired, the workers funnel salaries to the North Korean regime and potentially gain access to sensitive networks or source code. The operation demonstrates how generative AI is expanding espionage tactics beyond traditional cyber intrusion into AI-assisted identity deception and workforce infiltration.

State-Backed Hackers Using Gemini AI for Reconnaissance and Attack Preparation (Gemini): A report from Google's threat intelligence team reveals that nation-state hacking groups are experimenting with generative AI platforms such as Gemini to assist cyber operations. These groups are using AI to automate reconnaissance, analyze target infrastructure, generate phishing materials, and develop malware components.
While the tools are not yet replacing traditional offensive techniques, researchers say AI is becoming a productivity accelerator for espionage and cyber operations conducted by government-linked actors.