Securing the Autonomous Enterprise: From Observability to Resilience

Current security stops at passive observation. Rubrik Agent Operations is the enterprise platform that unifies observability, governance, and recoverability for AI. Join us on November 12th to discover how Rubrik is leveraging its leadership in cyber resilience to protect your autonomous future.

Save My Spot

#223: Digging into Social Engineering, part 3

Welcome to another _secpro!

This week, we're back into social engineering - this time, exploring “missed or misclassified critical signals” with Unit 42. If you've missed our other investigations, then check them out here, here and here.

Check out _secpro premium

If you want more, you know what you need to do: sign up to the premium and get access to everything we have on offer. Click the link above to visit our Substack and sign up there!

Cheers!
Austin Miller
Editor-in-Chief

Don't miss out!

This week's article

Unit 42 on “Missed or Misclassified Critical Signals”

In their latest research, Unit 42 explains that many social engineering attacks don't need advanced hacking tools. Instead, they work because of three main weaknesses: low detection coverage, alert fatigue, and organisational failures.

Check it out today

News Bytes

GTIG AI Threat Tracker: Advances in Threat Actor Usage of AI Tools (Google Threat Intelligence Group): A deep technical analysis detailing how adversaries are now embedding generative AI and LLMs into malware and intrusion workflows. The report highlights real-world examples such as “PROMPTFLUX” and “PROMPTSTEAL”, which use AI for obfuscation, adaptive phishing, and command generation, marking a turning point toward autonomous, AI-powered attack operations.

CYFIRMA Intelligence Report (CYFIRMA Research and Advisory Team): A comprehensive weekly update covering underground forum chatter, ransomware evolution, and exploitation trends. It documents “Monkey Ransomware”, details TTPs aligned to MITRE ATT&CK (execution via native API, process injection, defense evasion), and lists key vulnerabilities like CVE-2025-61932 in Lanscope Endpoint Manager.

CYFIRMA's Analysis of the Monkey Ransomware (CYFIRMA Research): A detailed teardown of the newly observed “Monkey” ransomware variant. It appends a “.monkey” extension, deletes backups, and uses reflective code loading and service creation for persistence. The report provides full TTP mapping, IOCs, and mitigation guidance, suggesting active campaigns targeting APAC organizations.

Rigged Poker Games - "The Department of Justice has indicted thirty-one people over the high-tech rigging of high-stakes poker games: In a typical legitimate poker game, a dealer uses a shuffling machine to shuffle the cards randomly before dealing them to all the players in a particular order. As set forth in the indictment, the rigged games used altered shuffling machines that contained hidden technology allowing the machines to read all the cards in the deck. Because the cards were always dealt in a particular order to the players at the table, the machines could determine which player would have the winning hand."

Signal's Post-Quantum Cryptographic Implementation - "Signal has just rolled out its quantum-safe cryptographic implementation. Ars Technica has a really good article with details: Ultimately, the architects settled on a creative solution. Rather than bolt KEM onto the existing double ratchet, they allowed it to remain more or less the same as it had been. Then they used the new quantum-safe ratchet to implement a parallel secure messaging system."
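To make the parallel-ratchet idea concrete, here is a minimal sketch of the general hybrid pattern described above: derive each message key from both the classical ratchet secret and the quantum-safe ratchet secret, so an attacker has to break both at once. This is an illustration under stated assumptions, not Signal's actual implementation; the post-quantum output is stubbed with random bytes (the real design uses a post-quantum KEM), and hkdf_sha256, derive_message_key, and the info label are names made up for this sketch.

# Illustrative only: combine the outputs of two parallel ratchets so the
# derived message key stays safe while EITHER input remains secret.
import hashlib
import hmac
import os

def hkdf_sha256(ikm: bytes, info: bytes, length: int = 32) -> bytes:
    # RFC 5869 extract-then-expand over SHA-256, single output block
    # (so length must be <= 32 here).
    prk = hmac.new(b"\x00" * 32, ikm, hashlib.sha256).digest()
    return hmac.new(prk, info + b"\x01", hashlib.sha256).digest()[:length]

def derive_message_key(classical_secret: bytes, pq_secret: bytes) -> bytes:
    # Concatenating both secrets before the KDF binds them together:
    # recovering the key requires breaking the classical AND the PQ side.
    return hkdf_sha256(classical_secret + pq_secret, b"hybrid-message-key")

classical = os.urandom(32)     # stand-in for a classical DH ratchet output
post_quantum = os.urandom(32)  # stand-in for a post-quantum KEM shared secret
print(derive_message_key(classical, post_quantum).hex())

The appeal of running the ratchets in parallel rather than replacing the classical one is visible even in this toy: the proven classical construction is left untouched, and the combined key inherits the stronger of the two guarantees.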
Into the blogosphere...

AWS to Bare Metal Two Years Later: Answering Your Toughest Questions About Leaving AWS: "When we published How moving from AWS to Bare-Metal saved us $230,000/yr. in 2023, the story travelled far beyond our usual readership. The discussion threads on Hacker News and Reddit were packed with sharp questions: did we skip Reserved Instances, how do we fail over a single rack, what about the people cost, and when is cloud still the better answer? This follow-up is our long-form reply."

Free software scares normal people: "I'm the person my friends and family come to for computer-related help. (Maybe you, gentle reader, can relate.) This experience has taught me which computing tasks are frustrating for normal people."

What We Talk About When We Talk About Sideloading: "It bears reminding that “sideload” is a made-up term. Putting software on your computer is simply called “installing”, regardless of whether that computer is in your pocket or on your desk. This could perhaps be further precised as “direct installing”, in case you need to make a distinction between obtaining software the old-fashioned way versus going through a rent-seeking intermediary marketplace like the Google Play Store or the Apple App Store."

Aggressive bots ruined my weekend: "On the 25th of October Bear had its first major outage. Specifically, the reverse proxy which handles custom domains went down, causing custom domains to time out. Unfortunately my monitoring tool failed to notify me, and it being a Saturday, I didn't notice the outage for longer than is reasonable. I apologise to everyone who was affected by it. First, I want to dissect the root cause, exactly what went wrong, and then provide the steps I've taken to mitigate this in the future." (A minimal external-probe sketch follows at the end of this section.)

The bug that taught me more about PyTorch than years of using it: "My training loss plateaued and wouldn't budge. Obviously I'd screwed something up. I tried every hyperparameter combination, rewrote my loss function, spent days assuming I'd made some stupid mistake. Because it's always user error. This time, it wasn't. It was a niche PyTorch bug that forced me through layers of abstraction I normally never think about: optimizer internals, memory layouts, dispatch systems, kernel implementations. Taught me more about the framework than years of using it."

What Happened To Running What You Wanted On Your Own Machine?: "When the microcomputer first landed in homes some forty years ago, it came with a simple freedom—you could run whatever software you could get your hands on. Floppy disk from a friend? Pop it in. Shareware demo downloaded from a BBS? Go ahead! Dodgy code you wrote yourself at 2 AM? Absolutely. The computer you bought was yours. It would run whatever you told it to run, and ask no questions. Today, that freedom is dying. What's worse, is it's happening so gradually that most people haven't noticed we're already halfway into the coffin."
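Since the Bear post-mortem's fixes revolve around monitoring that actually fires, here is a minimal, hypothetical external uptime probe illustrating that lesson: check the custom-domain path from outside your own infrastructure and treat every failure, including a timeout (the failure mode in the post), as an alert. The URL and the alert hook are placeholders, not Bear's actual setup.

# Hypothetical external probe: run OFF the monitored infrastructure
# (e.g. from cron on an unrelated host) so the notification path does
# not share a failure domain with the reverse proxy it is watching.
import urllib.error
import urllib.request

PROBE_URL = "https://example-custom-domain.com/health"  # placeholder target
TIMEOUT_S = 10

def probe(url: str) -> tuple[bool, str]:
    # Return (healthy, detail) for a single HTTP GET probe.
    try:
        with urllib.request.urlopen(url, timeout=TIMEOUT_S) as resp:
            return (200 <= resp.status < 300, f"HTTP {resp.status}")
    except TimeoutError:                    # raw socket timeout
        return (False, "timed out")
    except urllib.error.URLError as exc:    # DNS, refused, TLS, 4xx/5xx
        return (False, f"error: {exc.reason}")

if __name__ == "__main__":
    healthy, detail = probe(PROBE_URL)
    if not healthy:
        # Stand-in for a pager/webhook independent of the monitored service.
        print(f"ALERT: {PROBE_URL} unhealthy ({detail})")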
This week's academia

From Texts to Shields: Convergence of Large Language Models and Cybersecurity (Tao Li; Ya-Ting Yang; Yunian Pan; Quanyan Zhu): This paper explores how large language models (LLMs) are increasingly converging with cybersecurity tasks, from vulnerability analysis and network/5G security to generative security engineering. It looks both at how LLMs can assist defenders (automation, reasoning, security analytics) and at how they introduce new risks (trust, transparency, adversarial use). The authors outline socio-technical challenges like interpretability and human-in-the-loop design, and propose a forward-looking research agenda for secure, effective LLM adoption in cybersecurity.

Adversarial Defense in Cybersecurity: A Systematic Review of GANs for Threat Detection and Mitigation (Tharcisse Ndayipfukamiye; Jianguo Ding; Doreen Sebastian Sarwatt; Adamu Gaston Philipo; Huansheng Ning): A large-scale, PRISMA-compliant systematic literature review of how generative adversarial networks (GANs) are being used in cybersecurity - both as attack vectors and as defensive tools - from January 2021 through August 2025. It identifies 185 peer-reviewed studies, develops a four-dimensional taxonomy (defensive function, GAN architecture, cybersecurity domain, adversarial threat model), shows publication trends, assesses the effectiveness of GAN-based defences, and highlights key gaps (training instability, lack of benchmarks, limited explainability). The authors propose a roadmap for future work.

Securing the AI Frontier: Urgent Ethical and Regulatory Imperatives for AI-Driven Cybersecurity (Vikram Kulothungan): This paper examines the ethical and regulatory challenges that arise when AI is deeply integrated into cybersecurity systems. It traces the historical regulation of AI, analyzes current frameworks (for example, the EU AI Act), and discusses ethical dimensions such as bias, transparency, accountability, privacy, and human oversight. It proposes strategies to promote AI literacy, public engagement, and global harmonisation of regulatory approaches in the cybersecurity/AI domain.

Neuromorphic Mimicry Attacks Exploiting Brain-Inspired Computing for Covert Cyber Intrusions (Hemanth Ravipati): This paper introduces a new threat class: “Neuromorphic Mimicry Attacks” (NMAs). These attacks target neuromorphic computing systems (brain-inspired chips, spiking neural networks, edge/IoT hardware) by mimicking legitimate neural activity (via synaptic weight tampering or sensory input poisoning) to evade detection. The paper provides a theoretical framework, simulation results using a synthetic neuromorphic dataset, and proposed countermeasures (neural-specific anomaly detection, secure synaptic learning protocols).

Towards Adaptive AI Governance: Comparative Insights from the U.S., EU, and Asia (Vikram Kulothungan; Deepti Gupta): This article offers a comparative analysis of how different regions (the US, the European Union, and Asia) approach AI governance, innovation, and regulation, especially in the cybersecurity/AI domain. It identifies divergent models (market-driven, risk-based, state-guided), explains the tensions these create for international collaboration, and proposes an “adaptive AI governance” framework blending innovation accelerators, risk oversight, and strategic alignment.

Asymmetry by Design: Boosting Cyber Defenders with Differential Access to AI (Shaun Ee; Chris Covino; Cara Labrador; Christina Krawec; Jam Kraprayoon; Joe O'Brien): This work proposes a strategic framework for cybersecurity defence that deliberately shapes access to AI capabilities (“differential access”) so that defenders get prioritized access while adversaries face tighter restrictions. It outlines three approaches (Promote Access, Manage Access, and Deny by Default) and gives example schemes of how defenders might leverage these in practice.
It argues that as adversaries gain advanced AI, defenders must build architectural and policy asymmetries in their favour.