Most teams dread FedRAMP until they switch to Paramify. We make the process faster, clearer, and far more efficient by pairing smart automation with experts who help you exactly where you need it most. Come see how fun compliance can actually be, and grab a free gift when you join us for a demo. Schedule your demo here!

#227: Wandering Down Memory Lane

A look back at 2025 to understand where we are today

Welcome to another _secpro!

We're done with social engineering for now, but if you'd like to find out how the adversary moves in the age of AI, then make sure to check out the articles linked in this introduction: here, here, here, here, and here.

Check out _secpro premium

If you want more, you know what you need to do: sign up to the premium and get access to everything we have on offer. Click the link above to visit our Substack and sign up there!

Cheers!
Austin Miller
Editor-in-Chief

AI Agents Frontier - December 13, Saturday

Join the pioneers behind AG2 and autonomous research agents for a 5-hour deep dive into controlled orchestration, reproducibility, and safe deployment of scalable multi-agent architectures. Discover how to build deterministic, explainable, verifiable agents that eliminate hallucinations and support secure, auditable decision workflows. Limited early-bird seats remaining. Book Your Pass Now!

This week's article

A quick look back at 2025

A quick retrospective to take stock of a year of huge upheavals and change. Jump in to see what we've identified as "the big themes" of 2025 and leave your comments on Substack!

Check it out today

News Bytes

Chinese-linked hackers deploy "BRICKSTORM" for long-term access: The Cybersecurity and Infrastructure Security Agency (CISA) has issued an alert describing a sophisticated backdoor called "BRICKSTORM," used by state-sponsored actors from the People's Republic of China to maintain stealthy, persistent access on compromised VMware vSphere and Windows systems.
The implant, written in Golang, grants attackers full interactive shell access, enabling file upload/download, manipulation, and long-term compromise.

New Android RAT "Albiriox" targets 400+ financial apps with live remote control and banking fraud: A recently discovered Android malware dubbed Albiriox operates as a remote-access Trojan (RAT) and banking trojan, giving attackers control over infected devices. Once installed (often via fake landing pages or spoofed app stores), Albiriox can remotely control phone screens, intercept credentials, and execute on-device banking or crypto transactions, effectively draining accounts from within the victim's own sessions.

Seven-year browser-extension campaign from "ShadyPanda" infected 4.3M users: The group known as ShadyPanda spent years publishing seemingly legitimate extensions for browsers like Chrome and Edge, accumulating user trust, before silently updating them with malicious code. The campaign reportedly infected around 4.3 million users. The case underscores long-term supply-chain-style extension abuse and raises alarm about post-installation update security.

Threat actors abusing calendar subscriptions to deliver phishing and malware lures: A new trend uncovered by threat intelligence shows attackers exploiting subscription-style calendar invites to deliver phishing links. Once subscribed, victims see malicious events or links, a stealthy method that bypasses traditional email phishing filters and broadens the attack surface beyond email.

Critical vulnerability in React/Next.js frameworks allows remote code execution via a deserialization bug (Akamai): A newly disclosed flaw, CVE-2025-55182, affects the server-function implementations of multiple React-based frameworks. The vulnerability enables remote code execution when processing incoming "Flight" requests, posing a serious risk to web applications built with React / Next.js.
Developers are urged to patch immediately.

"Telemetry Complexity Attacks", a new class of bypass techniques against malware analysis and EDR platforms: A recent research paper demonstrated how adversaries can exploit weaknesses in the telemetry collection pipelines used by malware analysis and EDR systems. By generating deeply nested and oversized telemetry data, attackers can trigger serializer or database failures, effectively causing denial-of-analysis (DoA) and hiding malicious behavior from detection. The research flagged real-world systems that fail under this technique.

The emergence of "Benzona" ransomware on underground forums: According to the latest intelligence from CYFIRMA, a new ransomware strain called Benzona was spotted being offered on dark-web forums, signaling the ongoing churn and availability of malware-as-a-service (MaaS) tools for criminals.

Research claim: cybercrime globally is dominated by middle-aged offenders, not typical "teen hackers": A study aggregating data from over 400 law-enforcement bodies suggests that most cybercriminals fall into a middle-aged demographic, challenging the popular stereotype of cybercrime being driven by young hackers. The findings may reshape how law enforcement and policy target cybercrime demographics.

Into the blogosphere...

Shai-Hulud 2.0: How Cortex Detects and Blocks the Resurgent npm Worm: This post details a major supply-chain attack dubbed "Shai-Hulud 2.0," in which a malicious worm compromised thousands of npm packages. It explains how the malware spreads, steals credentials, establishes persistent backdoors, and compromises developer environments, and outlines how the provider's security tools (Cortex Cloud, XDR, Prisma Cloud) can detect and block such attacks.

AI & Security: Revolutionizing Cybersecurity in the Digital Age: This article explores how artificial intelligence (AI) is transforming cybersecurity, shifting defences from reactive to proactive.
It examines use-cases where AI helps detect and mitigate threats, analyzes the challenges of integrating AI into security strategies, and highlights how organizations can leverage modern AI/ML to improve their security posture.

When Artificial Intelligence Becomes the Battlefield: This post dives into the darker side of AI, describing how attackers are weaponizing AI for ransomware, phishing, browser-based exploits, AI-native malware, and "vibe-hacking" (emotionally targeted phishing and extortion). It outlines real-world incidents and warns of systemic weaknesses in AI governance, urging more robust controls and oversight for AI deployments.

Multi-Dimensional Threat Intelligence Analysis: Looking for AI Adversaries: This analysis recounts how a security team monitored 427 blocked IP addresses over a short period to evaluate whether emerging AI-powered adversarial techniques were in use. The conclusion: no AI-adaptive threats detected (yet). But the report highlights infrastructure evolution (bulletproof hosting, "brand-weaponization") and warns that adversaries may shift tactics once detection evasion becomes easier. It offers a practical view of real-world threat-intelligence operations.

Turning Kubernetes Last Access to Kubernetes Least Access Using KIEMPossible: This recent post explains how identities and permissions inside Kubernetes environments often sprawl, giving threat actors an excessive attack surface. It shows how the tool/approach "KIEMPossible" can help organisations audit, trace, and reduce permissions to enforce least privilege, significantly reducing risk for cloud workloads.

This week's academia

Intrusion detection using TCP/IP single packet header binary image for IoT networks (Mohamed El-Sherif, Ahmed Khattab & Magdy El-Soudani): This paper proposes a novel intrusion detection approach for IoT networks that converts single raw TCP/IP packet headers into binary (black-and-white) images.
A lightweight Convolutional Neural Network (CNN) then classifies the traffic as benign or malicious. On benchmark IoT datasets (Edge-IIoTset and MQTTset), the method achieved perfect or near-perfect detection rates (100% binary accuracy, ~97–100% multiclass accuracy) with minimal computational resources. The approach avoids heavy feature engineering and payload inspection, making it suitable for resource-constrained IoT devices and real-time deployment.

Adversarial Defense in Cybersecurity: A Systematic Review of GANs for Threat Detection and Mitigation (Tharcisse Ndayipfukamiye, Jianguo Ding, Doreen Sebastian Sarwatt, Adamu Gaston Philipo & Huansheng Ning): This systematic review explores how Generative Adversarial Networks (GANs) are being used not just by attackers but also defensively for cybersecurity tasks. The paper consolidates 185 peer-reviewed studies, developing a taxonomy across defensive functions, GAN architectures, threat models, and application domains (e.g., network intrusion detection, IoT, malware analysis). The authors highlight meaningful gains (e.g., better detection accuracy and robustness) but also underscore persistent challenges: instability in GAN training, a lack of standard benchmarks, high computational cost, and poor explainability. They propose directions for future research, including hybrid models, transparent benchmarks, and targeting emerging threats such as LLM-driven attacks.

Red Teaming with Artificial Intelligence-Driven Cyberattacks: A Scoping Review (Mays Al-Azzawi, Dung Doan, Tuomo Sipola, Jari Hautamäki & Tero Kokkonen): This review maps how AI is transforming offensive cybersecurity, specifically red-teaming and attack simulations. Drawing on a broad literature base, the paper identifies typical AI-driven methods used by attackers (e.g., automated penetration testing, credential harvesting, AI-assisted social engineering) and common targets (sensitive databases, cloud services, social media, etc.).
The review underscores the rising threat from AI-enabled attacks that scale, adapt, and can bypass traditional defences, serving as a warning and a call for defence strategies that account for AI-driven adversaries.

Adaptive Cybersecurity: Dynamically Retrainable Firewalls for Real-Time Network Protection (Sina Ahmadi): This paper argues that traditional static firewall rules are increasingly inadequate in the face of rapidly evolving threats. It proposes "dynamically retrainable firewalls": ML-driven firewall systems that continuously retrain on incoming network data, detect anomalous activity in real time, and adapt to new threat patterns. The work explores design architectures (micro-services, distributed systems), data sources for retraining, latency and performance trade-offs, and ways to integrate with modern paradigms like Zero Trust. It also discusses future challenges, including AI advances and quantum computing, and suggests that this adaptive firewall approach may be a key pillar of future network security.

Neuromorphic Mimicry Attacks Exploiting Brain-Inspired Computing for Covert Cyber Intrusions (Hemanth Ravipati): As neuromorphic computing (brain-inspired hardware) becomes more common, especially in edge devices, IoT, and AI applications, this paper demonstrates a novel class of threats: Neuromorphic Mimicry Attacks (NMAs). Because neuromorphic chips operate with probabilistic and non-deterministic neural activity, attackers can tamper with synaptic weights or poison sensory inputs to mimic legitimate neural signals. Such attacks can evade conventional intrusion detection systems. The paper provides a theoretical framework and simulations, and proposes countermeasures (e.g., neural-specific anomaly detection, secure learning protocols).
The study warns that as neuromorphic hardware spreads, these threats will become increasingly relevant.
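To make the single-packet-header idea from the first academia paper above concrete, here is a minimal sketch of the preprocessing step: unpacking a raw header's bytes into a black-and-white bit matrix that a lightweight CNN could then classify. The 32-column width, the zero-padding scheme, and the sample IPv4 header are illustrative assumptions, not details taken from the paper.

```python
# Sketch: turn a raw TCP/IP header into a binary (0/1) bit matrix.
# The 32-bit row width mirrors the word size of an IPv4 header; the
# paper's actual image dimensions may differ.

def header_to_bit_matrix(header: bytes, width: int = 32) -> list[list[int]]:
    """Unpack each byte into bits (most-significant bit first) and lay
    them out row-major as a black-and-white image of `width` columns,
    zero-padding the final row."""
    bits = []
    for byte in header:
        for shift in range(7, -1, -1):
            bits.append((byte >> shift) & 1)
    while len(bits) % width:          # pad to a whole number of rows
        bits.append(0)
    return [bits[i:i + width] for i in range(0, len(bits), width)]

# Example: a minimal 20-byte IPv4 header (version/IHL byte 0x45, etc.)
ipv4_header = bytes.fromhex("4500003c1c4640004006b1e6c0a80001c0a800c7")
matrix = header_to_bit_matrix(ipv4_header)
print(len(matrix), "rows x", len(matrix[0]), "cols")  # 5 rows x 32 cols
```

Each packet header becomes a tiny fixed-shape image with no feature engineering or payload inspection, which is what makes the approach cheap enough for resource-constrained IoT devices.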
Read more