#219: Getting a CISO's viewpoint

Helping beginners see from the top

Welcome to another _secpro!

This week, we're picking the brain of CISO David Gee to bring you insights that line up nicely with his new book, A Day in the Life of a CISO. We've also included our popular PDF resource again, to help you improve your training sessions and help the non-specialists among us make the right moves in the age of AI. Check it out!

Check out _secpro premium

If you want more, you know what you need to do: sign up to the premium and get access to everything we have on offer. Click the link above to visit our Substack and sign up there!

Cheers!
Austin Miller
Editor-in-Chief

Meet Albus, the first AI Identity Agent, and see why AI-native IGA is set to replace legacy governance tools.

Join Lumos on Oct 30, 12-1pm EST for a live webinar with CEO Andrej Safundzic and CTO Aurangazeb Khan. Learn how agentic AI is transforming Identity Governance with autonomous policies, approvals, and reviews, and see Albus, the industry's first identity agent, in action.

Register Now

This week's article

Rootkits in Focus: A CISO's Perspective

Today, we're taking a closer look at two kernel-level Linux rootkits that, while discovered a few years ago, still reflect the techniques seen in many of today's advanced threats: Syslogk and the CMK Rootkit.

Syslogk, reported by Avast (part of Gen) in 2022, is a kernel-mode Linux rootkit based on the older Adore-Ng rootkit. It's notable for its stealthy behavior: it can hide files, processes, kernel modules, and network connections. What makes it especially evasive is its use of "magic packets": specific network traffic that acts as a trigger to activate its payload, such as a backdoor, only under certain conditions (a minimal sketch of this trigger pattern follows below).

Check it out today
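To make the trigger idea concrete, here is a minimal Python sketch of a "magic packet" check of the kind Syslogk-style rootkits rely on. The marker bytes, offset, and payload layout are hypothetical; Syslogk's real trigger inspects specific fields of specially crafted TCP packets, and this only illustrates the dormant-until-keyed pattern.

```python
# Minimal sketch of a "magic packet" trigger, as used by Syslogk-style
# rootkits to stay dormant until keyed traffic arrives. The 4-byte
# marker and payload layout below are hypothetical, not Syslogk's
# actual packet format.

MAGIC = b"\xde\xad\xbe\xef"  # hypothetical trigger marker

def is_magic_packet(payload: bytes) -> bool:
    """Return True if the payload carries the trigger marker."""
    return payload[:4] == MAGIC

def handle_packet(payload: bytes, state: dict) -> None:
    """Ignore all traffic until the trigger arrives, then 'activate'."""
    if not state["active"] and is_magic_packet(payload):
        state["active"] = True  # a real rootkit would start its backdoor here
        print("trigger seen: payload would now activate")

if __name__ == "__main__":
    state = {"active": False}
    handle_packet(b"ordinary traffic", state)      # ignored
    handle_packet(MAGIC + b"command data", state)  # activates
```

The point of the pattern is evasion: because the implant does nothing until it sees keyed traffic, sandboxing and casual inspection observe an apparently inert module.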
News Bytes

"State-of-the-Art in Software Security Visualization: A Systematic Review": This paper reviews and categorises modern techniques for visualising software system security, particularly to support threat detection, compliance monitoring, and security analytics. It argues that traditional textual and numerical approaches are increasingly insufficient as systems become more complex, and proposes a taxonomy of visualization approaches (graph-based, metaphor-based, matrix, and notation). It also discusses gaps and future research directions.

"Vulnerability Management Chaining: An Integrated Framework for Efficient Cybersecurity Risk Prioritization": This paper proposes an integrated framework that chains historical exploitation evidence (the Known Exploited Vulnerabilities catalogue, KEV), predictive threat modeling (EPSS), and technical impact (CVSS) to better prioritise vulnerabilities. A test over ~28,000 real-world CVEs suggests substantial efficiency gains (14-18x) and large reductions in urgent remediation workload, while maintaining high coverage of actual threats.
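The paper's calibrated decision rules aren't reproduced here, but a minimal sketch of the chaining idea, with hypothetical thresholds and data, looks like this: check confirmed exploitation (KEV) first, then predicted exploitation (EPSS), then fall back to impact (CVSS).

```python
# Toy sketch of vulnerability-management "chaining": triage CVEs by
# confirmed exploitation (KEV), then predicted exploitation (EPSS),
# then technical impact (CVSS). Thresholds and data are hypothetical,
# not the paper's calibrated values.

KEV = {"CVE-2024-0001"}                 # known-exploited set (e.g. CISA KEV)
EPSS = {"CVE-2024-0001": 0.92,          # predicted probability of exploitation
        "CVE-2024-0002": 0.40,
        "CVE-2024-0003": 0.01}
CVSS = {"CVE-2024-0001": 9.8,           # base severity scores
        "CVE-2024-0002": 7.5,
        "CVE-2024-0003": 9.1}

def priority(cve: str) -> str:
    if cve in KEV:
        return "urgent"                 # exploitation already observed
    if EPSS.get(cve, 0.0) >= 0.30:      # hypothetical cut-off
        return "high"
    return "scheduled" if CVSS.get(cve, 0.0) >= 9.0 else "routine"

for cve in sorted(EPSS):
    print(cve, "->", priority(cve))
# CVE-2024-0001 -> urgent; CVE-2024-0002 -> high; CVE-2024-0003 -> scheduled
```

The claimed efficiency gain comes from the ordering: most CVEs never clear the first two gates, so the urgent queue stays small without dropping vulnerabilities that attackers are actually using.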
"From Texts to Shields: Convergence of Large Language Models and Cybersecurity": This paper analyses how large language models (LLMs) are being integrated into cybersecurity across multiple dimensions: network/software security, generative and automated security tools, 5G vulnerability analysis, and security operations. It explores both the potential (e.g. AI-driven analytics, automated reasoning) and the challenges (trust, transparency, adversarial robustness, governance), and lays out a research agenda for securing LLMs in high-stakes environments.

"LLM-Assisted Proactive Threat Intelligence for Automated Reasoning": This paper investigates how LLMs, combined with real-time threat intelligence via Retrieval-Augmented Generation (RAG), can improve detection of and response to emerging threats. Using feeds such as KEV, EPSS, and CVE databases, the authors show that their system (the Patrowl framework) handles recently disclosed vulnerabilities better than baseline LLMs, improving real-time responsiveness and reasoning in threat analysis.

"CAI: An Open, Bug Bounty-Ready Cybersecurity AI": This research introduces CAI, an open-source AI designed specifically to support bug bounty testing. It benchmarks CAI against human experts in Capture the Flag (CTF) environments and demonstrates that CAI can outperform state-of-the-art results, finding vulnerabilities faster and more efficiently, particularly with a human overseeing the system (human-in-the-loop). It also shows how CAI can democratise access to powerful security testing tools.

"A Framework for Evaluating Emerging Cyberattack Capabilities of AI": This paper argues that current frameworks for evaluating AI in cybersecurity (e.g. CTFs and benchmarks) are inadequate for assessing real-world risk, and proposes a comprehensive framework for evaluating emerging offensive AI capabilities. It examines dual-use risks, adversarial models, and practical implications for red and blue teams, defenders, and policymakers.

This week's academia

SmartAttack: Air-Gap Attack via Smartwatches: Demonstrates a practical ultrasonic covert channel that uses a smartwatch's microphone as a receiver to exfiltrate data from air-gapped machines. The study measures range, bit rate, and the effects of body occlusion and noise, and suggests mitigations for high-security environments. The paper drew broad media coverage because it shows how everyday wearables can defeat classical air-gap assumptions. (Mordechai Guri)

RAMBO: Leaking Secrets from Air-Gap Computers by Spelling Covert Radio Signals from Computer RAM: Introduces RAMBO, a side channel that programs RAM access patterns to generate detectable electromagnetic/radio emissions from memory buses. Shows how malware can encode and transmit secrets from air-gapped machines to SDR receivers, and discusses countermeasures. The attack has been widely reported and discussed in the infosec press. (Mordechai Guri)

Security Concerns for Large Language Models: A Survey: A comprehensive academic survey of emergent security and privacy threats tied to LLMs: prompt injection, jailbreaking, data poisoning and backdoors, misuse for malware and disinformation, and risks from autonomous agents. Summarizes recent studies (2022-2025) and evaluates defense approaches and open problems, highly relevant as LLMs increasingly factor into both offensive and defensive cyber operations. (Miles Q. Li and Benjamin C. M. Fung)

Why Johnny Signs with Sigstore: Examining Tooling as a Factor in Software-Signing Adoption in the Sigstore Ecosystem: A qualitative case study, built on practitioner interviews, of tooling, usability, and adoption barriers for modern software signing in the Sigstore ecosystem. Offers practical recommendations for improving adoption of signing and provenance tools, directly relevant to ongoing software supply-chain security conversations after high-profile incidents. The paper has been cited in industry and academic discussions about improving supply-chain resilience. (Kelechi G. Kalu, Sofia Okorafor, Tanmay Singla, Sophie Chen, Santiago Torres-Arias, James C. Davis)

"LLMs unlock new paths to monetizing exploits": A technical analysis showing how large language models lower the cost, and change the economics, of finding and monetizing software vulnerabilities, enabling more targeted, user-specific exploit generation and tailored extortion. The paper provides proof-of-concept demonstrations and argues for new defense strategies and measurements. It has stirred debate about the near-term impact of LLMs on attacker capabilities. (Nicholas Carlini, Milad Nasr, Edoardo Debenedetti, Barry Wang, Christopher A. Choquette-Choo, Daphne Ippolito, Florian Tramèr, Matthew Jagielski)

"Extortionality" in Ransomware Attacks: A Microeconomic Study of Extortion and Externality: A microeconomic and empirical treatment of ransomware payments and their externalities: when victims pay, they may increase incentives for attackers and raise risk for others. The paper studies the decision drivers behind ransom payments and discusses policy implications (should ransom payments be regulated, taxed, or subsidized to reduce social harm?). The work is being referenced in policy discussions and media coverage about whether public institutions should be allowed to pay ransoms. (Tim Meurs and collaborators)
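As a toy illustration of the externality argument (all numbers hypothetical, not the paper's data), the private and social costs of paying can point in opposite directions:

```python
# Toy expected-cost comparison behind the ransomware externality
# argument: paying may minimize one victim's cost while subsidizing
# future attacks on everyone else. All numbers are hypothetical.

ransom         = 100_000   # demanded payment
loss_if_refuse = 400_000   # expected downtime/recovery cost on refusal
p_recover_free = 0.35      # chance backups make refusal nearly costless
externality    = 250_000   # added expected harm to future victims
                           # when payment funds further attacks

cost_pay    = ransom
cost_refuse = (1 - p_recover_free) * loss_if_refuse

print(f"private cost if paying:   {cost_pay:,.0f}")
print(f"private cost if refusing: {cost_refuse:,.0f}")
print(f"social cost if paying:    {cost_pay + externality:,.0f}")
# Privately, paying looks cheaper (100,000 < 260,000), but once the
# externality is counted, the social cost of paying is higher
# (350,000 > 260,000) -- the wedge that motivates regulating payments.
```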