“State-of-the-Art in Software Security Visualization: A Systematic Review”: This paper reviews and categorises modern techniques for visualising software system security, particularly to support threat detection, compliance monitoring, and security analytics. It argues that traditional textual and numerical approaches are increasingly insufficient as systems grow more complex, and proposes a taxonomy of visualisation approaches (graph-based, metaphor-based, matrix-based, and notation-based). It also discusses gaps and future research directions.
“Vulnerability Management Chaining: An Integrated Framework for Efficient Cybersecurity Risk Prioritization”: This paper proposes an integrated framework that chains historical exploitation evidence (Known Exploited Vulnerabilities, KEV), predictive threat modelling (EPSS), and technical impact (CVSS) to better prioritise vulnerabilities. An evaluation on ~28,000 real-world CVEs suggests substantial efficiency gains (14-18×) and large reductions in urgent remediation workload, while maintaining high coverage of actual threats.
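The chaining idea can be illustrated with a minimal sketch: check the strongest evidence first (confirmed exploitation via KEV), then predicted likelihood (EPSS), then raw severity (CVSS). The thresholds and tier names below are illustrative assumptions, not values taken from the paper.

```python
from dataclasses import dataclass

@dataclass
class Vulnerability:
    cve_id: str
    in_kev: bool   # listed in the Known Exploited Vulnerabilities catalog
    epss: float    # EPSS exploitation probability, 0.0-1.0
    cvss: float    # CVSS base score, 0.0-10.0

def prioritise(vuln: Vulnerability,
               epss_threshold: float = 0.1,   # assumed cutoff
               cvss_threshold: float = 7.0) -> str:  # assumed cutoff
    """Chain the three signals from strongest to weakest evidence."""
    if vuln.in_kev:                   # confirmed real-world exploitation
        return "urgent"
    if vuln.epss >= epss_threshold:   # predicted likely exploitation
        return "high"
    if vuln.cvss >= cvss_threshold:   # severe technical impact only
        return "medium"
    return "low"
```

Because most CVEs fall through to the last two branches, only a small fraction of the backlog ends up in the urgent tier, which is where the claimed workload reduction comes from.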
“From Texts to Shields: Convergence of Large Language Models and Cybersecurity”: This paper analyses how large language models (LLMs) are being integrated with cybersecurity across multiple dimensions: network/software security, generative/automated security tools, 5G vulnerability analysis, and security operations. It explores both the potential (e.g. AI-driven analytics, automated reasoning) and the challenges (trust, transparency, adversarial robustness, governance). It lays out a research agenda for securing LLMs in high-stakes environments.
“LLM-Assisted Proactive Threat Intelligence for Automated Reasoning”: This paper investigates how LLMs, combined with real-time threat intelligence via Retrieval-Augmented Generation (RAG), can improve detection of and response to emerging threats. Using feeds such as KEV, EPSS, and CVE databases, the authors show that their system, built on the Patrowl framework, handles recently disclosed vulnerabilities better than baseline LLMs, improving real-time responsiveness and reasoning in threat analysis.
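The core retrieval step can be sketched as assembling feed data into the prompt before the model reasons about a CVE. This is a minimal illustration with in-memory lookups; the actual Patrowl-based system's interfaces and prompt structure are assumptions here.

```python
def build_prompt(cve_id: str,
                 kev_ids: set[str],
                 epss_scores: dict[str, float],
                 cve_descriptions: dict[str, str]) -> str:
    """Retrieve threat-intel records for a CVE and prepend them
    as context for an LLM query (the retrieval half of RAG)."""
    context = []
    if cve_id in kev_ids:
        context.append(f"{cve_id} is in the KEV catalog (known exploited).")
    if cve_id in epss_scores:
        context.append(f"EPSS exploitation probability: {epss_scores[cve_id]:.2f}")
    if cve_id in cve_descriptions:
        context.append(f"CVE description: {cve_descriptions[cve_id]}")
    context_block = "\n".join(context) or "No feed data found."
    return (f"Threat intelligence context:\n{context_block}\n\n"
            f"Assess the urgency of {cve_id} and recommend a response.")
```

Because the context is fetched from live feeds at query time, the model can answer about vulnerabilities disclosed after its training cutoff, which is the gap the paper targets relative to baseline LLMs.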
“CAI: An Open, Bug Bounty-Ready Cybersecurity AI”: This research introduces CAI, an open-source AI designed specifically to support bug bounty testing. It benchmarks CAI against human experts in CTF (Capture the Flag) environments and demonstrates that CAI can surpass state-of-the-art results, finding vulnerabilities faster and more efficiently, particularly with human oversight (Human-In-The-Loop). It also shows how CAI can democratise access to powerful security testing tools.
“A Framework for Evaluating Emerging Cyberattack Capabilities of AI”: This paper argues that current evaluation frameworks for AI in cybersecurity (e.g., via CTFs, benchmarks) are inadequate to assess real-world risk, and proposes a comprehensive framework to evaluate emerging AI offensive capabilities. It examines dual-use risks, adversarial models, and practical implications for red/blue teams, defenders, and policymakers.