AI-Driven Cybersecurity Threats: A Survey of Emerging Risks and Defensive Strategies (Sai Teja Erukude, Viswa Chaitanya Marella, Suhasnadh Reddy Veluru): This 2026 survey paper examines the dual-use nature of AI in cybersecurity, identifying novel threat vectors such as deepfakes, adversarial AI attacks, automated malware, and AI-enabled social engineering. The authors present a comparative taxonomy linking specific AI capabilities with corresponding threat modalities and defense strategies, drawing on over 70 academic and industry references. The survey also highlights critical gaps in explainability, interdisciplinary defenses, and regulatory alignment that must be closed to sustain digital trust.
The Promptware Kill Chain: How Prompt Injections Gradually Evolved Into a Multi-Step Malware (Ben Nassi, Bruce Schneier, Oleg Brodt): This paper reframes prompt injection vulnerabilities in large-language-model (LLM) systems as a structured chain of attack steps akin to classical malware campaigns. The authors propose a new “promptware kill chain” model with defined phases—from initial access through privilege escalation to data exfiltration—offering a common framework for threat modeling and cross-domain research between AI safety and cybersecurity.
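To make the kill-chain framing concrete, here is a minimal Python sketch (not taken from the paper) that models the three phases named above as an ordered enumeration and checks whether a logged sequence of LLM interactions completes the full chain; the event-tagging scheme and helper names are hypothetical assumptions.

```python
# Minimal sketch of a promptware kill chain as ordered phases. The phase set is
# limited to the three stages named in the summary; the paper's full model may
# define more. Event tagging and helper names are illustrative assumptions.
from enum import IntEnum

class Phase(IntEnum):
    INITIAL_ACCESS = 1        # e.g., a prompt injection arrives via untrusted input
    PRIVILEGE_ESCALATION = 2  # e.g., injected instructions gain tool or plugin access
    DATA_EXFILTRATION = 3     # e.g., sensitive context is sent to the attacker

def is_complete_chain(observed: list[Phase]) -> bool:
    """Return True if the observed events cover every phase, in order."""
    remaining = list(Phase)                   # phases still to match, in ascending order
    for event in observed:
        if remaining and event == remaining[0]:
            remaining.pop(0)                  # this event advances the chain
    return not remaining

# Example: a trace that walks the full chain
trace = [Phase.INITIAL_ACCESS, Phase.PRIVILEGE_ESCALATION, Phase.DATA_EXFILTRATION]
print(is_complete_chain(trace))  # True
```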
Artificial intelligence and machine learning in cybersecurity: a deep dive into state-of-the-art techniques and future paradigms: This extensive review analyzes how AI and machine learning (ML) are transforming core cybersecurity functions—including intrusion detection, malware classification, behavioral analytics, and threat intelligence. The paper discusses adversarial machine learning, explainable AI, federated learning, and quantum integration as future paradigms, offering a comprehensive roadmap for intelligence-driven, scalable security architectures.
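As a toy illustration of the anomaly-based intrusion detection techniques such a review covers, the sketch below fits scikit-learn's IsolationForest to synthetic network-flow features and flags outlying flows; the feature names, values, and parameters are invented for illustration and do not come from the paper.

```python
# Toy anomaly-based intrusion detection sketch using scikit-learn's
# IsolationForest. All features and data are synthetic placeholders.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Synthetic "normal" flows: [bytes_sent, bytes_received, duration_seconds]
normal_flows = rng.normal(loc=[500, 800, 2.0], scale=[50, 80, 0.5], size=(1000, 3))

# A few anomalous flows with unusually large transfers and long durations
attack_flows = np.array([[50_000, 10, 120.0], [80_000, 5, 300.0]])

detector = IsolationForest(contamination=0.01, random_state=0)
detector.fit(normal_flows)

# predict() returns +1 for inliers (benign) and -1 for outliers (suspicious)
print(detector.predict(attack_flows))  # expected: [-1 -1]
```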
Generative AI revolution in cybersecurity: a comprehensive review of threat intelligence and operations: Focused on the rise of generative AI (GAI), this work explores how generative models can autonomously detect threats, augment human judgment, and contribute to defensive operations. It also critically assesses the limitations and misuse potential of these models, such as incorrect outputs and exploitation by adversaries, highlighting the balance organizations must strike between capability and risk for secure adoption.
A cybersecurity AI agent selection and decision support framework (Masike Malatji): This paper introduces a structured decision support framework that aligns diverse AI agent architectures (reactive, cognitive, hybrid) with the NIST Cybersecurity Framework (CSF) 2.0. It formalizes how AI agents should be selected and deployed across detection, response, and governance functions, offering a practical schema for organizations to move beyond isolated AI tools toward holistic, standards-aligned deployments.
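A minimal sketch of the kind of lookup such a framework implies is shown below, mapping the six NIST CSF 2.0 functions (Govern, Identify, Protect, Detect, Respond, Recover) to candidate agent architectures; the specific pairings are illustrative assumptions, not the paper's recommendations.

```python
# Illustrative selection table mapping NIST CSF 2.0 functions to candidate AI
# agent architectures. The pairings are examples only, not the paper's schema.
CSF_AGENT_MAP: dict[str, list[str]] = {
    "Govern":   ["cognitive"],            # policy reasoning and reporting
    "Identify": ["cognitive", "hybrid"],  # asset and risk discovery
    "Protect":  ["reactive"],             # fast, rule-like enforcement
    "Detect":   ["reactive", "hybrid"],   # low-latency monitoring plus context
    "Respond":  ["hybrid"],               # automated playbooks with reasoning
    "Recover":  ["cognitive", "hybrid"],  # planning restoration and lessons learned
}

def candidate_agents(csf_function: str) -> list[str]:
    """Return candidate agent architectures for a given CSF 2.0 function."""
    return CSF_AGENT_MAP.get(csf_function, [])

print(candidate_agents("Detect"))  # ['reactive', 'hybrid']
```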
Integrating Artificial Intelligence into the Cybersecurity Curriculum in Higher Education: A Systematic Literature Review (Jing Tian): While focused on education, this systematic literature review is trending among academics because it synthesizes research on how AI is being integrated into cybersecurity curricula in higher education. It examines course design, instructional tools, and pedagogical practices that prepare the next generation of cybersecurity professionals to use and defend against advanced AI systems.