From Texts to Shields: Convergence of Large Language Models and Cybersecurity (Tao Li, Ya-Ting Yang, Yunian Pan & Quanyan Zhu): This paper explores how large language models (LLMs) are increasingly converging with cybersecurity tasks, for example vulnerability analysis, network and software security, 5G vulnerability assessment, generative security engineering, and automated reasoning in defence scenarios. The authors highlight socio-technical challenges (trust, transparency, human-in-the-loop operation, interpretability) in deploying LLMs in high-stakes security settings, and propose a forward-looking research agenda integrating formal methods, human-centred design, and organisational policy in LLM-enhanced cyber operations.
Adversarial Defense in Cybersecurity: A Systematic Review of GANs for Threat Detection and Mitigation (Tharcisse Ndayipfukamiye, Jianguo Ding, Doreen Sebastian Sarwatt, Adamu Gaston Philipo & Huansheng Ning): This survey conducts a PRISMA-style review (2021 to August 2025) of how Generative Adversarial Networks (GANs) are used as both attack tools and defensive tools in cybersecurity. The authors analyse 185 peer-reviewed studies, develop a taxonomy across four dimensions (defensive function, GAN architecture, cybersecurity domain, adversarial threat model), and identify key gaps: training instability, lack of standard benchmarks, high computational cost, and limited explainability. They propose a roadmap towards scalable, trustworthy GAN-powered defences.
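For context, the generator-versus-discriminator game underlying all of the surveyed methods is the standard GAN objective (Goodfellow et al., 2014), a min-max game in which the generator G tries to fool the discriminator D:

$$\min_G \max_D \; \mathbb{E}_{x\sim p_{\text{data}}}\big[\log D(x)\big] + \mathbb{E}_{z\sim p_z}\big[\log\big(1 - D(G(z))\big)\big]$$

In the defensive uses the survey covers, the same game is repurposed, e.g. G synthesises realistic attack traffic to augment scarce intrusion data, while D doubles as an anomaly detector.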
Securing the AI Frontier: Urgent Ethical and Regulatory Imperatives for AI-Driven Cybersecurity (Vikram Kulothungan): This article examines the ethical and regulatory challenges arising from the deployment of AI in cybersecurity. It traces the historical regulation of AI, analyses current global frameworks (e.g., the EU AI Act), and discusses key issues including bias, transparency, accountability, privacy, and human oversight. The paper proposes strategies for enhancing AI literacy, public engagement, and global harmonisation of regulation in AI-driven cyber-systems.
A Defensive Framework Against Adversarial Attacks on Machine Learning-Based Network Intrusion Detection Systems (Benyamin Tafreshian & Shengzhi Zhang): The authors propose a multi-layer defensive framework for ML-based Network Intrusion Detection Systems (NIDS), which are vulnerable to adversarial evasion. Their framework integrates adversarial training, dataset balancing, advanced feature engineering, ensemble learning, and fine-tuning. On the benchmark datasets NSL-KDD and UNSW-NB15, they report an average ~35% increase in detection accuracy and a ~12.5% reduction in false positives under adversarial conditions.
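As a rough illustration of the adversarial-training layer of such a framework (this is a minimal sketch, not the authors' implementation: the toy logistic-regression detector, the FGSM-style perturbation, and the synthetic flow features are all assumptions made for the example):

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def predict(w, b, x):
    """Probability that feature vector x is malicious."""
    return sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)

def fgsm(w, b, x, y, eps=0.3):
    """FGSM-style evasion: step each feature along the sign of the input
    gradient of the log-loss, which is (p - y) * w for logistic regression."""
    p = predict(w, b, x)
    return [xi + eps * (1.0 if (p - y) * wi > 0 else -1.0)
            for xi, wi in zip(x, w)]

def train(data, epochs=200, lr=0.5, adversarial=False, eps=0.3):
    """Gradient-descent training, optionally augmented with evasion variants."""
    w, b = [0.0] * len(data[0][0]), 0.0
    for _ in range(epochs):
        batch = list(data)
        if adversarial:  # adversarial training: add a perturbed copy of each sample
            batch += [(fgsm(w, b, x, y, eps), y) for x, y in data]
        for x, y in batch:
            err = predict(w, b, x) - y
            w = [wi - lr * err * xi for wi, xi in zip(w, x)]
            b -= lr * err
    return w, b

def accuracy(w, b, samples):
    return sum((predict(w, b, x) > 0.5) == bool(y) for x, y in samples) / len(samples)

def robust_acc(w, b, data):
    """Accuracy against FGSM evasion crafted per-model."""
    adv = [(fgsm(w, b, x, y), y) for x, y in data]
    return accuracy(w, b, adv)

# Synthetic flows: (scaled packet rate, scaled payload entropy), 0 = benign, 1 = attack
data = [([0.1, 0.2], 0), ([0.2, 0.1], 0), ([0.0, 0.3], 0),
        ([0.9, 1.0], 1), ([1.0, 0.8], 1), ([0.8, 0.9], 1)]

plain_w, plain_b = train(data)
robust_w, robust_b = train(data, adversarial=True)
```

On this toy data the adversarially trained model keeps perfect clean accuracy while matching or beating the plain model under evasion; the actual framework layers this idea with dataset balancing, feature engineering, ensembles, and fine-tuning.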
Cyber Security: State of the Art, Challenges and Future (W.S. Admass et al.): This article presents an overview of the state of the art in cybersecurity: existing architectures, key challenges, and emerging global trends. It reviews tactics, techniques, and procedures (TTPs), current defence mechanisms, and future research directions.
DYNAMITE: Dynamic Defense Selection for Enhancing Machine Learning-based Intrusion Detection Against Adversarial Attacks (Jing Chen, Onat Güngör, Zhengli Shang, Elvin Li & Tajana Rosing): This paper introduces “DYNAMITE”, a framework for dynamically selecting the optimal defence mechanism for ML-based Intrusion Detection Systems (IDS) when under adversarial attack. Instead of applying a static defence, DYNAMITE uses a meta-ML selection mechanism to pick the best defence in real-time, reducing computational overhead by ~96.2% compared to an oracle and improving F1-score by ~76.7% over random defence and ~65.8% over the best static defence.
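The selection step can be caricatured as a tiny meta-learner that maps features of the observed attack to the defence that performed best on similar past attacks, instead of running every candidate defence (the oracle). Everything below, the feature vectors, defence names, and nearest-neighbour rule, is an illustrative assumption, not the paper's actual mechanism:

```python
# Toy meta-selector: nearest-neighbour lookup over past
# (attack profile, best defence) pairs collected offline.
# A real system would use a trained meta-ML model instead.

def squared_distance(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b))

def select_defence(attack_features, history):
    """Pick the defence that worked best on the most similar known attack.

    history: list of (feature_vector, best_defence_name) pairs, e.g.
    built by benchmarking each defence against a library of known attacks.
    """
    _, defence = min(history, key=lambda h: squared_distance(attack_features, h[0]))
    return defence

# Hypothetical attack profiles: (perturbation magnitude, query rate), scaled to [0, 1]
history = [
    ((0.9, 0.1), "adversarial_training"),  # large-perturbation evasion
    ((0.1, 0.9), "input_denoising"),       # low-and-slow probing
    ((0.5, 0.5), "ensemble_voting"),       # mixed behaviour
]
```

The design point the sketch illustrates is that the per-attack selection costs one lookup rather than one full evaluation per defence, which is where the reported ~96.2% overhead reduction over the oracle comes from.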