Neuromorphic Mimicry Attacks Exploiting Brain‑Inspired Computing for Covert Cyber Intrusions(H. Ravipati): This pioneering work describes a novel class of cyber‑attacks—"Neuromorphic Mimicry Attacks" (NMAs)—targeting neuromorphic chips (brain‑inspired hardware) used in edge/AI devices. By subtly tampering with synaptic weights or poisoning sensory inputs, attackers mimic legitimate neural activity and evade conventional intrusion detection. The paper proposes anomaly detection tailored to neural behavior and secure synaptic learning protocols. It’s attracting attention because as neuromorphic hardware becomes mainstream (e.g. for smart sensors, medical implants, autonomous systems), this attack vector could emerge rapidly in coming years.
From Texts to Shields: Convergence of Large Language Models and Cybersecurity (Tao Li, Ya‑Ting Yang, Yunian Pan, Quanyan Zhu): This interdisciplinary report explores how large language models (LLMs) are both empowering and challenging cybersecurity. It surveys applications like automated vulnerability analysis (e.g. 5G code scanning), generative security tooling, and network threat detection—while also examining socio‑technical concerns around trust, transparency, and fairness. It proposes human‑in‑the‑loop workflows, interpretability mechanisms, and proactive robustness testing. The paper has gained traction due to the surging integration of LLMs in defense tools and Widespread debate around safe deployment.
Red Teaming with Artificial Intelligence‑Driven Cyberattacks: A Scoping Review: (M. Al‑Azzawi, Dung Doan, T. Sipola, J. Hautamäki, T. Kokkonen): This scoping review examines how artificial intelligence can automate red‑team cyberattacks, from reconnaissance to exploit deployment. Analysing nearly 500 studies (11 included in the final review), it categorizes AI methods used to breach systems, extract data, and manipulate targets. The paper warns that AI‑powered adversarial tools are lowering the barrier for sophisticated attacks—and calls for defensive capabilities that also leverage AI to keep pace. It’s widely cited in discussions about where adversarial and defensive AI arms races are headed.
Exploring the Role of Large Language Models in Cybersecurity: A Systematic Survey (Shuang Tian, Tao Zhang, Jiqiang Liu, Jiacheng Wang, et al.): This systematic survey examines use cases of LLMs across the cyber‑attack lifecycle—from reconnaissance to lateral movement and threat intelligence. It highlights LLMs’ strengths in pattern recognition, automated exploration of network configurations, and real‑time response generation, while also covering associated risks—such as prompt injection, hallucinations, data leakage, and adversarial manipulation. The paper is timely, aligning with current concerns about prompt‑injection (now classified as a top risk in OWASP’s 2025 LLM Top 10).