Neuromorphic Mimicry Attacks Exploiting Brain-Inspired Computing for Covert Cyber Intrusions: (Hemanth Ravipati)Neuromorphic computing, which mimics the brain’s neural structure in hardware, is increasingly used for efficient AI/edge computing. This paper introduces Neuromorphic Mimicry Attacks (NMAs), a novel class of threats that exploit the probabilistic, non-deterministic behavior of neuromorphic chips. By manipulating synaptic weights or poisoning sensory inputs, attackers can mimic legitimate neural activity, thereby evading standard intrusion detection systems. The work includes a theoretical framework, simulation experiments, and proposals for defenses—e.g. anomaly detection tuned to synaptic behavior, secure synaptic learning. The paper highlights that neuromorphic architectures introduce new cybersecurity risk surfaces.
APT-LLM: Embedding-Based Anomaly Detection of Cyber Advanced Persistent Threats Using Large Language Models: (Sidahmed Benabderrahmane, Petko Valtchev, James Cheney, Talal Rahwan)This paper tackles the hard problem of detecting Advanced Persistent Threats (APTs), which tend to blend into normal system behavior. Their approach, APT-LLM, uses large language models (e.g. BERT, ALBERT, etc.) to embed process–action provenance traces into semantic-rich embeddings. They then use autoencoder models (vanilla, variational, denoising) to learn normal behavior and flag anomalies. Evaluated on highly imbalanced real-world datasets (some with only 0.004% APT-like traces), they demonstrate substantial gains over traditional anomaly detection methods. The core idea is leveraging the representational strength of LLMs for cybersecurity trace analysis.
Precise Anomaly Detection in Behavior Logs Based on LLM Fine-Tuning: (S. Song et al.)Insider threats are notoriously difficult to detect because anomalies in user behavior often blur with benign but unusual actions. This paper proposes converting user behavior logs into natural language narratives, then fine-tuning a large language model with a contrastive learning objective (first at a global behavior level, then refined per user) to distinguish between benign and malicious anomalies. They also propose a fine-grained tracing mechanism to map detected anomalies back to behavioral steps. On the CERT v6.2 dataset, their approach achieves F1 ≈ 0.8941, outperforming various baseline methods. The method aims to reduce information loss in translation of logs to features and improve interpretability.
Exposing the Ghost in the Transformer: Abnormal Detection for Large Language Models via Hidden State Forensics: (Shide Zhou, Kailong Wang, Ling Shi, Haoyu Wang) As LLMs are embedded into real-world systems, they become potential attack targets (jailbreaks, backdoors, adversarial attacks). This work proposes a detection method that inspects internal hidden states (activation patterns) across layers and uses “hidden state forensics” to detect abnormal behaviors in real-time. The approach is claimed to detect a variety of threats (e.g. backdoors, deviations) with >95% accuracy and low overhead. The method operates without needing to retrain or heavily instrument the model, offering a promising path toward monitoring LLM security in deployment.
Robust Anomaly Detection in O-RAN: Leveraging LLMs against Data Manipulation Attacks: (Thusitha Dayaratne, Ngoc Duy Pham, Viet Vo, Shangqi Lai, Sharif Abuadbba, Hajime Suzuki, Xingliang Yuan, Carsten Rudolph) The Open Radio Access Network (O-RAN) architecture, used in 5G, introduces openness and programmability (xApps), but also novel attack vectors. The authors identify a subtle “hypoglyph” attack: injecting Unicode-wise manipulations (e.g. look-alike characters) into data that evade traditional ML-based anomaly detectors. They propose using LLMs (via prompt engineering) to robustly detect anomalies, even in manipulated data, and demonstrate low detection latency (<0.07 s), making it potentially viable for near-real-time use in RAN systems. This work bridges wireless systems and AI-based security in a timely domain.
Generative AI in Cybersecurity: A Comprehensive Review of Future Directions: (M. A. Ferrag et al.) This is a survey/review paper covering the intersection of Generative AI / LLMs and cybersecurity. It synthesizes recent research on how generative models can be used for threat creation (e.g. adversarial attacks, automated phishing, malware synthesis) and defense (e.g. automated patch generation, security policy synthesis, anomaly detection). The paper also outlines open challenges and risks (e.g. misuse, model poisoning, hallucination) and proposes a structured roadmap for future research. As the field is evolving rapidly, this review is becoming a frequently cited reference point.