Intrusion detection using TCP/IP single packet header binary image for IoT networks (Mohamed El-Sherif, Ahmed Khattab & Magdy El-Soudani): This paper proposes a novel intrusion detection approach for IoT networks that converts individual raw TCP/IP packet headers into binary (black-and-white) images. A lightweight Convolutional Neural Network (CNN) then classifies the resulting images as benign or malicious traffic. On benchmark IoT datasets (Edge-IIoTset and MQTTset), the method achieved perfect or near-perfect detection rates (100% binary accuracy, roughly 97–100% multiclass accuracy) with minimal computational resources. Because the approach avoids heavy feature engineering and payload inspection, it is suitable for resource-constrained IoT devices and real-time deployment. A rough illustration of the header-to-image step appears below.
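The following sketch shows one plausible way to bit-unpack a fixed-length header into a 0/1 pixel grid for a CNN; the header length (54 bytes) and image geometry (24x18) are assumptions for illustration, since the paper's exact preprocessing is not reproduced here.

```python
# Illustrative sketch only; parameters below are assumptions, not the paper's spec.
import numpy as np

HEADER_BYTES = 54     # assumed Ethernet+IP+TCP header length
IMG_SHAPE = (24, 18)  # assumed image geometry (24*18 = 432 = 54*8 bits)

def header_to_binary_image(raw_packet: bytes) -> np.ndarray:
    """Convert the first HEADER_BYTES of a raw packet into a 0/1 pixel grid."""
    header = raw_packet[:HEADER_BYTES].ljust(HEADER_BYTES, b"\x00")  # truncate or pad
    bits = np.unpackbits(np.frombuffer(header, dtype=np.uint8))      # 432 bits
    return bits.reshape(IMG_SHAPE)                                   # binary image

# Example: a dummy packet becomes a 24x18 black-and-white image for the CNN.
img = header_to_binary_image(b"\x45\x00\x00\x34" + b"\x00" * 60)
print(img.shape, img.dtype)  # (24, 18) uint8
```

Because the transform is a fixed bit-reshape with no payload parsing, it costs almost nothing per packet, which is consistent with the paper's emphasis on resource-constrained deployment.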
Adversarial Defense in Cybersecurity: A Systematic Review of GANs for Threat Detection and Mitigation (Tharcisse Ndayipfukamiye, Jianguo Ding, Doreen Sebastian Sarwatt, Adamu Gaston Philipo & Huansheng Ning): This systematic review explores how Generative Adversarial Networks (GANs) are used not only by attackers but also defensively for cybersecurity tasks. The paper consolidates 185 peer-reviewed studies into a taxonomy spanning defensive functions, GAN architectures, threat models, and application domains (e.g., network intrusion detection, IoT, malware analysis). The authors highlight meaningful gains, such as better detection accuracy and robustness, but also underscore persistent challenges: instability in GAN training, lack of standard benchmarks, high computational cost, and poor explainability. They propose directions for future research, including hybrid models, transparent benchmarks, and defenses against emerging threats such as LLM-driven attacks.
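One defensive pattern recurring in this literature is training a GAN on benign traffic only and reusing the discriminator as an anomaly scorer. The sketch below shows that pattern in miniature; the feature dimension, network sizes, and synthetic data are all hypothetical and not taken from any reviewed study.

```python
# Minimal sketch: train a GAN on benign-only features, then flag inputs the
# discriminator scores as unlikely. All dimensions and data are hypothetical.
import torch
import torch.nn as nn

DIM = 16  # assumed feature-vector size for one traffic record

G = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, DIM))
D = nn.Sequential(nn.Linear(DIM, 32), nn.ReLU(), nn.Linear(32, 1), nn.Sigmoid())
opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCELoss()

benign = torch.randn(512, DIM) * 0.5 + 1.0  # stand-in for benign traffic features

for _ in range(200):
    real = benign[torch.randint(0, 512, (64,))]
    fake = G(torch.randn(64, 8))
    # Discriminator step: benign -> 1, generated -> 0.
    loss_d = bce(D(real), torch.ones(64, 1)) + bce(D(fake.detach()), torch.zeros(64, 1))
    opt_d.zero_grad()
    loss_d.backward()
    opt_d.step()
    # Generator step: try to fool the discriminator.
    loss_g = bce(D(fake), torch.ones(64, 1))
    opt_g.zero_grad()
    loss_g.backward()
    opt_g.step()

# A low discriminator score means "does not look benign": a candidate anomaly.
suspicious = torch.randn(1, DIM) * 3.0
print("benign-likeness:", D(suspicious).item())
```

The training instability the review highlights is visible even at this scale: the generator and discriminator losses oscillate rather than converge monotonically.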
Red Teaming with Artificial Intelligence-Driven Cyberattacks: A Scoping Review (Mays Al-Azzawi, Dung Doan, Tuomo Sipola, Jari Hautamäki & Tero Kokkonen): This review maps how AI is transforming offensive cybersecurity, specifically red teaming and attack simulations. Drawing on a broad literature base, the paper identifies typical AI-driven attack methods (e.g., automated penetration testing, credential harvesting, and AI-assisted social engineering) and common targets (sensitive databases, cloud services, social media, etc.). The review underscores the rising threat from AI-enabled attacks that scale, adapt, and can bypass traditional defenses, and it serves as both a warning and a call for defense strategies that account for AI-driven adversaries.
Adaptive Cybersecurity: Dynamically Retrainable Firewalls for Real-Time Network Protection (Sina Ahmadi): This paper argues that traditional static firewall rules are increasingly inadequate against rapidly evolving threats. It proposes “dynamically retrainable firewalls”: ML-driven firewall systems that continuously retrain on incoming network data, detect anomalous activity in real time, and adapt to new threat patterns. The work explores design architectures (microservices, distributed systems), data sources for retraining, latency and performance trade-offs, and integration with modern paradigms such as Zero Trust. It also discusses future challenges, including advances in AI and quantum computing, and suggests that this adaptive firewall approach may become a key pillar of future network security.
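A minimal sketch of the retraining loop such a firewall implies is shown below, assuming an incremental scikit-learn classifier (SGDClassifier with partial_fit) stands in for the paper's ML component; the feature layout and labeling rule are invented for illustration.

```python
# Sketch of the "dynamically retrainable" idea, not the paper's actual system:
# an incremental classifier in the firewall decision path is periodically
# updated on freshly labeled traffic batches instead of being fully retrained.
import numpy as np
from sklearn.linear_model import SGDClassifier

clf = SGDClassifier(loss="log_loss")  # supports incremental partial_fit updates
classes = np.array([0, 1])            # 0 = allow (benign), 1 = block (malicious)

def retrain(batch_X: np.ndarray, batch_y: np.ndarray) -> None:
    """Fold a new labeled traffic batch into the model without a full retrain."""
    clf.partial_fit(batch_X, batch_y, classes=classes)

def firewall_decision(flow_features: np.ndarray) -> str:
    return "BLOCK" if clf.predict(flow_features.reshape(1, -1))[0] == 1 else "ALLOW"

# Simulated operation: retrain on each labeled batch, then keep filtering traffic.
rng = np.random.default_rng(0)
for _ in range(10):  # e.g., one batch per monitoring interval
    X = rng.normal(size=(100, 5))                # hypothetical flow features
    y = (X[:, 0] + X[:, 1] > 0.5).astype(int)    # stand-in labeling rule
    retrain(X, y)
print(firewall_decision(rng.normal(size=5)))
```

The design choice the paper weighs is visible here: incremental updates keep per-batch latency low, at the cost of the drift and data-quality risks that come with retraining on live traffic.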
Neuromorphic Mimicry Attacks Exploiting Brain-Inspired Computing for Covert Cyber Intrusions (Hemanth Ravipati): As neuromorphic (brain-inspired) computing hardware becomes more common, especially in edge devices, IoT, and AI applications, this paper introduces a novel class of threats: Neuromorphic Mimicry Attacks (NMAs). Because neuromorphic chips operate with probabilistic, non-deterministic neural activity, attackers can tamper with synaptic weights or poison sensory inputs so that malicious activity mimics legitimate neural signals, allowing such attacks to evade conventional intrusion detection systems. The paper provides a theoretical framework and simulations, and proposes countermeasures (e.g., neural-specific anomaly detection and secure learning protocols). It warns that these threats will become increasingly relevant as neuromorphic hardware spreads.
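To make the mimicry idea concrete, here is a toy simulation (not from the paper) of a leaky integrate-and-fire neuron in which a small covert perturbation of synaptic weights leaves aggregate spike statistics nearly unchanged; all constants are hypothetical.

```python
# Toy illustration of the evasion property NMAs exploit: because spiking
# activity is noisy, slightly tampered weights can produce spike statistics
# close to the legitimate baseline. Constants below are invented.
import numpy as np

rng = np.random.default_rng(1)

def lif_spike_count(weights: np.ndarray, steps: int = 2000) -> int:
    """Count output spikes of a leaky integrate-and-fire neuron under random input."""
    v, tau, threshold, spikes = 0.0, 20.0, 1.0, 0
    for _ in range(steps):
        inputs = (rng.random(weights.size) < 0.05).astype(float)  # Poisson-like spikes
        v += -v / tau + weights @ inputs                          # leak + synaptic drive
        if v >= threshold:
            spikes += 1
            v = 0.0                                               # reset after spike
    return spikes

legit = rng.uniform(0.0, 0.2, size=32)
tampered = legit + rng.normal(0.0, 0.005, size=32)  # covert weight perturbation

print("legitimate spike count:", lif_spike_count(legit))
print("tampered spike count:  ", lif_spike_count(tampered))
# Near-identical counts illustrate why rate-based monitors may miss the tampering.
```

This is only a rate-level caricature; the paper's countermeasures target richer neural-specific signatures than the spike counts compared here.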