
SecPro

62 Articles

#174: Hacked Back

Austin Miller
25 Oct 2024
A busy week for the SEC makes for excellent news

Webinar: Introducing a Market-Changing Approach to Mobile App Security
Join Guardsquare to learn more about our new guided configuration approach to mobile application protection. Our latest innovation ensures that all developers can effortlessly launch apps with industry-leading protection in less than a day. This webinar will: walk through Guardsquare's new guided configuration approach; discuss how this new approach empowers mobile app publishers to easily configure security features, receive actionable insights, and monitor protection outcomes without sacrificing app performance or user experience; and cover a case study addressing how customers successfully implemented the technology. Register Now
SPONSORED

#174: Hacked Back
A busy week for the SEC makes for excellent news

Welcome to another _secpro!

It can be hard to know what to believe when it comes to the internet. Not only are the various stories sometimes obviously contradictory, but they might also be written by people who have an interest in presenting contradictory stories to drive up engagement. With that in mind, here are some talking heads the Editor thinks you can rely on (Editor: along with, of course, the Editor...).

Bruce Schneier dispelled exaggerated claims about China breaking modern encryption and highlighted concerns over AI use in whistleblower programs influencing stock markets. He also discussed the indictment of a CEO for security certification fraud and detailed an Israeli operation sabotaging Hezbollah's communication devices. Meanwhile, Cisco reported a denial-of-service vulnerability in its VPN services, and LinkedIn was fined €310 million by the Irish Data Protection Commission for privacy violations. FortiGuard Labs identified a critical vulnerability in FortiManager software, while new ransomware (Qilin.B) with enhanced evasion tactics was documented by Halcyon. Additionally, Brazil arrested a cybercriminal involved in breaches of sensitive U.S. data, and the SEC charged companies for misleading cybersecurity disclosures.

Check out _secpro premium
As always, make sure to check out the templates, podcasts, and other stuff on our Substack and access the very best that we have to offer. You might even learn something!

Cheers!
Austin Miller
Editor-in-Chief

News Bytes

Bruce Schneier - No, The Chinese Have Not Broken Modern Encryption Systems with a Quantum Computer: "The headline is pretty scary: 'China's Quantum Computer Scientists Crack Military-Grade Encryption.' No, it's not true. This debunking saved me the trouble of writing one. It all seems to have come from this news article, which wasn't bad but was taken widely out of proportion. Cryptography is safe, and will be for a long time."

Bruce Schneier - AI and the SEC Whistleblower Program: "Whistleblowing firms can also use the information they uncover to guide market investments by activist short sellers. Since 2006, the investigative reporting site Sharesleuth claims to have tanked dozens of stocks and instigated at least eight SEC cases against companies in pharma, energy, logistics, and other industries, all after its investors shorted the stocks in question. More recently, a new investigative reporting site called Hunterbrook Media and partner hedge fund Hunterbrook Capital have churned out 18 investigative reports in their first five months of operation and disclosed short sales and other actions alongside each. In at least one report, Hunterbrook says they filed an SEC whistleblower tip."

Bruce Schneier - Justice Department Indicts Tech CEO for Falsifying Security Certifications: The Wall Street Journal is reporting that the CEO of a still unnamed company has been indicted for creating a fake auditing company to falsify security certifications in order to win government business.

Bruce Schneier - More Details on Israel Sabotaging Hezbollah Pagers and Walkie-Talkies: "The Washington Post has a long and detailed story about the operation that's well worth reading (alternate version here). The sales pitch came from a marketing official trusted by Hezbollah with links to Apollo. The marketing official, a woman whose identity and nationality officials declined to reveal, was a former Middle East sales representative for the Taiwanese firm who had established her own company and acquired a license to sell a line of pagers that bore the Apollo brand. Sometime in 2023, she offered Hezbollah a deal on one of the products her firm sold: the rugged and reliable AR924."

Cisco - Cisco Adaptive Security Appliance and Firepower Threat Defense Software Remote Access VPN Brute Force Denial of Service Vulnerability: "A vulnerability in the Remote Access VPN (RAVPN) service of Cisco Adaptive Security Appliance (ASA) Software and Cisco Firepower Threat Defense (FTD) Software could allow an unauthenticated, remote attacker to cause a denial of service (DoS) of the RAVPN service... An attacker could exploit this vulnerability by sending a large number of VPN authentication requests to an affected device. A successful exploit could allow the attacker to exhaust resources, resulting in a DoS of the RAVPN service on the affected device. Depending on the impact of the attack, a reload of the device may be required to restore the RAVPN service." (A minimal per-source rate-limiting sketch illustrating the general mitigation idea appears at the end of this issue's roundup.)

(Irish) Data Protection Agency - Irish Data Protection Commission fines LinkedIn Ireland €310 million: The inquiry examined LinkedIn's processing of personal data for the purposes of behavioural analysis and targeted advertising of users who have created LinkedIn profiles (members). The decision, which was made by the Commissioners for Data Protection, Dr Des Hogan and Dale Sunderland, and notified to LinkedIn on 22 October 2024, concerns the lawfulness, fairness and transparency of this processing. The decision includes a reprimand, an order for LinkedIn to bring its processing into compliance, and administrative fines totalling €310 million.

FortiGuard Labs - Missing authentication in fgfmsd: A missing authentication for critical function vulnerability [CWE-306] in the FortiManager fgfmd daemon may allow a remote unauthenticated attacker to execute arbitrary code or commands via specially crafted requests. Reports have shown this vulnerability to be exploited in the wild.

Halcyon - New Qilin.B Ransomware Variant Boasts Enhanced Encryption and Defense Evasion: Researchers at anti-ransomware solutions provider Halcyon have documented a new version of the Qilin ransomware payload, dubbed Qilin.B for tracking. According to the Power Rankings: Ransomware Malicious Quartile report, Qilin (aka Agenda) is a ransomware-as-a-service (RaaS) operation that emerged in July of 2022 and can target both Windows and Linux systems. Qilin operations include data exfiltration for double extortion.

Krebs on Security - Brazil Arrests 'USDoD,' Hacker in FBI InfraGard Breach: "Brazilian authorities reportedly have arrested a 33-year-old man on suspicion of being 'USDoD,' a prolific cybercriminal who rose to infamy in 2022 after infiltrating the FBI's InfraGard program and leaking contact information for 80,000 members. More recently, USDoD was behind a breach at the consumer data broker National Public Data that led to the leak of Social Security numbers and other personal information for a significant portion of the U.S. population."

Krebs on Security - The Global Surveillance Free-for-All in Mobile Ad Data: "Not long ago, the ability to digitally track someone's daily movements just by knowing their home address, employer, or place of worship was considered a dangerous power that should remain only within the purview of nation states. But a new lawsuit in a likely constitutional battle over a New Jersey privacy law shows that anyone can now access this capability, thanks to a proliferation of commercial services that hoover up the digital exhaust emitted by widely-used mobile apps and websites..."

SEC - SEC Charges Four Companies With Misleading Cyber Disclosures: The charges against the four companies result from an investigation involving public companies potentially impacted by the compromise of SolarWinds' Orion software and by other related activity. "As today's enforcement actions reflect, while public companies may become targets of cyberattacks, it is incumbent upon them to not further victimize their shareholders or other members of the investing public by providing misleading disclosures about the cybersecurity incidents they have encountered," said Sanjay Wadhwa, Acting Director of the SEC's Division of Enforcement.

Tenable - CVE-2024-8260: SMB Force-Authentication Vulnerability in OPA Could Lead to Credential Leakage: Tenable Research discovered an SMB force-authentication vulnerability in Open Policy Agent (OPA) that is now fixed in the latest release of OPA. The vulnerability could have allowed an attacker to leak the NTLM credentials of the OPA server's local user account to a remote server, potentially allowing the attacker to relay the authentication or crack the password. The vulnerability affected both the OPA CLI (Community and Enterprise editions) and the OPA Go SDK.

This week's tools

goliate/hidden-tear: A ransomware-like file crypter sample which can be modified for specific purposes. Simples.

ncorbuk/Python-Ransomware: A Python ransomware tutorial with a YouTube video explaining the code and showcasing the ransomware with victim/target roles.

ForbiddenProgrammer/conti-pentester-guide-leak: Leaked pentesting manuals given to Conti ransomware crooks.

codesiddhant/Jasmin-Ransomware: Jasmin Ransomware is an advanced red team tool (WannaCry clone) used for simulating real ransomware attacks. Jasmin helps security researchers to overcome the risk of external attacks.

Upcoming events for _secpros

SecTor (October 23rd-26th): SecTor is renowned for bringing together international experts to discuss underground threats and corporate defenses. This cyber security conference offers a unique opportunity for IT security professionals, managers, and executives to connect and learn from experienced mentors. This year, SecTor introduces the 'Certified Pentester' program, including a full-day practical examination, adding to the event's educational offerings.

LASCON 2024 (October 24-25th): The Lonestar Application Security Conference (LASCON) is an annual event in Austin, TX, associated with OWASP, gathering 400+ web app developers, security engineers, mobile developers, and infosec professionals. Being in Texas, home to numerous Fortune 500 companies, and located in Austin, a startup hub, LASCON attracts leaders, security architects, and developers to share innovative ideas, initiatives, and technology advancements in application security.

SANS HackFest Hollywood 2024 (October 29th): Choose your experience, in-person or Live Online: whether you're planning to dive into the full HackFest experience in Hollywood, or the free, curated content offered Live Online, you'll walk away with new tools, techniques, and connections that will have a lasting impact on your career.

ODSC West 2024 (October 29th): "Since 2015, ODSC has been the essential event for AI and data science practitioners, business leaders, and those reskilling into AI. It offers cutting-edge workshops, hands-on training, strategic insights, and thought leadership. Whether deepening technical skills, transforming a business with AI, or pivoting into an AI-driven career, ODSC provides unparalleled opportunities for learning, networking, and professional growth."
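As promised in the Cisco RAVPN item above, here is a minimal sketch of the general idea behind throttling repeated authentication attempts from a single source. It is a generic token-bucket illustration only, not Cisco's mitigation and not tied to any real ASA/FTD interface; the rate, burst size, and IP address are hypothetical values chosen for the example.

```python
import time
from collections import defaultdict

# Hypothetical illustration: throttle repeated authentication attempts per source IP.
# Generic token-bucket sketch; not Cisco's actual mitigation or API.
RATE = 1.0   # tokens refilled per second, per source
BURST = 5.0  # maximum burst of attempts allowed per source

_buckets = defaultdict(lambda: {"tokens": BURST, "last": time.monotonic()})

def allow_attempt(source_ip: str) -> bool:
    """Return True if this source may attempt authentication, False if throttled."""
    bucket = _buckets[source_ip]
    now = time.monotonic()
    # Refill tokens based on elapsed time, capped at the burst size.
    bucket["tokens"] = min(BURST, bucket["tokens"] + (now - bucket["last"]) * RATE)
    bucket["last"] = now
    if bucket["tokens"] >= 1.0:
        bucket["tokens"] -= 1.0
        return True
    return False

if __name__ == "__main__":
    # A rapid burst from one source: the first few attempts pass, the rest are throttled.
    for i in range(10):
        print(i, allow_attempt("203.0.113.10"))
```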


#171: Going hAIwire

Austin Miller
04 Oct 2024
A week of madness where AI went haywire

Introducing A Market-Changing Approach to Mobile App Protection by Guardsquare
Mobile applications face constant, evolving threats; to address these challenges, Guardsquare is proud to announce the launch of our innovative guided configuration approach to mobile app protection. By combining the highest level of protection with unparalleled ease of use, we empower developers and security professionals to secure their applications against even the most sophisticated threats. Guardsquare is setting a new standard for mobile app protection and we invite you to join us on this journey to experience the peace of mind that comes with knowing your mobile applications are protected by the most advanced and user-friendly product on the market. Learn More

#171: Going hAIwire
A week of madness where AI went haywire

In the lead up to October - Cybersecurity Awareness Month! - we're offering everyone a chance to jump on the _secpro train... For a limited time, get 20% off all subscriptions at the checkout. You can get access to our podcasts, our templates, our security guides, and other _secpro events for a fifth off. And you can cancel anytime. What's there to lose? Thanks and enjoy!
Upgrade for 20% off!

Welcome to another _secpro!

AI developers and users have suffered this week, with multiple reports of difficulties and insecurities coming from the most prominent platforms in the world. If you're the kind of person who has integrated AI into their home- and worklife (as opposed to the Editor, who is currently trying to find an empty cabin in the woods...), there will be plenty worth paying attention to here...

Check out _secpro premium
If you missed it, we sent out the first issue of the new _secpro Premium (_secpro Premium #1: Change is Difficult) as a free edition, as a teaser for those thinking of subscribing and as a treat for everyone else. Don't miss out!

Cheers!
Austin Miller
Editor-in-Chief

Time for some news!

Aqua Nautilus - perfctl: A Stealthy Malware Targeting Millions of Linux Servers: "The name perfctl comes from the cryptominer process that drains the system's resources, causing significant issues for many Linux developers. By combining 'perf' (a Linux performance monitoring tool) with 'ctl' (commonly used to indicate control in command-line tools), the malware authors crafted a name that appears legitimate. This makes it easier for users or administrators to overlook during initial investigations, as it blends in with typical system processes."

Bruce Schneier - Weird Zimbra Vulnerability: Hackers can execute commands on a remote computer by sending malformed emails to a Zimbra mail server. It's critical, but difficult to exploit. "In an email sent Wednesday afternoon, Proofpoint researcher Greg Lesnewich seemed to largely concur that the attacks weren't likely to lead to mass infections that could install ransomware or espionage malware. The researcher provided the following details..." Find the rest on Schneier's website.

Bruce Schneier - AI and the 2024 US Elections: "For years now, AI has undermined the public's ability to trust what it sees, hears, and reads. The Republican National Committee released a provocative ad offering an 'AI-generated look into the country's possible future if Joe Biden is re-elected,' showing apocalyptic, machine-made images of ruined cityscapes and chaos at the border. Fake robocalls purporting to be from Biden urged New Hampshire residents not to vote in the 2024 primary election. This summer, the Department of Justice cracked down on a Russian bot farm that was using AI to impersonate Americans on social media, and OpenAI disrupted an Iranian group using ChatGPT to generate fake social-media comments..." Find the rest on Schneier's website.

Bruce Schneier - California AI Safety Bill Vetoed: "Governor Newsom has vetoed the state's AI safety bill. I have mixed feelings about the bill. There's a lot to like about it, and I want governments to regulate in this space. But, for now, it's all EU."

Bruce Schneier - Hacking ChatGPT by Planting False Memories into Its Data: "This vulnerability hacks a feature that allows ChatGPT to have long-term memory, where it uses information from past conversations to inform future conversations with that same user. A researcher found that he could use that feature to plant 'false memories' into that context window that could subvert the model."

Cloudflare - How Cloudflare auto-mitigated world record 3.8 Tbps DDoS attack: "Since early September, Cloudflare's DDoS protection systems have been combating a month-long campaign of hyper-volumetric L3/4 DDoS attacks. Cloudflare's defenses mitigated over one hundred hyper-volumetric L3/4 DDoS attacks throughout the month, with many exceeding 2 billion packets per second (Bpps) and 3 terabits per second (Tbps). The largest attack peaked at 3.8 Tbps — the largest ever disclosed publicly by any organization. Detection and mitigation was fully autonomous. The graphs below represent two separate attack events that targeted the same Cloudflare customer and were mitigated autonomously."

Interpol - Arrests in international operation targeting cybercriminals in West Africa: "Eight individuals have been arrested as part of an ongoing international crackdown on cybercrime, dealing a major blow to criminal operations in Côte d'Ivoire and Nigeria. The arrests were made as part of INTERPOL's Operation Contender 2.0, an initiative aimed at combating cyber-enabled crimes, primarily in West Africa, through enhanced international intelligence sharing."

Europol - LockBit power cut: four new arrests and financial sanctions against affiliates: "Europol supported a new series of actions against LockBit actors, which involved 12 countries and Eurojust and led to four arrests and seizures of servers critical for LockBit's infrastructure. A suspected developer of LockBit was arrested at the request of the French authorities, while the British authorities arrested two individuals for supporting the activity of a LockBit affiliate. The Spanish officers seized nine servers, part of the ransomware's infrastructure, and arrested an administrator of a bulletproof hosting service used by the ransomware group. In addition, Australia, the United Kingdom and the United States implemented sanctions against an actor who the National Crime Agency had identified as a prolific affiliate of LockBit and strongly linked to Evil Corp. The latter comes after LockBit's claim that the two ransomware groups do not work together. The United Kingdom sanctioned fifteen other Russian citizens for their involvement in Evil Corp's criminal activities, while the United States also sanctioned six citizens and Australia sanctioned two."

Krebs on Security - A Single Cloud Compromise Can Feed an Army of AI Sex Bots: "Organizations that get relieved of credentials to their cloud environments can quickly find themselves part of a disturbing new trend: cybercriminals using stolen cloud credentials to operate and resell sexualized AI-powered chat services. Researchers say these illicit chat bots, which use custom jailbreaks to bypass content filtering, often veer into darker role-playing scenarios, including child sexual exploitation and rape."

Krebs on Security - Crooked Cops, Stolen Laptops & the Ghost of UGNazi: A California man accused of failing to pay taxes on tens of millions of dollars allegedly earned from cybercrime also paid local police officers hundreds of thousands of dollars to help him extort, intimidate and silence rivals and former business partners, the government alleges. KrebsOnSecurity has learned that many of the man's alleged targets were members of UGNazi, a hacker group behind multiple high-profile breaches and cyberattacks back in 2012.

Patchstack - Unauthenticated Stored XSS Vulnerability in LiteSpeed Cache Plugin Affecting 6+ Million Sites: "This plugin suffers from an unauthenticated stored XSS vulnerability. It could allow any unauthenticated user to do anything from stealing sensitive information to, in this case, privilege escalation on the WordPress site by performing a single HTTP request. The described vulnerability was fixed in version 6.5.1 and assigned CVE-2024-47374. The CCSS and UCSS generation functions _ccss() and _load() take the required parameters and HTTP headers to generate and save the data. The queue is generated using the following code lines." (A minimal version-check sketch appears at the end of this issue's roundup.)

Securonix - SHROUDED#SLEEP: A Deep Dive into North Korea's Ongoing Campaign Against Southeast Asia: "The Securonix Threat Research team has uncovered an ongoing campaign, identified as SHROUDED#SLEEP, likely attributed to North Korea's APT37 (also known as Reaper or Group123). This advanced persistent threat group is believed to be based in North Korea and is delivering stealthy malware to targets across Southeast Asian countries. APT37, unlike other APT groups from the region such as Kimsuky, has a long history of targeting countries outside of the expected South Korean targets. This includes a number of recent campaigns against Southeast Asian countries."

This week's tools

goliate/hidden-tear: A ransomware-like file crypter sample which can be modified for specific purposes. Simples.

ncorbuk/Python-Ransomware: A Python ransomware tutorial with a YouTube video explaining the code and showcasing the ransomware with victim/target roles.

ForbiddenProgrammer/conti-pentester-guide-leak: Leaked pentesting manuals given to Conti ransomware crooks.

codesiddhant/Jasmin-Ransomware: Jasmin Ransomware is an advanced red team tool (WannaCry clone) used for simulating real ransomware attacks. Jasmin helps security researchers to overcome the risk of external attacks.

Upcoming events for _secpros

Innovate Cybersecurity Summit (October 6-8th): Powered by the collective knowledge of cybersecurity executives, practitioners, and cutting-edge solution providers, Innovate is the premier resource for CISO education & collaboration.

PSC Defense Conference (October 8th): "The PSC Defense Conference is where you will hear senior executives across the Department of Defense and industry discuss current initiatives aimed at accelerating innovation and delivering capabilities to the Future Force."

Cybersecurity Expo 2024 (October 8-9th): "Please join us for the annual United States Department of Agriculture (USDA) Cybersecurity Expo on October 8th and October 9th (10:30AM-4:00PM EDT). This virtual event engages and educates cybersecurity professionals and enthusiasts with the goal of raising awareness about cybersecurity and increasing resiliency in the event of a cyber incident."

Red Hat Summit: Connect 2024 (October 15th, 17th, & 22nd): Red Hat® Summit: Connect is coming to cities across Asia Pacific. Join us as we explore the future of AI, hybrid cloud, open source technology, and IT. With plenty of opportunities to engage during sessions, demos, and networking, this year's in-person event will give you access to Red Hat experts and industry leaders - all at no cost.

BSidesNYC Conference (October 19th): BSidesNYC is an information security conference coordinated by security professionals within the tri-state area as part of the larger BSides framework. The conference prides itself on building an environment focused on technical content covering various security topics - from offensive security to digital forensics and incident response.

SecTor (October 23rd-26th): SecTor is renowned for bringing together international experts to discuss underground threats and corporate defenses. This cyber security conference offers a unique opportunity for IT security professionals, managers, and executives to connect and learn from experienced mentors. This year, SecTor introduces the 'Certified Pentester' program, including a full-day practical examination, adding to the event's educational offerings.

LASCON 2024 (October 24-25th): The Lonestar Application Security Conference (LASCON) is an annual event in Austin, TX, associated with OWASP, gathering 400+ web app developers, security engineers, mobile developers, and infosec professionals. Being in Texas, home to numerous Fortune 500 companies, and located in Austin, a startup hub, LASCON attracts leaders, security architects, and developers to share innovative ideas, initiatives, and technology advancements in application security.
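As noted in the Patchstack item above, the LiteSpeed Cache XSS issue was fixed in version 6.5.1, so the quickest defensive check is confirming the installed plugin version. The sketch below is a hedged illustration only: it assumes the WordPress site exposes the plugin's standard readme.txt at the conventional wp-content/plugins/litespeed-cache/ path with a "Stable tag" line (hardened sites often block this), and the example.com site is hypothetical. Only run it against sites you are authorised to check.

```python
import re
import urllib.request

FIXED_VERSION = (6, 5, 1)  # Patchstack reports the XSS issue fixed in 6.5.1

def plugin_version(site):
    """Best-effort read of the LiteSpeed Cache version from its standard readme.txt.

    Assumes the conventional plugin path is reachable; many hardened sites block it.
    """
    url = site.rstrip("/") + "/wp-content/plugins/litespeed-cache/readme.txt"
    try:
        with urllib.request.urlopen(url, timeout=10) as resp:
            text = resp.read().decode("utf-8", errors="replace")
    except OSError:
        return None
    match = re.search(r"Stable tag:\s*([\d.]+)", text)
    if not match:
        return None
    return tuple(int(part) for part in match.group(1).split("."))

if __name__ == "__main__":
    site = "https://example.com"  # hypothetical site you are authorised to check
    version = plugin_version(site)
    if version is None:
        print("Could not determine the plugin version.")
    elif version < FIXED_VERSION:
        print("Installed version predates 6.5.1 - update LiteSpeed Cache:", version)
    else:
        print("Plugin is at or above the fixed version:", version)
```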


#199: An ATT&CK Review and into the Blogosphere

Austin Miller
23 May 2025
A look at the week gone by

Building GenAI infra sounds cool—until it's 3am and your LLM is down
This free guide helps you avoid the pitfalls. Learn the hidden costs, real-world tradeoffs, and decision framework to confidently answer: build or buy? Includes battle-tested tips from Checkr, Convirza & more. Grab it now!

#199: An ATT&CK Review and into the Blogosphere
A look at the week

Welcome to another _secpro!

For all of you who attended the RSA Conference, we hope you had a great time getting up to speed with the goings-on in this industry. Got something to share? Reply to this email and tell us about your thoughts. This week's issue contains:
- Apple's AirPlay Vulnerabilities Expose Devices to Hijacking Risks
- U.S. Charges 16 Russians Linked to DanaBot Malware Operation
- Budget Cuts to U.S. Cybersecurity Agency Raise Concerns Amid Rising Threats
- Anthropic Implements Stricter Safeguards for New AI Model Amid Biosecurity Concerns
- Russian Hackers Target Western Firms Supporting Ukraine, U.S. Intelligence Reports
- MITRE ATT&CK - Explained
- Understanding the use cases of the MITRE ATT&CK Framework
- Integrating MITRE ATT&CK with SIEM Tools
- Demystifying the MITRE ATT&CK Framework

Check out _secpro premium

Cheers!
Austin Miller
Editor-in-Chief

Reflecting on MITRE ATT&CK
Making our way through the MITRE ATT&CK's Top Ten most exploited techniques over the last 9 weeks has been fun. We're almost ready to dive into the most exploited T-number, but we thought it'd be good to stop and smell the adversarial roses for a minute first - just make sure you've been paying attention. These T-numbers are on the test, so make sure to go back and check out #10 through #2 in the list below:
- #2: T1059
- #3: T1333
- #4: T1071
- #5: T1562
- #6: T1486
- #7: T1082
- #8: T1547
- #9: T1506
- #10: T1005
We have five copies of Glen Singh's Kali Linux book to give away. Leave a comment in order to win a virtual copy!

RSA Conference 2025 – Navigating the New Cyber Frontier
A reflection on this year's events. Read the rest here!

News Bytes

Apple's AirPlay Vulnerabilities Expose Devices to Hijacking Risks: Researchers at cybersecurity firm Oligo have identified 23 significant security flaws in Apple's AirPlay system, collectively dubbed "AirBorne." These vulnerabilities could allow hackers to hijack devices connected to the same Wi-Fi network, affecting both Apple's native AirPlay protocol and third-party implementations. The discovery underscores the need for prompt security updates to protect users relying on AirPlay-compatible gadgets. Oligo's analysis reveals that the vulnerabilities stem from issues in the AirPlay protocol's implementation, allowing for zero-click remote code execution (RCE) attacks. The flaws are particularly concerning due to their wormable nature, enabling potential rapid spread across devices.

U.S. Charges 16 Russians Linked to DanaBot Malware Operation: The U.S. Department of Justice has charged 16 Russian nationals associated with the DanaBot malware operation, a sophisticated tool used globally for cybercrime, espionage, and wartime attacks. DanaBot infected over 300,000 systems and was sold to other hackers via an affiliate model. Notably, it was used in state-linked espionage, including attacks on Ukraine's defense institutions during the Russian invasion. DanaBot is a modular banking Trojan that has evolved to include functionalities such as credential theft, remote access, and data exfiltration. Its architecture allows for dynamic updates, making it adaptable to various malicious activities. Additional commentary at WeLiveSecurity.

Budget Cuts to U.S. Cybersecurity Agency Raise Concerns Amid Rising Threats: Security experts warn that proposed 17% budget cuts to the Cybersecurity and Infrastructure Security Agency (CISA) could leave the U.S. vulnerable to retaliatory cyberattacks, especially as Chinese cyberattacks surge. The cuts would lead to the dismissal of 130 employees and cancellation of key contracts, compromising national cyberdefense at a time of heightened threat. Analysts express concern that the reduction in CISA's budget and workforce will hinder the agency's ability to coordinate threat intelligence sharing and respond effectively to cyber incidents, particularly those targeting critical infrastructure. See commentary by Dark Reading.

Anthropic Implements Stricter Safeguards for New AI Model Amid Biosecurity Concerns: Anthropic has released Claude Opus 4, its most advanced AI model, under heightened safety measures due to concerns it could assist in bioweapons development. Internal testing indicated that the model significantly outperformed earlier versions in guiding potentially harmful activities. As a result, Anthropic activated its Responsible Scaling Policy, applying stringent safeguards including enhanced cybersecurity and anti-jailbreak measures. The Responsible Scaling Policy includes AI Safety Level 3 (ASL-3) measures, such as prompt classifiers to detect harmful queries, a bounty program for vulnerability detection, and enhanced monitoring to prevent misuse of the AI model. See Anthropic News.

Russian Hackers Target Western Firms Supporting Ukraine, U.S. Intelligence Reports: Hackers affiliated with Russian military intelligence have been targeting Western technology, logistics, and transportation firms involved in aiding Ukraine. The cyber campaign sought to obtain intelligence on military and humanitarian aid shipments, using tactics like spearphishing and exploiting vulnerabilities in small office and home networks. Over 10,000 internet-connected cameras near Ukrainian borders and other key transit points were targeted. The attackers, linked to the group "Fancy Bear," employed advanced persistent threat (APT) techniques, including the exploitation of unsecured IoT devices and spearphishing campaigns, to infiltrate networks and gather intelligence on aid logistics. See the NSA report (PDF).

This week's blogs

MITRE ATT&CK - Explained: This comprehensive guide breaks down the MITRE ATT&CK framework, detailing its components such as tactics, techniques, and procedures. It also compares ATT&CK with the Cyber Kill Chain model, highlighting how ATT&CK provides a more flexible approach to understanding adversary behaviors across different platforms.

Understanding the use cases of the MITRE ATT&CK Framework: Tailored for newcomers, this blog offers a step-by-step approach to utilizing the MITRE ATT&CK framework. It emphasizes the benefits of integrating ATT&CK into cybersecurity practices, such as improved threat detection, incident management, and communication among security professionals.

Integrating MITRE ATT&CK with SIEM Tools: This article explores how to integrate the MITRE ATT&CK framework with Security Information and Event Management (SIEM) systems, specifically Microsoft Sentinel. It discusses features like the MITRE ATT&CK Blade, rule creation, and tagging, providing insights into enhancing detection and response capabilities. (A small coverage-mapping sketch appears at the end of this issue's roundup.)

Demystifying the MITRE ATT&CK Framework: This blog offers a clear explanation of the MITRE ATT&CK framework, discussing its role in understanding cyber-attack patterns and applying appropriate mitigation strategies. It emphasizes the framework's value in improving an organization's cybersecurity posture and adapting to evolving threats.

Upcoming events for _secpros this year

Here are the five conferences we're looking forward to the most this year (in no particular order...) and how you can get involved to boost your posture!

DSEI (9th-12th September): DSEI stands out as a global platform that bridges defence, security, and cybersecurity. With its broad focus on cutting-edge technologies, this event is critical for those involved in national defence, law enforcement, and private security. Cybersecurity is a prominent theme, with sessions addressing both offensive and defensive cyber strategies.

Defcon (7th-10th August): Defcon is a legendary event in the hacker and cybersecurity communities. Known for its hands-on approach, Defcon offers interactive workshops, capture-the-flag contests, and discussions on emerging threats. The conference is ideal for those looking to immerse themselves in technical aspects of cybersecurity.

Black Hat (2nd-7th August): Black Hat USA is synonymous with advanced security training and research. This premier event features technical briefings, hands-on workshops, and sessions led by global security experts. Attendees can explore the latest trends in penetration testing, malware analysis, and defensive techniques, making it a must-attend for cybersecurity professionals.
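As a companion to the SIEM-integration blog mentioned above, here is a small, hedged sketch of the tagging idea: map each detection rule to the ATT&CK technique IDs it is meant to cover, then compare that against a watch list such as the top-ten T-numbers recapped in this issue. The rule names below are invented for illustration; the technique IDs are taken verbatim from the list above.

```python
# Hypothetical detection rules tagged with the ATT&CK technique IDs they cover.
# Rule names are invented for illustration; the watch list mirrors this issue's recap.
DETECTION_RULES = {
    "suspicious-powershell-commandline": {"T1059"},
    "mass-file-rename-burst": {"T1486"},
    "new-run-key-persistence": {"T1547"},
    "staging-directory-collection": {"T1005"},
}

WATCH_LIST = {
    "T1059", "T1333", "T1071", "T1562", "T1486",
    "T1082", "T1547", "T1506", "T1005",
}

def coverage_report(rules, watch_list):
    """Print which watch-list techniques have at least one rule and which are gaps."""
    covered = set().union(*rules.values())
    for technique in sorted(watch_list):
        status = "covered" if technique in covered else "GAP"
        print(f"{technique}: {status}")
    print(f"{len(covered & watch_list)}/{len(watch_list)} watch-list techniques covered")

if __name__ == "__main__":
    coverage_report(DETECTION_RULES, WATCH_LIST)
```

The same mapping can be kept alongside rule definitions in whatever SIEM you use; the point is simply that tagging rules with technique IDs makes coverage gaps visible at a glance.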


#200: The Bicentennial Giveaway!

Austin Miller
30 May 2025
A look at 200 issues

Train your own R1 reasoning model with Unsloth
You can now run and fine-tune Qwen3 and Meta's new Llama 4 models with 128K context length & superior accuracy. Unsloth is an open-source project that allows easy fine-tuning of LLMs and also uploads accurately quantized models to Hugging Face. Check it out on GitHub! Unsloth's new Dynamic 2.0 quants outperform other quantization methods on 5-shot MMLU & KL Divergence benchmarks, meaning you can now run and fine-tune quantized LLMs while preserving as much precision as possible. Tutorial for running Qwen3 here. Tutorial for running Llama 4 here. Take a look!

#200: The Bicentennial Giveaway!
A look at the past 200 issues

Welcome to another _secpro!

200 issues! Where does the time go? We're here providing the same usual content that we always do, but ask our readers to also check out the _secpro archive on Substack for a walk down memory lane or an exciting dive into what you missed before you subscribed. This week's issue contains:
- AI Chatbots Enhance Phishing Email Sophistication
- U.S. Sanctions Funnull for $200M Romance Baiting Scams Tied to Crypto Fraud
- ConnectWise Breached in Cyberattack Linked to Nation-State Hackers
- PumaBot Botnet Targets Linux IoT Devices to Steal SSH Credentials and Mine Crypto
- Earth Lamia Develops Custom Arsenal to Target Multiple Industries
- China-Linked Hackers Exploit Google Calendar in Cyberattacks on Governments
- PentestGPT: An LLM-empowered Automatic Penetration Testing Tool
- Enhancing Cybersecurity Resilience Through Advanced Red-Teaming Exercises and MITRE ATT&CK Framework Integration
- Offense For Defense: The Art and Science of Cybersecurity Red Teaming

Check out _secpro premium

Cheers!
Austin Miller
Editor-in-Chief

Reflecting on MITRE ATT&CK
Making our way through the MITRE ATT&CK's Top Ten most exploited techniques over the last 10 weeks has been fun. We're almost ready to dive into the most exploited T-number, but we thought it'd be good to stop and smell the adversarial roses for a minute first - just make sure you've been paying attention. These T-numbers are on the test, so make sure to go back and check out #10 through #2 in the list below:
- #2: T1059
- #3: T1333
- #4: T1071
- #5: T1562
- #6: T1486
- #7: T1082
- #8: T1547
- #9: T1506
- #10: T1005
We have five copies of Glen Singh's Kali Linux book to give away. Leave a comment in order to win a virtual copy! And now, here is our number one...
#1: T1055
Check it out here!

News Bytes

AI Chatbots Enhance Phishing Email Sophistication: AI chatbots like ChatGPT are making scam emails harder to detect due to their flawless grammar and human-like tone, enabling more sophisticated phishing schemes. This evolution demands new detection strategies centering on user vigilance and corporate preemptive measures. See also: Zscaler ThreatLabz 2025 Phishing Report.

U.S. Sanctions Funnull for $200M Romance Baiting Scams Tied to Crypto Fraud: The U.S. Department of Treasury's Office of Foreign Assets Control (OFAC) has levied sanctions against a Philippines-based company named Funnull Technology Inc. and its administrator Liu Lizhi for providing infrastructure to conduct romance baiting scams that led to massive cryptocurrency losses. See also: Understanding Romance Scams and Cryptocurrency Fraud.

ConnectWise Breached in Cyberattack Linked to Nation-State Hackers: ConnectWise, the developer of remote access and support software ScreenConnect, has disclosed that it was the victim of a cyber attack that it said was likely perpetrated by a nation-state threat actor.

PumaBot Botnet Targets Linux IoT Devices to Steal SSH Credentials and Mine Crypto: Embedded Linux-based Internet of Things (IoT) devices have become the target of a new botnet dubbed PumaBot. Written in Go, the botnet is designed to conduct brute-force attacks against SSH instances to expand in size and scale and deliver additional malware to the infected hosts. (A minimal failed-login counting sketch appears at the end of this issue's roundup.)

Earth Lamia Develops Custom Arsenal to Target Multiple Industries: A Chinese threat actor group known as Earth Lamia has been actively exploiting known vulnerabilities in public-facing web applications to compromise organizations across sectors such as finance, government, IT, logistics, retail, and education.

China-Linked Hackers Exploit Google Calendar in Cyberattacks on Governments: China-linked hackers are exploiting Google Calendar in cyberattacks on governments, using the platform to deliver malicious links and coordinate attacks, highlighting the need for increased vigilance in monitoring cloud-based services. See also: Securing Cloud-Based Collaboration Tools.

This week's academia

PentestGPT: An LLM-empowered Automatic Penetration Testing Tool: This paper introduces PentestGPT, an automated penetration testing tool powered by Large Language Models (LLMs). The study evaluates the performance of LLMs on real-world penetration testing tasks and presents a robust benchmark created from test machines. Findings reveal that while LLMs demonstrate proficiency in specific sub-tasks, they encounter difficulties maintaining an integrated understanding of the overall testing scenario. PentestGPT addresses these challenges with three self-interacting modules, each handling individual sub-tasks to mitigate context loss.

Enhancing Cybersecurity Resilience Through Advanced Red-Teaming Exercises and MITRE ATT&CK Framework Integration: This study presents a transformative approach to red-teaming by integrating the MITRE ATT&CK framework. By leveraging real-world attacker tactics and behaviors, the integration creates realistic scenarios that rigorously test defenses and uncover previously unidentified vulnerabilities. The comprehensive evaluation demonstrates enhanced realism and effectiveness in red-teaming, leading to improved vulnerability identification and actionable insights for proactive remediation.

Offense For Defense: The Art and Science of Cybersecurity Red Teaming: This article delves into the methodologies, tools, techniques, and strategies employed in red teaming, emphasizing the planning practices that underpin successful engagements. It highlights the strategic application of cyber deception techniques, such as honeypots and decoy systems, to enhance an organization's threat identification and response capabilities. The piece underscores the importance of continuous improvement and adaptation of strategies in response to evolving threats and technologies.

Upcoming events for _secpros this year

Here are the five conferences we're looking forward to the most this year (in no particular order...) and how you can get involved to boost your posture!

DSEI (9th-12th September): DSEI stands out as a global platform that bridges defence, security, and cybersecurity. With its broad focus on cutting-edge technologies, this event is critical for those involved in national defence, law enforcement, and private security. Cybersecurity is a prominent theme, with sessions addressing both offensive and defensive cyber strategies.

Defcon (7th-10th August): Defcon is a legendary event in the hacker and cybersecurity communities. Known for its hands-on approach, Defcon offers interactive workshops, capture-the-flag contests, and discussions on emerging threats. The conference is ideal for those looking to immerse themselves in technical aspects of cybersecurity.

Black Hat (2nd-7th August): Black Hat USA is synonymous with advanced security training and research. This premier event features technical briefings, hands-on workshops, and sessions led by global security experts. Attendees can explore the latest trends in penetration testing, malware analysis, and defensive techniques, making it a must-attend for cybersecurity professionals.
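Relating to the PumaBot item above: the botnet grows by brute-forcing SSH, so a basic defensive signal is a spike in failed logins per source address. The sketch below is a minimal, hedged illustration that parses OpenSSH-style "Failed password" lines from a supplied sample list; the log lines, IP addresses, and threshold are hypothetical. A real deployment would read journald or /var/log/auth.log and pair this with key-only authentication and rate limiting.

```python
import re
from collections import Counter

# OpenSSH-style sample lines; real logs would come from /var/log/auth.log or journald.
SAMPLE_LINES = [
    "Nov 30 10:01:02 host sshd[912]: Failed password for root from 198.51.100.7 port 40122 ssh2",
    "Nov 30 10:01:04 host sshd[913]: Failed password for invalid user admin from 198.51.100.7 port 40130 ssh2",
    "Nov 30 10:01:09 host sshd[915]: Failed password for invalid user admin from 198.51.100.7 port 40141 ssh2",
    "Nov 30 10:02:15 host sshd[921]: Accepted publickey for deploy from 203.0.113.5 port 52310 ssh2",
]

FAILED_RE = re.compile(r"Failed password for (?:invalid user )?\S+ from (\S+) port")
THRESHOLD = 3  # flag sources with this many failures or more (arbitrary for the sketch)

def brute_force_suspects(lines, threshold=THRESHOLD):
    """Count failed-password events per source IP and return those above the threshold."""
    counts = Counter()
    for line in lines:
        match = FAILED_RE.search(line)
        if match:
            counts[match.group(1)] += 1
    return {ip: n for ip, n in counts.items() if n >= threshold}

if __name__ == "__main__":
    for ip, n in brute_force_suspects(SAMPLE_LINES).items():
        print(f"{ip}: {n} failed SSH logins - investigate or block")
```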


#227: Down Memory Lane

Austin Miller
05 Dec 2025
A look back at 2025 to understand where we are

Most teams dread FedRAMP—until they switch to Paramify. We make the process faster, clearer, and far more efficient by pairing smart automation with experts who help you exactly where you need it most. Come see how fun compliance can actually be, and grab a free gift when you join us for a demo. Schedule your demo here!

#227: Wandering Down Memory Lane
A look back at 2025 to understand where we are today

Welcome to another _secpro!

We're done with social engineering for now, but if you'd like to find out how the adversary moves in the age of AI then make sure to check out the articles linked in this introduction: here, here, here, here, and here.

Check out _secpro premium
If you want more, you know what you need to do: sign up to the premium and get access to everything we have on offer. Click the link above to visit our Substack and sign up there!

Cheers!
Austin Miller
Editor-in-Chief

AI Agents Frontier - December 13, Saturday
Join the pioneers behind AG2 and autonomous research agents for a 5-hour deep dive into controlled orchestration, reproducibility, and safe deployment of scalable multi-agent architectures. Discover how to build deterministic, explainable, verifiable agents that eliminate hallucinations and support secure, auditable decision workflows. Limited early-bird seats remaining. Book Your Pass Now!

This week's article
A quick look back at 2025
A quick retrospective to take stock of a year of huge upheavals and change. Jump in to see what we've identified as "the big themes" of 2025 and leave your comments on Substack! Check it out today

News Bytes

Chinese-linked hackers deploy "BRICKSTORM" for long-term access: The Cybersecurity and Infrastructure Security Agency (CISA) has issued an alert describing a sophisticated backdoor called "BRICKSTORM," used by state-sponsored actors from the People's Republic of China to maintain stealthy, persistent access on compromised VMware vSphere and Windows systems. The implant — written in Golang — grants attackers full interactive shell access, enabling file upload/download, manipulation, and long-term compromise.

New Android RAT "Albiriox" targets 400+ financial apps — live remote control and banking fraud: A recently discovered Android malware dubbed Albiriox operates as a remote-access Trojan (RAT) and banking trojan, giving attackers control over infected devices. Once installed (often via fake landing pages or spoofed app stores), Albiriox can remotely control phone screens, intercept credentials, and execute on-device banking or crypto transactions, effectively draining accounts while under the victim's own sessions.

Seven-year browser-extension campaign from "ShadyPanda" infected 4.3M users: The group known as ShadyPanda spent years publishing seemingly legitimate extensions to browsers like Chrome and Edge — accumulating user trust — before silently updating them with malicious code. The campaign reportedly infected around 4.3 million users. The case underscores long-term supply-chain-style extension abuse and raises alarm about post-installation update security.

Threat actors abusing calendar subscriptions to deliver phishing and malware lures: A new trend uncovered by threat intelligence shows attackers exploiting subscription-style calendar invites to deliver phishing links. Once subscribed, victims see malicious events or links — a stealthy method that bypasses traditional email phishing filters and broadens the attack surface beyond email.

Critical vulnerability in React/Next.js frameworks — remote code execution via deserialization bug (Akamai): A newly disclosed flaw, CVE-2025-55182, affects multiple React-based frameworks' server-function implementations. The vulnerability enables remote code execution when processing incoming "Flight" requests, posing a serious risk to web applications built with React / Next.js. Developers are urged to patch immediately.

"Telemetry Complexity Attacks", a new class of bypass techniques against malware analysis and EDR platforms: A recent research paper demonstrated how adversaries can exploit weaknesses in telemetry collection pipelines used by malware analysis and EDR systems. By generating deeply nested and oversized telemetry data, attackers can trigger serializer or database failures — effectively causing denial-of-analysis (DoA) and hiding malicious behavior from detection. The research flagged real-world systems for failure under this technique. (A minimal depth-limiting sketch appears at the end of this issue's roundup.)

The emergence of "Benzona" ransomware on underground forums: According to the latest intelligence from CYFIRMA, a new ransomware strain called Benzona was spotted being offered on dark-web forums, signaling the ongoing churn and availability of malware-as-a-service (MaaS) tools for criminals.

Research claim: cybercrime globally is dominated by middle-aged offenders, not typical "teen hackers": A study aggregating data from over 400 law-enforcement bodies suggests that most cybercriminals fall into a middle-age demographic — challenging popular stereotypes of cybercrime being driven by young hackers. The findings may reshape how law enforcement and policy target cybercrime demographics.

Into the blogosphere...

Shai‑Hulud 2.0: How Cortex Detects and Blocks the Resurgent npm Worm: This post details a major supply-chain attack dubbed "Shai-Hulud 2.0," where a malicious worm compromised thousands of npm packages. It explains how the malware spreads, steals credentials, establishes persistent backdoors, and compromises developer environments — and outlines how the provider's security tools (Cortex Cloud, XDR, Prisma Cloud) can detect and block such attacks.

AI & Security: Revolutionizing Cybersecurity in the Digital Age: This article explores how artificial intelligence (AI) is transforming cybersecurity — shifting defences from reactive to proactive. It examines use-cases where AI helps detect and mitigate threats, analyzes the challenges of integrating AI into security strategies, and highlights how organizations can leverage modern AI/ML to improve their security posture.

When Artificial Intelligence Becomes the Battlefield: This post dives into the darker side of AI, describing how attackers are weaponizing AI for ransomware, phishing, browser-based exploits, AI-native malware, and "vibe-hacking" (emotionally targeted phishing/extortion). It outlines real-world incidents and warns of systemic weaknesses in AI governance, urging more robust controls and oversight for AI deployments.

Multi‑Dimensional Threat Intelligence Analysis: Looking for AI Adversaries: This analysis recounts how a security team monitored 427 blocked IP addresses over a short period to evaluate whether emerging AI-powered adversarial techniques were in use. The conclusion: no AI-adaptive threats detected — yet. But the report highlights infrastructure evolution (bulletproof hosting, "brand-weaponization") and warns that adversaries may shift once detection evasion becomes easier. It offers a practical view on real-world threat-intelligence operations.

Turning Kubernetes Last Access to Kubernetes Least Access Using KIEMPossible: This recent post explains how identity and permissions inside Kubernetes environments often become sprawling, giving threat actors an excessive attack surface. It shows how the tool/approach "KIEMPossible" can help organisations audit, trace, and reduce permissions to enforce least privilege, significantly reducing risk for cloud workloads.

This week's academia

Intrusion detection using TCP/IP single packet header binary image for IoT networks (Mohamed El-Sherif, Ahmed Khattab & Magdy El-Soudani): This paper proposes a novel intrusion detection approach for IoT networks by converting single raw TCP/IP packet headers into binary (black-and-white) images. Then, using a lightweight Convolutional Neural Network (CNN), the system classifies traffic as benign or malicious. On benchmark IoT datasets (Edge-IIoTset and MQTTset), the method achieved perfect or near-perfect detection rates (100% binary accuracy, ~97–100% multiclass accuracy), all with minimal computational resources. The approach avoids heavy feature engineering or payload inspection, making it suitable for resource-constrained IoT devices and real-time deployment.

Adversarial Defense in Cybersecurity: A Systematic Review of GANs for Threat Detection and Mitigation (Tharcisse Ndayipfukamiye, Jianguo Ding, Doreen Sebastian Sarwatt, Adamu Gaston Philipo & Huansheng Ning): This systematic review explores how Generative Adversarial Networks (GANs) are being used not just by attackers but also defensively for cybersecurity tasks. The paper consolidates 185 peer-reviewed studies, developing a taxonomy across defensive functions, GAN architectures, threat models, and application domains (e.g., network intrusion detection, IoT, malware analysis). The authors highlight meaningful gains (e.g., better detection accuracy and robustness) but also underscore persistent challenges: instability in GAN training, lack of standard benchmarks, high computational cost, and poor explainability. They propose directions for future research, including hybrid models, transparent benchmarks, and targeting emerging threats such as LLM-driven attacks.

Red Teaming with Artificial Intelligence-Driven Cyberattacks: A Scoping Review (Mays Al-Azzawi, Dung Doan, Tuomo Sipola, Jari Hautamäki & Tero Kokkonen): This review maps how AI is transforming offensive cybersecurity, specifically red-teaming and attack simulations. Drawing on a broad literature base, the paper identifies typical AI-driven methods used by attackers (e.g., automated penetration testing, credential harvesting, social engineering via AI) and common targets (sensitive databases, cloud services, social media, etc.). The review underscores the rising threat from AI-enabled attacks that scale, adapt, and can bypass traditional defenses, serving as a warning and a call for defence strategies that account for AI-driven adversaries.

Adaptive Cybersecurity: Dynamically Retrainable Firewalls for Real-Time Network Protection (Sina Ahmadi): This paper argues that traditional static firewall rules are increasingly inadequate in the face of rapidly evolving threats. It proposes "dynamically retrainable firewalls": ML-driven firewall systems that continuously retrain on incoming network data, detect anomalous activity in real time, and adapt to new threat patterns. The work explores design architectures (micro-services, distributed systems), data sources for retraining, latency and performance trade-offs, and ways to integrate with modern paradigms like Zero Trust. It also discusses future challenges, including AI advances and quantum computing. The study suggests this adaptive firewall approach may be a key pillar for future network security.

Neuromorphic Mimicry Attacks Exploiting Brain-Inspired Computing for Covert Cyber Intrusions (Hemanth Ravipati): As neuromorphic computing (brain-inspired hardware) becomes more common, especially in edge devices, IoT, and AI applications, this paper demonstrates for the first time a novel class of threats: Neuromorphic Mimicry Attacks (NMAs). Because neuromorphic chips operate with probabilistic and non-deterministic neural activity, attackers can tamper with synaptic weights or poison sensory inputs to mimic legitimate neural signals. Such attacks can evade conventional intrusion detection systems. The paper provides a theoretical framework, simulations, and proposes countermeasures (e.g., neural-specific anomaly detection, secure learning protocols). The study warns that as neuromorphic hardware spreads, these threats will become increasingly relevant.
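Returning to the "Telemetry Complexity Attacks" item in this issue's News Bytes: the failure mode is a pipeline that accepts arbitrarily nested or oversized telemetry and falls over while parsing or storing it. A simple defensive habit is to bound document size and nesting depth before parsing. The sketch below is a generic illustration, not taken from the paper: it scans raw JSON for bracket nesting depth and rejects documents that exceed a cap, avoiding the RecursionError that pathologically nested input can trigger in a naive json.loads call. The caps are arbitrary example values.

```python
import json

MAX_DEPTH = 64          # arbitrary cap for the sketch
MAX_BYTES = 1_000_000   # reject oversized documents outright

def max_nesting_depth(raw: str) -> int:
    """Return the maximum bracket nesting depth, ignoring brackets inside strings."""
    depth = max_depth = 0
    in_string = False
    escaped = False
    for ch in raw:
        if in_string:
            if escaped:
                escaped = False
            elif ch == "\\":
                escaped = True
            elif ch == '"':
                in_string = False
        elif ch == '"':
            in_string = True
        elif ch in "[{":
            depth += 1
            max_depth = max(max_depth, depth)
        elif ch in "]}":
            depth -= 1
    return max_depth

def safe_load(raw: str):
    """Parse telemetry JSON only after bounding its size and nesting depth."""
    if len(raw) > MAX_BYTES:
        raise ValueError("telemetry document too large")
    if max_nesting_depth(raw) > MAX_DEPTH:
        raise ValueError("telemetry document nested too deeply")
    return json.loads(raw)

if __name__ == "__main__":
    hostile = "[" * 100_000 + "]" * 100_000   # pathologically nested input
    try:
        safe_load(hostile)
    except ValueError as err:
        print("rejected:", err)
    print(safe_load('{"event": "process_start", "pid": 4242}'))
```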


#226: Ransomware has had an upgrade...

Austin Miller
28 Nov 2025
Exploring the new horizons for adversarial activity

Kiteworks' 2025 Annual Report on Data Security & Compliance Risk
The global threat landscape is evolving faster than organizations can secure it. In 2025, businesses are either underestimating their readiness—or admitting outright that they're unprepared. As AI adoption accelerates across industries, the risk of sensitive data exposure is amplified by poor visibility, weak governance, and fragmented third-party oversight. The report reveals a stark reality:
• Supply chain scale corresponds to slower breach discovery. Organizations managing 5,000+ supply chain partners take 3+ months—often longer—to detect a breach. Prolonged detection windows directly correlate with higher litigation costs and legal exposure, leaving companies blindsided by compounding financial and regulatory damage.
• AI oversight is critically lacking. Only 17% of surveyed companies say they have technical AI governance frameworks in place, and just 17% are confident in AI oversight across data security controls. In a world where AI workflows increasingly interact with sensitive data, 83% lack the guardrails needed to govern it.
• Visible compliance hides costly inefficiencies. For every $1 invested in compliance activities organizations can see, an additional $2.33 is burned in hidden costs—including stalled innovation, missed opportunities, audit fatigue, and delayed security modernization. That's billions lost annually to compliance drag, not improved resilience.
• Breach frequency is outpacing defense budgets. Companies working with 5,000+ supply chain partners reported 10+ breaches per year and are spending an average of $3 million+ annually in litigation costs alone, not including breach response, fines, or reputational damage.
Organizations that achieve unified visibility—across third-party ecosystems, AI usage, and compliance risk posture—will outperform, out-innovate, and out-defend the rest. Kiteworks enables this advantage through secure content collaboration, AI-aware governance, and transparency across extended partner networks. By eliminating blind spots and hidden compliance waste, businesses gain proactive breach detection, stronger regulatory outcomes, and the freedom to innovate without compromising sensitive data. Learn more today

#226: Ransomware has had an upgrade...
Exploring the new horizons for adversarial activity

Welcome to another _secpro!

We're done with social engineering for now, but if you'd like to find out how the adversary moves in the age of AI then make sure to check out the articles linked in this introduction: here, here, here, here, and here.

Check out _secpro premium
If you want more, you know what you need to do: sign up to the premium and get access to everything we have on offer. Click the link above to visit our Substack and sign up there!

Cheers!
Austin Miller
Editor-in-Chief

HUMAN's Guide to Adopting Agentic Commerce
What happens when machines become your buyers? HUMAN's Guide to Adopting Agentic Commerce in 2025 explores how merchants can build readiness for this new agent-driven economy. At its center is AgenticTrust, HUMAN's trust-layer technology that classifies and governs AI agents in real time. We're inviting cybersecurity and AI influencers to engage their audiences with this critical question: how do we build trust when the customer is no longer human? The world is entering the era of Agentic Commerce, a new era where autonomous AI agents act as buyers, browsers, and decision-makers across digital platforms. HUMAN's Agentic Commerce Guide explores how this shift is transforming digital interactions, and why visibility and trust are becoming the new foundation of online business. HUMAN's AgenticTrust platform delivers that foundation, providing real-time visibility and governance over AI agent activity on your site, enabling automation without sacrificing security or business logic. Get a Free Guide to Adopting Agentic Commerce

This week's article
"Ransomware 3.0": The Big Idea
The concept of Ransomware 3.0 is introduced in research as a proof-of-concept illustrating the potential for large language models to automate ransomware creation and execution. Want to find out what that looks like? Click the link below. Check it out today

News Bytes

The U.S. has been cutting cyber defenses as AI boosts attacks: According to federal sources and cyber-security experts, U.S. cyber-defense capabilities, including staffing at the Cybersecurity and Infrastructure Security Agency (CISA) and leadership at agencies like the National Security Agency (NSA), have been significantly reduced despite a rising wave of AI-enhanced cyberattacks. Experts warn this mismatch weakens national readiness just as adversaries leverage AI for automated, large-scale cyber operations.

London councils hit by suspected cyber attacks — National Cyber Security Centre (NCSC) called in: Several London borough councils, including Kensington & Chelsea, Westminster, Hackney, and Hammersmith & Fulham, reported disrupted services (including phone-line outages) after suspected cyber-attacks. The NCSC is now involved in remediation efforts. The disruption underscores rising threats to local government infrastructure.

Black Friday shopping scams surge as fraudulent domains proliferate: As holiday shopping ramps up, researchers flagged a significant increase in malicious domain registrations mimicking legitimate retailers, many created to lure holiday shoppers into scams. Nearly 1 in 11 of the new "Black Friday"-themed domains were found malicious. The use of generative AI to speed up scam site creation was also highlighted as a concern.

Global agencies push to shut down "bulletproof" hosting and launch AI-risk framework: International cyber-agencies are urging Internet Service Providers to crack down on so-called "bulletproof" hosts that shelter cybercriminal activity. Meanwhile, the Cybersecurity and Infrastructure Security Agency (CSA) introduced a new "agentic-AI risk framework" to assess emerging threats from autonomous AI-driven tools, reflecting growing focus on AI as a security risk.

Qilin ransomware conducts major supply-chain attack against South Korean MSPs: The ransomware group Qilin, working via a supply-chain compromise of managed-service providers, carried out a high-impact campaign dubbed the "Korean Leaks," hitting at least 28 victims, primarily in South Korea's financial sector. Qilin was noted to be among the most active Ransomware-as-a-Service (RaaS) groups this year.

RomCom malware: first-ever delivery via SocGholish JavaScript loader observed: Researchers from Arctic Wolf Labs uncovered a novel attack where the malware family Mythic Agent, associated with the "RomCom" threat group, was distributed via SocGholish, a JavaScript-loader technique commonly used in browser-based malware campaigns. This marks the first known use of SocGholish for a RomCom payload, signaling an evolution in distribution tactics.

New "Telemetry Complexity Attacks" break anti-malware analysis pipelines: A research team demonstrated a new class of attacks, dubbed "Telemetry Complexity Attacks" (TCAs), that exploit how anti-malware platforms process telemetry (e.g., logs, events). By spawning deeply nested, oversized telemetry, the attackers cause failures in serialization, storage, and visualization, leading to denial-of-analysis (DoA): malicious behavior executes but isn't recorded or alerted. Several commercial and open-source EDR and malware-analysis tools were shown to be vulnerable.

UK MPs propose new economic-security regime to counter cyber and related threats: In light of rising cyber risks, including state-sponsored attacks and infrastructure vulnerabilities, UK MPs are pushing for a new economic-security framework. This proposal aims to integrate cybersecurity threats across economic, supply-chain, and national-security planning, reflecting growing recognition that cyber risk is not just an IT problem.

New variant of IoT botnet based on Mirai emerges: "ShadowV2" tests IoT exploits during AWS disruptions: Security researchers observed a new Mirai-derived botnet variant, dubbed "ShadowV2", which tested exploits against vulnerable IoT devices during October's AWS outage, apparently to probe impact on availability and botnet propagation in unstable networks. The experiment raises alarms over IoT insecurity and the rising use of botnets exploiting cloud-service disruptions.

Into the blogosphere...

Shai‑Hulud 2.0: How Cortex Detects and Blocks the Resurgent npm Worm: This post details a major supply-chain attack dubbed "Shai-Hulud 2.0," where a malicious worm compromised thousands of npm packages. It explains how the malware spreads, steals credentials, establishes persistent backdoors, and compromises developer environments, and outlines how the provider's security tools (Cortex Cloud, XDR, Prisma Cloud) can detect and block such attacks.

AI & Security: Revolutionizing Cybersecurity in the Digital Age: This article explores how artificial intelligence (AI) is transforming cybersecurity, shifting defences from reactive to proactive. It examines use-cases where AI helps detect and mitigate threats, analyzes the challenges of integrating AI into security strategies, and highlights how organizations can leverage modern AI/ML to improve their security posture.

When Artificial Intelligence Becomes the Battlefield: This post dives into the darker side of AI, describing how attackers are weaponizing AI for ransomware, phishing, browser-based exploits, AI-native malware, and "vibe-hacking" (emotionally targeted phishing/extortion).
It outlines real-world incidents and warns of systemic weaknesses in AI governance, urging more robust controls and oversight for AI deployments.Multi‑Dimensional Threat Intelligence Analysis: Looking for AI Adversaries: This analysis recounts how a security team monitored 427 blocked IP addresses over a short period to evaluate whether emerging AI-powered adversarial techniques were in use. The conclusion: no AI-adaptive threats detected — yet. But the report highlights infrastructure evolution (bulletproof hosting, “brand-weaponization”) and warns that adversaries may shift once detection evasion becomes easier. Offers a practical view on real-world threat-intelligence operations.Turning Kubernetes Last Access to Kubernetes Least Access Using KIEMPossible: This recent post explains how identity and permissions inside Kubernetes environments often become sprawling, giving threat actors excessive attack surface. It shows how the tool/approach “KIEMPossible” can help organisations audit, trace, and reduce permissions to enforce least-privilege — significantly reducing risk for cloud workloads.Is Your Snowflake Data at Risk? Find and Protect Sensitive Data with DSPM: This post targets cloud data security risks for organisations using data platforms (specifically Snowflake). It covers how sensitive data may be exposed, how data-security posture management (DSPM) can help spot and remedy exposures, and why companies should proactively use DSPM tools to safeguard critical data — especially relevant as cloud data stores grow.This week's academiaSoK: Frontier AI's Impact on the Cybersecurity Landscape: This paper examines how advances in “frontier AI” (very capable AI systems) affect the cybersecurity landscape, particularly how such AI can both enable new attack vectors and challenge defenders. It categorizes risks, analyses current and potential future impacts, and issues concrete recommendations — e.g., building fine-grained benchmarks, security mechanisms for hybrid AI–software systems, pre-deployment testing, and better transparency. (Wenbo Guo, Yujin Potter, Tianneng Shi, Zhun Wang, Andy Zhang & Dawn Song)Red Teaming with Artificial Intelligence-Driven Cyberattacks: A Scoping Review: This review analyses how AI techniques are being used to carry out or simulate cyberattacks (red-teaming). From 470 screened records, 11 were selected to illustrate how AI can automate penetration, data exfiltration, or social-engineering attacks — targeting everything from credentials to social-media accounts. It highlights how versatile and dangerous AI-assisted attack tools are, and calls for deeper understanding and defensive planning. (Mays Al-Azzawi, Dung Doan, Tuomo Sipola, Jari Hautamäki & Tero Kokkonen)Zero Trust Architecture: A Systematic Literature Review: This is a systematic lit-review (2016–2025) of research on the Zero Trust Architecture (ZTA). It synthesizes how ZTA is applied, what enabling technologies support it, and the obstacles to its adoption. The paper also traces how ZTA research evolved over time, offering a taxonomy of application domains and critical challenges — making it a go-to resource for both researchers and practitioners assessing ZTA deployment. 
(Muhammad Liman Gambo & Ahmad Almulhem)
Analysing Cyber Attacks and Cyber Security Vulnerabilities in the University Sector: This empirical study reviews recent cyberattacks on universities (with a focus on the UK sector), producing a timeline of notable incidents, classifying them by confidentiality/integrity/availability (CIA triad) and attack type. It highlights challenges including: lack of full disclosure after attacks (hindering community learning), over-reliance on third-party service providers, and persistent threats like phishing, ransomware and insider risk (e.g. from students or staff lacking training). The paper recommends improved incident reporting, better training, supplier oversight, and adoption of common security standards. (Harjinder Singh Lallie, Andrew Thompson, Elżbieta Titis & Paul Stephens)
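On the Black Friday item above: the core trick in spotting scam domains is measuring how close a new registration sits to a brand it imitates. Here is a minimal sketch of that idea; the brand watchlist, the edit-distance threshold, and the example domains are all invented for illustration and are not the researchers' methodology.

```python
# Toy lookalike-domain check: flag registrations whose label is a near-miss
# of a monitored brand, or embeds the brand in a longer label. Brands and
# threshold are hypothetical; real pipelines also weigh TLD, age, TLS, etc.
def edit_distance(a: str, b: str) -> int:
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1, cur[j - 1] + 1, prev[j - 1] + (ca != cb)))
        prev = cur
    return prev[-1]

BRANDS = ["amazon", "walmart", "target", "bestbuy"]   # assumed watchlist

def looks_suspicious(domain: str, max_distance: int = 1) -> bool:
    label = domain.lower().split(".")[0]               # crude registrable-label guess
    tokens = label.split("-") + [label]
    for brand in BRANDS:
        if brand in label and label != brand:
            return True                                # brand embedded in a longer label
        if any(0 < edit_distance(tok, brand) <= max_distance for tok in tokens):
            return True                                # near-miss spelling of the brand
    return False

print(looks_suspicious("amaz0n-blackfriday-deals.shop"))  # True: "amaz0n" is one edit from "amazon"
print(looks_suspicious("wa1mart.com"))                     # True: one edit from "walmart"
print(looks_suspicious("example.org"))                     # False
```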
Read more

Austin Miller
21 Nov 2025
Save for later

#225: Digging into Social Engineering, part 5

Austin Miller
21 Nov 2025
The final investigation!

Still chasing browser patches? Chrome Enterprise can handle that.
Chrome Enterprise Premium delivers always-on browser protection, policy enforcement, and centralized control to eliminate manual updates and reduce security risks.
Start your trial through Promevo and get more from Chrome Enterprise Premium

#225: Digging into Social Engineering, part 5
The final investigation!
Welcome to another_secpro!
As we step out into another week of cybersecurity-related shenanigans, it's important to keep some perspective on how we frame the constant threat of the adversary. It's easy to become doom-and-gloom about the possibility of ever getting away from the constant worry of "the next big disaster". There's no magic fix for that, obviously, but we can take our time, gather our resources, and build plans and processes that cut the adversary off. As part of that, the problem of social engineering is one of the more challenging ones to address...
That's why we're back into social engineering this week and, this time, we're exploring how the adversary moves in the age of AI.
If you've missed our other investigations, then check them out here, here, here, here, and here.
Check out _secpro premium
If you want more, you know what you need to do: sign up to the premium and get access to everything we have on offer. Click the link above to visit our Substack and sign up there!
Cheers!
Austin Miller
Editor-in-Chief

This week's article
Unit 42 on "Adversarial Innovation in the Age of AI"
In their latest research, Unit 42 explains that many social engineering attacks don't need advanced hacking tools. Instead, they work because of three main weaknesses: low detection coverage, alert fatigue, and organisational failures.
Check it out today

News Bytes
Patch Tuesday: Microsoft fixes actively exploited Windows kernel vulnerability (Help Net Security): Microsoft patched 63 vulnerabilities in its November 2025 update, including CVE-2025-62215, a race condition in the Windows Kernel that allows elevation to SYSTEM and has seen in-the-wild exploitation.
Amazon pins Cisco, Citrix zero-day attacks to APT group (CyberScoop / Amazon): Amazon's Threat Intelligence team reported a sophisticated APT exploiting CVE-2025-20337 (Cisco ISE) and CVE-2025-5777 (Citrix Bleed 2) to deploy custom, in-memory malware.
Exploiting Data Structures for Bypassing and Crashing Anti-Malware Solutions via Telemetry Complexity Attacks: Researchers describe a new class of attack — Telemetry Complexity Attacks (TCAs) — which overwhelm anti-malware telemetry pipelines (e.g., JSON serializers, DB backends) by generating deeply nested or oversized data, causing denial-of-analysis (DoA). (A toy illustration of the nested-telemetry trick appears after this issue's academia section.)
They tested this on 12 platforms, finding several failures and even assigned CVEs (e.g., CVE-2025-61301, CVE-2025-61303).CYFIRMA Weekly Intelligence: “PureRAT” trending: The research unit reports that “PureRAT” is highly active, noting a phishing campaign targeting the hospitality sector using WhatsApp and booking systems to deliver the RAT, focused on credential theft and exfiltration.Sophisticated threat actor targeting zero-day flaws in Cisco ISE & Citrix: The campaign exploited two zero-days to inject a custom web shell into Cisco ISE and run memory read attacks on Citrix NetScaler, suggesting a high-skill, likely state-aligned actor.Operation Endgame Dismantles Rhadamanthys, Venom RAT, and Elysium Botnet: Technical breakdown of the takedown — over 1,025 servers seized, 20 domains taken down, and the arrest of a key suspect associated with VenomRAT; also warns that victims may still harbor residual malware.Into the blogosphere...The Reality of Full-Time Bug Bounty Hunting: Daniel Kelley reflects on what it’s really like to do bug bounty hunting as a full-time job: the unstable income, the pressure to constantly find bugs, and the trade-offs between freelancing and more stable security work.5 Key Factors to Consider When Purchasing an Automated Code Remediation Tool: Kelley breaks down what security teams should look for when buying automated code remediation tools — including accuracy, integration, usability, and how well the tool handles real-world code complexity.Not Getting Incentives Right Can Kill a Security Initiative: Ross Haleliuk argues that many security failures stem not from technical problems, but from misaligned incentives: different teams (developers, ops, execs) have conflicting priorities, which undermines security investments.AI Doesn’t Make It Much Easier to Build Security Startups: In a contrarian view, Haleliuk suggests that while AI is hyped-up as a game changer for security startups, the real challenge remains in product-market fit, recruiting top engineering talent, and building defensible IP — not just “add AI.”This week's academiaRansomware 3.0: Self-Composing and LLM-Orchestrated (Md Raz, Meet Udeshi, P. V. Sai Charan, Prashanth Krishnamurthy, Farshad Khorrami, Ramesh Karri): This paper introduces a proof-of-concept ransomware (“Ransomware 3.0”) that uses Large Language Models (LLMs) to autonomously carry out all phases of a ransomware attack. Rather than relying on static, hard-coded malicious logic, payloads are dynamically synthesized at runtime based on prompts embedded in the binary. The LLM orchestrator handles reconnaissance, payload generation, adaptation to the execution environment, and even crafts personalized ransom notes — all without human intervention. The authors evaluate the approach across environments (e.g., personal, enterprise, embedded) and analyze behavioral signals and telemetry to better understand detection and defense implications.Adaptive Cybersecurity: Dynamically Retrainable Firewalls for Real-Time Network Protection (Sina Ahmadi): This paper proposes a new kind of firewall that uses machine learning to continuously retrain itself in real time, adapting to evolving network threats. Unlike traditional firewalls built on static rules, this system uses reinforcement learning, continual learning, and micro-service architectures to dynamically update its threat model. 
The research discusses trade-offs around latency, computational cost, data privacy, and integration with architectures like Zero Trust.
Artificial Intelligence and Machine Learning in Cybersecurity: A Deep Dive into State-of-the-Art Techniques and Future Paradigms: This is a thorough review of how AI and ML are currently being used in cybersecurity — covering intrusion detection, malware classification, behavioral analysis, threat intelligence, etc. It also identifies emerging paradigms, gaps, and future research directions, particularly around explainability, adversarial robustness, and real-time deployment.
A Comprehensive Scientometric Study of Research Trends in Cybersecurity from 2000 to 2024 Using Biblioshiny and VOSviewer: This paper maps out the evolution of cybersecurity research over nearly 25 years by using scientometric tools (Biblioshiny, VOSviewer). It identifies key trends, influential papers, collaboration networks, and shifting research hotspots. The study is helpful for understanding where the field has come from and which areas are now accelerating (e.g., ML, cloud security, privacy).
Advancing Cybersecurity Through Machine Learning: A Scientometric Analysis of Global Research Trends and Influential Contributions: This scientometric analysis focuses specifically on ML in cybersecurity, tracking publication trends, geographic distribution, influential works, and major contributing authors and institutions. It provides a quantitative picture of how ML-driven cybersecurity research has grown, and where it may be headed.
QORE: Quantum Secure 5G / B5G Core (Vipin Rathi, Lakshya Chopra, Rudraksh Rawal, Nitin Rajput, Shiva Valia, Madhav Aggarwal, Aditya Gairola): This forward-looking paper proposes a quantum-resistant 5G (and beyond) core architecture by integrating standardised post-quantum cryptography (PQC) algorithms—specifically lattice-based schemes (ML-KEM, ML-DSA)—into 5G core network functions and mobile devices. They also propose a hybrid configuration that supports both classical and post-quantum primitives to ease migration, and they provide performance evaluation showing that their design meets the low-latency and high-throughput needs of carrier-grade networks.
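As promised above, here is a defanged illustration of the Telemetry Complexity Attack idea from this issue's News Bytes: a single, deeply nested telemetry event that a naive serializer refuses to handle. The depth value and the use of Python's json module are assumptions for demonstration; the paper's real payloads target specific EDR pipelines.

```python
# Defanged illustration of "telemetry complexity": one absurdly nested event.
# json.dumps on a structure this deep raises RecursionError with default
# limits -- the same failure mode (serialization giving up) that produces a
# denial-of-analysis gap when it happens inside a telemetry pipeline.
import json

def nested_event(depth: int) -> dict:
    event = {"proc": "example.exe", "detail": "benign-looking leaf"}
    for _ in range(depth):
        event = {"child": event}       # wrap the event in another layer
    return event

evt = nested_event(5_000)              # depth chosen purely for illustration
try:
    json.dumps(evt)
    print("event serialized normally")
except RecursionError:
    print("serializer gave up -- the event never reaches the analyst")
```

Real pipelines add collection agents, queues, and storage layers in front of the analyst, and the research shows any one of them can be the component that fails silently.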
Read more

Austin Miller
14 Nov 2025
Save for later

#224: Digging into Social Engineering, part 4

Austin Miller
14 Nov 2025
Exploring Unit 42's findings

Don't miss out!

#224: Digging into Social Engineering, part 4
Welcome to another_secpro!
As we step out into another week of cybersecurity-related shenanigans, it's important to keep some perspective on how we frame the constant threat of the adversary. It's easy to become doom-and-gloom about the possibility of ever getting away from the constant worry of "the next big disaster". There's no magic fix for that, obviously, but we can take our time, gather our resources, and build plans and processes that cut the adversary off. As part of that, the problem of social engineering is one of the more challenging ones to address...
That's why we're back into social engineering this week and, this time, we're exploring how social engineering disrupts business operations.
If you've missed our other investigations, then check them out here, here, here, and here.
Check out _secpro premium
If you want more, you know what you need to do: sign up to the premium and get access to everything we have on offer. Click the link above to visit our Substack and sign up there!
Cheers!
Austin Miller
Editor-in-Chief

This week's article
Unit 42 on "Business Disruption"
In their latest research, Unit 42 explains that many social engineering attacks don't need advanced hacking tools. Instead, they work because of three main weaknesses: low detection coverage, alert fatigue, and organisational failures.
Check it out today

News Bytes
Building a Threat-Led Cybersecurity Program with Cyberthreat Intelligence: This white paper addresses how organisations often struggle to turn threat-intelligence programmes into measurable business value: intelligence that is not actionable, too many feeds, poorly defined requirements, etc. It explains the evolving threat environment (rise of infostealers, generative AI, commodification of cybercrime) and then offers a practical blueprint for building or refining a "threat-led" security programme. It covers: forming a threat model; establishing Priority Intelligence Requirements (PIRs); integrating intelligence with risk management; mapping strategic/tactical/operational/technical intelligence; selecting tools and metrics; dealing with legal/regulatory constraints.
CYFIRMA Intelligence Report: This report provides a detailed breakdown of the latest active cyber-threats and trends, focusing on ransomware, malware, vulnerability exploitation, threat actors and data leaks. Key points include: discovery of a new ransomware strain, BAGAJAI, using strong hybrid encryption and moving toward a double-extortion/data-leak model; a trend of PureRAT malware being used in hospitality-sector spear-phishing campaigns (e.g., via Booking.com accounts) exploiting legitimate services to deliver RAT implants and steal credentials; a focus on threat actor APT37 (aka ScarCruft) shifting toward mobile espionage across Asia and beyond, with credential dumping, exploitation of vulnerabilities, etc.; a vulnerability in the expr-eval JavaScript library (CVE-2025-12735) enabling remote JS code execution, posing risk across web apps; and data-leak observations, e.g., the claimed compromise of Pruksa Holding Public Company Limited in Thailand (real estate) and a leak of source code from Internet Initiative Japan (IIJ), a telecom provider.
LANDFALL: New Commercial-Grade Android Spyware in Exploit Chain Targeting Samsung Devices: Technical write-up of a commercial-grade Android spyware family ("LANDFALL") observed being delivered via an exploit chain targeting Samsung devices.
Includes malware capabilities, exploitation chain details, impacted OEM components and recommended detection/mitigation controls.You Thought It Was Over? — Authentication Coercion Keeps Evolving:Deep dive into the resurgence/evolution of authentication-coercion techniques (coercing systems to authenticate to attacker-controlled hosts to harvest credentials). Explains RPC-based variants, detection pitfalls, and practical mitigations for endpoint and domain defenders. Actionable for blue teamsRussian attacks surge in Ukraine and Europe; Chinese groups target Latin America: A periodic APT report from ESET, summarising observed state-linked activity across regions between Apr–Sep 2025. Highlights targeting shifts, tooling and operational trends, and specific campaigns and IOCs useful to network defenders and threat intel teams.New Hacking Techniques and Critical CVEs: Technical weekly summarising new exploitation chains, observed EDR evasion techniques, several zero-day exploit chains observed in the wild, and notable sector breaches. Contains technical indicators and exploit/prioritisation guidance for vulnerability management teams.CISA / US-CERT Advisory update on the Akira ransomware:Government advisory updating TTPs, observed infrastructure and mitigations for Akira ransomware. Includes detection guidance, recommended mitigations for critical infrastructure and enterprise, and links to vendor detection rules. Important for orgs tracking ongoing ransomware campaigns and for continuous monitoring.CISA Update & Implementation Guidance for Emergency Directive: Cisco ASA and Firepower Device Vulnerabilities: Implementation guidance updating remediation and detection guidance for multiple exploited vulnerabilities affecting Cisco ASA/Firepower devices; includes emergency directive implementation notes and recommended mitigations for operators managing affected gear.Remember, remember the fifth of November, from Threat Source / Cisco Talos: Cisco Talos Threat Source newsletter and related technical writeups published last week, including detailed Talos research on the Kraken RaaS group and other active ransomware research. Talos’ write-ups are technical, often include IOCs and behavioural detection guidance for SOCs.Into the blogosphere...Chinese hackers used Anthropic AI to automate attacks from Aspicts: A deep dive into “Operation Endgame,” where threat actors leveraged Anthropic’s AI to run automated info-stealing campaigns, exploring both the technical mechanisms and the broader risk implications.Claude Code Agent Attack: 30 High Value Targets Hit from Nate: Analysis of an AI-driven cyberattack on high-value targets using Claude Code agents, looking at how attackers exploit trust in AI and what defensive strategies could mitigate such risks.The OWASP Top 10 Gets Modernized from Chris Hughes: A thoughtful breakdown of the 2025 update to the OWASP Top 10, explaining what’s changed, why it matters, and how the new version better reflects modern threat landscapes.U.S. 
CISA Adds Oracle, Windows, Kentico, and Apple Flaws from Ethical Hacking News:A technical and policy-oriented post summarising recent zero-days added by CISA, with commentary on the potential impacts for organisations and security teams.79% of Enterprises to Increase Investment in Threat Intelligence from Datayuan: Market-focused insight into how enterprises are shifting their security spend, especially on threat intel, in response to the rise of AI-agent threats; includes regional trends (APAC) and practical implications.This week's academiaSCVI: Bridging Social and Cyber Dimensions for Comprehensive Vulnerability Assessment (Shutonu Mitra, Tomas Neguyen, Qi Zhang, Hyungmin Kim, Hossein Salemi, Chen-Wei Chang, Fengxiu Zhang, Michin Hong, Chang-Tien Lu, Hemant Purohit, Jin-Hee Cho) This paper introduces the Social Cyber Vulnerability Index (SCVI), a novel metric/framework that combines individual-level (awareness, behavioural traits, psychological attributes) and attack-level (frequency, consequence, sophistication) factors to assess socio-technical vulnerabilities in cyber contexts. The authors validate SCVI using survey data (iPoll) and textual data (Reddit scam reports), and compare it to traditional metrics like CVSS (Common Vulnerability Scoring System) and SVI (Social Vulnerability Index). They demonstrate SCVI’s superior ability to capture nuances in socio-cyber risk (e.g., demographic and regional disparities).BotSim: LLM-Powered Malicious Social Botnet Simulation (Boyu Qiao, Kun Li, Wei Zhou, Shilong Li, Qianqian Lu, Songlin Hu):This study presents “BotSim”, a simulation framework for malicious social-bot activity powered by large language models (LLMs). The authors create an environment mixing intelligent agent bots and human users, simulate realistic social media interaction patterns (posting, commenting), and generate a dataset ("BotSim-24") of LLM-driven bot behaviour. They then benchmark detection algorithms and find that traditional bot-detection methods perform much worse on the LLM-driven bot dataset — highlighting a new frontier in adversarial social cybersecurity.Red Teaming with Artificial Intelligence-Driven Cyberattacks: A Scoping Review (Mays Al-Azzawi, Dung Doan, Tuomo Sipola, Jari Hautamäki, Tero Kokkonen):This review article surveys the use of AI for adversarial/red-teaming cyberattacks. It analyses ~470 records, selects 11 for in-depth review, and characterises the methods by which AI is being leveraged for penetration testing, intrusion, social engineering, etc. The authors identify typical targets (sensitive data, systems, social profiles, URLs), and emphasise the increasing threat from AI-based attack automation. It also reflects on how red-teaming practices must evolve in response.A Survey of Social Cybersecurity: Techniques for Attack Detection, Evaluations, Challenges, and Future Prospects (Aos Mulahuwaish, Basheer Qolomany, Kevin Gyorick, Jacques Bou Abdo, Mohammed Aledhari, Junaid Qadir, Kathleen Carley, Ala Al-Fuqaha):This survey paper focuses on “social cybersecurity” — the human/social dimension of cyber threats (e.g., cyber-bullying, spam, misinformation, terrorist activity over social platforms). 
It covers detection techniques, evaluation methodologies, the challenge of datasets and tools, and identifies future research directions.
Evolution Cybercrime — Key Trends, Cybersecurity Threats, and Mitigation Strategies from Historical Data (Muhammad Abdullah, Muhammad Munib Nawaz, Bilal Saleem, Maila Zahra, Effa binte Ashfaq, Zia Muhammad): This article provides a longitudinal analysis of cybercrime over ~20 years, tracing how cyber threats have evolved (from rudimentary internet fraud to AI-driven attacks, deep fakes, 5G vulnerabilities, cryptojacking, supply chain attacks). It uses historical data (e.g., FBI IC3 complaints) and highlights demographic/geographic patterns, victims, losses, and state-sponsored trends. It also offers mitigation strategy recommendations.
A Survey of Cyber Threat Attribution: Challenges, Techniques, and Future Directions (Nilantha Prasad, Abebe Diro, Matthew Warren, Mahesh Fernando): This paper examines the challenging problem of cyber threat attribution (identifying who is behind an attack). It reviews techniques from technical (IOCs, TTPs, malware profiling) to ML/AI-based methods, analyses gaps in existing research, and suggests future directions for more robust, reliable attribution in cyber contexts. The work is interdisciplinary and addresses both technical and intelligence-analysis aspects.
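The SCVI paper above blends individual-level and attack-level factors into one score. A toy version of that kind of composite index is below; the factor names, weights, and 0-1 scaling are hypothetical placeholders chosen for the sketch, not the authors' published formula.

```python
# Toy socio-cyber vulnerability index: a weighted mean of normalised factors.
# Factor names and weights are hypothetical stand-ins, not SCVI's definition.
INDIVIDUAL_WEIGHTS = {"awareness_gap": 0.25, "risky_behaviour": 0.15, "psych_susceptibility": 0.10}
ATTACK_WEIGHTS = {"frequency": 0.20, "consequence": 0.20, "sophistication": 0.10}

def composite_index(individual: dict, attack: dict) -> float:
    """Inputs are per-factor scores on a 0-1 scale; returns a 0-1 index."""
    weights = {**INDIVIDUAL_WEIGHTS, **ATTACK_WEIGHTS}
    factors = {**individual, **attack}
    score = sum(weights[name] * value for name, value in factors.items())
    return round(score / sum(weights.values()), 3)

person = {"awareness_gap": 0.8, "risky_behaviour": 0.4, "psych_susceptibility": 0.6}
threat = {"frequency": 0.7, "consequence": 0.9, "sophistication": 0.3}
print(composite_index(person, threat))   # 0.67 -- higher means more vulnerable
```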
Read more

Austin Miller
07 Nov 2025
Save for later

#223: Digging into Social Engineering, part 3

Austin Miller
07 Nov 2025
Exploring Unit 42's findingsSecuring the Autonomous Enterprise: From Observability to ResilienceCurrent security stops at passive observation. Rubrik Agent Operations is the enterprise platform that unifies observability, governance, and recoverability for AI.Join us on November 12th to discover how Rubrik is leveraging its leadership in cyber resilience to protect your autonomous future.Save My Spot#223: Digging into Social Engineering, part 3Welcome to another_secpro!This week, we're back into social engineering - this time, exploring “missed or misclassified critical signals” with Unit 42. If you've missed our other investigations, then check them out here, here and here.Check out _secpro premiumIf you want more, you know what you need to do: sign up to the premium and get access to everything we have on offer. Click the link above to visit our Substack and sign up there!Cheers!Austin MillerEditor-in-ChiefDon't miss out!This week's articleUnit 42 on “Missed or Misclassified Critical Signals”In their latest research, Unit 42 explains that many social engineering attacks don’t need advanced hacking tools. Instead, they work because of three main weaknesses: low detection coverage, alert fatigue, and organisational failures.Check it out todayNews BytesGTIG AI Threat Tracker: Advances in Threat Actor Usage of AI Tools (Google Threat Intelligence Group): A deep technical analysis detailing how adversaries are now embedding generative AI and LLMs into malware and intrusion workflows. The report highlights real-world examples such as “PROMPTFLUX” and “PROMPTSTEAL,” which use AI for obfuscation, adaptive phishing, and command generation, marking a turning point toward autonomous, AI-powered attack operations.CYFIRMA Intelligence Report (CYFIRMA Research and Advisory Team): A comprehensive weekly update covering underground forum chatter, ransomware evolution, and exploitation trends. It documents “Monkey Ransomware,” details TTPs aligned to MITRE ATT&CK (execution via native API, process injection, defense evasion), and lists key vulnerabilities like CVE-2025-61932 in Lanscope Endpoint Manager.CYFIRMA's Analysis of the Monkey Ransomware (CYFIRMA Research): A detailed teardown of the newly observed “Monkey” ransomware variant. It appends a “.monkey” extension, deletes backups, and uses reflective code loading and service creation for persistence. The report provides full TTP mapping, IOCs, and mitigation guidance, suggesting active campaigns targeting APAC organizations.Rigged Poker Games - "The Department of Justice has indicted thirty-one people over the high-tech rigging of high-stakes poker games: In a typical legitimate poker game, a dealer uses a shuffling machine to shuffle the cards randomly before dealing them to all the players in a particular order. As set forth in the indictment, the rigged games used altered shuffling machines that contained hidden technology allowing the machines to read all the cards in the deck. Because the cards were always dealt in a particular order to the players at the table, the machines could determine which player would have the winning hand."Signal’s Post-Quantum Cryptographic Implementation - "Signal hasjust rolled out its quantum-safe cryptographic implementation.Ars Technicahas areally good article with details: Ultimately, the architects settled on a creative solution. Rather than bolt KEM onto the existing double ratchet, they allowed it to remain more or less the same as it had been. 
Then they used the new quantum-safe ratchet to implement a parallel secure messaging system."Into the blogosphere...AWS to Bare Metal Two Years Later: Answering Your Toughest Questions About Leaving AWS: "When we publishedHow moving from AWS to Bare-Metal saved us $230,000 /yr.in 2023, the story travelled far beyond our usual readership. The discussion threads onHacker NewsandReddit were packed with sharp questions: did we skip Reserved Instances, how do we fail over a single rack, what about the people cost, and when is cloud still the better answer? This follow-up is our long-form reply."Free software scares normal people: "I’m the person my friends and family come to for computer-related help. (Maybe you, gentle reader, can relate.) This experience has taught me which computing tasks are frustrating for normal people."What We Talk About When We Talk About Sideloading: "It bears reminding that “sideload” is a made-up term. Putting software on your computer is simply called “installing”, regardless of whether that computer is in your pocket or on your desk. This could perhaps be further precised as “direct installing”, in case you need to make a distinction between obtaining software the old-fashioned way versus going through a rent-seeking intermediary marketplace like the Google Play Store or the Apple App Store."Aggressive bots ruined my weekend: "On the 25th of October Bear had its first major outage. Specifically, the reverse proxy which handles custom domains went down, causing custom domains to time out. Unfortunately my monitoring tool failed to notify me, and it being a Saturday, I didn't notice the outage for longer than is reasonable. I apologise to everyone who was affected by it. First, I want to dissect the root cause, exactly what went wrong, and then provide the steps I've taken to mitigate this in the future."The bug that taught me more about PyTorch than years of using it:My training loss plateaued and wouldn’t budge. Obviously I’d screwed something up. I tried every hyperparameter combination, rewrote my loss function, spent days assuming I’d made some stupid mistake. Because it’s always user error. This time, it wasn’t. It was a niche PyTorch bug that forced me through layers of abstraction I normally never think about: optimizer internals, memory layouts, dispatch systems, kernel implementations. Taught me more about the framework than years of using it.What Happened To Running What You Wanted On Your Own Machine?: When the microcomputer first landed in homes some forty years ago, it came with a simple freedom—you could run whatever software you could get your hands on. Floppy disk from a friend? Pop it in. Shareware demo downloaded from a BBS? Go ahead! Dodgy code you wrote yourself at 2 AM? Absolutely. The computer you bought was yours. It would run whatever you told it to run, and ask no questions. Today, that freedom is dying. What’s worse, is it’s happening so gradually that most people haven’t noticed we’re already halfway into the coffin.This week's academiaFrom Texts to Shields: Convergence of Large Language Models and Cybersecurity (Tao Li; Ya-Ting Yang; Yunian Pan; Quanyan Zhu): This paper explores how large language models (LLMs) are increasingly converging with cybersecurity tasks: from vulnerability analysis and network/5G security to generative security engineering. It looks both at how LLMs can assist defenders (automation, reasoning, security analytics) and how they introduce new risks (trust, transparency, adversarial use). 
The authors outline socio-technical challenges like interpretability, human-in-the-loop design, and propose a forward-looking research agenda for secure, effective LLM adoption in cybersecurity.Adversarial Defense in Cybersecurity: A Systematic Review of GANs for Threat Detection and Mitigation (Tharcisse Ndayipfukamiye; Jianguo Ding; Doreen Sebastian Sarwatt; Adamu Gaston Philipo; Huansheng Ning): This is a large scale systematic literature review (PRISMA-compliant) of how Generative Adversarial Networks (GANs) are being used in cybersecurity—both as attack vectors and as defensive tools—from January 2021 through August 2025. It identifies 185 peer-reviewed studies, develops a four-dimensional taxonomy (defensive function, GAN architecture, cybersecurity domain, adversarial threat model), shows publication trends, assesses the effectiveness of GAN-based defences, and highlights key gaps (training instability, lack of benchmarks, limited explainability). The authors propose a roadmap for future work.Securing the AI Frontier: Urgent Ethical and Regulatory Imperatives for AI-Driven Cybersecurity (Vikram Kulothungan): This paper examines the ethical and regulatory challenges that arise when AI is deeply integrated into cybersecurity systems. It traces historical regulation of AI, analyzes current frameworks (for example the EU AI Act), and discusses ethical dimensions such as bias, transparency, accountability, privacy, and human oversight. It proposes strategies to promote AI literacy, public engagement, and global harmonisation of regulatory approaches in the cybersecurity/AI domain.Neuromorphic Mimicry Attacks Exploiting Brain-Inspired Computing for Covert Cyber Intrusions (Hemanth Ravipati) This paper introduces a new threat class: “Neuromorphic Mimicry Attacks (NMAs)”. These attacks target neuromorphic computing systems (brain-inspired chips, spiking neural networks, edge/IoT hardware) by mimicking legitimate neural activity (via synaptic weight tampering, sensory input poisoning) to evade detection. The paper provides a theoretical framework, simulation results using a synthetic neuromorphic dataset, and proposes countermeasures (neural-specific anomaly detection, secure synaptic learning protocols).Towards Adaptive AI Governance: Comparative Insights from the U.S., EU, and Asia (Vikram Kulothungan; Deepti Gupta) This article offers a comparative analysis of how different regions (US, European Union, Asia) approach AI governance, innovation, and regulation—especially in cybersecurity/AI domains. It identifies divergent models (market-driven, risk-based, state-guided), explains tensions for international collaboration, and proposes an “adaptive AI governance” framework blending innovation accelerators, risk oversight, and strategic alignment.Asymmetry by Design: Boosting Cyber Defenders with Differential Access to AI (Shaun Ee; Chris Covino; Cara Labrador; Christina Krawec; Jam Kraprayoon; Joe O’Brien) This work proposes a strategic framework for cybersecurity defence by deliberately shaping access to AI capabilities (“differential access”) such that defenders have prioritized access or harder restrictions on adversaries. It outlines three approaches—Promote Access, Manage Access, Deny by Default—and gives example schemes of how defenders might leverage these in practice. 
It argues that as adversaries gain advanced AI, defenders must build architectural and policy asymmetries in their favour.
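On the Signal item above: the essential property of any hybrid classical/post-quantum design is that both shared secrets feed a single key-derivation step, so the session key stays safe if either component resists attack. The toy combiner below uses only the standard library; the labels, lengths, and random placeholder secrets are assumptions, and this is not Signal's actual construction.

```python
# Toy hybrid key combiner: mix a classical and a post-quantum shared secret
# through HKDF (RFC 5869 style) so the derived key stays safe if either
# input remains secret. Both "secrets" here are random placeholders.
import hashlib, hmac, os

def hkdf(key_material: bytes, info: bytes, length: int = 32) -> bytes:
    salt = b"\x00" * 32
    prk = hmac.new(salt, key_material, hashlib.sha256).digest()    # extract
    okm, block, counter = b"", b"", 1
    while len(okm) < length:                                       # expand
        block = hmac.new(prk, block + info + bytes([counter]), hashlib.sha256).digest()
        okm += block
        counter += 1
    return okm[:length]

classical_secret = os.urandom(32)     # stand-in for an X25519 shared secret
pq_secret = os.urandom(32)            # stand-in for an ML-KEM shared secret
session_key = hkdf(classical_secret + pq_secret, info=b"hybrid-demo")
print(session_key.hex())
```

In a real deployment the classical input would come from an ECDH exchange and the post-quantum input from a KEM encapsulation, with the ratchet re-running a derivation like this as keys evolve.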
Read more

Austin Miller
31 Oct 2025
Save for later

#222: Digging into Social Engineering, part 2

Austin Miller
31 Oct 2025
Exploring Unit 42's findingsLearning Have to Zero Trust - with GoodAccessAs remote work expands and personal devices flood the enterprise, security teams face a growing challenge: how to protect sensitive data when employees and contractors connect from laptops, tablets, and phones you don’t control. Unmanaged devices and outdated software invite malware, data leaks, and compliance violations that can threaten SOC2, HIPAA, and PCI DSS standing.Traditional VPNs and mobile device management tools are too complex and costly to scale across a modern, flexible workforce. Zero Trust Network Architecture (ZTNA) changes the equation. By verifying identity instead of location, checking device health before access, and maintaining full visibility through centralized logs, it creates a secure perimeter around your data—not your network.With a Zero Trust approach, organizations can confidently enable BYOD and contractor access without hardware dependencies or heavy IT overhead. The result is faster onboarding, simplified compliance, and assurance that every user and device is exactly who—and what—it claims to be. And, to help you on that journey, GoodAccess are leading the way.Does this sound like your organization? Learn More!#222: Digging into Social Engineering, part 2Welcome to another_secpro!This week, we're back into social engineering - this time, exploring "high touch attacks" with Unit 42. If you've missed our other investigations, then check them out here and here. We've also included our popular PDF resource again, to help you improve your training sessions and help the non-specialists amongst us to make the right moves in the age of AI. Check it out!Check out _secpro premiumIf you want more, you know what you need to do: sign up to the premium and get access to everything we have on offer. Click the link above to visit our Substack and sign up there!Cheers!Austin MillerEditor-in-ChiefDo you want to make some money with _secpro?Like most newsletters, we rely on funds from sponsors to keep our quality high and our content consistent. Because of that, we’ve got a basic interest in finding something which our average reader might be able to help us with: sponsorships.We’re reaching out to potential sponsors who want to place their products, projects, and propositions to the world through the _secpro newsletter, showing our 105,000-strong readership exactly what you have to offer and why they should be listening to you.Does that sound like something you’d be interested in doing? If so, fill in the form below and we’ll get in touch within 7 working days.A chance to earn with _secproLLMs and Agentic AI In Production - Nexus 2025Build and fine-tune your own LLMs and Agents and deploy them in production with workshops on MCP, A2A, Context Engineering, and many more.Book now at 50% off with the code CYBER50This week's articleUnit 42 on “High-Touch Attacks”If you’ve been following over the last few weeks, you’ll be well aware that we’ve been digging into Unit 42’s year-long research into social engineering and how it is changing in the modern world. 
This research, in its second part, explains that “high-touch attacks” are increasing—something that few industries might be consciously aware of and even fewer prepared to deal with.Check it out todayNews BytesGrapheneOS Proves Resilient Against Cellebrite Forensic Tools While Community Debates Government Surveillance: The privacy-focused mobile operating system GrapheneOS has emerged as one of the few platforms capable of resisting advanced forensic extraction tools, according to leaked documentation from digital forensics company Cellebrite. This revelation has sparked intense community discussions about mobile security, government surveillance, and the trade-offs between privacy and convenience.Key IOCs for Pegasus and Predator Spyware Cleaned With iOS 26 Update: As iOS 26 is being rolled out, our team noticed a particular change in how the operating system handles the shutdown.log file: it effectively erases crucial evidence of Pegasus and Predator spyware infections. This development poses a serious challenge for forensic investigators and individuals seeking to determine if their devices have been compromised at a time when spyware attacks are becoming more common.Leaker reveals which Pixels are vulnerable to Cellebrite phone hacking: "Despite being a vast repository of personal information, smartphones used to have little by way of security. That has thankfully changed, but companies like Cellebrite offer law enforcement tools that can bypass security on some devices. The company keeps the specifics quiet, but an anonymous individual recently logged in to a Cellebrite briefing and came away with a list of which of Google’s Pixel phones are vulnerable to Cellebrite phone hacking."Meta and TikTok are obstructing researchers’ access to data, European Commission rules: When Philipp Lorenz-Spreen set out in 2024 to study how politicians across Europe communicate online and how much divisive language they use, he knew he had the law on his side. The European Union’s Digital Services Act (DSA), which had come into force in February of that year, guaranteed researchers like Lorenz-Spreen, a computational social scientist at the Dresden University of Technology, access to data from social media platforms X, TikTok, Facebook, and Instagram. All he had to do was ask.Into the blogosphere...AWS to Bare Metal Two Years Later: Answering Your Toughest Questions About Leaving AWS: "When we publishedHow moving from AWS to Bare-Metal saved us $230,000 /yr.in 2023, the story travelled far beyond our usual readership. The discussion threads onHacker NewsandReddit were packed with sharp questions: did we skip Reserved Instances, how do we fail over a single rack, what about the people cost, and when is cloud still the better answer? This follow-up is our long-form reply."Free software scares normal people: "I’m the person my friends and family come to for computer-related help. (Maybe you, gentle reader, can relate.) This experience has taught me which computing tasks are frustrating for normal people."What We Talk About When We Talk About Sideloading: "It bears reminding that “sideload” is a made-up term. Putting software on your computer is simply called “installing”, regardless of whether that computer is in your pocket or on your desk. 
This could perhaps be further precised as “direct installing”, in case you need to make a distinction between obtaining software the old-fashioned way versus going through a rent-seeking intermediary marketplace like the Google Play Store or the Apple App Store."Aggressive bots ruined my weekend: "On the 25th of October Bear had its first major outage. Specifically, the reverse proxy which handles custom domains went down, causing custom domains to time out. Unfortunately my monitoring tool failed to notify me, and it being a Saturday, I didn't notice the outage for longer than is reasonable. I apologise to everyone who was affected by it. First, I want to dissect the root cause, exactly what went wrong, and then provide the steps I've taken to mitigate this in the future."The bug that taught me more about PyTorch than years of using it:My training loss plateaued and wouldn’t budge. Obviously I’d screwed something up. I tried every hyperparameter combination, rewrote my loss function, spent days assuming I’d made some stupid mistake. Because it’s always user error. This time, it wasn’t. It was a niche PyTorch bug that forced me through layers of abstraction I normally never think about: optimizer internals, memory layouts, dispatch systems, kernel implementations. Taught me more about the framework than years of using it.What Happened To Running What You Wanted On Your Own Machine?: When the microcomputer first landed in homes some forty years ago, it came with a simple freedom—you could run whatever software you could get your hands on. Floppy disk from a friend? Pop it in. Shareware demo downloaded from a BBS? Go ahead! Dodgy code you wrote yourself at 2 AM? Absolutely. The computer you bought was yours. It would run whatever you told it to run, and ask no questions. Today, that freedom is dying. What’s worse, is it’s happening so gradually that most people haven’t noticed we’re already halfway into the coffin.This week's academiaArtificial Writing And Automated Detection (PDF) (B. Jabarian and A. Imas): "Artificial intelligence (AI) tools are increasingly used for written deliverables. This has created demand for distinguishing human-generated text from AI-generated text at scale, e.g., ensuring assignments were completed by students, product reviews written by actual customers, etc. A decision-maker aiming to implement a detector in practice must consider two key statistics: the False Negative Rate (FNR), which corresponds to the proportion of AI-generated text that is falsely classified as human, and the False Positive Rate (FPR), which corresponds to the proportion of human-written text that is falsely classified as AI-generated. We evaluate three leading commercial detectors—Pangram, OriginalityAI, GPTZero—and an open-source one —RoBERTa—on their performance in minimizing these statistics using a large corpus spanning genres, lengths, and models. Commercial detectors outperform open-source, with Pangram achieving near-zero FNR and FPR rates that remain robust across models, threshold rules, ultra-short passages, "stubs" (50 words) and ’humanizer’ tools. A decision-maker may weight one type of error (Type I vs. Type II) as more important than the other."Do Users Verify SSH Keys? (PDF) (P. Gutmann): A classic and hilariously concerning paper that is currently undergoing something of a revival in the halls of internet para-academia.Reasoning Models Reason Well, Until They Don't (R. Rameshkumar, J. Huang, Y. Sun, F. Xia, A. 
Saparov): "Large language models (LLMs) have shown significant progress in reasoning tasks. However, recent studies show that transformers and LLMs fail catastrophically once reasoning problems exceed modest complexity. We revisit these findings through the lens of large reasoning models (LRMs) -- LLMs fine-tuned with incentives for step-by-step argumentation and self-verification."
Brought to you in cooperation with GoodAccess:
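The detector-evaluation paper above hinges on two rates, FNR and FPR. As a quick refresher, here is how they fall out of labelled predictions; the sample labels are made up purely to show the arithmetic.

```python
# False Negative Rate = AI-written text misclassified as human, over all AI text.
# False Positive Rate = human text misclassified as AI, over all human text.
# The labels below are invented for illustration only.
def fnr_fpr(truth, predicted):
    fn = sum(t == "ai" and p == "human" for t, p in zip(truth, predicted))
    fp = sum(t == "human" and p == "ai" for t, p in zip(truth, predicted))
    n_ai = sum(t == "ai" for t in truth)
    n_human = sum(t == "human" for t in truth)
    return fn / n_ai, fp / n_human

truth     = ["ai", "ai", "ai", "ai", "human", "human", "human", "human"]
predicted = ["ai", "ai", "human", "ai", "human", "human", "ai", "human"]
print(fnr_fpr(truth, predicted))   # (0.25, 0.25): 1 of 4 AI texts missed, 1 of 4 humans flagged
```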
Read more
Austin Miller
24 Oct 2025
Save for later

#221: Digging into Social Engineering, part 1

Austin Miller
24 Oct 2025
Exploring Unit 42's findings

Don't miss out! Sign up today!

#221: Digging into Social Engineering, part 1

Welcome to another _secpro!

This week, we're poking the brain of CISO expert David Gee to deliver some insights which line up nicely with his new book, A Day in the Life of a CISO. We've also included our popular PDF resource again, to help you improve your training sessions and help the non-specialists amongst us make the right moves in the age of AI. Check it out!

Check out _secpro premium

If you want more, you know what you need to do: sign up to the premium and get access to everything we have on offer. Click the link above to visit our Substack and sign up there!

Cheers!
Austin Miller
Editor-in-Chief

This week's article

Unit 42 on non-phishing vectors

Recently, along with a wealth of other industry-critical information and resources, Palo Alto’s Unit 42 published their incident response report concerning social engineering. As an area of practice that has always fascinated me—as more art than science—this immediately grabbed my attention and almost forced me to start taking notes. With this in mind, we as a team are heading out over the next few weeks to dig deeper into social engineering and help you discern the golden kernels that you need to access.

Check it out today

News Bytes

- Unit 42 Threat Bulletin – October 2025: Published 21 October 2025, this monthly bulletin by Unit 42 (the threat-research arm of Palo Alto Networks) surfaces multiple emerging threats. Highlights include the self-propagating supply-chain worm “Shai-Hulud”, an advanced supply-chain attack targeting npm packages; detailed technical IOCs; and the identification of a new Chinese-nexus APT, “Phantom Taurus”, targeting government and telecom organisations across Africa, the Middle East, and Asia.
- PacketWatch Cyber Threat Intelligence Report: Published 20 October 2025 by the PacketWatch intelligence team, this bi-weekly briefing highlights: (a) the major breach incident at F5 Networks (source code plus undisclosed vulnerabilities); (b) a list of critical and high-severity vulnerabilities across major platforms (Oracle, Microsoft, Veeam, SAP, 7-Zip, Ivanti); and (c) a renewed emphasis on user-targeted attacks such as credential phishing, fake CAPTCHA software, and fake downloads.
- Disrupting malicious uses of AI (PDF): Released by OpenAI, this October 2025 update details how threat actors are increasingly leveraging multiple AI tools (e.g., using one model for planning and another for execution) and integrating AI into existing cyber-attack workflows rather than inventing wholly new attack methods. The report also gives case studies of misuse (scams, code-signing abuse, social engineering) and explains how defence and detection are adapting.
- Microsoft Digital Defense Report 2025: Lighting the path to a secure future (PDF): Published by Microsoft on 21 October 2025, this annual report provides their threat intelligence view: a major uptick in AI-enabled adversary operations, increasing geopolitical cyber-conflict, supply chain risk, and the imperative for defenders to rethink traditional security models given the speed and scale of modern attacks.
- ENISA Threat Landscape 2025 (PDF): Published 7 October 2025 by ENISA (the European Union Agency for Cybersecurity), this comprehensive PDF analyses 4,875 incidents (1 July 2024–30 June 2025) to map global threat trends: a shift toward mixed, campaign-style operations, AI-enabled threat activity, supply chain convergence, and increased adversary speed. Though published slightly before this week's coverage window, its release is timely and gives context for many of the current week's incidents.

This week's academia

- From Texts to Shields: Convergence of Large Language Models and Cybersecurity (Tao Li, Ya-Ting Yang, Yunian Pan & Quanyan Zhu): This paper explores how large language models (LLMs) are increasingly converging with cybersecurity tasks: for example, using LLMs for vulnerability analysis, network and software security tasks, 5G-vulnerability assessment, generative security engineering, and automated reasoning in defence scenarios. The authors highlight socio-technical challenges (trust, transparency, human-in-the-loop, interpretability) when deploying LLMs in high-stakes security settings, and propose a forward-looking research agenda to integrate formal methods, human-centred design, and organisational policy in LLM-enhanced cyber-operations.
- Adversarial Defense in Cybersecurity: A Systematic Review of GANs for Threat Detection and Mitigation (Tharcisse Ndayipfukamiye, Jianguo Ding, Doreen Sebastian Sarwatt, Adamu Gaston Philipo & Huansheng Ning): This survey conducts a PRISMA-style review (2021–Aug 2025) of how Generative Adversarial Networks (GANs) are being used both as attack tools and as defensive tools in cybersecurity. The authors analyse 185 peer-reviewed studies, develop a taxonomy across four dimensions (defensive function, GAN architecture, cybersecurity domain, adversarial threat model), and identify key gaps: training instability, lack of standard benchmarks, high computational cost, and limited explainability. They propose a roadmap towards scalable, trustworthy GAN-powered defences.
- Securing the AI Frontier: Urgent Ethical and Regulatory Imperatives for AI-Driven Cybersecurity (Vikram Kulothungan): This article examines the ethical and regulatory challenges arising from the deployment of AI in cybersecurity. It traces the historical regulation of AI, analyses current global frameworks (e.g., the EU AI Act), and discusses key issues including bias, transparency, accountability, privacy, and human oversight. The paper proposes strategies for enhancing AI literacy, public engagement, and global harmonisation of regulation in AI-driven cyber-systems.
- A Defensive Framework Against Adversarial Attacks on Machine Learning-Based Network Intrusion Detection Systems (Benyamin Tafreshian & Shengzhi Zhang): The authors propose a multi-layer defensive framework aimed at ML-based Network Intrusion Detection Systems (NIDS), which are vulnerable to adversarial evasion. Their framework integrates adversarial training, dataset balancing, advanced feature engineering, ensemble learning, and fine-tuning. On the benchmark datasets NSL-KDD and UNSW-NB15, they report on average a ~35% increase in detection accuracy and a ~12.5% reduction in false positives under adversarial conditions.
- Cyber Security: State of the Art, Challenges and Future (W.S. Admass et al.): This article presents an overview of the state of the art in cybersecurity: existing architectures, key challenges, and emerging trends globally. It reviews tactics, techniques, and procedures (TTPs), current defence mechanisms, and future research directions.
- DYNAMITE: Dynamic Defense Selection for Enhancing Machine Learning-based Intrusion Detection Against Adversarial Attacks (Jing Chen, Onat Güngör, Zhengli Shang, Elvin Li & Tajana Rosing): This paper introduces DYNAMITE, a framework for dynamically selecting the optimal defence mechanism for ML-based Intrusion Detection Systems (IDS) under adversarial attack. Instead of applying a static defence, DYNAMITE uses a meta-ML selection mechanism to pick the best defence in real time, reducing computational overhead by ~96.2% compared to an oracle while improving F1-score by ~76.7% over a random defence and ~65.8% over the best static defence. A toy sketch of the selection idea follows below.
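To make the defence-selection idea concrete, here is a minimal, hypothetical Python sketch: a small meta-classifier looks at cheap batch-level statistics and picks one of several pre-trained defences, instead of running every defence on every batch. The feature set, training labels, and defence names are illustrative assumptions, not the DYNAMITE authors' implementation.

```python
# Toy sketch of dynamic defence selection (illustrative only; not DYNAMITE's code).
# A small meta-classifier maps cheap batch statistics to the defence that is
# expected to work best on that batch, avoiding the cost of running them all.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

DEFENCES = ["adversarial_training", "feature_squeezing", "input_denoising"]  # placeholder names

rng = np.random.default_rng(1)

# Hypothetical training data: batch-level statistics -> index of the defence
# that performed best on that batch (in practice measured offline).
batch_stats = rng.normal(size=(300, 5))
best_defence = rng.integers(0, len(DEFENCES), size=300)

meta_selector = DecisionTreeClassifier(max_depth=4, random_state=0)
meta_selector.fit(batch_stats, best_defence)

def select_defence(stats: np.ndarray) -> str:
    """Pick a single defence for one incoming batch instead of running them all."""
    idx = int(meta_selector.predict(stats.reshape(1, -1))[0])
    return DEFENCES[idx]

print(select_defence(rng.normal(size=5)))
```

In the paper's setting, the labels would come from offline measurements of which defence best preserved IDS performance under each kind of adversarial traffic; the toy random labels above only show the plumbing.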
Read more

Austin Miller
17 Oct 2025
Save for later

#220: Social Engineering for Counter-Adversaries

Austin Miller
17 Oct 2025
Exploring Unit 42's findings

Don't miss out! Sign up today!

#220: Social Engineering for Counter-Adversaries

Welcome to another _secpro!

This week, we're poking the brain of CISO expert David Gee to deliver some insights which line up nicely with his new book, A Day in the Life of a CISO. We've also included our popular PDF resource again, to help you improve your training sessions and help the non-specialists amongst us make the right moves in the age of AI. Check it out!

Check out _secpro premium

If you want more, you know what you need to do: sign up to the premium and get access to everything we have on offer. Click the link above to visit our Substack and sign up there!

Cheers!
Austin Miller
Editor-in-Chief

This week's article

2025 Unit 42 Global Incident Response Report: Social Engineering Edition

Recently, along with a wealth of other industry-critical information and resources, Palo Alto’s Unit 42 published their incident response report concerning social engineering. As an area of practice that has always fascinated me—as more art than science—this immediately grabbed my attention and almost forced me to start taking notes. With this in mind, we as a team are heading out over the next few weeks to dig deeper into social engineering and help you discern the golden kernels that you need to access.

Check it out today

News Bytes

- Unit 42: PhantomVAI Loader Delivers a Range of Infostealers: Researchers from Unit 42 describe a new loader named PhantomVAI, used to deploy various infostealers (malware that exfiltrates sensitive data). The loader uses techniques like steganography (hiding the payload inside an image file, such as a GIF) and obfuscated PowerShell to download and load the payload. The embedded data (a DLL) is encoded inside images, hiding the payload from simple detection. Once loaded, it communicates with command-and-control servers to pull further stages.
- Unit 42: When AI Remembers Too Much – Persistent Behaviors in AI Agents via Indirect Prompt Injection: Shows a proof of concept demonstrating how adversaries can perform indirect prompt injection against AI agents. The technique doesn’t require a direct user prompt; instead it relies on external content (webpages, documents, metadata) feeding into the agent’s memory or long-term memory subsystem. Once instructions are embedded via external content, they persist across sessions, meaning an attacker can plant malicious instructions that get loaded into the agent’s memory and later used to exfiltrate data, for example by instructing the agent to leak conversation history or other secrets. The attack is stealthy because it uses external content rather than explicit prompts.
- Unit 42: The Golden Scale: Bling Libra and the Evolving Extortion Economy: This threat brief analyzes how extortion actors (including groups using variants like Bling Libra) are evolving. It discusses stolen data, ransom demands, deadlines, the leaking of stolen credentials or data, and extortion notes targeted at executives. The group is apparently coordinating via Telegram channels, recruiting other actors to send extortion notes (e.g. at the executive level), focusing on stolen data (such as Salesforce data), and pressing for payment. They set deadlines (for example, one threat actor set 10 October 2025 as a deadline to pay the ransom or have files leaked).
- CrowdStrike: Campaign targeting Oracle E-Business Suite (Oracle EBS) zero-day CVE-2025-61882: CrowdStrike reports on a campaign targeting the zero-day vulnerability CVE-2025-61882 in Oracle E-Business Suite. This is an unauthenticated remote code execution (RCE) vulnerability (i.e. attackers can exploit it without prior credentials). Oracle disclosed the vulnerability on 4 October 2025, but CrowdStrike observes indicators of likely exploitation in the wild. They note IOCs, commands, and files from Oracle’s advisory, suggesting real-world exploitation.
- Unit 42: 2025 Global Incident Response Report: Social Engineering Edition: A large incident response and threat intelligence report covering social engineering cases from May 2024 to May 2025. Some key findings: social engineering was the top initial access vector in their caseload (~36% of cases), and techniques go well beyond phishing to non-phishing vectors such as help desk manipulation and fake system prompts. Attackers exploit trust, identity workflows, help desk resets, compromised accounts, and more. The report provides recommendations for defenders: just-in-time provisioning, restricting sensitive workflows, data loss prevention, identity correlation, and similar controls. (Check in next week to read our first steps into unpacking this important analysis!)

This week's academia

- Neuromorphic Mimicry Attacks Exploiting Brain-Inspired Computing for Covert Cyber Intrusions (Hemanth Ravipati): Neuromorphic computing, which mimics the brain’s neural structure in hardware, is increasingly used for efficient AI/edge computing. This paper introduces Neuromorphic Mimicry Attacks (NMAs), a novel class of threats that exploit the probabilistic, non-deterministic behavior of neuromorphic chips. By manipulating synaptic weights or poisoning sensory inputs, attackers can mimic legitimate neural activity, thereby evading standard intrusion detection systems. The work includes a theoretical framework, simulation experiments, and proposals for defenses, such as anomaly detection tuned to synaptic behavior and secure synaptic learning. The paper highlights that neuromorphic architectures introduce new cybersecurity risk surfaces.
- APT-LLM: Embedding-Based Anomaly Detection of Cyber Advanced Persistent Threats Using Large Language Models (Sidahmed Benabderrahmane, Petko Valtchev, James Cheney, Talal Rahwan): This paper tackles the hard problem of detecting Advanced Persistent Threats (APTs), which tend to blend into normal system behavior. Their approach, APT-LLM, uses large language models (e.g. BERT, ALBERT) to embed process–action provenance traces into semantically rich embeddings. They then use autoencoder models (vanilla, variational, denoising) to learn normal behavior and flag anomalies. Evaluated on highly imbalanced real-world datasets (some with only 0.004% APT-like traces), they demonstrate substantial gains over traditional anomaly detection methods. The core idea is leveraging the representational strength of LLMs for cybersecurity trace analysis; a toy sketch of the embed-then-reconstruct pattern follows at the end of this issue.
- Precise Anomaly Detection in Behavior Logs Based on LLM Fine-Tuning (S. Song et al.): Insider threats are notoriously difficult to detect because anomalies in user behavior often blur with benign but unusual actions. This paper proposes converting user behavior logs into natural language narratives, then fine-tuning a large language model with a contrastive learning objective (first at a global behavior level, then refined per user) to distinguish between benign and malicious anomalies. The authors also propose a fine-grained tracing mechanism to map detected anomalies back to behavioral steps. On the CERT v6.2 dataset, their approach achieves F1 ≈ 0.8941, outperforming various baseline methods. The method aims to reduce information loss when translating logs to features and to improve interpretability.
- Exposing the Ghost in the Transformer: Abnormal Detection for Large Language Models via Hidden State Forensics (Shide Zhou, Kailong Wang, Ling Shi, Haoyu Wang): As LLMs are embedded into real-world systems, they become potential attack targets (jailbreaks, backdoors, adversarial attacks). This work proposes a detection method that inspects internal hidden states (activation patterns) across layers and uses “hidden state forensics” to detect abnormal behaviors in real time. The approach is claimed to detect a variety of threats (e.g. backdoors, behavioral deviations) with >95% accuracy and low overhead. The method operates without needing to retrain or heavily instrument the model, offering a promising path toward monitoring LLM security in deployment.
- Robust Anomaly Detection in O-RAN: Leveraging LLMs against Data Manipulation Attacks (Thusitha Dayaratne, Ngoc Duy Pham, Viet Vo, Shangqi Lai, Sharif Abuadbba, Hajime Suzuki, Xingliang Yuan, Carsten Rudolph): The Open Radio Access Network (O-RAN) architecture, used in 5G, introduces openness and programmability (xApps), but also novel attack vectors. The authors identify a subtle “hypoglyph” attack: injecting Unicode-level manipulations (e.g. look-alike characters) into data to evade traditional ML-based anomaly detectors. They propose using LLMs (via prompt engineering) to robustly detect anomalies, even in manipulated data, and demonstrate low detection latency (<0.07 s), making the approach potentially viable for near-real-time use in RAN systems. This work bridges wireless systems and AI-based security in a timely domain.
- Generative AI in Cybersecurity: A Comprehensive Review of Future Directions (M. A. Ferrag et al.): This survey covers the intersection of generative AI / LLMs and cybersecurity. It synthesizes recent research on how generative models can be used for threat creation (e.g. adversarial attacks, automated phishing, malware synthesis) and for defense (e.g. automated patch generation, security policy synthesis, anomaly detection). The paper also outlines open challenges and risks (e.g. misuse, model poisoning, hallucination) and proposes a structured roadmap for future research. As the field is evolving rapidly, this review is becoming a frequently cited reference point.
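As a rough illustration of the embed-then-reconstruct pattern that APT-LLM builds on, the sketch below trains a small autoencoder-style regressor on embeddings of benign traces and flags traces whose reconstruction error is unusually high. The embeddings here are random stand-ins; in the paper they come from an LLM over provenance traces, and the autoencoders are far more capable than this toy.

```python
# Minimal sketch (assumptions, not the APT-LLM authors' code) of embedding-based
# anomaly detection: learn to reconstruct "normal" embeddings and flag inputs
# whose reconstruction error is high.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

# Hypothetical stand-in for LLM embeddings of benign process/action traces.
normal_embeddings = rng.normal(0.0, 1.0, size=(500, 32))

# An MLP trained to reproduce its own input acts as a simple autoencoder.
autoencoder = MLPRegressor(hidden_layer_sizes=(8,), max_iter=2000, random_state=0)
autoencoder.fit(normal_embeddings, normal_embeddings)

def anomaly_score(embedding: np.ndarray) -> float:
    """Reconstruction error; higher means less like the benign training data."""
    reconstructed = autoencoder.predict(embedding.reshape(1, -1))
    return float(np.mean((embedding - reconstructed) ** 2))

# Threshold at the 99th percentile of benign scores, then test a shifted trace.
threshold = np.percentile([anomaly_score(e) for e in normal_embeddings], 99)
suspicious_trace = rng.normal(3.0, 1.0, size=32)  # deliberately off-distribution
print(anomaly_score(suspicious_trace) > threshold)  # expected to print True
```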
Read more

Austin Miller
10 Oct 2025
Save for later

#219: Getting a CISO's viewpoint

Austin Miller
10 Oct 2025
Helping beginners see from the top

#219: Getting a CISO's viewpoint

Welcome to another _secpro!

This week, we're poking the brain of CISO expert David Gee to deliver some insights which line up nicely with his new book, A Day in the Life of a CISO. We've also included our popular PDF resource again, to help you improve your training sessions and help the non-specialists amongst us make the right moves in the age of AI. Check it out!

Check out _secpro premium

If you want more, you know what you need to do: sign up to the premium and get access to everything we have on offer. Click the link above to visit our Substack and sign up there!

Cheers!
Austin Miller
Editor-in-Chief

Meet Albus—the first AI Identity Agent—and see why AI-native IGA is set to replace legacy governance tools. Join Lumos on Oct 30, 12–1pm EST for a live webinar with CEO Andrej Safundzic and CTO Aurangazeb Khan. Learn how agentic AI is transforming Identity Governance with autonomous policies, approvals, and reviews—and see Albus, the industry’s first identity agent, in action.

Register Now

This week's article

Rootkits in Focus: A CISO's Perspective

Today, we’re taking a closer look at two kernel-level Linux rootkits that, while discovered a few years ago, still reflect the techniques seen in many of today’s advanced threats: Syslogk and the CMK Rootkit. Syslogk, reported by Avast (part of Gen) in 2022, is a kernel-mode rootkit for Linux based on the older Adore-Ng Linux kernel rootkit. It’s notable for its stealthy behavior: it can hide files, processes, kernel modules, and network connections. What makes it especially evasive is its use of “magic packets”—specific network traffic that acts as a trigger to activate its payload, such as a backdoor, only under certain conditions.

Check it out today

News Bytes

- “State-of-the-Art in Software Security Visualization: A Systematic Review”: This paper reviews and categorises modern techniques for visualising software system security, particularly to support threat detection, compliance monitoring, and security analytics. It argues that traditional textual or numerical approaches are increasingly insufficient as systems become more complex, and proposes a taxonomy (graph-based, metaphor-based, matrix, notation) of visualization approaches. It also discusses gaps and future research directions.
- “Vulnerability Management Chaining: An Integrated Framework for Efficient Cybersecurity Risk Prioritization”: This paper proposes a new integrated framework that combines historical exploitation evidence (Known Exploited Vulnerabilities, KEV), predictive threat modeling (EPSS), and technical impact (CVSS) to better prioritise vulnerabilities. A test over ~28,000 real-world CVEs suggests substantial efficiency gains (14–18×) and large reductions in urgent remediation workload, while maintaining high coverage of actual threats. A minimal sketch of the chaining idea appears after the academia list below.
- “From Texts to Shields: Convergence of Large Language Models and Cybersecurity”: This paper analyses how large language models (LLMs) are being integrated with cybersecurity across multiple dimensions: network/software security, generative/automated security tools, 5G vulnerability analysis, and security operations. It explores both the potential (e.g. AI-driven analytics, automated reasoning) and the challenges (trust, transparency, adversarial robustness, governance). It lays out a research agenda for securing LLMs in high-stakes environments.
- “LLM-Assisted Proactive Threat Intelligence for Automated Reasoning”: This paper investigates how LLMs, combined with real-time threat intelligence (via Retrieval-Augmented Generation systems), can improve detection of and response to emerging threats. Using feeds like KEV, EPSS, and CVE databases, the authors show that their system (the Patrowl framework) handles recently disclosed vulnerabilities better than baseline LLMs, improving real-time responsiveness and reasoning in threat analysis.
- “CAI: An Open, Bug Bounty-Ready Cybersecurity AI”: This research introduces CAI, an open-source AI designed specifically to support bug bounty testing. It benchmarks CAI against human experts in CTF (Capture the Flag) environments and demonstrates that CAI can outperform state-of-the-art results, finding vulnerabilities faster and more efficiently, particularly when humans oversee the system (Human-in-the-Loop). It also shows how CAI can democratise access to powerful security testing tools.
- “A Framework for Evaluating Emerging Cyberattack Capabilities of AI”: This paper argues that current evaluation frameworks for AI in cybersecurity (e.g., CTFs and benchmarks) are inadequate to assess real-world risk, and proposes a comprehensive framework to evaluate emerging AI offensive capabilities. It examines dual-use risks, adversarial models, and practical implications for red/blue teams, defenders, and policymakers.

This week's academia

- SmartAttack: Air-Gap Attack via Smartwatches (Mordechai Guri): Demonstrates a practical ultrasonic covert channel that uses a smartwatch’s microphone as a receiver to exfiltrate data from air-gapped machines. The study measures range, bit rate, the effects of body occlusion and noise, and suggests mitigations for high-security environments. This paper triggered broad media coverage because it shows how everyday wearables can defeat classical air-gap assumptions.
- RAMBO: Leaking Secrets from Air-Gap Computers by Spelling Covert Radio Signals from Computer RAM (Mordechai Guri): Introduces RAMBO, a side channel that programs RAM access patterns to generate detectable electromagnetic/radio emissions from memory buses. It shows how malware can encode and transmit secrets from air-gapped machines (to SDR receivers) and discusses countermeasures. The attack has been widely reported and discussed in the infosec press.
- Security Concerns for Large Language Models: A Survey (Miles Q. Li and Benjamin C. M. Fung): A comprehensive academic survey of emergent security and privacy threats tied to LLMs (prompt injection, jailbreaking, data poisoning/backdoors, misuse for malware and disinformation, and risks from autonomous agents). It summarizes recent studies (2022–2025) and evaluates defense approaches and open problems — highly relevant as LLMs increasingly factor into both offensive and defensive cyber operations.
- Why Johnny Signs with Sigstore: Examining Tooling as a Factor in Software-Signing Adoption in the Sigstore Ecosystem (Kelechi G. Kalu, Sofia Okorafor, Tanmay Singla, Sophie Chen, Santiago Torres-Arias, James C. Davis): A qualitative case study based on practitioner interviews about tooling, usability, and adoption barriers for modern software signing in the Sigstore ecosystem. It offers practical recommendations to improve adoption of signing and provenance tools — directly relevant to ongoing software supply-chain security conversations after high-profile incidents, and already cited in industry and academic discussions about improving supply-chain resilience.
- “LLMs unlock new paths to monetizing exploits” (Nicholas Carlini, Milad Nasr, Edoardo Debenedetti, Barry Wang, Christopher A. Choquette-Choo, Daphne Ippolito, Florian Tramèr, Matthew Jagielski): A technical/academic analysis showing how large language models lower the cost and change the economics of finding and monetizing software vulnerabilities — enabling more targeted, user-specific exploit generation and tailored extortion. The paper provides proof-of-concept demonstrations and argues for new defense strategies and measurements. It has stirred debate about the near-term impact of LLMs on attacker capabilities.
- “Extortionality” in Ransomware Attacks: A Microeconomic Study of Extortion and Externality (Tim Meurs and collaborators): A microeconomic and empirical treatment of ransomware payments and externalities: when victims pay, they may increase incentives for attackers and raise risk for others (an externality). The paper studies the decision drivers for ransom payments and discusses policy implications (should ransom payments be regulated, taxed, or subsidized to reduce social harm?). This work is being referenced in policy discussions and media coverage about whether public institutions should be allowed to pay ransoms.
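The vulnerability-management chaining entry above lends itself to a short worked example. The sketch below is an assumption-laden simplification, not the paper's scoring model: it ranks a backlog by KEV membership first, then EPSS probability, then CVSS base score as a tie-breaker. The CVE identifiers and scores are made up for illustration.

```python
# Illustrative KEV -> EPSS -> CVSS prioritisation (not the paper's exact framework).
from dataclasses import dataclass

@dataclass
class Vuln:
    cve_id: str   # hypothetical identifiers below
    cvss: float   # CVSS v3 base score, 0.0-10.0
    epss: float   # EPSS probability of exploitation, 0.0-1.0
    in_kev: bool  # listed in CISA's Known Exploited Vulnerabilities catalog

def priority(v: Vuln) -> tuple:
    # Confirmed exploitation first, then predicted likelihood, then impact.
    return (v.in_kev, v.epss, v.cvss)

backlog = [
    Vuln("CVE-2025-0001", cvss=9.8, epss=0.02, in_kev=False),
    Vuln("CVE-2025-0002", cvss=7.5, epss=0.63, in_kev=False),
    Vuln("CVE-2025-0003", cvss=8.1, epss=0.10, in_kev=True),
]

# Highest-priority items come out first: the KEV entry, then the high-EPSS one.
for v in sorted(backlog, key=priority, reverse=True):
    print(v.cve_id, priority(v))
```

The point of the ordering is that a medium-CVSS bug that is actually being exploited (KEV) usually deserves attention before a critical-CVSS bug with negligible exploitation likelihood.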
Read more
Austin Miller
03 Oct 2025
Save for later

#218: AI for Beginners

Austin Miller
03 Oct 2025
Interested in something new?

Life doesn't stand still. Neither does cybersecurity. In part, this is because cybersecurity is a concept and concepts can't stand at all—still or otherwise—but that is a concern for another day. If you have a finger on the pulse of the current landscape, you've probably noticed that quite a lot of people have quite a lot to say about AI, its role in cybersecurity, and how the future seems to be changing... and possibly even for the better.

If you're interested in keeping up with this conversation (or you have been living under a rock and need to do some quick catching up), you might like our soon-to-be-available newsletter: CyberAI with Packt. We will be riding the currents of the day, diving into the emerging issues and getting to the heart of the problem with our friends working on the front lines and wanting to show their battle scars. Sound like something interesting? Check out the survey below and tell us what you'd like to see.

Take the survey - get the newsletter

#218: AI for Beginners

A friendly resource for people low down the ladder

Welcome to another _secpro!

This week, we've included a PDF resource to help you improve your training sessions and help the non-specialists amongst us make the right moves in the age of AI. We've also expanded the news we've been poring over and included a few academic essays. Check them out!

- A Global Analysis of Cyber Threats to the Energy Sector: “Currents of Conflict”
- Kaspersky ICS CERT: Dynamics of External and Internal Threats to Industrial Control Systems, Q2 2025
- Threat landscape for industrial automation systems (Kaspersky ICS CERT, Q2 2025)
- Analysis of Publicly Accessible Operational Technology and Associated Risks
- Tenable FAQ on CVE-2025-20333 / CVE-2025-20362: Cisco ASA / FTD Zero-Days Exploited
- Kudelski Security Advisory: Cisco ASA WebVPN & HTTP Zero-Day Vulnerabilities (CVE-2025-20333 / CVE-2025-20362 / CVE-2025-20363)
- Greenbone: “Cisco CVEs 2025: Critical Flaws in ASA & FTD”
- CIRT.GY Advisory: Cisco ASA and FTD Zero-Day Vulnerabilities Actively Exploited in State-Sponsored Attacks
- FortiGuard Labs: “Threat Signal Report – ArcaneDoor Attack (Cisco ASA Zero-Day)”
- Black Arrow Cyber Threat Intelligence Briefing (26 Sept 2025): MFA Bypass, Supply Chain and Airport Disruptions

Check out _secpro premium

If you want more, you know what you need to do: sign up to the premium and get access to everything we have on offer. Click the link above to visit our Substack and sign up there!

Cheers!
Austin Miller
Editor-in-Chief

Here's a little meme to keep you going... (Source: Reddit)

This week's article

Cybersecurity AI FAQs

A cybersecurity professional's worst nightmare often isn't an APT, a skilled hacker, or even a bored script kiddie with time to waste. It's often the most fearsome threat to internal security known to humanity: the average Joe employee. The kinds of errors that the adversary can seize upon are the kinds of errors that the average Joe makes through ignorance - and, often, it's not entirely his fault that he's ignorant about these things. Due to the nature of cybersecurity and cyberthreats, even a curious layman with a strong sense of responsibility for understanding the newest emergent threats doesn't have enough time to get into the nitty-gritty of what turns a seemingly innocent action into the very opening the adversary needs. Because of that, we've put together a handy little 10-point document to share with your coworkers, staple to walls, and build into your training sessions. Click below to check it out!

Get the shareable document here

News Bytes

- A Global Analysis of Cyber Threats to the Energy Sector: “Currents of Conflict”: This arXiv paper provides a novel, geopolitically informed threat-intelligence analysis of cyber threats targeting the energy sector. By applying generative AI to structure raw threat data, the authors map actor origins against target geographies, assess detection tool effectiveness (especially learning-based tools), and highlight evolving trends (including supply chain, third-party, and state-actor activity) in the energy domain. Their findings offer actionable insights into risk exposure and resilience for operators and policymakers.
- Kaspersky ICS CERT: Dynamics of External and Internal Threats to Industrial Control Systems, Q2 2025: This report examines threat activity targeting ICS (Industrial Control Systems) in Q2 2025, breaking down external versus internal threats, the types of malware detected, and penetration depth across network boundaries. Key findings include that ~20.5% of ICS systems blocked some threats, with malware types including spyware, backdoors, malicious scripts, and rogue documents. The report also analyses “borderline” systems where initial external penetration meets internal propagation, highlighting persistent risks in OT infrastructures.
- Threat landscape for industrial automation systems (Kaspersky ICS CERT, Q2 2025): A companion to the previous report, this document focuses specifically on industrial automation systems (e.g., HMIs, SCADA, local control networks) and tracks how often these systems are attacked, what types of malware and scripts are used, and the trends in exposure over time. It also discusses implications for segmentation, detection, and response in critical infrastructure settings.
- Analysis of Publicly Accessible Operational Technology and Associated Risks: This research quantifies and analyses OT devices exposed on the public internet, identifying nearly 70,000 such systems globally using vulnerable protocols (e.g. ModbusTCP, EtherNet/IP, S7). The authors use automated screenshot analysis to reveal exposed HMI/SCADA interfaces, outdated firmware, and predictable configurations. The study underscores how misconfigured or publicly accessible OT systems create dangerous attack paths into critical infrastructure.
- Tenable FAQ on CVE-2025-20333 / CVE-2025-20362: Cisco ASA / FTD Zero-Days Exploited: Tenable’s research team provides a detailed walkthrough of two zero-day vulnerabilities actively exploited in Cisco’s Adaptive Security Appliance (ASA) and Firewall Threat Defense (FTD) products (CVE-2025-20333 and CVE-2025-20362). They explain how these flaws can be chained, the attack surface involved (the VPN web server), the threat actor attribution (UAT4356 / ArcaneDoor), and mitigation strategies. This is timely given the widespread deployment of Cisco ASA in critical networks.
- Kudelski Security Advisory: Cisco ASA WebVPN & HTTP Zero-Day Vulnerabilities (CVE-2025-20333 / CVE-2025-20362 / CVE-2025-20363): This threat research brief gives technical detail on how Cisco ASA vulnerabilities impacting WebVPN and HTTP/HTTPS services are being actively exploited by state-sponsored attackers. It highlights persistence techniques (including firmware and ROM modification), evasion of logging, and the survival of implants across device reboots and updates. Useful for defenders needing to understand the root cause and attack chain.
- Greenbone: “Cisco CVEs 2025: Critical Flaws in ASA & FTD”: Greenbone’s security blog summarises the newly disclosed Cisco CVEs (including CVE-2025-20333 and CVE-2025-20362) and provides context for detection and remediation via their vulnerability scanners. They explain the exploitation risk (especially for unpatched VPN web server configurations) and give guidance for scanning and prioritising vulnerable assets.
- CIRT.GY Advisory: Cisco ASA and FTD Zero-Day Vulnerabilities Actively Exploited in State-Sponsored Attacks: This advisory provides a detailed technical description and IOCs (Indicators of Compromise) for the exploitation of Cisco ASA/FTD zero-days by threat actors, particularly focusing on configuration bypass, persistence, and the importance of isolating impacted devices. It also includes recommendations for network segmentation and migration to supported hardware due to end-of-life concerns.
- FortiGuard Labs: “Threat Signal Report – ArcaneDoor Attack (Cisco ASA Zero-Day)”: FortiGuard provides a technical briefing on the ArcaneDoor espionage campaign, tracking its evolution, exploitation patterns, and implications for Cisco firewall deployments. The report discusses how the attackers maintain persistence, perform reconnaissance and lateral movement, and how defenders should respond at scale.
- Black Arrow Cyber Threat Intelligence Briefing (26 Sept 2025): MFA Bypass, Supply Chain and Airport Disruptions: In their weekly digest, Black Arrow highlights several important cyber events: (1) the exploitation of MFA bypass and third-party/supply-chain weaknesses contributing to prolonged cyber incidents, (2) disruption at European airports via attacks targeting Collins Aerospace’s Muse software, and (3) the increasing sophistication of ransomware groups focusing on data theft. While not a formal academic paper, this briefing is authored by credible threat intelligence analysts and includes incident patterns, risks, and mitigation recommendations.

This week's academia

- Ransomware 3.0: Self-Composing and LLM-Orchestrated (Md Raz, Meet Udeshi, P.V. Sai Charan, Prashanth Krishnamurthy, Farshad Khorrami, Ramesh Karri): Introduces a research prototype and threat model for LLM-orchestrated ransomware that uses large language models at runtime to synthesize payloads, perform reconnaissance, and carry out extortion in a closed loop. The paper evaluates this capability across personal, enterprise, and embedded environments and presents behavioral signals and telemetry to help build defenses. This work sparked media attention because it shows how low-cost LLMs could materially lower the barrier to generating effective malware (a research demonstration, not a deployed criminal campaign).
- A Survey of Attacks on Large Language Models (Wenrui Xu, Keshab K. Parhi): A systematic survey cataloguing attacks against LLMs and LLM-based agents (training-phase attacks, inference-phase attacks, availability/integrity attacks). The paper reviews representative methods and defenses, organizes threat taxonomies, and highlights open research challenges for securing deployed LLM systems. Useful background for anyone tracking LLM security trends and countermeasures.
- To Patch or Not to Patch: Motivations, Challenges, and Implications for Cybersecurity (Jason R. C. Nurse, Institute of Cyber Security for Society / University of Kent): A focused review of why organizations delay or avoid applying security patches. The paper synthesizes industry and academic literature to identify incentives and disincentives (resource limits, legacy systems, risk perceptions, vendor relationships, human factors) and discusses implications for vulnerability management and policy. Highly relevant given recurring mass-exploitation incidents (Log4Shell, WannaCry, supply-chain incidents) where delayed patching was critical.
- Unraveling Log4Shell: Analyzing the Impact and Response to the Log4j Vulnerability (John Doll, Carson McCarthy, Hannah McDougall, Suman Bhunia; Dept. of Computer Science & Software Engineering, Miami University): A comprehensive technical measurement and analysis of the Log4Shell (Log4j / CVE-2021-44228) incident: discovery timeline, exploitation patterns, measured attack volumes, impacted sectors, and mitigation/response strategies. Useful both as a historical case study and as a guide to improving open-source component hygiene and incident response practices. A small log-scanning sketch related to this incident follows below.
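For readers revisiting the Log4Shell case study above, the following is a deliberately simple sketch that scans web server logs for the well-documented "${jndi:" lookup marker. It is not taken from the paper and only catches unobfuscated probes; real campaigns used nested and encoded variants (for example "${${lower:j}ndi:") that need richer normalisation before matching.

```python
# Rough scan for unobfuscated Log4Shell probe strings in access logs.
# Illustrative starting point only, not a detection product.
import re
import sys

# Matches ${jndi:ldap://...}, ${jndi:rmi://...}, ${jndi:dns://...}, etc.
JNDI_PATTERN = re.compile(r"\$\{jndi:(ldap|ldaps|rmi|dns)://", re.IGNORECASE)

def scan(log_path: str) -> None:
    """Print every log line containing a plain JNDI lookup marker."""
    with open(log_path, "r", errors="replace") as fh:
        for lineno, line in enumerate(fh, start=1):
            if JNDI_PATTERN.search(line):
                print(f"{log_path}:{lineno}: possible Log4Shell probe: {line.strip()}")

if __name__ == "__main__":
    # Usage: python scan_jndi.py access.log other.log
    for path in sys.argv[1:]:
        scan(path)
```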
Read more

Austin Miller
26 Sep 2025
Save for later

#217: Privacy and You

Austin Miller
26 Sep 2025
A last look at Hemang Doshi's advice for AI, auditing, and privacy

You may have outsourced CIAM to the engineering team, but security still gets the call when there’s a breach. It’s time for you to take control, not the blame. Frontegg gives security teams direct control over the policies that safeguard your customer-facing application. No more waiting for developers to implement step-up MFA or manage compliance updates.

Start Your Free Trial
Take a look at the Security Suite directly

#217: Privacy and You

Another look at CISA and a survey of the landscape

Welcome to another _secpro!

In cybersecurity, there's no such thing as standing still. While standing still might mean "going with the flow" in ordinary life, it means the very opposite when it comes to jousting with the adversary - indeed, standing still means "letting the flow go past you"! That's why we in the _secpro team are always pushing ourselves and pushing our readers to pick up ideas, develop skills, and stay above water in the rushing waves of "the flow"!

That's why this week we are beginning a four-part series that looks into the deeds and needs of a CISA-trained professional - and, more importantly, how you can get to that plateau too. With the help of Hemang Doshi's fantastic book, we're taking the necessary steps to move from IT generalist or junior secpro into the higher echelons of auditing. Sound good? Check out this week's excerpt: Data Privacy Program and Principles.

Check out _secpro premium

If you want more, you know what you need to do: sign up to the premium and get access to everything we have on offer. Click the link above to visit our Substack and sign up there!

Cheers!
Austin Miller
Editor-in-Chief

Advance your technical career with actionable, practical solutions: AWS re:Invent 2025 Las Vegas. Transform your skills at AWS re:Invent 2025. Master new AWS services, join immersive workshops, and network with top cloud innovators. As a re:Invent attendee, you'll receive a 50% discount code towards any AWS Certification exam. Our 2025 event catalog is now available!

Explore the Event

Here's a little meme to keep you going... (Source: Reddit)

This week's article

Data Privacy Program and Principles

AI is revolutionizing various industries, including auditing. Traditionally, auditing has been a manual and time-consuming process, requiring auditors to sift through large volumes of data to identify discrepancies and ensure compliance. However, with the advent of AI, the audit process is becoming more efficient, accurate, and insightful. AI can analyze vast amounts of data quickly, identify patterns, and even predict potential risks, making it an invaluable tool in modern auditing.

Read the rest here!

News Bytes

- Cisco ASA / FTD Zero-Days Under Active Exploitation: On 25 September, Cisco and CISA published security advisories confirming that multiple zero-day vulnerabilities affecting Cisco ASA / FTD (firewall and VPN) products are being actively exploited. Two of these (CVE-2025-20333, CVE-2025-20362) were confirmed to have been exploited in the wild. Threat actors have leveraged advanced evasion techniques (disabling logs, intercepting CLI commands, modifying boot processes) and deployed bootkits such as RayInitiator combined with malware (e.g., LINE VIPER) to persist across reboots and firmware upgrades. The urgency prompted CISA to issue Emergency Directive 25-03, mandating that U.S. federal agencies inventory, assess, and mitigate vulnerable Cisco devices.
- Continued Attack Campaign on Cisco Firewalls (ROMMON / Bootkit-level Persistence) (PDF): Following the zero-day disclosures, deeper forensics revealed that the adversaries are not merely exploiting web/VPN logic flaws, but targeting the ROM Monitor (ROMMON) / boot environment of ASA devices. The RayInitiator bootkit persists in the boot chain and loads LINE VIPER, a malware module that can intercept commands, bypass VPN AAA, suppress logs, and embed itself into core ASA processes (e.g. lina). Some devices lack Secure Boot / Trust Anchor support, making them more vulnerable. These mechanisms impede forensic detection and complicate patching strategies — for example, even after reboots or upgrades, malicious modules can survive.
- Scattered Spider: Retail Service Desk Exploits Renewed Focus: Throughout the week, multiple analyses surfaced reaffirming that the hacking collective Scattered Spider (aka UNC3944 / Octo Tempest) continues to rely heavily on social engineering of service desks and help desks to gain initial footholds in enterprise networks. A new PDF—Cross-Sector Mitigations: Scattered Spider—jointly produced by sector cyber-information sharing bodies, outlines updated TTPs (tactics, techniques, procedures) and countermeasures for financial services, IT/retail, health, and other sectors. In one prominent case, attackers impersonated internal staff, tricked the helpdesk into resetting MFA and disabling controls, and escalated privileges inside M&S / Co-op systems.
- Forensic Visualization Toolkit: Enhancing Threat Hunting: In a freshly published academic work (11 September 2025), researchers present “Enhancing Cyber Threat Hunting – A Visual Approach with the Forensic Visualization Toolkit”. The toolkit offers interactive visualizations of forensic and telemetry data (network, file access, process graphs) to assist threat hunters in spotting anomalies that may evade automated detection systems. The authors argue that combining human analytical insight with visualization accelerates detection of stealthy threats, especially those embedded in normal-looking activity windows. The paper includes realistic case studies and performance comparisons, making it a timely reference for SOC / IR teams aiming to ramp up threat-hunting maturity. (A toy process-graph sketch in this spirit appears after the academia list below.)
- Burnout in Cybersecurity: A Strategic Risk Report: While not a direct breach event, a notable paper published earlier in 2025 — “A Roadmap to Address Burnout in the Cybersecurity Profession” — has gained renewed attention this week in security circles. The work synthesizes findings from a multi-disciplinary workshop involving practitioners, academics, and ex-NSA cyber operators. It outlines the human, organizational, and workflow stresses contributing to attrition and mental fatigue, and presents a roadmap of interventions (training, rotation, psychological support, team-based structures) to mitigate the erosion of security capacity. Given current pressure on SOC/IR teams (e.g. responding to high-tempo incidents like the Cisco zero-days), this issue is increasingly treated as a strategic risk in cybersecurity planning.
- Digital Forensics & Risk Mitigation Strategy for Modern Enterprises: Another academic contribution gaining traction is “Comprehensive Digital Forensics and Risk Mitigation Strategy for Modern Enterprises”, published February 2025. The paper walks through a simulated case of a large identity/data-analytics firm under attack and develops an integrated strategy covering pre-incident readiness (forensic architecture design, monitoring), live response, post-incident lessons, and regulatory compliance. It emphasizes adaptive AI/ML techniques, integration of threat intelligence into forensics workflows, and continuous “forensic readiness” as a discipline. In the context of emerging threats (e.g. boot-level persistence, identity-based service desk attacks), the paper serves as a robust blueprint for mature enterprise response programs.

This week's academia

- Adversarial Machine Learning: A Taxonomy and Terminology (A. Vassilev et al., NIST Trustworthy & Responsible AI group): A comprehensive NIST report that builds a clear taxonomy and standardized terminology for adversarial machine learning (AML). It describes attacker goals and capabilities across the ML life cycle, categorizes AML attack and defense types, and outlines current technical and measurement challenges for trustworthy AI in security-sensitive systems. Highly cited and used as a baseline by both researchers and practitioners.
- On Adversarial Attack Detection in the Artificial Intelligence Era (N. Al Roken and collaborators): A survey and analysis of detection techniques for adversarial attacks on ML models, contrasting classic concealment/malware tactics with modern adversarial-example threats. The paper evaluates state-of-the-art detection approaches and points to gaps where attackers are leveraging large models and automation to evade defenses. Useful for defenders designing layered ML security.
- A Defense-Oriented Model for Software Supply Chain Security (E. A. Ishgair and coauthors): Introduces the AStRA graph-based model (Artifacts, Steps, Resources, Principals) to represent software supply chains and reason about security objectives and defenses from the bottom up. It applies the model to case studies and maps past supply-chain attacks to show where defenses succeed or fail — a practical roadmap for research and industry focusing on supply-chain mitigations (SBOMs, build integrity, provenance, etc.).
- Securing Automotive Software Supply Chains (Marina Moore, Aditya Sirish A. Yelgundhalli, Justin Cappos): An NDSS paper that examines the unique risks in automotive software supply chains (ECUs, OTA updates, third-party components). It evaluates real automotive update pipelines, shows practical attack scenarios, and recommends defenses tailored to the automotive context (signing, reproducible builds, hardened update channels). Very relevant given recent high-profile industrial supply-chain incidents.
- Managing Deepfakes with Artificial Intelligence: Introducing a Business/Privacy Calculus (G. Vecchietti and collaborators): An academic analysis of deepfake threats and defenses from both technical and socio-economic angles. It proposes an AI-assisted detection/mitigation framework and a privacy/business calculus for organizations to weigh risks against countermeasure costs (useful for enterprises facing deepfake-enabled fraud or reputational attacks). Timely as synthetic-media use explodes.
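To give a flavour of the process-graph views mentioned in the Forensic Visualization Toolkit entry, here is a tiny sketch that builds a parent-to-child process graph from hypothetical endpoint telemetry and prints the ancestry of a suspicious process. The event format, process names, and the networkx-based approach are illustrative assumptions, not the toolkit's own code.

```python
# Toy process-graph construction for threat hunting (illustrative assumptions).
import networkx as nx

# Hypothetical telemetry: (parent_pid, child_pid, image_name)
events = [
    (1, 100, "explorer.exe"),
    (100, 200, "winword.exe"),
    (200, 300, "powershell.exe"),  # Office spawning PowerShell is a classic hunt lead
    (300, 400, "rundll32.exe"),
]

graph = nx.DiGraph()
for parent, child, image in events:
    graph.add_node(child, image=image)
    graph.add_edge(parent, child)

def ancestry(pid: int) -> list[str]:
    """Walk back up the parent chain so an analyst can see how a process arrived."""
    chain = []
    while pid in graph and graph.nodes[pid]:
        chain.append(graph.nodes[pid].get("image", "?"))
        preds = list(graph.predecessors(pid))
        if not preds:
            break
        pid = preds[0]
    return list(reversed(chain))

print(ancestry(400))  # ['explorer.exe', 'winword.exe', 'powershell.exe', 'rundll32.exe']
```

A real toolkit would render this graph interactively and enrich nodes with file, network, and user context; the value for a hunter is seeing the full chain behind one suspicious leaf process at a glance.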
Read more