In a continuously evolving digital world, where services have become increasingly dematerialized, cybersecurity has become strategic. Unfortunately, this vision is not always shared by all stakeholders in an organization. Depending on your point of view, whether you manage finance or deal directly with cybersecurity issues, your willingness to invest in cybersecurity initiatives will differ. However, the need to align cybersecurity priorities across an organization becomes obvious once the organization suffers a security breach.
This chapter will introduce the general threat landscape, allowing us to understand adversaries and their motivations, as well as the overall security environment. This will help us understand their aims and methods before they can add our name to their hunting board.
Organizations often rely on red and blue teams (whether internal or outsourced) to enhance their security posture. This arrangement works well in theory, but it is a different story in real life. We will describe the current issues and pitfalls with this binary approach, and suggest the need for a new methodological framework that relies on multiple purple team strategies.
In this chapter, we're going to cover the following main topics:
- General introduction to the threat landscape
- Types of threat actors
- Key definitions for purple teaming
- Challenges with today's approach
- Regulatory landscape
General introduction to the threat landscape
In this section, we are going to dive into the threat landscape by looking at some notable threat reports from cybersecurity vendors. This will show us which techniques are most often leveraged to break into organizations. We will also try to develop a common understanding of what a threat is and why today's threat landscape forces us to tackle cyber risks with a 360° visibility approach.
Threat trends and reports
Each year, multiple organizations from different sectors are targeted by threat actors. Due to the diversity of the attackers' skills, published vulnerabilities, attack vectors, and inventiveness, it is vital to maintain awareness of these elements to better prepare our defense strategies. To help us with that, one of the most useful sources of information comes from worldwide cybersecurity firms that are continuously facing current threats in every region and industry sector. These firms also rely on their own products to collect telemetry information and extract insights from cyber threats.
Some firms' reports have proven to be valuable and demonstrated a good representation of the current threat landscape. Among those, we can mention the following (non-exhaustive) list of relevant reports:
- Microsoft Digital Defense Report
- CrowdStrike® 2021 Global Threat Report
- Mandiant M-Trends Insights into Today's Top Cyber Trends and Attacks
- Trellix Advanced Threat Research Report
- SANS 2021 Cyber Threat Intelligence Survey
- Palo Alto Networks 2021 Unit 42 Ransomware Threat Report
- Verizon 2021 Data Breach Investigations Report
If we extract the similarities between all these reports, we can rapidly identify common trends that help us understand the threat landscape. Perhaps surprisingly, and in contrast to what people commonly think, zero-day vulnerabilities are very rarely involved.
A zero-day is a highly sensitive vulnerability unknown to the product developer and exploited before any patch has been issued. It is very expensive to develop a zero-day exploit, and once used, the risk of public disclosure of the vulnerability and payload becomes high. Therefore, the return on investment for the attacker is not very attractive, except in specific circumstances usually linked to nation-state-sponsored cyber operations. Furthermore, considerable skill is required to find the vulnerability, develop a working and stable exploit, and implement an actionable payload, and any failure in the attack could expose, or give hints about, the identity of the attacker, which could be leveraged by law enforcement agencies.
Without going into too much detail about its geopolitical context, we can mention one famous cyberattack that leveraged several zero-day exploits, and that was Stuxnet. This piece of malware required a highly skilled team of developers building and testing for five years, and it was jointly created by at least two nation-states to compromise and sabotage Iran's nuclear program.
Nowadays, the term zero-day is commonly used to refer to known vulnerabilities without publicly available exploit code. In reality, this kind of vulnerability would be better named a one-day vulnerability. Here are some recent major vulnerabilities of this kind that gained high visibility in the press:
- Microsoft Exchange Server-Side Request Forgery (SSRF) and Remote Command Execution (RCE): Vulnerabilities such as CVE-2021-27065 allow an attacker to take control of mailboxes through the Messaging Application Programming Interface (MAPI) protocol and execute arbitrary code.
- Pulse Secure Connect VPN: Vulnerability CVE-2021-22893 allows remote arbitrary code execution on the Pulse Secure gateway.
- Fortigate SSL-VPN: Path traversal vulnerability CVE-2018-13379 allows an unauthenticated attacker to leak currently connected users' credentials.
- Citrix Netscaler Remote Command Execution (RCE): Vulnerability CVE-2019-19781 allows an unauthenticated attacker to execute malicious code remotely.
These vulnerabilities were all related to internet-facing devices, some of them being security equipment, which all led to global attack campaigns. The obvious lesson learned from these exploited vulnerabilities is that patch management is key, especially for exposed services. In addition, organizations must keep watching and monitoring new vulnerabilities affecting their products.
This is a typically complex process: organizations usually lack an up-to-date inventory and the resources to perform urgent patching, and they have to maintain a heterogeneous information system composed of dozens, if not hundreds, of different products. The number of vulnerabilities published every day doesn't help. In addition, common vulnerabilities and exposures (CVEs) usually lack context (the Common Vulnerability Scoring System (CVSS) score helps a bit, but it's not perfect). Therefore, actionable remediation plans are hard to define and to follow realistically. We will see later in the book how a purple teaming approach can dramatically reduce the attacker's window of opportunity for exploitation.
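To make the prioritization problem concrete, here is a minimal sketch of how a remediation plan could rank findings. The inventory, CVSS scores, and exposure flags below are hypothetical; the idea is simply that internet-facing assets should be patched first:

```python
# Hypothetical vulnerability findings: (CVE ID, CVSS base score, internet-facing?)
findings = [
    ("CVE-2021-22893", 10.0, True),   # exposed VPN gateway
    ("CVE-2020-0601",   8.1, False),  # internal workstation
    ("CVE-2018-13379",  9.8, True),   # exposed SSL-VPN portal
    ("CVE-2021-36934",  7.8, False),  # internal, local privilege escalation
]

def priority(finding):
    """Naive risk score: CVSS weighted up when the asset is exposed."""
    cve, cvss, exposed = finding
    return cvss * (2.0 if exposed else 1.0)

# Patch the highest-priority findings first
patch_order = sorted(findings, key=priority, reverse=True)
for cve, cvss, exposed in patch_order:
    print(f"{cve}  cvss={cvss}  internet-facing={exposed}")
```

A real prioritization would also factor in exploit availability, asset criticality, and compensating controls, which is exactly the context that a raw CVSS score lacks.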
We can see from the threat reports mentioned previously that zero-day vulnerabilities are rarely used to get initial access into an information system. However, vulnerable public-facing assets are a common "way in" for attackers. In particular, the adoption of cloud services and, recently, work-from-home architecture has dramatically increased our internet exposure, making it even harder for defenders.
Exploiting exposed vulnerable devices is not the only technique leveraged by threat actors to target organizations. Another very common way to get a foothold in a victim's machine is related to social engineering attacks, and more specifically, phishing attacks. Indeed, why would an attacker invest effort or money into potentially complex perimeter attacks when people are still one of the weakest links in an organization? In 2020, 36% of data breaches started with a phishing email, as stated by the Verizon 2021 Data Breach Investigations Report.
We can also mention another trendy technique in recent years, which is credential reuse. Leveraging public leaks from various websites and services could allow an attacker to collect and create a practical password dictionary. Humans make mistakes, we all do, and reusing a password is one of them. This classic vulnerability is exploited quite easily to gain access within an organization's system.
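On the defensive side, a minimal way to act on credential reuse is to screen passwords against a corpus of known-leaked credentials. The tiny leak set below is made up for illustration; a real deployment would compare hashes against a much larger corpus (for example, via a k-anonymity lookup) rather than store plaintext anywhere:

```python
import hashlib

# Hypothetical corpus of SHA-1 hashes of passwords seen in public leaks
leaked_hashes = {
    hashlib.sha1(p.encode()).hexdigest()
    for p in ["password123", "Summer2021!", "letmein"]
}

def is_leaked(password: str) -> bool:
    """Return True if the password appears in the known-leak corpus."""
    return hashlib.sha1(password.encode()).hexdigest() in leaked_hashes

print(is_leaked("Summer2021!"))   # True: a reused, leaked password
print(is_leaked("x9#Tr!vq2LmZ"))  # False: not in the corpus
```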
Another recent trend is the supply-chain attack. Although this technique can be quite expensive and time-consuming to prepare, it is as powerful as a zero-day attack. We can therefore safely assume that, in most cases, it will be leveraged by nation-state attackers. The SolarWinds hack was a perfect example of a supply-chain attack: the attackers broke into the network of SolarWinds, one of the leaders in IT monitoring software, and injected malicious code (Sunburst) into the official update pipeline of its Orion software. This malicious update was then downloaded and installed by more than 18,000 customers.
To conclude this section, let's highlight the main strategies used by attackers for initial access: unpatched vulnerability exploitation, social engineering-based attacks, zero-day exploitation, and supply-chain attacks.
But really, what is a threat?
The figure shows a hierarchical view of risk components, helping us understand where threats sit in the overall risk picture. Risk is always represented with two dimensions: one is its likelihood (or probability) of occurrence, and the other is its impact on an asset. We can therefore read the diagram at its third level: a risk is the likelihood (probability) of a threat exploiting a vulnerability in an asset.
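This two-dimensional definition is what classic qualitative risk matrices encode. The sketch below is one such convention; the 1-to-5 scales and the score thresholds are arbitrary choices, not a standard:

```python
def risk_score(likelihood: int, impact: int) -> str:
    """Qualitative risk rating from likelihood and impact, each rated 1 (low) to 5 (high)."""
    score = likelihood * impact
    if score >= 15:
        return "high"
    if score >= 6:
        return "medium"
    return "low"

# A likely threat exploiting a vulnerability on a critical asset
print(risk_score(likelihood=4, impact=5))  # high
# An unlikely event with minor impact
print(risk_score(likelihood=1, impact=2))  # low
```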
As our main focus is on threats, and more specifically adversarial threats (as opposed to environmental and accidental threats), the hierarchy omits the other types of threats, as well as the different components of vulnerabilities and assets.
In addition, we can divide a threat into three main components: its intent, opportunity, and capability. All three components must be met for a threat to exist and, therefore, to be relevant to your threat profile. For example, a child may have the opportunity (by accessing a parent's computer) and the capability (having learned how to hack) to exploit a vulnerability, but they would also need a trigger or a reason to perform that action. Only then would they become a threat relevant to your organization. On the other hand, many (if not all) organizations have people or groups of people with the intent and the opportunity to do harm but who lack the capability.
This leads us to the observation that the capability component has become more and more accessible in recent years. The proliferation of free courses, hacking tools, and frameworks such as Metasploit, PowerSploit, Empire, and others has made offensive security skills easier for would-be threat actors to obtain. This is a recurring debate within the infosec community: when Proof of Concept (PoC) exploit code is made publicly available to anyone, does the benefit to the community outweigh the benefit to threat actors?
Finally, the rise of cybercrime-as-a-service has removed barriers to entry to the cybercrime market, making advanced offensive capabilities available to actors who would not be fully formed threats with only the intent and opportunity components.
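The three-component definition above can be expressed as a simple predicate: an actor only becomes a relevant threat when intent, opportunity, and capability are all present. The dataclass below is just an illustration of that definition, not a modeling recommendation:

```python
from dataclasses import dataclass

@dataclass
class Actor:
    intent: bool       # a trigger or reason to act
    opportunity: bool  # access to a vulnerable asset
    capability: bool   # the skills and tooling to exploit it

def is_threat(actor: Actor) -> bool:
    """An actor is a relevant threat only when all three components are met."""
    return actor.intent and actor.opportunity and actor.capability

# The child from the example: opportunity and capability, but no intent
child = Actor(intent=False, opportunity=True, capability=True)
# A disgruntled insider: intent and opportunity, but no capability
insider = Actor(intent=True, opportunity=True, capability=False)
print(is_threat(child), is_threat(insider))  # False False
```

Cybercrime-as-a-service effectively flips the `capability` flag to `True` for anyone willing to pay, which is why it so sharply expands the pool of relevant threats.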
Knowing the composition of a threat – that is, its intent, opportunity, and capability – we will briefly look back at the history of cybersecurity and demonstrate why a new approach is needed to tackle today's threats.
What posture should be adopted regarding the current threat landscape?
It is true that if we look at past decades, people often tended to build large castles with big walls to combat cyber threats.
While it is mandatory to build resilient architecture and implement passive defense, history showed us that this is not sufficient to tackle evolving cyber threats. That is why an active defense approach is mandatory nowadays.
Another very important paper emphasizing the need for a broader approach is the NIST Framework for Improving Critical Infrastructure Cybersecurity. Without getting into too much detail, this paper highlights the need for prevention but also for detection and response capabilities. This key understanding changes our position to an assume-breach mindset.
In fact, this can be easily observed by describing the relationship between risk and controls. Several types of controls exist, but not all of them sit at the same place in the timeline of a risk event. As an example, an antivirus solution might help an organization to prevent, while a backup solution would help the same organization to respond to (or, more precisely, recover from) a risk event. Let's examine the bow-tie view of a risk event to understand this concept:
In Figure 1.2, we can read the graph from left to right – a threat exploits a vulnerability affecting an asset, therefore causing an impact on the organization. As you can see, three types of controls are in the way of the risk event occurring:
- Preventive controls, which would prevent a risk event from occurring
- Detective controls, which would help to detect the occurrence of a risk but not prevent it
- Reactive controls, which would help to mitigate the impact of a risk event but not prevent it
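Reading the bow-tie from left to right, each control sits either before the event (preventive), at the moment it occurs (detective), or after it (reactive). A simple mapping might look like the sketch below; the assignments are illustrative, and many real controls play more than one role:

```python
# Example controls positioned along the bow-tie timeline of a risk event.
controls = {
    "antivirus":               "preventive",  # blocks the event before it occurs
    "firewall":                "preventive",
    "SIEM alerting":           "detective",   # spots the event as it happens
    "EDR telemetry":           "detective",
    "backups":                 "reactive",    # limits impact and enables recovery
    "incident response plan":  "reactive",
}

def controls_of_type(kind: str) -> list[str]:
    """List all controls playing the given role on the bow-tie."""
    return [name for name, role in controls.items() if role == kind]

print(controls_of_type("reactive"))
```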
Again, this emphasizes the need for a proactive approach to cybersecurity. What is important to keep in mind is that when an adversary gets a foothold in our networks, it is not the end. They will need some more time to achieve their goal and that should allow us, the defenders, to detect and respond to the intrusion. Purple teaming will help us build and improve our security controls and, in particular, give us the 360° view necessary to survive in today's threat landscape.
Types of threat actors
A far cry from the 90s, when teenage hackers sat in their bedrooms late at night trying to break into systems for the thrill and the challenge, the typical threat actor today looks quite different.
Nowadays, attackers' motivations are less noble and mostly related to financial interests, and the market is growing. Some studies, blogs, and articles state that cybercrime profits are higher than those of all other crimes combined, or that, if cybercrime were a country, it would rank among the top 10 countries with the highest GDP. While we are not here to debate those numbers, we can safely say that cybercrime has grown in both profit and popularity.
Interestingly, it seems that cybercrime-as-a-service, that is, organized groups selling or renting tools, infrastructure, and services, generates more profit than the crimes themselves, allowing new business models to emerge. Threat actors now specialize in certain areas, such as initial access, infrastructure rental, ransom operations, and so on.
Of course, financial gain is not the only objective observed among threat actors. A common representation of threat actor types is based on their intents and objectives. Variations in the definitions of types exist between vendors, blog posts, papers, talks, and books, but overall, the picture looks like this:
- Advanced persistent threat (APT): Usually state-sponsored or nation-state actor groups that sit in the IT infrastructure for an extended period of time, with objectives such as cyberespionage. Sometimes an APT can be linked with organized cybercrime.
- Organized cybercrime: Mainly motivated by financial interests, they have several methods, such as extortion, ransomware, crypto mining, and so on.
- Hacktivist: Individuals or groups breaking into computers for political or social reasons. Defacement of websites is a common method for hacktivists.
- Insider threat: Employees, business associates, contractors, or trusted parties who try to steal data or abuse their access to break into other systems or exfiltrate and leak data.
- Script kiddies: Low-level attackers that use already existing programs and scripts to perform basic malicious operations.
The Center for Internet Security has a similar inventory of threat actors, but also adds terrorist organizations.
Several security vendors have their own classification and naming conventions when it comes to threat actors. Let's go through some of them.
CrowdStrike described its naming conventions in its latest threat report. Adversaries are named mainly using animal names. Bear actors are linked to Russia, Kitten to Iran, Panda to China, and Spider to cybercrime, just to mention a few. As an example, Cozy Bear is a Russian threat actor likely linked to the Foreign Intelligence Service of the Russian Federation, SVR, and it is also likely the same threat actor as APT29 or Yttrium, which are names from other vendors.
Microsoft does not have an official statement on its naming conventions, but Jeremy Dallman, Senior Director at the Microsoft Threat Intelligence Center (MSTIC), stated in an interview with the Security Unlocked podcast that the MSTIC is using the periodic table of elements as a basis for its names, with no real logic behind it. They even tested dinosaur names! Yttrium is the naming convention for the threat actor that is supposed to be APT29 for Mandiant or Cozy Bear for CrowdStrike.
Palo Alto Networks does not have an official statement on their naming conventions, but if a threat actor already has a common name in the infosec community, they will use it.
Naming conventions can be an issue in the cyber threat intelligence (CTI) community. For example, old actors can be renamed by other vendors or duplicates can be created, which makes it hard for organizations to keep track of and follow threat actors.
Also, it is important to mention that security vendors often observe different things in terms of campaigns and Indicators of Compromise (IoCs), leading to new threat actor names. Different data is collected and only part of the full picture can be seen by each organization, which is known as collection bias, as stated by Robert M. Lee in his talk, Threat Intelligence Naming Conventions: Threat Actors and Other Ways of Tracking Threats. He explains that each security vendor has its own dataset and will only analyze the parts of this data that they deem interesting. Apart from this bias, he also highlights the fact that some tend to focus solely on the malware data dimension, whereas the victimology and infrastructure dimensions are not leveraged in the way they should when following the Diamond Model of Intrusion Analysis. Such bias can lead to CTI analysts keeping track of malware developers but neglecting malware operators.
A word on attribution
Attributing a cyberattack to a country exposes an organization to geopolitical considerations. As an example, at the time of writing, Mandiant (previously Mandiant-FireEye) does not attribute the attack on SolarWinds to the Foreign Intelligence Service of Russia (SVR), whereas the US government does. Of course, Mandiant is not protecting any special interests by avoiding the finger-pointing exercise, but unless an organization has extreme confidence in the identity of an attacker, which in this specific case probably only another intelligence service can have, knowing that the SVR is behind the attack does not bring any value to the majority of defenders.
In fact, it does not even help 99% of organizations to better protect themselves. On the other hand, clustering attribution does make sense in a way that it lets us identify groups that target specific organizations, countries, and industries, and that own specific infrastructure and sets of methods. This can help us prioritize efforts in improving our security posture by evaluating our defenses against those groups' tactics, techniques, and procedures (TTPs). In fact, this is the exact entry point to purple teaming, and in the next chapters, we will cover how CTI can help us identify which threats are relevant to us and how they operate, in order to simulate their TTPs and improve our security controls.
Key definitions for purple teaming
We will first see what the different teams look like within an organization, such as what a red and blue team is, before digging into recent key concepts that are often misunderstood or used interchangeably, like cyber range, breach attack simulation, and adversary emulation. We will also briefly describe a new standard terminology, which is threat-informed defense. However, we will not yet tackle purple teaming, as this will be described thoroughly in the next chapter.
The red team
The red team, also called the offensive team, is a term that originally came from military war simulations and became popular in the early 2000s within the infosec community. The idea is that this team will mimic the known threat actors' TTPs in order to perform real-life attack scenarios, trying to think and act like the enemy.
Contrary to usual penetration testing engagements, the red team (composed of ethical hackers) will try to exploit larger scopes. Social engineering techniques, physical access attempts, and unpredictable attack scenarios are usually allowed. Typical red team operations include the following:
- Mailing a package containing a rogue Wi-Fi access point to an employee who is on vacation, giving the team a potential entry point inside the building without having to pass any physical security controls.
- Dropping USB keys containing malicious payloads at the entrance of the building, expecting that someone will find and plug them in.
- Coming dressed as a maintenance guy (maybe with a ladder, tools, and so on) and trying to bypass physical access restrictions this way to obtain LAN physical access, server room access, or worse, stealing a workstation by pretending they have to repair it.
- Performing advanced social engineering attacks based on phishing, phone calls, postal mail, email, and so on.
As we can see, we are far from the standard penetration testing with these examples, but in this approach, the objective is to simulate a threat actor that would like to infiltrate the corporate network by any means necessary and go as deep as possible.
In addition to standard penetration testing tools, they will use a dedicated red team infrastructure to hide their offensive operations as much as possible, and rely on more advanced exploitation tools such as Cobalt Strike, a commercial red team solution that has recently also been used frequently by threat actors.
A feature of red team engagements is that the blue team is usually not aware of the operations, as they are meant to test real-life blue team detection and response capabilities and assess the organization's overall cyber resilience. The red team members do, however, have permission from the organization's management for all their activities.
The blue team
In opposition to the red team, the blue team's main objective is to defend the organization against internal and external threats. The team's main responsibilities and expectations can be listed as follows:
- Prepare for defense (using at least the technologies listed hereafter).
- Be able to anticipate threats before they happen (thanks to threat intelligence, vulnerability watch, regular audits, and so on).
- Detect malicious activities, risky users, and suspicious behaviors to protect the organization.
- Manage vulnerabilities with passive (vulnerability watch) and active (scanning and assessment) processes.
- Respond to any cyber incidents.
- Ensure all defense mechanisms are set up and working properly.
- Continuously improve defense based on lessons learned, new threats, and adversary TTPs.
- Provide information and key performance indicators (KPIs) to management.
The blue team typically relies on the people, process, and technology triad:
- People: Security awareness, security analysts (usually junior for triaging and senior for case handling), detection engineers, forensic specialists, malware analysts, threat intelligence analysts, developers, DevSecOps, system engineers, and SOC/blue team managers. In smaller organizations or businesses, it is common to see multiple roles owned by one person.
- Process: Usual NIST/SANS-based incident response process (preparation, identification, containment, eradication, recovery, and lessons learned), internal security policies, standard operating procedures (SOPs), and playbooks or guidelines.
- Products and technologies: Security information and event management (SIEM) as one of the main tools for SOCs and blue teams, defined or provided detection use cases, endpoint detection and response (EDR), intrusion detection systems (IDSs), network packet capture platforms, a threat intelligence platform (TIP), a ticketing/case management system, digital forensic tools, security orchestration, automation, and response (SOAR), reverse engineering tools (IDA, Ghidra, and so on), trap systems (honeypots, honeytokens, and so on), and vulnerability management platforms.
Blue teams are usually part of a Security Operations Center (SOC), with multiple analyst tiers organized in the following way: Tier 1 for triaging (basically, determining if an alert is a false positive or a true incident), Tier 2 for standard incident handling, and Tier 3 for complex cases (Subject Matter Expert (SME) analysis, malware analysis, and forensic investigation).
Usually, the red and blue teams do not really collaborate. The red team attacks the organization without informing the blue team (for better adversary emulation), and very few post-mortem activities are performed. The next section demonstrates what could be improved and how each side can be combined into a powerful synergy thanks to the purple teaming approach.
In some contexts, additional team colors are introduced, often referred to as the rainbow teams or the infosec color wheel. We will not discuss the relevance of those naming conventions, but here are some definitions we can find online. They also include the concept of blue, red, and purple teams:
- The yellow team, or the Builders, is the team that builds infrastructure and applications.
- The orange team is the mixing of the red and yellow teams, to ease knowledge transfer from an attack perspective to the builders.
- The green team is the mixing of the blue and yellow teams to allow the better building of defenses by incorporating the yellow view with the blue needs.
Other resources, such as the regulatory framework from the Saudi Arabian Monetary Authority, introduce the concepts of the green team as a test manager provided by the regulator to supervise the intelligence-led red team exercises as opposed to the concept of mixing the blue and yellow teams. It also introduces the white team as a limited number of experts from the tested organization aware of the exercise.
Knowing all the different colored hats a defender can take within an organization is not critical for the rest of the book, but we should understand the difference between red and blue teams at a minimum. Let's now deep-dive into some key concepts in cybersecurity that recently became more and more popular.
Cyber range
Cyber ranges are simulations and representations of an organization's existing systems, networks, tools, and applications, running interactively to safely enable hands-on cybersecurity training and security posture testing.
In an ideal situation, this should include simulated traffic, replicated web pages, exposed services, and interfaces similar to what can be found within the organization.
Cyber ranges provide an environment where the blue and red teams can work closely together to improve security capabilities and sharpen security analysis skills. They are used by professionals, cybersecurity analysts, law enforcement, incident handlers, students, trainers, and organizations.
Now, let's see how breach attack simulation solutions differ from cyber range solutions.
Breach attack simulation
Considered a form of advanced security testing, breach attack simulation (BAS) is part of the purple teaming arsenal. It is relatively new, as the term first appeared in Gartner's Hype Cycle for Threat-Facing Technologies, 2017 report.
Originally, blue team defenses were tested during red team exercises, but this approach is not automated, and its coverage is partial because it depends on the red team operators' preferences and skills, which can vary dramatically from one operator to another.
BAS allows security engineers to replay attacks to and from any perimeter (external, internal, endpoints), manually or in an automated way, relying on dedicated solutions. These solutions classify and normalize the generated attacks, map them to existing frameworks (such as MITRE ATT&CK), check whether they were blocked or detected, and finally deliver a report.
The main advantage of this approach is the continuous updates from the vendors and the community allowing organizations to test new attacks and TTPs. Therefore, it helps us improve defenses in a continuous and automated fashion.
These tools also allow continuous monitoring of the health of existing detection and prevention use cases, to ensure they are still effective and working properly. The automated approach also removes the risk of human error during tests.
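The core loop of a BAS platform can be caricatured in a few lines: run a catalog of attack test cases, tag each with its ATT&CK technique ID, record whether it was blocked or detected, and emit a report. The technique IDs below are real ATT&CK identifiers, but the test cases and their results are, of course, made up:

```python
# Minimal sketch of a BAS-style result report. Each simulated attack is
# mapped to a MITRE ATT&CK technique and marked blocked and/or detected.
results = [
    {"technique": "T1566.001", "name": "Spearphishing attachment", "blocked": True,  "detected": True},
    {"technique": "T1059.001", "name": "PowerShell execution",     "blocked": False, "detected": True},
    {"technique": "T1021.002", "name": "SMB lateral movement",     "blocked": False, "detected": False},
]

def report(results):
    """Summarize coverage: gaps are attacks neither blocked nor detected."""
    gaps = [r["technique"] for r in results if not (r["blocked"] or r["detected"])]
    blocked = sum(r["blocked"] for r in results)
    detected = sum(r["detected"] for r in results)
    return {"total": len(results), "blocked": blocked, "detected": detected, "gaps": gaps}

print(report(results))
```

The value of a commercial BAS solution lies less in this loop than in the continuously updated attack catalog and the safe execution of the test cases on production-like infrastructure.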
Let's now look at adversary emulation.
Adversary (attack) emulation
The general concept is to use threat intelligence reports and frameworks (ATT&CK, for example) to select the specific (generally advanced) threat actors that may be interested in compromising you, and then extract the TTPs they use. It can also help managers answer the question, "Could the recent attack seen in the news happen to us?"
MITRE ATT&CK mapping is incredibly useful as a reliable source of information, as it allows analysts to have a clear understanding of the TTPs for each attack layer (initial access, privilege escalation, lateral movement, and so on) that are used by each threat actor.
Such an adversary emulation package typically provides the following:
- A specific description of the group and its TTPs, classified using the MITRE ATT&CK reference model
- An adversary emulation plan
- A spreadsheet to fill during the test for coverage evaluation
Even if the choice of this APT group could be considered limited (and it has not been updated since 2018), the selected TTPs are still relevant at the time of writing, and the plan can still serve as an effective starting point in the adversary emulation process. Also, MITRE and the cybersecurity community are growing stronger and starting to provide free adversary emulation plans for organizations to use themselves.
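The coverage spreadsheet filled in during the test boils down to comparing the TTPs in the emulation plan against those the blue team actually detected. The technique IDs below are real ATT&CK identifiers, but the plan and detection results are hypothetical:

```python
# Hypothetical TTPs from an adversary emulation plan (ATT&CK technique IDs)
plan_ttps = {"T1078", "T1053", "T1003", "T1021"}

# Techniques the blue team actually detected during the exercise
detected_ttps = {"T1078", "T1003"}

coverage = len(plan_ttps & detected_ttps) / len(plan_ttps)
missed = sorted(plan_ttps - detected_ttps)

print(f"Detection coverage: {coverage:.0%}")
print(f"Techniques to work on: {missed}")
```

The `missed` list is the actionable output: it tells the blue team exactly which detection gaps to close before the next iteration of the exercise.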
Finally, adversary emulation also focuses on the human dimension, and this will help the blue teams to test and improve their skills and capabilities to respond to a threat. BAS solutions, on the other hand, will mainly focus on the validation of existing security controls. The difference between BAS and adversary emulation is well described by Scythe in its blog post, The Difference Between Cybersecurity Simulation vs Cybersecurity Emulation. We will also deep dive into the difference between simulation and emulation in Chapter 9, Purple Team Infrastructure.
Threat-informed defense, in a few words, is exactly what purple teaming is trying to achieve. In the next chapter, we will see in more detail what it is and how it works; meanwhile, MITRE's definition of the threat-informed defense approach can be found at https://www.mitre.org/news/focal-points/threat-informed-defense.
Challenges with today's approach
As we just saw, the different teams (red, blue, and more) have different objectives, constraints, and approaches in a cybersecurity environment. They lack a standardized methodology for collaboration, which causes issues for both teams and weakens the overall security posture of the organization.
Additionally, though each team experiences problems specific to it, blue teams face particular issues that a new approach to security teams could help to tackle.
As a defender or an ethical hacker, it is very likely that you recognize some (if not all) of these issues. We briefly demonstrated how purple teaming could help everyone to solve some of the problems we are facing with today's approach. Before deep-diving into the purple teaming chapter, we will finish this chapter with an overview of the regulatory landscape. Once again, this will highlight the need for a new approach, but observed this time from the point of view of regulators.
Regulatory landscape
Even though regulators are often late adopters, we are seeing numerous initiatives that tackle some of the issues discussed in this chapter and tend to drive organizations toward the purple teaming approach. In general, the financial industry's regulators lead the way. Here, we will briefly explore some of the regulatory frameworks that have been proposed and applied in recent years.
The G7 (previously the G8) has a dedicated group working on cybercrime and has created several cyber policies for its member countries. The G-7 Fundamental Elements for Threat-Led Penetration Testing (G7FE-TLPT) was created in 2016 to help organizations incorporate real-world scenarios into their risk management controls through penetration testing exercises.
The Bank of England developed CBEST Intelligence-Led Testing for CBEST members in 2016 to help organizations evaluate their cyber resilience by mimicking the actions of real threat actors.
In 2016, the Hong Kong Monetary Authority (HKMA) published its Cybersecurity Fortification Initiative, composed of three pillars. The first one, the Cyber Resilience Assessment Framework (C-RAF), describes several types of cyber assessment, including one called Intelligence-led Cyber Attack Simulation Testing (iCAST). The framework extends the scope of traditional penetration testing engagements by including detection and response evaluation, not only from a technological perspective but also from a human and procedural perspective.
In 2018, the European Central Bank released the TIBER-EU framework, which describes how to implement the European framework for threat intelligence-based ethical red teaming. Similar to the CBEST framework from the Bank of England, it helps organizations mimic attackers in order to evaluate the cyber resilience of their people, process, and technology security controls.
The same year, the Global Financial Markets Association (GFMA) published A Framework for the Regulatory use of Penetration Testing in the Financial Services Industry. It highlights the need for a more collaborative approach to penetration testing and promotes the integration of threat intelligence within the planning phase of the assessment. This framework is mainly intended for regulators, as they increasingly require financial services firms to perform mandatory penetration tests.
Also in 2018, the Association of Banks in Singapore (ABS) published its guidelines, Red Team: Adversarial Attack Simulation Exercises. The paper helps organizations develop, plan, and execute adversarial attack simulation exercises (referred to as AASE in the paper). This guideline also helps to differentiate between cyber range, penetration testing, automated attack simulation, and advanced adversary attack simulation assessments.
All of the frameworks mentioned are trying to solve issues around penetration testing. Specifically, all of them integrate some form of threat intelligence into penetration testing exercises in order to perform assessments that are more realistic with regard to the current threats facing organizations. In addition, they all highlight the need for debriefing discussions between all stakeholders at the end of the security assessment to maximize the post-mortem (lessons learned) activities.
Finally, even though this last point is not relevant to everyone, in some of these frameworks the regulator acts as a participant in the exercise, which allows it to benefit from real-world experience that helps it understand its industry's threat landscape. Let's hope regulators will make good use of that experience and intelligence across their industries to provide applicable and prioritized actions and recommendations for organizations.
Summary
This chapter sets the tone for the rest of the book: we now understand the current threat landscape and the fact that a purely passive defense will always fail eventually. An assume-breach mindset is necessary for every organization to shift toward a more proactive defense approach.
We also understand cybersecurity threat actors and their intents, as well as the common terminology, concepts, and issues around blue and red teams. We have highlighted the need for a new model to improve our cyber resilience, and we have briefly seen that regulators are following the trend by providing new assessment frameworks.
The next chapter will help us define and understand how purple teaming can be applied within our organizations.
Further reading
- The Sliding Scale of Cyber Security:
- Framework for Improving Critical Infrastructure cybersecurity:
- Cyber Threat Actors from Center for Internet Security:
- Threat Intelligence Naming Conventions: Threat Actors, & Other Ways of Tracking Threats by Rob M. Lee:
- Diamond Model of Intrusion Analysis:
- Cyber Ranges from NIST NICE:
- MITRE APT3 adversary emulation plan:
- The Difference Between Cybersecurity Simulation vs Cybersecurity Emulation by Scythe: