Chapter 3: Measuring an Offensive Security Program

Little literature can be found that discusses or provides ideas on how to measure the effectiveness of a red team or an offensive security program. Management teams tend to want easy solutions to difficult problems.

When people ask for best practices for measuring security, especially red teaming and pen testing, I just smile: blindly applying someone else's idea to a seemingly similar problem, without considering the unique context and conditions you operate under, tends to produce suboptimal solutions. But I'm a red teamer, and that's how we think. We challenge everything.

This chapter covers ideas for measuring an offensive security program and what has worked for me in the past to convey problems, share state, and encourage action. By no means is there a single right or best way to measure progress and maturity.

Some methods are useful for comparing systems with each...

Understanding the illusion of control

"All models are wrong, but some are useful" is a famous quote by George Box, and it applies particularly to measuring red teaming and offensive security engineering. It's good to have a model, perform analysis, and attempt to measure and improve, but do not make the model its own goal. The goal is to improve the security and quality of products and services by reducing the overall risk and, at the same time, building a well-functioning team. Chasing a vague understanding of a maturity model and climbing its ladder might, in the end, be counterproductive, especially when it comes to red teaming.

A standard model might create the illusion of control and could therefore be misleading. Putting things into context is necessary. So, feel free to adjust, accept, or reject what works for your team and organization.

One of the most difficult tasks in red teaming is measuring the maturity of the program itself. There have certainly been stages...

The road to maturity

In the beginning, a lot of the processes are entirely undefined. There is no clear basis for focusing efforts on particular targets because the risk or value of assets is not well defined within the organization. Testing might appear ad hoc, and no repeatable processes are in place.

At this stage, the offensive team might be primarily driven by engineering or business tasks around shipping services, rather than defining its own objectives to simulate threats to the organization.

It's also not unlikely that there is only one pen tester performing offensive security work, and that person might not even be a dedicated resource. Growth typically happens organically once the program demonstrates its impact and its value becomes clear to the organization.

Strategic red teaming across organizations

My growth in the security space came from initially testing software and systems directly for vulnerabilities and exploiting them. Afterwards, online...

Threats – trees and graphs

Threat, or attack, trees break down the anatomy of how a component might be compromised. They help analyze how an asset might be attacked by breaking down individual attack steps into smaller sub-steps. Some of the first work exploring these concepts in computer security was apparently done by Amoroso in Fundamentals of Computer Security Technology (1994), and a few years later by Schneier (https://www.schneier.com/academic/archives/1999/12/attack_trees.html).

On paper, an attack tree seems like a great idea; it allows you to break down an attack into detailed steps toward achieving an objective. However, using this technique one might end up with many attack trees, which can be hard to manage. Hence, tooling is needed.
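
As an illustration, the following is a minimal sketch of how one small attack tree could be captured in code; the goal, the AND/OR structure, and the sub-steps are hypothetical examples rather than content from any particular assessment.

```python
# A minimal sketch of an attack tree as nested data; goals and sub-steps are hypothetical.
attack_tree = {
    "goal": "Read the production customer database",
    "type": "OR",  # any one child path achieves the goal
    "children": [
        {
            "goal": "Steal database credentials",
            "type": "AND",  # all child steps are required
            "children": [
                {"goal": "Compromise an engineer's workstation"},
                {"goal": "Extract credentials from configuration or memory"},
            ],
        },
        {"goal": "Exploit an injection flaw in the reporting service"},
    ],
}

def leaf_steps(node):
    """Yield the leaf attack steps, i.e., the concrete actions a test would exercise."""
    children = node.get("children", [])
    if not children:
        yield node["goal"]
    for child in children:
        yield from leaf_steps(child)

print(list(leaf_steps(attack_tree)))
```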

How about graphs? Modeling adversarial behavior and associated threats and relationships between components using graphs can be a powerful way to explore connections between systems and components.
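
As a sketch of this idea, the snippet below models a handful of hypothetical assets and attack steps as a directed graph with the networkx library and enumerates the simple paths an adversary could take; all node names are assumptions made for illustration.

```python
# A minimal sketch of an attack graph using networkx; node names are hypothetical.
import networkx as nx

graph = nx.DiGraph()
# Each edge means "an attacker who controls A can reach B".
graph.add_edge("phishing email", "workstation foothold")
graph.add_edge("workstation foothold", "cached credentials")
graph.add_edge("cached credentials", "domain admin")
graph.add_edge("workstation foothold", "file share")
graph.add_edge("file share", "domain admin")

# Enumerate every simple path from initial access to the crown jewel.
for path in nx.all_simple_paths(graph, "phishing email", "domain admin"):
    print(" -> ".join(path))
```

In a fuller knowledge graph, edges would typically carry metadata such as the technique used or an estimated likelihood of success, so that paths can be ranked and prioritized rather than only enumerated.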

One possible way to measure...

Defining metrics and KPIs

Measuring the effectiveness of an offensive security program and how it helps the organization remove uncertainty around its actual security posture and risks is one of the more difficult questions to explore and answer. When it comes to metrics, we need to distinguish between what I refer to as internal versus external adversarial metrics.

Tracking the basic internal team commitments

Internal metrics are those that the pen test team uses to measure its own work and hold itself accountable. Some organizations call these commitments or objectives and key results (OKRs). Initially, the metrics might be quite basic and comparable to project management KPIs (a simple tracking sketch follows the list):

  • Performing x number of penetration tests over a planning cycle and delivering them on time
  • Committing to performing a series of training sessions in H2
  • Delivering a new Command and Control toolset in Q4
  • Delivering a custom C2 communication channel by Q1
  • Growing the team by two more pen...
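
As referenced above, a minimal sketch of tracking such commitments programmatically might look as follows; the commitment names, dates, and the on-time calculation are hypothetical illustrations rather than a prescribed format.

```python
# A minimal sketch for tracking internal team commitments; names and dates are hypothetical.
from dataclasses import dataclass
from datetime import date
from typing import Optional

@dataclass
class Commitment:
    name: str
    due: date
    delivered: Optional[date] = None  # None means not yet delivered

commitments = [
    Commitment("Penetration test: payments service", date(2020, 6, 30), date(2020, 6, 12)),
    Commitment("H2 training series", date(2020, 12, 15)),
    Commitment("Custom C2 communication channel", date(2020, 3, 31), date(2020, 4, 20)),
]

delivered = [c for c in commitments if c.delivered is not None]
on_time = [c for c in delivered if c.delivered <= c.due]
print(f"Delivered {len(delivered)}/{len(commitments)}, on time {len(on_time)}/{len(delivered)}")
```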

Test Maturity Model integration (TMMi®) and red teaming

Most likely, you will be familiar with, or have at least heard of, the Capability Maturity Model (CMM®) from Carnegie Mellon. The TMMi®, developed by the TMMi Foundation, explores and defines a framework for measuring test maturity and process improvements. More information can be found at https://www.tmmi.org.

The TMMi® is based on the CMM® and defines maturity stages for software testing. In this section, we will explore how this framework can be applied to offensive security testing and red teaming. Next, we will put the five levels of maturity, as defined by TMMi®, into a penetration testing context.

This is an experimental idea to help frame and allow discussions on how you could measure the maturity of your internal penetration test team. Throughout my career, I have been fascinated with quality assurance and testing, especially security testing. What...

MITRE ATT&CK™ Matrix

MITRE has developed a framework to catalog TTPs. It's an excellent, systematic way to tackle known TTPs. The attack matrix can be a great source to implement test cases to ensure that detections are in place and working.

Having a systematic approach to ensure that known TTPs are detected is a great way to grow defense capabilities. However, the systematic approach of building and testing (and hopefully automating) these could—but probably should not—be performed by the offensive security team. The task is a rather generic engineering task that doesn't necessarily need the unique creative skill set of the offensive team.
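
If such automation is built, a hedged sketch of the underlying bookkeeping could be as simple as a coverage map keyed by ATT&CK technique IDs; the technique IDs below are real ATT&CK identifiers, but the detection results are hypothetical placeholders rather than output from any specific tool.

```python
# A minimal sketch of a detection coverage map keyed by ATT&CK technique IDs.
# The IDs are real ATT&CK techniques; the results are hypothetical placeholders.
detection_results = {
    "T1059": True,   # command and scripting interpreter activity was detected
    "T1003": True,   # credential dumping was detected
    "T1021": False,  # lateral movement via remote services was not detected
}

covered = [tid for tid, detected in detection_results.items() if detected]
gaps = [tid for tid, detected in detection_results.items() if not detected]
print(f"Detection coverage: {len(covered)}/{len(detection_results)} techniques")
print("Gaps to prioritize:", ", ".join(gaps) if gaps else "none")
```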

The offensive team should help augment the attack matrix and discover missing areas and new techniques. Most important, however, is that someone in your organization becomes familiar with the ATT&CK Matrix to ensure a holistic understanding of the publicly known TTPs.

MITRE ATT&CK Matrix includes analysis...

Remembering what red teaming is about

With all the discussions about maturity, measurements, and some of the risk management integration ideas that we covered in this chapter, it could be easy to forget why an adversarial red team in an organization is established in the first place.

Part of the job of a red teaming program is to help remove uncertainty and drive cultural change. The big challenge with risk management and measurement is to come up with quantifiable metrics that enable better decision-making. The more that is known, the less uncertainty there is. Penetration testers and red teamers are there to help discover more of the unknowns. Also, red teaming itself does not stop with penetration testing nor with offensive computer security engineering; it is much broader in nature.

Along the lines of an old Persian proverb, also stated by Donald Rumsfeld in a DoD briefing (https://archive.defense.gov/Transcripts/Transcript.aspx?TranscriptID=2636), these are the possible...

Summary

In this chapter, we described a variety of ways to measure an offensive security program and the maturity stages the program might go through. We highlighted strategies and techniques for developing a mature program, starting out with basic ways to track findings. This included a discussion of the mandatory metadata required to build successful reports and provide appropriate insights to the organization and its leadership.

We explored a wide range of graphics and charts for visualizing findings that can be leveraged during reporting and debriefs.

As the next step, we explored attack and knowledge graphs as ways to represent information such as assets and threats and to highlight the paths that adversaries take through the network. Afterward, we discussed a set of key metrics and objectives with practical examples and explored how Monte Carlo simulations can provide a totally different way to analyze and discuss threats.

As an exercise, we explored...

Questions

  1. What are some useful metadata fields for tracking red team findings? Name three.
  2. What are the differences between qualitative and quantitative scoring systems? Which ones does your organization use for tracking impact and likelihood to manage risks?
  3. Name two tools for creating and visualizing an attack graph.
  4. What are the metrics and KPIs that might appear on an attack insights dashboard? Can you name three?