How-To Tutorials - Security

Getting Started with Spring Security

Packt
14 Mar 2013
14 min read
(For more resources related to this topic, see here.)

Hello Spring Security

Although Spring Security can be extremely difficult to configure, its creators have been thoughtful and have provided us with a very simple mechanism to enable much of the software's functionality with a strong baseline. From this baseline, additional configuration allows a fine level of detailed control over the security behavior of our application.

We'll start with an unsecured calendar application and turn it into a site that's secured with rudimentary username and password authentication. This authentication serves merely to illustrate the steps involved in enabling Spring Security for our web application; you'll see that there are some obvious flaws in this approach that will lead us to make further configuration refinements.

Updating your dependencies

The first step is to update the project's dependencies to include the necessary Spring Security .jar files. Update the Maven pom.xml file from the sample application you imported previously to include the Spring Security .jar files that we will use in the following few sections.

Remember that Maven will download the transitive dependencies for each listed dependency. So, if you are using another mechanism to manage dependencies, ensure that you also include the transitive dependencies. When managing the dependencies manually, it is useful to know that the Spring Security reference includes a list of its transitive dependencies.

pom.xml

```xml
<dependency>
  <groupId>org.springframework.security</groupId>
  <artifactId>spring-security-config</artifactId>
  <version>3.1.0.RELEASE</version>
</dependency>
<dependency>
  <groupId>org.springframework.security</groupId>
  <artifactId>spring-security-core</artifactId>
  <version>3.1.0.RELEASE</version>
</dependency>
<dependency>
  <groupId>org.springframework.security</groupId>
  <artifactId>spring-security-web</artifactId>
  <version>3.1.0.RELEASE</version>
</dependency>
```

Downloading the example code

You can download the example code files for all Packt books you have purchased from your account at https://www.packtpub.com. If you purchased this book elsewhere, you can visit https://www.packtpub.com/books/content/support and register to have the files e-mailed directly to you.

Using Spring 3.1 and Spring Security 3.1

It is important to ensure that all of the Spring dependency versions match and all of the Spring Security versions match; this includes transitive versions. Since Spring Security 3.1 builds against Spring 3.0, Maven will attempt to bring in Spring 3.0 dependencies. This means that, in order to use Spring 3.1, you must either explicitly list the Spring 3.1 dependencies or use Maven's dependency management features to ensure that Spring 3.1 is used consistently. Our sample applications provide an example of the former option, which means that no additional work is required by you.

In the following code, we present an example fragment of what is added to the Maven pom.xml file to utilize Maven's dependency management feature, ensuring that Spring 3.1 is used throughout the entire application:

```xml
<project ...>
  ...
  <dependencyManagement>
    <dependencies>
      <dependency>
        <groupId>org.springframework</groupId>
        <artifactId>spring-aop</artifactId>
        <version>3.1.0.RELEASE</version>
      </dependency>
      <!-- ... list all Spring dependencies (a list can be found in
           our sample application's pom.xml) ... -->
      <dependency>
        <groupId>org.springframework</groupId>
        <artifactId>spring-web</artifactId>
        <version>3.1.0.RELEASE</version>
      </dependency>
    </dependencies>
  </dependencyManagement>
</project>
```

If you are using Spring Tool Suite, any time you update the pom.xml file, ensure that you right-click on the project, navigate to Maven | Update Project…, and select OK to update all the dependencies.

Implementing a Spring Security XML configuration file

The next step in the configuration process is to create an XML configuration file representing all the Spring Security components required to cover standard web requests. Create a new XML file named security.xml in the src/main/webapp/WEB-INF/spring/ directory with the following contents. Among other things, the following file demonstrates how to require a user to log in for every page in our application, provide a login page, authenticate the user, and require the logged-in user to be associated with ROLE_USER for every URL:

src/main/webapp/WEB-INF/spring/security.xml

```xml
<?xml version="1.0" encoding="UTF-8"?>
<bean:beans xmlns="http://www.springframework.org/schema/security"
  xmlns:bean="http://www.springframework.org/schema/beans"
  xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
  xsi:schemaLocation="http://www.springframework.org/schema/beans
    http://www.springframework.org/schema/beans/spring-beans-3.1.xsd
    http://www.springframework.org/schema/security
    http://www.springframework.org/schema/security/spring-security-3.1.xsd">

  <http auto-config="true">
    <intercept-url pattern="/**" access="ROLE_USER"/>
  </http>

  <authentication-manager>
    <authentication-provider>
      <user-service>
        <user name="user1@example.com" password="user1"
          authorities="ROLE_USER"/>
      </user-service>
    </authentication-provider>
  </authentication-manager>
</bean:beans>
```

If you are using Spring Tool Suite, you can easily create Spring configuration files by navigating to File | New Spring Bean Configuration File. This wizard allows you to select the XML namespaces you wish to use, making configuration easier by not requiring the developer to remember the namespace locations, and helping to prevent typographical errors. You will need to manually change the schema definitions as illustrated in the preceding code.

This is the only Spring Security configuration required to get our web application secured with a minimal standard configuration. This style of configuration, using a Spring Security-specific XML dialect, is known as the security namespace style, named after the XML namespace (http://www.springframework.org/schema/security) associated with the XML configuration elements.

Let's take a minute to break this configuration apart so we can get a high-level idea of what is happening. The <http> element creates a servlet filter that ensures the currently logged-in user is associated with the appropriate role. In this instance, the filter ensures that the user is associated with ROLE_USER. It is important to understand that the name of the role is arbitrary. Later, we will create a user with ROLE_ADMIN and will allow this user access to additional URLs that our current user does not have access to.

The <authentication-manager> element is how Spring Security authenticates the user. In this instance, we utilize an in-memory data store to compare a username and password. Our example and explanation of what is happening are a bit contrived. An in-memory authentication store would not work for a production environment. However, it allows us to get up and running quickly. We will incrementally improve our understanding of Spring Security as we update our application to use production-quality security.
Users who dislike Spring's XML configuration will be disappointed to learn that there isn't an alternative annotation-based or Java-based configuration mechanism for Spring Security, as there is with the Spring Framework. There is an experimental approach that uses Scala to configure Spring Security, but at the time of this writing, there are no known plans to release it. If you like, you can learn more about it at https://github.com/tekul/scalasec/. Still, perhaps in the future we'll see the ability to easily configure Spring Security in other ways. Although annotations are not prevalent in Spring Security, certain aspects of Spring Security that apply security elements to classes or methods are, as you'd expect, available via annotations.

Updating your web.xml file

The next steps involve a series of updates to the web.xml file. Some of the steps have already been performed because the application was already using Spring MVC. However, we will go over these requirements to ensure that these more fundamental Spring requirements are understood, in the event that you are using Spring Security in an application that is not Spring-enabled.

ContextLoaderListener

The first step of updating the web.xml file is to ensure that it contains the o.s.w.context.ContextLoaderListener listener, which is in charge of starting and stopping the Spring root ApplicationContext interface. ContextLoaderListener determines which configurations to use by looking at the <context-param> tag for contextConfigLocation. It is also important to specify where to read the Spring configurations from. Our application already has ContextLoaderListener added, so we only need to add the newly created security.xml configuration file, as shown in the following code snippet:

src/main/webapp/WEB-INF/web.xml

```xml
<context-param>
  <param-name>contextConfigLocation</param-name>
  <param-value>
    /WEB-INF/spring/services.xml
    /WEB-INF/spring/i18n.xml
    /WEB-INF/spring/security.xml
  </param-value>
</context-param>
<listener>
  <listener-class>
    org.springframework.web.context.ContextLoaderListener
  </listener-class>
</listener>
```

The updated configuration will now load the security.xml file from the /WEB-INF/spring/ directory of the WAR. As an alternative, we could have used /WEB-INF/spring/*.xml to load all the XML files found in /WEB-INF/spring/. We chose not to use the *.xml notation in order to have more control over which files are loaded.

ContextLoaderListener versus DispatcherServlet

You may have noticed that o.s.web.servlet.DispatcherServlet specifies a contextConfigLocation component of its own:

src/main/webapp/WEB-INF/web.xml

```xml
<servlet>
  <servlet-name>Spring MVC Dispatcher Servlet</servlet-name>
  <servlet-class>
    org.springframework.web.servlet.DispatcherServlet
  </servlet-class>
  <init-param>
    <param-name>contextConfigLocation</param-name>
    <param-value>
      /WEB-INF/mvc-config.xml
    </param-value>
  </init-param>
  <load-on-startup>1</load-on-startup>
</servlet>
```

DispatcherServlet creates an o.s.context.ApplicationContext that is a child of the root ApplicationContext interface. Typically, Spring MVC-specific components are initialized in the ApplicationContext interface of DispatcherServlet, while the rest are loaded by ContextLoaderListener. It is important to know that beans in a child ApplicationContext (such as those created by DispatcherServlet) can reference beans of their parent ApplicationContext (such as those created by ContextLoaderListener), but the parent ApplicationContext cannot refer to beans of the child ApplicationContext.
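The original text illustrates this relationship with a diagram of a rootBean and a childBean. As a rough stand-in for that diagram, here is a minimal, hypothetical sketch using plain Spring APIs; the bean names and the standalone setup are illustrative only (in the web application, these contexts are created for you by ContextLoaderListener and DispatcherServlet):

```java
import org.springframework.context.support.ClassPathXmlApplicationContext;
import org.springframework.context.support.GenericXmlApplicationContext;

public class ContextHierarchyDemo {
    public static void main(String[] args) {
        // Root context, analogous to the one ContextLoaderListener starts
        ClassPathXmlApplicationContext root =
                new ClassPathXmlApplicationContext("security.xml");

        // Child context, analogous to the one DispatcherServlet creates
        GenericXmlApplicationContext child = new GenericXmlApplicationContext();
        child.setParent(root);
        child.load("mvc-config.xml");
        child.refresh();

        // Succeeds: a child context can resolve beans defined in its parent
        System.out.println(child.getBean("rootBean"));

        // Would fail with NoSuchBeanDefinitionException if uncommented:
        // a parent cannot resolve beans defined only in the child
        // root.getBean("childBean");

        child.close();
        root.close();
    }
}
```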
In other words, childBean can refer to rootBean, but rootBean cannot refer to childBean. As with most usage of Spring Security, we do not need Spring Security to refer to any of the MVC-declared beans. Therefore, we have decided to have ContextLoaderListener initialize all of Spring Security's configuration.

springSecurityFilterChain

The next step is to configure springSecurityFilterChain to intercept all requests by updating web.xml. Servlet <filter-mapping> elements are considered in the order in which they are declared. Therefore, it is critical for springSecurityFilterChain to be declared first, to ensure the request is secured prior to any other logic being invoked. Update your web.xml file with the following configuration:

src/main/webapp/WEB-INF/web.xml

```xml
</listener>
<filter>
  <filter-name>springSecurityFilterChain</filter-name>
  <filter-class>
    org.springframework.web.filter.DelegatingFilterProxy
  </filter-class>
</filter>
<filter-mapping>
  <filter-name>springSecurityFilterChain</filter-name>
  <url-pattern>/*</url-pattern>
</filter-mapping>
<servlet>
```

Not only is it important for Spring Security to be declared as the first <filter-mapping> element, but we should also be aware that, with the example configuration, Spring Security will not intercept forwards, includes, or errors. Often, it is not necessary to intercept other types of requests, but if you need to do this, a dispatcher element for each type of request should be included in <filter-mapping>. We will not perform these steps for our application, but you can see an example in the following code snippet:

src/main/webapp/WEB-INF/web.xml

```xml
<filter-mapping>
  <filter-name>springSecurityFilterChain</filter-name>
  <url-pattern>/*</url-pattern>
  <dispatcher>REQUEST</dispatcher>
  <dispatcher>ERROR</dispatcher>
  ...
</filter-mapping>
```

DelegatingFilterProxy

The o.s.web.filter.DelegatingFilterProxy class is a servlet filter provided by Spring Web that delegates all work to a Spring bean from the root ApplicationContext, which must implement javax.servlet.Filter. Since, by default, the bean is looked up by name using the value of <filter-name>, we must ensure that we use springSecurityFilterChain as the value of <filter-name>. Pseudo-code for how o.s.web.filter.DelegatingFilterProxy works for our web.xml file can be found in the following code snippet:

```java
public class DelegatingFilterProxy implements Filter {
    void doFilter(request, response, filterChain) {
        // Look up the Spring bean named after our <filter-name>
        // and hand the request over to it
        Filter delegate = applicationContext.getBean("springSecurityFilterChain");
        delegate.doFilter(request, response, filterChain);
    }
}
```

FilterChainProxy

When working in conjunction with Spring Security, o.s.web.filter.DelegatingFilterProxy will delegate to Spring Security's o.s.s.web.FilterChainProxy, which was created in our minimal security.xml file. FilterChainProxy allows Spring Security to conditionally apply any number of servlet filters to the servlet request. We will learn more about each of the Spring Security filters and their role in ensuring that our application is properly secured throughout the rest of the book.
The pseudo-code for how FilterChainProxy works is as follows:

```java
public class FilterChainProxy implements Filter {
    void doFilter(request, response, filterChain) {
        // Look up all the Filters for this request
        List<Filter> delegates = lookupDelegates(request, response);

        // Invoke each filter unless a delegate decided to stop processing
        for (delegate in delegates) {
            if (continue processing) {
                delegate.doFilter(request, response, filterChain);
            }
        }

        // If all the filters decide it is OK, allow the
        // rest of the application to run
        if (continue processing) {
            filterChain.doFilter(request, response);
        }
    }
}
```

Because both DelegatingFilterProxy and FilterChainProxy are the front door to Spring Security when it is used in a web application, it is here that you would add a debug point when trying to figure out what is happening.

Running a secured application

If you have not already done so, restart the application and visit http://localhost:8080/calendar/; you will be presented with a login page. Great job! We've implemented a basic layer of security in our application using Spring Security. At this point, you should be able to log in using user1@example.com as the User and user1 as the Password (user1@example.com/user1). You'll see the calendar welcome page, which describes at a high level what to expect from the application in terms of security.

Common problems

Many users have trouble with the initial implementation of Spring Security in their application. A few common issues and suggestions are listed next. We want to ensure that you can run the example application and follow along!

- Make sure you can build and deploy the application before putting Spring Security in place. Review some introductory samples and documentation for your servlet container if needed.
- It's usually easiest to use an IDE, such as Eclipse, to run your servlet container. Not only is deployment typically seamless, but the console log is also readily available to review for errors. You can also set breakpoints at strategic locations, to be triggered on exceptions, to better diagnose errors.
- If your XML configuration file is incorrect, you will get an error like this (or something similar to it): org.xml.sax.SAXParseException: cvc-elt.1: Cannot find the declaration of element 'beans'. It's quite common for users to get confused by the various XML namespace references required to properly configure Spring Security. Review the samples again, paying attention to avoid line wrapping in the schema declarations, and use an XML validator to verify that you don't have any malformed XML.
- If you get an error stating "BeanDefinitionParsingException: Configuration problem: Unable to locate Spring NamespaceHandler for XML schema namespace [http://www.springframework.org/schema/security] ...", ensure that the spring-security-config-3.1.0.RELEASE.jar file is on your classpath. Also ensure that its version matches the other Spring Security JARs and the XML declaration in your Spring configuration file.
- Make sure the versions of Spring and Spring Security that you're using match, and that there aren't any unexpected Spring JARs remaining as part of your application. As previously mentioned, when using Maven, it can be a good idea to declare the Spring dependencies in the dependencyManagement section.

Vulnerabilities in the Picture Transfer Protocol (PTP) allow researchers to inject ransomware into Canon's DSLR camera

Savia Lobo
13 Aug 2019
5 min read
At DEF CON 27, Eyal Itkin, a vulnerability researcher at Check Point Software Technologies, demonstrated how vulnerabilities in the Picture Transfer Protocol (PTP) allowed him to infect a Canon EOS 80D DSLR with ransomware over a rogue WiFi connection. Besides image transfer, PTP also supports dozens of different commands covering anything from taking a live picture to upgrading the camera's firmware.

The researcher chose Canon's EOS 80D DSLR camera for three major reasons:

- Canon is the largest DSLR maker, controlling more than 50% of the market.
- The EOS 80D supports both USB and WiFi.
- Canon has an extensive "modding" community, called Magic Lantern, built around an open-source, free software add-on that adds new features to Canon EOS cameras.

Eyal Itkin highlighted six vulnerabilities in the PTP implementation that can easily allow a hacker to infiltrate the DSLR, inject ransomware, and lock the device. Users might then have to pay a ransom to free up their camera and picture files:

- CVE-2019-5994 – Buffer overflow in SendObjectInfo (opcode 0x100C)
- CVE-2019-5998 – Buffer overflow in NotifyBtStatus (opcode 0x91F9)
- CVE-2019-5999 – Buffer overflow in BLERequest (opcode 0x914C)
- CVE-2019-6000 – Buffer overflow in SendHostInfo (opcode 0x91E4)
- CVE-2019-6001 – Buffer overflow in SetAdapterBatteryReport (opcode 0x91FD)
- CVE-2019-5995 – Silent malicious firmware update

Itkin's team informed Canon about the vulnerabilities in its DSLR on March 31, 2019. On August 6, Canon published a security advisory informing users that "at this point, there have been no confirmed cases of these vulnerabilities being exploited to cause harm" and asking them to take the advised measures to ensure safety.

Itkin told The Verge, "due to the complexity of the protocol, we do believe that other vendors might be vulnerable as well, however, it depends on their respective implementation". Though Itkin said he worked only with the Canon model, he also said DSLRs from other companies may be at high risk.

Vulnerability discovery by Itkin's team in Canon's DSLR

After Itkin's team succeeded in dumping the camera's firmware and loading it into their disassembler (IDA Pro), they say finding the PTP layer was an easy task, for two reasons:

- The PTP layer is command-based, and every command has a unique numeric opcode.
- The firmware contains many indicative strings, which eases the task of reverse-engineering it.

Next, the team traversed back from the PTP OpenSession handler and found the main function that registers all of the PTP handlers according to their opcodes. "When looking on the registration function, we realized that the PTP layer is a promising attack surface. The function registers 148 different handlers, pointing to the fact that the vendor supports many proprietary commands. With almost 150 different commands implemented, the odds of finding a critical vulnerability in one of them is very high," Itkin wrote in the research report.

Each PTP command handler implements the same code API. The API makes use of the ptp_context object, an object that is partially documented thanks to Magic Lantern, Itkin said. The team realized that most of the commands were relatively simple. "They receive only a few numeric arguments, as the protocol supports up to 5 such arguments for every command. After scanning all of the supported commands, the list of 148 commands was quickly narrowed down to 38 commands that receive an input buffer," Itkin writes.
"From an attacker's viewpoint, we have full control of this input buffer, and therefore, we can start looking for vulnerabilities in this much smaller set of commands. Luckily for us, the parsing code for each command uses plain C code and is quite straight-forward to analyze," he further added. Following this, they were able to find their first vulnerabilities, and then the rest.

Check Point and Canon have advised users to ensure that their cameras are using the latest firmware and to install patches whenever they become available. Also, if the device is not in use, camera owners should keep its Wi-Fi turned off.

A user on Hacker News points out, "It could get even worse if the perpetrator instead of bricking the device decides to install a backdoor that silently uploads photos to a server whenever a wifi connection is established."

Another user on PetaPixel explained what quick measures users should take: "A custom firmware can close the vulnerability also if they put in the work. Just turn off wifi and don't use random computers in grungy cafes to connect to your USB port and you should be fine. It may or may not happen but it leaves the door open for awesome custom firmware to show up. Easy ones are real CLOG for 1dx2. For the 5D4, I would imagine 24fps HDR, higher res 120fps, and free Canon Log for starters. For non tech savvy people that just leave wifi on all the time, that visit high traffic touristy photo landmarks they should update. Especially if they have no interest in custom firmware."

Another user on PetaPixel highlighted the fact that "this hack relies on a serious number of things to be in play before it works, there is no mention of how to get the camera working again, is it just a case of flashing the firmware and accepting you may have lost a few images ?... there's a lot more things to worry about than this."

Check Point has demonstrated the entire attack in the following YouTube video: https://youtu.be/75fVog7MKgg

To know more about this news in detail, read Eyal Itkin's complete research on the Check Point website.

Researchers reveal vulnerability that can bypass payment limits in contactless Visa card
Apple patched vulnerability in Mac's Zoom Client; plans to address 'video on by default'
VLC media player affected by a major vulnerability in a 3rd party library, libebml

What is a Magecart attack, and how can you protect your business?

Guest Contributor
07 Aug 2019
5 min read
Recently, British Airways was slapped with a $230M fine after attackers stole data from hundreds of thousands of its customers in a massive breach. The fine, the result of a GDPR prosecution, was issued after a 2018 Magecart attack. Attackers were able to insert around 22 lines of code into the airline's website, allowing them to capture customer credit card numbers and other sensitive pieces of information.

Magecart attacks have largely gone unnoticed within the security world in recent years, in spite of the fact that the majority occur at eCommerce companies or other businesses that collect credit card information from customers. Magecart has been responsible for significant damage, theft, and fraud across a variety of industries. According to a 2018 report by RiskIQ and Flashpoint, at least 6,400 websites had been affected by Magecart as of November 2018.

To safeguard against Magecart and protect your organization from web-based threats, there are a few things you should do.

Understand how Magecart attacks happen

There are two approaches hackers take when it comes to Magecart attacks: the first focuses on attacking the main website or application, while the second focuses on exploiting third-party tags. In both cases, the intent is to insert malicious JavaScript, which can then skim data from HTML forms and send that data to servers controlled by the attackers.

Users typically enter personal data — whether it's for authentication, searching for information, or checking out with a credit card — into a website through an HTML form. Magecart attacks utilize JavaScript to monitor for this kind of sensitive data when it's entered into specific form fields, such as a password, social security number, or credit card number. They then make a copy of it and send the copy to a different server on the internet.

In the British Airways attack, for example, hackers inserted malicious code into the airline's baggage claim subdomain, which appears to have been less secure than the main website. This code was referenced on the main website, and when run within the airline's customers' browsers, it could skim credit card and other personal information.

Get ahead of the confusion that surrounds the attacks

Magecart attacks are very difficult for web teams to identify because they do not take place on the provider's backend infrastructure, but instead within the visitor's browser. This means data is transferred directly from the browser to malicious servers, without any interaction with the backend website server — the origin — needing to take place.

As a result, auditing the backend infrastructure and the code supporting the website on a regular basis won't stop attacks, because the issue is happening in the user's browser, which traditional auditing won't detect. This means Magecart attacks can only be discovered when the company is alerted to credit card fraud, or when a client-side code review including all third-party services takes place. Because of this, there are still many sites online today holding malicious Magecart JavaScript within their pages and leaking sensitive information.

Restrict access to sensitive data

There are a number of things your team can do to prevent Magecart attacks from threatening your website visitors. First, it's beneficial to limit third-party code on sensitive pages. People tend to add third-party tags all over their websites, but consider whether you really need that kind of functionality on high-security pages (like your checkout or login pages). Removing non-essential third-party tags, such as chat widgets and site surveys, from sensitive pages limits your exposure to potentially malicious code.

Next, you should consider implementing content security policies (CSP). Web teams can build policies that dictate which domains can run code and send data on sensitive pages. While this approach requires ongoing maintenance, it's one way to prevent malicious hackers from gaining access to visitors' sensitive information.
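As a minimal sketch of what this could look like server-side, the following hypothetical servlet filter attaches a restrictive Content-Security-Policy header to responses; the policy string and the vetted payment domain are illustrative assumptions, not part of the original article, and a real policy would need to list every domain your pages legitimately load scripts from:

```java
import java.io.IOException;
import javax.servlet.*;
import javax.servlet.http.HttpServletResponse;

// A sketch: only allow scripts from our own origin and one vetted
// payment provider; restrict where forms may post and data may flow.
public class CspFilter implements Filter {

    private static final String POLICY =
        "default-src 'self'; "
        + "script-src 'self' https://js.vetted-payments.example; "
        + "form-action 'self'; "
        + "connect-src 'self'";

    @Override
    public void init(FilterConfig filterConfig) {}

    @Override
    public void doFilter(ServletRequest request, ServletResponse response,
                         FilterChain chain) throws IOException, ServletException {
        // Attach the policy before the response body is written
        ((HttpServletResponse) response).setHeader("Content-Security-Policy", POLICY);
        chain.doFilter(request, response);
    }

    @Override
    public void destroy() {}
}
```

With a header like this in place, a skimmer script injected from a domain that is not on the allow-list is refused by the browser before it ever runs.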
Another approach is to adopt a zero-trust strategy. Web teams can look to a third-party security service that allows them to create a policy that, by default, blocks access to sensitive data entered in web forms or stored in cookies. The team can then restrict access to this data for everyone except a select set of vetted scripts. With these policies in place, if malicious skimming code does make it onto your site, it won't be able to access any sensitive information, and alerts will let you know when a vendor's code has been exploited.

Magecart doesn't need to destroy your brand. Web skimming attacks can be difficult to detect because they don't attack core application infrastructure; they focus on the browser, where protections are not in place. As such, many brands are confused about how to protect their customers. However, implementing a zero-trust approach, thinking critically about how many third-party tags your website really needs, and limiting who is able to run code will help you keep your customer data safe.

Author bio

Peter is the VP of Technology at Instart. Previously, Peter was with Citrix, where he was senior director of product management and marketing for the XenClient product. Prior to that, he held a variety of pre-sales, web development, and IT admin roles, including five years at Marimba working with enterprise change management systems. Peter has a BA in Political Science with a minor in Computer Science from UCSD.

British Airways set to face a record-breaking fine of £183m by the ICO over customer data breach
A universal bypass tricks Cylance AI antivirus into accepting all top 10 malware
An IoT worm Silex, developed by a 14-year-old, resulted in a malware attack and took down 2,000 devices

10 key announcements from Microsoft Ignite 2019 you should know about

Sugandha Lahoti
26 Nov 2019
7 min read
This year's Microsoft Ignite was jam-packed with new releases and upgrades across Microsoft's line of products and services. The company elaborated on its growing focus on addressing the needs of its customers to help them do business in smarter, more productive, and more efficient ways. Most of the announced products were AI-based, and Microsoft reiterated its commitment to security and privacy. Microsoft Ignite 2019 took place on November 4-8, 2019 in Orlando, Florida, and was attended by 26,000 IT implementers and decision-makers, developers, data professionals, and people from various industries. There were a total of 175 separate announcements made! We have tried to cover the top 10 here.

Microsoft's Visual Studio IDE is now available on the web

The web-based version of Microsoft's Visual Studio IDE is now available to all developers. Called Visual Studio Online, this IDE allows developers to configure a fully configured development environment for their repositories and use the web-based editor to work on their code. Visual Studio Online is deeply integrated with GitHub (also owned by Microsoft), although developers can also attach their own physical and virtual machines to their Visual Studio-based environments. Visual Studio Online's cloud-hosted environments, as well as extended support for Visual Studio Code and the web UI, are now available in preview. Support for Visual Studio 2019 is in private preview, which you can also sign up for through the Visual Studio Online web portal.

Project Cortex will classify all content in a single network

Project Cortex is a new service in Microsoft 365, useful for maintaining the everyday flow of work in enterprises. Project Cortex collates enterprise-generated documents and data, which are often spread across numerous repositories. It uses AI and machine learning to automatically classify all your content into topics to form a knowledge network. Cortex improves individual productivity and organizational intelligence, and can be used across Microsoft 365, such as in the Office apps, Outlook, and Microsoft Teams. Project Cortex is now in private preview and will be generally available in the first half of 2020.

Single-view device management with 'Microsoft Endpoint Manager'

Microsoft has combined its Configuration Manager with Intune, its cloud-based endpoint management system, to form what it calls Microsoft Endpoint Manager. ConfigMgr allows enterprises to manage the PCs, laptops, phones, and tablets they issue to their employees. Intune is used for cloud-based management of phones. Endpoint Manager will provide unique co-management options for organizations to provision, deploy, manage, and secure endpoints and applications across their organization. Touted as the most important release of the event by Satya Nadella, this solution gives enterprises a single view of their deployments. ConfigMgr users will now also get a license for Intune to allow them to move to cloud-based management.

No-code bot builder 'Microsoft Power Virtual Agents' is available in public preview

Built on the Azure Bot Framework, Microsoft Power Virtual Agents is a low-code and no-code bot-building solution now available in public preview. Power Virtual Agents enables programmers with little to no developer experience to create and deploy intelligent virtual agents. The solution also includes Azure Machine Learning to help users create and improve conversational agents for personalized customer service. Power Virtual Agents will be generally available on December 1.
Microsoft's Chromium-based version of Edge is now more privacy-focused

Microsoft Ignite announced the release candidate of Microsoft's Chromium-based version of the Edge browser, with general availability on January 15. InPrivate search will be available in Microsoft Edge and Microsoft Bing to keep online searches and identities private, giving users more control over their data. When searching InPrivate, search history and personally identifiable data will not be saved, nor be associated back to you. Users' identities and search histories are completely private. There will also be a new security baseline for the all-new Microsoft Edge. Security baselines are pre-configured groups of security settings and default values that are recommended by the relevant security teams. The next version of Microsoft Edge will feature a new icon symbolizing the major changes in Microsoft Edge, built on the Chromium open source project. It will appear in an Easter egg hunt designed to reward the Insider community.

ML.NET 1.4 announces general availability

ML.NET 1.4, Microsoft's open-source machine learning framework, is now generally available. The latest release adds image classification training with the ML.NET API, as well as a relational database loader API for reading data used for training models with ML.NET. ML.NET also includes Model Builder (an easy-to-use UI tool in Visual Studio) and a command-line interface to make it easy to build custom machine learning models using AutoML. This release also adds a new preview of the Visual Studio Model Builder extension that supports image classification training from a graphical user interface. A preview of Jupyter support for writing C# and F# code for ML.NET scenarios is also available.

Azure Arc extends Azure services across multiple infrastructures

One of the most important announcements of Microsoft Ignite 2019 was Azure Arc. This new service enables Azure services anywhere and extends Azure management to any infrastructure, including those of competitors like AWS and Google Cloud. With Azure Arc, customers can use Azure's cloud management experience for their own servers (Linux and Windows Server) and Kubernetes clusters by extending Azure management across environments. Enterprises can also manage and govern resources at scale with powerful scripting, tools, the Azure Portal and API, and Azure Lighthouse.

Announcing Azure Synapse Analytics

Azure Synapse Analytics builds upon Microsoft's previous offering, Azure SQL Data Warehouse. This analytics service combines traditional data warehousing with big data analytics, bringing serverless on-demand or provisioned resources at scale. Using Azure Synapse Analytics, customers can ingest, prepare, manage, and serve data for immediate BI and machine learning applications within the same service.

Safely share your big data with Azure Data Share, now generally available

As the name suggests, Azure Data Share allows you to safely share your big data with other organizations. Organizations can share data stored in their data lakes with third-party organizations outside their Azure tenancy. Data providers wanting to share data with their customers or partners can easily create a new share, populate it with data residing in a variety of stores, and add recipients. It employs Azure security measures such as access controls, authentication, and encryption to protect your data. Azure Data Share supports sharing from SQL Data Warehouse and SQL DB, in addition to Blob and ADLS (for snapshot-based sharing).
It also supports in-place sharing for Azure Data Explorer (in preview).

Azure Quantum to be made available in private preview

Microsoft has been working on quantum computing for some time now. At Ignite, Microsoft announced that it will be launching Azure Quantum in private preview in the coming months. Azure Quantum is a full-stack, open cloud ecosystem that will bring quantum computing to developers and organizations. Azure Quantum will assemble quantum solutions, software, and hardware from across the industry in a single, familiar experience in Azure. Through Azure Quantum, you can learn quantum computing through a series of tools and learning tutorials, such as the quantum katas. Developers can also write programs with Q# and the QDK.

Microsoft Ignite 2019's organizers have released an 88-page document detailing all 175 announcements, which you can access here. You can also view the conference keynote delivered by Satya Nadella on YouTube, as well as Microsoft Ignite's official blog.

Facebook mandates Visual Studio Code as default development environment and partners with Microsoft for remote development extensions
Exploring .NET Core 3.0 components with Mark J. Price, a Microsoft specialist
Yubico reveals Biometric YubiKey at Microsoft Ignite
Microsoft announces .NET Jupyter Notebooks

British Airways set to face a record-breaking fine of £183m by the ICO over customer data breach

Sugandha Lahoti
08 Jul 2019
6 min read
The UK's watchdog, the ICO, is all set to fine British Airways more than £183m over a customer data breach. In September last year, British Airways notified the ICO about a data breach that compromised the personal identification information of over 500,000 customers and is believed to have begun in June 2018.

The ICO said in a statement, "Following an extensive investigation, the ICO has issued a notice of its intention to fine British Airways £183.39M for infringements of the General Data Protection Regulation (GDPR)."

Information Commissioner Elizabeth Denham said, "People's personal data is just that - personal. When an organisation fails to protect it from loss, damage or theft, it is more than an inconvenience. That's why the law is clear - when you are entrusted with personal data, you must look after it. Those that don't will face scrutiny from my office to check they have taken appropriate steps to protect fundamental privacy rights."

How did the data breach occur?

According to the details provided on the British Airways website, payments through its main website and mobile app were affected from 22:58 BST on August 21, 2018, until 21:45 BST on September 5, 2018. Per the ICO's investigation, user traffic from the British Airways site was being directed to a fraudulent site, from which customer details were harvested by the attackers. The personal information compromised included login, payment card, and travel booking details, as well as name and address information.

The fraudulent site performed what is known as a supply chain attack, abusing the practice of embedding code from third-party suppliers to run payment authorisation, present ads, allow users to log into external services, and so on. According to cyber-security expert Prof Alan Woodward at the University of Surrey, the British Airways hack may possibly have been the work of a company insider who tampered with the website and app's code for malicious purposes. He also pointed out that live data was harvested on the site rather than stored data. https://twitter.com/EerkeBoiten/status/1148130739642413056

RiskIQ, a cyber security company based in San Francisco, linked the British Airways attack to the modus operandi of the threat group Magecart. Magecart injects scripts designed to steal sensitive data that consumers enter into online payment forms on e-commerce websites, either directly or through compromised third-party suppliers. Per RiskIQ, Magecart set up custom, targeted infrastructure to blend in with the British Airways website specifically and to avoid detection for as long as possible.

What happens next for British Airways?

The ICO noted that British Airways cooperated with its investigation and has made security improvements since the breach was discovered. The company now has 28 days to appeal.

Responding to the news, British Airways' chairman and chief executive Alex Cruz said that the company was "surprised and disappointed" by the ICO's decision, and added that the company has found no evidence of fraudulent activity on accounts linked to the breach. He said, "British Airways responded quickly to a criminal act to steal customers' data. We have found no evidence of fraud/fraudulent activity on accounts linked to the theft. We apologise to our customers for any inconvenience this event caused."

The ICO was appointed as the lead supervisory authority to tackle this case on behalf of other EU Member State data protection authorities.
Under the GDPR's 'one stop shop' provisions, the data protection authorities in the EU whose residents have been affected will also have the chance to comment on the ICO's findings. The penalty is divided up between the other European data authorities, while the money that comes to the ICO goes directly to the Treasury. What is somewhat surprising is that the ICO disclosed the fine publicly before the supervisory authorities had commented on its findings and a final decision had been taken based on their feedback, as pointed out by Simon Hania. https://twitter.com/simonhania/status/1148145570961399808

Record-breaking fine appreciated by experts

The penalty imposed on British Airways is the first to be made public since the GDPR's new data privacy policies were introduced. The GDPR makes it mandatory to report data security breaches to the Information Commissioner, and it increased the maximum penalty to 4% of the penalized company's turnover. The fine would be the largest the ICO has ever issued; its previous record was the £500,000 fine imposed on Facebook for the Cambridge Analytica scandal, which was the maximum under the 1998 Data Protection Act. The British Airways penalty amounts to 1.5% of its worldwide turnover in 2017, making it roughly 367 times Facebook's. In fact, it could have been even worse if the maximum penalty had been levied; the full 4% of turnover would have meant a fine approaching £500m. Such a massive fine would clearly send a sudden shudder down the spine of any big corporation responsible for handling cybersecurity: if they compromise customers' data, a severe punishment is in order. https://twitter.com/j_opdenakker/status/1148145361799798785

Carl Gottlieb, Privacy Lead & Data Protection Officer at Duolingo, summarized the key points of this case in a much-appreciated Twitter thread:

- GDPR fines are for inappropriate security, as opposed to getting breached. Breaches are a good pointer, but are not themselves actionable. So organisations need to implement security that is appropriate for their size, means, risk, and need.
- Security is an organisation's responsibility, whether you host IT yourself, outsource it, or rely on someone else not getting hacked.
- The GDPR has teeth against anyone that messes up security, but clearly action will be greatest where the human impact is most significant.
- Threats of GDPR fines are what created change in privacy and security practices over the last two years (not orgs suddenly growing a conscience). And with very few fines so far, improvements have slowed; this will help.
- Monetary fines are a great example to change behaviour in others, but a terrible punishment to drive change in an affected organisation. Other enforcement measures, for example ceasing the processing of personal data (such as banning new signups), would be much more impactful.

https://twitter.com/CarlGottlieb/status/1148119665257963521

Facebook fined $2.3 million by Germany for providing incomplete information about hate speech content
European Union fined Google 1.49 billion euros for antitrust violations in online advertising
French data regulator, CNIL, imposes a fine of 50M euros against Google for failing to comply with GDPR

OAuth 2.0 – Gaining Consent

Packt
20 Oct 2015
3 min read
In this article by Charles Bihis, the author of the book Mastering OAuth 2.0, we discuss the topic of gaining consent in OAuth 2.0. OAuth 2.0 is a framework built around the concept of resources and permissions for protecting those resources. Central to this is the idea of gaining consent. Let's look at an example.

(For more resources related to this topic, see here.)

How does it work?

You have just downloaded the iPhone app GoodApp. After installing, GoodApp would like to suggest contacts for you to add by looking at your Facebook friends. Conceptually, the OAuth 2.0 workflow can be represented by the following steps:

1. You ask GoodApp to suggest contacts for you.
2. GoodApp says, "Sure! But you'll have to authorize me first. Go here…"
3. GoodApp sends you to Facebook to log in and authorize GoodApp.
4. Facebook asks you directly for authorization to see if GoodApp can access your friend list on your behalf.
5. You say yes.
6. Facebook happily obliges, giving GoodApp your friend list.
7. GoodApp then uses this information to tailor suggested contacts for you.

This workflow presents a rough idea of how the interaction looks in the OAuth 2.0 model. Of particular interest to us now are steps 3-5. In these steps, the service provider, Facebook, is asking you, the user, whether or not you allow the client application, GoodApp, to perform a particular action. This is known as user consent.

User consent

When a client application wants to perform a particular action relating to you or resources you own, it must first ask you for permission. In this case, the client application, GoodApp, wants to access your friend list on the service provider, Facebook. In order for Facebook to allow this, it must ask you directly. This is where the user consent screen comes in. It is simply a page that you are presented with in your application that describes the permissions being requested by the client application, along with an option to either allow or reject the request. You may already be familiar with these types of screens if you've ever tried to access resources on one service from another service; for example, a user consent screen of this kind is presented when you want to log in to Pinterest using your Facebook credentials.

Incorporating this into our workflow, we get the following updated steps:

1. You ask GoodApp to suggest contacts for you.
2. GoodApp says, "Sure! But you'll have to authorize me first. Go here…"
3. GoodApp sends you to Facebook. Here, Facebook asks you directly for authorization for GoodApp to access your friend list on your behalf. It does this by presenting the user consent form, which you can either accept or deny. Let's assume you accept.
4. Facebook happily obliges, giving GoodApp your friend list.
5. GoodApp then uses this information to tailor suggested contacts for you.

When you accept the terms on the user consent screen, you have allowed GoodApp to access your Facebook friend list on your behalf. This is a concept known as delegated authority, and it is all accomplished by gaining consent.
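To ground steps 2 and 3 in something concrete, here is a minimal, hypothetical sketch (not from the book; the endpoint, client ID, redirect URI, and scope are illustrative assumptions) of how a client like GoodApp might construct the authorization URL it redirects the user to:

```java
import java.net.URLEncoder;
import java.nio.charset.StandardCharsets;

public class AuthorizationRequest {

    public static void main(String[] args) {
        // Hypothetical values: each client registers its own with the provider
        String authorizationEndpoint = "https://www.facebook.com/dialog/oauth";
        String clientId = "goodapp-client-id";
        String redirectUri = "https://goodapp.example/oauth/callback";
        String scope = "user_friends";

        // response_type=code requests the authorization code grant, under
        // which the user is sent to the service provider to grant or deny
        // consent before any credential is issued to the client
        String url = authorizationEndpoint
                + "?response_type=code"
                + "&client_id=" + encode(clientId)
                + "&redirect_uri=" + encode(redirectUri)
                + "&scope=" + encode(scope);

        // The client redirects the user's browser to this URL
        System.out.println(url);
    }

    private static String encode(String value) {
        return URLEncoder.encode(value, StandardCharsets.UTF_8);
    }
}
```

Everything after this redirect, including the login form and the consent screen, is served by the service provider, which is what allows GoodApp to gain delegated authority without ever seeing your Facebook password.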
Summary

In this article, we discussed the idea of gaining consent in OAuth 2.0 and how it works, with the help of an example and flow charts.

Resources for Article:

Further resources on this subject:
- Oracle API Management Implementation 12c [article]
- Find Friends on Facebook [article]
- Core Ephesoft Features [article]
Windows Drive Acquisition

Packt
21 Jul 2017
13 min read
In this article, by Oleg Skulkin and Scar de Courcier, authors of Windows Forensics Cookbook, we will cover drive acquisition in E01 format with FTK Imager, drive acquisition in RAW format with DC3DD, and mounting forensic images with Arsenal Image Mounter.

(For more resources related to this topic, see here.)

Before you can begin analysing evidence from a source, it first of all needs to be imaged. This describes a forensic process in which an exact copy of a drive is taken. This is an important step, especially if evidence needs to be taken to court, because forensic investigators must be able to demonstrate that they have not altered the evidence in any way.

The term forensic image can refer to either a physical or a logical image. Physical images are precise replicas of the drive they reference, whereas a logical image is a copy of a certain volume within that drive. In general, logical images show what the machine's user will have seen and dealt with, whereas physical images give a more comprehensive overview of how the device works at a lower level.

A hash value is generated to verify the authenticity of the acquired image. Hash values are essentially cryptographic digital fingerprints which show whether a particular item is an exact copy of another. Altering even the smallest bit of data will generate a completely new hash value, thus demonstrating that the two items are not the same. When a forensic investigator images a drive, they should generate a hash value for both the original drive and the acquired image. Some pieces of forensic software will do this for you.

There are a number of tools available for imaging hard drives, some of which are free and open source. However, the most popular way for forensic analysts to image hard drives is by using one of the more well-known forensic software vendors' solutions. This is because it is imperative to be able to explain how the image was acquired and to demonstrate its integrity, especially if you are working on a case that will be taken to court. Once you have your image, you will be able to analyse the digital evidence from a device without directly interfering with the device itself. In this chapter, we will be looking at various tools that can help you image a Windows drive, taking you through the process of acquisition.

Drive acquisition in E01 format with FTK Imager

FTK Imager is an imaging and data preview tool by AccessData which allows an examiner not only to create forensic images in different formats, including RAW, SMART, E01, and AFF, but also to preview data sources in a forensically sound manner. In the first recipe of this article, we will show you how to create a forensic image of a hard drive from a Windows system in E01 format. E01, or EnCase's Evidence File, is a standard format for forensic images in law enforcement. Such images consist of a header with case info (including acquisition date and time, examiner's name, acquisition notes, and an optional password), a bit-by-bit copy of the acquired drive (consisting of data blocks, each verified with its own CRC, or Cyclical Redundancy Check), and a footer with an MD5 hash for the bitstream.

Getting ready

First of all, let's download FTK Imager from the AccessData website. To do this, go to the SOLUTIONS tab, and from there to Product Downloads. Now choose DIGITAL FORENSICS, and then FTK Imager. At the time of this writing, the most up-to-date version is 3.4.3, so click the green DOWNLOAD PAGE button on the right. Now you should be at the download page. Click on the DOWNLOAD NOW button and fill in the form; after this, you'll get the download link at the email address you provided. The installation process is quite straightforward: all you need to do is click Next a few times, so we won't cover it in this recipe.

How to do it...

There are two ways of initiating the drive imaging process:

- Use the Create Disk Image button on the toolbar.
- Use the Create Disk Image option from the File menu.

You can choose whichever option you like. The first window you see is Select Source. Here you have five options:

- Physical Drive: This allows you to choose a physical drive as the source, with all partitions and unallocated space.
- Logical Drive: This allows you to choose a logical drive as the source, for example, the E: drive.
- Image File: This allows you to choose an image file as the source, for example, if you need to convert your forensic image from one format to another.
- Contents of a Folder: This allows you to choose a folder as the source; of course, no deleted files and so on will be included.
- Fernico Device: This allows you to restore images from multiple CDs/DVDs.

Of course, we want to image the whole drive to be able to work with deleted data and unallocated space, so let's choose the Physical Drive option.

The evidence source mustn't be altered in any way, so make sure you are using a hardware write blocker; you can use the one from Tableau, for example. These devices allow acquisition of drive contents without creating the possibility of modifying the data.

Click Next and you'll see the next window: Select Drive. Now you should choose the source drive from the drop-down menu; in our case, it's \\.\PHYSICALDRIVE2.

Once the source drive is chosen, click Finish. The next window is Create Image. We'll get back to this window soon, but for now, just click Add...

It's time to choose the destination image type. As we decided to create our image in EnCase's Evidence File format, choose E01.

Click Next and you'll see the Evidence Item Information window. Here we have five fields to fill in: Case Number, Evidence Number, Unique Description, Examiner, and Notes. All fields are optional.

Whether you have filled in the fields or not, click Next. Now choose the image destination; you can use the Browse button for this. You should also fill in the image filename. If you want your forensic image to be split, choose a fragment size (in megabytes). The E01 format supports compression, so if you want to reduce the image size, you can use this feature; in our case, we have chosen 6. And if you want the data in the image to be secured, use the AD Encryption feature. AD Encryption is whole-image encryption, so not only is the raw data encrypted, but also any metadata. Each segment or file of the image is encrypted with a randomly generated image key using AES-256.

We are almost done. Click Finish and you'll see the Create Image window again. Now look at the three options at the bottom of the window. The verification process is very important, so make sure the Verify images after they are created option is ticked; it helps you to be sure that the source and the image are equal. The Precalculate Progress Statistics option is also very useful: it will show you the estimated completion time during the imaging process. The last option will create directory listings of all files in the image for you but, of course, this takes time, so use it only if you need it.

All you need to do now is click Start. Great, the imaging process has started! When the image is created, the verification process starts. Finally, you'll get the Drive/Image Verify Results window. In our case, the source and the image are identical: both hashes matched. In the folder with the image, you will also find an info file with valuable information such as drive model, serial number, source data size, sector count, MD5 and SHA1 checksums, and so on.

How it works...

FTK Imager uses the physical drive of your choice as the source and creates a bit-by-bit image of it in EnCase's Evidence File format. During the verification process, the MD5 and SHA1 hashes of the image and the source are compared.

See also

FTK Imager download page: http://accessdata.com/product-download/digital-forensics/ftk-imager-version-3.4.3
FTK Imager User Guide: https://ad-pdf.s3.amazonaws.com/Imager/3_4_3/FTKImager_UG.pdf

Drive acquisition in RAW format with DC3DD

DC3DD is a patched (by Jesse Kornblum) version of the classic GNU dd utility with some computer forensics features: for example, on-the-fly hashing with a number of algorithms (MD5, SHA-1, SHA-256, and SHA-512), progress reporting during the acquisition process, and so on.

Getting ready

You can find a compiled, standalone 64-bit version of DC3DD for Windows at SourceForge. Just download the ZIP or 7z archive, unpack it, and you are ready to go.

How to do it...

Open Windows Command Prompt and change directory (you can use the cd command to do it) to the one containing dc3dd.exe, then type the following command:

```
dc3dd.exe if=\\.\PHYSICALDRIVE2 of=X:\147-2017.dd hash=sha256 log=X:\147-2017.log
```

Press Enter and the acquisition process will start. Of course, your command will be a bit different, so let's find out what each part of it means:

- if: This stands for input file. (dd is originally a Linux utility where, if you don't already know, everything is a file.) As you can see, in our command we put physical drive 2 here; this is the drive we wanted to image, but in your case it may be another drive, depending on the number of drives connected to your workstation.
- of: This stands for output file. Here you should type the destination of your image which, as you'll remember, is in RAW format; in our case, it's the X: drive and the 147-2017.dd file.
- hash: As already mentioned, DC3DD supports four hashing algorithms: MD5, SHA-1, SHA-256, and SHA-512. We chose SHA-256, but you can choose the one you like.
- log: Here you should type the destination for the log. You will find the image version, image hash, and so on in this file after the acquisition is completed.

How it works...

DC3DD creates a bit-by-bit image of the source drive in RAW format, so the size of the image will be the same as that of the source, and calculates the image hash using the algorithm of the examiner's choice, in our case SHA-256.
It supports both E01 (and Ex01) and RAW forensic images, so you can use it with any of the images we created in the previous recipes. It's very important to note that Arsenal Image Mounter mounts the contents of disk images as complete disks. The tool supports all file systems you can find on Windows drives: NTFS, ReFS, FAT32, and exFAT. It also has temporary write support for images, which is a very useful feature, for example, if you want to boot the system from the image you are examining.

Getting ready

Go to the Arsenal Image Mounter page on the Arsenal Recon website and click on the Download button to download the ZIP archive. At the time of this writing, the latest version of the tool is 2.0.010, so in our case the archive has the name Arsenal_Image_Mounter_v2.0.010.0_x64.zip. Extract it to a location of your choice and you are ready to go; no installation is needed.

How to do it...

There are two ways to choose an image for mounting in Arsenal Image Mounter:

You can use the File menu and choose Mount image.
Use the Mount image button, as shown in the following figure:

Arsenal Image Mounter main window

When you choose the Mount image option from the File menu or click on the Mount image button, an Open window will pop up; here you should choose an image for mounting. The next window you will see is Mount options, like the one in the following figure:

Arsenal Image Mounter Mount options window

As you can see, there are a few options here:

Read only: If you choose this option, the image is mounted in read-only mode, so no write operations are allowed. (Do you still remember that you mustn't alter the evidence in any way? Of course, it's already an image, not the original drive, but nevertheless.)
Fake disk signatures: If an all-zero disk signature is found on the image, Arsenal Image Mounter reports a random disk signature to Windows, so it's mounted properly.
Write temporary: If you choose this option, the image is mounted in read-write mode, but all modifications are written not to the original image file but to a temporary differential file.
Write original: Again, this option mounts the image in read-write mode, but this time the original image file will be modified.
Sector size: This option allows you to choose the sector size.
Create "removable" disk device: This option emulates the attachment of a USB thumb drive.

Choose the options you think you need and click OK. We decided to mount our image as read-only. Now you can see a hard drive icon in the main window of the tool: the image is mounted. If you mounted only one image and want to unmount it, select the image and click on the Remove selected button. If you have a few mounted images and want to unmount all of them, click on the Remove all button.

How it works...

Arsenal Image Mounter mounts forensic images or virtual machine disks as complete disks in read-only or read-write mode. Later, a digital forensics examiner can access their contents even with Windows Explorer.

See also

Arsenal Image Mounter page at the Arsenal Recon website: https://arsenalrecon.com/apps/image-mounter/

Summary

In this article, the author has explained the process and importance of drive acquisition using imaging tools such as FTK Imager and DC3DD. Drive acquisition, being the first step in the analysis of digital evidence, needs to be carried out with utmost care, which in turn will make the analysis process smooth.
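As promised, here is a scripted version of the verification step that both FTK Imager and DC3DD perform. It is a minimal Python sketch: the filename matches the DC3DD recipe above, and the logged hash is a placeholder you would paste in from the acquisition log. Note that this approach only works for RAW (dd) images; an E01 file embeds metadata, so the stored hashes cover the acquired data rather than the container file.

import hashlib

def file_hash(path, algorithm="sha256", chunk_size=1 << 20):
    """Hash a (possibly very large) file in chunks to avoid loading it into RAM."""
    digest = hashlib.new(algorithm)
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

# Compare the acquired image against the hash recorded in the acquisition log.
image_hash = file_hash("147-2017.dd", "sha256")
logged_hash = "..."  # paste the value from 147-2017.log here
print("MATCH" if image_hash == logged_hash else "MISMATCH", image_hash)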
Resources for Article: Further resources on this subject: Forensics Recovery [article] Digital and Mobile Forensics [article] Mobile Forensics and Its Challanges [article]

Amazon S3 Security access and policies

Savia Lobo
03 May 2018
7 min read
In this article, you will get to know about Amazon S3 and the security access and policies associated with it. AWS provides you with S3 as the object storage, where you can store your object files from 1 KB to 5 TB in size at a low cost. It's highly secure, durable, and scalable, and has unlimited capacity. It allows concurrent read/write access to objects by separate clients and applications. You can store any type of file in AWS S3 storage.

This article is an excerpt from the book Cloud Security Automation, written by Prashant Priyam.

AWS keeps multiple copies of all the data stored in the standard S3 storage, which are replicated across devices in the region to ensure a durability of 99.999999999%. S3 cannot be used as block storage.

AWS S3 storage is further categorized into three different classes:

S3 Standard: This is suitable when we need durable storage for files with frequent access.
Reduced Redundancy Storage: This is suitable when we have less critical data that is persistent in nature.
Infrequent Access (IA): This is suitable when you have durable data with infrequent access. You could opt for Glacier instead, but Glacier has a very long retrieval time, so S3 IA becomes the more suitable option. It provides the same performance as the S3 Standard storage.

AWS S3 has inbuilt error correction and fault tolerance capabilities. Apart from this, in S3 you have the option to enable versioning and cross-region replication (CRR). If you enable versioning on an existing bucket, versioning will be enabled only for new objects in that bucket, not for existing objects. The same applies to cross-region replication: once enabled, it will be applied only to new objects.

Security in Amazon S3

S3 is highly secure storage. Here, we can enable fine-grained access policies for resource access and encryption. To enable access-level security, you can use the following:

S3 bucket policy
IAM access policy
MFA for object deletion

The S3 bucket policy is a JSON document that defines what will be accessed by whom and at what level:

{
  "Version": "2008-10-17",
  "Statement": [
    {
      "Sid": "AllowPublicRead",
      "Effect": "Allow",
      "Principal": {
        "AWS": "*"
      },
      "Action": [
        "s3:GetObject"
      ],
      "Resource": [
        "arn:aws:s3:::prashantpriyam/*"
      ]
    }
  ]
}

In the preceding JSON code, we have allowed read-only access to all the objects (as defined in the Action section) for an S3 bucket named prashantpriyam (defined in the Resource section).

Similar to the S3 bucket policy, we can also define an IAM policy for S3 bucket access:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "s3:GetBucketLocation",
        "s3:ListAllMyBuckets"
      ],
      "Resource": "arn:aws:s3:::*"
    },
    {
      "Effect": "Allow",
      "Action": ["s3:ListBucket"],
      "Resource": ["arn:aws:s3:::prashantpriyam"]
    },
    {
      "Effect": "Allow",
      "Action": [
        "s3:PutObject",
        "s3:GetObject",
        "s3:DeleteObject"
      ],
      "Resource": ["arn:aws:s3:::prashantpriyam/*"]
    }
  ]
}

In the preceding policy, we give the user full permissions on the S3 bucket, including the ability to work with it from the AWS console.
In the following section of the policy (JSON code), we have granted the user permission to get the bucket location and list all the buckets for traversal, but not to perform other operations, such as getting object details from the bucket:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "s3:GetBucketLocation",
        "s3:ListAllMyBuckets"
      ],
      "Resource": "arn:aws:s3:::*"
    },
    {
      "Effect": "Allow",
      "Action": ["s3:ListBucket"],
      "Resource": ["arn:aws:s3:::prashantpriyam"]
    },

In the second section of the policy (specified as follows), we give users permission to traverse into the prashantpriyam bucket and perform PUT, GET, and DELETE operations on its objects:

    {
      "Effect": "Allow",
      "Action": [
        "s3:PutObject",
        "s3:GetObject",
        "s3:DeleteObject"
      ],
      "Resource": ["arn:aws:s3:::prashantpriyam/*"]
    }

MFA adds a layer of security to your account: after password-based authentication, it asks you to provide a temporary code generated by an AWS MFA device. We can also use a virtual MFA device such as Google Authenticator.

AWS S3 supports MFA-protected API access, which helps enforce an MFA-based access policy on an S3 bucket. Let's look at an example where we give users read-only access to a bucket, while all other operations require an MFA token that expires after 600 seconds:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "",
      "Effect": "Deny",
      "Principal": "*",
      "Action": "s3:*",
      "Resource": "arn:aws:s3:::prashantpriyam/priyam/*",
      "Condition": {"Null": {"aws:MultiFactorAuthAge": true }}
    },
    {
      "Sid": "",
      "Effect": "Deny",
      "Principal": "*",
      "Action": "s3:*",
      "Resource": "arn:aws:s3:::prashantpriyam/priyam/*",
      "Condition": {"NumericGreaterThan": {"aws:MultiFactorAuthAge": 600 }}
    },
    {
      "Sid": "",
      "Effect": "Allow",
      "Principal": "*",
      "Action": ["s3:GetObject"],
      "Resource": "arn:aws:s3:::prashantpriyam/*"
    }
  ]
}

In the preceding code, operations on the priyam/ prefix of the bucket are denied unless the request was authenticated with an MFA token within the last 600 seconds, while read-only GetObject access is allowed on the rest of the bucket.

Apart from MFA, we can enable versioning so that S3 automatically keeps multiple versions of an object, eliminating the risk of unwanted modification of data. Versioning can be enabled from the AWS Console as well as through the API. You can also enable cross-region replication so that the S3 bucket content can be replicated to another selected region. This option is mostly used when you want to deliver static content in two different regions, but it also gives you redundancy.

For infrequently accessed data, you can enable a lifecycle policy, which helps you transfer objects to a low-cost archival storage class called Glacier.

Let's see how to secure an S3 bucket using the AWS Console. To do this, we need to log in to the AWS Console and search for S3. Now, click on the bucket you want to secure:

In the screenshot, we have selected the bucket called velocis-manali-trip-112017, and in the bucket properties, we can see that we have not yet enabled the security options that we have learned about so far. Let's implement the security.

Now, we need to click on the bucket and then on the Properties tab. From here, we can enable Versioning, Default encryption, Server access logging, and Object-level logging:

To enable Server access logging, you need to specify the name of the target bucket and a prefix for the logs:

To enable encryption, you need to specify whether you want to use AES-256 or AWS KMS-based encryption. Now, click on the Permissions tab.
From here, you will be able to define the Access Control List, Bucket Policy, and CORS configuration:

In the Access Control List, you define who can access what and to what extent; in the Bucket Policy, you define resource-based permissions on the bucket (as we saw in the earlier bucket policy example); and in the CORS configuration, we define the rules for CORS. Let's look at a sample CORS file:

<!-- Sample policy -->
<CORSConfiguration>
  <CORSRule>
    <AllowedOrigin>*</AllowedOrigin>
    <AllowedMethod>GET</AllowedMethod>
    <MaxAgeSeconds>3000</MaxAgeSeconds>
    <AllowedHeader>Authorization</AllowedHeader>
  </CORSRule>
</CORSConfiguration>

It's an XML document that allows read-only (GET) access from any origin. In the preceding code, instead of a URL, the origin is the wildcard *, which means anyone.

Now, click on the Management section. From here, we define lifecycle rules, replication, and so on:

In the lifecycle rule shown, an S3 bucket object is transitioned to the Standard-IA tier after 30 days and to Glacier after 60 days.

This is how we define security on an S3 bucket. To summarize, we learned about security access and policies in Amazon S3. If you've enjoyed reading this, do check out the book Cloud Security Automation to see how private cloud security functions can be automated for better time and cost-effectiveness.

Creating and deploying an Amazon Redshift cluster
Amazon Sagemaker makes machine learning on the cloud easy
How cybersecurity can help us secure cyberspace
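As a closing note: the controls configured above can also be applied programmatically. The following is a minimal sketch using boto3, the AWS SDK for Python. It assumes your AWS credentials are already configured, and the bucket name is the illustrative one used throughout this article.

import json
import boto3

s3 = boto3.client("s3")
bucket = "prashantpriyam"  # illustrative bucket name from this article

# Resource-based, read-only bucket policy (same effect as the first JSON policy above).
policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "AllowPublicRead",
        "Effect": "Allow",
        "Principal": {"AWS": "*"},
        "Action": ["s3:GetObject"],
        "Resource": [f"arn:aws:s3:::{bucket}/*"],
    }],
}
s3.put_bucket_policy(Bucket=bucket, Policy=json.dumps(policy))

# Turn on versioning, which protects against unwanted modification of objects.
s3.put_bucket_versioning(
    Bucket=bucket,
    VersioningConfiguration={"Status": "Enabled"},
)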

Introducing Penetration Testing

Packt
07 Sep 2016
28 min read
In this article by Kevin Cardwell, the author of the book Building Virtual Pentesting Labs for Advanced Penetration Testing, we will discuss the role that pen testing plays in the professional security testing framework. We will discuss the following topics:

Defining security testing
An abstract security testing methodology
Myths and misconceptions about pen testing

(For more resources related to this topic, see here.)

If you have been doing penetration testing for some time and are very familiar with the methodology and concept of professional security testing, you can skip this article or just skim it. But you might learn something new or at least a different approach to penetration testing. We will establish some fundamental concepts in this article.

Security testing

If you ask 10 consultants to define what security testing is today, you will more than likely get a variety of responses. Here is the Wikipedia definition:

"Security testing is a process and methodology to determine that an information system protects and maintains functionality as intended."

In my opinion, this is the most important aspect of penetration testing. Security is a process and not a product. I'd also like to add that it is a methodology and not a product.

Another component to add to our discussion is the point that security testing takes into account the main areas of a security model. A sample of this is as follows:

Authentication
Authorization
Confidentiality
Integrity
Availability
Non-repudiation

Each one of these components has to be considered when an organization is in the process of securing their environment. Each one of these areas in itself has many subareas that also have to be considered when it comes to building a secure architecture. The lesson is that when testing security, we have to address each of these areas.

Authentication

It is important to note that almost all systems and/or networks today have some form of authentication and, as such, it is usually the first area we secure. This could be something as simple as users selecting a complex password, or adding additional factors to authentication, such as a token, biometrics, or certificates. No single factor of authentication is considered to be secure by itself in today's networks.

Authorization

Authorization is often overlooked since it is assumed and is not a component of some security models. That is one approach to take, but it's preferable to include it in most testing models, as the concept of authorization is essential: it is how we assign the rights and permissions to access a resource, and we want to ensure it is secure. Authorization enables us to have different types of users with separate privilege levels coexist within a system. We do this when we have the concept of discretionary access, where a user can have administrator privileges on a machine or assume the role of an administrator to gain additional rights or permissions, whereas we might want to provide limited resource access to a contractor.

Confidentiality

Confidentiality is the assurance that something we want to be protected on the machine or network is safe and not at risk of being compromised. This is made harder by the fact that the protocol (TCP/IP) running the Internet today is a protocol that was developed in the early 1970s. At that time, the Internet consisted of just a few computers, and now, even though the Internet has grown to the size it is today, we are still running the same protocol from those early days.
This makes it more difficult to preserve confidentiality. It is important to note that when the developers created the protocol and the network was very small, there was an inherent sense of trust in who you could potentially be communicating with. This sense of trust is what we continue to fight from a security standpoint today. The concept from that early creation was, and still is, that you can trust data received to be from a reliable source. We know now that the Internet has grown to its huge size and that this is definitely not the case.

Integrity

Integrity is similar to confidentiality, in that we are concerned with the compromising of information. Here, we are concerned with the accuracy of data and the fact that it is not modified in transit or from its original form. A common way of ensuring this is to use a hashing algorithm to validate that the file is unaltered.

Availability

One of the most difficult things to secure is availability, that is, the right to have a service when required. The irony of availability is that a resource made available to one particular user is, in practice, available to all. Everything seems perfect from the perspective of an honest, legitimate user. However, not all users are honest or legitimate, and due to the sheer fact that resources are finite, they can be flooded or exhausted; hence, it is more difficult to protect this area.

Non-repudiation

Non-repudiation makes the claim that a sender cannot deny sending something after the fact. This is the one I usually have the most trouble with, because a computer system can be compromised and we cannot guarantee that, within the software application, the keys we are using for the validation are actually the ones being used. Furthermore, the art of spoofing is not a new concept. With these facts in mind, the claim that we can guarantee the origin of a transmission by a particular person from a particular computer is not entirely accurate. Since we do not know the state of the machine with respect to its security, it would be very difficult to prove this concept in a court of law. All it takes is one compromised machine, and then the theory that you can guarantee the sender goes out the window.

We won't cover each of the components of security testing in detail here, because that is beyond the scope of what we are trying to achieve. The point I want to get across in this article is that security testing is the concept of looking at each and every one of these and other components of security, addressing them by determining the amount of risk an organization has from them, and then mitigating that risk.

An abstract testing methodology

As mentioned previously, we concentrate on a process and apply that to our security components when we go about security testing. For this, I'll describe an abstract methodology here. We will define our testing methodology as consisting of the following steps:

Planning
Nonintrusive target search
Intrusive target search
Data analysis
Reporting

Planning

Planning is a crucial step of professional testing. But, unfortunately, it is one of the steps that is rarely given the time it essentially requires. There are a number of reasons for this, but the most common one is the budget: clients do not want to give consultants days and days to plan their testing. In fact, planning is usually given a very small portion of the time in the contract for this reason. Another important point about planning is that a potential adversary is going to spend a lot of time on it.
There are two things we should tell clients with respect to this step that we, as professional testers, cannot do but an attacker could:

6 to 9 months of planning: The reality is that a hacker who targets someone is going to spend a lot of time before the actual attack. We cannot expect our clients to pay us for 6 to 9 months of work just to search around and read on the Internet.
Break the law: We could break the law and go to jail, but that is not something that is appealing to most. Additionally, being a certified hacker and licensed penetration tester, you are bound to an oath of ethics, and you can be pretty sure that breaking the law while testing is a violation of this code of ethics.

Nonintrusive target search

There are many names that you will hear for nonintrusive target search. Some of these are open source intelligence, public information search, and cyber intelligence. Regardless of which name you use, they all come down to the same thing: using public resources to extract information about the target or company you are researching. There is a plethora of tools available for this. We will briefly discuss some of them to convey the concept, and those who are not familiar with them can try them out on their own.

Nslookup

The nslookup tool can be found as a standard program in the majority of the operating systems we encounter. It is a method of querying DNS servers to determine information about a potential target. It is very simple to use and provides a great deal of information. Open a command prompt on your machine, and enter nslookup www.packtpub.com. This will result in output like the following screenshot:

As you can see, the response to our command is the IP address of the DNS server for the www.packtpub.com domain. If we were testing this site, we would explore this further. Alternatively, we may also use another great DNS-lookup tool called dig. For now, we will leave it alone and move to the next resource.

Central Ops

The https://centralops.net/co/ website has a number of tools that we can use to gather information about a potential target. There are tools for IPs, domains, name servers, e-mail, and so on. The landing page for the site is shown in the next screenshot:

The first thing we will look at in the tool is the ability to extract information from a web server header page: click on TcpQuery, and in the window that opens, enter www.packtpub.com and click on Go. An example of the output from this is shown in the following screenshot:

As the screenshot shows, the web server banner has been modified and says packt. When we run the query against the www.packtpub.com domain, we can determine that the site is using the Apache web server and which version is running; however, we have much more work to do in order to gather enough information to target this site.

The next thing we will look at is the capability to review the domain server information. This is accomplished by using the domain dossier. Return to the main page, and in the Domain Dossier dialog box, enter yahoo.com and click on go. An example of the output from this is shown in the following screenshot:

There are many tools we could look at, but again, we just want to briefly acquaint ourselves with tools for each area of our security testing procedure.
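Header checks like the TcpQuery one can also be scripted with nothing more than Python's standard library. The following is a minimal sketch; the target is the same illustrative domain used above, and you should only run such checks against hosts you are permitted to query:

import http.client

def server_banner(host):
    """Issue a HEAD request and return the web server's Server header, if any."""
    conn = http.client.HTTPSConnection(host, timeout=10)
    try:
        conn.request("HEAD", "/")
        response = conn.getresponse()
        return response.getheader("Server", "(not disclosed)")
    finally:
        conn.close()

print(server_banner("www.packtpub.com"))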
If you are using Windows and you open a command prompt window and enter tracert www.microsoft.com, you will observe that it fails, as indicated in this screenshot:

The majority of you reading this article probably know why this is blocked; for those of you who do not, it is because Microsoft has blocked the ICMP protocol, which is what the tracert command uses by default. It is simple to get past this because the server is running services; we can use those services' protocols to reach it, and in this case, that protocol is TCP. If you go to http://www.websitepulse.com/help/testtools.tcptraceroute-test.html and enter www.microsoft.com in the IP address/domain field with the default location and conduct the TCP Traceroute test, you will see it will now be successful, as shown in the following screenshot:

As you can see, we now have additional information about the path to the potential target; moreover, we have additional machines to add to our target database as we conduct our test within the limits of the rules of engagement.

The Wayback Machine

The Wayback Machine is proof that nothing that has ever been on the Internet truly leaves! There have been many assessments in which a client informed the team that they were testing a web server that hadn't been placed into production, and when they were shown that the site had already been copied and stored, they were amazed that this actually does happen. I like to use the site to download some of my favorite presentations, tools, and so on, that have been removed from a site or, in some cases, whose site no longer exists. As an example, one of the tools used to show students the concept of steganography is the tool infostego. This tool was released by Antiy Labs, and it provided students an easy-to-use tool to understand the concepts. Well, if you go to their site at http://www.antiy.net/, you will find no mention of the tool—in fact, it will not be found on any of their pages. They now concentrate more on the antivirus market. A portion of their page is shown in the following screenshot:

Now, let's try to use the power of the Wayback Machine to find our software. Open the browser of your choice and go to www.archive.org. The Wayback Machine is hosted there and can be seen in the following screenshot:

As indicated, there are 491 billion pages archived at the time of writing this article. In the URL section, enter www.antiy.net and hit Enter. This will result in the site searching its archives for the entered URL. After a few moments, the results of the search will be displayed. An example of this is shown in the following screenshot:

We know we don't want to access a page that has been recently archived, so to be safe, click on 2008. This will result in the calendar being displayed, showing all the dates in 2008 on which the site was archived. You can select any one that you want; an example of the archived site from December 18 is shown in the following screenshot. As you can see, the infostego tool is available, and you can even download it! Feel free to download and experiment with the tool if you like.

Shodan

The Shodan site is one of the most powerful cloud scanners available. You are required to register with the site to be able to perform the more advanced types of queries. To access the site, go to https://www.shodan.io/. It is highly recommended that you register, since the power of the scanner and the information you can discover is quite impressive, especially after registration.
The page that is presented once you log in is shown in the following screenshot:

The screenshot shows recently shared search queries as well as the most recent searches the logged-in user has conducted. This is another tool you should explore deeply if you do professional security testing. For now, we will look at one example and move on, since an entire article could be written just on this tool. If you are logged in as a registered user, you can enter iphone us into the search query window. This will return pages with iphone in the query, mostly in the United States, but as with any tool, there will be some hits on other sites as well. An example of the results of this search is shown in the following screenshot:

Intrusive target search

This is the step that starts the true hacker-type activity. This is when you probe and explore the target network; consequently, ensure that you have explicit written permission to carry out this activity. Never perform an intrusive target search without permission, as this written authorization is the only aspect that differentiates you from a malicious hacker. Without it, you are considered a criminal like them. Within this step, there are a number of components that further define the methodology.

Find live systems

No matter how good our skills are, we need to find systems that we can attack. This is accomplished by probing the network and looking for a response. One of the most popular tools to do this with is the excellent open source tool nmap, written by Fyodor. You can download nmap from https://nmap.org/, or you can use any number of toolkit distributions that include the tool. We will use the exceptional penetration-testing framework Kali Linux. You can download the distribution from https://www.kali.org/. Regardless of which version of nmap you explore with, they all have similar, if not the same, command syntax. In a terminal window, or a command prompt window if you are running it on Windows, type nmap -sP <insert network IP address>. The network we are scanning is the 192.168.4.0/24 network; yours more than likely will be different. An example of this ping sweep command is shown in the following screenshot:

We now have live systems on the network that we can investigate further. For those of you who would like a GUI tool, you can use Zenmap.

Discover open ports

Now that we have live systems, we want to see what is open on these machines. A good analogy for a port is a door: if the door is open, I can approach it. There might be things that I have to do once I get to the door to gain access, but if it is open, then I know it is possible to get access, and if it is closed, then I know I cannot go through that door. Furthermore, we might need to know the type of lock that is on the door, because it might have weaknesses or additional protection that we need to know about. The same goes for ports: if a port is closed, then we cannot go into that machine using that port. We have a number of ways to check for open ports, and we will continue with the same theme and use nmap. We have machines that we have identified, so we do not have to scan the entire network as we did previously—we will only scan the machines that are up. Additionally, one of the machines found is our own machine; therefore, we will not scan ourselves—we could, but it's not the best plan. The targets that are live on our network are 1, 2, 16, and 18. We can scan these by entering nmap -sS 192.168.4.1,2,16,18.
Those of you who want to learn more about the different types of scans can refer to http://nmap.org/book/man-port-scanning-techniques.html. Alternatively, you can use the nmap -h option to display a list of options. The first portion of the stealth scan (which does not complete the three-way handshake) result is shown in the following screenshot:

Discover services

We now have live systems and the openings that are on each machine. The next step is to determine what, if anything, is running on the ports we have discovered—it is imperative that we identify what is running on the machine so that we can use it as we progress deeper into our methodology. We once again turn to nmap. In most command and terminal windows, there is history available; hopefully, this is the case for you and you can browse through it with the up and down arrow keys on your keyboard. For our network, we will enter nmap -sV 192.168.4.1. From our previous scan, we determined that the other machines have all of their scanned ports closed, so to save time, we won't scan them again. An example of this is shown in the following screenshot:

From the results, you can now see that we have additional information about the ports that are open on the target. We could use this information to search the Internet using some of the tools we covered earlier, or we could let a tool do it for us.

Enumeration

Enumeration is the process of extracting more information about the potential target, including the OS, usernames, machine names, and other details that we can discover. The latest release of nmap has a scripting engine that will attempt to discover a number of details and, in fact, enumerate the system to some extent. To process the enumeration with nmap, use the -A option. Enter nmap -A 192.168.4.1. Remember that you will have to enter your respective target address, which might be different from the one mentioned here. Also, this scan will take some time to complete and will generate a lot of traffic on the network. If you want an update, you can receive one at any time by pressing the spacebar. This command's output is quite extensive, so a truncated version is shown in the following screenshot:

As you can see, you have a great deal of information about the target and are quite ready to start the next phase of testing. Additionally, we now have the OS correctly identified; until this step, we did not have that.

Identify vulnerabilities

After we have processed the steps up to this point, we have information about the services and versions of the software that are running on the machine. We could take each version and search the Internet for vulnerabilities, or we could let a tool do it for us—for our purposes, we will choose the latter. There are numerous vulnerability scanners on the market, and the one you select is largely a matter of personal preference. The commercial tools for the most part have a lot more information and details than the free and open source ones, so you will have to experiment and see which one you prefer. We will be using the Nexpose vulnerability scanner from Rapid7. There is a community version of their tool that will scan a limited number of targets, but it is worth looking into. You can download Nexpose from http://www.rapid7.com/. Once you have downloaded it, you will have to register, and you'll receive a key by e-mail to activate it. I will leave out the details of this and let you experience them on your own. Nexpose has a web interface, so once you have installed and started the tool, you have to access it.
You can access it by entering https://localhost:3780. It seems to take an extraordinary amount of time to initialize, but eventually, it will present you with a login page, as shown in the following screenshot:

The credentials required for login will have been created during the installation. It is quite an involved process to set up a scan, and since we are just outlining the process and there is an excellent quick start guide available, we will move straight on to the results of the scan. We will have plenty of time to explore this area as the article progresses. The result of a typical scan is shown in the following screenshot:

As you can see, the target machine is in bad shape. One nice thing about Nexpose is that, since Rapid7 also owns Metasploit, it will list the vulnerabilities that have a known exploit within Metasploit.

Exploitation

This is the step of security testing that gets all the press, and it is, in simple terms, the process of validating a discovered vulnerability. It is important to note that it is not a 100-percent successful process—some vulnerabilities will not have exploits, and some will have exploits for a certain patch level of the OS but not for others. As I like to say, it is not an exact science and is in reality an infinitesimal part of professional security testing, but it is fun, so we will briefly look at the process. We also like to say in security testing that we have to validate and verify everything a tool reports to our client, and that is what we try to do with exploitation. The point is that you are executing a piece of code on a client's machine, and this code could cause damage. The most popular free tool for exploitation is the Rapid7-owned tool Metasploit. There are entire articles written on using the tool, so here we will just look at the results of running it and exploiting a machine. As a reminder, you have to have written permission to do this on any network other than your own; if in doubt, do not attempt it. Let's look at the options:

There is quite a bit of information in the options. The one we will cover is the fact that we are using the exploit for the MS08-067 vulnerability, which is a vulnerability in the server service. It is one of the better ones to use, as it almost always works and you can exploit it over and over again. If you want to know more about this vulnerability, you can check it out here: http://technet.microsoft.com/en-us/security/bulletin/ms08-067.

Since the options are set, we are ready to attempt the exploit, and as indicated in the following screenshot, we are successful and have gained a shell on the target machine. We will cover this process as we progress through the article; for now, we will stop here. From here onward, it is only your imagination that can limit you. The shell you have opened is running at system privileges; therefore, it is the same as running a Command Prompt on any Windows machine with administrator rights, so whatever you can do in that shell, you can also do in this one. You can also do a number of other things, which you will learn as we progress through the article. Furthermore, with system access, we can plant code as malware: a backdoor or really anything we want. While we might not do that as professional testers, a malicious hacker could, and this would require additional analysis to discover on the client's end.

Data analysis

Data analysis is often overlooked, and it can be a time-consuming process. This is the skill that takes the most time to develop.
Most testers can run tools and perform manual testing and exploitation, but the real challenge is taking all of the results and analyzing them. We will look at one example of this in the next screenshot. Take a moment to review the protocol analysis captured with the tool Wireshark—as an analyst, you need to know what the protocol analyzer is showing you. Do you know what exactly is happening? Do not worry, I will tell you after we have a look at the following screenshot:

You can observe that the machine with the IP address 192.168.3.10 is replying with an ICMP packet that is type 3 code 13; in other words, the packet is being rejected because the communication is administratively filtered. Furthermore, this tells us that there is a router in place and it has an access control list (ACL) that is blocking the packet. Moreover, it tells us that the administrator is not following the best practice of absorbing packets and not replying with error messages that can assist an attacker. This is just a small example of the data analysis step; there are many things you will encounter and many more that you will have to analyze to determine what is taking place in the tested environment. Remember: the smarter the administrator, the more challenging pen testing can become—which is actually a good thing for security!

Reporting

Reporting is another one of the areas in testing that is often overlooked in training classes. This is unfortunate, since it is one of the most important things you need to master. You have to be able to present a report of your findings to the client. These findings will assist them in improving their security practices, and if they like the report, it is what they will most often share with partners and other colleagues. This is your advertisement for what separates you from others. It showcases not only that you know how to follow a systematic process and methodology of professional testing, but also that you know how to put it into an output form that can serve as a reference going forward for the client. At the end of the day, as professional security testers, we want to help our clients improve their security scenario, and that is where reporting comes in. There are many references for reports, so the only thing we will cover here is the handling of findings. There are two components we use when it comes to findings. The first is a summary-of-findings table, so that the client can reference the findings early on in the report. The second is the detailed findings section. This is where we put all of the information about the findings. We rate them according to severity and include the following:

Description

This is where we provide the description of the vulnerability, specifically, what it is and what is affected.

Analysis and exposure

In this part of the report, you want to show the client that you have done your research and aren't just repeating what the scanning tool told you. It is very important that you research a number of resources and write a good analysis of what the vulnerability is, along with an explanation of the exposure it poses to the client site.

Recommendations

We want to provide the client with a reference to the patches and measures to apply in order to mitigate the risk of discovered vulnerabilities. We never tell the client not to use the service and/or protocol! We do not know what their policy is, and it might be something they have to have in order to support their business.
In these situations, it is our job as consultants to make recommendations and help the client determine the best way to either mitigate the risk or remove it. When a patch is not available, we should provide a reference to potential workarounds until one is available.

References

If there are references such as a Microsoft bulletin number or a Common Vulnerabilities and Exposures (CVE) number, this is where we would place them.

Myths and misconceptions about pen testing

After more than 20 years of performing professional security testing, it is amazing to me how many people are confused about what a penetration test is. I have on many occasions gone to a meeting where the client is convinced they want a penetration test, and when I explain exactly what it is, they look at me in shock. So, what exactly is a penetration test? Remember that our abstract methodology had a step for intrusive target searching, and part of that step was another methodology for scanning? Well, the last item in the scanning methodology, exploitation, is the step that is indicative of a penetration test. That's right: that one step is the validation of vulnerabilities, and this is what defines penetration testing. Again, it is not what most clients think when they bring a team in. The majority of them in reality want a vulnerability assessment. When you start explaining to them that you are going to run exploit code and all these really cool things on their systems and/or networks, they are usually quite surprised. Most of the time, the client will ask you to stop before the validation step. On some occasions, they will ask you to prove what you have found, and then you might get to show validation. I was once in a meeting with the IT department of a foreign country's stock market, and when I explained what we were about to do to validate vulnerabilities, the IT director's reaction was, "Those are my stock broker records, and if we lose them, we lose a lot of money!" Hence, we did not perform the validation step in that test.

Summary

In this article, we defined security testing as it relates to this article, and we identified an abstract methodology that consists of the following steps: planning, nonintrusive target search, intrusive target search, data analysis, and reporting. More importantly, we expanded the abstract model when it came to intrusive target searching, and we defined within that a methodology for scanning. This consisted of identifying live systems, looking at open ports, discovering services, enumeration, identifying vulnerabilities, and finally, exploitation. Furthermore, we discussed what a penetration test is: a validation of vulnerabilities, associated with one step in our scanning methodology. Unfortunately, most clients do not understand that when you validate vulnerabilities, it requires you to run code that could potentially damage a machine or, even worse, damage their data. Because of this, once they discover it, most clients ask that this not be part of the test. We created a baseline for what penetration testing is in this article, and we will use this definition throughout the remainder of this article. In the next article, we will discuss the process of choosing your virtual environment.

Resources for Article:

Further resources on this subject: CISSP: Vulnerability and Penetration Testing for Access Control [article] Web app penetration testing in Kali [article] BackTrack 4: Security with Penetration Testing Methodology [article]

A new Stuxnet-level vulnerability named Simjacker used to secretly spy over mobile phones in multiple countries for over 2 years: Adaptive Mobile Security reports

Savia Lobo
13 Sep 2019
6 min read
Updated: On September 27, researchers from Security Research Labs (SRLabs) released five key research findings on the extent of Simjacker and how one can find out whether a SIM is vulnerable to such an exploit.

Yesterday, Adaptive Mobile Security made a breakthrough announcement revealing a new vulnerability, which the firm calls Simjacker, that has been used by attackers to spy on mobile phones. Researchers at Adaptive Mobile Security believe the vulnerability has been exploited for at least the last 2 years "by a highly sophisticated threat actor in multiple countries, primarily for the purposes of surveillance." They further added that the Simjacker vulnerability "is a huge jump in complexity and sophistication compared to attacks previously seen over mobile core networks. It represents a considerable escalation in the skillset and abilities of attackers seeking to exploit mobile networks."

Also Read: 25 million Android devices infected with 'Agent Smith', a new mobile malware

How the Simjacker attack works and why it is a grave threat

In the Simjacker attack, an SMS containing specific spyware-like code is sent to a victim's mobile phone. When received, this SMS instructs the UICC (SIM card) within the phone to 'take over' the mobile phone in order to retrieve and perform sensitive commands. "During the attack, the user is completely unaware that they received the SMS with the Simjacker Attack message, that information was retrieved, and that it was sent outwards in the Data Message SMS - there is no indication in any SMS inbox or outbox," the researchers mention in their official blog post.

Source: Adaptive Mobile Security

The Simjacker attack relies on the S@T (SIMalliance Toolbox, pronounced 'sat') Browser software as an execution environment. The S@T Browser, an application specified by the SIMalliance, can be installed on different UICCs (SIM cards), including eSIMs. The S@T Browser software is quite old and unpopular; its initial aim was to enable services such as getting your account balance through the SIM card. The software specifications have not been updated since 2009, and the technology has been superseded by many others since then.

Researchers say they have observed the "S@T protocol being used by mobile operators in at least 30 countries whose cumulative population adds up to over a billion people, so a sizable amount of people are potentially affected. It is also highly likely that additional countries have mobile operators that continue to use the technology on specific SIM cards."

The Simjacker attack is a next-gen SMS attack

The Simjacker attack is unique. Previous SMS malware involved sending links to malware. However, the Simjacker Attack Message carries a complete malware payload, specifically spyware, with instructions for the SIM card to execute the attack. The Simjacker attack can do more than simply track the user's location and personal data. By modifying the attack message, the attacker could instruct the UICC to execute a range of other attacks. This is because the same method allows an attacker complete access to the STK command set, including commands such as launching a browser, sending data, setting up a call, and much more.

Also Read: Using deep learning methods to detect malware in Android Applications

The researchers used these commands in their own tests and were successfully able to make targeted handsets open web browsers, ring other phones, send text messages, and so on.
They further highlighted other purposes this attack could be used for:

Mis-information (e.g. by sending SMS/MMS messages with attacker-controlled content)
Fraud (e.g. by dialling premium-rate numbers)
Espionage (besides the location-retrieving attack, an attacked device could function as a listening device by ringing a number)
Malware spreading (by forcing a browser to open a web page with malware located on it)
Denial of service (e.g. by disabling the SIM card)
Information retrieval (retrieving other information such as language, radio type, battery level, and so on)

The researchers highlight another benefit of the Simjacker attack for attackers: many of its variants work independently of handset type, as the vulnerability depends on the software on the UICC and not on the device.

Adaptive Mobile says behind the Simjacker attack is a "specific private company that works with governments to monitor individuals." This company also has extensive access to the SS7 and Diameter core network. The researchers said that in one country, roughly 100-150 specific individual phone numbers are being targeted per day via Simjacker attacks. Also, a few phone numbers had been tracked a hundred times over a 7-day period, suggesting they belonged to high-value targets.

Source: Adaptive Mobile Security

The researchers added that they have been "working with our own mobile operator customers to block these attacks, and we are grateful for their assistance in helping detect this activity." They said they have also communicated the existence of this vulnerability to the GSM Association, the trade body representing the mobile operator community. The vulnerability has been managed through the GSMA CVD program, allowing information to be shared throughout the mobile community. "Information was also shared to the SIM alliance, a trade body representing the main SIM Card/UICC manufacturers and they have made new security recommendations for the S@T Browser technology," the researchers said.

"The Simjacker exploit represents a huge, nearly Stuxnet-like, leap in complexity from previous SMS or SS7/Diameter attacks, and show us that the range and possibility of attacks on core networks are more complex than we could have imagined in the past," the blog mentions.

The Adaptive Mobile Security team will present more details about the Simjacker attack at the Virus Bulletin Conference, London, on 3rd October 2019.

https://twitter.com/drogersuk/status/1172194836985913344
https://twitter.com/campuscodi/status/1172141255322689537

To know more about the Simjacker attack in detail, read Adaptive Mobile's official blog post.

SRLabs researchers release protection tools against Simjacker and other SIM-based attacks

On September 27, researchers from Security Research Labs (SRLabs) released five key findings on the extent of Simjacker and how one can find out whether a SIM is vulnerable to such an exploit. The researchers have highlighted five key findings in their research report and have also provided an FAQ for users to implement the necessary measures.
Following are the five key research findings the SRLabs researchers mention:

Around 6% of 800 SIM cards tested in recent years were vulnerable to Simjacker
A second, previously unreported, vulnerability affects an additional 3.5% of SIM cards
The tool SIMtester provides a simple way to check any SIM card for both vulnerabilities (and for a range of other issues reported in 2013)
The SnoopSnitch Android app has warned users about binary SMS attacks, including Simjacker, since 2014 (attack alerting requires a rooted Android phone with a Qualcomm chipset)
A few Simjacker attacks have been reported since 2016 by the thousands of SnoopSnitch users that actively contribute data

To know about these key findings by the SRLabs researchers in detail, read the official report.

Other interesting news in Security

Endpoint protection, hardening, and containment strategies for ransomware attack protection: CISA recommended FireEye report Highlights
Intel's DDIO and RDMA enabled microprocessors vulnerable to new NetCAT attack
Wikipedia hit by massive DDoS (Distributed Denial of Service) attack; goes offline in many countries

Google Project Zero reveals six “interactionless” bugs that can affect iOS via Apple’s iMessage

Savia Lobo
31 Jul 2019
3 min read
Yesterday, two members of the Google Project Zero team revealed six "interactionless" security bugs that can affect iOS by exploiting the iMessage client. Four of these bugs can execute malicious code on a remote iOS device without any prior user interaction. Apple released fixes for these bugs in the iOS 12.4 update on July 22.

The two Project Zero researchers, Natalie Silvanovich and Samuel Groß, published details and demo proof-of-concept code for only five of the six vulnerabilities. Details of one of the "interactionless" vulnerabilities have been kept private because Apple's iOS 12.4 patch did not completely resolve the bug, according to Natalie Silvanovich.

https://twitter.com/natashenka/status/1155941211275956226

4 bugs can perform an RCE via a malformed message

The bugs with vulnerability IDs CVE-2019-8647, CVE-2019-8660, CVE-2019-8662, and CVE-2019-8641 (the one whose details are kept private) can execute malicious code on a remote iOS device. The attacker simply has to send a malformed message to the victim's phone. Once the user opens and views the message, the malicious code executes without the user knowing about it.

2 bugs can leak a user's on-device data to a remote device

The other two bugs, CVE-2019-8624 and CVE-2019-8646, allow an attacker to leak data from a device's memory and read files off a remote device. This, too, can happen without the user knowing. "Apple's own notes about iOS 12.4 indicate that the unfixed flaw could give hackers a means to crash an app or execute commands of their own on recent iPhones, iPads and iPod Touches if they were able to discover it", BBC reports.

Silvanovich will talk about these remote and interactionless iPhone vulnerabilities at this year's Black Hat security conference, held in Las Vegas from August 3 to 8. An abstract of her talk reads, "There have been rumors of remote vulnerabilities requiring no user interaction being used to attack the iPhone, but limited information is available about the technical aspects of these attacks on modern devices."

Her presentation will explore "the remote, interaction-less attack surface of iOS. It discusses the potential for vulnerabilities in SMS, MMS, Visual Voicemail, iMessage and Mail, and explains how to set up tooling to test these components. It also includes two examples of vulnerabilities discovered using these methods."

According to ZDNet, "When sold on the exploit market, vulnerabilities like these can bring a bug hunter well over $1 million, according to a price chart published by Zerodium. It wouldn't be an exaggeration to say that Silvanovich just published details about exploits worth well over $5 million, and most likely valued at around $10 million."

For iOS users who haven't yet updated to the latest version, it is advisable to install the iOS 12.4 release without any delay.

Early this month, the Google Project Zero team revealed a bug in Apple's iMessage that bricks an iPhone, causing repetitive crash and respawn operations. This bug was patched in the iOS 12.3 update.

To know more about these vulnerabilities in detail, visit the Google Project Zero bug report page.

Stripe's API degradation RCA found unforeseen interaction of database bugs and a config change led to cascading failure across critical services
Azure DevOps report: How a bug caused 'sqlite3 for Python' to go missing from Linux images
Is the Npm 6.9.1 bug a symptom of the organization's cultural problems?

Submitting a malware Word document

Packt
18 Oct 2013
5 min read
(For more resources related to this topic, see here.)

We will submit a document dealing with Iran's Oil and Nuclear Situation. Perform the following steps:

Open a new tab in the terminal and type the following command:

$ python utils/submit.py --platform windows --package doc "shares/Iran's Oil and Nuclear Situation.doc"

In this case, the document is located inside the shares folder. You have to change the location based on where your document is. Please make sure you get a Success message, like in the preceding screenshot, with a task ID of 7 (the ID depends on how many times you have submitted malware before).

Cuckoo will then start the latest snapshot of the virtual machine we've made. Windows will open the Word document. A warning pop-up window will appear, as shown in the preceding screenshot. We assume that users will not be aware of what that warning means, so we will choose the I recognize this content. Allow it to play. option and click on the Continue button.

Wait a moment until the malicious document takes some action. The VM will close automatically after all the actions performed by the document are finished. Now, you will see the Cuckoo status—on the terminal tab where we started Cuckoo—as shown in the following screenshot:

We have now finished the submission process. Let's look at the subfolders of Cuckoo, in the storage/analyses/ path. There are some numbered folders in storage/analyses, which represent the analysis tasks inside the database. These folders are named based on the task ID we created before, so do not be confused when you find folders other than 7. Just find the folder you were searching for based on the task ID.

For each analysis, Cuckoo Sandbox creates several files in a dedicated directory. The following is an example of an analysis directory structure:

|-- analysis.conf
|-- analysis.log
|-- binary
|-- dump.pcap
|-- memory.dmp
|-- files
|   |-- 1234567890
|   `-- dropped.exe
|-- logs
|   |-- 1232.raw
|   |-- 1540.raw
|   `-- 1118.raw
|-- reports
|   |-- report.html
|   |-- report.json
|   |-- report.maec11.xml
|   |-- report.metadata.xml
|   `-- report.pickle
`-- shots
    |-- 0001.jpg
    |-- 0002.jpg
    |-- 0003.jpg
    `-- 0004.jpg

Let us have a look at some of them in detail:

analysis.conf: This is a configuration file automatically generated by Cuckoo to instruct its analyzer with some details about the current analysis. It is generally of no interest to the end user, as it is used exclusively and internally by the sandbox.
analysis.log: This is a log file generated by the analyzer; it contains a trace of the analysis execution inside the guest environment. It will report the creation of processes and files, and any errors that occurred during the execution.
binary: This is the binary file we submitted before.
dump.pcap: This is the network dump file generated by tcpdump or any other corresponding network sniffer.
memory.dmp: In case you enabled it, this file contains the full memory dump of the analysis machine.
files: This directory contains all the files the malware operated on and that Cuckoo was able to dump.
logs: This directory contains all the raw logs generated by Cuckoo's process monitoring.
reports: This directory contains all the reports generated by Cuckoo.
shots: This directory contains all the screenshots of the guest's desktop taken during the malware execution.

The contents will not always match this listing exactly; they depend on how Cuckoo Sandbox analyzes the malware, on the kind of malware submitted, and on its behavior.
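Since report.json is machine-readable, parts of the triage can be scripted instead of clicked through. The following is a minimal Python sketch that prints the remote hosts recorded for analysis task 7; the network and hosts field names match common Cuckoo report layouts, but they can vary between Cuckoo versions, so treat this as a starting point:

import json

# Task ID 7 is the analysis created earlier in this recipe.
with open("storage/analyses/7/reports/report.json") as f:
    report = json.load(f)

# Print every host the sample contacted during the analysis.
for host in report.get("network", {}).get("hosts", []):
    print(host)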
After analyzing Iran's Oil and Nuclear Situation.doc, there will be four folders, namely files, logs, reports, and shots, and three files, namely analysis.log, binary, and dump.pcap, inside the storage/analyses/7 folder. To review the final result of the malware's execution inside the Guest OS in a user-friendly form, open the HTML report located inside the reports folder. There will be a file named report.html; double-click it to open it in a web browser. Another option for viewing the content of report.html is to use this command:

$ lynx report.html

In your browser, there are several tabs with the information gathered by the Cuckoo Sandbox analyzer. In the File tab, you may see some interesting information: we can see that this malware was created by injecting a Word document containing nothing but a macro virus on Wednesday, November 9th, between 03:22 and 03:24 hours. What is even more interesting sits in the Network tab under Hosts Involved. There you find a list of IP addresses (192.168.2.101, 192.168.2.255, and 192.168.2.100), which are the Guest OS's IP, the network broadcast IP, and vmnet0's IP, respectively. Then, what about the public IP 208.115.230.76? This is the IP the malware uses to contact its server, which makes the analysis more interesting.

Having learned that the malware tries to make contact outside the host, you may be wondering how it communicates with that server. For that, we can look at the contents of the dump.pcap file. To open dump.pcap, you need a packet analyzer; in this article, we use the Wireshark packet analyzer. Make sure you have Wireshark installed on your host OS, and then open the dump.pcap file with it to inspect the network activities of the malware (a command-line alternative is sketched after the summary below).

Summary

In this article, you learned how to submit malware samples to Cuckoo Sandbox, illustrated with the submission of a malicious MS Office Word file.
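As mentioned above, if you prefer to triage dump.pcap from the command line before opening Wireshark, the following minimal sketch tallies the hosts involved in the capture, similar to the report's Hosts Involved list. It relies on the third-party scapy library (an assumption; install it with pip install scapy), and the pcap path is a placeholder for your own task folder.

#!/usr/bin/env python3
"""Summarize which hosts appear in a Cuckoo dump.pcap."""
from collections import Counter

from scapy.all import IP, rdpcap  # third-party: pip install scapy

PCAP_PATH = "storage/analyses/7/dump.pcap"  # adjust to your task ID

talkers = Counter()
for pkt in rdpcap(PCAP_PATH):
    if IP in pkt:
        # Count both endpoints of every IP packet.
        talkers[pkt[IP].src] += 1
        talkers[pkt[IP].dst] += 1

for host, count in talkers.most_common():
    print("%-16s %6d packets" % (host, count))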


Why is iOS 12 a top choice for app developers when it comes to security

Guest Contributor
17 Dec 2019
6 min read
When it comes to mobile operating systems, iOS 12 is generally considered to be one of the most secure, if not the leader, in mobile security. It's now a little more than a year old, and its features may be a bit overshadowed by the launch of iOS 13. Still, a considerable number of devices run iOS 12, and developers should know about its security features.

Further Reading

If you want to build iOS 12 applications from scratch with the latest Swift 4.2 language and Xcode 10, explore our book iOS 12 Programming for Beginners by Craig Clayton. For beginners, this book starts by introducing you to iOS development as you learn Xcode 10 and Swift 4.2. You'll also study advanced iOS design topics, such as gestures and animations. The book also details new iOS 12 features, such as the latest in notifications, custom-UI notifications, maps, and the recent additions to SiriKit.

Below are the most prominent changes iOS 12 made in terms of security. Based on these changes, app developers can take advantage of several safety features if they want to build secure mobile apps for devices running this OS.

Major security features in iOS 12

iOS 12's biggest security upgrades were primarily outright new features. In general, these changes reflected a pivot towards privacy, i.e., giving users more control over how their data can be collected and used, as well as towards better password and device security.

- Default updating: Automatic software updates are now turned on by default. This is good news for developers: if they need to push an update that patches a major security flaw, most users will update to the more secure version of the app automatically. Users are also likely to have the most secure versions of first-party apps and of iOS 12 itself.
- Password auditing: iOS 12's password auditing tools let users know when they've used the same password more than twice, and devices themselves now encourage users to create strong and secure passwords when logging into their apps. The OS keeps a record of all passwords a user creates and stores them in iCloud. While this may not sound particularly secure, especially considering the iCloud security flaw discovered last year, all these passwords are encrypted with AES-256.
- USB connection: If a user hasn't unlocked a device running iOS 12 in more than an hour, USB devices won't be able to connect.
- Safari upgrades: The mobile version of Safari will now, by default, prevent websites from using tracking cookies without explicit user permission.
- 2FA integration: iOS 12 offers better native integration with two-factor authentication (2FA). If an app uses 2FA and sends a security code to a user's phone over text, iOS 12 can autofill the security code field for the user. This may be a good reason for developers to consider implementing 2FA functionality if their apps don't already support it.

Improvements in iOS 12 specific to app developers

Other changes in iOS 12 were more subtle to end users but more relevant to app developers.

- Automated password generation: Since iOS 11, developers have been able to label their password and username fields, allowing users to automatically populate these fields with saved passwords and usernames for a specific app or Safari webpage. With iOS 12's new functionality, users can have iOS 12 generate a unique, strong password that fills the password field once prompted by an app.
- In-house business app development: Apple now supports the development of in-house business apps.
Businesses that partner with Apple through the Apple Developer Enterprise Program can develop apps that work only on specific, permitted devices.

- Sandboxed apps: By default, all third-party apps are now sandboxed and cannot directly access files modified by other apps. If an app needs to alter files outside of its specific home directory, which is randomly assigned by iOS 12 on install, it will do so through iOS. The same is true for all system files and resources. If an app needs to run a background process, it can do so only through system-provided APIs.
- Content sharing: Apps created by the same developer can share content, like user preferences and stored data, with each other when configured to be part of an App Group.
- App frameworks: New software development frameworks like HomeKit are now available to developers working with iOS 12. HomeKit allows developers to create apps that configure or otherwise communicate with smart home appliances and IoT devices. Likewise, SiriKit lets developers update their apps to work with user requests that originate from Siri and Maps. Editor's tip: To learn more about SiriKit, you can go through Chapter 24 of the book iOS 12 Programming for Beginners by Packt Publishing.
- Handoff: iOS 12's Handoff feature allows developers to design apps and websites so that users can use an app on one device, then seamlessly transfer their activity to another. The feature will be useful for developers working on apps that also have web versions.

App Store review guideline updates with iOS 12

Along with the launch of iOS 12 came some changes to the App Store review guidelines. App developers will need to be aware of these if they want to continue developing programs for iOS devices. Apple now limits the amount of data developers can collect from users' address books, and how apps are allowed to use this data. This doesn't bar developers from using an iPhone's address book to add social functionality to their apps. Developers can still scan a user's contact list to allow users to send invites or to link users up with friends who also use a specific app. Developers, however, can't maintain and transfer databases of user address information. Apple also banned the selling of user info to third parties. Some tech analysts consider this a response to the Cambridge Analytica scandal of last year, as well as growing discontent over how large companies were collecting and using user data.

Depending on how a developer plans on using user data, these guidelines may not bring about huge changes. However, app designers may want to review what data collection is allowed and how they can use that data. Over time, iOS security updates have trended towards giving users more control over their data, giving apps less control over the system, and giving developers more APIs for adding specific functionality. Following that trend, iOS 12 is built with user security in mind. For developers, implementing security features will be easier than it has been in the past, and they can also feel more confident that the devices accessing their app are secure. Some of these changes make apps more secure for developers, like the addition of password auditing and better 2FA authentication. Others, like app sandboxing and the updates to the App Store review guidelines, may require more planning from app developers than Apple has asked for in the past.
To start building iOS 12 applications of your own with Xcode 10 and Swift 4.2, the building blocks of iOS development, read the book iOS 12 Programming for Beginners by Packt Publishing.

Author Bio

Kayla Matthews writes about big data, cybersecurity, and technology. You can find her work on The Week, Information Age, KDnuggets and CloudTweaks, or over at ProductivityBytes.com.

Google released a paper showing how it’s fighting disinformation on its platforms

Prasad Ramesh
26 Feb 2019
5 min read
Last Saturday, Google presented a paper at the Munich Security Conference titled How Google Fights Disinformation. In the paper, they explain what steps they're taking against disinformation and detail their strategy for their platforms: Google Search, News, YouTube, and Google Ads. We take a look at the key strategies Google is employing against disinformation.

Disinformation has become widespread in recent years, and it directly affects Google's mission of organizing the world's information and making it accessible. Disinformation, misinformation, and fake news are attempts by acting parties to mislead people into believing things that aren't true by spreading such content over the internet. More precisely, disinformation is a deliberate attempt to mislead, where the creator knows the information is false; misinformation is where the creator has their facts wrong and spreads wrong information unintentionally. The motivations behind it can be financial, political, or just entertainment (trolls). Motivations can overlap with the content produced; moreover, the disinformation could even be for a good cause, making the fight against fake news very complex. A common solution for all platforms is not possible, as different platforms pose different challenges, and making standards that exercise deep deliberation for individual cases is not practical either.

There are three main principles that Google outlines to combat disinformation, shown as follows.

#1 Make quality content count

Google products sort through a lot of information to display the most useful content first. They aim to deliver quality content and legitimate commercial messages rather than rumors. While the content is different on different Google platforms, the principles are similar: organize information by ranking algorithms, with the algorithms aimed at ensuring that the information benefits users, as measured by user testing.

#2 Counter malicious actors

Algorithms cannot determine whether a piece of content is true or false based on current events; neither can they determine the true intent of the content's creator. For this, Google products have policies that prohibit certain behaviors, like misrepresenting ownership of content. Certain users try to get a better ranking through spamming; the same behavior is shown by people who engage in spreading disinformation. Google has algorithms in place that can reduce such content, supported by human reviews for further filtering.

#3 Give users more choices

Giving users different perspectives is important before they choose a link and proceed to read content or view a video. Hence, Google provides multiple links for a searched topic. Google Search and other products now have additional UI elements that segregate information into different sections for an organized view of content. They also have a feedback button in their services via which users can submit their thoughts.

Partnership with external experts

Google cannot do this alone, so they have partnered with news organizations to support the creation of quality content that can uproot disinformation. They mention in the paper: "In March 2018, we launched the Google News Initiative (GNI) to help journalism thrive in the digital age. With a $300 million commitment over 3 years, the initiative aims to elevate and strengthen quality journalism."

Preparing for the future

People who create fake news will always try new methods to propagate it.
Google is investing in research and development to counter this, especially ahead of elections. They intend to stay ahead of malicious actors who may use new technologies or tactics, which can include deepfakes. They want to ensure that information such as polling booth locations remains easily available, guard against phishing, and mitigate DDoS attacks on political websites.

YouTube and conspiracy theories

Recently, there have been a lot of conspiracy theories floating around on YouTube. In the paper, they say: "YouTube has been developing products that directly address a core vulnerability involving the spread of disinformation in the immediate aftermath of a breaking news event." Making a legitimate video with correct facts takes time, while disinformation can be created quickly to spread panic, negativity, and so on.

In conclusion, they note that "fighting disinformation is not a straightforward endeavor. Disinformation and misinformation can take many shapes, manifest differently in different products, and raise significant challenges when it comes to balancing risks of harm to good faith, free expression, with the imperative to serve users with information they can trust."

Public reactions

People think that only the platforms themselves can take action against disinformation propaganda.

https://twitter.com/halhod/status/1097640819102691328

Users question Google's efforts in cases where a legitimate website is ranked below one spreading disinformation, citing a Bitcoin example.

https://twitter.com/PilotDaveCrypto/status/1097395466734653440

Some speculate that corporate companies should address their own ranking biases first:

https://twitter.com/PaulJayzilla/status/1097822412815646721
https://twitter.com/Darin_T80/status/1097203275483426816

To read the complete research paper with Google product-specific details on fighting disinformation, head over to the Google Blog.

Fake news is a danger to democracy. These researchers are using deep learning to model fake news to understand its impact on elections.
Defending Democracy Program: How Microsoft is taking steps to curb increasing cybersecurity threats to democracy
Is Anti-trust regulation coming to Facebook following fake news inquiry made by a global panel in the House of Commons, UK?


Endpoint protection, hardening, and containment strategies for ransomware attack protection: CISA-recommended FireEye report highlights

Savia Lobo
12 Sep 2019
8 min read
Last week, the Cybersecurity and Infrastructure Security Agency (CISA) shared some strategies with users and organizations to prevent, mitigate, and recover from ransomware. They said, "The Cybersecurity and Infrastructure Security Agency (CISA) has observed an increase in ransomware attacks across the Nation. Helping organizations protect themselves from ransomware is a chief priority for CISA." They have also advised that those attacked by ransomware should report immediately to CISA, a local FBI Field Office, or a Secret Service Field Office.

Of the three resources shared, the first two cover general awareness of what ransomware is, why it is a major threat, mitigations, and much more. The third resource is a FireEye report on ransomware protection and containment strategies.

Also Read: Vulnerabilities in the Picture Transfer Protocol (PTP) allows researchers to inject ransomware in Canon's DSLR camera

CISA INSIGHTS and best practices to prevent ransomware

As part of their first "CISA INSIGHTS" product, CISA has put down three simple recommendations organizations can follow to manage their cybersecurity risk. CISA advises users to take precautionary steps such as backing up the entire system offline, keeping systems updated and patched, updating security solutions, and much more. Those affected by ransomware should contact CISA or the FBI immediately, work with an experienced advisor to help recover from the attack, isolate the infected systems and phase their return to operations, and so on.

Further, CISA also tells users to practice good cyber hygiene: back up, update, whitelist apps, limit privilege, and use multi-factor authentication. Users should also develop containment strategies that make it difficult for bad actors to extract information, review disaster recovery procedures, validate goals with executives, and much more.

The CISA team has suggested certain best practices that organizations should employ to stay safe from a ransomware attack:

- Restrict permissions to install and run software applications, and apply the principle of "least privilege" to all systems and services, thus limiting ransomware's ability to spread.
- Use application whitelisting to allow only approved programs to run on a network.
- Configure all firewalls to block access to known malicious IP addresses.
- Enable strong spam filters to prevent phishing emails from reaching end users, and authenticate inbound email to prevent email spoofing.
- Scan all incoming and outgoing emails to detect threats and filter executable files away from end users.

Read the entire CISA INSIGHTS to know more about the various ransomware outbreak strategies in detail.

Also Read: 'City Power Johannesburg' hit by a ransomware attack that encrypted all its databases, applications and network

FireEye report on Ransomware Protection and Containment strategies

As a third resource, CISA shared a FireEye report titled "Ransomware Protection and Containment Strategies: Practical Guidance for Endpoint Protection, Hardening, and Containment". In this whitepaper, FireEye discusses steps organizations can proactively take to harden their environment and prevent the downstream impact of a ransomware event.
These recommendations can also help organizations prioritize the most important steps required to contain and minimize the impact of a ransomware event after it occurs.

The FireEye report points out that ransomware can be deployed across an environment in two ways:

- Manual propagation, by a threat actor who has penetrated an environment and holds administrator-level privileges broadly across it, manually running encryptors on targeted systems through Windows batch files, Microsoft Group Policy Objects, and the existing software deployment tools used by the victim organization.
- Automated propagation, where credentials or Windows tokens are extracted directly from disk or memory to build trust relationships between systems through Windows Management Instrumentation, SMB, or PsExec, binding systems together and executing payloads. Attackers also automate brute-force attacks and exploit unpatched vulnerabilities such as BlueKeep and EternalBlue.

"While the scope of recommendations contained within this document is not all-encompassing, they represent the most practical controls for endpoint containment and protection from a ransomware outbreak," FireEye researchers wrote. To combat these two deployment techniques, the FireEye researchers have suggested two enforcement measures that can limit the capability of a ransomware or malware variant to impact a large scope of systems within an environment.

The FireEye report covers several technical recommendations to help organizations mitigate the risk of and contain ransomware events, some of which are discussed below.

RDP Hardening

Remote Desktop Protocol (RDP) is a common method used by malicious actors to remotely connect to systems and laterally move from the perimeter onto a larger scope of systems for deploying malware. Organizations should proactively scan their public IP address ranges to identify systems with RDP (TCP/3389) and other protocols (SMB, TCP/445) open to the Internet; a minimal scanning sketch appears at the end of this section. RDP and SMB should not be directly exposed to ingress and egress access to/from the Internet. Other measures organizations can take include the following.

Enforcing Multi-Factor Authentication: Organizations can either integrate a third-party multi-factor authentication technology or leverage a Remote Desktop Gateway and Azure Multi-Factor Authentication Server using RADIUS.

Leveraging Network Level Authentication (NLA): Network Level Authentication provides an extra layer of pre-authentication before a connection is established. It is also useful for protecting against brute-force attacks, which mostly target open, internet-facing RDP servers.

Reducing the exposure of privileged and service accounts

For ransomware deployment throughout an environment, both privileged and service account credentials are commonly utilized for lateral movement and mass propagation. Without a thorough investigation, it may be difficult to determine the specific credentials being utilized by a ransomware variant for connectivity within an environment.

Privileged account and service account logon restrictions: Accounts with privileged access throughout an environment should not be used on standard workstations and laptops, but rather from designated systems (e.g., Privileged Access Workstations (PAWs)) that reside in restricted and protected VLANs and Tiers. Explicit privileged accounts should be defined for each Tier, and only utilized within the designated Tier.
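To make the scanning recommendation under RDP Hardening concrete, here is a minimal Python sketch. It is our illustration, not taken from the FireEye report; the target addresses and timeout are placeholder assumptions, and you should only scan ranges you are authorized to test.

#!/usr/bin/env python3
"""Check a list of addresses for Internet-exposed RDP/SMB ports."""
import socket

# Assumptions: replace with the public ranges you are authorized to scan.
TARGETS = ["203.0.113.10", "203.0.113.11"]
PORTS = {3389: "RDP", 445: "SMB"}
TIMEOUT = 2.0  # seconds per connection attempt

for host in TARGETS:
    for port, name in PORTS.items():
        sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        sock.settimeout(TIMEOUT)
        # connect_ex returns 0 when the TCP handshake succeeds.
        if sock.connect_ex((host, port)) == 0:
            print("%s: %s (TCP/%d) is reachable" % (host, name, port))
        sock.close()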
The recommendations for restricting the scope of access for privileged accounts are based upon Microsoft's guidance for securing privileged access. As a quick containment measure, consider blocking any accounts with privileged access from being able to log in (remotely or locally) to standard workstations, laptops, and common access servers (e.g., virtualized desktop infrastructure). If a service account is only required on a single endpoint to run a specific service, the account can be further restricted to permit its usage only on a predefined list of endpoints.

Protected Users Security Group

With the "Protected Users" security group for privileged accounts, an organization can minimize various risk factors and common exploitation methods that expose privileged accounts on endpoints. Starting with Microsoft Windows 8.1 and Microsoft Windows Server 2012 R2 (and above), the "Protected Users" security group was introduced to manage credential exposure within an environment. Members of this group automatically have specific protections applied to their accounts, including:

- The Kerberos ticket-granting ticket (TGT) expires after 4 hours, rather than the normal 10-hour default setting.
- No NTLM hash for the account is stored in LSASS, since only Kerberos authentication is used (NTLM authentication is disabled for the account).
- Cached credentials are blocked; a Domain Controller must be available to authenticate the account.
- WDigest authentication is disabled for the account, regardless of an endpoint's applied policy settings.
- DES and RC4 can't be used for Kerberos pre-authentication (Server 2012 R2 or higher); Kerberos with AES encryption is enforced instead.
- Accounts cannot be used for either constrained or unconstrained delegation (equivalent to enforcing the "Account is sensitive and cannot be delegated" setting in Active Directory Users and Computers).

Cleartext password protections

Organizations should also minimize the exposure of credentials and tokens in memory on endpoints. On older Windows operating systems, cleartext passwords are stored in memory (LSASS), primarily to support WDigest authentication. WDigest should be explicitly disabled on all Windows endpoints where it is not disabled by default. WDigest authentication is disabled by default in Windows 8.1+ and Windows Server 2012 R2+. Starting with Windows 7 and Windows Server 2008 R2, after installing Microsoft Security Advisory KB2871997, WDigest authentication can be configured either by modifying the registry or by using the "Microsoft Security Guide" Group Policy template from the Microsoft Security Compliance Toolkit (a registry sketch appears at the end of this article).

To implement these and other ransomware protection and containment strategies, read the FireEye report.

Other interesting news in Cybersecurity

Wikipedia hit by massive DDoS (Distributed Denial of Service) attack; goes offline in many countries
Exim patches a major security bug found in all versions that left millions of Exim servers vulnerable to security attacks
CircleCI reports of a security breach and malicious database in a third-party vendor account
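As promised above, here is how the WDigest registry change might be scripted. This sketch is ours, not from the FireEye report; it assumes Python is available on the endpoint, must run with administrative privileges, and in practice you would roll the setting out through Group Policy rather than ad hoc scripts.

#!/usr/bin/env python3
"""Disable WDigest cleartext credential caching via the registry (Windows only)."""
import winreg  # standard library on Windows

KEY_PATH = r"SYSTEM\CurrentControlSet\Control\SecurityProviders\WDigest"

# Requires administrative privileges on the endpoint being hardened.
key = winreg.CreateKeyEx(winreg.HKEY_LOCAL_MACHINE, KEY_PATH, 0, winreg.KEY_SET_VALUE)
# UseLogonCredential = 0 tells LSASS not to keep cleartext passwords for WDigest.
winreg.SetValueEx(key, "UseLogonCredential", 0, winreg.REG_DWORD, 0)
winreg.CloseKey(key)
print("WDigest UseLogonCredential set to 0")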