
How-To Tutorials - Security

174 Articles

Did you know Facebook shares the data you share with them for ‘security’ reasons with advertisers?

Natasha Mathur
28 Sep 2018
5 min read
Facebook is constantly under the spotlight these days when it comes to controversies regarding users' data and privacy. A new research paper published by Princeton University researchers states that Facebook shares the contact information you handed over for security purposes with its advertisers. The study was first brought to light by Gizmodo writer Kashmir Hill.

"Facebook is not content to use the contact information you willingly put into your Facebook profile for advertising. It is also using contact information you handed over for security purposes and contact information you didn't hand over at all, but that was collected from other people's contact books, a hidden layer of details Facebook has about you that I've come to call 'shadow contact information'," writes Hill.

Recently, Facebook introduced a new feature called custom audiences. Unlike traditional audiences, the advertiser is allowed to target specific users. To do so, the advertiser uploads users' PII (personally identifiable information) to Facebook. Once the upload is done, Facebook matches the given PII against platform users, builds an audience that comprises the matched users, and allows the advertiser to further track that specific audience. Essentially, with Facebook the holy grail of marketing, targeting an audience of one, is practically possible; never mind whether that audience wanted it or not.

In today's world, social media platforms frequently collect various kinds of personally identifying information (PII), including phone numbers, email addresses, names, and dates of birth. The majority of this PII represents extremely accurate, unique, and verified user data. Because of this, these services have an incentive to use this personal information for other purposes, one such scenario being providing advertisers with more accurate audience targeting.

The paper, titled 'Investigating sources of PII used in Facebook's targeted advertising', was written by Giridhari Venkatadri, Elena Lucherini, Piotr Sapiezynski, and Alan Mislove. "In this paper, we focus on Facebook and investigate the sources of PII used for its PII-based targeted advertising feature. We develop a novel technique that uses Facebook's advertiser interface to check whether a given piece of PII can be used to target some Facebook user and use this technique to study how Facebook's advertising service obtains users' PII," reads the paper.

The researchers developed a novel methodology that involved studying how Facebook obtains the PII it uses to provide custom audiences to advertisers. "We test whether PII that Facebook obtains through a variety of methods (e.g., directly from the user, from two-factor authentication services, etc.) is used for targeted advertising, whether any such use is clearly disclosed to users, and whether controls are provided to users to help them limit such use," reads the paper. The paper uses audience size estimates to study which sources of PII are used for PII-based targeted advertising. The researchers used this methodology to investigate which sources of PII were actually used by Facebook for its PII-based targeted advertising platform. They also examined what information gets disclosed to users and what control users have over their PII.

What sources of PII are actually being used by Facebook?

The researchers found that Facebook allows its users to add contact information (email addresses and phone numbers) on their profiles. While any arbitrary email address or phone number can be added, it is not displayed to other users unless verified (through a confirmation email or confirmation SMS message, respectively). This is the most direct and explicit way of providing PII to advertisers.

The researchers then moved on to examine whether PII provided by users for security purposes, such as two-factor authentication (2FA) or login alerts, is being used for targeted advertising. They added and verified a phone number for 2FA on one of the authors' accounts. The added phone number became targetable after 22 days, which showed that a phone number provided for 2FA was indeed used for PII-based advertising, regardless of the privacy controls set on the account.

What control do users have over PII?

Facebook lets users choose who can see each piece of PII listed on their profiles; the current list of possible general settings is Public, Friends, and Only Me. Users can also restrict the set of users who can search for them using their email address or phone number, with the following options: Everyone, Friends of Friends, and Friends. Facebook provides users with a list of advertisers who have included them in a custom audience using their contact information, and users can opt out of receiving ads from individual advertisers listed there. However, information about which PII is used by advertisers is not disclosed.

What information about how Facebook uses PII gets disclosed to the users?

When adding a mobile phone number directly to one's Facebook profile, no information about the uses of that number is directly disclosed; such information is only disclosed when adding a number from the Facebook website. As per the research results, there is very little disclosure to users, often in the form of generic statements that do not refer to the uses of the particular PII being collected or to the fact that it may be used to allow advertisers to target users.

"Our paper highlights the need to further study the sources of PII used for advertising, and shows that more disclosure and transparency needs to be provided to the user," say the researchers in the paper. For more information, check out the official research paper.

Ex-employee on contract sues Facebook for not protecting content moderators from mental trauma
How far will Facebook go to fix what it broke: Democracy, Trust, Reality
Mark Zuckerberg publishes Facebook manifesto for safeguarding against political interference

Social media platforms, Twitter and Gab.com, accused of facilitating recent domestic terrorism in the U.S.

Savia Lobo
29 Oct 2018
6 min read
Updated on 30th Oct 2018: Following PayPal, two additional platforms, Stripe and Joyent, have suspended Gab's accounts on their respective platforms.

Social media platforms Twitter and Gab.com were at the center of two shocking stories of domestic terrorism. Both platforms were used by the men responsible for the mail bomb attacks and for the shooting at Pittsburgh's Tree of Life synagogue to send cryptic threats. Following the events, both platforms have been accused of failing to act appropriately, both in terms of their internal policies and in their ability to coordinate with law enforcement to deal with the threats.

Twitter fails to recognize a bomb attacker

Mail bomber sent a threat first on Twitter

Twitter neglected an abuse report filed by a Twitter user against the mail bombing suspect. Rochelle Ritchie, a former congressional press secretary, tweeted that she had received threats from Cesar Altieri Sayoc via Twitter. Sayoc was later arrested and charged in connection with mailing at least 13 suspected explosive devices to prominent Democrats, the staff at CNN, and other U.S. officials, as Bloomberg reported.

On October 11, Ritchie received a tweet from an account using the handle @hardrock2016. The message was bizarre, saying, "So you like make threats. We Unconquered Seminole Tribe will answer your threats. We have nice silent Air boat ride for u here on our land. We will see you 4 sure. Hug your loved ones real close every time you leave your home." Ritchie immediately reported this to Twitter as abuse. Twitter responded that the tweet did not qualify as a "violation of the Twitter rules against abusive behavior." The tweet remained visible on Twitter until Sayoc was arrested on Friday.

Ritchie tweeted again on Friday, "Hey @Twitter remember when I reported the guy who was making threats towards me after my appearance on @FoxNews and you guys sent back a bs response about how you didn't find it that serious. Well, guess what it's the guy who has been sending #bombs to high profile politicians!!!!"

Later in the day, Twitter apologized in reply to Ritchie's tweet, saying it should have taken a different action when Ritchie had first approached them. Twitter's statement said, "The Tweet clearly violated our rules and should have been removed. We are deeply sorry for that error."

Twitter has been keen to assure users that it is working hard to combat harassment and abuse on its platform. But many users disagree. https://twitter.com/Luvvie/status/1055889940150607872 Even the apology sent to Ritchie looks a lot like the company trying to brush the matter under the carpet.

This wasn't the first time Sayoc used Twitter to post his sentiments. On September 18, Sayoc tweeted a picture of former Vice President Joe Biden's home and wrote, "Hug your loved son, Niece, wife family real close everytime U walk out your home." On September 20, in response to a tweet from President Trump, Sayoc posted a video of himself at what appears to be a Donald Trump rally. The text of the tweet threatened former Vice President Joe Biden and former attorney general Eric Holder. Later that week, both were targeted by improvised explosive devices. Twitter suspended Sayoc's accounts late Friday afternoon last week.

Shooter hinted at Pittsburgh shooting on Gab.com

"It's a very horrific crime scene; one of the worst that I've seen," Public Safety Director Wendell Hissrich said at a press conference.

Gab.com, which describes itself as "The Home Of Free Speech Online", was allegedly linked to the shooting at a synagogue in Pittsburgh on Saturday, 27th October 2018. The 46-year-old suspected shooter, Robert Bowers, posted on his Gab page, "jews are the children of satan." He also reportedly shouted "all Jews must die" before he opened fire at the Tree of Life synagogue in Pittsburgh's Squirrel Hill neighborhood. According to The Hill's report, "Gab.com rejected claims it was responsible for the shooting after it confirmed that the name identified in media reports as the suspect matched the name on an account on its platform."

PayPal, GoDaddy suspend Gab.com for promoting hate speech

Following the Pittsburgh shooting incident, PayPal has banned Gab.com. A PayPal spokesperson confirmed the ban to The Verge, citing hate speech as the reason for the action: "The company is diligent in performing reviews and taking account actions. When a site is explicitly allowing the perpetuation of hate, violence or discriminatory intolerance, we take immediate and decisive action." https://twitter.com/getongab/status/1056283312522637312 Similarly, GoDaddy, a domain hosting provider, has threatened to suspend the Gab.com domain if it fails to transfer to a new provider. Currently, Gab is inaccessible through the GoDaddy website.

Gab.com denies enabling hate speech

Denying the claims, Gab.com said that it has zero tolerance for terrorism and violence. "Gab unequivocally disavows and condemns all acts of terrorism and violence," the site said in a statement. "This has always been our policy. We are saddened and disgusted by the news of violence in Pittsburgh and are keeping the families and friends of all victims in our thoughts and prayers." Gab was quick to respond to the accusation, taking swift and proactive action to contact law enforcement. It first backed up all user data from the account and then proceeded to suspend the account. "We then contacted the FBI and made them aware of this account and the user data in our possession. We are ready and willing to work with law enforcement to see to it that justice is served," Gab said. Gab.com also stated that the shooter had accounts on other social media platforms, including Facebook, which has not yet confirmed the deactivation of the account. Federal investigators are reportedly treating the attack as a potential hate crime.

This incident is a stark reminder of how online hate can easily escalate into the real world. It also sheds light on how easy it is to misuse any social media platform to post threats, some of which can also be hoaxes. Most importantly, it underscores how ill-equipped social media platforms are, not just at identifying such threats, but also at prioritizing content manually flagged by users and at alerting the concerned authorities in time to avert tragedies such as this. To gain more insights on these two scandals, head over to CNN and The Hill.

5 nation joint Activity Alert Report finds most threat actors use publicly available tools for cyber attacks
Twitter on the GDPR radar for refusing to provide a user his data due to 'disproportionate effort' involved
90% Google Play apps contain third-party trackers, share user data with Alphabet, Facebook, Twitter, etc: Oxford University Study

vCloud Networks

Packt
13 Sep 2013
14 min read
(For more resources related to this topic, see here.)

Basics

Network virtualization is what makes vCloud Director such an awesome tool. However, before we go full out in the next article, we need to set up network virtualization, and this is what we will be focusing on here. When we talk about isolated networks, we are talking about vCloud Director making use of different methods of Layer 3 network encapsulation (OSI/ISO model). Basically, it is the same concept as was introduced with VLANs. VLANs split up the network communication on physical network cables into different, totally isolated communication streams. vCloud makes use of these isolated networks to create isolated Org and vApp networks.

vCloud Director has three different network items:

An external network is a network that exists outside the vCloud, for example, a production network. It is basically a PortGroup in vSphere that is used in vCloud to connect to the outside world. An external network can be connected to multiple organization networks. External networks are not virtualized and are based on existing PortGroups on a vSwitch or Distributed vSwitch.

An organization network (Org Net) is a network that exists only inside one organization. You can have multiple Org Nets in an organization. Organization networks come in three different shapes:

Isolated: An isolated Org Net exists only in this organization and is not connected to an external network; however, it can be connected to vApp networks or VMs. This network type uses network virtualization and its own network settings.

Routed Network (Edge Gateway): A routed Org Net connects to an existing Edge device. An Edge Gateway allows defining firewall and NAT rules, as well as VPN connections and load balancing functionality. Routed gateways connect external networks to vApp networks and/or VMs. This network type uses virtualized networks and its own network settings.

Directly connected: These Org Nets are an extension of an external network into the organization. They directly connect external networks to the vApp networks or VMs. These networks do NOT use network virtualization and they make use of the network settings of an external network.

A vApp network is a virtualized network that only exists inside a vApp. You can have multiple vApp networks inside one vApp. A vApp network can connect to VMs and to Org networks. It has its own network settings. When connecting a vApp network to an Org network, you can create a router between the vApp and the Org network that lets you define DHCP, firewall, NAT rules, and static routing.

To create isolated networks, vCloud Director uses network pools. Network pools are collections of VLANs, PortGroups, and networks that use L2-in-L3 encapsulation. The content of these pools can be used by Org and vApp networks for network virtualization.

Network Pools

There are four kinds of network pools that can be created:

VXLAN: VXLAN networks are Layer 2 networks that are encapsulated in Layer 3 packages. VMware calls this Software Defined Networking (SDN). VXLANs are automatically created by vCD; however, they don't work out of the box and require some extra configuration in vCloud Network and Security (see later).

Network isolation-backed: These are basically the same as VXLANs; however, they work out of the box and use MAC-in-MAC encapsulation. The difference is that VXLANs can transcend routers, while network isolation-backed networks can't.

vSphere PortGroup-backed: vCD will use pre-created PortGroups to build the vApp or organization networks. You need to pre-provision one PortGroup for every vApp/Org network you would like to use.

VLAN-backed: vCD will use a pool of VLAN numbers to automatically provision PortGroups on demand; however, you still need to configure the VLAN trunking. You will need to reserve one VLAN for every vApp/Org network you would like to use.

VXLANs and network isolation networks solve the problems of pre-provisioning and reserving a multitude of VLANs, which makes them extremely important. However, using PortGroup or VLAN network pools can have additional benefits that we will explore later.

Types of vCloud networks

vCloud Director has basically three different network items. An external network is basically a PortGroup in vSphere that is imported into vCloud. An Org network is an isolated network that exists only in an organization. The same is true for vApp networks; they exist only in vApps. In the picture above you can see all possible connections. Let's play through the scenarios and see how one can use them.

Isolated vApp Network

An isolated vApp network exists only inside a vApp. They are useful if one needs to test how VMs behave in a network or to test using an IP range that is already in use (for example, production). The downside is that they are isolated, meaning it is hard to get information or software in and out. Have a look at the recipe for RDP (or SSH) forwarding into an isolated vApp to find some answers to this problem.

VMs directly connected to an External Network

VMs inside a vApp are connected to a direct Org Net, meaning they will be able to get IPs from the external network pool. Typically these VMs are used for production, meaning that customers choose vCloud for fast provisioning of predefined templates. As vCloud manages the IPs for a given IP range, it can be quite easy to fast provision a VM.

vApp network connected via vApp Router to an External Network

VMs are connected to a vApp network that has a vApp Router defined as its gateway. The gateway connects to a direct Org Net, meaning that the gateway will automatically be given an IP from the external network pool. These configurations come in handy to reduce the amount of "physical" networking that has to be done. The vApp Router can act as a router with defined firewall rules, and it can do SNAT and DNAT as well as define static routing. So instead of using up a "physical" VLAN or subnet, one can hide away applications this way. As an added benefit, these applications can be used as templates for fast deployment.

VMs directly connected to an isolated Org Net

VMs are connected directly to an isolated Org Net. Connecting VMs directly to an isolated network normally only makes sense if there is more than one vApp/VM connected to the Org Net. This is used as an extension of the isolated vApp concept. Say you need to repeatedly test complex applications that require certain infrastructure such as Active Directory, DHCP, DNS, database, or Exchange servers. Instead of deploying large isolated vApps that contain all of these, you could deploy them in one vApp and connect them via an isolated Org Net directly to the vApp that contains your testing VMs. This makes it possible to reuse this base infrastructure. By using sharing, you can even hide away the infrastructure vApp from your users.

vApp connected via vApp Router to an isolated Org Net

VMs are connected to a vApp network that has a vApp Router as its gateway. The vApp Router automatically gets its IP from the Org Net pool. This is basically a variant of the previous idea. A test vApp or an infrastructure vApp can be packaged this way and be made ready for fast deployment.

VMs connected directly to an Edge

VMs are directly connected to the Edge Org Net and get their IP from the Org Net pool. Their gateway is the Edge device that connects them to the external networks through the Edge firewall. A very typical setup is using the Edge load balancing feature to load balance VMs out of a vApp via the Edge. Another one is that the organization is secured using the Edge Gateway against other organizations that use the same external network. This is mostly the case if the external network is the Internet and each organization is an external customer.

vApp connected to an Edge via vApp Router

VMs are connected to a vApp network that has the vApp Router as its gateway. The vApp Router will automatically get an IP from the Org Net, which has the Edge as its gateway. This is a more complicated variant of the above scenario, allowing customers to package their VMs, secure them against other vApps or VMs, or subdivide their allocated networks.

IP Management

Let's have a look at IP management with vCloud. vCloud knows three different settings for IP management of VMs.

DHCP: You need to provide a DHCP server; vCloud doesn't automatically create one. However, a vApp Router or an Edge can provide one.

Static – IP Pool: The IP for the VM comes from the static IP pool of the network it is connected to. In addition to that, DNS and the domain suffix will be written to the VM.

Static – Manual: The IP can be defined on the spot; however, it must be in the network defined by the gateway and the network mask of the network the VM is connected to. In addition to that, DNS and the domain suffix will be written to the VM.

All these settings require Guest Customization to be effective. If no Guest Customization is selected, it doesn't work and whatever the VM was configured with as a template will be used.
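To make the static IP pool settings above more concrete, here is a rough, hypothetical sketch of how an isolated Org network with its own IP scope might be described through vCloud Director's REST API. The element names follow the general shape of the vCloud API, but exact names and the namespace version vary between releases, and all addresses here are invented for illustration.

    <!-- Hypothetical isolated Org network with a static IP pool.
         Gateway, netmask, DNS, and domain suffix mirror the settings
         discussed above; element names may differ per API version. -->
    <OrgVdcNetwork name="IsolatedOrgNet-Example" xmlns="http://www.vmware.com/vcloud/v1.5">
      <Description>Isolated Org network with a static IP pool</Description>
      <Configuration>
        <IpScopes>
          <IpScope>
            <IsInherited>false</IsInherited>
            <Gateway>192.168.10.1</Gateway>
            <Netmask>255.255.255.0</Netmask>
            <Dns1>192.168.10.1</Dns1>
            <DnsSuffix>lab.local</DnsSuffix>
            <IpRanges>
              <IpRange>
                <StartAddress>192.168.10.100</StartAddress>
                <EndAddress>192.168.10.199</EndAddress>
              </IpRange>
            </IpRanges>
          </IpScope>
        </IpScopes>
        <FenceMode>isolated</FenceMode>
      </Configuration>
      <IsShared>false</IsShared>
    </OrgVdcNetwork>

A VM set to Static – IP Pool on such a network would receive an address from the IpRange above, and the DNS and domain suffix values are what Guest Customization writes into the VM.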
vSphere and vCloud vApps

One thing that needs to be said about vApps is that they actually come in two completely different versions: the vCenter vApp and the vCloud vApp.

The vSphere vApp concept was introduced in vSphere 4.0 as a container for VMs. In vSphere, a vApp is essentially a resource pool with some extras, such as a starting and stopping order and (if you configured it) a network IP allocation method. The idea is to have an entity of VMs that builds one unit. Such a vApp can then be exported or imported using the OVF format. A very good example of a vApp is VMware Operations Manager. It comes as a vApp in OVF format and contains not only the VMs but also the start-up sequence as well as some setup scripts. When the vApp is deployed for the first time, additional information such as network settings is requested and then applied. As a vSphere vApp is a resource pool, it can be configured so that it will only demand the resources that it is using; on the other hand, resource pool configuration is something that most people struggle with. A vSphere vApp is ONLY a resource pool; it is not automatically a folder in the Folder and Template view of vSphere, but it is shown there as a vApp.

The vCloud vApp is a very different concept; first of all, it is not a resource pool. The VMs of the vCloud vApp live in the OvDC resource pool. However, the vCloud vApp is automatically a folder in the Folder and Template view of vSphere. It is a construct that is created by vCloud; it consists of VMs, a start and stop sequence, and networks. The network part is one of the major differences (next to the resource pool). In vSphere, only network information, such as how IPs get assigned, and settings like gateway and DNS are given to the vApp; a vCloud vApp actually encapsulates networks. The vCloud vApp networks are full networks, meaning they contain the full information for a given network, including network settings and IP pools. For more details, see the last article. This information is kept when importing and exporting vCloud vApps. When I'm talking about vApps in the book, I will always mean vCloud vApps; if vCenter vApps feature, they will be written as vCenter vApps.

Datastores, profiles, and clusters

I probably don't have to explain what a datastore is, but here is a short intro just in case. A datastore is a VMware object that exists in ESXi. This object can be a hard disk that is attached to an ESXi server, an NFS or iSCSI mount on an ESXi host, or a Fibre Channel disk that is attached to an HBA on the ESXi server.

A Storage Profile is a container that contains one or more datastores. A Storage Profile doesn't have any intelligence implemented; it just groups the storage. However, it is extremely beneficial in vCloud. If you run out of storage on a datastore, you can just add another datastore to the same Storage Profile and you're back in business.

Datastore Clusters again are containers for datastores, but now there is intelligence included. A Datastore Cluster can use Storage DRS, which allows VMs to automatically use Storage vMotion to move from one datastore to another if the I/O latency is high or the storage is low. Depending on your storage backend system, this can be extremely useful.

vCloud Director doesn't know the difference between a Storage Profile and a Datastore Cluster. If you add a Datastore Cluster, vCloud will pick it up as a Storage Profile, which is not a problem at all. Be aware that Storage Profiles are part of the vSphere Enterprise Plus licensing. If you don't have Enterprise Plus, you won't get Storage Profiles, and the only thing you can do in vCloud is use the storage profile ANY, which doesn't contribute to productivity.

Thin Provisioning

Thin Provisioning means that the file that contains the virtual hard disk (.vmdk) is only as big as the amount of data written to the virtual hard disk. As an example, if you have a 40 GB hard disk attached to a Windows VM and have just installed Windows on it, you are using around 2 GB of the 40 GB disk. When using Thin Provisioning, only 2 GB will be written to the datastore, not 40 GB. If you don't use Thin Provisioning, the .vmdk file will be 40 GB in size. If your storage vendor's Storage APIs are integrated into your ESXi servers, Thin Provisioning may be offloaded to your storage backend, making it even faster.

Fast Provisioning

Fast Provisioning is similar to the linked clones that you may know from Lab Manager or VMware View. However, in vCloud they are a bit more intelligent than in the other products: in the other products, linked clones can NOT be deployed across different datastores, but in vCloud they can. Let's talk about how linked clones work. If you have a VM with a hard disk of 40 GB and you clone that VM, you would normally have to spend another 40 GB (not using Thin Provisioning). Using linked clones, you will not need another 40 GB, but less. What happens, in layman's terms, is that vCloud creates two snapshots of the original VM's hard disk. A snapshot contains only the differences between the original and the snapshot. The original hard disk (.vmdk file) is set to read-only, and the first snapshot is connected to the original VM, so that one can still work with the original VM. The second snapshot is used to create the new VM. Using snapshots makes deploying a VM with Fast Provisioning not only fast, but it also saves a lot of disk space.

The problem with this is that a snapshot must be on the same datastore as its source. So if you have a VM in one datastore, its linked clone cannot be in another. vCloud has solved that problem by deploying a Shadow VM. When you deploy a VM with Fast Provisioning onto a different datastore than its source, vCloud creates a full clone (a normal full copy) of the VM onto the new datastore and then creates a linked clone from the Shadow VM. If your storage vendor's Storage APIs are integrated into your ESXi servers, Fast Provisioning may be offloaded to your storage backend, making it faster. See also the recipe "Making NFS based datastores faster".

Summary

In this article, we saw vCloud networks, vSphere and vCloud vApps, and datastores, profiles, and clusters.

Resources for Article:

Further resources on this subject:
Windows 8 with VMware View [Article]
VMware View 5 Desktop Virtualization [Article]
Cloning and Snapshots in VMware Workstation [Article]

Starting Out with BackBox Linux

Packt
17 Feb 2014
11 min read
(For more resources related to this topic, see here.)

A flexible penetration testing distribution

BackBox Linux is a very young project designed for penetration testing, vulnerability assessment and management. The key focus in using BackBox is to provide an independent security testing platform that can be easily customized with increased performance and stability. BackBox uses a very light desktop manager called XFCE. It includes the most popular security auditing tools that are essential for penetration testers and security advisers. The suite of tools includes web application analysis, network analysis, stress tests, sniffing, forensic analysis, exploitation, documentation, and reporting.

The BackBox repository is hosted on Launchpad and is constantly updated to the latest stable version of its tools. Adding and developing new tools inside the distribution requires it to be compliant with the open source community and, particularly, the Debian Free Software Guidelines criteria.

IT security and penetration testing are dedicated sectors and quite new in the global market. There are a lot of Linux distributions dedicated to security, but if we do some research, we can see that only a couple of distributions are constantly updated. Many newly born projects stop at the first release without continuity, and very few of them are updated. BackBox is one of the new players in this field and, even though it is only a few years old, it has acquired an enormous user base and now holds second place in worldwide rankings. It is a lightweight, community-built penetration testing distribution capable of running live in USB mode or as a permanent installation. BackBox now operates on release 3.09 as of September 2013, with a significant increase in users, thus becoming a stable community. BackBox is also significantly used in the professional world.

BackBox is built on top of Ubuntu LTS, and the 3.09 release uses 12.04 as its core. The desktop manager environment is XFCE, and the ISO images are provided for 32-bit and 64-bit platforms (with availability of Torrent and HTTP downloads from the project's website). The following screenshot shows the main view of the desktop manager, XFCE:

The choice of desktop manager, XFCE, plays a very important role in BackBox. It is designed not only to serve slender environments with medium and low levels of resources, but also for very low memory. In the case of very low memory and other resources (such as CPU, HD, and video), BackBox has an alternative way of booting the system without a graphical user interface (GUI), using the command line only, which requires a really minimal amount of resources. With this aim in mind, BackBox is designed to function on pretty old and obsolete hardware to be used as a normal auditing platform. However, BackBox can also be used on more powerful systems to perform actions that require modern multicore processors to reduce the ETA of tasks such as brute-force attacks, data/password decryption, and password cracking. Of course, the BackBox team aims to minimize overhead for the aforementioned cases through continuous research and development. Luckily, the majority of the tools included in BackBox can be run in a shell/console environment, which requires fewer resources. However, we always have the XFCE interface, where we can access user-friendly GUI tools (in particular, network analysis tools) that do not require many resources.

A relative newcomer to the IT security and penetration testing environment, BackBox saw its first release back on September 9, 2010, as a project of the Italian web community. Now on its third major release and close to the next minor release (BackBox Linux 3.13 is planned for the end of January 2014), BackBox has grown rapidly and offers a wide scope for both amateur and professional use.

The minimum requirements for BackBox are as follows:

A 32-bit or 64-bit processor
512 MB of system memory (RAM); 256 MB if there will be no desktop manager usage and only the console
4.4 GB of disk space for installation
A graphics card capable of 800 × 600 resolution (less if there will be no desktop manager usage)
A DVD-ROM drive or USB port

The following screenshot shows the main view of BackBox with a toolbar at the bottom:

The suite of auditing tools in BackBox makes the system complete and ready to use for security and penetration testing professionals.

The organization of tools in BackBox

The entire set of BackBox security tools is populated into a single menu called Audit and structured into the following subtasks:

Information Gathering
Vulnerability Assessment
Exploitation
Privilege Escalation
Maintaining Access
Documentation & Reporting
Social Engineering
Stress Testing
Forensic Analysis
VoIP Analysis
Wireless Analysis
Miscellaneous

We will now run through the Auditing menu, giving a short description of each subtask. The following screenshot shows the Auditing menu of BackBox:

Information Gathering

Information Gathering is the absolute first step of any security engineer and/or penetration tester. It is about collecting information on target systems, which can be very useful to start the assessment. Without this step, it will be quite difficult and hard to assess any system.

Vulnerability Assessment

After you've gathered information by performing the first step, the next step will be to analyze that information and evaluate it. Vulnerability Assessment is the process of identifying the vulnerabilities present in the system and prioritizing them.

Exploitation

Exploitation is the process where a weakness or bug in the software is used to penetrate the system. This can be done through the usage of an exploit, which is nothing but an automated script designed to perform a malicious attack on target systems.

Privilege Escalation

Privilege Escalation occurs when we have already gained access to the system, but with low privileges. It can also be that we have legitimate access but not enough to make effective changes on the system, so we will need to elevate our privileges or gain access to another account with higher privileges.

Maintaining Access

Maintaining Access is about setting up an environment that will allow us to access the system again without repeating the tasks that we performed to gain access initially.

Documentation & Reporting

The Documentation & Reporting menu contains the tools that will allow us to collect the information gathered during our assessment and generate a human-readable report from it.

Reverse Engineering

The Reverse Engineering menu contains a suite of tools aimed at reversing the system by analyzing its structure, for both hardware and software.

Social Engineering

Social Engineering is based on a nontechnical intrusion method, relying mainly on human interaction. It is the ability to manipulate a person and obtain his/her access credentials, or information that can lead us to such parameters.

Stress Testing

The Stress Testing menu contains a group of tools aimed at testing the stress level of applications and servers. Stress testing is the action where a massive amount of requests (for example, ICMP requests) are performed against the target machine to create heavy traffic and overload the system. In this case, the target server is under severe stress and can be taken advantage of. For instance, the running services, such as the web server, database, or application server, can be taken down (for example, in a DDoS attack).

Forensic Analysis

The Forensic Analysis menu contains a great number of useful tools to perform a forensic analysis on any system. Forensic analysis is the act of carrying out an investigation to obtain evidence from devices. It is a structured examination that aims to rebuild the user's history on a computer device or a server system.

VoIP Analysis

Voice over IP (VoIP) is a very commonly used protocol today in every part of the world. VoIP analysis is the act of monitoring and analyzing network traffic with a specific analysis of VoIP calls. In this section, we have a single tool dedicated to the analysis of VoIP systems.

Wireless Analysis

The Wireless Analysis menu contains a suite of tools dedicated to the security analysis of wireless protocols. Wireless analysis is the act of analyzing wireless devices to check their safety level.

Miscellaneous

The Miscellaneous menu contains tools that have different functionalities and could be placed in any of the sections that we mentioned earlier, or in none of them.

Services

Apart from the Auditing menu, BackBox also has a Services menu. This menu is designed to populate the daemons of the tools that need to be manually initialized as a service.

Update

The Update menu can be found in the main menu, just next to the Services menu. It contains the automated scripts that allow users to update the tools that are outside of the APT automated system.

Anonymous

BackBox 3.13 has a new menu entry called Anonymous in the main menu. This menu contains a script that, once started, makes the user invisible to the network. The script populates a set of tools that anonymize the system while it navigates and connects to the global network, the Internet.

Extras

Apart from the security-auditing tools, BackBox also has several privacy-protection tools. The suite of privacy-protection tools includes Tor, Polipo, and Firefox safe mode, which has been configured with a default profile in private-browsing mode. There are many other useful tools recommended by the team, but they are not included in the default ISO image. These recommended tools are available in the BackBox repository and can be easily installed with apt-get (the automated package installation tool for Debian-like systems).

Completeness, accuracy, and support

It is obvious that there are many alternatives when it comes to the choice of penetration testing tools for any particular auditing process. The BackBox team is mainly focused on the size of the tool library, performance, and the inclusion of the tools for security and auditing. The set of tools included in BackBox is subject to accurate selection and testing by the team. Most security and penetration testing tools are implemented to perform identical functions, and the BackBox team is very careful in the selection process in order to avoid duplicate applications and redundancies.

Besides the wiki-based documentation provided for its set of tools, the repository of BackBox can also be imported into any existing Ubuntu installation (or any Debian-derivative distro) by simply adding the project's Launchpad repository to the source list. Another point that the BackBox team focuses its attention on is the size issue. BackBox may not offer the largest number of tools and utilities, but numbers do not equal quality. It has the essential tools installed by default, which are sufficient for a penetration tester. However, BackBox is not a perfect penetration testing distribution. It is a very young project and aims to offer the best solution to the global community.

Links and contacts

BackBox is an open community where everybody's help is greatly welcomed. Here is a list of useful links to BackBox information on the Web:

The BackBox main and official web page, where we can find general information about the distribution and the organization of the team, is available at http://www.BackBox.org/
The BackBox official blog, where we can find news about BackBox such as release notes and bug correction notifications, is available at http://www.BackBox.org/blog
The BackBox official wiki page, where we can find many tutorials for the usage of the tools included in the distribution, is available at http://wiki.BackBox.org/
The BackBox official forum, the main discussion forum where users can post their problems and suggestions, is available at http://forum.BackBox.org/
The BackBox official IRC chat room is available at https://kiwiirc.com/client/irc.autistici.org:6667/?nick=BackBox_?#BackBox
The BackBox official repository hosted on Launchpad, where all the packages are located, is available at https://launchpad.net/~BackBox
BackBox also has a Wikipedia page, where we can run through a brief history of how the project began, which is available at http://en.wikipedia.org/wiki/BackBox

Summary

In this article, we became more familiar with the BackBox environment by analyzing its menu structure and the way its tools are organized. We also provided a quick comment on each tool category in BackBox. This is the only theoretical information regarding the introduction of BackBox.

Resources for Article:

Further resources on this subject:
Penetration Testing and Setup [article]
BackTrack 4: Security with Penetration Testing Methodology [article]
Web app penetration testing in Kali [article]

Defining the Application's Policy File

Packt
23 Aug 2013
21 min read
(For more resources related to this topic, see here.)

The AndroidManifest.xml file

All Android applications need to have a manifest file. This file has to be named AndroidManifest.xml and has to be placed in the application's root directory. This manifest file is the application's policy file. It declares the application components, their visibility, access rules, libraries, features, and the minimum Android version that the application runs against. The Android system uses the manifest file for component resolution. Thus, the AndroidManifest.xml file is by far the most important file in the entire application, and special care is required when defining it to tighten up the application's security. The manifest file is not extensible, so applications cannot add their own attributes or tags. The complete list of tags, and how these tags can be nested, is as follows:

    <?xml version="1.0" encoding="utf-8"?>
    <manifest>
        <uses-permission />
        <permission />
        <permission-tree />
        <permission-group />
        <instrumentation />
        <uses-sdk />
        <uses-configuration />
        <uses-feature />
        <supports-screens />
        <compatible-screens />
        <supports-gl-texture />
        <application>
            <activity>
                <intent-filter>
                    <action />
                    <category />
                    <data />
                </intent-filter>
                <meta-data />
            </activity>
            <activity-alias>
                <intent-filter> </intent-filter>
                <meta-data />
            </activity-alias>
            <service>
                <intent-filter> </intent-filter>
                <meta-data/>
            </service>
            <receiver>
                <intent-filter> </intent-filter>
                <meta-data />
            </receiver>
            <provider>
                <grant-uri-permission />
                <meta-data />
                <path-permission />
            </provider>
            <uses-library />
        </application>
    </manifest>

Only two tags, <manifest> and <application>, are required. There is no specific order in which to declare components. The <manifest> tag declares the application-specific attributes. It is declared as follows:

    <manifest package="string"
        android:sharedUserId="string"
        android:sharedUserLabel="string resource"
        android:versionCode="integer"
        android:versionName="string"
        android:installLocation=["auto" | "internalOnly" | "preferExternal"] >
    </manifest>

An example of the <manifest> tag is shown in the following code snippet. In this example, the package is named com.android.example, the internal version is 10, and the user sees this version as 2.7.0. The install location is decided by the Android system based on where it has room to store the application.

    <manifest package="com.android.example"
        android:versionCode="10"
        android:versionName="2.7.0"
        android:installLocation="auto" >

The attributes of the <manifest> tag are as follows:

package: This is the name of the package. It is the Java-style namespace of your application, for example, com.android.example, and it is the unique ID of your application. If you change the name of a published application, it is considered a new application and auto updates will not work.
android:sharedUserId: This attribute is used if two or more applications share the same Linux ID. It is discussed in detail in a later section.
android:sharedUserLabel: This is the user-readable name of the shared user ID and only makes sense if android:sharedUserId is set. It has to be a string resource.
android:versionCode: This is the version code used internally by the application to track revisions. This code is referred to when updating an application to the more recent version.
android:versionName: This is the version of the application shown to the user. It can be set as a raw string or as a reference, and is only used for display to users.
android:installLocation: This attribute defines the location where the APK will be installed.

The application tag is defined as follows:

    <application
        android:allowTaskReparenting=["true" | "false"]
        android:backupAgent="string"
        android:debuggable=["true" | "false"]
        android:description="string resource"
        android:enabled=["true" | "false"]
        android:hasCode=["true" | "false"]
        android:hardwareAccelerated=["true" | "false"]
        android:icon="drawable resource"
        android:killAfterRestore=["true" | "false"]
        android:largeHeap=["true" | "false"]
        android:label="string resource"
        android:logo="drawable resource"
        android:manageSpaceActivity="string"
        android:name="string"
        android:permission="string"
        android:persistent=["true" | "false"]
        android:process="string"
        android:restoreAnyVersion=["true" | "false"]
        android:supportsRtl=["true" | "false"]
        android:taskAffinity="string"
        android:theme="resource or theme"
        android:uiOptions=["none" | "splitActionBarWhenNarrow"] >
    </application>

An example of the <application> tag is shown in the following code snippet. In this example, the application name, description, icon, and label are set. The application is not debuggable and the Android system can instantiate the components.

    <application
        android:label="@string/app_name"
        android:description="@string/app_desc"
        android:icon="@drawable/example_icon"
        android:enabled="true"
        android:debuggable="false">
    </application>

Many attributes of the <application> tag serve as the default values for the components declared within the application. These attributes include permission, process, icon, and label. Other attributes, such as debuggable and enabled, are set for the entire application. The attributes of the <application> tag are discussed as follows:

android:allowTaskReparenting: This value can be overridden by the <activity> element. It allows an Activity to re-parent with the Activity it has affinity with when it is brought to the foreground.
android:backupAgent: This attribute contains the name of the backup agent for the application.
android:debuggable: This attribute, when set to true, allows an application to be debugged. This value should always be set to false before releasing the app in the market.
android:description: This is the user-readable description of an application, set as a reference to a string resource.
android:enabled: If this attribute is set to true, the Android system can instantiate application components. This attribute can be overridden by components.
android:hasCode: If this attribute is set to true, the application will try to load some code when launching the components.
android:hardwareAccelerated: This attribute, when set to true, allows an application to support hardware-accelerated rendering. It was introduced in API level 11.
android:icon: This is the application icon as a reference to a drawable resource.
android:killAfterRestore: If this attribute is set to true, the application will be terminated once its settings are restored during a full-system restore.
android:largeHeap: This attribute lets the Android system create a large Dalvik heap for this application. It increases the memory footprint of the application, so it should be used sparingly.
android:label: This is the user-readable label for the application.
android:logo: This is the logo for the application.
android:manageSpaceActivity: This value is the name of the Activity that manages the memory for the application.
android:name: This attribute contains the fully qualified name of the Application subclass that will be instantiated before any other component is started.
android:permission: This attribute can be overridden by a component and sets the permission that a client should have to interact with the application.
android:persistent: Usually used by a system application, this attribute allows the application to be running all the time.
android:process: This is the name of the process in which a component will run. It can be overridden by any component's android:process attribute.
android:restoreAnyVersion: This attribute lets the backup agent attempt a restore even if the backup currently stored was made by a newer version of the application than the one attempting the restore.
android:supportsRtl: This attribute, when set to true, supports right-to-left layouts. It was added in API level 17.
android:taskAffinity: This attribute lets all activities have affinity with the package name, unless an Activity sets it explicitly.
android:theme: This is a reference to the style resource for the application.
android:uiOptions: If this attribute is set to none, there are no extra UI options; if set to splitActionBarWhenNarrow, a bar is set at the bottom if the screen is constrained.

In the following sections, we will discuss how to handle specific requirements using the policy file.

Application policy use cases

This section discusses how to define the application policies using the manifest file. I have used use cases, and we will discuss how to implement these use cases in the policy file.

Declaring application permissions

An application on the Android platform has to declare what resources it intends to use for the proper functioning of the application. These are the permissions that are displayed to the user when they download the application. Application permissions should be descriptive so that users can understand them. Also, as is the general rule with security, it is important to request the minimum permissions required. Application permissions are declared in the manifest file by using the <uses-permission> tag. An example of a location-based manifest file that uses the GPS for retrieving location is shown in the following code snippet:

    <uses-permission android:name="android.permission.ACCESS_COARSE_LOCATION" />
    <uses-permission android:name="android.permission.ACCESS_FINE_LOCATION" />
    <uses-permission android:name="android.permission.ACCESS_LOCATION_EXTRA_COMMANDS" />
    <uses-permission android:name="android.permission.ACCESS_MOCK_LOCATION" />
    <uses-permission android:name="android.permission.INTERNET" />

These permissions will be displayed to the users when they install the app, and can always be checked by going to Application under the settings menu. These permissions are seen in the following screenshot:

Declaring permissions for external applications

The manifest file also declares the permissions an external application (which does not run with the same Linux ID) needs to access the application components. This can be in one of two places in the policy file: in the <application> tag or along with the component in the <activity>, <provider>, <receiver>, and <service> tags. If there are permissions that all components of an application require, then it is easy to specify them in the <application> tag. If a component requires some specific permission, then it can be defined in the specific component tag. Remember, only one permission can be declared in any of the tags. If a component is protected by a permission, then the component permission overrides the permission declared in the <application> tag.

The following is an example of an application that requires external applications to have android.permission.ACCESS_COARSE_LOCATION to access its components and resources:

    <application
        android:allowBackup="true"
        android:icon="@drawable/ic_launcher"
        android:label="@string/app_name"
        android:permission="android.permission.ACCESS_COARSE_LOCATION">

If a Service requires that any application component accessing it should have access to external storage, then it can be defined as follows:

    <service
        android:enabled="true"
        android:name=".MyService"
        android:permission="android.permission.WRITE_EXTERNAL_STORAGE">
    </service>

If a policy file has both of the preceding tags, then when an external component makes a request to this Service, it should have android.permission.WRITE_EXTERNAL_STORAGE, as this permission overrides the permission declared by the application tag.

Applications running with the same Linux ID

Sharing data between applications is always tricky. It is not easy to maintain data confidentiality and integrity. Proper access control mechanisms have to be put in place based on who has access to how much data. In this section, we will discuss how to share application data with internal applications (signed by the same developer key).

Android is a layered architecture with application isolation enforced by the operating system itself. Whenever an application is installed on an Android device, the Android system gives it a unique user ID defined by the system. Notice that the two applications, example1 and example2, in the following screenshot run as separate user IDs, app_49 and app_50.

However, an application can request a user ID of its choice from the system. Another application can then request the same user ID as well. This creates tight coupling and does not require components to be made visible to the other application or shared content providers to be created. This kind of tight coupling is done in the manifest tags of all applications that want to run in the same process. The following is a snippet of the manifest files of the two applications, com.example.example1 and com.example.example2, that use the same user ID:

    <manifest package="com.example.example1"
        android:versionCode="1"
        android:versionName="1.0"
        android:sharedUserId="com.sharedID.example">

    <manifest package="com.example.example2"
        android:versionCode="1"
        android:versionName="1.0"
        android:sharedUserId="com.sharedID.example">

The following screenshot is displayed when these two applications are running on the device. Notice that the applications com.example.example1 and com.example.example2 now have the app ID app_113.

You will notice that the shared UID follows a certain format akin to a package name. Any other naming convention will result in an error, such as the installation error INSTALL_PARSE_FAILED_BAD_SHARED_USER_ID. All applications that share the same UID should have the same certificate.

External storage

Starting with API level 8, Android provides support to store Android applications (APK files) on external devices, such as an SD card. This helps to free up internal phone memory. Once the APK is moved to external storage, the only memory taken up by the app is the private data of the application stored on internal memory. It is important to note that even for SD card resident APKs, the DEX (Dalvik Executable) files, private data directories, and native shared libraries remain on internal storage.

Adding an optional attribute in the manifest file enables this feature. The application info screen for such an application has either a Move to SD card or a Move to phone button, depending on the current storage location of the APK. The user then has the option to move the APK file accordingly. If the external device is unmounted or the USB mode is set to Mass Storage (where the device is used as a disk drive), all the running activities and services hosted on that external device are immediately killed.

The feature to store the APK on external devices is enabled by adding the optional attribute android:installLocation to the <manifest> element of the application's manifest file. The attribute android:installLocation can have the following three values:

internalOnly: The Android system will install the application on internal storage only. In case of insufficient internal memory, storage errors are returned.
preferExternal: The Android system will try to install the application on external storage. If there is not enough external storage, the application will be installed on internal storage. The user will have the ability to move the app from external to internal storage and vice versa as desired.
auto: This option lets the Android system decide the best install location for the application. The default system policy is to install the application on internal storage first. If the system is running low on internal memory, the application is then installed on external storage. The user will have the ability to move the application from external to internal storage and vice versa as desired.

For example, if android:installLocation is set to auto, then on devices running a version of Android lower than 2.2, the system will ignore this feature and the APK will only be installed on internal memory. The following is a code snippet from an application's manifest file with this option:

    <manifest package="com.example.android"
        android:versionCode="10"
        android:versionName="2.7.0"
        android:installLocation="auto" >

The following is a screenshot of the application with the manifest file as specified previously. You will notice that Move to SD card is enabled in this case. In another application, where android:installLocation is not set, Move to SD card is disabled, as shown in the next screenshot.

Setting component visibility

Any of the application components, namely activities, services, providers, and receivers, can be made discoverable to external applications. This section discusses the nuances of such scenarios.

Any Activity or Service can be made private by setting android:exported="false". This is also the default value for an Activity. See the following two examples of a private Activity:

    <activity android:name=".Activity1" android:exported="false" />
    <activity android:name=".Activity2" />

However, if you add an Intent Filter to the Activity, then the Activity becomes discoverable for the Intent in the Intent Filter. Thus, the Intent Filter should never be relied upon as a security boundary. See the following examples of Intent Filter declaration:

    <activity
        android:name=".Activity1"
        android:label="@string/app_name" >
        <intent-filter>
            <action android:name="android.intent.action.MAIN" />
            <category android:name="android.intent.category.LAUNCHER" />
        </intent-filter>
    </activity>

    <activity android:name=".Activity2">
        <intent-filter>
            <action android:name="com.example.android.intent.START_ACTIVITY2" />
        </intent-filter>
    </activity>

Both activities and services can also be secured by an access permission required of the external component. This is done using the android:permission attribute of the component tag.

A Content Provider can be set up for private access by using android:exported="false". This is also the default value for a provider. In this case, only an application with the same user ID can access the provider. This access can be limited even further by setting the android:permission attribute of the provider tag.

A Broadcast Receiver can be made private by using android:exported="false". This is the default value of the receiver if it does not contain any Intent Filters. In this case, only the components with the same user ID can send a broadcast to the receiver. If the receiver contains Intent Filters, then it becomes discoverable and the default value of android:exported is true.

Debugging

During the development of an application, we usually set the application to be in debug mode. This lets developers see verbose logs and get inside the application to check for errors. This is done in the <application> tag by setting android:debuggable to true. To avoid security leaks, it is very important to set this attribute to false before releasing the application. Examples of sensitive information that I have seen leaked this way include usernames and passwords, memory dumps, internal server errors, and even some funny personal notes on the state of a server and a developer's opinion of a piece of code. The default value of android:debuggable is false.

Backup

Starting with API level 8, an application can choose a backup agent to back up its data to the cloud or a server. This can be set up in the manifest file in the <application> tag by setting android:allowBackup to true and then setting android:backupAgent to a class name. The default value of android:allowBackup is true, and the application can set it to false if it wants to opt out of backup. There is no default value for android:backupAgent, and a class name should be specified. The security implications of such a backup are debatable, as the services used to back up the data differ, and sensitive data such as usernames and passwords can be compromised.
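To make the debugging and backup points concrete before moving on, here is a minimal, illustrative sketch of an <application> element hardened for release. The attribute names are standard, but the values are only examples, and the .MyBackupAgent class name is hypothetical, not taken from this chapter.

    <!-- Option 1: ship with debugging off and opt out of backup entirely -->
    <application
        android:label="@string/app_name"
        android:debuggable="false"
        android:allowBackup="false" >
    </application>

    <!-- Option 2: keep backup, but name an explicit (hypothetical) agent class -->
    <application
        android:label="@string/app_name"
        android:debuggable="false"
        android:allowBackup="true"
        android:backupAgent=".MyBackupAgent" >
    </application>

Either way, android:debuggable should be false in any build that is released to the market.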
Returning to Intent Filters, the following examples show how they are declared:

<activity
    android:name=".Activity1"
    android:label="@string/app_name">
    <intent-filter>
        <action android:name="android.intent.action.MAIN" />
        <category android:name="android.intent.category.LAUNCHER" />
    </intent-filter>
</activity>

<activity android:name=".Activity2">
    <intent-filter>
        <action android:name="com.example.android.intent.START_ACTIVITY2" />
    </intent-filter>
</activity>

Both activities and services can also be secured by an access permission that the external component must hold. This is done using the android:permission attribute of the component tag.

A Content Provider can be set up for private access by using android:exported="false". This is also the default value for a provider. In this case, only an application with the same user ID can access the provider. This access can be limited even further by setting the android:permission attribute of the provider tag.

A Broadcast Receiver can be made private by using android:exported="false". This is the default value of the receiver if it does not contain any Intent Filters; in this case, only components with the same user ID can send a broadcast to the receiver. If the receiver contains Intent Filters, it becomes discoverable and the default value of android:exported is true.

Debugging

During the development of an application, we usually set the application to debug mode. This lets developers see verbose logs and get inside the application to check for errors. Debug mode is enabled in the <application> tag by setting android:debuggable to true. To avoid security leaks, it is very important to set this attribute to false before releasing the application. Examples of sensitive information that I have seen exposed this way include usernames and passwords, memory dumps, internal server errors, the state of a server, and even a developer's candid notes about a piece of code. The default value of android:debuggable is false.

Backup

Starting with API level 8, an application can specify a backup agent to back up its data to the cloud or a server. This is set up in the manifest file in the <application> tag by setting android:allowBackup to true and then setting android:backupAgent to a class name. The default value of android:allowBackup is true, and an application can set it to false if it wants to opt out of backups. There is no default value for android:backupAgent, so a class name must be specified. The security implications of such a backup are debatable, as the services used to back up the data differ and sensitive data, such as usernames and passwords, can be compromised.

Putting it all together

The following example puts all the learning we have done so far together to analyze the AndroidManifest.xml file provided with the Android SDK sample RandomMusicPlayer. The manifest file specifies that this is version 1 of the application com.example.android.musicplayer. It targets SDK 14 but supports versions back to SDK 7. The application uses two permissions, namely android.permission.INTERNET and android.permission.WAKE_LOCK. The application has one Activity that is the entry point for the application, called MainActivity, one Service called MusicService, and one receiver called MusicIntentReceiver. MusicService defines the custom actions PLAY, REWIND, PAUSE, SKIP, STOP, and TOGGLE_PLAYBACK. The receiver uses the intent actions android.media.AUDIO_BECOMING_NOISY and android.media.MEDIA_BUTTON defined by the Android system.
None of the components are protected with permissions. An example AndroidManifest.xml file is shown in the following screenshot:

Example checklist

In this section, I have put together an example checklist that I suggest you refer to whenever you are ready to release a version of your application. It is a very general list, and you should adapt it according to your own use case and components. When creating a checklist, think about issues that relate to the entire application, issues that are specific to a component, and issues that might arise from the way the component and application settings interact.

Application level

Here are some questions that you should be asking yourself as you define the application-specific preferences. They may affect how your application is viewed, stored, and perceived by users:

Do you want to share resources with other applications that you have developed? Did you specify the unique user ID? Did you define this unique ID for another application, either intentionally or unintentionally?
Does your application require capabilities such as camera, Bluetooth, or SMS?
Does your application need all the permissions it requests? Is there another permission that is more restrictive than the one you have defined? Remember the principle of least privilege.
Do all the components of your application need a given permission, or only a few?
Check the spelling of all permissions once again. The application may compile and work even if a permission's spelling is incorrect.
If you have defined a permission, is it really the one you need?
At what API level does the application work? What is the minimum API level that your application can support?
Are there any external libraries that your application needs?
Did you remember to turn off the debug attribute before release?
If you are using a backup agent, remember to declare it here.
Did you remember to set a version number? This will help you during application upgrades.
Do you want to set up automatic upgrades?
Did you remember to sign the application with your release key?
Sometimes setting a particular screen orientation will prevent your application from being visible on certain devices. For example, if your application only supports portrait mode, it might not appear for devices that are landscape-only.
Where do you want to install the APK?
Are there any services that might cease to work if an intent is not received in time?
Do you want any other application-level settings, such as the ability of the system to restore components?
If you are defining a new permission, think twice about whether you really need it. Chances are there is already an existing permission that covers your use case.

Component level

Some component-level questions that you will want to think about in the policy are listed here. These are questions that you should ask yourself for each component:

Did you define all components?
If you are using third-party libraries in your application, did you define all the components that you will use? Is there a particular setting that the third-party library expects from your application?
Do you want this component to be visible to other applications? Do you need to add any Intent Filters? If the component is not supposed to be visible, did you add Intent Filters anyway? Remember, as soon as you add Intent Filters, your component becomes visible.
Do other components require some special permission to trigger this component? Verify the spelling of the permission name.
Does your application require capabilities such as camera, Bluetooth, or SMS?

Summary

In this article, we've learned how to define an application's policy file. The manifest file is the most important artifact of an application and should be defined with utmost care. It declares the permissions requested by an application as well as the permissions that external applications need in order to access its components. With the policy file, we also define the storage location of the resulting APK and the minimum SDK version against which the application will run. The policy file exposes only those components that are not sensitive to the application. At the end of this article, we discussed some sample issues that a developer should be aware of when writing a manifest file. Along the way, we've also learned about the structure of an Android application.

Resources for Article:

Further resources on this subject:
So, what is Spring for Android? [Article]
Animating Properties and Tweening Pages in Android 3-0 [Article]
New Connectivity APIs – Android Beam [Article]
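To complement the checklist above, here is a hedged, illustrative sketch of what a release-oriented <application> element might look like once the debug and backup questions have been answered; the values shown are examples, not recommendations from the original article.

<application
    android:icon="@drawable/ic_launcher"
    android:label="@string/app_name"
    android:debuggable="false"
    android:allowBackup="false">
    <!-- Declare components here; export them only when required and
         protect exported ones with android:permission where appropriate. -->
</application>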

Bloomberg's Big Hack Exposé says China had microchips on servers for covert surveillance of Big Tech and Big Brother; Big Tech deny supply chain compromise

Savia Lobo
05 Oct 2018
4 min read
According to an in-depth report published by Bloomberg yesterday, Chinese spies secretly inserted microchips into servers used by Apple, Amazon, the US Department of Defense, the Central Intelligence Agency, and the Navy, among others.

What Bloomberg's Big Hack Exposé revealed

The tiny chips were made to be undetectable without specialist equipment and were implanted onto the motherboards of servers on the production line in China. These servers were allegedly assembled by Super Micro Computer Inc., a San Jose-based company that is one of the world's biggest suppliers of server motherboards. Supermicro's customers include Elemental Technologies, a streaming services startup that was acquired by Amazon in 2015 and provided the foundation for the expansion of the Amazon Prime Video platform. According to the report, the Chinese People's Liberation Army (PLA) placed the illicit chips on hardware during the manufacturing of server systems in factories.

How did Amazon detect these microchips?

In late 2015, Elemental's staff boxed up several servers and sent them to Ontario, Canada, for a third-party security company to test. The testers found a tiny microchip, not much bigger than a grain of rice, nested on the servers' motherboards, which wasn't part of the boards' original design. Following this, Amazon reported the discovery to U.S. authorities, which shocked the intelligence community, because Elemental's servers are ubiquitous across key US government agencies, including Department of Defense data centers, the CIA's drone operations, and the onboard networks of Navy warships. And Elemental was just one of hundreds of Supermicro customers.

According to the Bloomberg report, the chips were built to be as inconspicuous as possible and to mimic signal conditioning couplers. An investigation, which took three years to conclude, determined that the chip "allowed the attackers to create a stealth doorway into any network that included the altered machines." The report claims Amazon became aware of the attack during its move to purchase the streaming video compression firm Elemental Technologies in 2015. Elemental's services appear to have been an ideal target for Chinese state-sponsored attackers to conduct covert surveillance. According to Bloomberg, Apple was also a victim of the apparent breach; Bloomberg says that Apple found the malicious chips in 2015 and subsequently cut ties with Supermicro in 2016.

Amazon, Apple, and Supermicro deny supply chain compromise

Amazon and Apple have both strongly denied the results of the investigation. Amazon said, "It's untrue that AWS knew about a supply chain compromise, an issue with malicious chips, or hardware modifications when acquiring Elemental. It's also untrue that AWS knew about servers containing malicious chips or modifications in data centers based in China, or that AWS worked with the FBI to investigate or provide data about malicious hardware." Apple affirms that internal investigations were conducted in response to Bloomberg's queries and that no evidence was found to support the accusations. The only infected driver was discovered in 2016 on a single Supermicro server found in Apple Labs, and it may have been this incident that led to the severed business relationship in 2016, rather than the discovery of malicious chips or a widespread supply chain attack. Supermicro states that it was not aware of any investigation on the topic, nor was it contacted by any government agency in this regard.
Bloomberg says the denials are in direct contrast to the testimony of six current and former national security officials, as well as confirmation by 17 anonymous sources who said the account of the Supermicro compromise was accurate. Bloomberg's investigation has not been confirmed on the record by the FBI. To read about this news in detail, visit Bloomberg News.

Rounding up...

Packt
25 Nov 2013
3 min read
(For more resources related to this topic, see here.)

We have now successfully learned how to secure our users' passwords using hashes; however, we should take a look at the big picture, just in case. The following figure shows what a very basic web application looks like. Note the HTTPS transmission tag: HTTPS is a secure transfer protocol that allows us to transport information securely. When we transport sensitive data such as passwords in a web application without it, anyone who intercepts the connection can easily read the password in plain text, and our users' data would be compromised. In order to avoid this, we should always use HTTPS when there is sensitive data involved.

HTTPS is fairly easy to set up: you just need to buy an SSL certificate and configure it with your hosting provider. Configuration varies depending on the provider, but usually they provide an easy way to do it. It is strongly suggested to use HTTPS for authentication, sign-up, sign-in, and other processes involving sensitive data. As a general rule, most (if not all) of the data exchange that requires the user to be logged in should be protected. Keep in mind that HTTPS comes at a cost, so try to avoid using HTTPS on static pages that only contain public information. Always remember that to protect the password, we need to ensure both secure transport (with HTTPS) and secure storage (with strong hashes). Both are critical phases, and we need to be very careful with them.

Now that our passwords and other sensitive data are being transferred in a secure way, we can get into the application workflow. Consider the following steps for an authentication process:

1. The application receives an Authentication Request.
2. The Web Layer takes care of it: it gets the parameters (username and password) and passes them to the Authentication Service.
3. The Authentication Service calls the Database Access Layer to retrieve the user from the database.
4. The Database Access Layer queries the database, gets the user, and returns it to the Authentication Service.
5. The Authentication Service gets the stored hash from the user data retrieved from the database, extracts the salt and the number of iterations, and calls the Hashing Utility, passing it the password from the authentication request, the salt, and the iterations.
6. The Hashing Utility generates the hash and returns it to the Authentication Service.
7. The Authentication Service performs a constant-time comparison between the stored hash and the generated hash, and informs the Web Layer whether the user is authenticated or not.
8. The Web Layer returns the corresponding view to the user, depending on whether they are authenticated or not.

The following figure can help us understand how this works; please consider that flows 1, 2, 3, and 4 are bidirectional. The Authentication Service and the Hashing Utility are the components we have been working with so far. We already know how to create hashes; this workflow is an example to understand when we should use them.

Summary

In this article, we learned how to create hashes and how to secure our users' passwords with them. We also learned that we need to ensure secure transport (with HTTPS) as well as secure storage (with strong hashes).

Resources for Article:

Further resources on this subject:
FreeRADIUS Authentication: Storing Passwords [Article]
EJB 3.1: Controlling Security Programmatically Using JAAS [Article]
So, what is Spring for Android? [Article]
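To make the hashing and constant-time comparison steps described above concrete, here is a minimal, hypothetical sketch in Python; the article itself does not prescribe a language or library, and the PBKDF2 parameters shown are illustrative only.

import hashlib
import hmac
import os

def hash_password(password, iterations=100000):
    # Generate a random salt and derive a PBKDF2-HMAC-SHA256 hash from the password.
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode("utf-8"), salt, iterations)
    return salt, digest, iterations

def verify_password(password, salt, stored_digest, iterations):
    # Recompute the hash with the stored salt/iterations and compare in constant time.
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode("utf-8"), salt, iterations)
    return hmac.compare_digest(candidate, stored_digest)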

Puppet and OS Security Tools

Packt
27 Mar 2015
17 min read
This article by Jason Slagle, author of the book Learning Puppet Security, covers using Puppet to manage SELinux and auditd. We have learned a lot so far about using Puppet to secure your systems, as well as how to use it to make groups of systems more secure. However, in all of that, we have not yet covered some of the basic OS-level functions that are available to secure a system. In this article, we'll review several of those functions.

(For more resources related to this topic, see here.)

SELinux is a powerful tool in the security arsenal. Most administrators' experience with it is along the lines of "How can I turn that off?" This is born out of frustration with the tool's poor documentation, as well as the tedious nature of its configuration. While Puppet cannot help you with the documentation (which is getting better all the time), it can help you with some of the other challenges that SELinux brings: namely, ensuring that the proper contexts and policies are in place on the systems being managed.

In this article, we'll cover the following topics related to OS-level security tools:

A brief introduction to SELinux and auditd
The built-in Puppet support for SELinux
Community modules for SELinux
Community modules for auditd

At the end of this article, you should have enough skills that you no longer need to disable SELinux. However, if you still need to do so, it is certainly possible via the modules presented here.

Introducing SELinux and auditd

Over the course of this article, we'll explore the SELinux framework for Linux and see how to automate it using Puppet. As part of the process, we'll also review auditd, the logging and auditing framework for Linux. Using Puppet, we can automate the configuration of these often-neglected security tools, and even move the configuration of these tools for various services into the modules that configure those services.

The SELinux framework

SELinux is a security system for Linux originally developed by the United States National Security Agency (NSA). It is an in-kernel protection mechanism designed to provide Mandatory Access Controls (MACs) in the Linux kernel.

SELinux isn't the only MAC framework for Linux. AppArmor is an alternative MAC framework that has been included in the Linux kernel since version 2.6.30. We chose SELinux since it is the default framework used under Red Hat Linux, which we're using for our examples. More information on AppArmor can be found at http://wiki.apparmor.net/index.php/Main_Page.

These access controls work by confining processes to the minimal set of files and network access that they require to run. By doing this, the controls limit the amount of collateral damage that can be done by a process that becomes compromised. SELinux was first merged into the mainline Linux kernel for the 2.6.0 release. It was introduced into Red Hat Enterprise Linux with version 4, and into Ubuntu in version 8.04. With each successive release of these operating systems, support for SELinux grows and it becomes easier to use.

SELinux has a couple of core concepts that we need to understand to configure it properly. The first are the concepts of types and contexts. A type in SELinux is a grouping of similar things. Files used by Apache may be of type httpd_sys_content_t, for instance, which is a type that all content served over HTTP would have. The httpd process itself is of type httpd_t.
These types are applied to objects, which represent discrete things such as files and ports, and become part of the context of that object. The context of an object represents the object's user, role, type, and, optionally, multilevel security data. For this discussion, the type is the most important component of the context. Using a policy, we grant access from the subject, which represents a running process, to various objects that represent files, network ports, memory, and so on. We do that by creating a policy that allows a subject access to the types it requires in order to function.

SELinux has three modes in which it can operate. The first of these modes is disabled; as the name implies, disabled mode runs without any SELinux enforcement. The second mode is called permissive. In permissive mode, SELinux will log any access violations but will not act on them. This is a good way to get an idea of where you need to modify your policy, or tune Booleans, to get proper system operation. The final mode, enforcing, will deny actions that do not have a policy in place. Under Red Hat Linux variants, this is the default SELinux mode. By default, Red Hat 6 runs SELinux with a targeted policy in enforcing mode. This means that, for the targeted daemons, SELinux will enforce its policy by default.

An example is in order here to explain this well. So far, we've been operating with SELinux disabled on our hosts. The first step in experimenting with SELinux is to turn it on. We'll set it to permissive mode at first, while we gather some information. To do this, after starting our master VM, we'll need to modify the SELinux configuration and reboot. While it's possible to change from enforcing mode to either permissive or disabled mode without a reboot, going back requires a reboot.

Let's edit the /etc/sysconfig/selinux file and set the SELINUX variable to permissive on our puppetmaster. Remember to start the Vagrant machine and SSH in first, if necessary. Once this is done, the file should look as follows:

Once this is complete, we need to reboot. To do so, run the following command:

sudo shutdown -r now

Wait for the system to come back online. Once the machine is back up and you SSH back into it, run the getenforce command. It should return Permissive, which means SELinux is running but not enforcing.

Now, we can make sure our master is running and take a look at its context. If it's not running, you can start the service with the sudo service puppetmaster start command. Next, we'll use the -Z flag on the ps command to examine the SELinux context. Many commands, such as ps and ls, use the -Z flag to view SELinux data. We'll go ahead and run the following command to view the SELinux data for the running puppetmaster:

ps -efZ | grep puppet

When you do this, you'll see output such as the following:

unconfined_u:system_r:initrc_t:s0 puppet 1463 1 1 11:41 ? 00:00:29 /usr/bin/ruby /usr/bin/puppet master

If you take a look at the first part of the output line, you'll see that Puppet is running in the unconfined_u:system_r:initrc_t context. This is actually somewhat of a bug, and a result of the Puppet policy on CentOS 6 being out of date. We should actually be running under the system_u:system_r:puppetmaster_t:s0 context, but the policy is for a much older version of Puppet, so it runs unconfined. Let's take a look at the sshd process as well.
To do so, we'll just grep for sshd instead:

ps -efZ | grep sshd

The output is as follows:

system_u:system_r:sshd_t:s0-s0:c0.c1023 root 1206 1 0 11:40 ? 00:00:00 /usr/sbin/sshd

This is the more traditional output one would expect. The sshd process is running under the system_u:system_r:sshd_t context, which corresponds to the system user, the system role, and the sshd type. The user and role are SELinux constructs that enable role-based access controls. SELinux users do not map to system users; instead, they allow us to set policy based on the SELinux user object, giving us role-based access control based on that user. As seen previously, the unconfined user is a user that will not be enforced.

Now, we can take a look at some objects. Running ls -lZ /etc/ssh results in the following:

As you can see, each of the files belongs to a context that includes the system user as well as the object role. They are split between the etc type for configuration files and the sshd_key type for keys. The SSH policy allows the sshd process to read both of these file types. Other policies, say for NTP, would potentially allow the ntpd process to read the etc types, but it would not be able to read the sshd_key files. This very fine-grained control is the power of SELinux.

However, with great power comes very complex configuration, which can be confusing if it isn't set up correctly. For instance, with Puppet, the wrong type on a file can potentially impact the system if not dealt with. Fortunately, in permissive mode, we will log data that we can use to assist us with this. This leads us into the second half of the system that we wish to discuss, which is auditd. In the meantime, there is plenty of information on SELinux available on its website at http://selinuxproject.org/page/Main_Page. There's also a very funny, but informative, resource describing SELinux at https://people.redhat.com/duffy/selinux/selinux-coloring-book_A4-Stapled.pdf.

The auditd framework for audit logging

SELinux does a great job of limiting access to system components; however, reporting what enforcement took place was not one of its objectives. Enter auditd. auditd is an auditing framework developed by Red Hat. It is a complete auditing system that uses rules to indicate what to audit. It can be used to log SELinux events, as well as much more. Under the hood, auditd has hooks into the kernel to watch system calls and other processes. Using rules, you can configure logging for any of these events. For instance, you can create a rule that monitors writes to the /etc/passwd file, which would allow you to see if any users were added to the system. We can also add monitoring of files such as lastlog and wtmp to monitor login activity. We'll explore this example later when we configure auditd.

To quickly see how a rule works, we'll manually configure a quick rule that will log whenever the wtmp file is modified. This will add some system logging around users logging in. To do this, let's edit the /etc/audit/audit.rules file and add the following lines:

-w /var/log/wtmp -p wa -k logins
-w /etc/passwd -p wa -k password

Let's take a look at what the preceding lines do. Both lines start with the -w clause, which indicates the file that we are monitoring. Second, we have the -p clause, which sets which file operations we monitor; in this case, it is write and append operations.
Finally, with the -k entries, we're setting a keyword that is logged and can be filtered on. These rules should go at the end of the file. Once that's done, reload auditd with the following command:

sudo service auditd restart

Once this is complete, go ahead and log in with another SSH session, then simply log back out. When this is done, take a look at the /var/log/audit/audit.log file. You should see content like the following:

type=SYSCALL msg=audit(1416795396.816:482): arch=c000003e syscall=2 success=yes exit=8 a0=7fa983c446aa a1=1 a2=2 a3=7fff3f7a6590 items=1 ppid=1206 pid=2202 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=51 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:sshd_t:s0-s0:c0.c1023 key="logins"
type=SYSCALL msg=audit(1416795420.057:485): arch=c000003e syscall=2 success=yes exit=7 a0=7fa983c446aa a1=1 a2=2 a3=8 items=1 ppid=1206 pid=2202 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=51 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:sshd_t:s0-s0:c0.c1023 key="logins"

There are tons of fields in this output, including the SELinux context, the user ID, and so on. Of particular interest is auid, the audit user ID. For commands run via sudo, this will still contain the user ID of the user who called sudo, which makes it a great way to log commands performed via sudo.

Auditd also logs SELinux failures. They get logged under the type AVC. These access vector cache logs are placed in the auditd log file whenever a SELinux violation occurs. Much like SELinux, auditd is somewhat complicated, and its intricacies are beyond the scope of this book. You can get more information at http://people.redhat.com/sgrubb/audit/.

SELinux and Puppet

Puppet has direct support for several features of SELinux. There are two native Puppet types for SELinux: selboolean and selmodule. These types support setting SELinux Booleans and installing SELinux policy modules, respectively. SELinux Booleans are variables that affect how SELinux behaves; they are set to permit various functions. For instance, you set a SELinux Boolean to true to allow the httpd process to access network ports. SELinux modules are groupings of policies that allow policy to be loaded in a more granular way. The Puppet selmodule type allows Puppet to load these modules.

The selboolean type

The targeted SELinux policy that most distributions use is based on the SELinux reference policy. One of the features of this policy is the use of Boolean variables that control the actions of the policy. There are over 200 of these Booleans on a Red Hat 6-based machine. We can investigate them by installing the policycoreutils-python package on the operating system. You can do this by executing the following command:

sudo yum install policycoreutils-python

Once it is installed, we can run the semanage boolean -l command to get a list of the Boolean values along with their descriptions. The output will look as follows:

As you can see, there is a very large number of settings that can be reconfigured simply by setting the appropriate Boolean value. The selboolean Puppet type supports managing these Boolean values. The type is fairly simple, accepting the following parameters:

name: The name of the Boolean to be set. It defaults to the title.
persistent: Whether to write the value to disk for the next boot.
provider: The provider for the type. Usually, the default, getsetsebool, is accepted.
value: The value of the Boolean, true or false.

Usage of this type is rather simple. We'll show an example that sets the puppetmaster_use_db Boolean to true. If we were using the SELinux Puppet policy, this would allow the master to talk to a database. For our purposes, it's simply an unused variable that we can use for demonstration. As a reminder, the SELinux policy for Puppet on CentOS 6 is outdated, so setting the Boolean does not affect the version of Puppet we're running; it does, however, serve to show how a Boolean is set.

To do this, we'll create a sample role and profile for our puppetmaster. This is something that would likely exist in a production environment to manage the configuration of the master. In this example, we'll simply build a small profile and role for the master. Let's start with the profile. Copy over the profiles module we've slowly been building up, and add a puppetmaster.pp profile. To do so, edit the profiles/manifests/puppetmaster.pp file and make it look as follows:

class profiles::puppetmaster {
  selboolean { 'puppetmaster_use_db':
    value      => on,
    persistent => true,
  }
}

Then, we'll move on to the role. Copy the roles module, and edit the roles/manifests/puppetmaster.pp file to make it look as follows:

class roles::puppetmaster {
  include profiles::puppetmaster
}

Once this is done, we can apply it to our host. Edit the /etc/puppet/manifests/site.pp file to apply the puppetmaster role to the puppetmaster machine, as follows:

node 'puppet.book.local' {
  include roles::puppetmaster
}

Now, we'll run Puppet and get the following output:

As you can see, it set the value to on when run. Using this method, we can set any of the SELinux Boolean values we need for our system to operate properly. More information on SELinux Booleans, including how to obtain a list of them, can be found at https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/6/html/Security-Enhanced_Linux/sect-Security-Enhanced_Linux-Working_with_SELinux-Booleans.html.

The selmodule type

The other native type inside Puppet is a type to manage SELinux modules. Modules are compiled collections of SELinux policy. They're loaded into the kernel using the selmodule command, and this Puppet type provides support for that mechanism. The available parameters are as follows:

name: The name of the module; it defaults to the title.
ensure: The desired state, present or absent.
provider: The provider for the type; it should be selmodule.
selmoduledir: The directory that contains the module to be installed.
selmodulepath: The complete path to the module to be installed if it is not present in selmoduledir.
syncversion: Whether to resync the module if a new version is found, much like ensure => latest.

Using this type, we can take our compiled module, serve it onto the system with Puppet, and then ensure that it gets installed. This lets us centrally manage the module with Puppet. We'll later see an example in which a module compiles a policy and then installs it, so we won't show a specific example here. Instead, we'll move on to the last SELinux-related component in Puppet.

File parameters for SELinux

The final piece of internal support for SELinux comes in the form of the file type.
The file type parameters are as follows:

selinux_ignore_defaults: By default, Puppet uses the matchpathcon function to set the context of a file. Setting this to true overrides that behavior.
selrange: Sets the SELinux range component. We haven't really covered this; it is not used in most mainstream distributions at the time of writing.
selrole: Sets the SELinux role on the file.
seltype: Sets the SELinux type on the file.
seluser: Sets the SELinux user on the file.

Usually, if you place files in the correct location (the expected location for a service) on the filesystem, Puppet will get the SELinux properties right via its use of the matchpathcon function. This function (which also has a matching command-line utility) applies a default context based on the policy settings. Setting the context manually is for cases where you're storing data outside the normal location. For instance, you might be storing web data under the /opt directory.

The preceding types and providers give you the basics needed to manage SELinux on a system. We'll now take a look at a couple of community modules that build on these types and create a more in-depth solution.

Summary

This article looked at what SELinux and auditd are, and gave brief examples of how they can be used to secure your systems. After this, we looked at the specific support for SELinux in Puppet: the two built-in types that support it, as well as the SELinux parameters on the file type. Then, we took a look at one of the several community modules for managing SELinux, which lets us store policies as text instead of compiled blobs.

Resources for Article:

Further resources on this subject:
The anatomy of a report processor [Article]
Module, Facts, Types and Reporting tools in Puppet [Article]
Designing Puppet Architectures [Article]
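To round out the file parameters described above, here is a minimal, hypothetical Puppet sketch of serving web content from a non-standard location while giving it the type the Apache policy expects; the paths and module name are illustrative, not taken from the book.

# Content stored under /opt rather than the default DocumentRoot, so we set
# the SELinux type explicitly instead of relying on matchpathcon defaults.
file { '/opt/webdata':
  ensure  => directory,
  owner   => 'apache',
  group   => 'apache',
  mode    => '0755',
  seltype => 'httpd_sys_content_t',
}

file { '/opt/webdata/index.html':
  ensure  => file,
  mode    => '0644',
  seltype => 'httpd_sys_content_t',
  source  => 'puppet:///modules/profiles/index.html',
}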

iOS Security Overview

Packt
04 Mar 2015
20 min read
In this article by Allister Banks and Charles S. Edge, the authors of the book Learning iOS Security, we will go through an overview of the basic security measures in iOS. Out of the box, iOS is one of the most secure operating systems available. A number of factors contribute to this elevated security level. These include the fact that users cannot access the underlying operating system. Apps also keep their data in a silo (sandbox), so instead of accessing the system's internals, they can only access their own silo. App developers choose whether to store settings such as passwords in the app or in iCloud Keychain, which is a secure location for such data on a device. Finally, Apple has a number of controls in place on devices to help protect users while providing an elegant user experience. However, devices can be made even more secure than they are by default. In this article, we're going to get some basic security tasks under our belt in order to establish some security best practices. Where more explanation is needed about what we did on devices, we'll explore that part of the technology itself.

This article will cover the following topics:

Pairing
Backing up your device
Initial security checklist
Safari and built-in app protection
Predictive search and Spotlight

(For more resources related to this topic, see here.)

To kick off the overview of iOS security, we'll quickly secure our systems with a simple checklist of tasks, configuring a few device protections that we feel everyone should use. Then, we'll look at how to take a backup of our devices and, finally, at how to use the built-in web browser and the protections around it.

Pairing

When you connect a device to a computer that runs iTunes for the first time, you are prompted to enter a password. Doing so allows you to synchronize the device with the computer. Applications that can communicate over this channel include iTunes, iPhoto, Xcode, and others. To pair a device to a Mac, simply plug the device in (if you have a passcode, you'll need to enter it in order to pair the device). When the device is plugged in, you'll be prompted on both the device and the computer to establish a trust. Simply tap on Trust on the iOS device, as shown in the following screenshot:

Trusting a computer

For the computer to communicate with the iOS device, you'll also need to accept the pairing on your computer (although libimobiledevice, a command-line pairing tool, does not require this, because you accept the pairing from the command line). When prompted, click on Continue to establish the pairing, as seen in the following screenshot (the screenshot is the same in Windows):

Trusting a device

When a device is paired, a file is created in /var/db/lockdown, named after the UDID of the device with a property list (plist) extension. A property list is an Apple XML file that stores a variety of attributes. In Windows, iOS data is stored in the MobileSync folder, which you can access by navigating to Users\(username)\AppData\Roaming\Apple Computer\MobileSync. The information in this file sets up a trust between the computers and includes the following attributes:

DeviceCertificate: This certificate is unique to each device.
EscrowBag: This key bag contains class keys used to decrypt the device.
HostCertificate: This certificate is for the host that is paired with the iOS device (usually the same in all files for devices you've paired with on your computer).
HostID: This is a generated ID for the host.
HostPrivateKey: This is the private key for your Mac (it should be the same in all files on a given computer).
RootCertificate: This is the certificate used to generate keys (it should be the same in all files on a given computer).
RootPrivateKey: This is the private key of the computer that runs iTunes for that device.
SystemBUID: This refers to the ID of the computer that runs iTunes.
WiFiMACAddress: This is the MAC address of the Wi-Fi interface of the device that is paired with the computer. Even if you do not have an active Wi-Fi interface, the MAC address is still used while pairing.

Why does this matter? It's important to know how a device interfaces with a computer. These files can be moved between computers and contain a variety of information about a device, including private keys. Having the keys isn't all that is required for a computer to communicate with a device: when a device interfaces with a computer over USB and has a passcode enabled, you will be required to enter that passcode in order to unlock the device. Once a computer is able to communicate with a device, you need to be careful, as backups of the device, apps that get synchronized to it, and other data exchanged with it can be exposed while at rest on computers.

Backing up your device

What do most people do to maximize the security of iOS devices? Before we do anything else, we need to take a backup of our devices. This protects the device from us by providing a restore point, and it also secures the data against the possibility of losing it through a silly mistake. The two most commonly used ways to take backups are iCloud and iTunes. As the names imply, the first backs up data to Apple's cloud service and the second to desktop computers. We'll cover how to take a backup with iCloud first.

iCloud backups

An iCloud account comes with free storage to back up your Apple devices. An iOS device backs up to Apple's servers and can be restored from those same servers when a new device is set up (this is a screen that appears during the activation process of a new device; it also appears as an option in iTunes if you back up to iTunes over USB, covered later in this article). Setting up and checking the status of iCloud backups is a straightforward process. From the Settings app, tap on iCloud and then Backup. As you can see from the Backup screen, you have two options: iCloud Backup, which enables automatic backups of the device to your iCloud account, and Back Up Now, which runs an immediate backup of the device.

Allowing iCloud to take backups of devices is optional. You can disable access to iCloud and iCloud backups; however, doing so is rarely a good idea, as you are limiting the functionality of the device and putting the data on your device at risk if that data isn't backed up another way, such as through iTunes. Many people have reservations about storing data on public clouds, especially data as private as phone data (texts, phone call history, and so on). For more information on Apple's security and privacy around iCloud, refer to http://support.apple.com/en-us/HT202303. If you do not trust Apple or its cloud, you can also take a backup of your device using iTunes, as described in the next section.

Taking backups using iTunes

Originally, iTunes was used to take backups of iOS devices.
You can still use iTunes, and it's likely you will want a second backup even if you are using iCloud, if nothing else for a quick restore. Backups are usually pretty small. The reason is that the operating system is not part of the backup, since users can't edit any of those files; instead, an ipsw file (the operating system image) is used to restore a device. These are accessed through Apple Configurator, or through iTunes if you have a restore file waiting to be installed. They can be seen in ~/Library/iTunes, in folders named after the device and its software updates, as shown in the following screenshot:

IPSW files

Backups are stored in the ~/Library/Application Support/MobileSync/Backup directory. Here, you'll see a number of directories that are associated with the UDIDs of the devices, and within those, you'll see a number of files that make up the modular, incremental backups beyond the initial backup. It's a pretty smart system and allows you to restore a device to different points in time without each backup taking too long. On Windows, backups are stored in the Documents and Settings\USERNAME\Application Data\Apple Computer\MobileSync\Backup directory on Windows XP, and in the Users\USERNAME\AppData\Roaming\Apple Computer\MobileSync\Backup directory on newer operating systems.

To enable an iTunes backup, plug a device into a computer and then open iTunes. Click on the device to show the device details screen. The top section of the screen is for Backups (in the following screenshot, you can set the backup destination to This computer, which stores the backup on the computer you are using). I would recommend always choosing the Encrypt iPhone backup option, as it forces you to set a password that is required in order to restore the backup. Additionally, you can use the Back Up Now button to kick off the first backup, as shown in the following screenshot:

iTunes

Viewing iOS data in iTunes

To show why it's important to encrypt backups, let's look at what can be pulled out of those backups. There are a few tools that can extract backups, provided you have the password. Here, we'll look at iBackup Extractor to view backed-up browsing history, calendars, call history, contacts, iMessages, notes, photos, and voicemails. To get started, download iBackup Extractor from http://www.wideanglesoftware.com/ibackupextractor. When you open iBackup Extractor for the first time, simply choose the device backup you wish to extract. As you can see in the following screenshot, you will be prompted for a password in order to unlock the backup key bag. Enter the password to unlock the backup.

Unlock the backups

Note that the file tree in the following screenshot gives away some information about the structure of the iOS filesystem, or at least the data stored in the backups of the iOS device. For now, simply click on Browser to see a list of files that can be extracted from the backup, as you can see in the next screenshot:

View Device Contents Using iBackup Extractor

Note the prevalence of SQL databases in the files. Most apps use these types of databases to store data on devices. Also, check out the other options, such as extracting notes (many possibly deleted), texts (some of which have been deleted from devices), and other types of data. Now that we've covered backups and shown why you should really put a password on them, let's finally get to some basic security tasks to be performed on these devices!
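If you prefer to script this, the libimobiledevice tools mentioned earlier can pair a device and take an encrypted backup from the command line. The following is a hedged sketch: tool availability and options can vary by libimobiledevice version, and the password and directory shown are only placeholders.

idevicepair pair
# establish the trust relationship (confirm the prompt on the device)
idevicebackup2 encryption on "MyPassword" ~/ios-backups
# turn on backup encryption using a placeholder password
idevicebackup2 backup --full ~/ios-backups
# write a full backup of the device to the given directory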
Initial security checklist

Apple has built iOS to be one of the most secure operating systems in the world. This has been made possible by restricting end users' access to much of the operating system, unless you jailbreak a device. In this article, we won't cover jailbreaking devices much, because securing such devices becomes a whole new topic. Instead, we focus on what you need to do, how you can do those tasks, what the impacts are, and how to manage security settings based on a policy.

The basic steps required to secure an iOS device start with encrypting the device, which is done by assigning a passcode to it. We will then configure how much inactive time is allowed before the device requires a PIN, and manage the privacy settings accordingly. These settings give us some very basic security features and set the stage to explain what some of the features actually do.

Configuring a passcode

The first thing most of us need to do on an iOS device is configure a passcode for the device. Several things happen when a passcode is enabled:

1. The device is encrypted.
2. The device then requires a passcode to wake up.
3. An idle timeout is automatically set that puts the device to sleep after a few minutes of inactivity.

This means that three of the most important things you can do to secure a device are enabled when you set up a passcode. Best of all, Apple recommends setting up a passcode during the initial setup of new devices. You can manage passcode settings using policies (or profiles, as Apple likes to call them in iOS). Better still, you can set a passcode and then use your fingerprint on the Home button instead of the passcode. We have found that by the time the phone is out of our pocket, with a finger on the Home button, the device is unlocked by the time we check it. With the iPhone 6 and higher, you can also use that same fingerprint to secure payment information.

Check whether a passcode has been configured and, if needed, configure one using the Settings app. The Settings app is on the Home screen by default, and it is where many device settings, including Wi-Fi networks the device has joined, app preferences, mail accounts, and other settings, are configured.

To set a passcode, open the Settings app and tap on Touch ID & Passcode.
If a passcode has been set, you will see the Turn Passcode Off option (as seen in the following screenshot).
If a passcode has not been set, you can set one on this screen as well.
Additionally, you can change a passcode that has been set using the Change Passcode button, and define a fingerprint (or additional fingerprints) that can be used with Touch ID.

There are two options in the USE TOUCH ID FOR section of the screen. You can choose whether or not to require the passcode in order to unlock the phone, which you should use unless the device is also used by small children or as a kiosk; in those cases, you don't need to encrypt or back up the device anyway. The second option forces a passcode to be entered when using the App Store and iTunes. If someone else uses your device, purchases could cost you money, so leave the default in place, which requires the passcode.

Configure a Passcode

The passcode settings are very easy to configure, so they should be configured whenever possible. Scroll down on this screen and you'll see several other features, as shown in the next screenshot.
The first option on the screen is Simple Passcode. Most users want to use a simple PIN with an iOS device; requiring long, alphanumeric passcodes simply causes most users to try to circumvent the requirement. To add a fingerprint as a passcode, simply tap on Add a Fingerprint…, which you can see in the preceding screenshot, and follow the onscreen instructions.

Additionally, the following can be accessed when the device is locked, and you can choose to turn them off:

Today: This shows an overview of upcoming calendar items.
Notifications View: This shows you recent push notifications (apps that have updates on the device).
Siri: This represents the voice control of the device.
Passbook: This tool is used to make payments and display tickets for concert venues and meetups.
Reply with Message: This tool allows you to send a text reply to an incoming call (useful if you're on the treadmill).

Each organization can decide whether it considers these options to be a security risk and direct users on how to deal with them, or it can implement a policy around these options.

Passcode Settings

There aren't a lot of security options around passcodes and encryption because, by and large, Apple secures the device by giving you fewer options than you'll actually use. Under the hood (for example, through Apple Configurator and Mobile Device Management), there are a lot of other options, but these aren't exposed to end users of devices. For the most part, a simple four-character passcode will suffice in most environments. When you complicate passcodes, devices become much more difficult to unlock, and users tend to look for ways around passcode enforcement policies. The passcode is only used on the device itself, so a more complicated passcode only reduces the likelihood that it can be guessed in the few attempts available before the device locks itself down, which typically happens within 10 tries. Finally, to disable a passcode, and therefore encryption, simply go to the Touch ID & Passcode option in the Settings app and tap on Turn Passcode Off.

Configuring privacy settings

Once a passcode is set and the device is encrypted, it's time to configure the privacy settings. Third-party apps cannot communicate with one another by default in iOS. Therefore, you must enable communication between them (and between third-party apps and the built-in iOS apps that have APIs). This is a fundamental concept when it comes to securing iOS devices. To configure privacy options, open the Settings app and tap on the entry for Privacy. On the Privacy screen, you'll see a list of the services and data that apps can request access to, as shown in the following screenshot:

Privacy Options

As an example, tap on the Location Services entry, as shown in the next screenshot. Here, you can set which apps can communicate with Location Services and when. If an app is set to While Using, the app can communicate with Location Services only while the app is open. If an app is set to Always, the app can communicate with Location Services even when it is running in the background.

Configure Location Services

On the Privacy screen, tap on Photos. Here, you have fewer options because, unlike the location of a device, photos can't be accessed by an app running in the background.
Here, you can enable or disable each app's ability to communicate with the photo library on the device, as seen in the next screenshot:

Configure What Apps Can Access Your Camera Roll

Each app should be configured so that it can communicate only with the iOS features or other apps that are absolutely necessary. Other privacy options you might consider disabling include Siri and Handoff. Siri provides the voice controls of iOS. Because Siri can be used even when your phone is locked, consider disabling it by opening the Settings app, tapping on General and then on Siri, where you can turn the voice controls off. To disable Handoff, use the General System Preferences pane on any OS X computer paired to an iOS device and uncheck the Allow Handoff between this Mac and your iCloud devices option.

Safari and built-in App protections

Web browsers have access to a lot of data, and on other platforms they have been one of the most popular attack targets. The default browser on an iOS device is Safari. Open the Settings app and then tap on Safari. The Safari preferences used to secure iOS devices include the following:

Passwords & AutoFill: This screen includes contact information, a list of saved passwords, and credit cards used in the web browser. This data is stored in iCloud Keychain if iCloud Keychain has been enabled on your phone.
Favorites: This performs the function of bookmark management; it shows bookmarks in iOS.
Open Links: This configures how links are handled.
Block Pop-ups: This enables a pop-up blocker.

Scroll down and you'll see the Privacy & Security options (as seen in the next screenshot). Here, you can do the following:

Do Not Track: This requests that websites not track your browsing activity.
Block Cookies: A cookie is a small piece of data sent from a website to a visitor's browser. Many sites will send cookies to third-party sites, so the management of cookies becomes an obstacle to privacy for many users. By default, Safari only allows cookies from websites that you visit (Allow from Websites I Visit). Set the Cookies option to Always Block to refuse all cookies, set it to Always Allow to accept cookies from any source, or set it to Allow from Current Website Only to only allow cookies from the site currently being visited.
Fraudulent Website Warning: This blocks phishing attacks (sites that only exist to steal personal information).
Clear History and Website Data: This clears any cached history, web files, and passwords from the Safari browser.
Use Cellular Data: When this option is turned off, it disables web traffic over cellular connections (so web traffic will only work when the phone is connected to a Wi-Fi network).

Configure Privacy Settings for Safari

There are also a number of advanced options that can be accessed by tapping on the Advanced button, as shown in the following screenshot:

Configure the Advanced Safari Options

These advanced options include the following:

Website Data: This option (as you can see in the next screenshot) shows the amount of data stored by each site that caches files on the device, and allows you to swipe left on these entries to delete the data saved for that site. Tap on Remove All Website Data to remove data for all sites at once.
JavaScript: This allows you to disable JavaScript from running on sites the device browses.
Web Inspector: This exposes the device in the Develop menu of Safari on a computer connected to the device.
If the Web Inspector option has been disabled, you can enable it again from the Advanced preferences of Safari.

View Website Data On Devices

Browser security is an important aspect of any operating system.

Predictive search and spotlight

The final aspect of securing the settings on an iOS device that we'll cover in this article is predictive search and Spotlight. When you use the Spotlight feature in iOS, usage data is sent to Apple along with information from Location Services, and that data is then used to generate future searches. Additionally, you can search for anything on a device, including items previously blocked from being accessed; the ability to search for blocked content is what warrants Spotlight's inclusion when locking down a device. This feature can be disabled by opening the Settings app, tapping on Privacy, then Location Services, and then System Services; simply slide Spotlight Suggestions to Off to stop the location data from going over that connection. To limit the type of data that Spotlight sends, open the Settings app, tap on General, and then on Spotlight Search. Uncheck each item you don't want indexed in the Spotlight database. The following screenshot shows the mentioned options:

Configure What Spotlight Indexes

These were some of the basic tactical tasks that secure devices.

Summary

This article was a whirlwind of quick changes that secure a device. Here, we paired devices, took a backup, set a passcode, and secured app data and Safari. We showed how to manually do some tasks that are otherwise set via policies.

Resources for Article: Further resources on this subject: Creating a Brick Breaking Game [article] New iPad Features in iOS 6 [article] Sparrow iOS Game Framework - The Basics of Our Game [article]

Penetration Testing

Packt
04 Feb 2015
15 min read
In this article by Aamir Lakhani and Joseph Muniz, authors of the book Penetration Testing with Raspberry Pi, we will see the various LAN- and wireless-based attack scenarios, using tools found in Kali Linux that are optimized for a Raspberry Pi. These scenarios include scanning, analyzing and capturing network traffic. (For more resources related to this topic, see here.) The Raspberry Pi has limited performance capabilities due to its size and processing power. It is highly recommended that you test the following techniques in a lab prior to using a Raspberry Pi for a live penetration test. Network scanning Network reconnaissance is typically time-consuming, yet it is the most important step when performing a penetration test. The more you know about your target, the more likely it is that you will find the fastest and easiest path to success. The best practice is starting with reconnaissance methods that do not require you to interact with your target; however, you will need to make contact eventually. Upon making contact, you will need to identify any open ports on a target system as well as map out the environment to which it's connected. Once you breach a system, typically there are other networks that you can scan to gain deeper access to your target's network. One huge advantage of the Raspberry Pi is its size and mobility. Typically, Kali Linux is used from an attack system outside a target's network; however, tools such as PWNIE Express and small systems that run Kali Linux, such as a Raspberry Pi, can be placed inside a network and be remotely accessed. This gives an attacker a system inside the network, bypassing typical perimeter defenses while performing internal reconnaissance. This approach brings the obvious risks of having to physically place the system on the network as well as create a method to communicate with it remotely without being detected; however, if successful, this can be very effective. Let's look at a few popular methods to scan a target network. We'll continue forward assuming that you have established a foothold on a network and now want to understand the current environment that you have connected to. Nmap The most popular open source tool used to scan hosts and services on a network is Nmap (short for Network Mapper). Nmap's advanced features can detect different applications running on systems as well as offer services such as the OS fingerprinting features. Nmap can be very effective; however, it can also be easily detected unless used properly. We recommend using Nmap in very specific situations to avoid triggering a target's defense systems. For more information on how to use Nmap, visit http://nmap.org/. To use Nmap to scan a local network, open a terminal window and type nmap (target), for example, nmap www.somewebsite.com or nmap 192.168.1.2. There are many other commands that can be used to tune your scan. For example, you can tune how stealthy you want to be or specify to store the results in a particular location. The following screenshot shows the results after running Nmap against www.thesecurityblogger.com. Note that this is an example and is considered a noisy scan. If you simply type in either of the preceding two commands, it is most likely that your target will easily recognize that you are performing an Nmap scan. There are plenty of online resources available to learn how to master the various features for Nmap. 
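As a slightly less noisy starting point than the plain scans shown above, Nmap's options can be combined into something more deliberate. The following one-liner is only an illustration; the target address, ports, and output filename are placeholders you would replace for your own engagement:

    # SYN scan of a few common ports, skip host discovery, slow the timing
    # down, and keep a copy of the results for later analysis (run as root)
    sudo nmap -sS -Pn -T2 -p 22,80,443 -oN target-scan.txt 192.168.1.2

Slower timing templates such as -T2 trade speed for a lower chance of tripping intrusion detection, which matters more than usual when the scanner is a hidden Raspberry Pi you would rather not have discovered.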
Here is a reference list of popular Nmap commands:

nmap 192.168.1.0/24: This scans the entire class C range
nmap -p <port ranges>: This scans specific ports
nmap -sP 192.168.1.0/24: This scans the network to find the servers and devices that are up and running
nmap --iflist: This shows host interfaces and routes
nmap -sV 192.168.1.1: This detects remote services' version numbers
nmap -sS 192.168.1.1: This performs a stealthy TCP SYN scan
nmap -sO 192.168.1.1: This scans for the IP protocol
nmap 192.168.1.1 > output.txt: This saves the output from the scan to a text file
nmap -sA 192.168.1.254: This checks whether the host is protected by a firewall
nmap -Pn 192.168.1.1: This scans the host when it is protected by a firewall
nmap --reason 192.168.1.1: This displays the reason a port is in a particular state
nmap --open 192.168.1.1: This only shows open or possibly open ports

The Nmap GUI software Zenmap is not included in the Kali Linux ARM image. It is also not recommended over the command line when running Kali Linux on a Raspberry Pi.

Wireless security

Another attack vector that can be leveraged on a Raspberry Pi with a Wi-Fi adapter is targeting wireless devices such as mobile tablets and laptops. Scanning wireless networks, once you are connected, is similar to scanning a LAN; however, typically a layer of password decryption is required before you can connect to a wireless network. Also, the wireless network identifier, known as the Service Set Identifier (SSID), might not be broadcast, but it will still be visible when you use the right tools. This section will cover how to bypass wireless onboarding defenses so that you can access a target's Wi-Fi network and perform the penetration testing steps. Looking at a Raspberry Pi with Kali Linux, one of the use cases is hiding the system inside or near a target's network and launching wireless attacks remotely. The goal will be to enable the Raspberry Pi to access the network wirelessly and provide a remote connection back to the attacker. The attacker can be nearby, using wireless to control the Raspberry Pi until it gains wireless access. Once on the network, a backdoor can be established so that the attacker can communicate with the Raspberry Pi from anywhere in the world and launch attacks.

Cracking WPA/WPA2

A commonly found security protocol for protecting wireless networks is Wi-Fi Protected Access (WPA). WPA was later replaced by WPA2, and that is probably what you will be up against when you perform a wireless penetration test. WPA and WPA2 can be cracked with Aircrack. Kali Linux includes the Aircrack suite, which is one of the most popular applications for breaking wireless security. Aircrack works by gathering packets seen on a wireless connection to either mathematically analyze the data and crack weaker protocols such as Wired Equivalent Privacy (WEP), or brute force the captured data with a wordlist. Cracking WPA/WPA2 is possible due to a weakness in the four-way handshake between the client and the access point. In summary, a client authenticates to an access point and goes through a four-step process. This is when the attacker is able to capture the handshake and use a brute force approach to identify the password. The time-consuming part depends on how unique the network password is, how extensive the wordlist that will be used to brute force the password is, and the processing power of the system.
Unfortunately, the Raspberry Pi lacks the processing power and the hard drive space to accommodate large wordlist files. So, you might have to crack the password off-box with a tool such as John the Ripper. We recommend this route for most WPA2 hacking attempts. Here is the process to crack a WPA key on a Linksys WRVS4400N wireless router using only the Raspberry Pi's on-box options. We are using a WPA example so that the time-consuming part can be accomplished quickly with a Raspberry Pi. Most WPA2 cracking examples would take a very long time to run from a Raspberry Pi; however, the steps to follow are the same when running on a faster off-box system. The steps are as follows:

1. Start by opening a terminal and typing airmon-ng. From the output, we need to select the desired interface to use for the attack. In the previous screenshot, wlan0 is my Wi-Fi adapter: a USB wireless adapter that has been plugged into my Raspberry Pi.

2. It is recommended that you hide your MAC address while cracking a foreign wireless network. Kali Linux ARM does not come with the program macchanger, so you should install it by using the sudo apt-get install macchanger command in a terminal window. There are other ways to change your MAC address, but macchanger can provide a spoofed MAC so that your device looks like a common network device, such as a printer, which can be an effective way to avoid detection.

3. Next, we need to stop the interface used for the attack so that we can change our MAC address. For this example, we will be stopping wlan0 using the following commands:

   airmon-ng stop wlan0
   ifconfig wlan0 down

4. Now, let's change the MAC address of this interface to hide our true identity. Use macchanger to change your MAC to a random value and specify your interface. There are options to make the adapter look like another type of device; however, for this example, we will just leave it as a random MAC address, using the following command:

   macchanger -r wlan0

   Our random value is b0:43:3a:1f:3a:05 in the following screenshot. macchanger shows our new MAC as unknown.

5. Now that our MAC is spoofed, let's restart airmon-ng with the following command:

   airmon-ng start wlan0

6. We need to locate the available wireless networks so that we can pick our target to attack. Use the following command to do this:

   airodump-ng wlan0

   You should now see networks within range of your Raspberry Pi that can be targeted for this attack. To stop the search once you identify a target, press Ctrl + C. You should write down the MAC address, also known as the BSSID, and the channel, also known as CH, used by your target network. The following screenshot shows that our target with the ESSID HackMePlease is running WPA on CH 6:

7. The next step is running airodump-ng against the MAC address that you just copied. You will need the following things to make this work: the channel being used by the target, the MAC address (BSSID) that you copied, and a name for the file in which to save your data. Let's run the airodump-ng command in the following manner:

   airodump-ng -c [channel number] -w [name of file] --bssid [target BSSID] wlan0

   This will open a new terminal window after you execute it. Keep that window open, and open another terminal window that will be used to connect to the target's wireless network.
8. We will run aireplay-ng using the following command:

   aireplay-ng --deauth 1 -a [target's BSSID] -c [our MAC address] [interface]

   For our example, the command will look like the following:

   aireplay-ng --deauth 1 -a 00:1C:10:F6:04:C3 -c 00:0f:56:bc:2c:d1 wlan0

   The following screenshot shows the launch of the preceding command:

   You may not get the full handshake when you run this command. If that happens, you will have to wait for a live user to authenticate to the access point prior to launching the attack. The output from Aircrack may show you something like Opening [file].cap a few times, followed by No valid WPA handshakes found, if you didn't capture a full handshake and nobody has authenticated by that time. Do not proceed to the next step until you capture a full handshake.

9. The last step is to run Aircrack against the captured data to crack the WPA key. Use the -w option to specify the location of a wordlist that will be used against the captured data. You will use the .cap file that was created by the airodump-ng capture in step 7; in our example, the capture file is named wirelessattack.cap. We'll do this using the following command:

   aircrack-ng -w ./wordlist.lst wirelessattack.cap

The Kali Linux ARM image does not include a wordlist.lst file for cracking passwords. Usually, default wordlists are not good anyway, so it is recommended that you use Google to find an extensive wordlist (see the next section on wordlists for more information). Make sure to be mindful of the hard drive space that you have on the Raspberry Pi, as many wordlists might be too large to be used directly from the Raspberry Pi. The best practice for running process-intensive steps such as brute forcing passwords is to do them off-box on a more powerful system.

You will see Aircrack start and begin trying each password in the wordlist file against the captured data. This process could take a while, depending on the password you are trying to break, the number of words in your list, and the processing speed of the Raspberry Pi. We found that it ranges from a few hours to days, as it's a very tedious process and is possibly better suited to an external system with more horsepower than a Raspberry Pi. You may also find that your wordlist doesn't work even after waiting a few days to sort through the entire wordlist file.

If Aircrack doesn't open and start trying keys against the password, you either didn't specify the correct location of the .cap file or the wordlist.lst file, or you don't have the captured handshake data. By default, the previous steps store files in the root directory. You can move your wordlist file into the root directory to mimic how we ran the commands in the previous steps, since all our files are located in the root directory folder. You can verify this by typing ls to list the current directory's files. Make sure that you list the correct directory for each file that is called by each command. If your attack is successful, you should see something like the following screenshot, which shows the identified password as sunshine:

It is a good idea to perform this last step on a remote machine. You can set up an FTP server and push your .cap files to that FTP server. You can learn more about setting up an FTP server at http://www.raspberrypi.org/forums/viewtopic.php?f=36&t=35661.

Creating wordlists

There are many sources and tools that can be used to develop a wordlist for your attack. One popular tool, called Custom Wordlist Generator (CeWL), allows you to create your own custom dictionary file.
This can be extremely useful if you are targeting individuals and want to scrape their blogs, LinkedIn, or other websites for commonly used words. CeWL doesn't come preinstalled on the Kali Linux ARM image, so you will have to install it using apt-get install cewl. To use CeWL, open a terminal window and point it at your target website. CeWL will examine the URL and create a wordlist based on all the unique words it finds. In the following example, we are creating a wordlist of commonly used words found on the security blog www.drchaos.com using the following command:

cewl www.drchaos.com -w drchaospasswords.txt

The following screenshot shows the launch of the preceding command:

You can also find many examples of popular wordlists used as dictionary files on the Internet. Here are a few example wordlist sources that you can use; however, be sure to research Google for other options as well:

https://crackstation.net/buy-crackstation-wordlist-password-cracking-dictionary.html
https://wiki.skullsecurity.org/Passwords

Here is a dictionary that one of the coauthors put together: http://www.drchaos.com/public_files/chaos-dictionary.lst.txt

Capturing traffic on the network

It is great to get access to a target network. However, typically the next step, once a foothold is established, is to start looking at the data. To do this, you will need a method to capture and view network packets. This means turning your Raspberry Pi into a remotely accessible network tap. Many of these tools could overload and crash your Raspberry Pi, so look out for our recommendations regarding when to use a tuning method to avoid this from happening.

Tcpdump

Tcpdump is a command line based packet analyzer. You can use tcpdump to intercept and display TCP/IP and other packets that are transmitted or received over a network to which the Raspberry Pi is attached. This means the Raspberry Pi must have access to the network traffic that you intend to view, or using tcpdump won't provide you with any useful data. Tcpdump is not installed with the default Kali Linux ARM image, so you will have to install it using the sudo apt-get install tcpdump command. Once installed, you can run tcpdump by simply opening a terminal window and typing sudo tcpdump. The following screenshot shows the traffic flow visible to us after the launch of the preceding command:

As the previous screenshot shows, there really isn't much to see if you don't have the proper traffic flowing through the Raspberry Pi. Basically, we're seeing our own traffic while being plugged into an 802.1X-enabled switch, which isn't interesting. Let's look at how to get other systems' data through your Raspberry Pi. Running tcpdump consumes a lot of the Raspberry Pi's processing power. We found that this could crash the Raspberry Pi by itself or while using it with other applications. We recommend that you tune your data capture to avoid this from happening.

Man-in-the-middle attacks

One common method to capture sensitive information is by performing a man-in-the-middle attack. By definition, a man-in-the-middle attack is when an attacker makes independent connections with victims while actively eavesdropping on the communication, typically placing themselves between a host and the systems it communicates with. For example, a popular method to capture passwords is to act as a middleman between a user and the web server that the user's login credentials are being passed to.

Summary

This article introduced us to the various attack scenarios of penetration testing used with the tools available in Kali Linux, over a Raspberry Pi.
This article also gave us detailed description of tools like Nmap, CeWL, and tcpdump, which are used for network scanning, creating wordlists, and analyzing network traffic respectively. Resources for Article: Further resources on this subject: Testing Your Speed [Article] Creating a 3D world to roam in [Article] Making the Unit Very Mobile – Controlling the Movement of a Robot with Legs [Article]

Overview of Data Protection Manager 2010

Packt
30 May 2011
9 min read
Microsoft Data Protection Manager 2010: A practical step-by-step guide to planning deployment, installation, configuration, and troubleshooting of Data Protection Manager 2010

(For more resources on Microsoft, see here.)

DPM structure

In this section we will look at the DPM file structure in order to have a better understanding of where DPM stores its components. We will also look at the important processes that DPM runs and what they are used for. There will be some hints and tips along the way that will be useful when administering DPM.

DPM file locations

It is important to know not only how DPM operates, but also the structure that is underneath the application. Understanding where the DPM components live will help you with administering and troubleshooting DPM if the need arises. The following are some important locations:

The DPM database backups are stored in the following location. This is also where the backup shadow copies of your replicas are stored; you would make backup shadow copies of your replicas if you were archiving them using a third-party backup solution:
C:\Program Files\Microsoft DPM\DPM\Volumes\ShadowCopy\Database Backups

The following directory is where DPM is installed:
C:\Program Files\Microsoft DPM

The following directory contains the PowerShell scripts that come with DPM. There are many scripts that can be used for performing common DPM tasks:
C:\Program Files\Microsoft DPM\DPM\bin

The following folder contains the database and files for SQL Reporting Services:
C:\Program Files\Microsoft DPM\SQL

The following directory contains the SQL DPM database (.MDF and .LDF files):
C:\Program Files\Microsoft DPM\DPM\DPMDB

The following directory stores the shadow copy volumes that hold recovery points for a data source. These are essentially the changed blocks tracked by VSS (Volume Shadow Copy Service) shadow copies:
C:\Program Files\Microsoft DPM\DPM\Volumes\DiffArea

The following folder contains the mounted replica volumes. Mounted replica volumes are essentially pointers, one for every protected data object, that point to the partition in a DPM storage pool. Think of these mounted replica points as a map from DPM to the protected data on the hard drives where the actual protected data lives:
C:\Program Files\Microsoft DPM\DPM\Volumes\Replica

DPM processes

We are now going to explore the DPM processes. The executable files for these are all located in C:\Program Files\Microsoft DPM\DPM\bin. You can view these processes in Windows Task Manager, and they show up in Windows Services as well. The following screenshot shows the DPM services as they appear in Windows Services:

We will look at what each of these processes is and what it does. We will also look at the processes that have an impact on the performance of your DPM server. The processes are as follows:

DPMAMService.exe: In Windows Services this is listed as the DPM AccessManager Service. It manages access to DPM.
DpmWriter.exe: This is a service as well, so you will see it in the services list. This service is used for archiving. It manages the backup shadow copies of replicas, backups of report databases, as well as DPM backups.
Msdpm.exe: The DPM service is the core component of DPM. It manages all core DPM operations, including replica creation, synchronization, and recovery point creation. This service implements and manages synchronization and shadow copy creation for protected file servers.
DPMLA.exe: This is the DPM Library Agent Service.
DPMRA.exe: This is the DPM Replication Agent. It helps to back up and recover file and application data to DPM.
Dpmac.exe: This is known as the DPM Agent Coordinator service. It manages the installation, uninstallation, and upgrade of DPM protection agents on the remote computers that you need to protect.

DPM processes that impact DPM performance

The Msdpm.exe, MsDpmProtectionAgent.exe, Microsoft$DPM$Acct.exe, and mmc.exe processes take a toll on DPM performance.

mmc.exe is a standard Windows process. MMC stands for Microsoft Management Console, an application used to display various management plug-ins. Not all, but a good number of, Microsoft server applications run in the MMC, such as Exchange, ISA, IIS, System Center, and the Microsoft Server Manager. The DPM Administrator Console runs in an MMC as well. mmc.exe can cause high memory usage; the best way to ensure that this process does not overload your memory is to close the DPM Administrator Console when you are not using it.

MsDpmProtectionAgent.exe is the DPM Protection Agent service and affects both CPU and memory usage when DPM jobs and consistency checks are run. There is nothing you can do to get the usage down for this service. You just need to be aware of it and try not to schedule other resource-intensive applications, such as antivirus scans, at the same time as DPM jobs or consistency checks.

Msdpm.exe is the service that runs synchronizations and shadow copy creations, as stated previously. Like MsDpmProtectionAgent.exe, Msdpm.exe also affects CPU and memory usage when running synchronizations and shadow copies, and, like MsDpmProtectionAgent.exe, there is nothing you can do to the Msdpm.exe service to reduce memory and CPU usage. Just make sure to keep the system clear of resource-intensive applications when Msdpm.exe is running jobs.

If you are running a local SQL instance for your DPM deployment, you will notice a Microsoft$DPM$Acct.exe process. The SQL Server and SQL Agent services use a Microsoft$DPM$Acct account, and this process normally runs at a high level. It reserves part of your system's memory for cache; if the system memory runs low, the Microsoft$DPM$Acct.exe process will let go of the memory cache it has reserved.

Important DPM terms

In this section you will learn some important terms that are used commonly in DPM. You will need to understand these terms as you begin to administer DPM on a regular basis. You can read the full list of terms at this site: http://technet.microsoft.com/en-us/library/bb795543.aspx

We have grouped the terms so that each group relates to an area of DPM. The following are some important terms:

Bare metal recovery: This is a restore technique that allows you to restore a complete system onto bare metal, without any requirement for the previous hardware. This allows restoring to dissimilar hardware.
Change journal: A feature that tracks changes to NTFS (New Technology File System) volumes, including additions, deletions, and modifications. The change journal exists on the volume as a sparse file. Sparse files are used to make disk space usage more efficient in NTFS: a sparse file allocates disk space only when it is needed, which allows files to be created even when there is insufficient space on a hard drive, and its unallocated regions read back as zeroes without occupying disk blocks.
Consistency check: The process by which DPM checks for and corrects inconsistencies between a protected data source and its replica.
A consistency check is only performed when normal mechanisms for recording changes to protected data, and for applying those changes to replicas, have been interrupted. Express full backup: A synchronization operation in which the protection agent transfers a snapshot of all the blocks that have changed since the previous express full backup (or initial replica creation, for the first express full backup). Shadow copy: A point-in-time copy of files and folders that is stored on the DPM server. Shadow copies are sometimes referred to as snapshots. Shadow copy client software: Client software that enables an end-user to independently recover data by retrieving a shadow copy. Replica: A complete copy of the protected data on a single volume, database, or storage group. Each member of a protection group is associated with a replica on the DPM server. Replica creation: The process by which a full copy of data sources, selected for inclusion in a protection group, is transferred to the DPM storage pool. The replica can be created over the network from data on the protected computer or from a tape backup system. Replica creation is an initialization process that is performed for each data source when the data source is added to a protection group. Replica volume: A volume on the DPM server that contains the replica for a protected data source. Custom volume: A volume that is not in the DPM storage pool and is specified to store the replica and recovery points for a protection group member. Dismount: To remove a removable tape or disc from a drive. DPM Alerts log: A log that stores DPM alerts as Windows events so that the alerts can be displayed in Microsoft System Center Operations Manager (SCOM). DPMDB.mdf: The filename of the DPM database, the SQL Server database that stores DPM settings and configuration information. DPMDBReaders group: A group, created during DPM installation, that contains all accounts that have read-only access to the DPM database. The DPMReport account is a member of this group. DPMReport account: The account that the Web and NT services of SQL Server Reporting Services use to access the DPM database. This account is created when an administrator configures DPM reporting. MICROSOFT$DPM$: The name that the DPM setup assigns to the SQL Server instance used by DPM. Microsoft$DPMWriter$ account: The low-privilege account under which DPM runs the DPM Writer service. This account is created during the DPM installation. MSDPMTrustedMachines group: A group that contains the domain accounts for computers that are authorized to communicate with the DPM server. DPM uses this group to ensure that only computers that have the DPM protection agent installed from a specific DPM server can respond to calls from that server. Protection configuration: The collection of settings that is common to a protection group; specifically, the protection group name, disk allocations, replica creation method, and on-the-wire compression. Protection group: A collection of data sources that share the same protection configuration. Protection group member: A data source within a protection group. Protected computer: A computer that contains data sources that are protection group members. Synchronization: The process by which DPM transfers changes from the protected computer to the DPM server, and applies the changes to the replica of the protected volume. Recovery goals: The retention range, data loss tolerance, and frequency of recovery points for protected data. 
Recovery collection: The aggregate of all recovery jobs associated with a single recovery operation. Recovery point: The date and time of a previous version of a data source that is available for recovery from media that is managed by DPM. Report database: The SQL Server database that stores DPM reporting information (ReportServer.mdf). ReportServer.mdf: In DPM, the filename for the report database—a SQL Server database that stores reporting information. Retention range: Duration of time for which the data should be available for recovery.
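To tie the file locations, services, and terms described above to something hands-on, the following is a minimal PowerShell sketch you could run on a DPM 2010 server. The Get-Service call is standard PowerShell; the Get-ProtectionGroup and Get-Datasource cmdlets and the FriendlyName property are quoted from memory of the DPM Management Shell, so treat them as assumptions and confirm them with Get-Command *ProtectionGroup* on your own server first.

    # List the DPM-related Windows services described earlier and their state
    Get-Service | Where-Object { $_.DisplayName -like "*DPM*" } |
        Format-Table Name, DisplayName, Status -AutoSize

    # Assumed DPM Management Shell cmdlets: enumerate protection groups and
    # the data sources (protection group members) inside each one
    $pgs = Get-ProtectionGroup -DPMServerName $env:COMPUTERNAME
    foreach ($pg in $pgs) {
        "Protection group: " + $pg.FriendlyName
        Get-Datasource -ProtectionGroup $pg | Format-Table -AutoSize
    }

Run from the DPM Management Shell (not a plain PowerShell window), this gives a quick inventory of what the Msdpm.exe service is actually protecting.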

Ruby and Metasploit Modules

Packt
23 May 2014
11 min read
(For more resources related to this topic, see here.)

Reinventing Metasploit

Consider a scenario where the systems under the scope of the penetration test are very large in number, and we need to perform a post-exploitation function such as downloading a particular file from all the systems after exploiting them. Downloading a particular file from each system manually will consume a lot of time and will be tiring as well. Therefore, in a scenario like this, we can create a custom post-exploitation script that will automatically download a file from all the systems that are compromised.

This article focuses on building programming skill sets for Metasploit modules. It kicks off with the basics of Ruby programming and ends with developing various Metasploit modules. In this article, we will cover the following points:

Understanding the basics of Ruby programming
Writing programs in Ruby
Exploring modules in Metasploit
Writing your own modules and post-exploitation modules

Let's now understand the basics of Ruby programming and gather the required essentials we need to code Metasploit modules. Before we delve deeper into coding Metasploit modules, we must know the core features of Ruby programming that are required in order to design these modules. However, why do we require Ruby for Metasploit? The following key points will help us understand the answer to this question:

Constructing an automated class for reusable code is a feature of the Ruby language that matches the needs of Metasploit
Ruby is an object-oriented programming language
Ruby is an interpreter-based language that is fast and consumes less development time
Metasploit was earlier written in Perl, which did not offer the same support for code reuse

Ruby – the heart of Metasploit

Ruby is indeed the heart of the Metasploit framework. However, what exactly is Ruby? According to the official website, Ruby is a simple and powerful programming language, designed by Yukihiro Matsumoto in 1995. It is further defined as a dynamic, reflective, general-purpose object-oriented programming language with functionality similar to Perl.

You can download Ruby for Windows/Linux from http://rubyinstaller.org/downloads/. You can refer to an excellent resource for learning Ruby practically at http://tryruby.org/levels/1/challenges/0.

Creating your first Ruby program

Ruby is an easy-to-learn programming language. Now, let's start with the basics of Ruby. However, remember that Ruby is a vast programming language, and covering all of its capabilities would push us beyond the scope of this article. Therefore, we will stick to the essentials that are required for designing Metasploit modules.

Interacting with the Ruby shell

Ruby offers an interactive shell, and working on it will help us understand the basics of Ruby clearly. So, let's get started. Open your CMD/terminal and type irb in it to launch the Ruby interactive shell. Let's input something into the Ruby shell and see what happens; suppose I type in the number 2 as follows:

irb(main):001:0> 2
=> 2

The shell throws back the value. Now, let's give another input, such as an addition operation, as follows:

irb(main):002:0> 2+3
=> 5

We can see that if we input numbers using an expression style, the shell gives us back the result of the expression.
Let's perform some functions on the string, such as storing the value of a string in a variable, as follows: irb(main):005:0> a= "nipun" => "nipun" irb(main):006:0> b= "loves metasploit" => "loves metasploit" After assigning values to the variables a and b, let's see what the shell response will be when we write a and a+b on the shell's console: irb(main):014:0> a => "nipun" irb(main):015:0> a+b => "nipunloves metasploit" We can see that when we typed in a as an input, it reflected the value stored in the variable named a. Similarly, a+b gave us back the concatenated result of variables a and b. Defining methods in the shell A method or function is a set of statements that will execute when we make a call to it. We can declare methods easily in Ruby's interactive shell, or we can declare them using the script as well. Methods are an important aspect when working with Metasploit modules. Let's see the syntax: def method_name [( [arg [= default]]...[, * arg [, &expr ]])] expr end To define a method, we use def followed by the method name, with arguments and expressions in parentheses. We also use an end statement following all the expressions to set an end to the method definition. Here, arg refers to the arguments that a method receives. In addition, expr refers to the expressions that a method receives or calculates inline. Let's have a look at an example: irb(main):001:0> def week2day(week) irb(main):002:1> week=week*7 irb(main):003:1> puts(week) irb(main):004:1> end => nil We defined a method named week2day that receives an argument named week. Further more, we multiplied the received argument with 7 and printed out the result using the puts function. Let's call this function with an argument with 4 as the value: irb(main):005:0> week2day(4) 28 => nil We can see our function printing out the correct value by performing the multiplication operation. Ruby offers two different functions to print the output: puts and print. However, when it comes to the Metasploit framework, the print_line function is used. Variables and data types in Ruby A variable is a placeholder for values that can change at any given time. In Ruby, we declare a variable only when we need to use it. Ruby supports numerous variables' data types, but we will only discuss those that are relevant to Metasploit. Let's see what they are. Working with strings Strings are objects that represent a stream or sequence of characters. In Ruby, we can assign a string value to a variable with ease as seen in the previous example. By simply defining the value in quotation marks or a single quotation mark, we can assign a value to a string. It is recommended to use double quotation marks because if single quotations are used, it can create problems. Let's have a look at the problem that may arise: irb(main):005:0> name = 'Msf Book' => "Msf Book" irb(main):006:0> name = 'Msf's Book' irb(main):007:0' ' We can see that when we used a single quotation mark, it worked. However, when we tried to put Msf's instead of the value Msf, an error occurred. This is because it read the single quotation mark in the Msf's string as the end of single quotations, which is not the case; this situation caused a syntax-based error. The split function We can split the value of a string into a number of consecutive variables using the split function. 
Let's have a look at a quick example that demonstrates this: irb(main):011:0> name = "nipun jaswal" => "nipun jaswal" irb(main):012:0> name,surname=name.split(' ') => ["nipun", "jaswal"] irb(main):013:0> name => "nipun" irb(main):014:0> surname => "jaswal" Here, we have split the value of the entire string into two consecutive strings, name and surname by using the split function. However, this function split the entire string into two strings by considering the space to be the split's position. The squeeze function The squeeze function removes extra spaces from the given string, as shown in the following code snippet: irb(main):016:0> name = "Nipun Jaswal" => "Nipun Jaswal" irb(main):017:0> name.squeeze => "Nipun Jaswal" Numbers and conversions in Ruby We can use numbers directly in arithmetic operations. However, remember to convert a string into an integer when working on user input using the .to_i function. Simultaneously, we can convert an integer number into a string using the .to_s function. Let's have a look at some quick examples and their output: irb(main):006:0> b="55" => "55" irb(main):007:0> b+10 TypeError: no implicit conversion of Fixnum into String from (irb):7:in `+' from (irb):7 from C:/Ruby200/bin/irb:12:in `<main>' irb(main):008:0> b.to_i+10 => 65 irb(main):009:0> a=10 => 10 irb(main):010:0> b="hello" => "hello" irb(main):011:0> a+b TypeError: String can't be coerced into Fixnum from (irb):11:in `+' from (irb):11 from C:/Ruby200/bin/irb:12:in `<main>' irb(main):012:0> a.to_s+b => "10hello" We can see that when we assigned a value to b in quotation marks, it was considered as a string, and an error was generated while performing the addition operation. Nevertheless, as soon as we used the to_i function, it converted the value from a string into an integer variable, and addition was performed successfully. Similarly, with regards to strings, when we tried to concatenate an integer with a string, an error showed up. However, after the conversion, it worked. Ranges in Ruby Ranges are important aspects and are widely used in auxiliary modules such as scanners and fuzzers in Metasploit. Let's define a range and look at the various operations we can perform on this data type: irb(main):028:0> zero_to_nine= 0..9 => 0..9 irb(main):031:0> zero_to_nine.include?(4) => true irb(main):032:0> zero_to_nine.include?(11) => false irb(main):002:0> zero_to_nine.each{|zero_to_nine| print(zero_to_nine)} 0123456789=> 0..9 irb(main):003:0> zero_to_nine.min => 0 irb(main):004:0> zero_to_nine.max => 9 We can see that a range offers various operations such as searching, finding the minimum and maximum values, and displaying all the data in a range. Here, the include? function checks whether the value is contained in the range or not. In addition, the min and max functions display the lowest and highest values in a range. Arrays in Ruby We can simply define arrays as a list of various values. Let's have a look at an example: irb(main):005:0> name = ["nipun","james"] => ["nipun", "james"] irb(main):006:0> name[0] => "nipun" irb(main):007:0> name[1] => "james" So, up to this point, we have covered all the required variables and data types that we will need for writing Metasploit modules. 
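Before moving on, here is a small standalone Ruby script (plain Ruby, not a Metasploit module) that ties the pieces above together: strings, split, arrays, ranges, and type conversion. The names and values are made up purely for illustration; you can paste it into a file or into irb as-is.

    # Combine split, arrays, ranges, and type conversion in one place
    full_name = "nipun jaswal"
    first, last = full_name.split(' ')      # split the string into two variables

    ports = ["22", "80", "8080", "443"].map { |p| p.to_i }   # strings to integers
    well_known = 0..1023                                     # a range of port numbers

    interesting = ports.select { |p| well_known.include?(p) }

    puts "#{last.capitalize}, #{first.capitalize}"
    puts "Well-known ports: #{interesting.join(', ')}"
    # Prints:
    #   Jaswal, Nipun
    #   Well-known ports: 22, 80, 443

Nothing here is Metasploit-specific yet, but the same idioms (splitting input, filtering with ranges, joining results into a printable string) are exactly what auxiliary modules lean on.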
For more information on variables and data types, refer to the following link: http://www.tutorialspoint.com/ruby/ Refer to a quick cheat sheet for using Ruby programming effectively at the following links: https://github.com/savini/cheatsheets/raw/master/ruby/RubyCheat.pdf http://hyperpolyglot.org/scripting Methods in Ruby A method is another name for a function. Programmers with a different background than Ruby might use these terms interchangeably. A method is a subroutine that performs a specific operation. The use of methods implements the reuse of code and decreases the length of programs significantly. Defining a method is easy, and their definition starts with the def keyword and ends with the end statement. Let's consider a simple program to understand their working, for example, printing out the square of 50: def print_data(par1) square = par1*par1 return square end answer=print_data(50) print(answer) The print_data method receives the parameter sent from the main function, multiplies it with itself, and sends it back using the return statement. The program saves this returned value in a variable named answer and prints the value. Decision-making operators Decision making is also a simple concept as with any other programming language. Let's have a look at an example: irb(main):001:0> 1 > 2 => false irb(main):002:0> 1 < 2 => true Let's also consider the case of string data: irb(main):005:0> "Nipun" == "nipun" => false irb(main):006:0> "Nipun" == "Nipun" => true Let's consider a simple program with decision-making operators: #Main num = gets num1 = num.to_i decision(num1) #Function def decision(par1) print(par1) par1= par1 if(par1%2==0) print("Number is Even") else print("Number is Odd") end end We ask the user to enter a number and store it in a variable named num using gets. However, gets will save the user input in the form of a string. So, let's first change its data type to an integer using the to_i method and store it in a different variable named num1. Next, we pass this value as an argument to the method named decision and check whether the number is divisible by two. If the remainder is equal to zero, it is concluded that the number is divisible by true, which is why the if block is executed; if the condition is not met, the else block is executed. The output of the preceding program will be something similar to the following screenshot when executed in a Windows-based environment:
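One small note on the preceding listing: read top to bottom as printed, it calls decision(num1) before Ruby has seen the method definition, and it carries a stray par1 = par1 line. If you want to run it as a single script, a minimal cleaned-up rearrangement (my sketch, not the book's original code) looks like this:

    # Define the method first, then read input, convert it, and call the method
    def decision(par1)
      print(par1)
      if par1 % 2 == 0
        print(" Number is Even\n")
      else
        print(" Number is Odd\n")
      end
    end

    num  = gets       # user input arrives as a string, for example "7\n"
    num1 = num.to_i   # convert it to an integer before the modulo check
    decision(num1)

The logic is unchanged; only the ordering and the stray assignment differ from the printed version.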

Booting the System

Packt
27 Feb 2015
12 min read
In this article by William Confer and William Roberts, author of the book, Exploring SE for Android, we will learn once we have an SE for Android system, we need to see how we can make use of it, and get it into a usable state. In this article, we will: Modify the log level to gain more details while debugging Follow the boot process relative to the policy loader Investigate SELinux APIs and SELinuxFS Correct issues with the maximum policy version number Apply patches to load and verify an NSA policy (For more resources related to this topic, see here.) You might have noticed some disturbing error messages in dmesg. To refresh your memory, here are some of them: # dmesg | grep –i selinux <6>SELinux: Initializing. <7>SELinux: Starting in permissive mode <7>SELinux: Registering netfilter hooks <3>SELinux: policydb version 26 does not match my version range 15-23 ... It would appear that even though SELinux is enabled, we don't quite have an error-free system. At this point, we need to understand what causes this error, and what we can do to rectify it. At the end of this article, we should be able to identify the boot process of an SE for Android device with respect to policy loading, and how that policy is loaded into the kernel. We will then address the policy version error. Policy load An Android device follows a boot sequence similar to that of the *NIX booting sequence. The boot loader boots the kernel, and the kernel finally executes the init process. The init process is responsible for managing the boot process of the device through init scripts and some hard coded logic in the daemon. Like all processes, init has an entry point at the main function. This is where the first userspace process begins. The code can be found by navigating to system/core/init/init.c. When the init process enters main (refer to the following code excerpt), it processes cmdline, mounts some tmpfs filesystems such as /dev, and some pseudo-filesystems such as procfs. For SE for Android devices, init was modified to load the policy into the kernel as early in the boot process as possible. The policy in an SELinux system is not built into the kernel; it resides in a separate file. In Android, the only filesystem mounted in early boot is the root filesystem, a ramdisk built into boot.img. The policy can be found in this root filesystem at /sepolicy on the UDOO or target device. At this point, the init process calls a function to load the policy from the disk and sends it to the kernel, as follows: int main(int argc, char *argv[]) { ...   process_kernel_cmdline();   unionselinux_callback cb;   cb.func_log = klog_write;   selinux_set_callback(SELINUX_CB_LOG, cb);     cb.func_audit = audit_callback;   selinux_set_callback(SELINUX_CB_AUDIT, cb);     INFO(“loading selinux policyn”);   if (selinux_enabled) {     if (selinux_android_load_policy() < 0) {       selinux_enabled = 0;       INFO(“SELinux: Disabled due to failed policy loadn”);     } else {       selinux_init_all_handles();     }   } else {     INFO(“SELinux:  Disabled by command line optionn”);   } … In the preceding code, you will notice the very nice log message, SELinux: Disabled due to failed policy load, and wonder why we didn't see this when we ran dmesg before. This code executes before setlevel in init.rc is executed. The default init log level is set by the definition of KLOG_DEFAULT_LEVEL in system/core/include/cutils/klog.h. If we really wanted to, we could change that, rebuild, and actually see that message. 
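Independent of the init log level, it is handy to be able to ask a running device what its kernel actually reports before touching any policy files. The following commands, run from a host with adb access, are a quick sanity check; the /sys/fs/selinux path assumes selinuxfs is mounted there as on the UDOO (older kernels without sysfs expose /selinux instead):

    adb shell getenforce                      # Disabled, Permissive, or Enforcing
    adb shell cat /sys/fs/selinux/policyvers  # highest policy version the kernel accepts
    adb shell dmesg | grep -i selinux         # the policy load messages shown above

The policyvers value is the same number the policy compiler has to stay at or below, which becomes important later when we fix the version mismatch.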
Now that we have identified the initial path of the policy load, let's follow it on its course through the system. The selinux_android_load_policy() function can be found in the Android fork of libselinux, which is in the UDOO Android source tree. The library can be found at external/libselinux, and all of the Android modifications can be found in src/android.c. The function starts by mounting a pseudo-filesystem called SELinuxFS. In systems that do not have sysfs mounted, the mount point is /selinux; on systems that have sysfs mounted, the mount point is /sys/fs/selinux. You can check mountpoints on a running system using the following command: # mount | grep selinuxfs selinuxfs /sys/fs/selinux selinuxfs rw,relatime 0 0 SELinuxFS is an important filesystem as it provides the interface between the kernel and userspace for controlling and manipulating SELinux. As such, it has to be mounted for the policy load to work. The policy load uses the filesystem to send the policy file bytes to the kernel. This happens in the selinux_android_load_policy() function: int selinux_android_load_policy(void) {   char *mnt = SELINUXMNT;   int rc;   rc = mount(SELINUXFS, mnt, SELINUXFS, 0, NULL);   if (rc < 0) {     if (errno == ENODEV) {       /* SELinux not enabled in kernel */       return -1;     }     if (errno == ENOENT) {       /* Fall back to legacy mountpoint. */       mnt = OLDSELINUXMNT;       rc = mkdir(mnt, 0755);       if (rc == -1 && errno != EEXIST) {         selinux_log(SELINUX_ERROR,”SELinux:           Could not mkdir:  %sn”,         strerror(errno));         return -1;       }       rc = mount(SELINUXFS, mnt, SELINUXFS, 0, NULL);     }   }   if (rc < 0) {     selinux_log(SELINUX_ERROR,”SELinux:  Could not mount selinuxfs:  %sn”,     strerror(errno));     return -1;   }   set_selinuxmnt(mnt);     return selinux_android_reload_policy(); } The set_selinuxmnt(car *mnt) function changes a global variable in libselinux so that other routines can find the location of this vital interface. From there it calls another helper function, selinux_android_reload_policy(), which is located in the same libselinux android.c file. It loops through an array of possible policy locations in priority order. This array is defined as follows: Static const char *const sepolicy_file[] = {   “/data/security/current/sepolicy”,   “/sepolicy”,   0 }; Since only the root filesystem is mounted, it chooses /sepolicy at this time. The other path is for dynamic runtime reloads of policy. After acquiring a valid file descriptor to the policy file, the system is memory mapped into its address space, and calls security_load_policy(map, size) to load it to the kernel. This function is defined in load_policy.c. 
Here, the map parameter is the pointer to the beginning of the policy file, and the size parameter is the size of the file in bytes: int selinux_android_reload_policy(void) {   int fd = -1, rc;   struct stat sb;   void *map = NULL;   int i = 0;     while (fd < 0 && sepolicy_file[i]) {     fd = open(sepolicy_file[i], O_RDONLY | O_NOFOLLOW);     i++;   }   if (fd < 0) {     selinux_log(SELINUX_ERROR, “SELinux:  Could not open sepolicy:  %sn”,     strerror(errno));     return -1;   }   if (fstat(fd, &sb) < 0) {     selinux_log(SELINUX_ERROR, “SELinux:  Could not stat %s:  %sn”,     sepolicy_file[i], strerror(errno));     close(fd);     return -1;   }   map = mmap(NULL, sb.st_size, PROT_READ, MAP_PRIVATE, fd, 0);   if (map == MAP_FAILED) {     selinux_log(SELINUX_ERROR, “SELinux:  Could not map %s:  %sn”,     sepolicy_file[i], strerror(errno));     close(fd);     return -1;   }     rc = security_load_policy(map, sb.st_size);   if (rc < 0) {     selinux_log(SELINUX_ERROR, “SELinux:  Could not load policy:  %sn”,     strerror(errno));     munmap(map, sb.st_size);     close(fd);     return -1;   }     munmap(map, sb.st_size);   close(fd);   selinux_log(SELINUX_INFO, “SELinux: Loaded policy from %sn”, sepolicy_file[i]);     return 0; } The security load policy opens the <selinuxmnt>/load file, which in our case is /sys/fs/selinux/load. At this point, the policy is written to the kernel via this pseudo file: int security_load_policy(void *data, size_t len) {   char path[PATH_MAX];   int fd, ret;     if (!selinux_mnt) {     errno = ENOENT;     return -1;   }     snprintf(path, sizeof path, “%s/load”, selinux_mnt);   fd = open(path, O_RDWR);   if (fd < 0)   return -1;     ret = write(fd, data, len);   close(fd);   if (ret < 0)   return -1;   return 0; } Fixing the policy version At this point, we have a clear idea of how the policy is loaded into the kernel. This is very important. SELinux integration with Android began in Android 4.0, so when porting to various forks and fragments, this breaks, and code is often missing. Understanding all parts of the system, however cursory, will help us to correct issues as they appear in the wild and develop. This information is also useful to understand the system as a whole, so when modifications need to be made, you'll know where to look and how things work. At this point, we're ready to correct the policy versions. The logs and kernel config are clear; only policy versions up to 23 are supported, and we're trying to load policy version 26. This will probably be a common problem with Android considering kernels are often out of date. There is also an issue with the 4.3 sepolicy shipped by Google. Some changes by Google made it a bit more difficult to configure devices as they tailored the policy to meet their release goals. Essentially, the policy allows nearly everything and therefore generates very few denial logs. Some domains in the policy are completely permissive via a per-domain permissive statement, and those domains also have rules to allow everything so denial logs do not get generated. To correct this, we can use a more complete policy from the NSA. Replace external/sepolicy with the download from https://bitbucket.org/seandroid/external-sepolicy/get/seandroid-4.3.tar.bz2. After we extract the NSA's policy, we need to correct the policy version. The policy is located in external/sepolicy and is compiled with a tool called check_policy. The Android.mk file for sepolicy will have to pass this version number to the compiler, so we can adjust this here. 
On the top of the file, we find the culprit: ... # Must be <= /selinux/policyvers reported by the Android kernel. # Must be within the compatibility range reported by checkpolicy -V. POLICYVERS ?= 26 ... Since the variable is overridable by the ?= assignment. We can override this in BoardConfig.mk. Edit device/fsl/imx6/BoardConfigCommon.mk, adding the following POLICYVERS line to the bottom of the file: ... BOARD_FLASH_BLOCK_SIZE := 4096 TARGET_RECOVERY_UI_LIB := librecovery_ui_imx # SELinux Settings POLICYVERS := 23 -include device/google/gapps/gapps_config.mk Since the policy is on the boot.img image, build the policy and bootimage: $ mmm -B external/sepolicy/ $ make –j4 bootimage 2>&1 | tee logz !!!!!!!!! WARNING !!!!!!!!! VERIFY BLOCK DEVICE !!!!!!!!! $ sudo chmod 666 /dev/sdd1 $ dd if=$OUT/boot.img of=/dev/sdd1 bs=8192 conv=fsync Eject the SD card, place it into the UDOO, and boot. The first of the preceding commands should produce the following log output: out/host/linux-x86/bin/checkpolicy: writing binary representation (version 23) to out/target/product/udoo/obj/ETC/sepolicy_intermediates/sepolicy At this point, by checking the SELinux logs using dmesg, we can see the following: # dmesg | grep –i selinux <6>init: loading selinux policy <7>SELinux: 128 avtab hash slots, 490 rules. <7>SELinux: 128 avtab hash slots, 490 rules. <7>SELinux: 1 users, 2 roles, 274 types, 0 bools, 1 sens, 1024 cats <7>SELinux: 84 classes, 490 rules <7>SELinux: Completing initialization. Another command we need to run is getenforce. The getenforce command gets the SELinux enforcing status. It can be in one of three states: Disabled: No policy is loaded or there is no kernel support Permissive: Policy is loaded and the device logs denials (but is not in enforcing mode) Enforcing: This state is similar to the permissive state except that policy violations result in EACCESS being returned to userspace One of the goals while booting an SELinux system is to get to the enforcing state. Permissive is used for debugging, as follows: # getenforce Permissive Summary In this article, we covered the important policy load flow through the init process. We also changed the policy version to suit our development efforts and kernel version. From there, we were able to load the NSA policy and verify that the system loaded it. This article additionally showcased some of the SELinux APIs and their interactions with SELinuxFS. Resources for Article: Further resources on this subject: Android And Udoo Home Automation? [article] Sound Recorder For Android [article] Android Virtual Device Manager [article]

Securing OpenStack Networking

Packt
10 Aug 2015
10 min read
In this article by Fabio Alessandro Locati, author of the book OpenStack Cloud Security, you will learn about the importance of firewall, IDS, and IPS. You will also learn about Generic Routing Encapsulation, VXLAN. (For more resources related to this topic, see here.) The importance of firewall, IDS, and IPS The security of a network can and should be achieved in multiple ways. Three components that are critical to the security of a network are: Firewall Intrusion detection system (IDS) Intrusion prevention system (IPS) Firewall Firewalls are systems that control traffic passing through them based on rules. This can seem something like a router, but they are very different. The router allows communication between different networks while the firewall limits communication between networks and hosts. The root of this confusion may occur because very often the router will have the firewall functionality and vice versa. Firewalls need to be connected in a series to your infrastructure. The first paper on the firewall technology appeared in 1988 and designed the packet filter firewall. This kind of firewall is often known as first generation firewall. This kind of firewall analyzes the packages passing through and if the package matches a rule, the firewall will act accordingly to that rule. This firewall will analyze each package by itself and will not consider other aspects such as other packages. It works on the first three layers of the OSI model with very few features using layer 4 specifically to check port numbers and protocols (UDP/TCP). First generation firewalls are still in use, because in a lot of situations, to do the job properly and are cheap and secure. Examples of typical filtering those firewalls prohibit (or allow) to IPs of certain classes (or specific IPs), to access certain IPs, or allow traffic to a specific IP only on specific ports. There are no known attacks to those kind of firewalls, but specific models can have specific bugs that can be exploited. In 1990, a new generation of firewall appeared. The initial name was circuit-level gateway, but today it is far more commonly known as stateful firewalls or second generation firewall. These firewalls are able to understand when connections are being initialized and closed so that the firewall comes to know what is the current state of a connection when a package arrives. To do so, this kind of firewall uses the first four layers of the networking stack. This allows the firewall to drop all packages that are not establishing a new connection or are in an already established connection. These firewalls are very powerful with the TCP protocol because it has states, while they have very small advantages compared to first generation firewalls handling UDP or ICMP packages, since those packages travel with no connection. In these cases, the firewall sets the connection as established; only the first valid package passes through and closes it after the connection times out. Performance-wise, stateful firewall can be faster than packet firewall because if the package is part of an active connection, no further test will be performed against that package. These kinds of firewalls are more susceptible to bugs in their code since reading more about the package makes it easier to exploit. Also, on many devices, it is possible to open connections (with SYN packages) until the firewall is saturated. In such cases, the firewall usually downgrades itself as a simple router allowing all traffic to pass through it. 
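To make the difference between the first two firewall generations concrete, here is a small, illustrative iptables sketch. The addresses and ports are placeholders and this is nowhere near a complete or recommended ruleset; it only shows the contrast between static filtering and stateful filtering.

    # First-generation style: static filtering on addresses, ports, and protocol
    iptables -A INPUT -p tcp -s 10.0.0.0/24 --dport 22 -j ACCEPT

    # Second-generation (stateful) style: let connection tracking decide
    iptables -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
    iptables -A INPUT -m state --state NEW -p tcp --dport 443 -j ACCEPT

    # Everything else is dropped
    iptables -P INPUT DROP

The first rule looks at each packet in isolation; the stateful rules only have to inspect the packets that open a connection, with everything belonging to an established flow accepted cheaply afterwards.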
In 1991, improvements were made to the stateful firewall that allowed it to understand more about the protocol of the packets it was evaluating. Before 1994, firewalls of this kind had major problems, such as working as a proxy that the user had to interact with explicitly. In 1994, the first application firewall as we know it was born, doing all of its work completely transparently. To be able to understand the protocol, this kind of firewall needs to work across all seven layers of the OSI model. As for security, the same considerations that apply to the stateful firewall apply to the application firewall as well.

Intrusion detection system (IDS)

IDSs are systems that monitor network traffic looking for policy violations and malicious traffic. The goal of an IDS is not to block malicious activity, but to log and report it. These systems act in a passive mode, so you will not see any traffic coming from them. This is very important because it makes them invisible to attackers, so you can gather information about an attack without the attacker knowing. IDSs need to be connected in parallel to your infrastructure.

Intrusion prevention system (IPS)

IPSs are sometimes referred to as Intrusion Detection and Prevention Systems (IDPS), since they are IDSs that are also able to fight back against malicious activity. IPSs have a greater ability to act than IDSs: besides reporting, like an IDS, they can also drop malicious packets, reset connections, and block traffic from the offending IP address. IPSs need to be connected in series to your infrastructure.

Generic Routing Encapsulation (GRE)

GRE is a Cisco tunneling protocol that is difficult to position in the OSI model; the best place for it is between layers 2 and 3. Being above layer 2 (where VLANs are), we can use GRE inside a VLAN. We will not go deep into the technicalities of this protocol; I'd like to focus more on the advantages and disadvantages it has over VLAN.

The first advantage of (extended) GRE over VLAN is scalability. You cannot have more than 4,096 VLANs in an environment, while GRE tunnels do not have this limitation. If you are running a private cloud in a small corporation, 4,096 networks could be enough, but they will definitely not be enough if you work for a big corporation or if you are running a public cloud. Also, unless you use VTP for your VLANs, you have to add each VLAN to each network device, while GRE tunnels do not need this.

The second advantage is security. Since you can deploy multiple GRE tunnels in a single VLAN, you can connect a machine to a single VLAN and to multiple GRE networks without the risks that come with putting a port in trunking mode, which is needed to bring multiple VLANs to the same physical port. For these reasons, GRE was a very common choice in a lot of OpenStack clusters deployed up to OpenStack Havana. The current preferred networking choice (since Icehouse) is Virtual Extensible LAN (VXLAN).

VXLAN

VXLAN is a network virtualization technology whose specifications were originally created by Arista Networks, Cisco, and VMware, with many other companies backing the project. Its goal is to offer a standardized overlay encapsulation protocol, and it was created because standard VLANs were too limited for current cloud needs and GRE was a Cisco protocol. It works by carrying layer 2 Ethernet frames within layer 4 UDP packets on port 4789. As for the maximum number of networks, the limit is 16 million logical networks.
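To show where those numbers (UDP port 4789, 16 million networks) come from, here is a minimal Python sketch of VXLAN encapsulation, assuming the RFC 7348 header layout. The socket and VTEP address details are hypothetical placeholders, not part of any OpenStack component.

```python
import socket
import struct

VXLAN_PORT = 4789                   # IANA-assigned UDP port for VXLAN
VXLAN_FLAG_VNI_VALID = 0x08000000   # "I" flag: the VNI field is valid

def vxlan_encapsulate(inner_frame: bytes, vni: int) -> bytes:
    """Prepend the 8-byte VXLAN header to an inner Ethernet frame.

    The header is 8 bits of flags, 24 reserved bits, a 24-bit VNI, and a
    final reserved byte -- the 24-bit VNI is why roughly 16 million
    (2**24) logical networks can be addressed.
    """
    if not 0 <= vni < 2 ** 24:
        raise ValueError("VNI must fit in 24 bits")
    header = struct.pack("!II", VXLAN_FLAG_VNI_VALID, vni << 8)
    return header + inner_frame

if __name__ == "__main__":
    inner = bytes(64)                          # dummy frame, used only to show sizes
    payload = vxlan_encapsulate(inner, vni=5001)

    # A real VTEP would send this as a plain UDP datagram to the remote endpoint:
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    # sock.sendto(payload, ("192.0.2.10", VXLAN_PORT))   # remote VTEP address is an example
    print(len(payload))                        # 8-byte VXLAN header + 64-byte frame = 72
```

Because the whole frame rides inside an ordinary UDP packet, the physical network only needs IP reachability between the tunnel endpoints, which is what makes VXLAN so much easier to scale than VLAN trunking.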
Since the Icehouse release, the suggested standard for networking is VXLAN.

Flat network versus VLAN versus GRE in OpenStack Quantum

In OpenStack Quantum, you can decide to use multiple technologies for your networks: flat networks, VLAN, GRE, and the most recent, VXLAN. Let's discuss them in detail:

Flat network: It is often used in private clouds since it is very easy to set up. The downside is that any virtual machine will see every other virtual machine in the cloud. I strongly discourage this network design because it is unsafe and, in the long run, it will cause problems, as we have seen earlier.

VLAN: It is sometimes used in bigger private clouds and sometimes even in small public clouds. The advantage is that many companies already have a VLAN-based installation in place. The major disadvantages are the need to trunk ports for each physical host and possible propagation problems. I discourage this approach since, in my opinion, the advantages are very limited while the disadvantages are pretty strong.

VXLAN: It should be used in any kind of cloud due to its technical advantages. It allows a huge number of networks, it is way more secure, and it often eases debugging.

GRE: Until the Havana release it was the suggested protocol, but since the Icehouse release the suggestion has been to move toward VXLAN, where the majority of the development is focused.

Design a secure network for your OpenStack deployment

As with the physical infrastructure, we have to design the network securely. We have seen that network security is critical and that there are a lot of possible attacks in this realm. Is it possible to design a secure environment to run OpenStack? Yes it is, if you remember a few rules:

Create different networks, at the very least for management and external data (the external network usually already exists in your organization and is the one where all your clients are)
Never put ports in trunking mode if you use VLANs in your infrastructure; otherwise, physically separated networks will be needed

The following diagram is an example of how to implement it:

Here, the management, tenant, and external networks could be either VLANs or real (physically separate) networks. Remember that to avoid VLAN trunking, you need at least as many physical ports as the number of VLANs the machine has to be subscribed to; port trunking can be a huge security hole.

A management network is needed for the administrator to administer the machines and for the OpenStack services to speak to each other. This network is critical, since it may carry sensitive data, and for this reason it has to be disconnected from other networks or, if that is not possible, have very limited connectivity.

The external network is used by virtual machines to access the Internet (and vice versa). In this network, all machines will need an IP address reachable from the Web.

The tenant network, sometimes called the internal or guest network, is the network where virtual machines communicate with other virtual machines in the same cloud. In some deployments, this network can be merged with the external network, but this choice has some security drawbacks.

The API network is used to expose OpenStack APIs to the users. This network requires IP addresses reachable from the Web and, for this reason, is often merged into the external network.

There are cases where provider networks are needed to connect tenant networks to existing networks outside the OpenStack cluster.
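As a rough illustration of keeping tenant traffic off the management network, the following sketch uses the openstacksdk Python library to create an isolated tenant network, a subnet, and a router toward a pre-existing external network. The cloud name "mycloud", the network names, and the CIDR are assumptions for the example; whether the tenant network comes out as a VXLAN segment depends on the ML2 configuration of the deployment (for instance tenant_network_types = vxlan), not on this code.

```python
import openstack

# Credentials are assumed to come from a clouds.yaml entry named "mycloud".
conn = openstack.connect(cloud="mycloud")

# Tenant (guest) network: only instances in this project can reach it.
tenant_net = conn.network.create_network(name="tenant-net")
tenant_subnet = conn.network.create_subnet(
    name="tenant-subnet",
    network_id=tenant_net.id,
    ip_version=4,
    cidr="10.10.0.0/24",            # example range, adjust to your addressing plan
)

# A router connects the tenant network to the existing external network, so
# instances never sit directly on the management network.
external_net = conn.network.find_network("external-net")   # assumed to already exist
router = conn.network.create_router(
    name="tenant-router",
    external_gateway_info={"network_id": external_net.id},
)
conn.network.add_interface_to_router(router, subnet_id=tenant_subnet.id)

print(f"Created {tenant_net.name} ({tenant_net.id}) behind {router.name}")
```

The design point is the separation itself: virtual machines live on the tenant network and reach the outside world only through the router, while the management and API networks stay reserved for administrators and OpenStack services.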
Those networks are created by the OpenStack administrator and map directly to an existing physical network in the data center.

Summary

In this article, we have seen how networking works in OpenStack, which attacks we can expect, and how we can counter them. We have also seen how to implement a secure deployment of OpenStack Networking.
Furthering the Net Neutrality debate, GOP proposes the 21st Century Internet Act

Sugandha Lahoti
18 Jul 2018
3 min read
GOP Rep. Mike Coffman has proposed a new bill to solidify the principles of net neutrality into law, rather than leaving them as a set of rules the FCC can modify at will. The bill, known as the 21st Century Internet Act, would ban providers from blocking, throttling, or offering paid fast lanes. It would also forbid them from participating in paid prioritization and from charging access fees to edge providers. It could take some time for the bill to be voted on in Congress; much depends on the makeup of Congress after the midterm elections.

The 21st Century Internet Act amends the Communications Act of 1934 and adds a new Title VIII full of conditions specific to internet providers. This new title permanently codifies into law the "four corners" of net neutrality. The bill proposes these measures:

No blocking: A broadband internet access service provider cannot block lawful content, or charge an edge provider a fee to avoid blocking of content.
No throttling: The service provider cannot degrade or enhance (slow down or speed up) internet traffic.
No paid prioritization: The internet access provider may not engage in paid preferential treatment.
No unreasonable interference: The service provider cannot interfere with the ability of end users to select the internet access service of their choice.

The bill aims to settle the long-running debate over whether internet access is an information service or a telecommunications service. In his letter to FCC Chairman Ajit Pai, Coffman writes, "The Internet has been and remains a transformative tool, and I am concerned any action you may take to alter the rules under which it functions may well have significant unanticipated negative consequences."

As far as the FCC's role is concerned, the 21st Century Internet Act would solidify the rules of net neutrality, barring the FCC from modifying them. The commission would solely be responsible for overseeing the bill's implementation and enforcing the law. This would include investigating unfair acts or practices, such as false advertising or misrepresenting the product.

The Senate has already voted to save net neutrality by passing the CRA measure back in May 2018. The Congressional Review Act (CRA) resolution passed with a 52-47 vote, overturning the FCC's repeal of the net neutrality rules.

The 21st Century Internet Act is being seen in a good light by the Internet Association, which represents Google, Facebook, Netflix, and others; it commended Coffman on his bill and called it a "step in the right direction." For the rest of us, it will be quite interesting to see the bill's progress and its fate as it goes through the voting process and then to the White House for final approval.

5 reasons why the government should regulate technology
DCLeaks and Guccifer 2.0: How hackers used social engineering to manipulate the 2016 U.S. elections
Tech, unregulated: Washington Post warns Robocalls could get worse