
How-To Tutorials - Cybersecurity

Packt
25 Nov 2013
3 min read

Rounding up...

We have now successfully learned how to secure our users' passwords using hashes; however, we should step back and look at the big picture. The following figure shows what a very basic web application looks like. Note the HTTPS transmission tag: HTTPS is a secure transfer protocol that allows us to transport information securely. When we transport sensitive data such as passwords in a web application, anyone who intercepts the connection can easily read the password in plain text, and our users' data would be compromised. To avoid this, we should always use HTTPS whenever sensitive data is involved.

HTTPS is fairly easy to set up: you just need to buy an SSL certificate and configure it with your hosting provider. Configuration varies by provider, but they usually provide an easy way to do it. It is strongly suggested to use HTTPS for authentication, sign-up, sign-in, and other processes that handle sensitive data. As a general rule, most (if not all) of the data exchange that requires the user to be logged in should be protected. Keep in mind that HTTPS comes at a cost, so try to avoid using it on static pages that contain only public information.

Always keep in mind that to protect the password, we need to ensure both secure transport (with HTTPS) and secure storage (with strong hashes). Both are critical phases, and we need to be very careful with them.

Now that our passwords and other sensitive data are being transferred in a secure way, we can look at the application workflow. Consider the following steps for an authentication process:

1. The application receives an authentication request.
2. The Web Layer takes care of it: it gets the parameters (username and password) and passes them to the Authentication Service.
3. The Authentication Service calls the Database Access Layer to retrieve the user from the database.
4. The Database Access Layer queries the database, gets the user, and returns it to the Authentication Service.
5. The Authentication Service takes the stored hash from the user data retrieved from the database, extracts the salt and the number of iterations, and calls the Hashing Utility, passing it the password from the authentication request, the salt, and the iterations.
6. The Hashing Utility generates the hash and returns it to the Authentication Service.
7. The Authentication Service performs a constant-time comparison between the stored hash and the generated hash, and informs the Web Layer whether the user is authenticated.
8. The Web Layer returns the corresponding view to the user, depending on whether they are authenticated or not.

The following figure can help us understand how this works; please note that flows 1, 2, 3, and 4 are bidirectional. The Authentication Service and the Hashing Utility are the components we have been working with so far. We already know how to create hashes; this workflow is an example of when we should use them.

Summary

In this article we learned how to create hashes and how to secure our users' passwords with them. We also learned that we need to ensure both secure transport (with HTTPS) and secure storage (with strong hashes).

Resources for Article:

Further resources on this subject:
- FreeRADIUS Authentication: Storing Passwords [Article]
- EJB 3.1: Controlling Security Programmatically Using JAAS [Article]
- So, what is Spring for Android? [Article]
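The constant-time verification flow described above (re-hash the candidate password with the stored salt and iteration count, then compare digests in constant time) can be sketched in Python. The storage format and parameter choices here are illustrative assumptions, not the book's exact code:

```python
import hashlib
import hmac
import os

ITERATIONS = 100_000  # assumed work factor for this sketch


def hash_password(password: str) -> str:
    """Create a salted PBKDF2 hash, storing the iterations and salt alongside it."""
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return f"{ITERATIONS}${salt.hex()}${digest.hex()}"


def verify_password(password: str, stored: str) -> bool:
    """Extract salt and iterations, re-hash the candidate, and compare in constant time."""
    iterations, salt_hex, digest_hex = stored.split("$")
    candidate = hashlib.pbkdf2_hmac(
        "sha256", password.encode(), bytes.fromhex(salt_hex), int(iterations)
    )
    # hmac.compare_digest is the constant-time comparison mentioned above.
    return hmac.compare_digest(candidate, bytes.fromhex(digest_hex))
```

The constant-time comparison matters because a naive `==` on strings can leak, through timing, how many leading bytes of the hash matched.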

Packt
27 Mar 2015
17 min read

Puppet and OS Security Tools

This article by Jason Slagle, author of the book Learning Puppet Security, covers using Puppet to manage SELinux and auditd. We have learned a lot so far about using Puppet to secure your systems, as well as how to use it to make groups of systems more secure. However, we have not yet covered some of the basic OS-level security functions that are available. In this article, we'll review several of them.

SELinux is a powerful tool in the security arsenal. Most administrators' experience with it is along the lines of "how can I turn that off?" This is born out of frustration with the poor documentation about the tool, as well as the tedious nature of its configuration. While Puppet cannot help you with the documentation (which is getting better all the time), it can help you with some of the other challenges SELinux brings: ensuring that the proper contexts and policies are in place on the systems being managed.

In this article, we'll cover the following topics related to OS-level security tools:

- A brief introduction to SELinux and auditd
- The built-in Puppet support for SELinux
- Community modules for SELinux
- Community modules for auditd

At the end of this article, you should have enough skills that you no longer need to disable SELinux. However, if you still need to do so, it is certainly possible via the modules presented here.

Introducing SELinux and auditd

During the course of this article, we'll explore the SELinux framework for Linux and see how to automate it using Puppet. As part of the process, we'll also review auditd, the logging and auditing framework for Linux. Using Puppet, we can automate the configuration of these often-neglected security tools, and even move the configuration of these tools for various services into the modules that configure those services.
The SELinux framework

SELinux is a security system for Linux originally developed by the United States National Security Agency (NSA). It is an in-kernel protection mechanism designed to provide Mandatory Access Control (MAC) to the Linux kernel.

SELinux isn't the only MAC framework for Linux. AppArmor is an alternative MAC framework that has been included in the Linux kernel since version 2.6.36. We chose SELinux because it is the default framework used under Red Hat Linux, which we're using for our examples. More information on AppArmor can be found at http://wiki.apparmor.net/index.php/Main_Page.

These access controls work by confining processes to the minimal set of files and network access that they require to run. By doing this, the controls limit the collateral damage that can be done by a process that becomes compromised.

SELinux was first merged into the mainline Linux kernel for the 2.6.0 release. It was introduced into Red Hat Enterprise Linux with version 4, and into Ubuntu in version 8.04. With each successive release of these operating systems, support for SELinux grows and it becomes easier to use.

SELinux has a couple of core concepts that we need to understand to configure it properly. The first are types and contexts. A type in SELinux is a grouping of similar things: files used by Apache may have the type httpd_sys_content_t, for instance, which all content served over HTTP would share. The httpd process itself is of type httpd_t. These types are applied to objects, which represent discrete things such as files and ports, and become part of the context of that object. The context of an object comprises the object's user, role, type, and, optionally, multilevel security data. For this discussion, the type is the most important component of the context.
Using a policy, we grant access from the subject, which represents a running process, to the various objects that represent files, network ports, memory, and so on. We do that by creating a policy that allows a subject access to the types it requires to function.

SELinux has three modes in which it can operate. The first of these is disabled; as the name implies, disabled mode runs without any SELinux enforcement. The second mode is called permissive. In permissive mode, SELinux logs any access violations but does not act on them. This is a good way to get an idea of where you need to modify your policy, or tune Booleans, to get proper system operation. The final mode, enforcing, denies actions that do not have a policy in place. Under Red Hat Linux variants, this is the default SELinux mode. By default, Red Hat 6 runs SELinux with a targeted policy in enforcing mode. This means that, for the targeted daemons, SELinux will enforce its policy by default.

An example is in order here. So far, we've been operating with SELinux disabled on our hosts. The first step in experimenting with SELinux is to turn it on; we'll set it to permissive mode at first, while we gather some information. To do this, after starting our master VM, we'll need to modify the SELinux configuration and reboot. While it's possible to change from enforcing mode to either permissive or disabled mode without a reboot, going back requires a reboot.

Let's edit the /etc/sysconfig/selinux file and set the SELINUX variable to permissive on our Puppet master (remember to start the Vagrant machine and SSH in first, if necessary). Once this is done, the relevant line of the file should look as follows:

    SELINUX=permissive

Once this is complete, we need to reboot. To do so, run the following command:

    sudo shutdown -r now

Wait for the system to come back online. Once the machine is back up and you SSH back into it, run the getenforce command.
It should return Permissive, which means SELinux is running but not enforcing. Now we can make sure our master is running and take a look at its context. If it's not running, you can start the service with the sudo service puppetmaster start command.

Now we'll use the -Z flag on the ps command to examine the SELinux context. Many commands, such as ps and ls, accept the -Z flag to view SELinux data. Go ahead and run the following command to view the SELinux data for the running puppetmaster:

    ps -efZ | grep puppet

When you do this, you'll see output such as the following:

    unconfined_u:system_r:initrc_t:s0 puppet 1463 1 1 11:41 ? 00:00:29 /usr/bin/ruby /usr/bin/puppet master

If you take a look at the first part of the output line, you'll see that Puppet is running in the unconfined_u:system_r:initrc_t context. This is actually somewhat of a bug, and a result of the Puppet policy on CentOS 6 being out of date: we should be running under the system_u:system_r:puppetmaster_t:s0 context, but the policy is for a much older version of Puppet, so it runs unconfined.

Let's also take a look at the sshd process to see what it looks like. To do so, we'll just grep for sshd instead:

    ps -efZ | grep sshd

The output is as follows:

    system_u:system_r:sshd_t:s0-s0:c0.c1023 root 1206 1 0 11:40 ? 00:00:00 /usr/sbin/sshd

This is the more traditional output one would expect. The sshd process is running under the system_u:system_r:sshd_t context, which corresponds to the system user, the system role, and the sshd type. The user and role are SELinux constructs that enable role-based access control: SELinux users do not map to system users, but allow us to set policy based on the SELinux user object. The unconfined user seen previously is a user that will not be enforced. Now, we can take a look at some objects.
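As an aside, the user:role:type:level structure of the context strings shown above is easy to pull apart programmatically. A small illustrative helper (not from the book) makes the four components explicit:

```python
def parse_context(context: str) -> dict:
    """Split an SELinux context string into its named components.

    The level field (e.g. s0, or s0-s0:c0.c1023 with MCS categories)
    may itself contain colons, so we only split on the first three.
    """
    user, role, type_, level = context.split(":", 3)
    return {"user": user, "role": role, "type": type_, "level": level}


# The two contexts observed in the ps output above:
print(parse_context("unconfined_u:system_r:initrc_t:s0"))
print(parse_context("system_u:system_r:sshd_t:s0-s0:c0.c1023"))
```

Running this shows that both processes share the system_r role but differ in user and type, which is exactly the distinction the surrounding discussion relies on.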
Running ls -lZ /etc/ssh shows the contexts of the files there. As you can see, each of the files belongs to a context that includes the system user, as well as the object role. They are split between the etc type, for configuration files, and the sshd_key type, for keys. The SSH policy allows the sshd process to read both of these file types. Other policies, say for NTP, might allow the ntpd process to read the etc types, but it would not be able to read the sshd_key files.

This very fine-grained control is the power of SELinux. However, with great power comes very complex configuration, which can be confusing to set up. For instance, with Puppet, the wrong type on a file can potentially impact the system if not dealt with. Fortunately, in permissive mode, violations are logged, and we can use that data to assist us. This leads us into the second half of the system that we wish to discuss: auditd.

In the meantime, there is plenty of information on SELinux available on its website at http://selinuxproject.org/page/Main_Page. There's also a very funny, but informative, resource describing SELinux at https://people.redhat.com/duffy/selinux/selinux-coloring-book_A4-Stapled.pdf.

The auditd framework for audit logging

SELinux does a great job of limiting access to system components; however, reporting on what enforcement took place was not one of its objectives. Enter auditd, an auditing framework developed by Red Hat. It is a complete auditing system that uses rules to indicate what to audit. This can be used to log SELinux events, as well as much more.

Under the hood, auditd has hooks into the kernel to watch system calls and other events. Using rules, you can configure logging for any of these. For instance, you can create a rule that monitors writes to the /etc/passwd file, which would allow you to see whether any users were added to the system.
We can also add monitoring of files such as lastlog and wtmp to monitor login activity. We'll explore this example later when we configure auditd. To quickly see how a rule works, we'll manually configure a quick rule that logs when the wtmp file is modified. This will add some system logging around users logging in. To do this, let's edit the /etc/audit/audit.rules file and add the following lines:

    -w /var/log/wtmp -p wa -k logins
    -w /etc/passwd -p wa -k password

Let's take a look at what the preceding lines do. Both lines start with a -w clause, which indicates the file we are monitoring. Second, we have the -p clause, which sets the file operations we monitor; in this case, write and append operations. Finally, with the -k entries, we set a keyword that is logged and can be filtered on. These rules should go at the end of the file. Once that's done, reload auditd with the following command:

    sudo service auditd restart

Once this is complete, go ahead and log in with another SSH session; once you can, simply log back out. Then take a look at the /var/log/audit/audit.log file. You should see content like the following:

    type=SYSCALL msg=audit(1416795396.816:482): arch=c000003e syscall=2 success=yes exit=8 a0=7fa983c446aa a1=1 a2=2 a3=7fff3f7a6590 items=1 ppid=1206 pid=2202 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=51 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:sshd_t:s0-s0:c0.c1023 key="logins"
    type=SYSCALL msg=audit(1416795420.057:485): arch=c000003e syscall=2 success=yes exit=7 a0=7fa983c446aa a1=1 a2=2 a3=8 items=1 ppid=1206 pid=2202 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=51 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:sshd_t:s0-s0:c0.c1023 key="logins"

There are many fields in this output, including the SELinux context, the user ID, and so on.
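Since every field in these log lines is a key=value pair, a line can be picked apart with a few lines of Python. This is an illustrative sketch, not a replacement for the real ausearch tool:

```python
import re


def parse_audit_line(line: str) -> dict:
    """Parse an auditd log line into a dict of its key=value fields.

    Values may be bare tokens, quoted strings, or parenthesised
    placeholders such as (none); quotes are stripped from the result.
    """
    pairs = re.findall(r'(\w+)=("[^"]*"|\([^)]*\)|\S+)', line)
    return {key: value.strip('"') for key, value in pairs}


# A shortened version of the first log line shown above:
sample = ('type=SYSCALL msg=audit(1416795396.816:482): syscall=2 success=yes '
          'auid=500 uid=0 tty=(none) comm="sshd" exe="/usr/sbin/sshd" '
          'subj=system_u:system_r:sshd_t:s0-s0:c0.c1023 key="logins"')

fields = parse_audit_line(sample)
print(fields["auid"], fields["subj"], fields["key"])
```

Pulling the fields into a dict makes it easy to filter on the keyword set by the -k rule clause, or on the SELinux subject context.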
Of particular interest is auid, the audit user ID. For commands run via sudo, this still contains the user ID of the user who called sudo, which makes it a great way to log commands performed via sudo. Auditd also logs SELinux failures; they are logged under the type AVC. These access vector cache entries are placed in the auditd log file when an SELinux violation occurs.

Much like SELinux, auditd is somewhat complicated, and its intricacies are beyond the scope of this book. You can get more information at http://people.redhat.com/sgrubb/audit/.

SELinux and Puppet

Puppet has direct support for several features of SELinux. There are two native Puppet types for SELinux: selboolean and selmodule. These types support setting SELinux Booleans and installing SELinux policy modules, respectively. SELinux Booleans are variables that affect how SELinux behaves; they are set to permit various functions. For instance, you set a SELinux Boolean to true to allow the httpd process to access network ports. SELinux modules are groupings of policies that allow policies to be loaded in a more granular way; the Puppet selmodule type allows Puppet to load these modules.

The selboolean type

The targeted SELinux policy that most distributions use is based on the SELinux reference policy. One of the features of this policy is the use of Boolean variables that control actions of the policy. There are over 200 of these Booleans on a Red Hat 6-based machine. We can investigate them by installing the policycoreutils-python package:

    sudo yum install policycoreutils-python

Once it is installed, we can run the semanage boolean -l command to get a list of the Boolean values along with their descriptions. As you can see from its output, there is a very large number of settings that can be reconfigured simply by setting the appropriate Boolean value.
The selboolean Puppet type supports managing these Boolean values. The provider is fairly simple, accepting the following parameters:

- name: The name of the Boolean to be set; defaults to the title.
- persistent: Whether to write the value to disk for the next boot.
- provider: The provider for the type; usually the default, getsetsebool, is accepted.
- value: The value of the Boolean, true or false.

Usage of this type is rather simple. We'll show an example that sets the puppetmaster_use_db parameter to true. If we were using the SELinux Puppet policy, this would allow the master to talk to a database. For our purposes, it's simply an unused variable that we can use for demonstration. As a reminder, the SELinux policy for Puppet on CentOS 6 is outdated, so setting this Boolean does not affect the version of Puppet we're running; it does, however, serve to show how a Boolean is set.

To do this, we'll create a sample role and profile for our puppetmaster. This is something that would likely exist in a production environment to manage the configuration of the master. Let's start with the profile. Copy over the profiles module we've slowly been building up, and add a puppetmaster.pp profile. To do so, edit the profiles/manifests/puppetmaster.pp file and make it look as follows:

    class profiles::puppetmaster {
      selboolean { 'puppetmaster_use_db':
        value      => on,
        persistent => true,
      }
    }

Then we'll move on to the role. Copy the roles module, and edit the roles/manifests/puppetmaster.pp file there to make it look as follows:

    class roles::puppetmaster {
      include profiles::puppetmaster
    }

Once this is done, we can apply it to our host. Edit the /etc/puppet/manifests/site.pp file.
We'll apply the puppetmaster role to the puppetmaster machine as follows:

    node 'puppet.book.local' {
      include roles::puppetmaster
    }

Now we run Puppet; as you can see from its output, it sets the value to on. Using this method, we can set any of the SELinux Boolean values we need for our system to operate properly. More information on SELinux Booleans, including how to obtain a list of them, can be found at https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/6/html/Security-Enhanced_Linux/sect-Security-Enhanced_Linux-Working_with_SELinux-Booleans.html.

The selmodule type

The other native SELinux type inside Puppet manages SELinux modules. Modules are compiled collections of SELinux policy; they are loaded into the kernel using the selmodule command, and this Puppet type provides support for that mechanism. The available parameters are as follows:

- name: The name of the module; defaults to the title.
- ensure: The desired state, present or absent.
- provider: The provider for the type; it should be selmodule.
- selmoduledir: The directory that contains the module to be installed.
- selmodulepath: The complete path to the module to be installed, if it is not present in selmoduledir.
- syncversion: Whether to resync the module if a new version is found, much like ensure => latest.

Using this type, we can take a compiled module, serve it onto the system with Puppet, and ensure that it gets installed. This lets us centrally manage the module with Puppet. We'll later see an example that compiles a policy and then installs it, so we won't show a specific example here. Instead, we'll move on to the last SELinux-related component in Puppet.

File parameters for SELinux

The final internal support for SELinux comes in the form of parameters on the file type.
The file type parameters are as follows:

- selinux_ignore_defaults: By default, Puppet uses the matchpathcon function to set the context of a file. Setting this to true overrides that behavior.
- selrange: Sets the SELinux range component. We've not really covered this; it is not used in most mainstream distributions at the time of writing.
- selrole: Sets the SELinux role on the file.
- seltype: Sets the SELinux type on the file.
- seluser: Sets the SELinux user on the file.

Usually, if you place files in the correct location (the expected location for a service) on the filesystem, Puppet will get the SELinux properties right via its use of the matchpathcon function. This function (which also has a matching utility) applies a default context based on the policy settings. Setting the context manually is for cases where you're storing data outside the normal location; for instance, you might be storing web data under /opt.

The preceding types and providers provide the basics that allow you to manage SELinux on a system. We'll now take a look at a couple of community modules that build on these types and create a more in-depth solution.

Summary

This article looked at what SELinux and auditd are, and gave brief examples of how they can be used to secure your systems. After this, we looked at the specific support for SELinux in Puppet: the two built-in types, as well as the parameters on the file type. Then we took a look at one of the several community modules for managing SELinux, which lets us store policies as text instead of compiled blobs.

Resources for Article:

Further resources on this subject:
- The anatomy of a report processor [Article]
- Module, Facts, Types and Reporting tools in Puppet [Article]
- Designing Puppet Architectures [Article]
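As a quick illustration of the file parameters described above, a resource for web content stored outside the default document root might look like the following. The path, content, and context values here are illustrative assumptions, not an example from the book:

```puppet
# Hypothetical example: web content served from a non-standard location,
# where matchpathcon would not apply the httpd content type on its own.
file { '/opt/webdata/index.html':
  ensure  => file,
  owner   => 'apache',
  group   => 'apache',
  seluser => 'system_u',
  selrole => 'object_r',
  seltype => 'httpd_sys_content_t',
  content => "<html><body>Hello</body></html>\n",
}
```

Setting seltype explicitly here mirrors what matchpathcon would have done automatically had the file lived under the standard document root.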

Packt
04 Mar 2015
20 min read

iOS Security Overview

In this article by Allister Banks and Charles S. Edge, the authors of the book Learning iOS Security, we will go through an overview of the basic security measures in iOS. Out of the box, iOS is one of the most secure operating systems available, and a number of factors contribute to that elevated security level. Users cannot access the underlying operating system. Apps keep their data in a silo (sandbox), so instead of accessing the system's internals they access only the silo. App developers choose whether to store settings such as passwords in the app or in iCloud Keychain, which is a secure location for such data on a device. Finally, Apple has a number of controls in place on devices to help protect users while providing an elegant user experience. Even so, devices can be made more secure than they are by default.

In this article, we're going to get some basic security tasks under our belt in order to establish basic security best practices. Where we feel more explanation is needed about what we did on devices, we'll explore a part of the technology itself. This article will cover the following topics:

- Pairing
- Backing up your device
- Initial security checklist
- Safari and built-in app protection
- Predictive search and Spotlight

To kick off the overview of iOS security, we'll quickly secure our systems by providing a simple checklist of tasks in which we'll configure a few device protections that we feel everyone should use. Then we'll look at how to back up our devices, and finally at how to use the built-in web browser and the protections around it.

Pairing

When you connect a device to a computer that runs iTunes for the first time, you are prompted to enter a password. Doing so allows you to synchronize the device to the computer. Applications that can communicate over this channel include iTunes, iPhoto, Xcode, and others.
To pair a device to a Mac, simply plug the device in (if you have a passcode, you'll need to enter it in order to pair the device). When the device is plugged in, you'll be prompted on both the device and the computer to establish trust. Simply tap Trust on the iOS device.

For the computer to communicate with the iOS device, you'll also need to accept the pairing on your computer (although libimobiledevice, a command-line pairing tool, does not require this, because you use the command line to accept). When prompted, click on Continue to establish the pairing; the process is the same in Windows.

When a device is paired, a file is created in /var/db/lockdown whose name is the UDID of the device, with a property list (plist) extension. A property list is an Apple XML file that stores a variety of attributes. In Windows, iOS data is stored in the MobileSync folder, which you can access by navigating to Users\(username)\AppData\Roaming\Apple Computer\MobileSync. The information in this file sets up a trust between the computers and includes the following attributes:

- DeviceCertificate: This certificate is unique to each device.
- EscrowBag: This key bag contains class keys used to decrypt the device.
- HostCertificate: This certificate is for the host paired with the iOS device (usually the same in all files for devices you've paired with on your computer).
- HostID: This is a generated ID for the host.
- HostPrivateKey: This is the private key for your Mac (should be the same in all files on a given computer).
- RootCertificate: This is the certificate used to generate keys (should be the same in all files on a given computer).
- RootPrivateKey: This is the private key of the computer that runs iTunes for that device.
- SystemBUID: This refers to the ID of the computer that runs iTunes.
- WiFiMACAddress: This is the MAC address of the device's Wi-Fi interface. Even if you do not have an active Wi-Fi interface, this MAC address is still used for pairing.

Why does this matter? It's important to know how a device interfaces with a computer. These files can be moved between computers and contain a variety of information about a device, including private keys. Having the keys isn't all that is required for a computer to communicate with a device: when a device interfaces with a computer over USB and has a passcode enabled, you will be required to enter that passcode in order to unlock the device. Once a computer is able to communicate with a device, you need to be careful, as backups of the device, apps that get synchronized to it, and other data exchanged with it can be exposed while at rest.

Backing up your device

What do most people do to maximize the security of iOS devices? Before anything else, we need to back up our devices. This protects the device from us, by providing a restore point, and also secures the data against the possibility of losing it through a silly mistake. The two most common ways to take backups are iCloud and iTunes: as the names imply, the first backs up data to Apple's cloud service and the second to desktop computers. We'll cover iCloud backups first.

iCloud backups

An iCloud account comes with free storage to back up your Apple devices. An iOS device backs up to Apple's servers and can be restored from them when a new device is set up (this appears as a screen during the activation process of a new device, and as an option in iTunes if you back up over USB, covered later in this article). Setting up and checking the status of iCloud backups is a straightforward process. From the Settings app, tap on iCloud and then Backup.
As you can see on the Backup screen, you have two options: iCloud Backup, which enables automatic backups of the device to your iCloud account, and Back Up Now, which runs an immediate backup of the device.

Allowing iCloud to take backups of devices is optional. You can disable access to iCloud and to iCloud backups; however, doing so is rarely a good idea, as you limit the functionality of the device and put the data on it at risk if it isn't backed up another way, such as through iTunes. Many people have reservations about storing data on public clouds, especially data as private as phone data (texts, phone call history, and so on). For more information on Apple's security and privacy around iCloud, refer to http://support.apple.com/en-us/HT202303. If you do not trust Apple or its cloud, you can also back up your device using iTunes, as described in the next section.

Taking backups using iTunes

Originally, iTunes was the tool used to back up iOS devices. You can still use iTunes, and you will likely want a second backup even if you use iCloud, if only for a quick restore. Backups are usually pretty small. The reason is that the operating system is not part of a backup, since users can't edit any of its files; instead, an ipsw file (the operating system itself) is used to restore a device. These are accessed through Apple Configurator, or through iTunes if you have a restore file waiting to be installed, and can be seen in ~/Library/iTunes, named for the device and its software updates.

Backups are stored in the ~/Library/Application Support/MobileSync/Backup directory. Here, you'll see a number of directories named after the UDIDs of the devices, and within those, the files that make up the modular, incremental backups beyond the initial backup.
It's a pretty smart system, and it allows you to restore a device at different points in time without each backup taking too long. On Windows, backups are stored in the Documents and Settings\USERNAME\Application Data\Apple Computer\MobileSync\Backup directory on Windows XP, and in the Users\USERNAME\AppData\Roaming\Apple Computer\MobileSync\Backup directory on newer versions.

To enable an iTunes backup, plug a device into a computer and open iTunes, then click on the device to show the device details screen. The top section of the screen is for Backups; here you can set the backup destination to This computer, which keeps the backup on the computer you are on. I would recommend always choosing the Encrypt iPhone backup option, as it forces you to save a password in order to restore the backup. Additionally, you can use the Back Up Now button to kick off the first backup.

Viewing iOS data in iTunes

To show why it's important to encrypt backups, let's look at what can be pulled out of them. There are a few tools that can extract backups, provided you have the password. Here, we'll use iBackup Extractor to view the backup of your browsing history, calendars, call history, contacts, iMessages, notes, photos, and voicemails. To get started, download iBackup Extractor from http://www.wideanglesoftware.com/ibackupextractor. When you open iBackup Extractor for the first time, simply choose the device backup you wish to extract. You will be prompted for a password in order to unlock the backup key bag; enter the password to unlock it.

Note that the file tree shown in iBackup Extractor gives away some information about the structure of the iOS filesystem, or at least of the data stored in iOS device backups.
For now, simply click on Browser to see a list of files that can be extracted from the backup, as you can see in the next screenshot:

View Device Contents Using iBackup Extractor

Note the prevalence of SQL databases in the files. Most apps use these types of databases to store data on devices. Also, check out the other options, such as extracting notes (many possibly deleted), texts (some deleted from devices), and other types of data from devices. Now that we've exhausted backups and proven that you should really put a password in place for your backups, let's finally get to some basic security tasks to be performed on these devices!

Initial security checklist

Apple has built iOS to be one of the most secure operating systems in the world. This has been made possible by restricting access to much of the operating system for end users, unless you jailbreak a device. In this article, we won't cover jailbreaking devices much, because securing such devices becomes a whole new topic. Instead, we have focused on what you need to do, how you can do those tasks, what the impacts are, and how to manage security settings based on a policy. The basic steps required to secure an iOS device start with encrypting the device, which is done by assigning a passcode to it. We will then configure how much inactive time is allowed before a device requires a PIN, and accordingly manage the privacy settings. These settings give us some very basic security features, and set the stage for explaining what some of the features actually do.

Configuring a passcode

The first thing most of us need to do on an iOS device is configure a passcode for the device. Several things happen when a passcode is enabled, as shown in the following steps:

The device is encrypted.
The device then requires a passcode to wake up.
An idle timeout is automatically set that puts the device to sleep after a few minutes of inactivity.
This means that three of the most important things you can do to secure a device are enabled when you set up a passcode. Best of all, Apple recommends setting up a passcode during the initial setup of new devices. You can manage passcode settings using policies (or profiles, as Apple likes to call them in iOS). Better still, you can set a passcode and then use your fingerprint on the Home button instead of that passcode. We have found that by the time our phone is out of our pocket, with a finger on the Home button, the device is unlocked by the time we check it. With the iPhone 6 and higher, you can now use that same fingerprint to secure payment information.

Check whether a passcode has been configured, and if needed, configure one using the Settings app. The Settings app is on the Home screen by default; it is where many settings on the device are configured, including the Wi-Fi networks the device has joined, app preferences, and mail accounts.

To set a passcode, open the Settings app and tap on Touch ID & Passcode
If a passcode has been set, you will see the Turn Passcode Off option (as seen in the following screenshot)
If a passcode has not been set, you can set one at this screen as well
Additionally, you can change a passcode that has been set using the Change Passcode button, and define a fingerprint, or additional fingerprints, that can be used with Touch ID

There are two options in the USE TOUCH ID FOR section of the screen. The first lets you choose whether or not a passcode is needed to unlock the phone; you should use this unless the device is also used by small children or as a kiosk. In those cases, you don't need to encrypt or take a backup of the device anyway. The second option forces a passcode to be entered when using the App Store and iTunes. Purchases can cost you money if someone else is using your device, so let the default value remain, which requires you to enter a passcode.
Configure a Passcode

The passcode settings are very easy to configure, so they should be configured when possible. Scroll down on this screen and you'll see several other features, as shown in the next screenshot. The first option on the screen is Simple Passcode. Most users want to use a simple PIN with an iOS device; forcing alphanumeric and long passcodes simply causes most users to try to circumvent the requirement. To add a fingerprint as a passcode, simply tap on Add a Fingerprint…, which you can see in the preceding screenshot, and follow the onscreen instructions.

Additionally, the following can be accessed when the device is locked, and you can choose to turn them off:

Today: This shows an overview of upcoming calendar items
Notifications View: This shows you recent push notifications (apps that have updates on the device)
Siri: This represents the voice control of the device
Passbook: This tool is used to make payments and display tickets for concert venues and meetups
Reply with Message: This tool allows you to send a text reply to an incoming call (useful if you're on the treadmill)

Each organization can decide whether it considers these options to be a security risk and direct users how to deal with them, or it can implement a policy around these options.

Passcode Settings

There aren't a lot of security options around passcodes and encryption because, by and large, Apple secures the device by giving you fewer options than you might expect. Under the hood (for example, through Apple Configurator and Mobile Device Management), there are a lot of other options, but these aren't exposed to end users of devices. For the most part, a simple four-digit passcode will suffice for most environments. When you complicate passcodes, devices become much more difficult to unlock, and users tend to look for ways around passcode enforcement policies.
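The profiles mentioned above are XML property lists under the hood. The following is only a sketch of what a passcode payload fragment can look like, using key names from Apple's documented Passcode payload (forcePIN, minLength, maxInactivity, maxFailedAttempts); the values shown are illustrative, not recommendations:

```xml
<dict>
    <key>PayloadType</key>
    <string>com.apple.mobiledevice.passwordpolicy</string>
    <!-- Require a passcode on the device -->
    <key>forcePIN</key>
    <true/>
    <!-- Minimum passcode length -->
    <key>minLength</key>
    <integer>4</integer>
    <!-- Minutes of inactivity before the device locks -->
    <key>maxInactivity</key>
    <integer>2</integer>
    <!-- Failed attempts allowed before the device wipes -->
    <key>maxFailedAttempts</key>
    <integer>10</integer>
</dict>
```

Such a payload would typically be delivered inside a .mobileconfig profile built with Apple Configurator or an MDM solution.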
The passcode is only used on the device, so complicating the passcode will only reduce the likelihood that it could be guessed in the limited number of attempts (typically 10) allowed before the device locks. Finally, to disable a passcode, and therefore encryption, simply go to the Touch ID & Passcode option in the Settings app and tap on Turn Passcode Off.

Configuring privacy settings

Once a passcode is set and the device is encrypted, it's time to configure the privacy settings. Third-party apps cannot communicate with one another by default in iOS. Therefore, you must enable communication between them (and between third-party apps and the built-in iOS apps that have APIs). This is a fundamental concept when it comes to securing iOS devices. To configure privacy options, open the Settings app and tap on the entry for Privacy. On the Privacy screen, you'll see a list of the services and apps that other apps can communicate with, as shown in the following screenshot:

Privacy Options

As an example, tap on the Location Services entry, as shown in the next screenshot. Here, you can set which apps can communicate with Location Services and when. If an app is set to While Using, the app can communicate with Location Services only while the app is open. If an app is set to Always, the app can communicate with Location Services even when it is running in the background.

Configure Location Services

On the Privacy screen, tap on Photos. Here, you have fewer options because, unlike the location of a device, photos can't be accessed while an app is running in the background. Here, you can enable or disable an app's ability to communicate with the photo library on a device, as seen in the next screenshot:

Configure What Apps Can Access Your Camera Roll

Each app should be configured in such a way that it can communicate only with the features of iOS, or other apps, that are absolutely necessary. Other privacy options you can consider disabling include Siri and Handoff.
Siri provides the voice controls of iOS. Because Siri can be used even when your phone is locked, consider disabling it: open the Settings app, tap on General and then on Siri, and disable the voice controls there. To disable Handoff, use the General System Preferences pane on any OS X computer paired to an iOS device. There, uncheck the Allow Handoff between this Mac and your iCloud devices option.

Safari and built-in App protections

Web browsers have access to a lot of data, and on other platforms they have been among the most popular targets. The default browser on an iOS device is Safari. Open the Settings app and then tap on Safari. The Safari preferences that help secure iOS devices include the following:

Passwords & AutoFill: This screen includes contact information, a list of saved passwords, and credit cards used in web browsers. This data is stored in iCloud Keychain if iCloud Keychain has been enabled on your phone.
Favorites: This handles bookmark management, controlling which bookmarks are shown in iOS.
Open Links: This configures how links are managed.
Block Pop-ups: This enables a pop-up blocker.

Scroll down and you'll see the Privacy & Security options (as seen in the next screenshot). Here, you can do the following:

Do Not Track: With this, you can ask websites not to track your browsing activity.
Block Cookies: A cookie is a small piece of data sent from a website to a visitor's browser. Many sites will send cookies to third-party sites, so the management of cookies becomes an obstacle to the privacy of many. By default, Safari only allows cookies from websites that you visit (Allow from Websites I Visit). Set the Cookies option to Always Block to refuse all cookies; set it to Always Allow to accept cookies from any source; and set it to Allow from Current Website Only to only allow cookies from the site you are currently viewing.
Fraudulent Website Warning: This warns you about suspected phishing sites (sites that exist only to steal personal information).
Clear History and Website Data: This clears any cached history, web files, and passwords from the Safari browser.
Use Cellular Data: When this option is turned off, Safari will not use cellular connections for web traffic (so web traffic will only work when the phone is connected to a Wi-Fi network).

Configure Privacy Settings for Safari

There are also a number of advanced options that can be accessed by tapping on the Advanced button, as shown in the following screenshot:

Configure the Advanced Safari Options

These advanced options include the following:

Website Data: This option (as you can see in the next screenshot) shows the amount of data stored by each site that caches files on the device, and allows you to swipe left on these entries to remove the files saved for the site. Tap on Remove All Website Data to remove data for all the sites at once.
JavaScript: This allows you to stop JavaScript from running on sites the device browses.
Web Inspector: This shows the device in the Develop menu of Safari on a computer connected to the device. If the Develop menu is not visible, enable it in the Advanced pane of Safari's Preferences on that computer.

View Website Data On Devices

Browser security is an important aspect of any operating system.

Predictive search and spotlight

The final aspect of securing the settings on an iOS device that we'll cover in this article is predictive search and Spotlight. When you use the Spotlight feature in iOS, usage data is sent to Apple along with information from Location Services, and that data is then used to generate future search suggestions. Additionally, Spotlight can search for anything on a device, including items otherwise blocked from being accessed; this ability to surface blocked content is what warrants its inclusion when locking down a device.
This feature can be disabled by opening the Settings app, tapping on Privacy, then Location Services, and then System Services. Simply slide Spotlight Suggestions to Off to stop the location data from going over that connection. To limit the type of data that Spotlight sends, open the Settings app, tap on General, and then on Spotlight Search. Uncheck each item you don't want indexed in the Spotlight database. The following screenshot shows the mentioned options:

Configure What Spotlight Indexes

These were some of the basic tactical tasks that secure devices.

Summary

This article was a whirlwind of quick changes that secure a device. Here, we paired devices, took a backup, set a passcode, and secured app data and Safari. We showed how to manually do some tasks that are otherwise set via policies.
Packt
30 May 2011
9 min read

Overview of Data Protection Manager 2010

DPM structure

In this section, we will look at the DPM file structure in order to have a better understanding of where DPM stores its components. We will also look at important processes that DPM runs and what they are used for. There will be some hints and tips that you should know about, which will be useful when administering DPM.

DPM file locations

It is important to know not only how DPM operates, but also the structure that is underneath the application. Understanding where the DPM components live will help you with administering and troubleshooting DPM if the need arises. The following are some important locations:

The DPM database backups are stored in the following location. When you make backup shadow copies of your replicas, these are also stored in this directory. You would make backup shadow copies of your replicas if you were archiving them using a third-party backup solution:
C:\Program Files\Microsoft DPM\DPM\Volumes\ShadowCopy\Database Backups

The following directory is where DPM is installed:
C:\Program Files\Microsoft DPM

The following directory contains the PowerShell scripts that come with DPM. There are many scripts that can be used for performing common DPM tasks:
C:\Program Files\Microsoft DPM\DPM\bin

The following folder contains the database and files for SQL Server Reporting Services:
C:\Program Files\Microsoft DPM\SQL

The following directory contains the DPM SQL database .MDF and .LDF files:
C:\Program Files\Microsoft DPM\DPM\DPMDB

The following directory stores shadow copy volumes that are recovery points for a data source. These are essentially the changed blocks recorded by VSS (Volume Shadow Copy Service):
C:\Program Files\Microsoft DPM\DPM\Volumes\DiffArea

The following folder contains mounted replica volumes. Mounted replica volumes are essentially pointers for every protected data object, pointing to the partition in a DPM storage pool. Think of these mount points as a map from DPM to the protected data on the hard drives where the actual protected data lives:
C:\Program Files\Microsoft DPM\DPM\Volumes\Replica

DPM processes

We are now going to explore the DPM processes. The executable files for these are all located in C:\Program Files\Microsoft DPM\DPM\bin. You can view these processes in Windows Task Manager, and they show up in Windows Services as well. The following screenshot shows the DPM services as they appear in Windows Services:

We will look at what each of these processes is and what it does. We will also look at the processes that have an impact on the performance of your DPM server. The processes are as follows:

DPMAMService.exe: In Windows Services, this is listed as the DPM AccessManager Service. It manages access to DPM.
DpmWriter.exe: This is a service as well, so you will see it in the services list. It is used for archiving, and manages the backup shadow copies of replicas, backups of report databases, and DPM backups.
Msdpm.exe: The DPM service is the core component of DPM. It manages all core DPM operations, including replica creation, synchronization, and recovery point creation. This service implements and manages synchronization and shadow copy creation for protected file servers.
DPMLA.exe: This is the DPM Library Agent Service.
DPMRA.exe: This is the DPM Replication Agent. It helps to back up and recover file and application data to DPM.
Dpmac.exe: This is known as the DPM Agent Coordinator Service. It manages the installation, uninstallation, and upgrade of DPM protection agents on the remote computers that you need to protect.
DPM processes that impact DPM performance

The Msdpm.exe, MsDpmProtectionAgent.exe, Microsoft$DPM$Acct.exe, and mmc.exe processes take a toll on DPM performance. mmc.exe is a standard Windows process; MMC stands for Microsoft Management Console, an application used to host various management plug-ins. Many (though not all) Microsoft server applications run in an MMC, such as Exchange, ISA, IIS, System Center, and Microsoft Server Manager. The DPM Administrator Console runs in an MMC as well. mmc.exe can cause high memory usage, and the best way to ensure that this process does not overload your memory is to close the DPM Administrator Console when you are not using it.

MsDpmProtectionAgent.exe is the DPM Protection Agent service and affects both CPU and memory usage when DPM jobs and consistency checks are run. There is nothing you can do to reduce the usage of this service; you just need to be aware of it, and try not to schedule other resource-intensive applications, such as antivirus scans, at the same time as DPM jobs or consistency checks.

Msdpm.exe is the service that runs synchronizations and shadow copy creations, as stated previously. Like MsDpmProtectionAgent.exe, Msdpm.exe affects CPU and memory usage when running synchronizations and shadow copies, and again there is nothing you can do to reduce its memory and CPU usage. Just make sure to keep the system clear of resource-intensive applications when Msdpm.exe is running jobs.

If you are running a local SQL instance for your DPM deployment, you will notice a Microsoft$DPM$Acct.exe process. The SQL Server and SQL Agent services use a Microsoft$DPM$Acct account. This process normally runs at a high memory level because it reserves part of your system's memory for cache. If system memory runs low, the Microsoft$DPM$Acct.exe process will let go of the memory cache it has reserved.
Important DPM terms

In this section, you will learn some important terms commonly used in DPM. You will need to understand these terms as you begin to administer DPM on a regular basis. You can read the full list of terms at this site: http://technet.microsoft.com/en-us/library/bb795543.aspx

We group the terms so that each group relates to an area of DPM. The following are some important terms:

Bare metal recovery: A restore technique that allows one to restore a complete system onto bare metal, without any requirement for the previous hardware. This allows restoring to dissimilar hardware.
Change journal: A feature that tracks changes to NTFS (New Technology File System) volumes, including additions, deletions, and modifications. The change journal exists on the volume as a sparse file. Sparse files are used to make disk space usage more efficient in NTFS: a sparse file allocates disk space only when it is needed, which allows files to be created even when there is insufficient space on a hard drive. These files contain zeroes instead of allocated disk blocks.
Consistency check: The process by which DPM checks for and corrects inconsistencies between a protected data source and its replica. A consistency check is only performed when the normal mechanisms for recording changes to protected data, and for applying those changes to replicas, have been interrupted.
Express full backup: A synchronization operation in which the protection agent transfers a snapshot of all the blocks that have changed since the previous express full backup (or since the initial replica creation, for the first express full backup).
Shadow copy: A point-in-time copy of files and folders that is stored on the DPM server. Shadow copies are sometimes referred to as snapshots.
Shadow copy client software: Client software that enables an end user to independently recover data by retrieving a shadow copy.
Replica: A complete copy of the protected data on a single volume, database, or storage group.
Each member of a protection group is associated with a replica on the DPM server. Replica creation: The process by which a full copy of data sources, selected for inclusion in a protection group, is transferred to the DPM storage pool. The replica can be created over the network from data on the protected computer or from a tape backup system. Replica creation is an initialization process that is performed for each data source when the data source is added to a protection group. Replica volume: A volume on the DPM server that contains the replica for a protected data source. Custom volume: A volume that is not in the DPM storage pool and is specified to store the replica and recovery points for a protection group member. Dismount: To remove a removable tape or disc from a drive. DPM Alerts log: A log that stores DPM alerts as Windows events so that the alerts can be displayed in Microsoft System Center Operations Manager (SCOM). DPMDB.mdf: The filename of the DPM database, the SQL Server database that stores DPM settings and configuration information. DPMDBReaders group: A group, created during DPM installation, that contains all accounts that have read-only access to the DPM database. The DPMReport account is a member of this group. DPMReport account: The account that the Web and NT services of SQL Server Reporting Services use to access the DPM database. This account is created when an administrator configures DPM reporting. MICROSOFT$DPM$: The name that the DPM setup assigns to the SQL Server instance used by DPM. Microsoft$DPMWriter$ account: The low-privilege account under which DPM runs the DPM Writer service. This account is created during the DPM installation. MSDPMTrustedMachines group: A group that contains the domain accounts for computers that are authorized to communicate with the DPM server. DPM uses this group to ensure that only computers that have the DPM protection agent installed from a specific DPM server can respond to calls from that server. 
Protection configuration: The collection of settings that is common to a protection group; specifically, the protection group name, disk allocations, replica creation method, and on-the-wire compression. Protection group: A collection of data sources that share the same protection configuration. Protection group member: A data source within a protection group. Protected computer: A computer that contains data sources that are protection group members. Synchronization: The process by which DPM transfers changes from the protected computer to the DPM server, and applies the changes to the replica of the protected volume. Recovery goals: The retention range, data loss tolerance, and frequency of recovery points for protected data. Recovery collection: The aggregate of all recovery jobs associated with a single recovery operation. Recovery point: The date and time of a previous version of a data source that is available for recovery from media that is managed by DPM. Report database: The SQL Server database that stores DPM reporting information (ReportServer.mdf). ReportServer.mdf: In DPM, the filename for the report database—a SQL Server database that stores reporting information. Retention range: Duration of time for which the data should be available for recovery.
Packt
27 Feb 2015
12 min read

Booting the System

In this article by William Confer and William Roberts, authors of the book Exploring SE for Android, we will look at how, once we have an SE for Android system, we can make use of it and get it into a usable state. In this article, we will:

Modify the log level to gain more details while debugging
Follow the boot process relative to the policy loader
Investigate SELinux APIs and SELinuxFS
Correct issues with the maximum policy version number
Apply patches to load and verify an NSA policy

You might have noticed some disturbing error messages in dmesg. To refresh your memory, here are some of them:

# dmesg | grep -i selinux
<6>SELinux: Initializing.
<7>SELinux: Starting in permissive mode
<7>SELinux: Registering netfilter hooks
<3>SELinux: policydb version 26 does not match my version range 15-23
...

It would appear that even though SELinux is enabled, we don't quite have an error-free system. At this point, we need to understand what causes this error and what we can do to rectify it. At the end of this article, we should be able to identify the boot process of an SE for Android device with respect to policy loading, and how that policy is loaded into the kernel. We will then address the policy version error.

Policy load

An Android device follows a boot sequence similar to the *NIX boot sequence. The boot loader boots the kernel, and the kernel finally executes the init process. The init process is responsible for managing the boot process of the device through init scripts and some hard-coded logic in the daemon. Like all processes, init has an entry point at the main function. This is where the first userspace process begins. The code can be found by navigating to system/core/init/init.c. When the init process enters main (refer to the following code excerpt), it processes cmdline, mounts some tmpfs filesystems such as /dev, and some pseudo-filesystems such as procfs.
For SE for Android devices, init was modified to load the policy into the kernel as early in the boot process as possible. The policy in an SELinux system is not built into the kernel; it resides in a separate file. In Android, the only filesystem mounted in early boot is the root filesystem, a ramdisk built into boot.img. The policy can be found in this root filesystem at /sepolicy on the UDOO or target device. At this point, the init process calls a function to load the policy from the disk and sends it to the kernel, as follows:

int main(int argc, char *argv[])
{
...
  process_kernel_cmdline();

  union selinux_callback cb;
  cb.func_log = klog_write;
  selinux_set_callback(SELINUX_CB_LOG, cb);

  cb.func_audit = audit_callback;
  selinux_set_callback(SELINUX_CB_AUDIT, cb);

  INFO("loading selinux policy\n");
  if (selinux_enabled) {
    if (selinux_android_load_policy() < 0) {
      selinux_enabled = 0;
      INFO("SELinux: Disabled due to failed policy load\n");
    } else {
      selinux_init_all_handles();
    }
  } else {
    INFO("SELinux:  Disabled by command line option\n");
  }
...

In the preceding code, you will notice the very nice log message, SELinux: Disabled due to failed policy load, and wonder why we didn't see this when we ran dmesg before. This code executes before setlevel in init.rc is executed. The default init log level is set by the definition of KLOG_DEFAULT_LEVEL in system/core/include/cutils/klog.h. If we really wanted to, we could change that, rebuild, and actually see that message.

Now that we have identified the initial path of the policy load, let's follow it on its course through the system. The selinux_android_load_policy() function can be found in the Android fork of libselinux, which is in the UDOO Android source tree. The library can be found at external/libselinux, and all of the Android modifications can be found in src/android.c. The function starts by mounting a pseudo-filesystem called SELinuxFS.
In systems that do not have sysfs mounted, the mount point is /selinux; on systems that have sysfs mounted, the mount point is /sys/fs/selinux. You can check mount points on a running system using the following command:

# mount | grep selinuxfs
selinuxfs /sys/fs/selinux selinuxfs rw,relatime 0 0

SELinuxFS is an important filesystem, as it provides the interface between the kernel and userspace for controlling and manipulating SELinux. As such, it has to be mounted for the policy load to work. The policy load uses the filesystem to send the policy file bytes to the kernel. This happens in the selinux_android_load_policy() function:

int selinux_android_load_policy(void)
{
  char *mnt = SELINUXMNT;
  int rc;
  rc = mount(SELINUXFS, mnt, SELINUXFS, 0, NULL);
  if (rc < 0) {
    if (errno == ENODEV) {
      /* SELinux not enabled in kernel */
      return -1;
    }
    if (errno == ENOENT) {
      /* Fall back to legacy mountpoint. */
      mnt = OLDSELINUXMNT;
      rc = mkdir(mnt, 0755);
      if (rc == -1 && errno != EEXIST) {
        selinux_log(SELINUX_ERROR, "SELinux:  Could not mkdir:  %s\n",
          strerror(errno));
        return -1;
      }
      rc = mount(SELINUXFS, mnt, SELINUXFS, 0, NULL);
    }
  }
  if (rc < 0) {
    selinux_log(SELINUX_ERROR, "SELinux:  Could not mount selinuxfs:  %s\n",
      strerror(errno));
    return -1;
  }
  set_selinuxmnt(mnt);

  return selinux_android_reload_policy();
}

The set_selinuxmnt(char *mnt) function changes a global variable in libselinux so that other routines can find the location of this vital interface. From there, it calls another helper function, selinux_android_reload_policy(), which is located in the same libselinux android.c file. It loops through an array of possible policy locations in priority order.
This array is defined as follows:

static const char *const sepolicy_file[] = {
  "/data/security/current/sepolicy",
  "/sepolicy",
  0
};

Since only the root filesystem is mounted, it chooses /sepolicy at this time. The other path is for dynamic runtime reloads of policy. After acquiring a valid file descriptor to the policy file, the file is memory mapped into the process's address space, and security_load_policy(map, size) is called to load it into the kernel. This function is defined in load_policy.c. Here, the map parameter is the pointer to the beginning of the policy file, and the size parameter is the size of the file in bytes:

int selinux_android_reload_policy(void)
{
  int fd = -1, rc;
  struct stat sb;
  void *map = NULL;
  int i = 0;

  while (fd < 0 && sepolicy_file[i]) {
    fd = open(sepolicy_file[i], O_RDONLY | O_NOFOLLOW);
    i++;
  }
  if (fd < 0) {
    selinux_log(SELINUX_ERROR, "SELinux:  Could not open sepolicy:  %s\n",
      strerror(errno));
    return -1;
  }
  if (fstat(fd, &sb) < 0) {
    selinux_log(SELINUX_ERROR, "SELinux:  Could not stat %s:  %s\n",
      sepolicy_file[i], strerror(errno));
    close(fd);
    return -1;
  }
  map = mmap(NULL, sb.st_size, PROT_READ, MAP_PRIVATE, fd, 0);
  if (map == MAP_FAILED) {
    selinux_log(SELINUX_ERROR, "SELinux:  Could not map %s:  %s\n",
      sepolicy_file[i], strerror(errno));
    close(fd);
    return -1;
  }

  rc = security_load_policy(map, sb.st_size);
  if (rc < 0) {
    selinux_log(SELINUX_ERROR, "SELinux:  Could not load policy:  %s\n",
      strerror(errno));
    munmap(map, sb.st_size);
    close(fd);
    return -1;
  }

  munmap(map, sb.st_size);
  close(fd);
  selinux_log(SELINUX_INFO, "SELinux: Loaded policy from %s\n", sepolicy_file[i]);

  return 0;
}

The security_load_policy() function opens the <selinuxmnt>/load file, which in our case is /sys/fs/selinux/load.
At this point, the policy is written to the kernel via this pseudo file:

int security_load_policy(void *data, size_t len)
{
  char path[PATH_MAX];
  int fd, ret;

  if (!selinux_mnt) {
    errno = ENOENT;
    return -1;
  }

  snprintf(path, sizeof path, "%s/load", selinux_mnt);
  fd = open(path, O_RDWR);
  if (fd < 0)
    return -1;

  ret = write(fd, data, len);
  close(fd);
  if (ret < 0)
    return -1;
  return 0;
}

Fixing the policy version

At this point, we have a clear idea of how the policy is loaded into the kernel. This is very important. SELinux integration with Android began in Android 4.0, so when porting to various forks and fragments, this breaks, and code is often missing. Understanding all parts of the system, however cursory, will help us to correct issues as they appear in the wild and develop. This information is also useful to understand the system as a whole, so when modifications need to be made, you'll know where to look and how things work.

At this point, we're ready to correct the policy versions. The logs and kernel config are clear; only policy versions up to 23 are supported, and we're trying to load policy version 26. This will probably be a common problem with Android considering kernels are often out of date. There is also an issue with the 4.3 sepolicy shipped by Google. Some changes by Google made it a bit more difficult to configure devices as they tailored the policy to meet their release goals. Essentially, the policy allows nearly everything and therefore generates very few denial logs. Some domains in the policy are completely permissive via a per-domain permissive statement, and those domains also have rules to allow everything so denial logs do not get generated. To correct this, we can use a more complete policy from the NSA. Replace external/sepolicy with the download from https://bitbucket.org/seandroid/external-sepolicy/get/seandroid-4.3.tar.bz2. After we extract the NSA's policy, we need to correct the policy version.
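Before adjusting anything, it helps to confirm what the running kernel actually supports. The maximum policy version can be read straight from SELinuxFS; the following is a small sketch (the SELINUXFS variable name is ours, and the checks let it degrade gracefully on machines without SELinux):

```shell
# Read the kernel's maximum supported policy version from SELinuxFS.
SELINUXFS=/sys/fs/selinux
[ -d "$SELINUXFS" ] || SELINUXFS=/selinux   # legacy mount point

if [ -r "$SELINUXFS/policyvers" ]; then
  echo "Kernel supports policy versions up to: $(cat "$SELINUXFS/policyvers")"
else
  echo "SELinuxFS is not mounted; SELinux may be disabled in this kernel"
fi
```

The compiled policy version must be less than or equal to the number this reports.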
The policy is located in external/sepolicy and is compiled with a tool called checkpolicy. The Android.mk file for sepolicy has to pass this version number to the compiler, so we can adjust it here. At the top of the file, we find the culprit:

...
# Must be <= /selinux/policyvers reported by the Android kernel.
# Must be within the compatibility range reported by checkpolicy -V.
POLICYVERS ?= 26
...

Since the variable is overridable via the ?= assignment, we can override it in BoardConfig.mk. Edit device/fsl/imx6/BoardConfigCommon.mk, adding the following POLICYVERS line to the bottom of the file:

...
BOARD_FLASH_BLOCK_SIZE := 4096
TARGET_RECOVERY_UI_LIB := librecovery_ui_imx
# SELinux Settings
POLICYVERS := 23
-include device/google/gapps/gapps_config.mk

Since the policy is on the boot.img image, build the policy and bootimage:

$ mmm -B external/sepolicy/
$ make -j4 bootimage 2>&1 | tee logz
!!!!!!!!! WARNING !!!!!!!!! VERIFY BLOCK DEVICE !!!!!!!!!
$ sudo chmod 666 /dev/sdd1
$ dd if=$OUT/boot.img of=/dev/sdd1 bs=8192 conv=fsync

Eject the SD card, place it into the UDOO, and boot. The first of the preceding commands should produce the following log output:

out/host/linux-x86/bin/checkpolicy: writing binary representation (version 23) to out/target/product/udoo/obj/ETC/sepolicy_intermediates/sepolicy

At this point, by checking the SELinux logs using dmesg, we can see the following:

# dmesg | grep -i selinux
<6>init: loading selinux policy
<7>SELinux: 128 avtab hash slots, 490 rules.
<7>SELinux: 128 avtab hash slots, 490 rules.
<7>SELinux: 1 users, 2 roles, 274 types, 0 bools, 1 sens, 1024 cats
<7>SELinux: 84 classes, 490 rules
<7>SELinux: Completing initialization.

Another command we need to run is getenforce. The getenforce command gets the SELinux enforcing status.
It can be in one of three states:

Disabled: No policy is loaded, or there is no kernel support
Permissive: Policy is loaded and the device logs denials (but is not in enforcing mode)
Enforcing: Similar to the permissive state, except that policy violations result in EACCES being returned to userspace

One of the goals while booting an SELinux system is to get to the enforcing state. Permissive is used for debugging, as follows:

# getenforce
Permissive

Summary

In this article, we covered the important policy load flow through the init process. We also changed the policy version to suit our development efforts and kernel version. From there, we were able to load the NSA policy and verify that the system loaded it. This article additionally showcased some of the SELinux APIs and their interactions with SELinuxFS.

Resources for Article:

Further resources on this subject:

Android and UDOO Home Automation [article]
Sound Recorder for Android [article]
Android Virtual Device Manager [article]
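The three getenforce states above follow directly from two facts: whether SELinuxFS is mounted (kernel support plus a loaded policy) and the value of the enforce node. The following is a minimal sketch of that mapping; the helper name and interface are illustrative, not the libselinux API:

```python
def selinux_status(selinuxfs_mounted, enforce_value=None):
    """Model the three getenforce states described above.

    selinuxfs_mounted -- whether /sys/fs/selinux exists (kernel support
                         and a loaded policy)
    enforce_value     -- contents of /sys/fs/selinux/enforce: 0 or 1
    """
    if not selinuxfs_mounted or enforce_value is None:
        return "Disabled"    # no policy loaded or no kernel support
    # Violations are only denied (EACCES) when enforce is 1
    return "Enforcing" if enforce_value == 1 else "Permissive"
```

On a real device, the equivalent check is reading /sys/fs/selinux/enforce, which is exactly what the getenforce utility does.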
Packt
10 Aug 2015
10 min read
Securing OpenStack Networking

In this article by Fabio Alessandro Locati, author of the book OpenStack Cloud Security, you will learn about the importance of firewalls, IDS, and IPS. You will also learn about Generic Routing Encapsulation and VXLAN. (For more resources related to this topic, see here.)

The importance of firewall, IDS, and IPS

The security of a network can and should be achieved in multiple ways. Three components that are critical to the security of a network are:

Firewall
Intrusion detection system (IDS)
Intrusion prevention system (IPS)

Firewall

Firewalls are systems that control the traffic passing through them based on rules. This can seem similar to a router, but they are very different: a router allows communication between different networks, while a firewall limits communication between networks and hosts. The confusion may arise because very often a router will have firewall functionality and vice versa. Firewalls need to be connected in series to your infrastructure. The first paper on firewall technology appeared in 1988 and described the packet filter firewall, often known as the first generation firewall. This kind of firewall analyzes the packets passing through, and if a packet matches a rule, the firewall acts according to that rule. It analyzes each packet in isolation, without considering other packets. It works on the first three layers of the OSI model, with very few features using layer 4, specifically to check port numbers and protocols (UDP/TCP). First generation firewalls are still in use because, in a lot of situations, they do the job properly and are cheap and secure. Typical filtering examples: these firewalls prohibit (or allow) IPs of certain classes (or specific IPs) from accessing certain IPs, or allow traffic to a specific IP only on specific ports.
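The first-generation filtering just described, where each packet is judged in isolation against a rule list, can be sketched in a few lines. This is an illustrative model, not a real firewall API; the rule set and helper name are invented for the example:

```python
from ipaddress import ip_address, ip_network

# Hypothetical first-match rule set: (source net, destination net, dst port or None, action)
RULES = [
    (ip_network("10.0.0.0/8"), ip_network("192.168.1.10/32"), 22,   "allow"),
    (ip_network("0.0.0.0/0"),  ip_network("192.168.1.10/32"), 80,   "allow"),
    (ip_network("0.0.0.0/0"),  ip_network("0.0.0.0/0"),       None, "deny"),
]

def filter_packet(src, dst, dport):
    """First-generation filtering: each packet is judged in isolation."""
    for src_net, dst_net, port, action in RULES:
        if (ip_address(src) in src_net and ip_address(dst) in dst_net
                and (port is None or port == dport)):
            return action
    return "deny"  # default-deny if no rule matches
```

Note that the decision uses only the packet's own addresses and port (layers 3 and 4), which is exactly why this design cannot reason about connections.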
There are no known attacks against this kind of firewall, but specific models can have specific bugs that can be exploited. In 1990, a new generation of firewall appeared. The initial name was circuit-level gateway, but today it is far more commonly known as the stateful firewall or second generation firewall. These firewalls are able to understand when connections are being initialized and closed, so the firewall knows the current state of a connection when a packet arrives. To do so, this kind of firewall uses the first four layers of the networking stack. This allows the firewall to drop all packets that are neither establishing a new connection nor part of an already established connection. These firewalls are very powerful with the TCP protocol because it has states, while they have very small advantages over first generation firewalls when handling UDP or ICMP packets, since those packets travel with no connection. In these cases, the firewall sets the connection as established when the first valid packet passes through, and closes it after the connection times out. Performance-wise, a stateful firewall can be faster than a packet filter firewall because, if a packet is part of an active connection, no further tests are performed against it. These kinds of firewalls are more susceptible to bugs in their code, since inspecting more of the packet makes them easier to exploit. Also, on many devices, it is possible to open connections (with SYN packets) until the firewall is saturated. In such cases, the firewall usually downgrades itself to a simple router, allowing all traffic to pass through it. In 1991, improvements were made to the stateful firewall, allowing it to understand more about the protocol of the packets it was evaluating. The firewalls of this kind before 1994 had major problems, such as working as a proxy that the user had to interact with.
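The connection-tracking behavior described above can be sketched as a small state table: packets on a known connection skip the rule checks, and only connection-opening packets are tested. This is a simplified illustration, not a real implementation; production stateful firewalls track the full TCP state machine plus timeouts, which is what the SYN-saturation attack mentioned above abuses:

```python
class StatefulFirewall:
    """Second-generation sketch: established connections skip rule evaluation."""

    def __init__(self, allow_new):
        self.allow_new = allow_new   # predicate applied only to connection-opening packets
        self.established = set()     # simplistic state table keyed by (src, dst, dport)

    def handle(self, src, dst, dport, syn=False, fin=False):
        key = (src, dst, dport)
        if key in self.established:
            if fin:
                self.established.discard(key)  # connection closed, drop the state
            return "pass"                       # no further tests on known connections
        if syn and self.allow_new(src, dst, dport):
            self.established.add(key)           # remember the new connection
            return "pass"
        return "drop"  # not part of any connection and not a permitted new one
```

Usage: a firewall built with `StatefulFirewall(lambda s, d, p: p == 443)` admits a SYN to port 443, then passes follow-up packets on that connection without re-checking rules, while rejecting unrelated traffic.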
In 1994, the first application firewall as we know it was born, doing its entire job completely transparently. To be able to understand the protocol, this kind of firewall requires an understanding of all seven layers of the OSI model. As for security, the same considerations that apply to the stateful firewall apply to the application firewall as well.

Intrusion detection system (IDS)

IDSs are systems that monitor network traffic looking for policy violations and malicious traffic. The goal of an IDS is not to block malicious activity, but to log and report it. These systems act in a passive mode, so you'll not see any traffic coming from them. This is very important because it makes them invisible to attackers, so you can gain information about the attack without the attacker knowing. IDSs need to be connected in parallel to your infrastructure.

Intrusion prevention system (IPS)

IPSs are sometimes referred to as Intrusion Detection and Prevention Systems (IDPS), since they are IDSs that are also able to fight back against malicious activities. IPSs have a greater ability to act than IDSs. Other than reporting, like an IDS, they can also drop malicious packets, reset the connection, and block the traffic from the offending IP address. IPSs need to be connected in series to your infrastructure.

Generic Routing Encapsulation (GRE)

GRE is a Cisco tunneling protocol that is difficult to position in the OSI model. The best place for it is between layers 2 and 3. Being above layer 2 (where VLANs are), we can use GRE inside a VLAN. We will not go deep into the technicalities of this protocol; I'd like to focus more on the advantages and disadvantages it has over VLAN. The first advantage of (extended) GRE over VLAN is scalability. In fact, VLAN is limited to 4,096 networks, while GRE tunnels do not have this limitation.
If you are running a private cloud and you are working in a small corporation, 4,096 networks could be enough, but they will definitely not be enough if you work for a big corporation or if you are running a public cloud. Also, unless you use VTP for your VLANs, you'll have to add VLANs to each network device, while GREs don't need this. You cannot have more than 4,096 VLANs in an environment. The second advantage is security. Since you can deploy multiple GRE tunnels in a single VLAN, you can connect a machine to a single VLAN and multiple GRE networks without the risks that come with putting a port in trunking mode, which is needed to bring more VLANs to the same physical port. For these reasons, GRE has been a very common choice in a lot of OpenStack clusters deployed up to OpenStack Havana. The current preferred networking choice (since Icehouse) is Virtual Extensible LAN (VXLAN).

VXLAN

VXLAN is a network virtualization technology whose specifications were originally created by Arista Networks, Cisco, and VMware, and many other companies have backed the project. Its goal is to offer a standardized overlay encapsulation protocol, and it was created because standard VLANs were too limited for current cloud needs and the GRE protocol was a Cisco protocol. It works by carrying layer 2 Ethernet frames within layer 4 UDP packets on port 4789. As for the maximum number of networks, the limit is 16 million logical networks. Since the Icehouse release, the suggested standard for networking is VXLAN.

Flat network versus VLAN versus GRE in OpenStack Quantum

In OpenStack Quantum, you can decide to use multiple technologies for your networks: flat network, VLAN, GRE, and the most recent, VXLAN. Let's discuss them in detail:

Flat network: It is often used in private clouds since it is very easy to set up. The downside is that any virtual machine will see all the other virtual machines in our cloud.
I strongly discourage people from using this network design because it's unsafe and, in the long run, it will cause problems, as we have seen earlier.

VLAN: It is sometimes used in bigger private clouds and sometimes even in small public clouds. The advantage is that many times you already have a VLAN-based installation in your company. The major disadvantages are the need to trunk ports for each physical host and possible propagation problems. I discourage this approach, since in my opinion the advantages are very limited while the disadvantages are pretty strong.

VXLAN: It should be used in any kind of cloud due to its technical advantages. It allows a huge number of networks, it's way more secure, and it often eases debugging.

GRE: Until the Havana release, it was the suggested protocol, but since the Icehouse release the suggestion has been to move toward VXLAN, where the majority of the development is focused.

Design a secure network for your OpenStack deployment

As with the physical infrastructure, we have to design the network securely. We have seen that network security is critical and that there are a lot of possible attacks in this realm. Is it possible to design a secure environment to run OpenStack? Yes it is, if you remember a few rules:

Create different networks, at the very least for management and external data (this network usually already exists in your organization and is the one where all your clients are)
Never put ports in trunking mode if you use VLANs in your infrastructure; otherwise, physically separated networks will be needed

The following diagram is an example of how to implement it. Here, the management, tenant, and external networks could be either VLANs or real networks. Remember that to avoid VLAN trunking, you need at least as many physical ports as VLANs the machine has to be subscribed to, since port trunking can be a huge security hole.
A management network is needed for the administrator to administer the machines and for the OpenStack services to speak to each other. This network is critical, since it may contain sensitive data, and for this reason it has to be disconnected from other networks, or, if that is not possible, have very limited connectivity. The external network is used by virtual machines to access the Internet (and vice versa). In this network, all machines will need an IP address reachable from the Web. The tenant network, sometimes called the internal or guest network, is the network where the virtual machines can communicate with other virtual machines in the same cloud. This network, in some deployment cases, can be merged with the external network, but this choice has some security drawbacks. The API network is used to expose OpenStack APIs to the users. This network requires IP addresses reachable from the Web, and for this reason is often merged into the external network. There are cases where provider networks are needed to connect tenant networks to existing networks outside the OpenStack cluster. Those networks are created by the OpenStack administrator and map directly to an existing physical network in the data center.

Summary

In this article, we have seen how networking works, which attacks we can expect, and how we can counter them. Also, we have seen how to implement a secure deployment of OpenStack Networking.

Resources for Article:

Further resources on this subject:

Cloud distribution points [Article]
Photo Stream with iCloud [Article]
Integrating Accumulo into Various Cloud Platforms [Article]
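Returning to the VXLAN encapsulation discussed earlier in this article: the 8-byte VXLAN header carries a flags byte with the I-bit set and a 24-bit network identifier (VNI), which is where the 16-million-network limit comes from. A rough sketch of packing that header (following the RFC 7348 layout; the helper itself is illustrative):

```python
import struct

VXLAN_PORT = 4789      # UDP destination port used by VXLAN
VNI_LIMIT = 2 ** 24    # 24-bit VNI -> roughly 16 million logical networks

def vxlan_header(vni):
    """Pack the 8-byte VXLAN header: I-flag set, 24-bit VNI, reserved bits zero."""
    if not 0 <= vni < VNI_LIMIT:
        raise ValueError("VNI must fit in 24 bits")
    # Word 1: flags byte (0x08 = I-flag) followed by 24 reserved bits.
    # Word 2: VNI in the upper 24 bits, low 8 bits reserved.
    return struct.pack(">II", 0x08 << 24, vni << 8)
```

The inner Ethernet frame follows this header inside the UDP payload, which is the "layer 2 frames within layer 4 UDP packets" arrangement described above.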
Sugandha Lahoti
18 Jul 2018
3 min read
Furthering the Net Neutrality debate, GOP proposes the 21st Century Internet Act

GOP Rep. Mike Coffman has proposed a new bill to solidify the principles of net neutrality into law, rather than leaving them as a set of rules to be modified by the FCC every year. The bill, known as the 21st Century Internet Act, would ban providers from blocking, throttling, or offering paid fast lanes. It would also forbid them from participating in paid prioritization and charging access fees from edge providers. It could take some time for this amendment to be voted on in Congress; it mostly depends on the makeup of Congress after the midterm elections. The 21st Century Internet Act modifies the Communications Act of 1934 and adds a new Title VIII section full of conditions specific to internet providers. This new title permanently codifies into law the "four corners" of net neutrality. The amendment proposes these measures:

No blocking: A broadband internet access service provider cannot block lawful content, or charge an edge provider a fee to avoid blocking of content.
No throttling: The service provider cannot degrade or enhance (slow down or speed up) internet traffic.
No paid prioritization: The internet access provider may not engage in paid preferential treatment.
No unreasonable interference: The service provider cannot interfere with the ability of end users to select the internet access service of their choice.

This bill aims to settle the long-running debate over whether internet access is an information service or a telecommunications service. In his letter to FCC Chairman Ajit Pai, Coffman mentions, "The Internet has been and remains a transformative tool, and I am concerned any action you may take to alter the rules under which it functions may well have significant unanticipated negative consequences." As far as the FCC's role is concerned, the 21st Century Internet Act will solidify the rules of net neutrality, barring the FCC from modifying them.
The commission will solely be responsible for watching over the bill's implementation and enforcing the law. This would include investigating unfair acts or practices, such as false advertising, misrepresenting the product, etc. The Senate has already voted to save net neutrality by passing the CRA measure back in May 2018. The Congressional Review Act (CRA) measure received a 52-47 vote, overturning the FCC's repeal and keeping the net neutrality rules on the books. The 21st Century Internet Act is being seen in a good light by The Internet Association, which represents Google, Facebook, Netflix, and others, who commended Coffman on his bill and called it a "step in the right direction." For the rest of us, it will be quite interesting to see the bill's progress and its fate as it goes through the voting process and then into the White House for final approval.

5 reasons why the government should regulate technology
DCLeaks and Guccifer 2.0: How hackers used social engineering to manipulate the 2016 U.S. elections
Tech, unregulated: Washington Post warns Robocalls could get worse
Packt
17 Nov 2014
9 min read
Cross-site Request Forgery

In this article by Y.E Liang, the author of JavaScript Security, we will cover cross-site request forgery. This topic is not exactly new. In this article, we will go deeper into cross-site request forgery and learn the various techniques of defending against it. (For more resources related to this topic, see here.)

Introducing cross-site request forgery

Cross-site request forgery (CSRF) exploits the trust that a site has in a user's browser. It is also defined as an attack that forces an end user to execute unwanted actions on a web application in which the user is currently authenticated.

Examples of CSRF

We will now take a look at a basic CSRF example. Go to the source code and change the directory. Run the following command:

python xss_version.py

Remember to start your MongoDB process as well. Next, open external.html, found in templates, in another host, say http://localhost:8888. You can do this by starting the server, which can be done by running python xss_version.py --port=8888, and then visiting http://localhost:8888/todo_external. You will see the following screenshot: Adding a new to-do item Click on Add To Do, and fill in a new to-do item, as shown in the following screenshot: Adding a new to-do item and posting it Next, click on Submit. Going back to your to-do list app at http://localhost:8000/todo and refreshing it, you will see the new to-do item added to the database, as shown in the following screenshot: To-do item is added from an external app; this is dangerous! To attack the to-do list app, all we need to do is add a new item that contains a line of JavaScript, as shown in the following screenshot: Adding a new to do for the Python version Now, click on Submit.
Then, go back to your to-do app at http://localhost:8000/todo, and you will see two subsequent alerts, as shown in the following screenshot: Successfully injected JavaScript part 1 So here's the first instance where CSRF happens: Successfully injected JavaScript part 2 Take note that this can happen to backends written in other languages as well. Now go to your terminal, turn off the Python server backend, and change the directory to node/. Start the node server by issuing this command:

node server.js

This time around, the server is running at http://localhost:8080, so remember to change the $.post() endpoint to http://localhost:8080 instead of http://localhost:8000 in external.html, as shown in the following code:

    function addTodo() {
      var data = {
        text: $('#todo_title').val(),
        details: $('#todo_text').val()
      }
      // $.post('http://localhost:8000/api/todos', data, function(result) {
      $.post('http://localhost:8080/api/todos', data, function(result) {
        var item = todoTemplate(result.text, result.details);
        $('#todos').prepend(item);
        $("#todo-form").slideUp();
      })
    }

The changed line is found in addTodo(); the highlighted code is the correct endpoint for this section. Now, going back to external.html, add a new to-do item containing JavaScript, as shown in the following screenshot: Trying to inject JavaScript into a to-do app based on Node.js As usual, submit the item. Go to http://localhost:8080/api/ and refresh; you should see two alerts (or four alerts if you didn't delete the previous ones). The first alert is as follows: Successfully injected JavaScript part 1 The second alert is as follows: Successfully injected JavaScript part 2 Now that we have seen what can happen to our app if we suffer a CSRF attack, let's think about how such attacks can happen. Basically, such attacks can happen when our API endpoints (or the URLs accepting the requests) are not protected at all.
Attackers can exploit such vulnerabilities by simply observing which endpoints are used and attempting to exploit them by performing a basic HTTP POST operation against them.

Basic defense against CSRF attacks

If you are using modern frameworks or packages, the good news is that you can easily protect against such attacks by turning on or making use of CSRF protection. For example, for server.py, you can turn on xsrf_cookies by setting it to True, as shown in the following code:

class Application(tornado.web.Application):
    def __init__(self):
        handlers = [
            (r"/api/todos", Todos),
            (r"/todo", TodoApp)
        ]
        conn = pymongo.Connection("localhost")
        self.db = conn["todos"]
        settings = dict(
            xsrf_cookies=True,
            debug=True,
            template_path=os.path.join(os.path.dirname(__file__), "templates"),
            static_path=os.path.join(os.path.dirname(__file__), "static")
        )
        tornado.web.Application.__init__(self, handlers, **settings)

Note the highlighted line, where we set xsrf_cookies=True. Have a look at the following code snippet:

var express    = require('express');
var bodyParser = require('body-parser');
var app        = express();
var session    = require('cookie-session');
var csrf       = require('csrf');

app.use(csrf());
app.use(bodyParser());

The highlighted lines are the new lines (compared to server.js) that add CSRF protection. Now that both backends are equipped with CSRF protection, you can try to make the same post from external.html. You will no longer be able to make any post from external.html. For example, you can open Chrome's developer tool and go to Network. You will see the following: POST forbidden On the terminal, you will see a 403 error from our Python server, which is shown in the following screenshot: POST forbidden from the server side

Other examples of CSRF

CSRF can also happen in many other ways.
In this section, we'll cover the other basic examples on how CSRF can happen. CSRF using the <img> tags This is a classic example. Consider the following instance: <img src=http://yousite.com/delete?id=2 /> Should you load a site that contains this img tag, chances are that a piece of data may get deleted unknowingly. Now that we have covered the basics of preventing CSRF attacks through the use of CSRF tokens, the next question you may have is: what if there are times when you need to expose an API to an external app? For example, Facebook's Graph API, Twitter's API, and so on, allow external apps not only to read, but also write data to their system. How do we prevent malicious attacks in this situation? We'll cover this and more in the next section. Other forms of protection Using CSRF tokens may be a convenient way to protect your app from CSRF attacks, but it can be a hassle at times. As mentioned in the previous section, what about the times when you need to expose an API to allow mobile access? Or, your app is growing so quickly that you want to accelerate that growth by creating a Graph API of your own. How do you manage it then? In this section, we will go quickly over the techniques for protection. Creating your own app ID and app secret – OAuth-styled Creating your own app ID and app secret is similar to what the major Internet companies are doing right now: we require developers to sign up for developing accounts and to attach an application ID and secret key for each of the apps. Using this information, the developers will need to exchange OAuth credentials in order to make any API calls, as shown in the following screenshot: Google requires developers to sign up, and it assigns the client ID On the server end, all you need to do is look for the application ID and secret key; if it is not present, simply reject the request. 
Have a look at the following screenshot: The same thing applies to Facebook; Facebook requires you to sign up, and it assigns an app ID and app secret.

Checking the Origin header

Simply put, you want to check where the request is coming from, and this is exactly what the Origin header tells you in layman's terms. There are at least two use cases for the Origin header, as follows:

If your endpoint is used internally (by your own web application), you can check whether the requests are indeed made from the same website, that is, your website.
If you are creating an endpoint for external use, such as something similar to Facebook's Graph API, you can make developers register the website URL where they are going to use the API. If the website URL does not match the registered one, you can reject the request.

Note that the Origin header can also be modified; for example, an attacker can provide a forged header.

Limiting the lifetime of the token

Assuming that you are generating your own tokens, you may also want to limit the lifetime of the token, for instance, making the token valid only for a certain time period while the user is logged in to your site. Similarly, your site can make this a requirement for requests to be made; if the token does not exist, HTTP requests cannot be made.

Summary

In this article, we covered the basic forms of CSRF attacks and how to defend against them. Note that these security loopholes can come from both the frontend and the server side.

Resources for Article:

Further resources on this subject:

Implementing Stacks using JavaScript [Article]
Cordova Plugins [Article]
JavaScript Promises – Why Should I Care? [Article]
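The token-lifetime idea from the section above can be sketched by signing an expiry timestamp along with the session identifier, so a token both expires and cannot be forged without the server secret. The secret and helper names here are illustrative, not from the book's code:

```python
import hashlib
import hmac
import time

SECRET = b"app-secret-demo"  # illustrative only; load a real secret from config

def issue_token(session_id, now=None, ttl=3600):
    """HMAC-sign the session id together with an expiry timestamp."""
    expires = int(now if now is not None else time.time()) + ttl
    msg = f"{session_id}:{expires}".encode()
    sig = hmac.new(SECRET, msg, hashlib.sha256).hexdigest()
    return f"{expires}:{sig}"

def verify_token(session_id, token, now=None):
    """Reject tokens that are malformed, expired, or whose signature fails."""
    try:
        expires_s, sig = token.split(":")
        expires = int(expires_s)
    except ValueError:
        return False
    if (now if now is not None else time.time()) > expires:
        return False  # lifetime exceeded
    msg = f"{session_id}:{expires}".encode()
    good = hmac.new(SECRET, msg, hashlib.sha256).hexdigest()
    return hmac.compare_digest(sig, good)
```

A server would embed the issued token in each form and call verify_token() on every state-changing request, which combines the CSRF-token and token-lifetime defenses described above.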
Packt
18 Jan 2010
3 min read
Install GNOME-Shell on Ubuntu 9.10 "Karmic Koala"

Remember, these are development builds and preview snapshots, and are still in the early stages. While it appears to be functional (so far) your mileage may vary. Installing GNOME-Shell With the release of Ubuntu 9.10, a GNOME-Shell preview is included in the repositories. This makes it very easy to install (and remove) as needed. The downside is that it is just a snapshot so you are not running the latest-greatest builds. For this reason I've included instructions on installing the package as well as compiling the latest builds. I should also note that GNOME Shell requires reasonable 3D support. This means that it will likely *not* work within a virtual machine. In particular, problems have been reported trying to run GNOME Shell with 3D support in VirtualBox. Package Installation If you'd prefer to install the package and just take a sneak-peek at the snapshot, simply run the command below in your terminal: sudo aptitude install gnome-shell Manual Compilation Manually compiling GNOME Shell will allow you to use the latest and greatest builds, but it can also require more work. The notes below are based on a successful build I did in late 2009, but your mileage may vary. If you run into problems please note the following: Installing GNOME Shell does not affect your current installation, so if the build breaks you should still have a clean environment. You can find more details as well as known issues here: GnomeShell There is one package that you'll need to compile GNOME Shell called jhbuild. This package, however, has been removed from the Ubuntu 9.10 repositories for being outdated. I did find that I could use the package from the 9.04 repository and haven’t noticed any problems in doing so. To install jhbuild from the 9.04 repository use the instructions below: Visit http://packages.ubuntu.com/jaunty/all/jhbuild/download Select a mirror close to you Download / Install the .deb package. 
I don't believe there are any additional dependencies needed for this package. After that package is installed, you'll want to download a GNOME Shell build setup script, which makes this entire process much, much simpler:

cd ~
wget http://git.gnome.org/cgit/gnome-shell/plain/tools/build/gnome-shell-build-setup.sh

This script will handle finding and installing dependencies as well as compiling the builds. To launch this script, run the command:

gnome-shell-build-setup.sh

You'll need to ensure that any suggested packages are installed before continuing. You may need to re-run this script multiple times until it has no more warnings. Lastly, you can begin the build process. This process took about twenty minutes on my C2D 2.0GHz Dell laptop. My build was completely automated, but considering this is building newer and newer builds, your mileage may vary. To begin the build process on your machine, run the command:

jhbuild build

Ready To Launch

Congratulations! You've now got GNOME-Shell installed and ready to launch. I've outlined the steps below. Please take note of the method, depending on how you installed. Also, please note that before you launch GNOME-Shell you must DISABLE Compiz. If you have Compiz running, navigate to System > Preferences > Appearances and disable it under the Desktop Effects tab.

Package Installation

Launch it with:

gnome-shell --replace

Manual Compilation

Launch it as follows:

~/gnome-shell/source/gnome-shell/src/gnome-shell --replace
Natasha Mathur
18 Dec 2018
9 min read
How IRA hacked American democracy using social media and meme warfare to promote disinformation and polarization: A new report to Senate Intelligence Committee

A new report prepared for the Senate Intelligence Committee by the cybersecurity firm New Knowledge was released yesterday. The report, titled "The Tactics & Tropes of the Internet Research Agency", provides insight into how the IRA, a group of Russian agents, used and continues to use social media to influence politics in America by exploiting the political and racial divisions in American society. "Throughout its multi-year effort, the Internet Research Agency exploited divisions in our society by leveraging vulnerabilities in our information ecosystem. We hope that our work has resulted in a clearer picture for policymakers, platforms, and the public alike and thank the Senate Select Committee on Intelligence for the opportunity to serve", says the report. Russian interference during the 2016 presidential elections comprised Russian agents trying to hack the online voting systems, cyber-attacks aimed at the Democratic National Committee, and social media influence tactics to exacerbate the political and social divisions in the US. As a part of the SSCI's investigation into the IRA's social media activities, some of the social platform companies misused by the IRA, such as Twitter, Facebook, and Alphabet, provided data related to IRA influence tactics. However, none of these platforms provided complete sets of related data to the SSCI. "Some of what was turned over was in PDF form; other datasets contained extensive duplicates. Each lacked core components that would have provided a fuller and more actionable picture. The data set provided to the SSCI for this analysis includes data previously unknown to the public... and is the first comprehensive analysis by entities other than the social platforms", reads the report. The report brings to light the IRA's strategy, which involved deciding on certain themes, primarily social issues, and then reinforcing these themes across its Facebook, Instagram, and YouTube content.
Different topics such as Black culture, anti-Clinton, pro-Trump, anti-refugee, Muslim culture, LGBT culture, Christian culture, feminism, veterans, ISIS, and so on were grouped thematically on Facebook Pages and Instagram accounts to reinforce the culture and foster feelings of pride.

Here is a look at some key highlights from the report.

Key takeaways

IRA used Instagram as its biggest tool for influence

As per the report, Facebook executives, during the Congressional testimony held in April this year, hid the fact that Instagram played a major role in the IRA’s influence operation. There were about 187 million engagements on Instagram compared to 76.5 million on Facebook and 73 million on Twitter, according to a data set of posts between 2015 and 2018. In 2017, the IRA moved much of its activity and influence operations to Instagram as the media started looking into its Facebook and Twitter operations. Instagram was the most effective platform for the Internet Research Agency: approximately 40% of its Instagram accounts achieved over 10,000 followers (a level referred to as “micro-influencers” by marketers), and twelve of these accounts had over 100,000 followers (“influencer” level).

The Tactics & Tropes of IRA

“Instagram engagement outperformed Facebook, which may indicate its strength as a tool in image-centric memetic (meme) warfare. Our assessment is that Instagram is likely to be a key battleground on an ongoing basis,” reads the report.

Apart from social media posts, another feature of the IRA’s Instagram activity was merchandise. This merchandise promotion aimed at building partnerships to boost audience growth and obtain audience data. This was especially evident in the Black-targeted communities, with hashtags #supportblackbusiness and #buyblack appearing quite frequently. In fact, sometimes these IRA pages also offered coupons in exchange for sharing content.
The Tactics & Tropes of IRA

IRA promoted voter suppression operations

The report states that although Twitter and Facebook were still debating whether any voter suppression content was present on their platforms, three major variants of voter suppression narratives were found widespread on Twitter, Facebook, Instagram, and YouTube. These included malicious misdirection (e.g. tweets promoting false voting rules), candidate support redirection, and turnout depression (e.g. no need to vote, your vote doesn’t matter).

The Tactics & Tropes of IRA

For instance, a few days before the 2016 presidential elections in the US, the IRA started to implement voter suppression tactics on its Black-community-targeted accounts. It began spreading content about voter fraud and delivering warnings that the “election would be stolen and violence might be necessary”. These suppression narratives and content were targeted almost exclusively at the Black community on Instagram and Facebook. There was also the promotion of other kinds of content, on topics such as alienation and violence, to divert people’s attention away from politics. Other varieties of voter suppression narratives included: “don’t vote, stay home”, “this country is not for Black people”, and “these candidates don’t care about Black people”. Voter-suppression narratives aimed at non-Black communities focused primarily on promoting identity and pride for communities like Native Americans, LGBT+, and Muslims.

The Tactics & Tropes of IRA

Then there were narratives that directly and broadly called for voting for candidates other than Hillary Clinton, and pages on Facebook that posted repeatedly about voter fraud, stolen elections, conspiracies about machines provided by Soros, and rigged votes.

IRA largely targeted Black American communities

The IRA’s major efforts on Facebook and Instagram were targeted at Black communities in America and involved developing and recruiting Black Americans as assets.
The report states that the IRA adopted a cross-platform “media mirage” strategy, which shared authentic Black-related content to create a strong influence on the Black community over social media. An example presented in the report is a case study of “Black Matters”, which illustrates the extent to which the IRA created an “inauthentic media property” by creating different accounts across the social platforms to “reinforce its brand” and widely distribute its content. “Using only the data from the Facebook Page posts and memes, we generated a map of the cross-linked properties – other accounts that the Pages shared from, or linked to – to highlight the complex web of IRA-run accounts designed to surround Black audiences,” reads the report. So, an individual who followed or liked one of the Black-community-targeted IRA Pages would be exposed to content from a dozen more pages.

Apart from the IRA’s media mirage strategy, there was also a human asset recruitment strategy. It involved posts encouraging Americans to perform different types of tasks for IRA handlers. Some of these tasks included requests for contact with preachers from Black churches, soliciting volunteers to hand out fliers, offering free self-defense classes (Black Fist/Fit Black), requests for speakers at protests, and so on. These posts appeared in the Black-, Left-, and Right-targeted groups, although they were mostly present in the Black groups and communities. “The IRA exploited the trust of their Page audiences to develop human assets, at least some of whom were not aware of the role they played. This tactic was substantially more pronounced on Black-targeted accounts”, reads the report.

The IRA also created domain names such as blackvswhite.info, blackmattersusa.com, blacktivist.info, blacktolive.org, and so on. It also created YouTube channels like “Cop Block US” and “Don’t Shoot” to spread anti-Clinton videos.
In response to these reports of specific targeting of Black users on Facebook, the National Association for the Advancement of Colored People (NAACP) returned a donation from Facebook and called on its users yesterday to log out of all Facebook-owned products, such as Facebook, Instagram, and WhatsApp, today. “NAACP remains concerned about the data breaches and numerous privacy mishaps that the tech giant has encountered in recent years, and is especially critical about those which occurred during the last presidential election campaign”, reads the NAACP announcement.

IRA promoted pro-Trump and anti-Clinton operations

As per the report, the IRA focused on promoting political content expressing pro-Donald Trump sentiment over different channels and pages, regardless of whether those pages targeted conservatives, liberals, or racial and ethnic groups.

The Tactics & Tropes of IRA

On the other hand, large volumes of political content articulated anti-Hillary Clinton sentiment among both the Right- and Left-leaning communities created by the IRA. Moreover, there weren’t any communities or pages on Instagram and Facebook that favored Clinton. There were some pro-Clinton Twitter posts; however, most of the tweets were still largely anti-Clinton.

The Tactics & Tropes of IRA

Additionally, there were different YouTube channels created by the IRA, such as Williams & Kalvin, Cop Block US, and Don’t Shoot, and 25 videos across these channels contained election-related keywords in their titles, all of them anti-Hillary Clinton. An example presented in the report is of one of the political channels, Paul Jefferson, which solicited videos for a #PeeOnHillary video challenge (the hashtag also appeared on Twitter and Instagram) and shared the submissions it received. Other videos promoted by these YouTube channels included “The truth about elections”, “HILLARY RECEIVED $20,000 DONATION FROM KKK TOWARDS HER CAMPAIGN”, and so on.
Also, on the IRA’s Facebook account, the post with the most shares and engagement was a conspiracy theory about President Barack Obama refusing to ban Sharia Law, encouraging Trump to take action.

The Tactics & Tropes of IRA

Also, the number one post on Facebook featuring Hillary Clinton was a conspiratorial post made public a month before the election.

The Tactics & Tropes of IRA

These were some of the major highlights from the report. However, the report states that there is still a lot to be done with regard to the IRA specifically. There is a need for further investigation of subscription and engagement pathways, and only the social media platforms currently have that data. The New Knowledge team hopes that these platforms will provide more data that can speak to the impact among the targeted communities. For more information on the tactics of the IRA, read the full report here.
Packt
01 Dec 2009
12 min read

Blocking Common Attacks using ModSecurity 2.5: Part 3

Source code revelation

Normally, requesting a file with a .php extension will cause mod_php to execute the PHP code contained within the file and then return the resulting web page to the user. If the web server is misconfigured (for example, if mod_php is not loaded) then the .php file will be sent by the server without interpretation, and this can be a security problem. If the source code contains credentials used to connect to an SQL database then that opens up an avenue for attack, and of course the source code being available will allow a potential attacker to scrutinize the code for vulnerabilities.

Preventing source code revelation is easy. With response body access on in ModSecurity, simply add a rule to detect the opening PHP tag:

# Prevent PHP source code from being disclosed
SecRule RESPONSE_BODY "<?" "deny,msg:'PHP source code disclosure blocked'"

Preventing Perl and JSP source code from being disclosed works in a similar manner:

# Prevent Perl source code from being disclosed
SecRule RESPONSE_BODY "#!/usr/bin/perl" "deny,msg:'Perl source code disclosure blocked'"

# Prevent JSP source code from being disclosed
SecRule RESPONSE_BODY "<%" "deny,msg:'JSP source code disclosure blocked'"

Directory traversal attacks

Normally, all web servers should be configured to reject attempts to access any document that is not under the web server's root directory. For example, if your web server root is /home/www, then attempting to retrieve /home/joan/.bashrc should not be possible, since this file is not located under the /home/www web server root. The obvious attempt to access the /home/joan directory is, of course, easy for the web server to block; however, there is a more subtle way to access this directory which still allows the path to start with /home/www, and that is to make use of the .. symbolic directory link, which links to the parent directory in any given directory.
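The effect of the .. link can be seen in a short sketch (a hypothetical application-side check in Python, not part of ModSecurity): normalizing the path shows how a request that starts under the web root can still resolve outside it.

```python
import os.path

WEB_ROOT = "/home/www"  # hypothetical web server root from the example above

def is_safe(requested_path):
    # Collapse any ".." components, mirroring what the filesystem
    # does when the path is actually opened
    resolved = os.path.normpath(os.path.join(WEB_ROOT, requested_path.lstrip("/")))
    # The resolved path must stay at or below the web root
    return resolved == WEB_ROOT or resolved.startswith(WEB_ROOT + "/")

print(is_safe("index.html"))                 # True
print(is_safe("../joan/.bashrc"))            # False: resolves to /home/joan/.bashrc
print(is_safe("images/../../joan/.bashrc"))  # False: starts under the root, still escapes
```

A check like this belongs in the web application itself; the ModSecurity rule adds a second, independent layer.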
Even though most web servers are hardened against this sort of attack, web applications that accept input from users may still not be checking it properly, potentially allowing users to get access to files they shouldn't be able to view via simple directory traversal attacks. This alone is reason to implement protection against this sort of attack using ModSecurity rules. Furthermore, keeping with the principle of Defense in Depth, having multiple protections against this vulnerability can be beneficial in case the web server should contain a flaw that allows this kind of attack in certain circumstances.

There is more than one way to validly represent the .. link to the parent directory. URL encoding of .. yields %2e%2e, and adding the final slash at the end we end up with %2e%2e%2f. Here, then, is a list of what needs to be blocked:

../
..%2f
.%2e/
%2e%2e%2f
%2e%2e/
%2e./

Fortunately, we can use the ModSecurity transformation t:urlDecode. This function does all the URL decoding for us, and will allow us to ignore the percent-encoded values, and thus only one rule is needed to block these attacks:

SecRule REQUEST_URI "../" "t:urlDecode,deny"

Blog spam

The rise of weblogs, or blogs, as a new way to present information, share thoughts, and keep an online journal has made way for a new phenomenon: blog comments designed to advertise a product or drive traffic to a website. Blog spam isn't a security problem per se, but it can be annoying and cost a lot of time when you have to manually remove spam comments (or delete them from the approval queue, if comments have to be approved before being posted on the blog). Blog spam can be mitigated by collecting a list of the most common spam phrases, and using the ability of ModSecurity to scan POST data. Any attempted blog comment that contains one of the offending phrases can then be blocked.
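The phrase-list approach can be sketched in a few lines (Python used here purely for illustration; the logic mirrors the @pmFromFile setup described next, with the phrase list inlined instead of read from a file):

```python
# Hypothetical phrase list; in the article this lives in /usr/local/spamlist.txt
SPAM_PHRASES = ["viagra", "v1agra", "auto insurance", "rx medications", "cheap medications"]

def is_spam(comment):
    # Lowercase first, mirroring the t:lowercase transformation,
    # then look for any known phrase as a substring; phrases may be
    # whole sentences, not just single words
    text = comment.lower()
    return any(phrase in text for phrase in SPAM_PHRASES)

print(is_spam("Great post, thanks!"))           # False
print(is_spam("Buy cheap MEDICATIONS online"))  # True
```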
From both a performance and maintainability perspective, using the @pmFromFile operator is the best choice when dealing with large word lists such as spam phrases. To create the list of phrases to be blocked, simply insert them into a text file, for example, /usr/local/spamlist.txt:

viagra
v1agra
auto insurance
rx medications
cheap medications
...

Then create ModSecurity rules to block those phrases when they are used in locations such as the page that creates new blog comments:

#
# Prevent blog spam by checking comment against known spam
# phrases in file /usr/local/spamlist.txt
#
<Location /blog/comment.php>
  SecRule ARGS "@pmFromFile /usr/local/spamlist.txt" "t:lowercase,deny,msg:'Blog spam blocked'"
</Location>

Keep in mind that the spam list file can contain whole sentences—not just single words—so be sure to take advantage of that fact when creating the list of known spam phrases.

SQL injection

SQL injection attacks can occur if an attacker is able to supply data to a web application that is then used in unsanitized form in an SQL query. This can cause the SQL query to do completely different things than intended by the developers of the web application. Consider an SQL query like this:

SELECT * FROM user WHERE username = '%s' AND password = '%s';

The flaw here is that if someone can provide a password that looks like ' OR '1'='1, then the query, with username and password inserted, will become:

SELECT * FROM user WHERE username = 'anyuser' AND password = '' OR '1'='1';

This query will return all users in the results table, since the OR '1'='1' part at the end of the statement will make the entire statement true no matter what username and password is provided.

Standard injection attempts

Let's take a look at some of the most common ways SQL injection attacks are performed.

Retrieving data from multiple tables with UNION

An SQL UNION statement can be used to retrieve data from two separate tables.
If there is one table named cooking_recipes and another table named user_credentials, then the following SQL statement will retrieve data from both tables:

SELECT dish_name FROM cooking_recipes UNION SELECT username, password FROM user_credentials;

It's easy to see how the UNION statement can allow an attacker to retrieve data from other tables in the database if he manages to sneak it into a query. A similar SQL statement is UNION ALL, which works almost the same way as UNION—the only difference is that UNION ALL will not eliminate any duplicate rows returned in the result.

Multiple queries in one call

If the SQL engine allows multiple statements in a single SQL query then seemingly harmless statements such as the following can present a problem:

SELECT * FROM products WHERE id = %d;

If an attacker is able to provide an ID parameter of 1; DROP TABLE products;, then the statement suddenly becomes:

SELECT * FROM products WHERE id = 1; DROP TABLE products;

When the SQL engine executes this, it will first perform the expected SELECT query, and then the DROP TABLE products statement, which will cause the products table to be deleted.

Reading arbitrary files

MySQL can be used to read data from arbitrary files on the system. This is done by using the LOAD_FILE() function:

SELECT LOAD_FILE("/etc/passwd");

This command returns the contents of the file /etc/passwd. This works for any file to which the MySQL process has read access.

Writing data to files

MySQL also supports the command INTO OUTFILE, which can be used to write data into files. This attack illustrates how dangerous it can be to include user-supplied data in SQL commands, since with the proper syntax, an SQL command can not only affect the database, but also the underlying file system.
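The danger of unsanitized input, and the fix discussed in the next section, can both be seen in a small self-contained sketch (Python with an in-memory SQLite database; the table and credentials are made up for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE user (username TEXT, password TEXT)")
conn.execute("INSERT INTO user VALUES ('alice', 's3cret')")

username, password = "anyuser", "' OR '1'='1"

# Vulnerable: user input is pasted straight into the SQL text, so the
# quote in the password rewrites the WHERE clause
query = "SELECT * FROM user WHERE username = '%s' AND password = '%s'" % (username, password)
print(conn.execute(query).fetchall())  # [('alice', 's3cret')] -- every row leaks

# Safe: a parameterized (prepared) query treats the input strictly as data
rows = conn.execute(
    "SELECT * FROM user WHERE username = ? AND password = ?",
    (username, password),
).fetchall()
print(rows)  # [] -- no match, as intended
```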
This simple example shows how to use MySQL to write the string some data into the file test.txt:

mysql> SELECT "some data" INTO OUTFILE "test.txt";

Preventing SQL injection attacks

There are three important steps you need to take to prevent SQL injection attacks:

1. Use SQL prepared statements.
2. Sanitize user data.
3. Use ModSecurity to block SQL injection code supplied to web applications.

These are in order of importance, so the most important consideration should always be to make sure that any code querying SQL databases that relies on user input uses prepared statements. A prepared statement looks as follows:

SELECT * FROM books WHERE isbn = ? AND num_copies < ?;

This allows the SQL engine to replace the question marks with the actual data. Since the SQL engine knows exactly what is data and what is SQL syntax, this prevents SQL injection from taking place. The advantages of using prepared statements are twofold:

They effectively prevent SQL injection.
They speed up execution time, since the SQL engine can compile the statement once, and use the pre-compiled statement on all subsequent query invocations.

So not only will using prepared statements make your code more secure—it will also make it quicker. The second step is to make sure that any user data used in SQL queries is sanitized. Any unsafe characters, such as single quotes, should be escaped. If you are using PHP, the function mysql_real_escape_string() will do this for you. Finally, let's take a look at strings that ModSecurity can help block to prevent SQL injection attacks.

What to block

The following table lists common SQL commands that you should consider blocking, together with a suggested regular expression for blocking each one. The regular expressions are in lowercase and therefore assume that the t:lowercase transformation function is used.
SQL code          Regular expression
UNION SELECT      union\s+select
UNION ALL SELECT  union\s+all\s+select
INTO OUTFILE      into\s+outfile
DROP TABLE        drop\s+table
ALTER TABLE       alter\s+table
LOAD_FILE         load_file
SELECT *          select\s+\*

For example, a rule to detect attempts to write data into files using INTO OUTFILE looks as follows:

SecRule ARGS "into\s+outfile" "t:lowercase,deny,msg:'SQL Injection'"

The \s+ regular expression syntax allows for detection of an arbitrary number of whitespace characters. This will detect evasion attempts such as INTO%20%20OUTFILE, where multiple spaces are used between the SQL command words.

Website defacement

We've all seen the news stories: "Large Company X was yesterday hacked and their homepage was replaced with an obscene message". This sort of thing is an everyday occurrence on the Internet. After the company SCO initiated a lawsuit against Linux vendors citing copyright violations in the Linux source code, the SCO corporate website was hacked and an image was altered to read WE OWN ALL YOUR CODE—pay us all your money. The hack was subtle enough that the casual visitor to the SCO site would likely not be able to tell that this was not the official version of the homepage. The above image shows what the SCO homepage looked like after being defaced—quite subtle, don't you think?

Preventing website defacement is important for a business for several reasons:

Potential customers will turn away when they see the hacked site
There will be an obvious loss of revenue if the site is used for any sort of e-commerce sales
Bad publicity will tarnish the company's reputation

Defacement of a site will of course depend on a vulnerability being successfully exploited. The measures we will look at here are aimed at detecting that a defacement has taken place, so that the real site can be restored as quickly as possible. Detection of website defacement is usually done by looking for a specific token in the outgoing web pages.
This token has been placed within the pages in advance specifically so that it may be used to detect defacement—if the token isn't there, then the site has likely been defaced. This can be sufficient, but it can also allow the attacker to insert the same token into his defaced page, defeating the detection mechanism. Therefore, we will go one better and create a defacement detection technique that will be difficult for the hacker to get around.

To create a dynamic token, we will be using the visitor's IP address. The reason we use the IP address instead of the hostname is that a reverse lookup may not always be possible, whereas the IP address will always be available. The following example code in JSP illustrates how the token is calculated and inserted into the page:

<%@ page import="java.security.*" %>
<%
String tokenPlaintext = request.getRemoteAddr();
String tokenHashed = "";
String hexByte = "";
// Hash the IP address
MessageDigest md5 = MessageDigest.getInstance("MD5");
md5.update(tokenPlaintext.getBytes());
byte[] digest = md5.digest();
for (int i = 0; i < digest.length; i++) {
    hexByte = Integer.toHexString(0xFF & digest[i]);
    if (hexByte.length() < 2) {
        hexByte = "0" + hexByte;
    }
    tokenHashed += hexByte;
}
// Write MD5 sum token to HTML document
out.println(String.format("<span style='color: white'>%s</span>", tokenHashed));
%>

Assuming the background of the page is white, the <span style='color: white'> markup will ensure the token is not visible to website viewers.

Now for the ModSecurity rules to handle the defacement detection. We need to look at outgoing pages and make sure that they include the appropriate token. Since the token will be different for different users, we need to calculate the same MD5 sum token in our ModSecurity rule and make sure that this token is included in the output. If not, we block the page from being sent and sound the alert by sending an email message to the website administrator.
#
# Detect and block outgoing pages not containing our token
#
SecRule REMOTE_ADDR ".*" "phase:4,deny,chain,t:md5,t:hexEncode,exec:/usr/bin/emailadmin.sh"
SecRule RESPONSE_BODY "!@contains %{MATCHED_VAR}"

We are placing the rule in phase 4, since this is required when we want to inspect the response body. The exec action is used to send an email to the website administrator to let him know of the website defacement.
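For reference, the token that the JSP snippet embeds (and that the t:md5,t:hexEncode chain recomputes) can be reproduced in a few lines. This is a Python sketch, assuming the hex encoding is lowercase, as the JSP code produces:

```python
import hashlib

def defacement_token(ip_address):
    # MD5 of the client IP address, hex-encoded: the same value the
    # JSP snippet writes into the page for that visitor
    return hashlib.md5(ip_address.encode("ascii")).hexdigest()

token = defacement_token("192.168.0.1")
print(token)       # 32-character lowercase hex string
print(len(token))  # 32
```

Comparing this value against a saved copy of a served page is a quick way to verify that the JSP side and the ModSecurity side agree on the token.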
Packt
06 Apr 2015
9 min read

Security in Microsoft Azure

In this article, we highlight some security points of interest, according to the ones explained in the book Microsoft Azure Security, by Roberto Freato.

Microsoft Azure is a comprehensive set of services, which enable Cloud computing solutions for enterprises and small businesses. It supports a variety of tools and languages, providing users with building blocks that can be composed as needed. Azure is actually one of the biggest players in the Cloud computing market, solving scalability issues, speeding up the entire management process, and integrating with the existing development tool ecosystem.

(For more resources related to this topic, see here.)

Standards and Azure

It is probably well known that the most widely accepted principles of IT security are confidentiality, integrity, and availability. Despite many security experts defining even more indicators/principles related to IT security, most security controls are focused on these principles, since vulnerabilities are often expressed as a breach of one (or many) of these three. These three principles are also known as the CIA triangle:

Confidentiality: It is about disclosure. A breach of confidentiality means that somewhere, some critical and confidential information has been disclosed unexpectedly.
Integrity: It is about the state of information. A breach of integrity means that information has been corrupted or, alternatively, that the meaning of the information has been altered unexpectedly.
Availability: It is about interruption. A breach of availability means that information access is denied unexpectedly.

Ensuring confidentiality, integrity, and availability means that information flows are always monitored and the necessary controls are enforced. This is the purpose of a Security Management System, which, when talking about IT, becomes an Information Security Management System (ISMS).
The International Organization for Standardization (ISO) and the International Electrotechnical Commission (IEC) often work together to build international standards for specific technical fields. They released the ISO/IEC 27000 series to provide a family of standards for ISMS, starting from definitions (ISO/IEC 27000), up to governance (ISO/IEC 27014), and more. Two standards of particular interest are ISO/IEC 27001 and ISO/IEC 27002.

Microsoft manages the Azure infrastructure. At most, users can manage the operating system inside a Virtual Machine (VM), but they do not need to administer, edit, or influence the under-the-hood infrastructure; indeed, they should not be able to do this at all. Azure is therefore a shared environment. This means that a customer's VM can run on the physical server of another customer and, for any given Azure service, two customers can even share the same VM (in some Platform as a Service (PaaS) and Software as a Service (SaaS) scenarios). The Microsoft Azure Trust Center (http://azure.microsoft.com/en-us/support/trust-center/) highlights the attention given to the Cloud infrastructure, in terms of what Microsoft does to enforce security, privacy, and compliance.

Identity and Access Management

It is very common that different people within the same organization access and use the same Azure resources. In this case, a few scenarios arise: with the current portal, we can add several co-administrators; with the Preview portal, we can define fine-grained ACLs with the Role-Based Access Control (RBAC) features it implements. By default, we can add external users into the Azure Active Directory (AD) by inviting them through their e-mail address, which must be either a Microsoft account or an Azure AD account.
The Preview portal defines the following hierarchy:

Subscription: This is the permission given at the subscription level, which is valid for each object within the subscription (that is, a Reader on the subscription can view everything within the subscription).
Resource group: This is a fairly new concept in Azure. A resource group is (as the name suggests) a group of resources logically connected, such as the collection of resources used for the same web project (a web hosting plan, an SQL server, and so on). Permission given at this level is valid for each object within the resource group.
Individual resource: This is the permission given on an individual resource, and it is valid only for that resource (that is, giving read-only access to a client to view the Application Insights of a website).

Despite what its name may suggest, Azure AD is just an Identity and Access Management (IAM) service, managed and hosted by Microsoft in Azure. We should not even try to make a comparison with on-premise Active Directory, because they have different scopes and features. It is true that we can link Azure AD with an on-premise AD, but only for the purpose of extending its functionalities to work with Internet-based applications. Azure AD can be considered a SaaS for IAM before its relationship with Azure services. A company that offers its SaaS solution to clients can also use Azure AD as the Identity Provider, relying on the several existing users of Office 365 (which relies on Azure AD for authentication) or Azure AD itself.

Access Control Service (ACS) has been famous for a while for its capability to act as an identity bridge between applications and social identities. In the last few years, if developers wanted to integrate Facebook, Google, Yahoo, and Microsoft accounts (Live ID), they would probably have used ACS.

Using Platform as a Service

Although there are several ways to host custom code on Azure, the two most important building blocks are Websites and Cloud Services.
The first is actually a PaaS built on top of the second (a PaaS too), and uses an open source engine named Project Kudu (https://github.com/projectkudu/kudu). Kudu is an open source engine which works with IIS and manages automatic or manual deployments of Azure Websites in a sandboxed environment. Kudu can also run outside Azure, but it is primarily supported to enable the Websites service.

An Azure Cloud Service is a container of roles: a role is the representation of a unit of execution, and it can be a worker role (an arbitrary application) or a web role (an IIS application). Each role within a Cloud Service can be deployed to several VMs (instances) at the same time, to provide scalability and load balancing. From the security perspective, we need to pay attention to these aspects:

Remote endpoints
Remote desktops
Startup tasks
Microsoft Antimalware
Network communication

Azure Websites is one of the most advanced PaaS offerings in the Cloud computing market, providing users with a lock-in free solution to run applications built in various languages/platforms. From the security perspective, we need to pay attention to these aspects:

Credentials
Connection modes
Settings and connection strings
Backups
Extensions

Azure services have grown much faster (with regard to the number of services and the surface area) than in the past, at an amazingly increasing rate: consequently, we have several options to store any kind of data (relational, NoSQL, binary, JSON, and so on). Azure Storage is the base service for almost everything on the platform. Storage security is implemented in two different ways:

Account Keys
Shared Access Signatures

While looking at the specifications of many Azure services, we often see a scalability targets section. For a given service, Azure provides users with a set of upper limits, in terms of capacity, bandwidth, and throughput, to let them design their Cloud solutions better. Working with SQL Database is straightforward.
However, a few security best practices must be implemented to improve security:

Setting up firewall rules
Setting up users and roles
Connection settings

Modern software architectures often rely on an in-memory caching system to save frequently accessed data that does not change too often. Some extreme scenarios require us to use an in-memory cache as the primary data store for sensitive data, pursuing design patterns oriented toward eventual persistence and consistency. Azure Managed Cache is the evolution of the former AppFabric Cache for Windows Server, and it is a managed in-memory cache service. Redis is an open source, high-performance data store written in ANSI C: since its name stands for Remote Dictionary Server, it is a key-value data store with optional durability.

Azure Key Vault is a new and promising service that is used to store cryptographic keys and application secrets. There is an official library to operate against Key Vault from .NET, using Azure AD authentication to get secrets or use the keys. Before using it, it is necessary to set appropriate permissions on the Key Vault for external access, using the Set-AzureKeyVaultAccessPolicy command.

Using Infrastructure as a Service

Customers choosing Infrastructure as a Service (IaaS) usually have existing project constraints which are not adaptive to PaaS. We can think about a complex installation of an enterprise-level software suite, such as an ERP or a SharePoint farm. This is one of the cases where a service such as an Azure Website probably cannot fit. The two main services where the security requirements should be correctly understood and addressed are:

Azure Virtual Machines
Azure Virtual Networks

VMs are the most configurable execution environments for applications that Azure provides. With VMs, we can run arbitrary workloads and run custom tools and applications, but we need to manage and maintain them directly, including the security.
From the security perspective, we need to pay attention to these aspects:

- VM creation
- Endpoints and ACLs
- Networking and isolation
- Microsoft Antimalware
- Operating system firewalls
- Auditing and best practices

Azure Backup helps protect servers or clients against data loss, providing a second-place backup solution. While performing a backup, in fact, one of the primary requirements is the location of the backup: avoid backing up sensitive or critical data to a physical location that is strictly connected to the primary source of the data itself. In case of a disaster involving the facility where the source is located, there is a higher probability of losing data, including the backup.

Summary

In this article we covered the various security-related aspects of Microsoft Azure, with services such as PaaS, IaaS, and IAM.

Resources for Article:

Further resources on this subject:

- Web API and Client Integration [article]
- Setting up of Software Infrastructure on the Cloud [article]
- Windows Phone 8 Applications [article]

General Considerations

Packt
25 Oct 2013
9 min read
(For more resources related to this topic, see here.)

Building secure Node.js applications requires an understanding of the many different layers that they are built upon. Starting from the bottom, we have the language specification that defines what JavaScript consists of. Next, the virtual machine executes your code and may have differences from the specification. Following that, the Node.js platform and its API have details in their operation that affect your applications. Lastly, third-party modules interact with our own code and need to be audited for secure programming practices.

First, JavaScript's official name is ECMAScript. The European Computer Manufacturers Association (ECMA) first standardized the language as ECMAScript in 1997. This ECMA-262 specification defines what comprises JavaScript as a language, including its features, and even some of its bugs. Even some of its general quirkiness has remained unchanged in the specification to maintain backward compatibility. While I won't say the specification itself is required reading, I will say that it is worth considering.

Second, Node.js uses Google's V8 virtual machine to interpret and execute your source code. While developing for the browser, you have to consider all the other virtual machines (not to mention versions) when it comes to available features. In a Node.js application, your code only runs on the server, so you have much more freedom, and you can use all the features available to you in V8. Additionally, you can optimize for the V8 engine exclusively.

Next, Node.js sets up the event loop, takes your code to register callbacks for events, and executes them accordingly. There are some important details regarding how Node.js responds to exceptions and other errors that you will need to be aware of while developing your applications. Atop Node.js is the developer API.
This API is written mostly in JavaScript, which allows you, as a JavaScript developer, to read it for yourself and understand how it works. There are many provided modules that you will likely end up using, and it's important for you to know how they work, so you can code defensively.

Last, but not least, the third-party modules that npm gives you access to are in great abundance, which can be a double-edged sword. On one hand, you have many options to pick from that suit your needs. On the other hand, third-party code is a potential security liability, as you will be expected to support and audit each of these modules (in addition to their own dependencies) for security vulnerabilities.

JavaScript security

One of the biggest security risks in JavaScript itself, both on the client and now on the server, is the use of the eval() function. This function, and others like it, takes a string argument, which can represent an expression, statement, or series of statements, and it is executed as any other JavaScript source code. This is demonstrated in the following code:

```javascript
// these variables are available to eval()'d code
// assume these variables are user input from a POST request
var a = req.body.a; // => 1
var b = req.body.b; // => 2
var sum = eval(a + "+" + b); // same as '1 + 2'
```

This code has full access to the current scope, and can even affect the global object, giving it an alarming amount of control. Let's look at the same code, but imagine if someone malicious sent arbitrary JavaScript code instead of a simple number. The result is shown in the following code:

```javascript
var a = req.body.a; // => 1
var b = req.body.b; // => 2; console.log("corrupted");
var sum = eval(a + "+" + b); // same as '1 + 2; console.log("corrupted");'
```

Due to how eval() is exploited here, we are witnessing a "remote code execution" attack! When executed directly on the server, an attacker could gain access to server files and databases.
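Notice that the snippet above never needed eval() in the first place: parsing the input as numbers removes the code-execution path entirely. The following is a minimal illustrative sketch of that safer approach; the safeSum helper is our own invention for this example, not part of any API:

```javascript
// Parse the input as numbers instead of evaluating it as code.
// Number() returns NaN for anything that is not a plain numeric string,
// so injected JavaScript never reaches an interpreter.
function safeSum(a, b) {
  var x = Number(a);
  var y = Number(b);
  if (isNaN(x) || isNaN(y)) {
    throw new Error("Invalid numeric input");
  }
  return x + y;
}

console.log(safeSum("1", "2")); // => 3

try {
  safeSum("2; console.log('corrupted')", "3");
} catch (err) {
  // the injected code never runs; we just get a validation error
  console.log(err.message); // => Invalid numeric input
}
```

The same pattern (validate, then parse) applies to any user input that only needs to be interpreted as data rather than as code.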
There are a few cases where eval() can be useful, but if user input is involved in any step of the process, it should be avoided at all costs! There are other features of JavaScript that are functional equivalents to eval(), and they should likewise be avoided unless absolutely necessary. First is the Function constructor, which allows you to create a callable function from strings, as shown in the following code:

```javascript
// creates a function that returns the sum of 2 arguments
var adder = new Function("a", "b", "return a + b");
adder(1, 2); // => 3
```

While very similar to the eval() function, it is not exactly the same, because it does not have access to the current scope. However, it does still have access to the global object, and should be avoided whenever user input is involved.

If you find yourself in a situation where there is an absolute need to execute arbitrary code that involves user input, you do have one secure option. The Node.js platform API includes a vm module that is meant to give you the ability to compile and run code in a sandbox, preventing manipulation of the global object and even the current scope. It should be noted that the vm module has many known issues and edge cases. You should read the documentation, and understand all the implications of what you are doing, to make sure you don't get caught off-guard.

ES5 features

ECMAScript 5 included an extensive batch of changes to JavaScript, including the following:

- Strict mode, for removing unsafe features from the language
- Property descriptors, which give you control over object and property access
- Functions for changing object mutability

Strict mode

Strict mode changes the way JavaScript code runs in select cases. First, it causes errors to be thrown in cases that were silent before. Second, it removes and/or changes features that made optimizations for JavaScript engines either difficult or impossible.
Lastly, it prohibits some syntax that is likely to show up in future versions of JavaScript. Strict mode is opt-in only, and can be applied either globally or for an individual function scope.

For Node.js applications, to enable strict mode globally, add the --use_strict command-line flag while executing your program. When dealing with third-party modules that may or may not be using strict mode, this can potentially have negative side effects on your overall application. With that said, you could potentially make strict mode compliance a requirement for any audits on third-party modules.

Strict mode can be enabled by adding the "use strict" pragma at the beginning of a function, before any other expressions, as shown in the following code:

```javascript
function sayHello(name) {
  "use strict"; // enables strict mode for this function scope
  console.log("hello", name);
}
```

In Node.js, all required files are wrapped with a function expression that handles the CommonJS module API. As a result, you can enable strict mode for an entire file by simply putting the directive at the top of the file. This will not enable strict mode globally, as it would in an environment like the browser.

Strict mode makes many changes to the syntax and runtime behavior, but for the sake of brevity we will only discuss changes relevant to application security. First, scripts run via eval() in strict mode cannot introduce new variables to the enclosing scope. This prevents leaking new and possibly conflicting variables into your code when you run eval(), as shown in the following code:

```javascript
"use strict";
eval("var a = true");
console.log(a); // ReferenceError thrown – a does not exist
```

In addition, code run via eval() is not given access to the global object through its context.
This is similar, if not related, to other changes for function scope, which will be explained shortly, but it is specifically important for eval(), as it can no longer use the global object to perform additional black magic.

It turns out that the eval() function can be overridden in JavaScript. This can be accomplished by creating a new global variable called eval and assigning something else to it, which could be malicious. Strict mode prohibits this type of operation: eval is treated more like a language keyword than a variable, and attempting to modify it will result in a syntax error, as shown in the following code:

```javascript
// all of the examples below are syntax errors
"use strict";
eval = 1;
++eval;
var eval;
function eval() { }
```

Next, function objects are more tightly secured. Some common extensions to ECMAScript add the function.caller and function.arguments references to each function after it is invoked. Effectively, you can "walk" the call stack for a specific function by traversing these special references. This potentially exposes information that would normally appear to be out of scope. Strict mode simply makes these properties throw a TypeError when you attempt to read or write them, as shown in the following code:

```javascript
"use strict";
function restricted() {
  restricted.caller;    // TypeError thrown
  restricted.arguments; // TypeError thrown
}
```

Next, arguments.callee is removed in strict mode (like function.caller and function.arguments shown previously). Normally, arguments.callee refers to the current function, but this magic reference also exposes a way to "walk" the call stack, and possibly reveal information that previously would have been hidden or out of scope. In addition, this object makes certain optimizations difficult or impossible for JavaScript engines.
Thus, it also throws a TypeError exception when access is attempted, as shown in the following code:

```javascript
"use strict";
function fun() {
  arguments.callee; // TypeError thrown
}
```

Lastly, functions executed with null or undefined as the context no longer coerce the global object as the context. This applies to eval() as we saw earlier, but goes further to prevent arbitrary access to the global object in other function invocations, as shown in the following code:

```javascript
"use strict";
(function () {
  console.log(this); // => null
}).call(null);
```

Strict mode can help make code far more secure than before, but ECMAScript 5 also includes access control through the property descriptor APIs. A JavaScript engine has always had the capability to define property access, but ES5 includes these APIs to give that same power to application developers.

Summary

In this article, we examined the security features that apply generally to the JavaScript language itself, including how to use static code analysis to check for many of the aforementioned pitfalls. Also, we looked at some of the inner workings of a Node.js application, and how it differs from typical browser development when it comes to security.

Resources for Article:

Further resources on this subject:

- Setting up Node [Article]
- So, what is Node.js? [Article]
- Learning to add dependencies [Article]
Features of CloudFlare

Packt
10 Sep 2013
5 min read
(For more resources related to this topic, see here.)

Top 5 features you need to know about

Here we will go over the various security, performance, and monitoring features CloudFlare has to offer.

Malicious traffic

Any website is susceptible to attacks from malicious traffic. Some attacks might try to take down a targeted website, while others may try to inject their own spam. Worse attacks might even try to trick your users into providing information, or compromise user accounts. CloudFlare has tools available to mitigate various types of attacks.

Distributed denial of service

A common attack on the Internet is the distributed denial-of-service (DDoS) attack. A denial-of-service attack involves producing so many requests for a service that it cannot fulfill them, and crumbles under the load. A common way this is achieved in practice is by having the attacker make a server request but never listen for the response. Typically a response will be acknowledged by the client, notifying the server that it received the data, but if a client does not acknowledge, the server will keep trying for quite a while. A single client could send thousands of these requests per second, but the server would not be able to handle many at once.

Another twist to these attacks is to distribute them: the attack is spread across many machines, making it difficult to tell where the attacks are coming from. CloudFlare can help with this because it can detect when users are attempting an attack and reject access, or require a captcha challenge to gain access. It also monitors all of its customers for this, so if there is an attack happening on another CloudFlare site, it can protect yours from the same attacking traffic as well.

It is a difficult problem to solve. Sometimes traffic just spikes when a big news article runs, and it is hard to tell when it's legitimate traffic and when it is an attack. For this, CloudFlare offers multiple levels of DoS protection.
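CloudFlare's detection logic is proprietary, but the core idea behind this kind of protection can be illustrated with a toy rate limiter: count each client's requests over a sliding window and reject (or challenge) clients that exceed a threshold. This is purely an illustrative sketch, not how CloudFlare itself is implemented:

```javascript
// Toy per-client rate limiter: allow at most `limit` requests
// per `windowMs` milliseconds from a given client, then reject.
function createRateLimiter(limit, windowMs) {
  var hits = {}; // clientId -> array of request timestamps

  return function allow(clientId, now) {
    var cutoff = now - windowMs;
    // drop timestamps that have fallen outside the sliding window
    hits[clientId] = (hits[clientId] || []).filter(function (t) {
      return t > cutoff;
    });
    if (hits[clientId].length >= limit) {
      return false; // over the limit: reject, or serve a captcha challenge
    }
    hits[clientId].push(now);
    return true;
  };
}

var allow = createRateLimiter(3, 1000); // 3 requests per second per client
console.log(allow("1.2.3.4", 0));    // => true
console.log(allow("1.2.3.4", 10));   // => true
console.log(allow("1.2.3.4", 20));   // => true
console.log(allow("1.2.3.4", 30));   // => false (4th request in the window)
console.log(allow("1.2.3.4", 1500)); // => true (the window has passed)
```

Real-world systems layer reputation data, traffic fingerprinting, and network-wide intelligence on top of this basic counting idea.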
On the CloudFlare settings, the Security tab is where you can configure this advanced protection. The basic settings are rolled into the Basic protection level setting.

SQL injection

SQL injection is a more involved attack. On a web page, you may have a field like a username/password field. That field will probably be checked against a database for validity. The database queries that do this are simple text strings. This means that if the query is written in a way that doesn't explicitly prevent it, an attacker can start writing their own queries. A site that is not equipped to handle these cases would be susceptible to hackers destroying data, gaining access by pretending to be other users, or accessing data they otherwise would not have access to. It is a difficult problem to check against when building software; even big companies have had issues. CloudFlare mitigates this by looking for requests containing things that look like database queries. Almost no websites take in raw database commands as normal queries. This means that CloudFlare can search for suspicious traffic and prevent it from accessing your page.

Cross-site scripting

Cross-site scripting is similar to SQL injection, except that it deals with JavaScript rather than SQL. If you have a site that has comments, for example, an unprotected site might allow a hacker to put their own JavaScript on it. Any other user of the site could execute that JavaScript. They could do things like sniff for passwords, or even credit card information. CloudFlare prevents this in a similar fashion, by looking for requests that contain JavaScript and blocking them.

Open ports

Often, services can be available on a server without the sysadmin knowing about it.
If Telnet is allowed, for example, an attacker could simply log in to the system and start checking out source code, looking into the database, or taking down the website. CloudFlare acts as a firewall to ensure that such ports are blocked even if the server has them open.

Challenge page

When CloudFlare receives a request from a suspect user, it will usually show a challenge page asking the user to fill out a captcha to access the site. The options for customizing these settings are on the Security Settings tab. You can also configure how that page looks by clicking on Customize.

E-mail address obfuscation

E-mail address obfuscation scrambles any e-mail addresses on your page, then runs some JavaScript to decode them so that the text ends up being readable. This is nice in order to avoid spam in your users' e-mail, but the downside is that if a user has JavaScript disabled, they will not be able to read the e-mail addresses.

Summary

In this article, we have looked at the various security features provided by CloudFlare against malicious traffic, distributed denial of service, and so on, along with features such as e-mail address obfuscation. Therefore, it can be concluded that CloudFlare is one of the better website protection options available in the market today.

Resources for Article:

Further resources on this subject:

- Getting Started with RapidWeaver [Article]
- LESS CSS Preprocessor [Article]
- Translations in Drupal 6 [Article]

CISSP: Security Measures for Access Control

Packt
27 Nov 2009
4 min read
Knowledge requirements

A candidate appearing for the CISSP exam should have knowledge in the following areas that relate to access control:

- Control access by applying concepts, methodologies, and techniques
- Identify, evaluate, and respond to access control attacks such as brute force, dictionary, spoofing, and denial-of-service attacks
- Design, coordinate, and evaluate penetration test(s)
- Design, coordinate, and evaluate vulnerability test(s)

The approach

In accordance with the knowledge expected in the CISSP exam, this domain is broadly grouped under five sections as shown in the following diagram:

- Section 1: The Access Control domain consists of many concepts, methodologies, and some specific techniques that are used as best practices. This section covers some of the basic concepts, access control models, and a few examples of access control techniques.
- Section 2: Authentication processes are critical for controlling access to facilities and systems. This section looks into important concepts that establish the relationship between access control mechanisms and authentication processes.
- Section 3: A system or facility becomes compromised primarily through unauthorized access, either through the front door or the back door. We'll see some of the common and popular attacks on access control mechanisms, and also learn about the prevalent countermeasures to such attacks.
- Section 4: An IT system consists of operating system software, applications, and embedded software in devices, to name a few. Vulnerabilities in such software are nothing but holes or errors. In this section we see some of the common vulnerabilities in IT systems, vulnerability assessment techniques, and vulnerability management principles.
- Section 5: Vulnerabilities are exploitable, in the sense that IT systems can be compromised and unauthorized access can be gained by exploiting the vulnerabilities.
Penetration testing, or ethical hacking, is an activity that tests the exploitability of vulnerabilities for gaining unauthorized access to an IT system. Today, we'll quickly review some of the important concepts in Sections 1, 2, and 3.

Access control concepts, methodologies, and techniques

Controlling access to information systems and information processing facilities by means of administrative, physical, and technical safeguards is the primary goal of the access control domain. The following topics provide insight into some of the important access control related concepts, methodologies, and techniques.

Basic concepts

One of the primary concepts in access control is to understand the subject and the object. A subject may be a person, a process, or a technology component that either seeks access or controls the access. For example, an employee trying to access his business email account is a subject. Similarly, the system that verifies the credentials, such as username and password, is also termed a subject. An object can be a file, data, physical equipment, or premises which need controlled access. For example, the email stored in the mailbox is an object that a subject is trying to access. Controlling access to an object by a subject is the core requirement of an access control process and its associated mechanisms. In a nutshell, a subject either seeks or controls access to an object.

An access control mechanism can be classified broadly into the following two types. If access to an object is controlled based on certain contextual parameters, such as location, time, sequence of responses, access history, and so on, then it is known as context-dependent access control. In this type of control, the value of the asset being accessed is not a primary consideration.
Providing a username and password combination followed by a challenge-and-response mechanism such as CAPTCHA, filtering access based on MAC addresses in wireless connections, or a firewall filtering data based on packet analysis are all examples of context-dependent access control mechanisms.

Completely Automated Public Turing test to tell Computers and Humans Apart (CAPTCHA) is a challenge-response test to ensure that the input to an access control system is supplied by humans and not by machines. This mechanism is predominantly used by websites to prevent Web Robots (WebBots) from accessing the controlled sections of a website by brute force methods.

If access is provided based on the attributes or content of an object, then it is known as content-dependent access control. In this type of control, the value and attributes of the content that is being accessed determine the control requirements. For example, hiding or showing menus in an application, views in databases, and access to confidential information are all content-dependent.
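The distinction between the two control types can be sketched in code. The following is a hypothetical illustration; the function names, rules, and clearance levels are our own inventions for this example, not from any standard:

```javascript
// Hypothetical context-dependent check: access is granted or denied based on
// contextual parameters (time of day, source network), not on the value or
// attributes of the object being accessed.
function contextDependentCheck(context) {
  var withinBusinessHours = context.hour >= 9 && context.hour < 18;
  var fromInternalNetwork = context.sourceIp.indexOf("10.") === 0;
  return withinBusinessHours && fromInternalNetwork;
}

// Hypothetical content-dependent check: the classification attribute of the
// object itself determines whether the subject may read it.
function contentDependentCheck(subject, object) {
  var clearance = { public: 0, internal: 1, confidential: 2 };
  return clearance[subject.clearance] >= clearance[object.classification];
}

console.log(contextDependentCheck({ hour: 10, sourceIp: "10.0.0.5" }));    // => true
console.log(contextDependentCheck({ hour: 22, sourceIp: "10.0.0.5" }));    // => false (outside hours)
console.log(contextDependentCheck({ hour: 10, sourceIp: "203.0.113.9" })); // => false (external network)

console.log(contentDependentCheck({ clearance: "internal" },
                                  { classification: "public" }));          // => true
console.log(contentDependentCheck({ clearance: "internal" },
                                  { classification: "confidential" }));    // => false
```

Note how the first function never inspects the object at all, while the second ignores the context entirely; real systems typically combine both kinds of checks.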