OpenAM
Written and tested with OpenAM Snapshot 9, the Single Sign-On (SSO) tool for securing your web applications quickly and easily
OpenAM is an open source continuation of the OpenSSO project, which was taken over, and later shelved, by Oracle. OpenAM is a commercial-grade, feature-rich web application that provides SSO solutions, with a variety of features and a powerful Single Sign-On (SSO) capability.
In this article by Indira Thangasamy, author of the book OpenAM, you will be able to learn the following topics:
- Identity Repository schema
- Types of identity stores supported
- Caching and notification
- Supported identity stores
|Read more about this book|
(For more resources on OpenSSO, see here.)
Like any other service, the Identity Repository service is defined using an XML file named idRepoService.xml, which can be found in <conf-dir>/config/xml. In this file one can define as many subschemas as needed. By default, the following subschema names are defined:
However, not all of them are supported in the version tested while writing this article. For instance, the Files, LDAPv3, and Database subschemas are meant to be sample implementations; one can extend them for other databases, keeping these as examples. The remaining sub-configurations are all well tested and supported.
One of the Identity Repository types, the Access Manager Repository, is missing from this definition because adding it to the OpenSSO server is a manual process, which will be detailed later in this article. It is also called the legacy SDK plugin for OpenSSO. The Identity Repository framework requires support from the logging service and session management to deliver its overall functionality.
Identity store types
In OpenSSO, multiple types of Identity Repository plugins are implemented including the following:
Unlike the Access Manager Repository plugin, these are available in a vanilla OpenSSO server, so customers can readily use them without performing any additional configuration steps.
LDAPv3Repo: The plugin that customers and administrators will use most frequently, as the other plugin implementations are mostly meant for OpenSSO internal services. This plugin forms the basis for building the configuration to support various LDAP servers, including Microsoft Active Directory, Active Directory Application Mode (ADAM/LDS), IBM Tivoli Directory, OpenDS, and Oracle Directory Server Enterprise Edition. There are subschemas defined for each of these LDAP servers in the IdRepo service schema, as described at the beginning of this section.
AgentsRepo: A plugin used to manage the OpenSSO policy agents' profiles. Unlike the LDAPv3Repo, AgentsRepo uses the configuration repository to store the agents' configuration data, including authentication information. Prior to the Agents 3.0 version, agents accessing earlier versions of OpenSSO, such as Access Manager 7.1, stored most of their configuration data locally in the file system as plain text files. This imposed huge management problems for customers who wanted to upgrade or change any configuration parameters, as it required them to log in to each host where the agents were installed. Besides, the configuration of all agents prior to 3.0 was stored in the user identity store. In OpenSSO, the agents' profiles and configurations are stored as part of the configuration Directory Information Tree (DIT).
The AgentsRepo is a hidden internal repository plugin, and at no point should it be visible to end users or administrators for modification.
SpecialRepo: In the predecessor of OpenSSO, administrative users were stored as part of the user identity store, so even when the configuration store was up and running, administrators could not log in to the system unless the user identity store was also up. This limits the customer experience, especially during pilot testing and troubleshooting scenarios. To overcome this, OpenSSO introduced a feature wherein all the core administrative users are stored as part of the configuration store in the IdRepo service. All administrative and special-user authentication uses this SpecialRepo framework by default. This behavior can be overridden by invoking module-based authentication. SpecialRepo is used as a fallback repository for authenticating to the OpenSSO server.
SpecialRepo is also a hidden internal repository plugin. At no point, should it be visible to end users or administrators for modification.
FilesRepo: No longer supported in the OpenSSO product. You can see references to it in the source code, but it cannot be configured to use a flat-file store for either configuration data or user identity data.
AMSDKRepo: This plugin is available to maintain compatibility with the Sun Java System Access Manager versions. When this plugin is enabled, the identity schema is defined using the DAI service, as described in ums.xml. This plugin is not available in the vanilla OpenSSO server; the administrator has to perform certain manual steps to make it available for use. In this plugin, identity management is tightly coupled with the Oracle Directory Server Enterprise Edition. It is generally useful in the co-existence scenario where OpenSSO needs to co-exist with Sun Access Manager. In this article, wherever we refer to the "Access Manager Repository plugin", we mean AMSDKRepo.
Besides this, there is a sample implementation for the MySQL-based database repository available as part of the server default configuration. It works; however, it is not extensively tested for all the OpenSSO features. You can also refer to another discussion on a custom database repository implementation at this link: http://www.badgers-in-foil.co.uk/notes/installing_a_custom_opensso_identity_repository/.
Caching and notification
For the LDAPv3Repo, the key feature that enables it to perform and scale is caching the result set for each client query; without this, it would be impossible to achieve the required performance and scalability. When caching is employed, however, clients can get stale information about identities. This can be avoided by cleaning up the cache periodically or by having an external event dirty the cache so that new values can be cached. OpenSSO provides more than one way to invalidate and refresh the cache.
The Identity Repository design relies broadly on two types of mechanisms to refresh the IdRepo cache. They are:
- Persistent search-based event notification
- Time-to-live (TTL) based refresh
Both methods have their own merits and can be enabled simultaneously, which is recommended. This handles the scenario where a network glitch (which could cause packet loss) makes the OpenSSO server miss some change notifications. The right TTL value depends purely on the deployment environment and the desired end-user experience.
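The interplay between the two mechanisms can be illustrated with a minimal cache sketch. This is plain Python with invented class and method names, not OpenSSO code: an entry is dropped either when a change notification dirties it ("push") or when its TTL elapses ("pull").

```python
import time

class IdRepoCache:
    """Toy identity cache: entries expire via TTL or via an external change event."""

    def __init__(self, ttl_seconds, clock=time.monotonic):
        self.ttl = ttl_seconds
        self.clock = clock
        self._store = {}  # dn -> (attributes, cached_at)

    def put(self, dn, attrs):
        self._store[dn] = (attrs, self.clock())

    def get(self, dn):
        """Return cached attributes, or None if absent/expired (forcing a fresh read)."""
        entry = self._store.get(dn)
        if entry is None:
            return None
        attrs, cached_at = entry
        if self.clock() - cached_at > self.ttl:  # TTL-based refresh ("pull")
            del self._store[dn]
            return None
        return attrs

    def on_change_notification(self, dn):
        """Persistent-search event ("push"): dirty the entry immediately."""
        self._store.pop(dn, None)
```

Running both mechanisms together, as recommended above, means a missed notification only delays a refresh until the TTL expires rather than leaving the entry stale indefinitely.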
Persistent search-based notification
The OpenSSO Identity Repository plugin cache can be invalidated and refreshed by registering a persistent search connection with the backend LDAP server, provided the LDAP server supports the persistent search control. The persistent search control (http://www.mozilla.org/directory/ietf-docs/draft-smith-psearch-ldap-01.txt), OID 2.16.840.1.113730.3.4.3, is implemented by many commercial LDAP servers, including:
- IBM (Tivoli Directory)
- Novell (eDirectory)
- Oracle Directory Server Enterprise Edition(ODSEE)
- OpenDS (OpenDS Directory Server 1.0.0-build007)
- Fedora-Directory/1.0.4 B2006.312.1539
In order to determine whether your LDAP vendor supports persistent search, search the root DSE for the persistent search control OID 2.16.840.1.113730.3.4.3:
ldapsearch -p 389 -h ds_host -s base -b '' "objectclass=*" supportedControl | grep 2.16.840.1.113730.3.4.3
Microsoft Active Directory implements change notification in a different form, using the LDAP control 1.2.840.113556.1.4.528.
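The vendor check above can be automated. The sketch below (plain Python, with an invented function name) takes the supportedControl values read from the root DSE and reports which notification mechanism applies; the OIDs are the standard persistent search control and the Active Directory change notification control.

```python
PSEARCH_OID = "2.16.840.1.113730.3.4.3"    # persistent search control
AD_NOTIFY_OID = "1.2.840.113556.1.4.528"   # Active Directory change notification

def change_notification_mechanism(supported_controls):
    """Given the root DSE supportedControl OIDs, pick a notification mechanism.

    Returns "persistent-search", "ad-notification", or None, in which case
    the deployment must fall back to TTL-based refresh.
    """
    oids = set(supported_controls)
    if PSEARCH_OID in oids:
        return "persistent-search"
    if AD_NOTIFY_OID in oids:
        return "ad-notification"
    return None
```

This mirrors the decision the IdRepo framework makes internally: if neither control is advertised, only the TTL mechanism described later can keep the cache fresh.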
Persistent searches are limited by the max-psearch-count property in the Sun Java System Directory Server, which defines the maximum number of persistent searches that can be performed on the directory server. The persistent search mechanism provides an active channel through which entries that change (and information about the changes that occur) can be communicated. As each persistent search operation uses one thread, limiting the number of simultaneous persistent searches prevents certain kinds of denial-of-service attacks.
A client implementation that generates a large number of persistent connections to a single directory server may indicate that LDAP was not the correct transport. However, horizontal scaling using Directory Proxy Servers, or an LDAP consumer tier, can help spread the load. From the LDAP implementation's perspective, the best solution is to limit persistent searches.
If you have created a user data store against an LDAP server that supports the persistent search control, then a persistent search connection will be created with the base DN configured in the LDAPv3 configuration.
The search filter for this connection is obtained from the data store configuration properties. Though it is possible to listen for a specific type of change event, OpenSSO registers the persistent search connections to receive all kinds of change events. The IdRepo framework has logic to determine whether the underlying directory server supports persistent searches; if not, it does not try to submit the persistent search, and customers may resort to TTL-based notification, as described in the next section.
Each active persistent search request requires that an open TCP connection be maintained between the LDAP client (in this case, OpenSSO) and the backend user store LDAP server that might not otherwise be kept open. The OpenSSO server, acting as an LDAP client, closes idle LDAP connections to the backend LDAP server in order to maximize resource utilization. If the OpenSSO servers are behind a load balancer or a firewall, you need to tune the value of "com.sun.am.event.connection.idle.timeout".
If the persistent search connections are made through a Load Balancer (LB) or firewall, then these connections are subject to the TCP timeout of the respective LB and/or firewall. Once the firewall closes the persistent search connection due to an idle TCP timeout, change notifications cannot reach OpenSSO unless the persistent search connection is re-established. Customers can avoid this scenario by configuring the idle timeout for the persistent search connection so that OpenSSO restarts the persistent search TCP connection before the LB/firewall idle timeout expires; that way, the LB/firewall never sees an idle persistent search connection.
The advanced server configuration property "com.sun.am.event.connection.idle.timeout" specifies the timeout value, in minutes, after which the persistent searches will be restarted. Ideally, this value should be lower than the LB/firewall TCP timeout, to make sure that the persistent searches are restarted before the connections are dropped. A value of "0" indicates that these searches will not be restarted; the default is "0".
Only the connections that have timed out will be reset. You should never set this value higher than the LB/firewall timeout, and the delta should not be more than five minutes. For example, if your LB's idle connection timeout is 50 minutes, set this property to 45 minutes.
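The sizing rule above can be written down as a tiny helper. This is a sketch with an invented function name, not an OpenSSO utility; it just encodes "stay below the LB/firewall idle timeout by at most five minutes".

```python
def psearch_restart_timeout(lb_idle_timeout_minutes, max_delta_minutes=5):
    """Suggest a value for com.sun.am.event.connection.idle.timeout (minutes).

    The restart timeout must be lower than the LB/firewall idle timeout,
    with a safety margin of no more than max_delta_minutes.
    """
    if lb_idle_timeout_minutes <= max_delta_minutes:
        raise ValueError("LB idle timeout too small to leave a safety margin")
    return lb_idle_timeout_minutes - max_delta_minutes
```

With a 50-minute LB idle timeout this yields 45 minutes, matching the example in the text.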
If for some reason you want to prevent the persistent search from being submitted to the backend LDAP server, just leave the persistent search base (sun-idrepo-ldapv3-config-psearchbase) empty; this causes the IdRepo to disable the persistent search connection.
Time-to-live based notification
There may be deployment scenarios where persistent search-based notifications are not possible, or where the underlying LDAP server does not support the persistent search control. In such scenarios, customers can employ the TTL, or time-to-live, based notification mechanism, a proprietary implementation by the OpenSSO server. This feature works in a fashion similar to the polling mechanism in the OpenSSO clients, where the client periodically polls the OpenSSO server for changes; this is often called the "pull" model, whereas persistent search-based notification is termed the "push" model (the LDAP server pushes the changes to the clients).
Independent of any persistent search-based change notifications, the OpenSSO server polls the underlying directory server and fetches the data to refresh its Identity Repository cache.
TTL-specific properties for Identity Repository cache
When the OpenSSO deployment is configured for TTL-based cache refresh, there are certain server-side properties that need to be configured to enable the Identity Repository framework to refresh the cache. The following are the core properties that are relevant in the TTL context:
- com.sun.identity.idm.cache.entry.user.expire.time=1 (in minutes).
- com.sun.identity.idm.cache.entry.default.expire.time=1 (in minutes).
The properties com.sun.identity.idm.cache.entry.user.expire.time and com.sun.identity.idm.cache.entry.default.expire.time specify the time, in minutes, for which user entries and non-user entries (such as roles and groups), respectively, remain valid after their last modification. In other words, after this specified period elapses, the cached data for the entry expires; new requests for these entries then result in a fresh read from the underlying Identity Repository plugins.
If the property com.sun.identity.idm.cache.entry.expire.enabled is set to true, non-user cache entries expire based on the time specified in the com.sun.identity.idm.cache.entry.default.expire.time property, while user entries are cleaned up based on the value of com.sun.identity.idm.cache.entry.user.expire.time.
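The per-type expiry rule can be sketched as a small predicate. This is illustrative Python with an invented function name, not OpenSSO code; the one-minute defaults mirror the property values listed above.

```python
def is_expired(entry_type, cached_at, now,
               user_ttl_minutes=1, default_ttl_minutes=1):
    """Decide whether a cached IdRepo entry has outlived its TTL.

    User entries use the user TTL (cache.entry.user.expire.time);
    everything else (roles, groups, ...) uses the default TTL
    (cache.entry.default.expire.time). Times are in seconds.
    """
    ttl_minutes = user_ttl_minutes if entry_type == "user" else default_ttl_minutes
    return (now - cached_at) > ttl_minutes * 60
```

Raising default_ttl_minutes while keeping user_ttl_minutes low models a common tuning: user entries change often and should refresh quickly, while roles and groups can tolerate a longer cache lifetime.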
Supported identity stores
Identity stores in OpenSSO serve as the key component for authentication and authorization. These are predominantly Lightweight Directory Access Protocol (LDAP) servers. Multiple types of LDAP servers are tested and certified with OpenSSO as identity stores; almost all of the market-leading commercial LDAP servers are supported. In this section, let us explore how to create and manage identity stores for each type of LDAP server. Each LDAP server has its own sub-configuration identifier in the Identity Repository schema definition. The database plugin is only an early access feature; hence, we will not cover it here.
When a new identity user store is created from the console or CLI, the OpenSSO server creates a configuration entry in its configuration store and then loads the OpenSSO-specific schema into the corresponding LDAP server. The LDAP user provided (sun-idrepo-ldapv3-config-authid) should have read and write access to the base distinguished name (DN) (sun-idrepo-ldapv3-config-organization_name) and to the schema configuration DN.
To take advantage of the authentication lockout, password reset, and federation features of OpenSSO, one needs to extend the underlying LDAP server's schema to accommodate OpenSSO-specific object classes and attributes. This schema extension is done automatically for the supported LDAP servers discussed in this section, with the exception of OpenLDAP. Even though it is possible to work with these directories without extending the schema, you cannot manage the data store configuration from the OpenSSO administrative console, and the following functionalities will not be available:
- User account lockout in multi-server configuration
- User account expiry
- Password reset
- Extensive federation features such as IDFF PP service
- User-based authentication, session constraint, success, and failure URLs
The specific schema files and associated entries for each directory server are kept under the <conf-dir>/ldif directory. These files are created at the time of the OpenSSO server configuration. The LDIF files are consumed by the Identity Repository framework either at the time of server configuration or when the checkbox Load schema when saved: is enabled while saving the data store configuration from the administrative console. This option is not available from the ssoadm CLI tool, so simply creating the data store does not load the schema into the backend directory server; the schema is added only when the previously mentioned checkbox is checked. The checkbox is enabled by default; you can safely test a read-only data store when needed by disabling it.
Now let us see how data stores can be created. As usual, this can be done via the administrative console (Access Control | Top Level Realm | Data Stores) or the ssoadm CLI tool. As it is trivial from the console, I will show you how to manage the data stores using the CLI tool.
Access Manager Repository plugin
This type of Identity Repository is not available in the out of the box configuration; the administrator has to explicitly configure the server to enable the plugin. The default configuration will support the repository types as shown in the following screenshot:
The Access Manager Repository plugin is also called amSDK or legacy SDK, as it provides backward compatibility for working with existing Sun Access Manager 7.x deployment identity stores. This repository is tightly coupled with the Oracle DSEE server; hence, it will not work with any other LDAP server. To enable this plugin, invoke the following command (there is no console user interface for this):
./ssoadm add-amsdk-idrepo-plugin -u amadmin -f /tmp/.passwd_of_amadmin -b dc=opensso,dc=java,dc=net -s ldap://dsee.packt-services.net:4355 -x /tmp/.passwd_of_amadmin -p /tmp/.passwd_of_amadmin -v -a uid -o o -e 'cn=Directory Manager' -m /tmp/.passwd_of_dir_mgr
Once this command succeeds, you will be able to create amSDK plugin data stores after restarting the server. The list of supported data stores will then include the Access Manager Repository plugin, as follows:
Creating an Access Manager Repository plugin data store
An Access Manager Repository plugin, or amSDK, data store can be created from the console interface or from the CLI (the former is a trivial process). Let us use ssoadm to accomplish this. Here is how you can create the data store:
ssoadm create-datastore -e / -u amadmin -f /tmp/.passwd_of_amadmin -t amSDK -D /tmp/mydata -m datastoreforamsdk
where /tmp/mydata contains the following attributes:
These attributes can be edited from the console, as shown in the following screenshot. As you can see, there are no options to change the directory server settings. To change the directory server-related information, you need to navigate to Configuration | Servers and Sites | Instance | Directory Configuration; from this location you will be able to update the directory server settings.
Now you can start creating user and/or role objects in the new repository. One exception here is that Load schema when finished has no effect, as the schema was already loaded when you invoked the add-amsdk-idrepo-plugin subcommand.
Displaying the data store properties
In case you want to view the properties of the data store that you have created, you can leverage the show-datastore subcommand, as shown in the following:
./ssoadm show-datastore -e / -m datastoreforamsdk -u amadmin -f /tmp/.passwd_of_amadmin
This will be handy if you want to view the value of a specific property or to update a specific property for which you don't know the property name. This command will dump all the data store properties.
Updating data store properties
Like any other object, one can update specific properties of an existing data store using the command line tool ssoadm. For example, after successful execution, the following command will change the property sun-idrepo-amSDK-config-recursive-enabled from false to true:
./ssoadm update-datastore -e / -m datastoreforamsdk -f /tmp/.passwd_of_amadmin -u amadmin -a "sun-idrepo-amSDK-config-recursive-enabled=true"
In this manner you can change any property whose value needs to be updated.
Deleting data stores
Finally let us close the loop by showing you how an existing data store can be deleted using the delete-datastores subcommand.
./ssoadm delete-datastores -e / -m datastoreforamsdk -f /tmp/.passwd_of_amadmin -u amadmin
This will only remove the data store named datastoreforamsdk but will not remove the schema or Access Manager Repository plugin from the server. The sequence of commands given in the following section will remove the schema from the server.
Removing the Access Manager Repository plugin
In case you want to remove the Access Manager Repository plugin from the server, you need to remove the subschema entry that is part of the Identity Repository service, and then the DAI service. Here is the procedure to remove them, in order (there is no other interface to perform these actions):
./ssoadm remove-sub-schema -s sunIdentityRepositoryService -t Organization -a amSDK -u amadmin -f /tmp/.passwd_of_amadmin
./ssoadm delete-svc -s DAI -u amadmin -f /tmp/.passwd_of_amadmin
The one exception here is that the delegation policies will not be removed.
In this article we took a look at the types of identity stores supported, caching and notification, and the supported identity stores.
In the next article we will take a look at some more supported identity stores and at using multiple identity stores, as well as extending the Identity Repository schema for OpenLDAP.
About the Author :
Indira Thangasamy is currently serving as a software development senior manager at Oracle Corporation, managing the Fusion middleware access management quality engineering organization. Indira spent over a decade at Sun Microsystems Inc. in various roles. He has been associated with the OpenSSO product since its inception and has been instrumental in delivering a high-quality product to the customers. Indira is very passionate about technology. He graduated with an M.Tech. in Computer Science, started his career as an embedded systems developer in Germany, and later served as a security consultant at Wells Fargo before joining