
How-To Tutorials - Web Development

1802 Articles

BlackBerry Enterprise Server 5: Activating Devices and Users

Packt
03 Mar 2011
11 min read
BlackBerry Enterprise Server 5 Implementation Guide
Simplify the implementation of BlackBerry Enterprise Server in your corporate environment:

- Install, configure, and manage a BlackBerry Enterprise Server
- Use Microsoft Internet Explorer along with ActiveX plugins to control and administer the BES with the help of the BlackBerry Administration Service
- Troubleshoot, monitor, and offer high availability of the BES in your organization
- Updated to the latest version: BlackBerry Enterprise Server 5

BlackBerry Enterprise users must already exist on the Microsoft Exchange Server. As with administrative users, we can make the management of device users easier by creating groups, adding users to those groups, and then assigning policies to the whole group rather than to individual users. Users can belong to multiple groups, and we will see how policies are applied and resolved when a user is in more than one group.

Creating users on the BES 5.0

We will go through the following steps to create users on the BES 5.0:

1. Within the BlackBerry Administration Service, navigate to the BlackBerry solution management section.
2. Expand User and select Create a user.
3. Search for the user we want to add, either by typing the user's display name or e-mail address. Enter the search criteria and select Search.
4. We can then add the user to any group we have already created; in our case we only have an administrative group.

We have three options for how the user will be created, with regard to how the user's device will be activated:

- With activation password: allows us to set an activation password, along with an expiry time for that password
- With generated activation password: the system autogenerates an activation password, based on the settings made on our BlackBerry server (shown further on in this article)
- Without activation password: creates a user who has no pre-configured method for assigning a device

For this example, we will select Create a user without activation password. Once we have covered the theory and explored the settings within this article regarding activating devices, we will return to the other two options.

We can create a user even if the search results do not display the user. This generally occurs when the Exchange Server has not yet synched the user account to the BlackBerry Configuration Database, typically when new users are added. This method is shown in the Lab.

Groups can be created to help manage users within our network and simplify tasks. Next we are going to create a group that will house users who all belong to our Sales Team.

Creating a user-based group

To create a user-based group, go through the following steps:

1. Expand Group, select Create a group, enter Sales Team in the Name field, and click on Save.
2. Select View group list.
3. Click on Sales Team.
4. Select Add users to group membership.
5. Select the user we have just created by placing a tick in the checkbox next to the user's name, and click on Add to group membership.
6. Click on View group membership to confirm the addition of our user to the group.

We will be adding more users to this group later on in the Lab, when we import users via a text file.

Preparing to distribute a BlackBerry device

Before we can distribute a BlackBerry device to a user using the various methods, we need to address a few more settings that affect how the device will initially be populated.
By default, when a device is activated for a user, the BlackBerry Enterprise Server prepopulates (synchronizes) the BlackBerry device with the headers of 200 e-mail messages from the previous five days. We can alter these settings so that headers and the full body of e-mail messages are synched to the device, up to a maximum of 750 messages from the past 14 days:

1. In the BlackBerry Administration Service, under Servers and components, expand BlackBerry Domain | Component view | Email and select the BES instance.
2. On the right-hand pane, select the Messaging tab.
3. Scroll down and select Edit instance.
4. To ensure that both headers and the full e-mail message are populated to the BlackBerry device, change the Send headers only drop-down in the Message prepopulation settings to False.
5. Change Prepopulation by message age to a maximum of 14 days, by entering 14.
6. Change Prepopulation by message count to alter the number of e-mails that are prepopulated on the device, again to a maximum of 750.

By setting the preceding two values to zero, we can ensure that no previous e-mails are populated on the device.

Within the same tab we can set our Messaging options, which we will examine next. We have the ability to set:

- A Prepended disclaimer (goes before the body of the message)
- An Appended disclaimer (goes after the user's signature)

We can enter the text of our disclaimer in the space provided, and then choose what happens if there is a conflict. The majority of these settings can also be set at the user level (settings made on the server override any settings made by the user, which is why it is best practice to set them at the server level), as we will see later in the Lab. If a user setting exists, we need to tell the server how to deal with the potential conflict. The default setting is to use the user's disclaimer first, then the one set on the server; bear in mind that the default setting will show both the user's disclaimer and then the server disclaimer on the e-mail message.

Wireless message reconciliation should be set to True: the BlackBerry Enterprise Server then synchronizes e-mail message status changes between the BlackBerry device and Outlook on the user's computer. The BES reconciles e-mail messages that are moved from one folder to another, deleted messages, and changes to the read/unread status of messages. By default, the BES performs a reconcile every 30 minutes; the reconcile in effect checks that, for a particular user, Outlook and the BlackBerry hold the same information in their databases. If this is set to False, the above-mentioned changes will only take effect when the device is plugged in to Desktop Manager or Web Desktop Access.

We have the option of setting the maximum size for a single attachment or for multiple attachments, in KB. We can also specify the maximum download size for a single attachment.

Rich content turned on set to True allows e-mail messages that contain HTML and rich content to be delivered to BlackBerry devices; setting it to False means all messages are delivered in plain text, which saves a lot of resources on the server(s) housing the BES components. The same principle applies to downloading inline images.

Remote search turned on set to True allows users to search the Microsoft Exchange server for e-mails from their BlackBerry devices.
In BES 5 we have a new feature that allows a user, on his device and prior to sending out a meeting request, to check whether a potential participant is available at that time. (Microsoft Exchange 2007 users need to make some changes to support this feature; see the BlackBerry website for details of the hot fixes required.) Set Free busy lookup turned on to True if you want this service. If system resources are being utilized heavily, this feature can be turned off by selecting False.

Hard deletes reconciliation allows users to delete e-mail messages permanently in Microsoft Outlook (by holding the Shift + Del keys); you can also configure the BES to remove permanently deleted messages from the user's BlackBerry device. Wireless reconciliation must be turned on for this to work.

Now that we have prepared our messaging environment, we are ready to activate our first user.

Activating users

When it comes to activating users, we have five options to choose from:

- BlackBerry Administration Service: we can connect the device to a computer and log on to the BAS to assign and activate a device for a user
- Over the Wireless Network (OTA): we can activate a BlackBerry to join our BES without needing it to be physically connected to our organization
- Over the LAN: a user who has BlackBerry Desktop Manager running on his or her computer in the corporate LAN can activate the device by plugging it into the machine and running BlackBerry Desktop Manager
- BlackBerry Web Desktop Manager: a new feature of BES 5 that allows users to connect the device to a computer and log in to the BlackBerry Web Desktop Manager to activate the device, with no other software required
- Over your corporate organization's Wi-Fi network: you can activate Wi-Fi-enabled BlackBerry devices over your corporate Wi-Fi network

Before we look at each of these options, let's examine what enterprise activation is and how it works, along with its settings; this will also help us choose the best option for activating devices for users and avoid errors during enterprise activation.

Understanding enterprise activation

To allow a user's device to join the BlackBerry Enterprise Server, we activate the device for the user: we create a user and assign the user an activation password. The user enters his or her corporate e-mail address and the activation password into the Enterprise Activation screen on the device, which can be reached by going to Options | Advanced Options | Enterprise Activation.

Once the user types in the information and selects Activate, the BlackBerry device generates an ETP.dat message. If you have any virus-scanning or e-mail-sweeping systems running in your organization, it is important to ensure that this filename and extension is added to the safe list. Please note that the ETP.dat message is only generated when we activate a device over the air; if we use one of the methods where the device is plugged in via a cable, no ETP.dat file is generated.

The ETP.dat message is then sent to the user's mailbox on the Exchange Server over the wireless network. To ensure that the activation occurs smoothly, make sure the device has good battery life and that the wireless coverage reading on the device is less than 100 db. This can be checked by pressing the following key combination on the device: Alt + NMLL.
The BlackBerry Enterprise Server then confirms that the activation password is correct, generates a new permanent encryption key, and sends it to the BlackBerry device. The BlackBerry Policy Service then receives a request to send out an IT policy, and service books, which control the wireless synchronization of data, are sent to the device. Data is now transferred between the BlackBerry device and the user's mailbox using a slow synch process. The information sent to the BlackBerry device is stored in databases on the device, and each application database is shown with a percentage-completed figure next to it during the slow synch. Once the activation is complete, a message pops up on the device stating "Activation complete". The device is now fully in synch with the user's mailbox and is ready to send and receive data.

Now that we have a general grasp of the device activation process, let's look at the five options mentioned previously in more detail.

Activating a device using the BlackBerry Administration Service

This method provides a higher level of control over the device, but is more labor-intensive for the administrator, as it requires no user interaction:

1. Connect the device to a computer that can access the BlackBerry Administration Service, and log in to the service using an account that has permissions to assign devices.
2. Under the Devices section, expand Attached devices.
3. Click on Manage current device and then select Assign current device.
4. You will be prompted to search for the user account that we want to assign the device to.
5. Once we have found the user, click on User, select Associate user, and finally click on Assign current device.


Securing Moodle Data

Packt
03 Mar 2011
7 min read
Moodle Security
Learn how to install and configure Moodle in the most secure way possible.

User information protection

Every user within Moodle has a profile that can contain information we may or may not want to show to other users, or at least not to all of them. The level of exposure will depend on the privacy policy we want to adopt. For example, we may want to completely isolate users within a course so that nobody knows who else is participating, or we may want to expose just the user names and nothing else, and so on. Let us first describe how Moodle handles the presentation of user profiles. This is important, as it exposes the internal workings of that subsystem and identifies all the access points, and the ways of disabling them if that is what we want to do.

User profile page

The user profile page is used to define personal information about a user within Moodle. It can contain name, surname, address, telephone, etc. The user profile page is reached at <Moodle URL>/user/view.php?id=<userid>&course=<courseid>, where userid and courseid are the identifiers of the user and course as they are stored in the database.

This is how Moodle determines whether or not to show the profile page for a particular user:

  Logged-on user | User to see | Condition                                                                  | Show profile
  User           | Other user  | Other user is a teacher in at least one course                             | yes
  User           | Other user  | User is a teacher in at least one course                                   | yes
  User           | Other user  | User has the View user profiles capability enabled in the current context  | yes
  User           | Other user  | None of the above                                                          | no
  User           | User        | None                                                                       | yes

When we say teacher, we refer to the Moodle roles Teacher and Non-editing teacher.

Reaching the profile page

There are several ways a user can reach the profile page of a particular user. We present them here to help the administrator block potentially unwanted access points to user information.

People block

Every course upon creation gets a set of predefined blocks. One of these is the People block. When present and visible, it gives every user the opportunity to browse all users participating in the current course. This block is visible to any user that has the View participants capability enabled. This capability exists at the system and course levels. In Moodle 1.9.8 and later, by default this capability is enabled only for the Administrator role on both levels; that way, no user other than the Administrator is able to see participants at the system level or in a specific course. If by any chance you use an older version of Moodle, then most likely this capability is enabled at the course level for all standard roles except guest and authenticated user. Unless you want an open privacy policy on your site, we recommend that you disable this capability: visit the Administration | Users | Permissions | Define roles page, then locate and modify the capability by setting it to "Not set". Apply this at least to the Student role.

Forum topics

Forum topics offer another way of accessing the user profile. Regardless of the forum type, Moodle displays the author name for every post, and this name is linked to the profile page of that user.

Messaging system

Moodle offers a messaging system for internal communication between users. The messaging system can be accessed from three locations: the personal profile page, the platform front page, and the course content page.
  Moodle page          | Conditions                                                              | Displayed
  Profile page         | Send message to any user capability is enabled                          | Yes
  Front page           | Message block is added by the Administrator                             | Yes
  Course content page  | Message block is added to the course by the Administrator or a teacher  | Yes

If any of these conditions is fulfilled, users will be able to access the messaging system. By default, none of these conditions is present for Students, and therefore there is no danger of any privacy intrusion. However, it is common practice in various installations of Moodle to add a Messages block to one or more courses, so that any user can communicate with other users within the same context (course).

The problem with messaging is that it enables any user to locate any other user registered on the platform. We can demonstrate this easily: open the messaging dialog and switch to the Search tab. In the Name field, enter one letter and press the Search button. As a result, you will get ALL user accounts that have the specified letter in either their name or surname. Apart from the actual names of the users, the search results also offer a direct link to each user's personal profile. This is a potentially dangerous feature that can expose more information than we are willing to permit. If messaging is called from a context in which the user has permission to view user profiles, he will be able to see any profile in the system; this way, user names and profiles are completely open. There is no way to modify this behavior (listing all users) other than disabling the messaging system. Having the messaging system enabled can be a problem if you have a malicious user within your system who wants to get the names of all users, or a spam-bot that wishes to harvest e-mail addresses. That is the reason we should do something about it.

Protecting user profile information

We have several options available for protecting access to the private information in personal user profiles. You can choose the one that is most appropriate for your particular use case.

Limit the information exposed to all users

If we do not have a problem exposing some of the information in users' profiles, we can just hide some fields. To do that, visit the Administration | Users | Permissions | User policies page and locate the Hide user fields section. Using this approach you still cannot hide the user's e-mail or actual name, which is good for cases where you want users to communicate with each other without knowing too many personal details.

Completely block the ability to view profiles

If you want to completely block access to user profiles, you have several options, explained as follows.

Disable the View participants capability

We already explained that every Moodle as of version 1.9.8 has this disabled by default. We list it here just for the sake of completeness.

Hide the messaging system

Hiding the messaging system means removing its access points from the user's reach: do not add the Messages block to the front page or to any course where you wish to prevent users from knowing the other participants. This is useful where you want a mixed messaging policy for different courses and sets of users. Bear in mind that this setup gives a somewhat false sense of separation: users from courses that do not have the Messages block can still access the messaging system if they type the URL by hand.

Disable the messaging system

If you do not care for messaging on your Moodle site, you can completely disable it.
To do that, visit the Administration | Security | Site policies page and uncheck the Enable messaging system option.

Not using general forums

If you have a website where you want to completely isolate only part of the users within a course, you can, among other things, adopt the policy of not adding general forums inside such courses or on the site front page. That way, you can still use forums in other courses where you do not have security concerns.

Disable the View user profiles capability

If you want to completely block any possibility of viewing user profiles for specific role(s), you need to modify the View user profiles capability and set it to "Not set". Visit the Administration | Users | Permissions | Define roles page, then locate and modify that capability for every role you wish to prevent from viewing user profiles.


FAQ on Web Services and Apache Axis2

Packt
28 Feb 2011
12 min read
Apache Axis2 Web Services, 2nd Edition
Create secure, reliable, and easy-to-use web services using Apache Axis2:

- Extensive and detailed coverage of the enterprise-ready Apache Axis2 web services / SOAP / WSDL engine
- Attain a more flexible and extensible framework with the world-class Axis2 architecture
- Learn all about AXIOM, the complete XML processing framework, which you can also use outside Axis2
- Covers advanced topics like security, messaging, REST, and asynchronous web services
- Written by Deepal Jayasinghe, a key architect and developer of the Apache Axis2 Web Service project, and Afkham Azeez, an elected ASF and PMC member

Q: How did SOA change the world view?

A: The era of isolated computers is over. Now "connected we stand, isolated we fall" is becoming the motto of computing. Networking and communication facilities have connected the world as never before. The world has hardware that can support systems connecting thousands of computers, and these systems have the capacity to wield power that was once only dreamed of. Yet computer science lacked the technologies and abstractions to utilize the established communication networks. The goal of distributed computing is to provide such abstractions. RPC, RMI, IIOP, and CORBA are a few proposals that provide abstractions over the network for developers to build upon. These proposals fail to consider one critical aspect of the problem: systems are compositions of numerous heterogeneous subsystems, but these proposals require all the participants to share a programming language, or one of a few languages.

Service Oriented Architecture (SOA) provides the answer by defining a set of concepts and patterns to integrate homogeneous and heterogeneous components together. SOA provides a better way to achieve loosely coupled systems, and hence more extensibility and flexibility. In addition, similar to object-oriented programming (OOP), SOA enables a high degree of reusability. There are three main ways to enable SOA capabilities in systems and applications:

- Existing messaging systems: for example, JMS, IBM MQSeries, Tibco, and so on
- Plain Old XML (POX): for example, REST, XML/HTTP, and so on
- Web services: for example, SOAP, WSDL, WS-*

Q: What are the shortcomings of the Java Message Service (JMS)?

A: Among the commonly used messaging systems, the Java Message Service (JMS) plays a major role in the industry and has become a common API for messaging systems. JMS defines a number of different message types, such as Text, Bytes, Name-Value pair, Stream, and Object. One of the main disadvantages of these types of messaging systems is that they do not have a single wire format (serialization format). As a result, interoperability is a big issue: if two applications use JMS to communicate, they must be on the same implementation. Sonic, Tibco, and IBM are the leaders in the commercial market, and JBoss, Manta, and ActiveMQ are the commonly used open source implementations.

Q: What is POX, and how does it serve the web?

A: Plain Old XML, or POX, is another way of exposing functionality and enabling SOA in a system. With the widespread use of the Web, the POX approach has become more popular. Most web applications expose XML APIs, against which we can develop components and communicate with them. Google Maps, Autocomplete, and Amazon services are a few examples of applications that heavily use XML APIs to expose their functionality.
In most cases, POX is used in combination with REST (Representational State Transfer). REST is a model of the underlying architecture of the Web, based on the concept that every URL identifies a resource. GET, PUT, POST, and DELETE are the verbs used in the REST architecture. REST is often associated with theoretical standpoints, and for this reason REST is generally not used for complex interactions.

Q: What are web services?

A: The fundamental concept behind web services is SOA, where an application is no longer a large monolithic program, but is divided into smaller, loosely coupled programs. The provided services are loosely coupled together with standardized and well-defined interfaces. These loosely coupled programs make the architecture very extensible, due to the possibility of adding or removing services at limited cost. Therefore, new services can be created by combining existing services. To understand loose coupling clearly, it is better to understand its opposite, tight coupling, and its problems:

- Errors, delays, and downtime spread through the system
- The resilience of the whole system rests on its weakest part
- The cost of upgrading or migrating spreads
- It is hard to separate the useful parts from the dead weight

The benefits a web service provides are listed below:

- Increased interoperability, resulting in lower maintenance costs
- Increased reusability and composability (for example, use publicly available services and reuse them, or integrate them to provide new services)
- Increased competition among vendors, resulting in lower product costs
- Easy transition from one product to another, resulting in lower training costs
- Greater degree of adoption and longevity for a standard; a large degree of usage from vendors and users leads to a higher degree of acceptance

Q: What contributes to the popularity of web services?

A: Among the three commonly used methods of enabling SOA, web services can be considered the most standard and flexible. Web services extend the idea of POX and add additional standards to make the communication more organized and standardized. There are several reasons why web services are the most popular SOA-enabling mechanism:

- Web services are described using WSDL, and WSDL can capture any complex application and the required quality of service.
- Web services use SOAP as the message transmission mechanism; as SOAP is a special type of XML, it gains all the extensibility features of XML.
- There are a number of standards bodies to create and enforce the standards for web services.
- There are multiple open source and commercial web service implementations.

By using these standards and procedures, web services provide an application- and programming-language-independent mechanism for integration and communication. Different programming languages may define different implementations for web services, yet they interoperate because they all agree on the format of the information they share.

Q: What are the standards bodies for web services?

A: In web services, there are three main standards bodies that have helped to improve interoperability, quality of service, and the base standards:

- WS-I
- OASIS
- W3C

Q: How do organizations move into web services?

A: There are three ways an organization can move into web services, listed next:

1. Create a new web service from scratch. The developer creates the functionality of the service as well as its description (that is, the WSDL).
2. Expose existing functionality through a web service. Here the functionality of the service already exists; only the service description needs to be implemented.
3. Integrate web services from other vendors or business partners. There are occasions when using a service implemented by someone else is more cost-effective than building it from scratch. On these occasions, the organization will need to integrate others', or even business partners', web services.

The real usage of web service concepts lies in the second and third methods, which enable other web services and applications to use existing applications. Web services describe a new model for using the web: the model allows the publication of business functions to the Web and provides universal access to those business functions. Both developers and end users benefit from web services. The web service model simplifies business application development and interoperation.

Q: What does the web services model look like?

A: The web service model consists of a set of basic functionalities, such as describe, publish, discover, bind, invoke, update, and unpublish. The model also involves three actors: service provider, service broker, and service requester. Both the functionalities and the actors are shown in the next figure.

The service provider is the individual (or organization) that provides the service. The service provider's job is to create, publish, maintain, and unpublish its services. From a business point of view, a service provider is the owner of the service. From an architectural view, it is the platform that holds the implementation of the service. The Google API, Yahoo! financial services, Amazon services, and weather services are some examples of service providers.

The service broker provides a repository of service descriptions (WSDL). These descriptions are published by the service provider. Service requesters search the repository to identify a required service and obtain the binding information for it. A service broker can be either public, where the services are universally accessible, or private, where only a specified set of service requesters is able to access the service.

The service requester is the party that is looking for a service to fulfill its requirements. A requester could be a human accessing the service, or an application program (a program could itself be a service). From a business view, this is the business that wants to fulfill a particular service. From an architectural view, this is the application that is looking for and invoking a service.

Q: What are the web services standards?

A: So far we have discussed SOA, the standards bodies of web services, and the web service model. Here we are going to discuss the standards, which make web services more usable and flexible. In the past few years, there has been significant growth in the use of web services as an application integration mechanism. As mentioned earlier, a web service differs from the other SOA-exposing mechanisms in that it consists of various standards to address issues encountered in the other two mechanisms. The growing collection of WS-* standards (for example, web service security, web service reliable messaging, web service addressing, and others), supervised by the web services governing bodies, defines the web service protocol stack shown in the following figure. Here we will be looking at the standards that have been specified in the most basic layers: messaging, description, and discovery.
The messaging standards are intended to provide the framework for exchanging information in a distributed environment. These standards have to be reliable, so that a message is sent only once and only the intended receiver receives it. This is one of the primary areas where research is being conducted, as everything depends on the messaging ability.

Q: Describe the web services standards XML-RPC and SOAP.

A: The web services standards XML-RPC and SOAP are described below.

XML-RPC: The XML-RPC standard was created by Dave Winer in 1998, with Microsoft. At that time, the existing RPC systems were very bulky. Therefore, to create a lightweight system, the developer simplified it, specifying only the essentials and defining only a handful of data types and commands. The protocol uses XML to encode its calls and HTTP as the transport mechanism. The message is sent as a POST request in which the body of the request is XML. A procedure is executed on the server, and the value it returns is also formatted as XML. The parameters can be scalars, numbers, strings, and dates, as well as complex record and list structures. As new functionality was introduced, XML-RPC evolved into what is now known as SOAP, which is discussed next. Still, some people prefer XML-RPC because of its simplicity, minimalism, and ease of use.

SOAP: The core concept of SOAP is a stateless, one-way message exchange. However, applications can create more complex interaction patterns (such as request-response, request-multiple responses, and so on) by combining such one-way exchanges with features provided by an underlying protocol and application-specific information. SOAP is silent on the semantics of any application-specific data it conveys, as it is on issues such as the routing of SOAP messages, reliable data transfer, firewall traversal, and so on. However, SOAP provides the framework by which application-specific information may be conveyed in an extensible manner. The developers chose XML as the standard message format because of its widespread use by major organizations and open source initiatives. Also, there is a wide variety of freely available tools that ease the transition to a SOAP-based implementation.

Q: Define the scope of Web Services Addressing (WS-Addressing).

A: The standard provides transport-independent mechanisms to address messages and identify web services, corresponding to the concepts of address and message correlation described in the web services architecture. The standard defines XML elements to identify web service endpoints and to secure end-to-end endpoint identification in messages. This enables messaging systems to support message transmission, in a transport-neutral manner, through networks that include processing nodes such as endpoint managers, firewalls, and gateways. Thus, WS-Addressing enables organizations to build reliable and interoperable web service applications by defining a standard mechanism for identifying and exchanging web services messages between multiple endpoints.

Q: What is the Web Services Description Language (WSDL)?

A: WSDL, developed by IBM, Ariba, and Microsoft, is an XML-based language that provides a model for describing web services. The standard defines services as network endpoints, or ports. WSDL is normally used in combination with SOAP and XML Schema to provide web services over networks. A service requester who connects to a web service can read the WSDL to determine what functions are available in the web service.
Special data types are embedded in the WSDL file in the form of XML Schema. The client can then use SOAP to call the functions listed in the WSDL. The standard enables one to separate the description of the abstract functionality offered by a service from the concrete details of a service description, such as how and where that functionality is offered. The specification defines a language for describing the abstract functionality of a service, as well as a framework for describing the concrete details of a service description. The abstract definition of ports and messages is separated from their concrete use, allowing these definitions to be reused.
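To make the client side of this concrete, here is a minimal Java sketch of invoking a SOAP operation using Axis2's ServiceClient API together with AXIOM. The endpoint URL, namespace, operation name, and service name are illustrative assumptions, not details taken from this article:

    import org.apache.axiom.om.OMAbstractFactory;
    import org.apache.axiom.om.OMElement;
    import org.apache.axiom.om.OMFactory;
    import org.apache.axiom.om.OMNamespace;
    import org.apache.axis2.addressing.EndpointReference;
    import org.apache.axis2.client.Options;
    import org.apache.axis2.client.ServiceClient;

    public class EchoClient {
        public static void main(String[] args) throws Exception {
            // Build the request payload as an AXIOM element
            // (hypothetical namespace and operation)
            OMFactory factory = OMAbstractFactory.getOMFactory();
            OMNamespace ns = factory.createOMNamespace("http://example.org/sample", "ns");
            OMElement payload = factory.createOMElement("echo", ns);
            payload.setText("Hello, Axis2");

            // Point the client at the service endpoint; in practice this
            // address comes from the service element of the WSDL
            ServiceClient client = new ServiceClient();
            Options options = new Options();
            options.setTo(new EndpointReference(
                    "http://localhost:8080/axis2/services/EchoService"));
            client.setOptions(options);

            // Send the request and print the body of the SOAP response
            OMElement response = client.sendReceive(payload);
            System.out.println(response);
        }
    }

Because both the payload and the response are plain AXIOM elements, the same client skeleton works against any operation described in a WSDL, regardless of the language the service itself is written in.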


Tips and Tricks for using Alfresco 3 Business Solutions

Packt
25 Feb 2011
4 min read
Alfresco 3 Business Solutions
Practical implementation techniques and guidance for delivering business solutions with Alfresco:

- Deep practical insights into the vast possibilities that exist with the Alfresco platform for designing business solutions
- Each and every type of business solution is implemented through the eyes of a fictitious financial organization, giving you the right amount of practical exposure
- Packed with numerous case studies that will enable you to learn from various real-world scenarios
- Learn to use Alfresco's rich API arsenal with ease
- Extend Alfresco's functionality and integrate it with external systems

Node references are important

Tip: Node references are used to identify a specific node in one of the stores in the repository. You construct a node reference by combining a store reference, such as workspace://SpacesStore, with an identifier. The identifier is a Universally Unique Identifier (UUID), generated automatically when a node is created. A UUID looks something like 986570b5-4a1b-11dd-823c-f5095e006c11 and represents a 128-bit value. A complete node reference looks like workspace://SpacesStore/986570b5-4a1b-11dd-823c-f5095e006c11. The node reference is one of the most important concepts when developing custom behavior for Alfresco, as it is required by a lot of the application interface methods (see the Java sketch at the end of this article).

Avoid CRUD operations directly against the database

Tip: Do not perform CRUD operations directly against the database, bypassing the foundation services, when building a custom solution on top of Alfresco. Such code will break in the future if the database design is ever changed. Alfresco is required to keep older APIs available for backward compatibility (if they ever change), so it is better to always use the published service APIs. Query the database directly only when:

- A customization built with the available APIs does not provide acceptable performance and you need a solution that works satisfactorily
- Reporting is necessary
- Information is needed during development for debugging purposes
- Bootstrap tweaking is required, such as when you want to run a patch again

Executing patches in a specific order

Tip: If we have several patches to execute and they should run in a specific order, we can control that with the targetSchema value. The fixesToSchema value is set to Alfresco's current schema version (that is, via the version.schema variable), which means that the patch will always be run no matter what version of Alfresco is being used.

Export complex folder structures into ACP packages

Tip: When we set up more complex folder structures with rules, permission settings, template documents, and so on, it is a good idea to export them into Alfresco Content Packages (ACP) and store them in the version control system. The same is true for any Space Templates that we create. These packages are also useful to include in releases.

Deploying the Share JAR extension

Tip: When working with Spring Surf extensions for Alfresco Share, it is not necessary to stop and start the Alfresco server between deployments. We can set up Apache Tomcat to watch the JAR file we are working with and tell it to reload the JAR every time it changes.
Update the tomcat/conf/context.xml configuration file to include the following line:

    <WatchedResource>WEB-INF/lib/3340_03_Share_Code.jar</WatchedResource>

Now, every time we update this Share extension JAR, Tomcat reloads it for us, which shortens the development cycle quite a bit. The Tomcat console should print something like this when it happens:

    INFO: Reloading context [/share]

To deploy a new version of the JAR, just run the deploy-share-jar Ant target:

    C:\3340_03_Code\bestmoney\alf_extensions\trunk>ant -q deploy-share-jar
    [echo] Packaging extension JAR file for share.war
    [echo] Copies extension JAR file to share.war WEB-INF lib
    BUILD SUCCESSFUL
    Total time: 0 seconds

Debugging AMP extensions

Tip: To debug AMP extensions, start the Alfresco server so that it listens for remote debugging connections; or, more correctly, start the JVM so that it listens for remote debugging connection attempts. This can be done by adding the following line to the operating system as an environment variable:

    CATALINA_OPTS=-Dcom.sun.management.jmxremote -Xdebug -Xrunjdwp:transport=dt_socket,server=y,suspend=n,address=5005

This means that any Alfresco installation we have installed locally on our development machine will be available for debugging as soon as we start it. Change the address as you see fit for your development environment. With this setting we can now debug both Alfresco's source code and our own source code at the same time.
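Returning to the node-reference tip, here is a minimal Java sketch of constructing a NodeRef and reading a property through the published NodeService API rather than the database. The UUID, the Spring-injected NodeService, and the property being read are illustrative assumptions, not code from the book:

    import org.alfresco.model.ContentModel;
    import org.alfresco.service.cmr.repository.NodeRef;
    import org.alfresco.service.cmr.repository.NodeService;
    import org.alfresco.service.cmr.repository.StoreRef;

    public class NodeRefExample {
        // Assumed to be injected by Spring, as is typical for Alfresco extensions
        private NodeService nodeService;

        public String getNodeName() {
            // A complete node reference: store reference plus UUID
            // (this UUID is made up for the example)
            NodeRef nodeRef = new NodeRef(StoreRef.STORE_REF_WORKSPACE_SPACESSTORE,
                    "986570b5-4a1b-11dd-823c-f5095e006c11");

            // Go through the foundation services, never straight to the database
            return (String) nodeService.getProperty(nodeRef, ContentModel.PROP_NAME);
        }

        public void setNodeService(NodeService nodeService) {
            this.nodeService = nodeService;
        }
    }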


BlackBerry Enterprise Server 5: MDS Applications

Packt
25 Feb 2011
6 min read
BlackBerry Enterprise Server 5 Implementation Guide

MDS (Mobile Data Service) runtime applications are custom applications developed for your organizational needs. MDS runtime applications are created using BlackBerry MDS Studio or Microsoft Visual Studio with a BlackBerry plugin. In general, these are form-based applications that users can use on their devices to access databases or web services hosted inside your organization's firewall, on the corporate LAN.

For the purposes of this article, you can download a sample MDS application from the BlackBerry website under the development section; the current link is http://us.blackberry.com/developers/javaappdev/devtools.jsp. The application is an Expenses Tracker, which an employee can populate in real time from his device as business expenses occur during a trip. Once the trip is complete, the application e-mails your finance department and attaches an Excel spreadsheet outlining the employee's business trip expenses.

Understanding and setting up our MDS environment

The MDS has two component services:

- MDS Connection Service: provides access to content on the Internet and intranet, and access to the organization's application servers
- MDS Integration Service: facilitates the installation and management of applications, and allows access to the server systems in your corporate LAN via database connections or web services

Firstly, we need to set up our MDS environment. This includes the following:

1. Ensure that the BlackBerry MDS Integration Service is installed and running on our BlackBerry Enterprise Server. This service should have been selected during the initial installation of the BES; if it was not, we can run the setup and install the MDS services. If the MDS service is already installed, you will see the services running in Windows on the server.
2. Send the BlackBerry MDS Runtime platform to devices in our BlackBerry domain. This can be achieved by using software configuration policies, as shown next.
3. Publish the BlackBerry MDS application. This is done using the MDS console that is installed during the installation of the MDS services.
4. Configure our IT policy and any application control policies for the MDS application. Using IT policies and application policies, we can lock down our MDS application.
5. Install the MDS application on the devices. Using the MDS console and the application repository for MDS applications, we can deploy and install the MDS applications on the devices.

Each of the preceding steps will now be looked at in greater detail.

Running MDS services

During the installation of our BlackBerry Enterprise Server, we can choose to install the MDS components. We need to ensure that the MDS services are running in our environment. This can be checked by going to Services on the server that hosts the BlackBerry Enterprise Server and ensuring that the BlackBerry MDS Connection Service and BlackBerry MDS Integration Service are started, as shown in the following screenshot.

Installing the MDS runtime platform

For MDS runtime applications to work, we need to ensure that the MDS runtime platform is installed on the devices in our corporate network.
The version of the MDS runtime platform that you need to install on the devices depends on the following:

- The model of the device
- The BlackBerry software version on the device

So, depending on the different devices and the different BlackBerry device software running on them, you might need to create several MDS runtime software configuration packages to cover the different models and device software within your corporate environment. We can use a software configuration to deploy the MDS runtime platform that is needed on the devices. For the purpose of this article, we are going to assume all our devices are the same make and have the same device software: BlackBerry 8900s.

Creating a software configuration to deploy the MDS runtime platform to devices

1. Download the appropriate MDS runtime platform for your device from the BlackBerry website; the current link is https://www.blackberry.com/Downloads/entry.do?code=F9BE311E65D81A9AD8150A60844BB94C. For our example, we are going to download the MDS runtime package for a BlackBerry 8900 device, entitled BlackBerry MDS runtime v4.6.1.21.
2. Extract the contents to a shared folder on the BES server.
3. Log in to the BlackBerry Administration Service.
4. Under BlackBerry solution management, expand Software, then Applications, and click on Add or update applications.
5. Browse to the ZIP files for the MDS runtime application and, once selected, click Next.
6. Select to publish the application.
7. To ensure the correct packages were created, browse to the BSC share (code download, ch:5) and ensure the required files are present.

We now need to create our software configuration (since the preceding steps have only added the MDS runtime application to the application repository):

1. Select Create a software configuration.
2. Enter the name Runtime, and leave the other settings as default.
3. Click on Manage software configurations and select Runtime.
4. Select the Applications tab and click on Edit software configuration, as shown in the following screenshot.
5. Click on Add applications to software configuration.
6. Click on Search, or fill in the search criteria to display the Runtime packages.
7. Select the Runtime applications. In some cases two applications may have been created; select both, as one is the default launcher and one is the runtime platform (this is dependent on the device). In our example, we need both the MDS Runtime and the MDS Default Launcher, so we place a tick in both to show additional configuration steps, as shown in the following screenshot.
8. Select Wireless as the Deployment method, Standard Required for the Application control policy, and Required for the Disposition setting.
9. Once added, click on Save all.

We now need to assign this software configuration to the devices in our BES environment. For the purpose of this article, we are going to assign it to the Sales group. Please bear in mind that, as mentioned before, if you have different devices, or the same devices running different device software, you will need to download the right MDS runtime platform for each scenario and configure the appropriate number of software configurations.

1. Click on Manage groups.
2. Select the Sales Team.
3. Click on Edit group.
4. Select the Software configuration tab.
5. In the Available software configurations list, click on Runtime and select Add, as shown in the following screenshot.
6. Click on Save all.

Now that our devices are ready to run MDS applications, we need to add our MDS application to the MDS application repository.
The MDS application repository is installed by default during the initial installation of the BES, as long as we choose to install all the default MDS components. The MDS application console is a web-based administration tool, like the BlackBerry Administration Service, which is used to control, install, manage, and update MDS applications. Please note that you use the BlackBerry Administration Service to control Java-based applications, and the MDS console to administer MDS applications.


Enabling Apache Axis2 clustering

Packt
25 Feb 2011
6 min read
Clustering for high availability and scalability is one of the main requirements of any enterprise deployment, and this is also true for Apache Axis2. High availability refers to the ability to serve client requests by tolerating failures. Scalability is the ability to serve a large number of clients sending a large number of requests without any degradation in performance.

Many large-scale enterprises are adopting web services as the de facto middleware standard. These enterprises have to process millions of transactions per day, or even more. A large number of clients, both human and computer, connect simultaneously to these systems and initiate transactions. Therefore, the servers hosting the web services for these enterprises have to support that level of performance and concurrency. In addition, almost all the transactions happening in such enterprise deployments are critical to the business of the organization. This imposes another requirement on production-ready web services servers: maintaining very low downtime.

It is impossible to support that level of scalability and high availability from a single server, no matter how powerful the server hardware or how efficient the server software is. Web services clustering is needed to solve this: it allows you to deploy and manage several instances of identical web services across multiple web services servers running on different machines. We can then distribute client requests among these machines using a suitable load balancing system to achieve the required level of availability and scalability.

Setting up a simple Axis2 cluster

Enabling Axis2 clustering is a simple task. Let us look at setting up a simple two-node cluster:

1. Extract the Axis2 distribution into two different directories and change the HTTP and HTTPS ports in the respective axis2.xml files.
2. Locate the "Clustering" element in the axis2.xml files and set the enable attribute to true.
3. Start the two Axis2 instances using the Simple Axis Server. You should see some messages indicating that clustering has been enabled.

That is it! Wasn't that extremely simple? In order to verify that state replication is working, we can deploy a stateful web service on both instances. This web service should set a value in the ConfigurationContext in one operation and try to retrieve that value in another operation. We can call the set-value operation on one node, and then call the retrieve operation on the other node. The value set and the value retrieved should be equal. Next, we will look at the clustering configuration language in detail.

Writing a highly available clusterable web service

In general, you do not have to do anything extra to make your web service clusterable; any regular web service is clusterable. In the case of stateful web services, you need to store the Java-serializable replicable properties in the Axis2 ConfigurationContext, ServiceGroupContext, or ServiceContext. Please note that stateful variables you maintain elsewhere will not be replicated. If you have properly configured Axis2 clustering for state replication, the Axis2 infrastructure will replicate these properties for you. In the next section, you will see the details of configuring a cluster for state replication.
Let us look at a simple stateful Axis2 web service deployed in the soapsession scope:

    import org.apache.axis2.context.MessageContext;
    import org.apache.axis2.context.ServiceContext;

    public class ClusterableService {
        private static final String VALUE = "value";

        public void setValue(String value) {
            ServiceContext serviceContext =
                    MessageContext.getCurrentMessageContext().getServiceContext();
            serviceContext.setProperty(VALUE, value);
        }

        public String getValue() {
            ServiceContext serviceContext =
                    MessageContext.getCurrentMessageContext().getServiceContext();
            return (String) serviceContext.getProperty(VALUE);
        }
    }

You can deploy this service on two Axis2 nodes in a cluster. You can write a client that calls the setValue operation on the first node, and then calls the getValue operation on the second node. You will see that the value you set on the first node can be retrieved from the second node. What happens is that when you call the setValue operation on the first node, the value is set in the respective ServiceContext and replicated to the second node. Therefore, when you call getValue on the second node, the replicated value has already been properly set in the respective ServiceContext.

As you may have already noticed, you do not have to do anything additional to make a web service clusterable: Axis2 does the state replication transparently. However, if you require control over state replication, Axis2 provides that option as well. Let us rewrite the same web service, this time taking control of the state replication:

    import org.apache.axis2.context.MessageContext;
    import org.apache.axis2.context.ServiceContext;
    // Note: the Replicator package location has varied across Axis2 releases
    import org.apache.axis2.clustering.context.Replicator;

    public class ClusterableService {
        private static final String VALUE = "value";

        public void setValue(String value) {
            ServiceContext serviceContext =
                    MessageContext.getCurrentMessageContext().getServiceContext();
            serviceContext.setProperty(VALUE, value);
            // Explicitly replicate the context rather than relying on
            // the transparent replication at the end of the invocation
            Replicator.replicate(serviceContext);
        }

        public String getValue() {
            ServiceContext serviceContext =
                    MessageContext.getCurrentMessageContext().getServiceContext();
            return (String) serviceContext.getProperty(VALUE);
        }
    }

Replicator.replicate() will immediately replicate any property changes in the provided Axis2 context. So, how does this setup increase availability? Say you sent a setValue request to node 1, and node 1 failed soon after replicating that value to the cluster. Node 2 will still have the originally set value, so the web service clients can continue unhindered.

Stateless Axis2 web services

Stateless Axis2 web services give the best performance, as no state replication is necessary for such services. These services can still be deployed on a load-balancer-fronted Axis2 cluster to achieve horizontal scalability. Again, no code change or special coding is necessary to deploy such web services on a cluster. Stateless web services may be deployed in a cluster either to achieve failover behavior or for scalability.

Setting up a failover cluster

A failover cluster is generally fronted by a load balancer, with one or more nodes designated as primary nodes and some other nodes designated as backup nodes. Such a cluster can be set up with or without high availability. If all the state is replicated from the primaries to the backups, then when a failure occurs the clients can continue without a hitch; this ensures high availability. However, this state replication has its overhead. If you are deploying only stateless web services, you can run a setup without any state replication. In a pure failover cluster (that is, one without any state replication), if the primary fails, the load balancer will route all subsequent requests to the backup node, but some state may be lost, so the clients will have to handle some degree of that failure.
The load balancer can be configured in such a way that all requests are generally routed to the primary node, and a failover node is provided in case the primary fails, as shown in the following figure.

Increasing horizontal scalability

As shown in the figure below, to achieve horizontal scalability, an Axis2 cluster is fronted by a load balancer (depicted by LB in the figure). The load balancer spreads the load across the Axis2 cluster according to some load balancing algorithm. The round-robin load balancing algorithm is one such popular and simple algorithm, and it works well when all the hardware and software on the nodes are identical. Generally, a horizontally scalable cluster will maintain its response time and will not degrade in performance under increasing load; throughput will also increase as the load increases in such a setup. Generally, the number of nodes in the cluster is a function of the expected maximum peak load. In such a cluster, all nodes are active.
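As a rough illustration of the verification procedure described earlier, the following Java client sketch calls setValue on one node and getValue on the other. The ports, service name, and namespace are assumptions for a local two-node setup; depending on the session scope in use, the client may also need WS-Addressing engaged so that both calls are associated with the same session:

    import javax.xml.namespace.QName;
    import org.apache.axis2.addressing.EndpointReference;
    import org.apache.axis2.client.Options;
    import org.apache.axis2.rpc.client.RPCServiceClient;

    public class ReplicationCheckClient {
        // Hypothetical endpoints for the two cluster nodes
        private static final String NODE1 =
                "http://localhost:8080/axis2/services/ClusterableService";
        private static final String NODE2 =
                "http://localhost:8081/axis2/services/ClusterableService";
        // Hypothetical namespace; it must match the deployed service
        private static final String NS = "http://example.org/clusterable";

        public static void main(String[] args) throws Exception {
            // Call setValue on node 1
            RPCServiceClient client1 = new RPCServiceClient();
            Options options1 = new Options();
            options1.setTo(new EndpointReference(NODE1));
            client1.setOptions(options1);
            client1.invokeRobust(new QName(NS, "setValue"),
                    new Object[] { "replicated-hello" });

            // Call getValue on node 2; if state replication is working,
            // this prints the value that was set on node 1
            RPCServiceClient client2 = new RPCServiceClient();
            Options options2 = new Options();
            options2.setTo(new EndpointReference(NODE2));
            client2.setOptions(options2);
            Object[] response = client2.invokeBlocking(new QName(NS, "getValue"),
                    new Object[] {}, new Class[] { String.class });
            System.out.println("Value from node 2: " + response[0]);
        }
    }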

MODx 2.0: Web Development Basics

Packt
24 Feb 2011
7 min read
MODx Web Development - Second Edition

Site configuration

When you first log in to the MODx Manager interface, you will see the site configuration page in the rightmost panel. Here you can customize some basic configuration of the site. You can reach this page from anywhere in the MODx Manager by clicking on the Configuration sub-menu in the Tools menu. All of the options on this Configuration page are settings that are global to the entire site. After changing the configuration, you have to let MODx store it by clicking on the Save button.

The configurations are grouped into five categories:

- Site: mostly settings that are used to personalize the site
- Friendly URLs: settings to help make the site search-engine optimized
- User: settings related to user logins
- Interface & Features: mostly Manager interface customizations
- File Manager: settings defining what can be uploaded, and where

Configuring the site

In this section, we are going to make a few changes to get you familiar with the various configuration options available. Most options have tooltips that describe them in a little pop-up when you move the mouse over them. After making changes in the site configuration and saving them, you will be redirected to another page. This page is available by clicking on the Home link on the Site tab, and it is also the default Manager interface page; every time you log in using the Manager login screen, you will reach this page by default. The page has seven tabs, which are briefly explained below:

- My MODx Site: provides quick access to certain features in MODx
- Configuration: displays information on the current status of the site
- MODx News: shows updates on what is happening with MODx
- Security Notices: shows updates on what is happening with MODx that is specific to security
- Recent Resources: shows a list, with hyperlinks, of the recently created or edited resources
- Info: shows information about your login status
- Online: lists all of the active users

Noticing and fixing errors and warnings

The Configuration tab of the default Manager interface page displays errors and warnings about issues in the installation, if any. Generally, it also has instructions on how to fix them. Most of the time, the warnings concern security issues or suggestions for improving performance. Hence, although the site will continue to work when there are warnings listed on this page, it is good practice to fix the issues that caused them. Here we discuss three such warnings that occur commonly, and show how to fix them:

- config file still writable: shown when the configuration file is still writable. It can be fixed by changing the properties of the configuration file to read only.
- register_globals is set to ON in your php.ini configuration file: this is a setting in the PHP configuration file, and it should be set to OFF. Having it ON makes the site more vulnerable to what is known as cross-site scripting (XSS).
- Configuration warning: GD and/or Zip PHP extensions not found: shown when you do not have the specified packages installed with PHP. MAMP doesn't come with the Zip extension, and you can ignore this warning if you are not using that extension in production. Both XAMPP and MAMP come with the GD extension by default.

Changing the name of the site

In the previous section, we listed the groups of configuration options that are available.
Let us change one option—the name of the site—now:

1. Click on the Tools menu in the top navigational panel.
2. Click on the Configuration menu item.
3. Change the text field labeled Site Name to Learning MODx.
4. Click on the Save button.

The basic element of MODx: Resources

Resources are the basic building blocks in MODx. They are the elements that make up the content of the site. Every web page in MODx corresponds to a resource. In early versions of MODx, resources were called Documents, and thinking of them as documents may make it easier for you to understand. Every resource has a unique ID. This ID can be passed along in the URL, and MODx will display the page for the resource with that ID. In the simplest case, a resource contains plain text. As can be seen from the previous screenshot, the ID referred to here is 2, and the content displayed on the screen is from resource ID 9. It is also possible to refer to a resource by an alias name instead of an ID. An alias is a friendly name that can be used instead of having to use numbers.

Containers

Resources can be contained within other resources called containers. Containers in MODx are like folders in filesystems, but with the difference that a container is also a resource. This means that every container also has a resource ID, and a corresponding page is shown when such an ID is referenced in the URL.

MODx Manager interface

MODx is administered and customized by using the provided Manager interface. From the Manager interface, you can edit resources, place them within containers, and change their properties. You can log in to the Manager interface at http://sitename/manager, using the username and password that you supplied when installing MODx. The Manager interface is divided into two panes. The leftmost pane always displays the resources in a resource tree, and the rightmost pane displays the content relevant to your last action. Above these two panes are the menus and their corresponding menu items, each of which leads to the different functionalities of MODx.

In the leftmost pane, you will see the site name followed by a hierarchically-grouped resource list. There is a + near every unexpanded container that holds other resources. When you click on the + symbol, the container expands to show its children, and the + symbol changes to a – symbol. Clicking on the – symbol hides the children of the respective container. The resource's ID is displayed in parentheses after the resource's title in the resource tree. The top of the leftmost pane consists of a few icons, referred to as the Resource Toolbar, which help to control the visibility of the resource tree:

Expand Site Tree—expand all of the containers to show their children and siblings.
Collapse Site Tree—collapse all of the containers to hide their children and siblings.
New Resource—open a new resource page in the rightmost pane.
New Weblink—open a new weblink page in the rightmost pane.
Refresh Site Tree—refresh the tree of containers and resources to make available any changes that are not yet reflected in the tree.
Sort the Site Tree—open a pop-up page where you can select from the various criteria available to sort the tree.
Purge—when you delete a resource, it stays in the recycle bin and is struck out with a red line. Such resources can be completely removed from the system by clicking on the Purge icon.
Hide Site Tree—slide the leftmost pane out of view, giving more space for the rightmost pane.
Right-clicking on a resource brings up a context menu from which you can perform various actions on the resource. Clicking on Edit will open the page for editing in the rightmost pane. The context menu provides interesting shortcuts that are very handy. A quick example of addressing resources by ID and by alias follows.
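To make the ID and alias concepts concrete, here is a minimal sketch of how the same resource might be requested, assuming a resource with ID 9 and alias about (both values are illustrative, and the second form depends on the Friendly URLs settings described earlier):

    http://sitename/index.php?id=9
    http://sitename/about.html

Both requests resolve to the same resource; the alias simply gives it a friendlier address.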

Tips and Tricks on IBM FileNet P8 Content Manager

Packt
24 Feb 2011
11 min read
Getting Started with IBM FileNet P8 Content Manager

Install, customize, and administer the powerful FileNet Enterprise Content Management platform
Quickly get up to speed on all significant features and the major components of IBM FileNet P8 Content Manager
Provides technical details that are valuable both for beginners and experienced Content Management professionals alike, without repeating product reference documentation
Gives a big picture description of Enterprise Content Management and related IT areas to set the context for Content Manager
Written by an IBM employee, Bill Carpenter, who has extensive experience in Content Manager product development, this book gives practical tips and notes with a step-by-step approach to design real Enterprise Content Management solutions to solve your business needs

Installation care

If you are using a virtual server image with snapshot capability, it's a good idea to use snapshots. In fact, we recommend taking a snapshot after each of the major installation steps. If something goes wrong in a later step, you can recover back to the snapshot point to save yourself the trouble of starting over.

WAS Bootstrap Hostname

In a development environment, the domain name might not resolve in your DNS. In that case, enter the IP address for that server instead.

Populating Tivoli Directory Server (TDS)

We could use the TDS Realms interface to construct our users and groups. If you use TDS in your enterprise, that's a good way to go. It offers several user interface niceties for directory administration, and it also offers partial referential integrity for the entries.

Directory concepts and notation

Directory concepts and notation can seem pretty odd; most people don't encounter them every day. There is a lot of material available on the web to explain both the concepts and the notation. Here is one example that is clearly written and oriented toward directory novices: http://www.skills-1st.co.uk/papers/ldapschema-design-feb-2005/index.html.

Close up all of the nodes before you exit FEM

FEM remembers the state of the tree view from session to session. When you start FEM the next time, it will try to open the nodes you had open when you exited. That will often mean something of a delay as it reads extensive data for each open Object Store node. You might find it a useful habit to close up all of the nodes before you exit FEM.

Using topology levels

A set of configuration data, if used, is used as the complete configuration. That is, the configuration objects at different topology levels are not blended to create an "effective configuration".

Trace logging

Although similar technologies are used to provide trace logging in the CE server and the client APIs, the configuration mechanisms are completely separate. The panels in FEM control only tracing within the CE server and do not apply to any client tracing. If you find that performance still drags, or that the trace log file continues to grow even after you have disabled trace logging in the Domain configuration, it could be that trace logging is still configured at a more specific level. That's very easy to overlook, especially in more complex deployments or where CM administration duties are shared.

Collaborative checkout

Even with a collaborative checkout, the subsequent checkin is still subject to access checks, so you can still have fine-grained control over that.
In fact, because you can use fine-grained security to limit who can do a checkin, you might as well make the Object Store default be Collaborative unless you have some specific use case that demands Exclusive.

Cancel the creation of the class

Although the final step in the Create a Class Wizard will still let you cancel the creation of the class, any property templates and choice lists you created along the way will already have been created in the Object Store. If you wish to completely undo your work, you will have to delete them manually.

FEM query interface

A historical quirk of the FEM query interface is that the SELECT list must begin with the This property. That is not a general requirement of CE SQL, as illustrated below.
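For example, a query pasted into FEM might look like the following (a sketch only; This and DocumentTitle are standard Document properties, but the WHERE clause is illustrative):

    SELECT This, DocumentTitle FROM Document WHERE IsCurrentVersion = TRUE

The same query issued through the CE APIs would be free to omit This or to order the SELECT list differently.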
Running the CSE installer

If you are running the CSE installer, and eventually the CSE itself, on the same machine as the CE, you might be tempted to use localhost as the CSE server host. From the CE point of view, that would be technically correct. However, exploiting little tricks like that is a bad habit to get into. It certainly won't work in any environment where you install the CSE separately from the CE or have multiple CSE servers installed. We suggest you use a proper host name. Be sure to get the server name correct, since the installer and the Verity software will sprinkle it liberally throughout several configuration files. If it is not correct by default, which is one of the hazards of using dynamic IP addresses, correct it now.

CBR Locale field

uni stands for Unicode and is generally the best choice for mixed-languages support. If you think you don't need mixed-languages support, there's a pretty good chance you are mistaken, even if all of your users have the same locale settings in their environments. In any case, if you are tempted to use a different CBR locale, you should first read the K2 locale customization guide, since it's a reasonably complicated topic.

Process Service does not start

If the Process Service does not start, check to make sure that the Windows service named Process Engine Services Manager is started. If not, start it manually and make sure it is marked for automatic startup.

Configure two WAS profiles

When trying to duplicate configuration aspects of one WAS profile into another WAS profile, we could theoretically have the WAS consoles open simultaneously in separate browser windows, which would facilitate side-by-side comparisons. In practice, this is likely to confuse the browser cookies holding the session information and drive you slightly crazy. If you have two different browsers installed, for example Firefox and Internet Explorer, you can open one WAS console in each.

Disk space used by XT

Disk space used by XT may exceed your expectations. We recommend having at least 2 gigabytes of disk space available when doing an XT installation. A lot of that can be recovered after XT is deployed into the application server.

Object deletion

Deleted objects are really, permanently deleted by the CE server. There is no undo, recycle bin, or similar mechanism unless an application implements one.

Notional locking and cooperative locking

Don't confuse the notional locking that comes via checkout with the unrelated feature of cooperative locking. Cooperative locking is an explicit mechanism for applications to mark a Document, Folder, or Custom Object as being locked. As the name implies, this only matters for applications which check for and honor cooperative locks. The CE will not prevent any update operation—other than locking operations themselves—just because there is a cooperative lock on the object.

Synchronous or asynchronous subscription

As a terminology convenience, events or event handlers are sometimes referred to as being synchronous or asynchronous. This is not technically correct, because the designation is always made on the subscription object. An event can have either kind of subscription, and an event handler can be invoked both ways.

Synchronous subscription event handlers

The CE does not always throw an exception if the event handler for a synchronous subscription updates the triggering object. This has allowed many developers to ignore the rule that such updates are not allowed, assuming it is merely a best practice. Nonetheless, it has always been the rule that synchronous subscription event handlers are not allowed to do that. Even if it works in a particular instance, it may fail at random times that escape detection in testing. Don't fall into this trap!

AddOn in the P8 Domain

If you don't happen to be a perfect person, you might have to iterate a few times during the creation and testing of your AddOn until you get things exactly the way you want them. For the sake of mere mechanical efficiency, we usually do this kind of work using a virtual machine image that includes a snapshot capability. We make a snapshot just before creating the AddOn in the P8 Domain. Then we do the testing. If we need to iterate, it's pretty fast to roll back to the snapshot point.

"anonymous access" complaints from the CE

When an application server sees a Subject that it doesn't trust, because there is no trust relationship with the sending application server, it will often simply discard the Subject or strip vital information out of it. Hence, complaints from the CE that you are trying to do "anonymous access" often mean that there is something wrong with your trust relationship setup.

Unknown ACE

An "unknown" Access Control Entry (ACE) in an Access Control List (ACL) comes about because ACEs sometimes get orphaned. The user or group mentioned in the ACE gets deleted from the directory, but the ACE still exists in the CE repository. These ACEs will never match any calling user and so will never figure into any access control calculation. Application developers have to be aware of this kind of ACE when programmatically displaying or modifying the ACL. The unknown ACEs should be silently filtered out and not displayed to end users. (FEM displays unknown ACEs, but it is an administrator tool.) If updates are made to the ACL, the unknown ACEs definitely must be filtered out. Otherwise, the CE will throw an exception because it cannot resolve the user or group in the directory.

Virtualization

Several years ago, CM product documentation said that virtual machine technology was supported, but that you might have to reproduce any problems directly on physical hardware if you needed support. That's no longer the case, and virtualization is supported as a first-class citizen. For your own purposes, you will probably want to evaluate whether there are any significant performance costs to the virtualization technology you have chosen. The safest way to evaluate that is under a similar configuration and load as that of your intended production environment.

File Storage Area

Folders used internally within a File Storage Area for content have no relationship to the folders used for filing objects within an Object Store. On reflection, this should be obvious, since you can store content for unfiled documents. Whereas the folders in an Object Store are an organizing technique for objects, the folders in a File Storage Area are used to avoid overwhelming the native filesystem with too many files in a single directory (which can impact performance).

Sticky sessions

All API interactions with the CE are stateless. In other words, except for load balancing, it doesn't matter which CE server is used for any particular API request. Requests are treated independently, and the CE does not maintain any session state on behalf of the application. On the other hand, some CM web applications do need to be configured for sticky sessions. A sticky session means that incoming requests (usually from a web browser) must return to the same copy of the application for subsequent requests.

Disaster Recovery (DR)

There is technology available for near real-time replication for DR. It can be tempting to think of your DR site as your data backup, or at least as eliminating the need for traditional backups. It seems too good to be true, since all of your updates are almost instantaneously copied to another datacenter. The trap is that the replication can't tell desirable updates from mistakes. If you have to recover some of your data because of an operational mistake (for example, if you drop the tables in an Object Store database), the DR copy will reflect the same mistake. You should still do traditional backups even if you have a replicated DR site.

Further resources on this subject:
IBM FileNet P8 Content Manager: Administrative Tools and Tasks [Article]
IBM FileNet P8 Content Manager: Exploring Object Store-level Items [Article]
IBM FileNet P8 Content Manager: End User Tools and Tasks [Article]

Apache Axis2 Web Services: Writing an Axis2 Module

Packt
22 Feb 2011
14 min read
Apache Axis2 Web Services, 2nd Edition

Create secure, reliable, and easy-to-use web services using Apache Axis2.
Extensive and detailed coverage of the enterprise-ready Apache Axis2 1.5 Web Services / SOAP / WSDL engine.
Attain a more flexible and extensible framework with the world-class Axis2 architecture.
Learn all about AXIOM—the complete XML processing framework, which you can also use outside Axis2.
Covers advanced topics like security, messaging, REST, and asynchronous web services.
Written by Deepal Jayasinghe, a key architect and developer of the Apache Axis2 Web Service project; and Afkham Azeez, an elected ASF and PMC member.

Web services are gaining a lot of popularity in the industry and have become one of the major enablers of application integration. In addition, due to the flexibility and advantages of using web services, everyone is trying to enable web service support for their applications. As a result, web service frameworks need to support new and more custom requirements. One of the major goals of a web service framework is to deliver incoming messages to the target service. However, just delivering the message to the service is not enough; today's applications are required to have reliability, security, transactions, and other quality services. In our approach, we will be using code samples to help us understand the concepts better.

Brief history of the Axis2 module

Looking back at the history of Apache Web Services, the Handler concept can be considered one of the most useful and interesting ideas. Due to the importance and flexibility of the handler concept, Axis2 has also introduced it into its architecture. Notably, there are some major differences in the way you deploy handlers in Axis1 and Axis2. In Axis1, adding a handler requires you to perform global configuration changes, and for an end user this process may become a little complex. In contrast, Axis2 provides an easy way to deploy handlers. Moreover, in Axis2, deploying a handler is similar to deploying a service and does not require global configuration changes. At the design stage of Axis2, one of the key considerations was to have a mechanism to extend the core functionality without much effort. One of the main reasons behind this design decision was the lesson learned from supporting WS reliable messaging in Axis1. The process of supporting reliable messaging in Axis1 involved a considerable amount of work, and part of the reason was the limited extensibility of the Axis1 architecture. Therefore, learning from that lesson, Axis2 introduced a very convenient and flexible way of extending the core functionality and providing quality of services. This particular mechanism is known as the module concept.

Module concept

One of the main ideas behind a handler is to intercept the message flow and execute specific logic. In Axis2, the concept of a module is to provide a very convenient way of deploying service extensions. We can also consider a module as a collection of handlers and the resources required to run the handlers (for example, third-party libraries). One can also consider a module as an implementation of a web service standard specification. As an illustration, Apache Sandesha is an implementation of the WS-RM specification, and Apache Rampart is an implementation of WS-Security; likewise, a module is, in general, an implementation of a web service specification.
One of the most important aspects of the Axis2 module is that it provides a very easy way to extend the core functionality and provide better customization of the framework to suit complex business requirements. A simple example would be to write a module to log all the incoming messages, or to count the number of messages, if requested.

Module structure

Axis1 is one of the most popular web service frameworks and it provides very good support for most of the web service standards. However, when it comes to new and complex specifications, there is a significant amount of work we need to do to achieve our goals. The problem becomes further complicated when the work involves handlers, configuration, and third-party libraries. To overcome this issue, the Axis2 module concept and its structure can be considered a good candidate. As we discussed in the deployment section, both Axis2 services and modules can be deployed as archive files. Inside any archive file, we can have configuration files, resources, and whatever else the module author would like to include.

It should be noted here that we have hot deployment and hot update support for services; in other words, you can add a service while the system is up and running. Unfortunately, however, we cannot deploy new modules while the system is running. You can still deploy modules, but Axis2 will not apply the changes to the running system (we can drop them into the directory, but Axis2 will not recognize them), so modules do not get hot deployment or hot update. The main reason behind this is that, unlike services, modules tend to change the system configuration, and performing system changes at runtime on an enterprise-level application cannot be considered a good thing at all. Adding a handler into Axis1 involves global configuration changes and, obviously, a system restart. In contrast, when it comes to Axis2, we can add handlers using modules without making any global-level changes. There are instances where you need to make global configuration changes, but that is a very rare situation, and you only need to do so if you are trying to add new phases and change the phase order. You can change the handler chain at runtime without restarting the system, but changing the handler chain or any global configuration at runtime cannot be considered a good habit: in a production environment, changing runtime data may affect the whole system. However, at deployment and testing time this comes in handy.

The structure of a module archive file is almost identical to that of a service archive file, except for the name of the configuration file. We know that for a service archive file to be valid, it is required to have a services.xml. In the same way, for a module to be a valid module archive, it has to have a module.xml file inside the META-INF directory of the archive. A typical module archive file will take the structure shown in the following screenshot. We will discuss each of the items in detail and create our own module in this article as well.

Module configuration file (module.xml)

The module archive file is a self-contained and self-described file. In other words, it has to have all the configuration required to be a valid and useful module. Needless to say, that is the beauty of a self-contained package. The module configuration file, or module.xml file, is the configuration file that Axis2 can understand to do the necessary work. A simple module.xml file has one or more handlers.
In contrast, when it comes to complex modules, we can have other configuration items (for example, WS policies and phase rules) in a module.xml. First, let's look at the available types of configuration in a module.xml. For our analysis, we will use the module.xml of a module that counts all the incoming and outgoing messages. We will discuss all the important items in detail and provide a brief description of the other items:

Handlers along with phase rules
Parameters
Description of the module
Module implementation class
WS-Policy
Endpoints

Handlers and phase rules

A module is a collection of handlers, so a module could have one or more handlers. Irrespective of the number of handlers in a module, module.xml provides a convenient way to specify handlers. Most importantly, module.xml can be used to provide enough configuration options to add a handler into the system and specify the exact location where the module author would like the handler to run. Phase rules are the mechanism that tells Axis2 to put handlers into a particular location in the execution chain, so now it is time to look at them with an example. Before learning how to write phase rules and specify handlers in a module.xml, let's look at how to write a handler. There are two ways to write a handler in Axis2:

Implement the org.apache.axis2.engine.Handler interface
Extend the org.apache.axis2.handlers.AbstractHandler abstract class

In this article, we are going to write a simple application to provide a better understanding of the module. Furthermore, to make the sample application easier, we are going to ignore some of the difficulties of the Handler API. In our approach, we will extend AbstractHandler. When we extend the abstract class, we only need to implement one method, called invoke. The following sample code illustrates how to implement the invoke method:

public class IncomingCounterHandler extends AbstractHandler implements CounterConstants {

  public InvocationResponse invoke(MessageContext messageContext) throws AxisFault {
    // get the counter property from the configuration context
    ConfigurationContext configurationContext = messageContext.getConfigurationContext();
    Integer count = (Integer) configurationContext.getProperty(INCOMING_MESSAGE_COUNT_KEY);
    // increment the counter
    count = Integer.valueOf(count.intValue() + 1 + "");
    // set the new count back to the configuration context
    configurationContext.setProperty(INCOMING_MESSAGE_COUNT_KEY, count);
    // print it out
    System.out.println("The incoming message count is now " + count);
    return InvocationResponse.CONTINUE;
  }
}

As we can see, the method takes a MessageContext as a parameter and returns an InvocationResponse. The method works as follows:

First, get the configurationContext from the messageContext.
Get the property value specified by the property name.
Then increase the value by one.
Next, set it back to the configurationContext.

In general, inside the invoke method, as a module author, you have to do all your logic processing, and depending on the result you get, you can decide whether to let AxisEngine continue, suspend, or abort. Depending on your decision, you return one of the three allowed return values:

InvocationResponse.CONTINUE: Gives the signal to continue processing the message.
InvocationResponse.SUSPEND: The message cannot continue as some of the conditions are not satisfied yet, so you need to pause the execution and wait.
InvocationResponse.ABORT: Something has gone wrong, therefore you need to drop the message and let the initiator know about it.

The corresponding CounterConstants class is just a collection of constants and looks as follows:

public interface CounterConstants {
  String INCOMING_MESSAGE_COUNT_KEY = "incoming-message-count";
  String OUTGOING_MESSAGE_COUNT_KEY = "outgoing-message-count";
  String COUNT_FILE_NAME_PREFIX = "count_record";
}

As we already mentioned, the sample module we are going to implement counts the number of requests coming into the system and the number of messages going out from the system. So far, we have only written the incoming message counter, so we need to write the outgoing message counter as well. The implementation of the outgoing message count handler looks like the following:

public class OutgoingCounterHandler extends AbstractHandler implements CounterConstants {

  public InvocationResponse invoke(MessageContext messageContext) throws AxisFault {
    // get the counter property from the configuration context
    ConfigurationContext configurationContext = messageContext.getConfigurationContext();
    Integer count = (Integer) configurationContext.getProperty(OUTGOING_MESSAGE_COUNT_KEY);
    // increment the counter
    count = Integer.valueOf(count.intValue() + 1 + "");
    // set it back to the configuration
    configurationContext.setProperty(OUTGOING_MESSAGE_COUNT_KEY, count);
    // print it out
    System.out.println("The outgoing message count is now " + count);
    return InvocationResponse.CONTINUE;
  }
}

The implementation logic is exactly the same as the incoming handler processing, except for the property name used in two places.
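With the two handlers in place, the module.xml can declare them along with phase rules. The following is a minimal sketch of what such a module.xml might look like for our counter module (the module name, package names, and phase names are illustrative; the phases named in the rules must exist in your axis2.xml):

<module name="counter" class="org.apache.axis2.sample.counter.CounterModule">
    <InFlow>
        <handler name="IncomingCounterHandler"
                 class="org.apache.axis2.sample.counter.IncomingCounterHandler">
            <order phase="Dispatch"/>
        </handler>
    </InFlow>
    <OutFlow>
        <handler name="OutgoingCounterHandler"
                 class="org.apache.axis2.sample.counter.OutgoingCounterHandler">
            <order phase="MessageOut"/>
        </handler>
    </OutFlow>
</module>

The order element is where the phase rules live: it names the phase the handler belongs to and, optionally, the handler's position within that phase.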
Module implementation class

When we work with enterprise-level applications, it is obvious that we have to initialize various settings such as database connections, thread pools, property reading, and so on. Therefore, you need a place to put that logic in your module. We know that handlers run only when a request comes into the system, not at system initialization time. The module implementation class provides a way to run logic at system initialization time, as well as at system shutdown. As we mentioned earlier, the module implementation class is optional. A very good example of a module that does not have a module implementation class is the Axis2 addressing module. However, to understand the concept clearly, our example application will implement a module implementation class, as shown below:

public class CounterModule implements Module, CounterConstants {
  private static final String COUNTS_COMMENT = "Counts";
  private static final String TIMESTAMP_FORMAT = "yyMMddHHmmss";
  private static final String FILE_SUFFIX = ".properties";

  public void init(ConfigurationContext configurationContext, AxisModule axisModule) throws AxisFault {
    // initialize our counters
    System.out.println("inside the init : module");
    initCounter(configurationContext, INCOMING_MESSAGE_COUNT_KEY);
    initCounter(configurationContext, OUTGOING_MESSAGE_COUNT_KEY);
  }

  private void initCounter(ConfigurationContext configurationContext, String key) {
    Integer count = (Integer) configurationContext.getProperty(key);
    if (count == null) {
      configurationContext.setProperty(key, Integer.valueOf("0"));
    }
  }

  public void engageNotify(AxisDescription axisDescription) throws AxisFault {
    System.out.println("inside the engageNotify " + axisDescription);
  }

  public boolean canSupportAssertion(Assertion assertion) {
    // returns whether policy assertions can be supported
    return false;
  }

  public void applyPolicy(Policy policy, AxisDescription axisDescription) throws AxisFault {
    // configure using the passed-in policy!
  }

  public void shutdown(ConfigurationContext configurationContext) throws AxisFault {
    // do cleanup - in this case we'll write the values of the counters to a file
    try {
      SimpleDateFormat format = new SimpleDateFormat(TIMESTAMP_FORMAT);
      File countFile = new File(COUNT_FILE_NAME_PREFIX + format.format(new Date()) + FILE_SUFFIX);
      if (!countFile.exists()) {
        countFile.createNewFile();
      }
      Properties props = new Properties();
      props.setProperty(INCOMING_MESSAGE_COUNT_KEY,
          configurationContext.getProperty(INCOMING_MESSAGE_COUNT_KEY).toString());
      props.setProperty(OUTGOING_MESSAGE_COUNT_KEY,
          configurationContext.getProperty(OUTGOING_MESSAGE_COUNT_KEY).toString());
      // write to a file
      props.store(new FileOutputStream(countFile), COUNTS_COMMENT);
    } catch (IOException e) {
      // if we have exceptions we'll just print a message and let it go
      System.out.println("Saving counts failed! Error is " + e.getMessage());
    }
  }
}

As we can see, there are a number of methods in the previous module implementation class. Notably, however, not all of them are in the Module interface. The Module interface has only the following methods; the others support our counter module-related logic:

init
engageNotify
applyPolicy
shutdown

At system startup time, the init method will be called, and at that time the module can perform various initialization tasks. In our sample module, we have initialized both the in-counter and the out-counter. When we engage this particular module to the whole system, to a service, or to an operation, the engageNotify method will be called. At that time, a module can decide whether to allow the engagement or not; say, for example, we try to engage the security module to a service, and at that time the module finds out that there is a conflict in the encryption algorithm. In that case, the module will not be able to engage: it throws an exception and Axis2 will not engage it. In this example, we do nothing inside the engageNotify method.

As you might already know, WS-Policy is one of the key standards and plays a major role in web service configuration. When you engage a particular module to a service, the module policy should be applied to the service and should be visible when we view the WSDL of that service. The applyPolicy method sets the module policy on the corresponding services or operations when we engage the module. In this particular example, we do not have any policy associated with the module, so we do not need to worry about this method either. As we discussed for the init method, the shutdown method will be called when the system shuts down, so if we want to do any kind of processing at that time, we can add the logic to that particular method. In our example, for demonstration purposes, we have added code to store the counter values in a file.
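Once the module archive (for example, counter.mar) is dropped into the repository/modules directory, it still has to be engaged before its handlers run. As a sketch (assuming the module name counter from the earlier module.xml), engaging it for every service can be done in axis2.xml, and engaging it for a single service in that service's services.xml, using the same element:

<module ref="counter"/>

Placing the element at the appropriate level (global configuration, service group, service, or operation) controls the scope of the engagement.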

FAQs on YUI

Packt
18 Feb 2011
7 min read
YUI 2.8: Learning the Library

Develop your next-generation web applications with the YUI JavaScript development library
Improve your coding and productivity with the YUI Library
Gain a thorough understanding of the YUI tools
Learn from detailed examples for common tasks

Q: What is the YUI?
A: The Yahoo! User Interface (YUI) Library is a toolkit packed full of powerful objects that enables rapid frontend GUI design for richly interactive web-based applications. The utilities provide an advanced layer of functionality and logic to your applications, while the controls are attractive pre-packaged objects that we can drop onto a page and begin using with little customization.

Q: Who is it for and who will it benefit the most?
A: The YUI is aimed at, and can be used by, just about anyone and everyone, from single-site hobbyists to creators of the biggest and best web applications around. Developers of any caliber can use as much or as little of it as they like to improve their site and to help with debugging. It's simple enough to use for those of you who have just a rudimentary working knowledge of JavaScript and the associated web design technologies, but powerful and robust enough to satisfy the needs of the most aspiring and demanding developers amongst you.

Q: How do I install it?
A: The simple answer is that you don't. Both you, while developing, and your users can load the components needed from the Yahoo! CDN, or even from the Google CDN, across the world. The CDN (Content Delivery Network) is what the press nowadays calls 'the cloud'; thus, your users are likely to get better performance loading the library from the CDN than from your own servers. However, if you wish, you can download the whole package, either to take a deep look into it or to serve it to your users from within your own network. You have to serve the library files yourself if you use SSL. A typical CDN include is shown below.
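For example, loading the core from the Yahoo! CDN takes a single script tag (a sketch; the version number in the path is illustrative and should match the release you target):

<script src="http://yui.yahooapis.com/2.8.2r1/build/yahoo-dom-event/yahoo-dom-event.js"></script>

The yahoo-dom-event.js file is an aggregate rollup of the Global Object, the Dom utilities, and the Event Utility described later, so one request covers the whole core.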
Q: From where can one download the YUI Library?
A: The YUI Library can be downloaded from the YUI homepage. The link can be found at http://developer.yahoo.com/yui/.

Q: Are there any licensing restrictions for YUI?
A: All of the utilities, controls, and CSS resources that make up the YUI have been publicly released, completely for free, under the open source BSD (Berkeley Software Distribution) license. This is a very unrestrictive license in general and is popular amongst the open source community.

Q: Which version should I use?
A: The YUI Library is currently provided in two versions, YUI2 and YUI3. YUI2 is the most stable version and there are no plans to discontinue it. In fact, the YUI Team is working on the 2.9 version, which will be the last major revision of the YUI2 code line and is to be released in the second half of 2011. The rest of this article will mostly refer to the YUI 2.8 release, which is the current one. The YUI3 code line is the newest and is much faster and more flexible. It has been redesigned from the ground up with all the experience accumulated over five years of development of the YUI2 code line. At this point, it does not have as complete a set of components as the YUI2 version, and many of those that do exist are in 'beta' status. If your target release date is towards the end of 2011, YUI3 is a better choice, since by then more components should be out of 'beta'. The YUI team has also opened the YUI Gallery to allow for external contributions. YUI3, being more flexible, allows for better integration of third-party components; thus, what you might not yet find in the main distribution might already be available from the YUI Gallery.

Q: Does Yahoo! use the YUI Library? Do I get the same one?
A: They certainly do! The YUI Library you get is the very same one that Yahoo! uses to power their own web applications, and it is all released at the same time. Moreover, if you are in a rush, you can also stay ahead of the releases (at your own risk) by looking at GitHub, which is the main live repository for both YUI versions. You can follow YUI's development day by day.

Q: How do I get support?
A: The YUI Library has always been one of the best documented libraries available, with good user guides and plenty of well explained examples besides the automated API docs. If that is not enough, you can reach the forums, which currently have over 7000 members, with many very knowledgeable people amongst them, both from the YUI team and many power users.

Q: What does the core of the YUI library do?
A: What was then known as the 'browser wars', with several companies releasing their own sets of features in their browsers, left the programming community with a set of incompatible features which made frontend programming a nightmare. The core utilities fix these incompatibilities by providing a single, standard, and predictable API, dealing with each browser as needed. The core of the library consists of the following three files:

YAHOO Global Object: The Global Object sets up the global YUI namespace and provides other core services to the rest of the utilities and controls. It's the foundational base of the library and is a dependency for all other library components (except for the CSS tools).
Dom utilities: The Dom utilities provide a series of convenient methods that make working with the Document Object Model much easier and quicker. It adds useful selection tools, such as those for obtaining elements based on their class instead of an ID, and smooths out the inconsistencies between different browsers to make interacting with the DOM programmatically a much more agreeable experience.
Event Utility: The Event Utility provides a unified event model that co-exists peacefully with all of the A-grade browsers in use today and offers a consistent method of accessing the event object. Most of the other utilities and controls also rely heavily upon the Event Utility to function correctly.

A short example of the core in action follows.
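The following sketch shows the flavor of the core API, assuming the yahoo-dom-event rollup from the earlier snippet has been loaded (the class names used here are hypothetical):

<script>
YAHOO.util.Event.onDOMReady(function () {
    // safe to touch the DOM from here on
    var items = YAHOO.util.Dom.getElementsByClassName('product', 'div');
    YAHOO.util.Dom.addClass(items, 'highlight');
});
</script>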
Q: What are A-grade browsers?
A: For each release, the YUI Library is thoroughly tested on a variety of browsers. This list of browsers is taken from Yahoo!'s own statistics of visitors to their sites. The YUI Library must work on all browsers with a significant number of users; the A-grade browsers are those that make up the largest share of users. Fortunately, browsers come in 'families' (for example, Google's Chrome and Apple's Safari both use the WebKit rendering engine); thus, a positive result in one of them is likely to apply to all of them. Testing in Safari for Mac provides valid results for Safari on Windows, which is rarely seen. Those browsers are considered X-grade, meaning they haven't been tested but are likely to work fine. Finally, we have the C-grade browsers, which are known to be obsolete; neither YUI nor any other library can really be expected to work on them. This policy is called Graded Browser Support and it is updated quarterly. It does not depend on the age of the browser but on its popularity; for example, IE6 is still in the A-grade list because it still has a significant share.

How to Overcome the Pitfalls of Magento

Packt
17 Feb 2011
7 min read
Magento 1.4 Development Cookbook

Extend your Magento store to the optimum level by developing modules and widgets
Develop Modules and Extensions for Magento 1.4 using PHP with ease
Socialize your store by writing custom modules and widgets to drive in more customers
Achieve a tremendous performance boost by applying powerful techniques such as YSlow, PageSpeed, and Siege
Part of Packt's Cookbook series: Each recipe is a carefully organized sequence of instructions to complete the task as efficiently as possible

The reader can benefit from the previous article on Magento 1.4: Performance Optimization.

Using APC/Memcached as the cache backend

Magento ships with a cache system that is based on files by default. We can boost the overall performance by changing the cache handler to a better engine, such as APC or Memcached. This recipe will help us to set up APC or Memcached as the cache backend.

Getting ready

Installation of APC: Alternative PHP Cache (APC) is a PECL extension. For any Debian-based distro, it can be installed with an easy command from the terminal:

sudo apt-get install php5-apc

Or:

sudo pecl install APC

You can also install it from source. The package download location for APC is: http://pecl.php.net/package/APC. Check whether it shows up in phpinfo(). If you cannot see an APC block there, then you might not have added APC to the php.ini file.

Installation of Memcached: Memcached is also available in most OS package repositories. You can install it from the command line:

sudo apt-get install php5-memcached

Memcached can be installed from source as well. Check whether it shows up in phpinfo(). If you cannot see a Memcached block there, then you might not have added Memcached to the php.ini file. You can also check it via the telnet client. Issue the following command in the terminal:

telnet localhost 11211

We can issue the get command now:

get greeting

Nothing happened? We have to set it first:

set greeting 1 0 11
Hello World
STORED
get greeting
Hello World
END
quit

How to do it...

Okay, we are all set to go for APC or Memcached. Let's do it now for APC:

1. Open local.xml in your favorite PHP editor.
2. Add the cache block as follows:

<?xml version="1.0"?>
<config>
  <global>
    <install>
      <date><![CDATA[Sat, 26 Jun 2010 11:55:18 +0000]]></date>
    </install>
    <cache>
      <backend>apc</backend>
      <prefix>alphanumeric</prefix>
    </cache>
    <crypt>
      <key><![CDATA[870f60e1ba58fd34dbf730bfa8c9c152]]></key>
    </crypt>
    <disable_local_modules>false</disable_local_modules>
    <resources>
      <db>
        <table_prefix><![CDATA[]]></table_prefix>
      </db>
      <default_setup>
        <connection>
          <host><![CDATA[localhost]]></host>
          <username><![CDATA[root]]></username>
          <password><![CDATA[f]]></password>
          <dbname><![CDATA[magento]]></dbname>
          <active>1</active>
        </connection>
      </default_setup>
    </resources>
    <session_save><![CDATA[files]]></session_save>
  </global>
  <admin>
    <routers>
      <adminhtml>
        <args>
          <frontName><![CDATA[backend]]></frontName>
        </args>
      </adminhtml>
    </routers>
  </admin>
</config>

3. Delete all files from the var/cache/ directory.
4. Reload your Magento and benchmark it now to see the boost in performance. Run the benchmark several times to get an accurate result:

ab -c 5 -n 100 http://magento.local.com/

You can use either APC or Memcached. Let's test it with Memcached now.
Delete the cache block we set for APC previously, and add the cache block as follows:

<?xml version="1.0"?>
<config>
  <global>
    <install>
      <date><![CDATA[Sat, 26 Jun 2010 11:55:18 +0000]]></date>
    </install>
    <crypt>
      <key><![CDATA[870f60e1ba58fd34dbf730bfa8c9c152]]></key>
    </crypt>
    <disable_local_modules>false</disable_local_modules>
    <resources>
      <db>
        <table_prefix><![CDATA[]]></table_prefix>
      </db>
      <default_setup>
        <connection>
          <host><![CDATA[localhost]]></host>
          <username><![CDATA[root]]></username>
          <password><![CDATA[f]]></password>
          <dbname><![CDATA[magento]]></dbname>
          <active>1</active>
        </connection>
      </default_setup>
    </resources>
    <session_save><![CDATA[files]]></session_save>
    <cache>
      <backend>memcached</backend> <!-- apc / memcached / xcache / empty=file -->
      <slow_backend>file</slow_backend> <!-- database / file (default) - used for 2 levels cache setup, necessary for all shared memory storages -->
      <memcached> <!-- memcached cache backend related config -->
        <servers> <!-- any number of server nodes can be included -->
          <server>
            <host><![CDATA[127.0.0.1]]></host>
            <port><![CDATA[11211]]></port>
            <persistent><![CDATA[1]]></persistent>
            <weight><![CDATA[2]]></weight>
            <timeout><![CDATA[10]]></timeout>
            <retry_interval><![CDATA[10]]></retry_interval>
            <status><![CDATA[1]]></status>
          </server>
        </servers>
        <compression><![CDATA[0]]></compression>
        <cache_dir><![CDATA[]]></cache_dir>
        <hashed_directory_level><![CDATA[]]></hashed_directory_level>
        <hashed_directory_umask><![CDATA[]]></hashed_directory_umask>
        <file_name_prefix><![CDATA[]]></file_name_prefix>
      </memcached>
    </cache>
  </global>
  <admin>
    <routers>
      <adminhtml>
        <args>
          <frontName><![CDATA[backend]]></frontName>
        </args>
      </adminhtml>
    </routers>
  </admin>
</config>

Save the local.xml file, clear all cache files from var/cache/, and reload your Magento frontend to check the performance. Optionally, mount var/cache as tmpfs:

mount tmpfs /path/to/your/magento/var/cache -t tmpfs -o size=64m

How it works...

Alternative PHP Cache (APC) is a free, open source opcode cache framework that optimizes PHP intermediate code and caches data and compiled code from the PHP bytecode compiler in shared memory, similar to Memcached. APC is quickly becoming the de facto standard PHP caching mechanism, as it will be included built-in to the core of PHP, starting with PHP 6. The biggest problem with APC is that you can only access the local APC cache.

Memcached's magic lies in its two-stage hash approach. It behaves as though it were a giant hash table, looking up key = value pairs: give it a key, and set or get some arbitrary data. When doing a memcached lookup, the client first hashes the key against the whole list of servers. Once it has chosen a server, the client sends its request, and the server does an internal hash key lookup for the actual item data. Memcached affords us endless possibilities (query caching, content caching, session storage) and great flexibility. It's an excellent option for increasing performance and scalability on any website without requiring a lot of additional resources.

Changing var/cache to tmpfs is a very good trick to improve disk I/O. I personally found APC's and Memcached's performance to be pretty similar; both are good to go. If you want to split your cache across multiple servers, go for Memcached. Good luck!

The highlighted cache sections in the code are the APC and Memcached settings, respectively.
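Before pointing Magento at either backend, it can be worth confirming that the relevant PHP extensions are actually loaded; a quick check from the command line might look like this (assuming the CLI reads the same php.ini as your web server):

php -r 'var_dump(extension_loaded("apc"), extension_loaded("memcached"));'

If either value prints false, revisit the php.ini configuration before switching the cache backend.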

Magento 1.4: Performance Optimization

Packt
17 Feb 2011
12 min read
Magento 1.4 Development Cookbook

Extend your Magento store to the optimum level by developing modules and widgets
Develop Modules and Extensions for Magento 1.4 using PHP with ease
Socialize your store by writing custom modules and widgets to drive in more customers
Achieve a tremendous performance boost by applying powerful techniques such as YSlow, PageSpeed, and Siege
Part of Packt's Cookbook series: Each recipe is a carefully organized sequence of instructions to complete the task as efficiently as possible

The reader can benefit from the previous article on How to Overcome the Pitfalls of Magento.

Users really respond to speed.
—Marissa Mayer, Google vice president of search products and user experience

We will explain why this quote is true throughout this article. Her key insight for the crowd at the Web 2.0 Summit is that "slow and steady doesn't win the race". Today people want fast and furious. Not convinced? Okay, let's look at some arguments:

500ms of added latency costs Google 20 percent of its traffic (this might be why there are only ten results per search page).
100ms of increased latency costs Amazon 1 percent of sales.
Reducing the weight of a page by 25 percent won Google 25 percent more users in the medium term.
For Yahoo!, an editorial site, losing 400ms meant a 5-9 percent drop in traffic.

This is the era of milliseconds and terabytes, so we pay a big price if we can't keep up. This article will describe how to ensure the optimum performance of your Magento store.

Measuring/benchmarking your Magento with Siege, ab, Magento profiler, YSlow, Page Speed, GTmetrix, and WebPagetest

The very first task of any website's performance optimization is to know its pitfalls. In other words, know why it is taking too much time. Who are the culprits? Fortunately, we have some amicable friends to guide us through. Let's list them:

ab (ApacheBench): This is bundled with every Apache installation as a benchmarking utility.
Siege: This is an open source stress/regression test and benchmark utility by Joe Dog.
Magento profiler: This is a built-in Magento profiler.
YSlow: This is a tool from Yahoo! that we have been using for years; it is in fact a Firebug add-on.
Page Speed: This is yet another Firebug add-on, from Google, to analyze page performance against some common rules.
GTmetrix: This is a cool online web application that gives you both YSlow and Page Speed reports in the same place. Opera fans who don't like Firefox or Chrome can get YSlow and Page Speed results here.
WebPagetest: This is another online benchmarking tool, like GTmetrix. It also collects and shows screenshots with the reports.

Okay, we are introduced to our new friends. In this recipe, we will work with them and find the pitfalls of our Magento store.

Getting ready

Before starting the work, we have to make sure that every required tool is in place. Let's check:

ab: This Apache benchmarking tool is bundled with every Apache installation. If you are on a Linux-based distribution, you can give it a go by issuing the following command in the terminal:

ab -h

Siege: We will use this tool on the same box as our server, so make sure you have it installed. You can check by typing this command (note that the option is a capital V):

siege -V

If it's installed, you should see the installed version information of Siege.
If it's not, you can install it with the following command in any Debian-based distro:

sudo apt-get install siege

You can also install it from source. To do so, grab the latest source from ftp://ftp.joedog.org/pub/siege/siege-latest.tar.gz, then issue the following steps sequentially:

# go to the location where you downloaded siege
tar xvzf siege-latest.tar.gz
# go to the siege folder; it should read something like siege-2.70
./configure
make
make install

If you are on a Windows-based box, you will find ab at: apache/bin/ab.exe

Magento profiler: This is a built-in tool with Magento.

YSlow: This Firebug add-on for Firefox can be installed from here: http://developer.yahoo.com/yslow/. The Firebug add-on is a dependency for YSlow.

Page Speed: This is also a Firebug add-on, which can be downloaded and installed from: http://code.google.com/speed/page-speed/download.html.

For using GTmetrix and WebPagetest, we will need an active Internet connection. Make sure you have these.

How to do it...

Using the simple tool ab: If you are on a Windows environment, go to the apache/bin/ folder; if you are on Unix, fire up your terminal and issue the following command:

ab -c 10 -n 50 -g mage.tsv http://magento.local.com/

In the previous command, the params denote the following:

-c: This is the concurrency number of multiple requests to perform at a time. The default is one request at a time.
-n: This is the number of requests to perform for the benchmarking session. The default is to perform just a single request, which usually leads to non-representative benchmarking results.
-g (gnuplot-file): This writes all measured values out as a gnuplot or TSV (tab-separated values) file. This file can easily be imported into packages like Gnuplot, IDL, Mathematica, Igor, or even Excel. The labels are on the first line of the file.

The preceding command generates a benchmarking report in the terminal and a file named mage.tsv in the current location, as we specified in the command. If we open the mage.tsv file in a spreadsheet editor such as OpenOffice or MS Excel, we can inspect the measured values for each request. You can tweak the ab params and view a full listing of params by typing ab -h in the terminal.

Using Siege: Let's lay Siege to it! Siege is an HTTP regression testing and benchmarking utility. It was designed to let web developers measure the performance of their code under duress, to see how it will stand up to load on the Internet. Siege supports basic authentication, cookies, and the HTTP and HTTPS protocols. It allows the user to hit a web server with a configurable number of concurrent simulated users. These users place the web server 'under Siege'. Let's create a text file with the URLs that will be tested under Siege. We can pass a single URL on the command line as well, but we will use an external text file to feed multiple URLs through a single command. Create a new text file in the terminal's current location. Let's assume that we are in the /Desktop/mage_benchmark/ directory.
1. Create a file named mage_urls.txt here and put the following URLs in it:

http://magento.local.com/
http://magento.local.com/skin/frontend/default/default/favicon.ico
http://magento.local.com/js/index.php?c=auto&f=,prototype/prototype.js,prototype/validation.js,scriptaculous/builder.js,scriptaculous/effects.js,scriptaculous/dragdrop.js,scriptaculous/controls.js,scriptaculous/slider.js,varien/js.js,varien/form.js,varien/menu.js,mage/translate.js,mage/cookies.js
http://magento.local.com/skin/frontend/default/default/css/print.css
http://magento.local.com/skin/frontend/default/default/css/stylesie.css
http://magento.local.com/skin/frontend/default/default/css/styles.css
http://magento.local.com/skin/frontend/default/default/images/np_cart_thumb.gif
http://magento.local.com/skin/frontend/default/default/images/np_product_main.gif
http://magento.local.com/skin/frontend/default/default/images/np_thumb.gif
http://magento.local.com/skin/frontend/default/default/images/slider_btn_zoom_in.gif
http://magento.local.com/skin/frontend/default/default/images/slider_btn_zoom_out.gif
http://magento.local.com/skin/frontend/default/default/images/spacer.gif
http://magento.local.com/skin/frontend/default/default/images/media/404_callout1.jpg
http://magento.local.com/electronics/cameras.html
http://magento.local.com/skin/frontend/default/default/images/media/furniture_callout_spot.jpg
http://magento.local.com/skin/adminhtml/default/default/boxes.css
http://magento.local.com/skin/adminhtml/default/default/ie7.css
http://magento.local.com/skin/adminhtml/default/default/reset.css
http://magento.local.com/skin/adminhtml/default/default/menu.css
http://magento.local.com/skin/adminhtml/default/default/print.css
http://magento.local.com/nine-west-women-s-lucero-pump.html

These URLs will vary with yours. Modify the list as it fits; you can add more URLs if you want.

2. Make sure that you are in the /Desktop/mage_benchmark/ directory in your terminal.

3. Now issue the following command:

siege -c 50 -i -t 1M -d 3 -f mage_urls.txt

This will take a fair amount of time. Be patient. After completion it should return a result something like the following:

Lifting the server siege..      done.
Transactions:                603 hits
Availability:              96.33 %
Elapsed time:              59.06 secs
Data transferred:          10.59 MB
Response time:              1.24 secs
Transaction rate:          10.21 trans/sec
Throughput:                 0.18 MB/sec
Concurrency:               12.69
Successful transactions:     603
Failed transactions:          23
Longest transaction:       29.46
Shortest transaction:       0.00

4. Repeat steps 1 and 3 to produce reports with some variations, and save them wherever you want. The option details can be found by typing the following command in the terminal:

siege -h

Magento profiler: Magento has a built-in profiler. You can enable it from the backend's System | Configuration | Advanced | Developer | Debug section. Now open the index.php file from your Magento root directory and uncomment lines 65 and 71. The lines read as follows:

line 65: #Varien_Profiler::enable();       // delete #
line 71: #ini_set('display_errors', 1);    // delete #

Save this file and reload your Magento frontend in the browser. You should see the profiler data at the bottom of the page, similar to the screenshot originally shown here. The profiler can also time your own code, as sketched below.
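Once the profiler is enabled, you can add your own timing blocks around suspect code; a minimal sketch (the timer name custom_block is arbitrary) might look like this:

Varien_Profiler::start('custom_block');
// ... the code you suspect is slow ...
Varien_Profiler::stop('custom_block');

The named timer then shows up in the profiler output at the bottom of the page alongside Magento's built-in measurements.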
Magento profiler: Magento has a built-in profiler. You can enable it from the backend's System | Configuration | Advanced | Developer | Debug section. Now open the index.php file from your Magento root directory and uncomment line numbers 65 and 71. The lines read as follows:

line 65: #Varien_Profiler::enable();    // delete the leading #
line 71: #ini_set('display_errors', 1); // delete the leading #

Save this file and reload your Magento frontend in the browser. You should see the profiler data at the bottom of the page, similar to the following screenshot:

YSlow: We have already installed the YSlow Firebug add-on. Open the Firefox browser and activate Firebug by pressing the F12 button or clicking the Firebug icon in the bottom-right corner of Firefox. Click on the YSlow link in Firebug. Select the ruleset; in my case I chose YSlow (V2). Click on the Run Test button. After a few seconds you will see a report page with the grade details. Here is mine:

You can click on the links and see what each one says.

Page Speed: Fire up your Firefox browser. Activate the Firebug panel by pressing F12. Click on the Page Speed link. Click on the Performance button and see the Page Speed Score and details. The output should be something like the following screenshot:

Using GTmetrix: This is an online tool that benchmarks a page with a combination of YSlow and Page Speed. Visit http://gtmetrix.com/ and DIY (Do It Yourself).

Using WebPagetest: This is a tool similar to GTmetrix, and can be accessed from here: http://www.webpagetest.org/.

How it works...

ab is a tool for benchmarking your Apache Hypertext Transfer Protocol (HTTP) server. It is designed to give you an impression of how your current Apache installation performs, and in particular how many requests per second it is capable of serving.

The analysis that Siege leaves you with can tell you a lot about the sustainability of your code and server under duress. Obviously, availability is the most critical factor. Anything less than 100 percent means there's a user who may not be able to access your site. So, in the preceding case, there is some issue to be looked at, given that availability was only 96.33 percent on a sustained one-minute siege with 50 concurrent users.

Concurrency is measured as the sum of the transaction times (a transaction being a server hit, including any possible authentication challenges) divided by the elapsed time. It tells us the average number of simultaneous connections. As a quick sanity check against the run above: 603 transactions at a mean response time of 1.24 seconds over 59.06 elapsed seconds gives 603 × 1.24 / 59.06 ≈ 12.7, which matches the reported concurrency of 12.69. High concurrency can be a leading indicator that the server is struggling: the longer it takes the server to complete a transaction while it is still opening sockets to deal with new traffic, the higher the concurrent traffic and the worse the server performance will be.

Yahoo!'s Exceptional Performance team has identified 34 rules that affect web page performance. YSlow's web page analysis is based on the 22 of these 34 rules that are testable. We used one of its predefined rulesets; you can modify them and create your own as well. When analyzing a web page, YSlow deducts points for each infraction of a rule and then applies a grade to each rule. An overall grade and score for the web page is computed by summing up the score for each rule, weighted by the rule's importance. Note that the rules are weighted for an average page; for various reasons, some rules may be less important for your particular page. In YSlow 2.0, you can create your own custom rulesets in addition to the following three predefined rulesets:

YSlow(V2): This ruleset contains all 22 testable rules
Classic(V1): This ruleset contains the first 13 rules
Small Site or Blog: This ruleset contains 14 rules that are applicable to small websites or blogs

Page Speed generates its results based on the state of the page at the time you run the tool. To ensure the most accurate results, you should wait until the page finishes loading before running Page Speed. Otherwise, Page Speed may not be able to fully analyze resources that haven't finished downloading.

Windows users can use Fiddler as an alternative to Siege. Developed by Microsoft, it can be downloaded from http://www.fiddler2.com/fiddler2/.
Alfresco 3 Business Solutions: Planning and Implementing Document Migration

Packt
15 Feb 2011
5 min read
Alfresco 3 Business Solutions Practical implementation techniques and guidance for delivering business solutions with Alfresco Deep practical insights into the vast possibilities that exist with the Alfresco platform for designing business solutions. Each and every type of business solution is implemented through the eyes of a fictitious financial organization - giving you the right amount of practical exposure you need. Packed with numerous case studies which will enable you to learn in various real world scenarios. Learn to use Alfresco's rich API arsenal with ease. Extend Alfresco's functionality and integrate it with external systems.

Planning document migration

Now we have a strategy for how to do the document migration and several import methods to choose from, but we have not yet thought about planning the migration itself. The end users will need time to select and organize the files they want to migrate, and we might need some time to write temporary import scripts, so we need to plan this well ahead of production day.

The end users will have to go through all their documents and decide which ones they want to keep and which ones they will no longer need. Sometimes the decision to keep a document is not up to the end user but instead might be controlled by regulations, so this requires extra research. The following screenshot shows the Best Money schedule for document migration:

It is not only electronic files that might need to be imported; sometimes there are paper-based files that need to be scanned and imported. This needs to be planned into the schedule too.

Implementing document migration

So we have a document migration strategy and we have a plan. Now let's see a couple of examples of how we can implement document migration in practice.

Using the Alfresco bulk filesystem import tool

A tool such as the Alfresco bulk filesystem import tool is probably what most people will use, and it is also the preferred import tool in the Best Money project. So let's start looking at how this tool is used. It is delivered as an AMP and is installed by dropping the AMP into the ALFRESCO_HOME/amps directory and restarting Alfresco. However, we prefer to install it manually with the Module Management Tool (MMT), as we have other AMPs, such as the Best Money AMP, that have been installed with the MMT tool. Copy the alfresco-bulk-filesystem-import-0.8.amp (or newest version) file into the ALFRESCO_HOME/bin directory. Stop Alfresco and then install the AMP as follows:

C:\Alfresco3.3\bin>java -jar alfresco-mmt.jar install alfresco-bulk-filesystem-import-0.8.amp C:\Alfresco3.3\tomcat\webapps\alfresco.war -verbose

Running the Alfresco bulk import tool

Remove the ALFRESCO_HOME/tomcat/webapps/alfresco directory, so the files contained in the new AMP are recognized when the updated WAR file is exploded on restart of Alfresco. The tool provides a UI form in Alfresco Explorer that makes it very simple to do the import. It can be accessed via the http://localhost:8080/alfresco/service/bulk/import/filesystem URL, which will display the following form (you will be prompted to log in first, so make sure to log in with a user that has access to the spaces where you want to upload the content):

Here, the Import directory field is mandatory and specifies the absolute path to the filesystem directory to load the documents and folders from. It should be specified in an OS-specific format, for example C:\docmigration\meetings or /docmigration/meetings.
Note that this directory must be locally accessible to the server where the Alfresco instance is running. It must either be a local filesystem or a locally mounted remote filesystem. The Target space field is also mandatory and specifies the target space/folder to load the documents and folders into. It is specified as a path starting with /Company Home. The separator character is Unix-style (that is, "/"), regardless of the platform Alfresco is running on. This field includes an AJAX auto-suggest feature, so you may type any part of the target space name, and an AJAX search will be performed to find and display matching items. The Update existing files checkbox specifies whether to update files that already exist in the repository (checked) or skip them (unchecked). The import is started by clicking on the Initiate Bulk Import button. Once an import has been initiated, a status Web Script will report on the status of the background import process. This Web Script automatically refreshes every 10 seconds until the import process completes. For the Best Money project, we have set up a staging area for the document migration where users can add documents to be imported into Alfresco. Let's import the Meetings folder, which looks as follows in the staging area:

One Committee meeting has been added, and that is what we will import to test the tool. Fill out the Bulk Import form as follows, then click on the Initiate Bulk Import button to start the import. The form should show the progress of the import, and when finished we should see something like this:

In this case, the import took 9.5 seconds, and 31 documents (totaling 28 MB) were imported and five folders created. If we look at the document nodes, we will see that they all have the bmc:document type and the bmc:document_data aspect applied. This is accomplished by a type rule added to the Meetings folder. All documents also have the cm:versionable aspect applied, via the "Apply Versioning" rule that is added to the Meetings folder.
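If you want to script the migration rather than drive it through the Explorer form, the same web script endpoint can be posted to from the command line. The following is a minimal sketch, not the tool's documented interface: the form field names (sourceDirectory, targetPath, replaceExisting) and the admin:admin credentials are assumptions for illustration, so check the HTML source of the form for the exact names your version expects:

# kick off a bulk import without using the Explorer form
# (field names below are assumptions; verify against the form's HTML)
curl -u admin:admin \
     --data-urlencode "sourceDirectory=/docmigration/meetings" \
     --data-urlencode "targetPath=/Company Home/Meetings" \
     --data-urlencode "replaceExisting=true" \
     http://localhost:8080/alfresco/service/bulk/import/filesystem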
Alfresco 3 Business Solutions: Document Migration Strategies

Packt
15 Feb 2011
13 min read
The Best Money CMS project is now in full swing, and we have the folder structure with business rules designed and implemented and the domain content model created. It is now time to start importing any existing documents into the Alfresco repository. Most companies that implement an ECM system, and Best Money is no exception, will have a substantial number of files that they want to import, classify, and make searchable in the new CMS system. The planning and preparation for the document migration actually has to start a lot earlier, as there are a lot of things that need to be prepared:

Who is going to manage sorting out the files that should be migrated?
What is the strategy and process for the migration?
What sort of classification should be done during the import?
What filesystem metadata needs to be preserved during the import?
Do we need to write any temporary scripts or rules just for the import?

Document migration strategies

The first thing we need to do is figure out how the document migration is actually going to be done. There are several ways of making this happen. We will discuss a couple of them, such as going via the CIFS interface and via tools. There are also some general strategies that apply to any migration method.

General migration strategies

There are some common things that need to be done no matter which import method is used, such as setting up a document migration staging area.

Document staging area

The end users need to be able to copy or move the documents they want to migrate to a kind of staging area that mirrors the new folder structure we have set up in Alfresco. The best way to set up the staging area is to copy the structure out of Alfresco via CIFS (a command-line sketch of this follows at the end of this section). When this is done, the end users can start copying files to the staging area. However, it is a good idea to train the users in the new folder structure before they start copying documents to it. We should talk to them about folder structure changes, what rules and naming conventions have been set up, the idea behind it, and why it should be followed. If we do not train the end users in the new folder structure, they will not honor it, the old structure will get mixed up with the new structure during document migration, and this is not something that we want. We planned and implemented the new structure for both today's requirements and future requirements, and we do not want it broken before we even start using the system. The end users will typically work with the staging area over some time; it is good if they get a couple of weeks for this. It will take them time to think about which documents they want to migrate and whether any re-organization is needed. Some documents might also need to be renamed.
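Here is the command-line sketch promised above, showing one way to copy the structure out over CIFS from a Linux client. The server name, share name, credentials, and paths are assumptions for illustration, and the share is assumed to map to Company Home:

# mount the Alfresco CIFS share and copy the new folder structure
# out to a local staging area (host/share/credentials are illustrative)
sudo mkdir -p /mnt/alfresco /docmigration/staging
sudo mount -t cifs //alfresco-server/Alfresco /mnt/alfresco \
     -o user=admin,password=admin
cp -r /mnt/alfresco/Meetings /docmigration/staging/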
Preserving Modified Date on imported documents

We know that Best Money wants all the modified dates on their files to be preserved during import, as they have a review process that depends on them. This means that we have to use an import method that can preserve the Modified Date of the network drive files when they are merged into the Alfresco repository. The CIFS interface cannot be used for this, as it sets the Modified Date to the current date. There are a couple of methods that can be used to import content into the repository and preserve the Modified Date:

Create an ACP file via an external tool and then import it
Custom code the import with the Foundation API and turn off the Audit Aspect before the import
Use an import tool that also has the possibility to turn off the Audit Aspect

At the time of writing (using Alfresco 3.3.3 Enterprise and Alfresco Community 3.4a), there is no easy way to import files and preserve the Modified Date. When a file is added via Alfresco Explorer, Alfresco Share, FTP, CIFS, the Foundation API, the REST API, and so on, the Created Date and Modified Date are set to "now", so we lose all the Modified Date data that was set on the files on the network drive. The Created Date, Creator, Modified Date, Modifier, and Access Date are all so-called audit properties that are automatically managed by Alfresco if a node has the cm:auditable aspect applied. If we try to set these properties during an import via one of the APIs, it will not succeed. Most people want to import files via CIFS or via an external import tool. Alfresco is working towards supporting the preservation of dates when using both of these methods for import. Currently, there is a solution for adding files via the Foundation API while preserving the dates, which can be used by custom tools. The Alfresco product itself also needs this functionality in, for example, the Transfer Service Receiver, so that dates can be preserved when it receives files. The new solution that enables the use of the Foundation API to set auditable properties manually was implemented in version 3.3.2 Enterprise and 3.4a Community. To be able to set audit properties, do the following:

Inject the policy behavior filter in the class that should do the property update:

<property name="behaviourFilter" ref="policyBehaviourFilter"/>

Then, in the class, turn off the audit aspect before the update. It has to be done inside a new transaction, as in the following example:

RetryingTransactionCallback<Object> txnWork = new RetryingTransactionCallback<Object>() {
    public Object execute() throws Exception {
        // disable the auditable aspect's behaviour so manual date updates stick
        behaviourFilter.disableBehaviour(ContentModel.ASPECT_AUDITABLE);

Then, in the same transaction, update the Created or Modified Date:

        nodeService.setProperty(nodeRef, ContentModel.PROP_MODIFIED, someDate);
        ...
    }
};

With JDK 6, the Modified Date is the only file metadata that we can access, so no other file metadata is available via the CIFS interface. If we use JDK 7, there is the new NIO 2 interface that gives access to more metadata. So, if we are implementing an import tool that creates an ACP file, we could use JDK 7 and preserve the Created Date, Modified Date, and potentially other metadata as well.

Post-migration processing scripts

When the document migration has been completed, we might want to do further processing of the documents, such as setting extra metadata. This is specifically needed when documents are imported into Alfresco via the CIFS interface, which does not allow any custom metadata to be set during the import.
There might also be situations, such as in the case of Best Money, where many of the imported documents have older filenames (that is, filenames following an older naming convention) containing important metadata that should be extracted and applied to the new document nodes. For post-migration processing, JavaScript is a convenient tool to use. We can easily define Lucene queries for the nodes we want to process, as the rules have applied domain document types such as Meeting to the imported documents, and we can use regular expressions to match and extract the metadata we want to apply to the nodes.

Search restrictions when running post-migration scripts

What we have to keep in mind when running these post-migration scripts is that the repository now contains a lot of content, so each query we run might well return more than 1,000 rows, and 1,000 rows is the default maximum that a search will return. To change this limit and allow, for example, 5,000 rows to be returned, we have to change the permission check configuration (Alfresco checks the permissions of each node being accessed, so the user running the query does not get back content that he or she should not have access to). Open the alfresco-global.properties file located in the alfresco/tomcat/shared/classes directory and add the following properties:

# The maximum time spent pruning results (was 10000)
system.acl.maxPermissionCheckTimeMillis=100000
# The maximum number of results to perform permission checks against (was 1000)
system.acl.maxPermissionChecks=5000

Unwanted Modified Date updates when running scripts

So we have turned off the audit feature during document migration, or made custom code changes to Alfresco, to get the documents' Modified Date preserved during import. Then we have turned auditing back on so the system behaves in the way the users expect. The last thing we want now is for all those preserved modified dates to be set to the current date when we update metadata, and this is exactly what will happen if we do not run the post-migration scripts with the audit feature turned off. This is important to keep in mind, unless you want to start all over again with the document migration.

Versioning problems when running post-migration scripts

Another thing that can cause problems is when versioning is turned on for documents that we are updating in the post-migration scripts. We may see the following error:

org.alfresco.service.cmr.version.VersionServiceException: 07120018 The current implementation of the version service does not support the creation of branches.

By default, new versions will be created even when we just update properties/metadata. This can cause errors such as the preceding one, and we might not even be able to check in and check out the document. To prevent this error from popping up, and to turn off versioning during property updates once and for all, we can set the following property at the same time as we set the other domain metadata in the scripts:

legacyContentFile.properties["cm:autoVersionOnUpdateProps"] = false;

Setting this property to false effectively turns off versioning during any property/metadata update for the document. Another thing that can be a problem is if folders have been set up as versionable by mistake. The most likely reason for this is that we forgot to set up the versioning rule to apply only to cm:content (and not to "All Items").
Folders in the workspace://SpacesStore store do not support versioning. The WCM system comes with an AVM store that supports advanced folder versioning and change sets (note that the WCM system can also store its data in the Workspace store). So we need to update the versioning rule to apply only to content, and remove the versionable aspect from all folders that have it applied, before we can update any content in these folders. Here is a script that removes the cm:versionable aspect from any folder having it applied:

var store = "workspace://SpacesStore";
var query = "PATH:\"/app:company_home//*\" AND TYPE:\"cm:folder\" AND ASPECT:\"cm:versionable\"";
var versionableFolders = search.luceneSearch(store, query);
for each (versionableFolder in versionableFolders) {
    versionableFolder.removeAspect("cm:versionable");
    logger.log("Removed versionable aspect from folder: " + versionableFolder.name);
}
logger.log("Removed versionable aspect from " + versionableFolders.length + " folders");

Post-migration script to extract legacy meeting metadata

Best Money has a lot of documents that they are migrating to the Alfresco repository. Many of the documents have filenames following a certain naming convention. This is the case for the imported meeting documents. The naming convention for the old imported documents is not exactly the same as the new meeting naming convention, so we have to write the regular expression a little differently. An example of a filename with the new naming convention is 10En-FM.02_3_annex1.doc, and the same filename with the old naming convention is 10Eng-FM.02_3_annex1.doc. The difference is that the old naming convention does not specify a two-character code for the language, but instead one from a list that looks like this: Arabic, Chinese, Eng|eng, F|Fr, G|Ger, Indonesian, Jpn, Port, Rus|Russian, Sp, Sw, Tagalog, Turkish.
What we are interested in extracting is the language and the department code, and the following script will do that with a regular expression:

// Regular Expression Definition
var re = new RegExp("^\\d{2}(Arabic|Chinese|Eng|eng|F|Fr|G|Ger|" +
    "Indonesian|Ital|Jpn|Port|Rus|Russian|Sp|Sw|Tagalog|Turkish)-(A|" +
    "HR|FM|FS|FU|IT|M|L).*");
var store = "workspace://SpacesStore";
var query = "+PATH:\"/app:company_home/cm:Meetings//*\" +TYPE:\"cm:content\"";
var legacyContentFiles = search.luceneSearch(store, query);

for each (legacyContentFile in legacyContentFiles) {
    if (re.test(legacyContentFile.name) == true) {
        var language = getLanguageCode(RegExp.$1);
        var department = RegExp.$2;
        logger.log("Extracted and updated metadata (language=" + language +
            ")(department=" + department + ") for file: " + legacyContentFile.name);
        if (legacyContentFile.hasAspect("bmc:document_data")) {
            // Set some metadata extracted from file name
            legacyContentFile.properties["bmc:language"] = language;
            legacyContentFile.properties["bmc:department"] = department;
            // Make sure versioning is not enabled for property updates
            legacyContentFile.properties["cm:autoVersionOnUpdateProps"] = false;
            legacyContentFile.save();
        } else {
            logger.log("Aspect bmc:document_data is not set for document " +
                legacyContentFile.name);
        }
    } else {
        logger.log("Did NOT extract metadata from file: " + legacyContentFile.name);
    }
}

/**
 * Convert from a legacy language code to the new 2-character language code
 *
 * @param parsedLanguage legacy language code
 */
function getLanguageCode(parsedLanguage) {
    if (parsedLanguage == "Arabic") {
        return "Ar";
    } else if (parsedLanguage == "Chinese") {
        return "Ch";
    } else if (parsedLanguage == "Eng" || parsedLanguage == "eng") {
        return "En";
    } else if (parsedLanguage == "F" || parsedLanguage == "Fr") {
        return "Fr";
    } else if (parsedLanguage == "G" || parsedLanguage == "Ger") {
        return "Ge";
    } else if (parsedLanguage == "Indonesian") {
        return "In";
    } else if (parsedLanguage == "Ital") {
        return "";
    } else if (parsedLanguage == "Jpn") {
        return "Jp";
    } else if (parsedLanguage == "Port") {
        return "Po";
    } else if (parsedLanguage == "Rus" || parsedLanguage == "Russian") {
        return "Ru";
    } else if (parsedLanguage == "Sp") {
        return "Sp";
    } else if (parsedLanguage == "Sw") {
        return "Sw";
    } else if (parsedLanguage == "Tagalog") {
        return "Ta";
    } else if (parsedLanguage == "Turkish") {
        return "Tu";
    } else {
        logger.log("Invalid parsed language code: " + parsedLanguage);
        return "";
    }
}

This script can be run from any folder, and it will search for all documents under the /Company Home/Meetings folder or any of its subfolders. All the documents returned by the search are looped through and matched against the regular expression. The regular expression defines two groups: one for the language code and one for the department. After a document has been matched, it is possible to back-reference the values matched in the groups by using RegExp.$1 and RegExp.$2. When the language code and department code properties are set, we also set the cm:autoVersionOnUpdateProps property, so we do not get any problems with versioning during the update.
FAQs on WordPress 3

Packt
15 Feb 2011
10 min read
WordPress 3 Complete Create your own complete website or blog from scratch with WordPress Learn everything you need for creating your own feature-rich website or blog from scratch Clear and practical explanations of all aspects of WordPress In-depth coverage of installation, themes, plugins, and syndication Explore WordPress as a fully functional content management system Clear, easy-to-follow, concise; rich with examples and screenshots

Q: What is WordPress?
A: WordPress is an open source blog engine. Open source means that nobody owns it, everybody works on it, and anyone can contribute to it. Blog engine means a software application that can run a blog. It's a piece of software that lives on the web server and makes it easy for you to add and edit posts, themes, comments, and all of your other content. More expansively, WordPress can be called a publishing platform because it is by no means restricted to blogging.

Q: Why choose WordPress?
A: WordPress is not the only publishing platform out there, but it has an awful lot to recommend it:
A long time in refining
Active in development
Large community of contributors
Amazingly extendable
Compliant with W3C standards
Multilanguage capable

Q: What are the system requirements for WordPress?
A: The minimum system requirement for WordPress is a web server with the following software installed:
PHP Version 4.3 or greater
MySQL Version 4.1.2 or greater
Although Apache and Nginx are highly recommended by the WordPress developers, any server running PHP and MySQL will do.

Q: What are the new features since 2.7?
A: If you're new to WordPress, this list may not mean a whole lot to you, but if you're familiar with WordPress and have been using it for a long time, you'll find this list quite enlightening. The following are the new features:
Added support for "include" and "exclude" to [gallery]
Changed the Remove link on widgets to Delete, because it doesn't just remove the widget, it deletes the settings for that widget instance
Syntax highlighting and function lookup built into the plugin and theme editors
Improved revision comparison user interface
Lots of new template files for custom taxonomies and custom post types, among others
Browsing the theme directory and installing themes from the admin
Allowing the dashboard widgets to be arranged in up to four columns
Choosing a username and password during installation rather than using "admin"

Q: How do I upgrade an existing WordPress installation to the latest version?
A: If you're upgrading from a very early version of WordPress to a very recent one, you should do it in steps. That is, if you find yourself in a situation where you have to upgrade across a large span of version numbers, for example from 2.2 to 3.0.3, I highly recommend doing it in stages: do the complete upgrade from 2.2 to 2.3.3, then from 2.3.3 to 2.5, then from 2.5 to 2.7, and finally from 2.7 to 3.0.3. When doing this, you can usually do the full content and database backup just once, but verify between versions that the gradual upgrades are going well before you move on to the next one. You can download previous stable versions of WordPress from this page: http://wordpress.org/download/release-archive/. Of course, another option would be to simply do a new installation of the latest version of WordPress and then move your previous content into it, and I encourage you to consider this course of action. However, sometimes content is harder to move than an upgrade is to perform; it will be up to you to decide, given your specific server situation and your comfort level with the choices.
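Whichever route you take, take the full content and database backup first. The following is a minimal command-line sketch; the database name, database user, and document root are assumptions for illustration:

# back up the WordPress database and files before upgrading
# (database name, user, and docroot below are illustrative)
mysqldump -u wpuser -p wordpress_db > wordpress-db-$(date +%F).sql
tar czf wordpress-files-$(date +%F).tar.gz /var/www/wordpress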
Q: What is the WordPress Codex?
A: The WordPress Codex is the central repository of all the information the official WordPress team has published to help people work with WordPress. The Codex has some basic tutorials for getting started with WordPress, such as a detailed step-by-step discussion of installation, lists of every template tag and hook, and a lot more.

Q: Is WordPress available for download?
A: WordPress is available in easily downloadable formats from its website, http://wordpress.org/download/. WordPress is a free, open source application, and is released under the GNU General Public License (GPL). Take a look at the following screenshot, in which the download links are available on the right side:

The .zip file is shown as a big blue button because that will be the most useful format for most people. If you are using Windows, Mac, or Linux operating systems, your computer will be able to unzip that downloaded file automatically. (The .tar.gz file is provided because some Unix users prefer it.)

Q: What is the WordPress Admin Panel?
A: WordPress installs a powerful and flexible administration area where you can manage all of your website content, and do much more. You can always get to the WP Admin by going to this URL: http://yoursite.com/wp-admin/. The first time here, you'll be redirected to the login page. In the future, WordPress will check whether you're already logged in and, if so, you'll skip the login page. Following is the login page:

Q: How do I create a post on the blog that I created?
A: To create a post, just click on New Post in the top menu. You'll be taken to the following page:

Every post should have, at minimum, a title and some content. So go ahead and write in some text for those two things. When you are happy with it, click on the Publish button.

Q: What do I do if I have lost my password?
A: If you have lost your password and can't get into your WP Admin panel, you can easily retrieve it by clicking on the Lost your password? link on the login page. A newly generated password will be e-mailed to you at the e-mail address you gave during the WordPress installation. This is why you need to be sure that you enter a valid e-mail address; otherwise, you will not be able to retrieve your password.

Q: What do you mean by Categories and Tags?
A: Categories and tags are two types of information that you can add to a blog post. We use them to organize the information in your blog by topic and content (rather than just by, say, date), and to help visitors find what they are looking for on your blog.

Q: Which editor is used for writing and editing posts?
A: WordPress comes with a Visual editor, otherwise known as a WYSIWYG editor. This is the default editor for typing and editing your posts. If you're comfortable with HTML, you may prefer to write and edit your posts using the HTML editor, which is particularly useful if you want to add special content or styling. To switch from the rich text editor to the HTML editor, click on the HTML tab next to the Visual tab at the top of the content box:

You'll see your post in all its raw HTML glory, and you'll get a new set of buttons that lets you quickly bold and italicize text, as well as add link code, image code, and so on. You can make changes and swap back and forth between the tabs to see the result.
Q: What are Timestamps and how are they useful?
A: WordPress will also let you alter the timestamp of your post. This is useful if you are writing a post today that you wish you'd published yesterday, or if you're writing a post in advance and don't want it to show up until the right day. By default, the timestamp will be set to the moment you publish your post. To change it, find the Publish box, click on the Edit link (next to the calendar icon and Publish immediately), and fields will show up with the current date and time for you to change:

Change the details, click on the OK button, and then click on Publish to publish your post (or save a draft).

Q: Is there any way in which I can protect my content?
A: WordPress gives you the option to hide posts. You can hide a post from everyone but yourself by marking it Private, or you can hide it from everyone but the people with whom you share a password by marking it Password protected. To implement this, look at the Publish box at the upper right of the Edit Post page. If you click on the Edit link next to Visibility: Public, a few options will appear:

If you click on the Password protected radio button, you'll get a box where you can type a password. Visitors to your blog will see the post title along with a note that they have to type in a password to read the post. If you click on the Private radio button, the post will not show up on the blog at all to any viewers, unless you are the viewer and you are logged in. If you leave the post Public and check the Stick this post to the front page checkbox, this post will be the first post on the front page, regardless of its publication date. Be sure to click on the OK button if you make any changes.

Q: What is a Widget?
A: A widget is a small box of content, dynamic or not, that shows up somewhere on a widget-enabled site. Often, that location is in the sidebar of a blog, but that's not a rule. A widget area can be anywhere a theme developer wants it to be. Common widgets contain:
A monthly archive of blog posts
Recent comments posted on the blog
A clickable list of categories
A tag cloud
A search box, and so on

Q: How do I add an Image gallery to my post?
A: You can add an image gallery to any page or post in your website using WordPress's built-in Image Gallery functionality. There are just three simple steps:
Choose a post or page for your image gallery.
Upload the images you want in that gallery.
Add a plugin, such as a lightbox plugin, that will streamline your gallery, and save it.
A lightbox effect is when the existing page content fades a little and a new item appears on top of the existing page. We can easily add the same effect to your galleries by adding a plugin. There are a number of lightbox plugins available, but the one I like these days uses jQuery Colorbox. Find this plugin, either through the WP Admin or in the Plugins Repository (http://wordpress.org/extend/plugins/jquery-colorbox/), and install it.

Summary
In this article we covered some of the most frequently asked questions on WordPress 3.

Further resources on this subject:
WordPress 2.8 Themes Cookbook [Book]
Getting Started with WordPress 3 [Article]
How to Create an Image Gallery in WordPress 3 [Article]
Performing Setup Tasks in the WordPress Admin Panel [Article]