
How-To Tutorials - Web Development


Ruby with MongoDB for Web Development

Packt
23 Jul 2012
13 min read
Creating documents

Let's first see how we can create documents in MongoDB. As we have briefly seen, MongoDB deals with collections and documents instead of tables and rows.

Time for action – creating our first document

Suppose we want to create a book object having the following schema:

book = {
  name: "Oliver Twist",
  author: "Charles Dickens",
  publisher: "Dover Publications",
  published_on: "December 30, 2002",
  category: ['Classics', 'Drama']
}

On the Mongo CLI, we can add this book object to our collection using the following command:

> db.books.insert(book)

Suppose we also add the shelf collection (the floor, the row, and the column the shelf is in, the book indexes it maintains, and so on are part of the shelf object), which has the following structure:

shelf = {
  name : 'Fiction',
  location : { row : 10, column : 3 },
  floor : 1,
  lex : { start : 'O', end : 'P' }
}

Remember, it's quite possible that a few years down the line some shelf instances will become obsolete, and we might still want to maintain a record of them. Maybe we could have another shelf instance containing only books that are to be recycled or donated. What can we do? We can approach this as follows:

The SQL way: Add additional columns to the table and ensure that there is a default value set in them. This adds a lot of redundancy to the data, reduces performance a little, and considerably increases the storage. Sad but true!

The NoSQL way: Add the additional fields whenever you want. The following are MongoDB's schemaless object model instances:

> db.book.shelf.find()
{ "_id" : ObjectId("4e81e0c3eeef2ac76347a01c"), "name" : "Fiction", "location" : { "row" : 10, "column" : 3 }, "floor" : 1 }
{ "_id" : ObjectId("4e81e0fdeeef2ac76347a01d"), "name" : "Romance", "location" : { "row" : 8, "column" : 5 }, "state" : "window broken", "comments" : "keep away from children" }

What just happened?

You will notice that the second object has two extra fields, namely comments and state. When fetching objects, it's fine if you get extra data; that is the beauty of NoSQL. When the first document is fetched (the one with the name Fiction), it will not contain the state and comments fields, but the second document (the one with the name Romance) will have them.

Worried about what will happen if we try to access non-existent data in an object, for example, accessing comments on the first object fetched? This can be resolved logically: we can check for the existence of the key, default to a value when it's not there, or simply ignore its absence. This is typically done in code anyway when we access objects.

Notice that when the schema changed we did not have to add fields to every object with default values, as we would when using a SQL database. So there is no redundant information in our database. This keeps storage minimal, and in turn the object information fetched is concise. So there was no redundancy and no compromise on storage or performance.
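Since the Mongo CLI is a JavaScript shell, the "check the key, default, or ignore" options described above can be sketched directly. A minimal sketch (findOne and the $exists operator are standard shell features; the variable name is ours):

> var shelf = db.book.shelf.findOne({ name : 'Fiction' })
> shelf.comments                     // undefined – the field simply isn't there
> shelf.comments || 'no comments'    // default to a value when the key is absent
> db.book.shelf.find({ state : { $exists : true } })   // only shelves that have a state field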
But wait! There's more.

NoSQL scores over SQL databases

The way many-to-many relations are managed shows how MongoDB can do things that simply cannot be done in a relational database. The following is an example:

Each book can have reviews and votes given by customers. We should be able to see these reviews and votes, and also maintain a list of top-voted books. If we had to do this in a relational database, it would look something like the relationship diagram shown as follows: (get scared now!)

The vote_count and review_count fields are inside the books table and would need to be updated every time a user votes a book up or down or writes a review. So, to fetch a book along with its votes and reviews, we would need to fire three queries:

SELECT * from books where id = 3;
SELECT * from reviews where book_id = 3;
SELECT * from votes where book_id = 3;

We could also use a join for this:

SELECT * FROM books
JOIN reviews ON reviews.book_id = books.id
JOIN votes ON votes.book_id = books.id;

In MongoDB, we can do this directly using embedded documents or related documents.

Using MongoDB embedded documents

Embedded documents, as the name suggests, are documents that are embedded in other documents. This is a feature of MongoDB that has no equivalent in relational databases: ever heard of a table embedded inside another table? Instead of four tables and a complex many-to-many relationship, we can say that reviews and votes are part of a book. So, when we fetch a book, the reviews and the votes automatically come along with it.

Embedded documents are analogous to chapters inside a book: chapters cannot be read unless you open the book. Similarly, embedded documents cannot be accessed except through the parent document. For the UML savvy, embedded documents are similar to the contains or composition relationship.

Time for action – embedding reviews and votes

In MongoDB, the embedded object physically resides inside the parent. So if we had to maintain reviews and votes, we could model the object as follows:

book = {
  name: "Oliver Twist",
  reviews: [
    { user: "Gautam", comment: "Very interesting read" },
    { user: "Harry", comment: "Who is Oliver Twist?" }
  ],
  votes: [ "Gautam", "Tom", "Dick" ]
}

What just happened?

We now have reviews and votes inside the book. They cannot exist on their own. Did you notice that they look similar to JSON hashes and arrays? Indeed, they are an array of hashes. Embedded documents are just like hashes inside another object. There is a subtle difference between hashes and embedded objects, as we shall see later on in the book.

Have a go hero – adding more embedded objects to the book

Try adding more embedded objects, such as orders, inside the book document. It works!

order = {
  name: "Toby Jones",
  type: "lease",
  units: 1,
  cost: 40
}

Fetching embedded objects

We can fetch a book along with its reviews and votes by executing the following commands:

> var book = db.books.findOne({name : 'Oliver Twist'})
> book.reviews.length
2
> book.votes.length
3
> book.reviews
[
  { user: "Gautam", comment: "Very interesting read" },
  { user: "Harry", comment: "Who is Oliver Twist?" }
]
> book.votes
[ "Gautam", "Tom", "Dick" ]

This does indeed look simple, doesn't it? By fetching a single object, we get the reviews and votes along with the data.

Use embedded documents only if you really have to! Embedded documents increase the size of the object, so a large number of embedded documents can adversely impact performance: even to get just the name of the book, the reviews and the votes are fetched.
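Growing an embedded array in place is equally direct. A minimal sketch using the standard $push update operator (the reviewer "Alice" is purely illustrative):

db.books.update(
  { name : 'Oliver Twist' },
  { $push : { reviews : { user : 'Alice', comment : 'A classic!' },
              votes   : 'Alice' } }
)

This appends one review and one vote without rewriting the rest of the document.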
Using MongoDB document relationships

Just like we have embedded documents, we can also set up relationships between different documents.

Time for action – creating document relations

The following is another way to create the same relationship between books, users, reviews, and votes. This is more like the SQL way.

book: {
  _id: ObjectId("4e81b95ffed0eb0c23000002"),
  name: "Oliver Twist",
  author: "Charles Dickens",
  publisher: "Dover Publications",
  published_on: "December 30, 2002",
  category: ['Classics', 'Drama']
}

Every document that is created in MongoDB has an object ID associated with it; we shall learn more about object IDs in the next chapter. By using these object IDs we can easily identify different documents. They can be considered as primary keys. So, we can also create the users, reviews, and votes collections as follows:

users: [
  { _id: ObjectId("8d83b612fed0eb0bee000702"), name: "Gautam" },
  { _id: ObjectId("ab93b612fed0eb0bee000883"), name: "Harry" }
]

reviews: [
  { _id: ObjectId("5e85b612fed0eb0bee000001"),
    user_id: ObjectId("8d83b612fed0eb0bee000702"),
    book_id: ObjectId("4e81b95ffed0eb0c23000002"),
    comment: "Very interesting read" },
  { _id: ObjectId("4585b612fed0eb0bee000003"),
    user_id: ObjectId("ab93b612fed0eb0bee000883"),
    book_id: ObjectId("4e81b95ffed0eb0c23000002"),
    comment: "Who is Oliver Twist?" }
]

votes: [
  { _id: ObjectId("6e95b612fed0eb0bee000123"),
    user_id: ObjectId("8d83b612fed0eb0bee000702"),
    book_id: ObjectId("4e81b95ffed0eb0c23000002") },
  { _id: ObjectId("4585b612fed0eb0bee000003"),
    user_id: ObjectId("ab93b612fed0eb0bee000883"),
    book_id: ObjectId("4e81b95ffed0eb0c23000002") }
]

What just happened?

Hmm!! Not very interesting, is it? It doesn't even seem right. That's because it isn't the right choice in this context. It's very important to know how to choose between nesting documents and relating them. In your object model, if you will never search by the nested document (that is, look up the parent from the child), embed it. And if you are not sure whether you will need to search by an embedded document, don't worry too much — it does not mean that you cannot search among embedded objects. You can use Map/Reduce to gather the information.

Comparing MongoDB versus SQL syntax

This is a good time to sit back and evaluate the similarities and dissimilarities between the MongoDB syntax and the SQL syntax. Let's map them together:

SQL:     SELECT * FROM books
MongoDB: db.books.find()

SQL:     SELECT * FROM books WHERE id = 3;
MongoDB: db.books.find( { id : 3 } )

SQL:     SELECT * FROM books WHERE name LIKE 'Oliver%'
MongoDB: db.books.find( { name : /^Oliver/ } )

SQL:     SELECT * FROM books WHERE name LIKE '%Oliver%'
MongoDB: db.books.find( { name : /Oliver/ } )

SQL:     SELECT * FROM books WHERE publisher = 'Dover Publications' AND published_date = "2011-08-01"
MongoDB: db.books.find( { publisher : "Dover Publications", published_date : ISODate("2011-08-01") } )

SQL:     SELECT * FROM books WHERE published_date > "2011-08-01"
MongoDB: db.books.find( { published_date : { $gt : ISODate("2011-08-01") } } )

SQL:     SELECT name FROM books ORDER BY published_date
MongoDB: db.books.find( {}, { name : 1 } ).sort( { published_date : 1 } )

SQL:     SELECT name FROM books ORDER BY published_date DESC
MongoDB: db.books.find( {}, { name : 1 } ).sort( { published_date : -1 } )

SQL:     SELECT votes.name FROM books JOIN votes ON votes.book_id = books.id
MongoDB: db.books.find( { votes : { $exists : 1 } }, { "votes.name" : 1 } )

Some more notable comparisons between MongoDB and relational databases are:

- MongoDB does not support joins. Instead it fires multiple queries or uses Map/Reduce. We shall soon see why the NoSQL faction does not favor joins.
- SQL has stored procedures; MongoDB supports JavaScript functions.
- MongoDB has indexes, similar to SQL.
- MongoDB also supports Map/Reduce functionality.
- MongoDB supports atomic updates, like SQL databases.
- Embedded or related objects are sometimes used instead of a SQL join.
- MongoDB collections are analogous to SQL tables.
- MongoDB documents are analogous to SQL rows.
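With related documents, the missing join from the table above becomes application-side lookups. A minimal sketch of walking the references by hand, using the collections and ObjectIds defined previously:

> var book = db.books.findOne({ name : 'Oliver Twist' })
> var review = db.reviews.findOne({ book_id : book._id })
> db.users.findOne({ _id : review.user_id })
{ "_id" : ObjectId("8d83b612fed0eb0bee000702"), "name" : "Gautam" }

Three queries, just like the SQL example earlier — which is exactly why the next section looks at Map/Reduce.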
Using Map/Reduce instead of join

We have seen this mentioned a few times earlier — it's worth jumping into it, at least briefly.

Map/Reduce is a concept that was introduced by Google in 2004. It's a way of processing tasks in a distributed fashion: we "map" tasks to workers and then "reduce" the results.

Understanding functional programming

Functional programming is a programming paradigm that has its roots in lambda calculus. If that sounds intimidating, remember that JavaScript could be considered a functional language. The following is a snippet of functional programming:

$(document).ready( function () {
  $('#element').click( function () {
    // do something here
  });
  $('#element2').change( function () {
    // do something here
  });
});

We can have functions inside functions. Higher-level languages (such as Java and Ruby) support anonymous functions and closures but remain procedural at heart. Functional programs rely on the results of one function being chained into other functions.

Building the map function

The map function processes a chunk of data. Data that is fed to this function could come from a distributed filesystem, multiple databases, the Internet, or even a mathematical computation series!

function map(void) -> void

The map function "emits" information that is collected by the "mystical super gigantic computer program" and fed to the reducer functions as input. MongoDB supports this paradigm, making it "all powerful" (of course I am joking, but it does indeed make MongoDB very powerful).

Time for action – writing the map function for calculating vote statistics

Let's assume we have a document structure as follows:

{
  name: "Oliver Twist",
  votes: ['Gautam', 'Harry'],
  published_on: "December 30, 2002"
}

The map function for such a structure could be as follows (we emit the number of votes, since that is what our reduce function will sum up):

function() {
  emit( this.name, { votes : this.votes.length } );
}

What just happened?

The emit function emits the data. Notice that the data is emitted as a (key, value) structure.

Key: This is the parameter over which we want to gather information. Typically it would be some primary key, or some key that helps identify the information. For the SQL savvy, the key is typically the field you would use in the GROUP BY clause.

Value: This is a JSON object. It can have multiple values, and it is the data that is processed by the reduce function.

We can call emit more than once in the map function. This would mean we are emitting data multiple times for the same object.

Building the reduce function

The reduce functions are the consumer functions that process the information emitted from the map functions and return the results to be aggregated. For each key emitted by the map functions, a reduce function returns a consolidated result; MongoDB collects and collates these results. This makes the system of collection and processing a massively parallel one, giving MongoDB its almighty power. The reduce functions have the following signature:

function reduce(key, values_array) -> value

Time for action – writing the reduce function to process emitted information

This could be the reduce function for the previous example:

function(key, values) {
  var result = { votes: 0 };
  values.forEach(function(value) {
    result.votes += value.votes;
  });
  return result;
}

What just happened?

reduce takes an array of values, so it is important to process the whole array every time. There are various options to Map/Reduce that help us process data.
Let's analyze this function in more detail:

function(key, values) {
  var result = { votes: 0 };
  values.forEach(function(value) {
    result.votes += value.votes;
  });
  return result;
}

The variable result has a structure similar to what was emitted from the map function. This is important, as we want the results from every document in the same format. If we need to post-process the results, we can use the finalize function (more on that later). The values are always passed as an array, and it's important that we iterate over it, as there could be multiple values emitted from different map functions with the same key. So we process the whole array to make sure that we don't overwrite any results and that they are properly collated.
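As a rough sketch of how the two functions could be wired together in the shell — the mapReduce helper and its out option are standard; the vote_stats output collection name is our own choice:

> var mapFn = function() {
    emit( this.name, { votes : this.votes.length } );
  };
> var reduceFn = function(key, values) {
    var result = { votes: 0 };
    values.forEach(function(value) { result.votes += value.votes; });
    return result;
  };
> db.books.mapReduce(mapFn, reduceFn, { out : "vote_stats" })
> db.vote_stats.find()   // one result document per book name, holding its total votes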


Building a Site Directory with SharePoint Search

Packt
17 Jul 2012
8 min read
(For more resources on Microsoft SharePoint, see here.)

Site Directory options

There are two main approaches to providing a Site Directory feature:

- A central list that has to be maintained
- A search-based tool that provides the information dynamically

List-based Site Directory

With a list-based Site Directory, a list is provisioned in a central site collection, such as the root of a portal or intranet. Like all lists, site columns can be defined to help describe the site's metadata. Since the information is stored in a central list, it can easily be queried, which makes it easy to show a listing of all sites and perform filtering, sorting, and grouping, like all SharePoint lists.

It is important to consider the overall site topology within the farm. If everything of relevance is stored within a single site collection, a list-based Site Directory, accessible throughout that site collection, may be easy to implement. But as soon as you have a large number of site collections or web applications, you will no longer be able to easily use that Site Directory without creating custom solutions that can access the central content and display it on those other sites. In addition, you will need to ensure that all users have access to read from that central site and list.

Another downside to this approach is that a list-based Site Directory has to be maintained to be effective, and in many cases it is very difficult to keep up with this. It is possible to add new sites to the directory programmatically, using an event receiver or as part of a process that automates site creation. However, through a site's life cycle, changes will inevitably have to be made, and in many cases sites will be retired, archived, or deleted. While this approach tends to work well in small, centrally controlled environments, it does not work well at all in most large, distributed environments, where the number of sites is expected to be larger and the rate of change is typically more frequent.

Search-based site discovery

An alternative to the list-based Site Directory is a completely dynamic site discovery based on the search system. In this case the content is completely dynamic and requires no specific maintenance. As sites are created, updated, or removed, the changes are reflected in the index as the scheduled crawls complete. For environments with a large number of sites and a high frequency of new sites being created, this is the preferred approach. The content can be accessed throughout the environment without having to worry about site collection boundaries, and can be leveraged using out-of-the-box features.

The downside to this approach is that there is a limit to the metadata you can associate with a site. Standard metadata related to the site includes the site's name, description, URL, and to a lesser extent, the managed path used to configure the site collection. From these items you can infer keyword relevance, but there is no support for extended properties that could help correlate the site with categories, divisions, or other specific attributes.

How to leverage search

Most users are familiar with how to use the Search features to find content, but not with some of the capabilities that can help them pinpoint specific content or specific types of content. This section provides an overview of how to leverage search to return results that are related only to sites.
Content classes

SharePoint Search includes an object classification system that can be used to identify specific types of items, as shown in the next table. The classification is stored in the index as a property of the item, making it available to all queries.

Content class: STS_Site
Description: Site Collection objects

Content class: STS_Web
Description: Subsite/Web objects

Content class: STS_list_[templatename]
Description: List objects, where [templatename] is the name of the template, such as Announcements or DocumentLibrary

Content class: STS_listitem_[templatename]
Description: List Item objects, where [templatename] is the name of the template, such as Announcements or DocumentLibrary

Content class: SPSPeople
Description: User Profile objects (requires a User Profile Service Application)

The contentclass property can be included as part of an ad hoc search performed by a user, included in the search query within a customization, or, as we will see in the next section, used to provide a filter to a Search Scope.
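For example, a user can combine a content class filter with ordinary keywords straight in the search box using the standard property filter syntax (the keyword ERP is just an illustration):

- contentclass:STS_Site returns site collections only
- contentclass:STS_Web returns subsites only
- ERP contentclass:STS_Site restricts a keyword search to site collections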
Search Scopes

Search Scopes provide a way to filter down the entire search index. As the index grows and fills with potentially similar information, it can be helpful to define Search Scopes that put specific sets of rules in place to reduce the initial index that a search query is executed against. This allows you to execute a search within a specific context. The rules can be based on a specific location, specific property values, or the crawl source of the content.

Search Scopes can be defined either centrally within the Search service application by an administrator, or within a given Site Collection by a Site Collection administrator. If a scope is going to be used in multiple Site Collections, it should be defined in the Search service application. Once defined, it is available in the Search Scopes dropdown box for ad hoc queries, within custom code, or within the Search Web Parts.

Defining the Site Directory Search Scope

To support dynamic discovery of sites, we will configure a Search Scope that looks at just site collections and subsites. As we saw above, this enables us to separate the site objects from the rest of the content in the search index. This Search Scope will serve as the foundation for all of the solutions in this article. To create a custom Search Scope:

1. Navigate to the Search Service Application.
2. Click on the Search Scopes link on the Quick Launch menu under the Queries and Results heading.
3. Set the Title field to Site Directory.
4. Provide a Description.
5. Click on the OK button as shown in the following screenshot:
6. From the View Scopes page, click on the Add Rules link next to the new Search Scope.
7. For the Scope Rule Type, select the Property Query option.
8. For the Property Query, select the contentclass option.
9. Set the property value to STS_Site.
10. For the Behavior section, select the Include option.
11. From the Scope Properties page, select the New Rule link.
12. For the Scope Rule Type section, select the Property Query option.
13. For the Property Query, select the contentclass option.
14. Set the property value to STS_Web.
15. For the Behavior section, select the Include option.

The end result will be a Search Scope that includes all Site Collection and subsite entries. No user-generated content will be included in the search results of this scope. After finishing the configuration of the rules, there will be a short delay before the scope is available for use; a scheduled job needs to compile the search scope changes.

Once compiled, the View Scopes page will list the currently configured search scopes, their status, and how many items in the index match the rules within each scope.

Enabling the Search Scope on a Site Collection

Once a Search Scope has been defined, you can associate it with the Site Collection(s) you would like to use it from. Associating the Search Scope with a Site Collection allows the scope to be selected from the Scopes dropdown on applicable search forms. This can be done by a Site Collection administrator one Site Collection at a time, or it can be set via a PowerShell script on all Site Collections. To associate the search scope manually:

1. Navigate to the Site Settings page.
2. Under the Site Collection Administration section, click on the Search Scopes link.
3. In the menu, select the Display Groups action.
4. Select the Search Dropdown item.
5. You can now select the Site Directory scope for display and adjust its position within the list.
6. Click on the OK button when complete.

Testing the Site Directory Search Scope

Once the scope has been associated with the Site Collection's search settings, you will be able to select the Site Directory scope and perform a search, as shown in the following screenshot: Any matching Site Collections or subsites will be displayed. As we can see from the results shown in the next screenshot, the ERP Upgrade project site collection comes back, as well as the Project Blog subsite.
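The scope defined through the UI above can also be created from PowerShell. The following is a sketch only: the SharePoint 2010 cmdlet names (New-SPEnterpriseSearchQueryScope, New-SPEnterpriseSearchQueryScopeRule) are real, but the exact parameters are quoted from memory and the portal URL is a placeholder, so verify both against your farm before running it:

$ssa = Get-SPEnterpriseSearchServiceApplication

# Create the scope (mirrors the Title and Description entered in the UI)
$scope = New-SPEnterpriseSearchQueryScope -Name "Site Directory" `
    -Description "Site collections and subsites" `
    -SearchApplication $ssa -DisplayInAdminUI $true

# Add one Include rule per content class, as in the Add Rules steps above
foreach ($class in "STS_Site", "STS_Web") {
    New-SPEnterpriseSearchQueryScopeRule -RuleType PropertyQuery `
        -ManagedProperty contentclass -PropertyValue $class `
        -FilterBehavior Include -Url "http://portal" -Scope $scope
}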


Installing and Configuring Drupal

Packt
11 Jul 2012
7 min read
(For more resources on Drupal 7, see here.)

Installing Drupal

There are a number of different ways to install Drupal on a web server, but in this recipe we will focus on the standard, most common installation: Drupal running on an Apache server with PHP and a MySQL database. We will download the latest Drupal release and walk through all of the steps required to get it up and running.

Getting ready

Before beginning, ensure that you meet the following minimal requirements:

- Web hosting with FTP access (or file access through a control panel).
- A server running PHP 5.2.5+ (5.3+ recommended).
- A blank MySQL database and the login credentials to access it.
- register_globals set to off in the php.ini file. You may need to contact your hosting provider to do this.

How to do it...

1. The first step is to download the latest Drupal 7 release from the Drupal download page, located at http://drupal.org/project/drupal. This page displays the most recent and recommended releases for both Drupal 6 and 7. It also displays the most recent development versions, but be sure to download the recommended release (development versions are for developers who want to stay on the cutting edge).
2. When the file is downloaded, extract it and upload the files to your chosen web server document root directory on the server. This may take some time.
3. Configure your web server document root and server name (usually through a vhost directive).
4. When the upload is complete, open your browser and type the server name configured in the previous step into the address bar to begin the installation wizard. Select the Standard option and then select Save and continue.
5. The next screen is the language selection screen; there should only be one language available at this point. Ensure that English is selected before proceeding.
6. Following a requirements check, you will arrive at the database settings page. Enter your database name, username, and password in the required fields. Unless your database details have been supplied with a specific host name and port, you should leave the advanced options as they are and continue.
7. You will now see the Site configuration page. Under Site information, enter the name you would like to appear as the site's name. For Site e-mail address, enter an e-mail address. Under the SITE MAINTENANCE ACCOUNT box, enter a username for the admin user (also known as user 1), followed by an e-mail address and password.
8. In the Server settings box, select your country from the drop-down, followed by your local time zone. Finally, in the Update notification box, ensure that both options are selected.
9. Click on Save and continue to complete the installation. You will be presented with the congratulations page, with a link to your new site.

How it works...

On the server requirements page, Drupal carries out a number of tests. It is a requirement that PHP register_globals is set to off or disabled. register_globals is a feature of PHP that allows global variables to be set from the contents of the Environment, GET, POST, Cookie, and Server variables. It can be a major security risk, as it enables potential attackers to overwrite important variables and gain unauthorized access. The Configure site page is where you specify the site name and the e-mail addresses for the site and the admin user.
The admin e-mail address will be used to contact the administrator with notifications from the site, and the site e-mail address is used as the originating e-mail address when the site sends e-mails to users. You can change these settings later on the Site information page in the Configuration section. It's important to select the options to receive site notifications so that you are aware when software updates are available for your site's core and contrib modules; important security updates are released from time to time.

There's more...

In this recipe we have seen a regular Drupal installation procedure. There are various other ways to install and configure Drupal. We will explore some of these alternatives in the following sections. We will also cover some of the potential pitfalls you may come across with the requirements page.

Uploading through a control panel

If your web-hosting provider gives you web access to your files through a control panel such as cPanel, you can save time by uploading the compressed Drupal installation package and running the unzip function on the file, if that functionality is provided. This will dramatically reduce the amount of time taken to perform the installation.

Auto-installers

There are other ways in which Drupal can be installed. Your hosting may come with an auto-installer such as Fantastico De Luxe or Softaculous. Both of these services provide a simple way to achieve the same results without the need to use FTP or to configure a database.

Database table prefixes

At the database setup screen there is an option to use a table prefix. Any prefix entered into the field will be added to the start of all table names in the database. This means that you could run multiple installations of Drupal, or possibly other CMSs, from the same database by setting a different prefix. This method, however, has implications for performance and maintenance.

Installing on a Windows environment

This recipe deals with installing Drupal on a Linux server. However, Drupal runs perfectly well on an IIS (Windows) server. Using Microsoft's WebMatrix software, it's easy to set up a Drupal site: http://www.microsoft.com/web/drupal

Alternative languages

Drupal supports many different languages. You can view and download the language packs at http://localize.drupal.org/download. You then need to upload the file to Drupal root/profiles/standard/translations. You will then see the option for that new language on the language selection page of the installation.

Verifying the requirements page

If all goes to plan and the server is already configured correctly, the server requirements page will be skipped. However, you may come across problems in a few areas:

- register_globals: This should be set to off in the php.ini file. This is very important in securing your site. If you find that register_globals is turned on, you will need to consult your hosting provider's documentation on this feature in order to switch it off.
- Drupal will attempt to create the folder Drupal root/sites/default/files. If it fails, you may have to manually create this folder on the server and give it the permission 755.
- Drupal will attempt to create a settings.php file by copying the default.settings.php file. If Drupal has trouble doing this, copy the file Drupal root/sites/default/default.settings.php, rename the copy to settings.php, and give settings.php full write access with chmod 777.

After Drupal finishes the installation process, it will try to set the permissions of this file to 444; you must check that this has been done, and manually set the file to 444 if it has not.
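If you have shell access to the document root, the manual fallbacks above boil down to a few commands (paths follow the standard Drupal 7 layout):

cd /path/to/drupal                               # your document root
mkdir sites/default/files
chmod 755 sites/default/files
cp sites/default/default.settings.php sites/default/settings.php
chmod 777 sites/default/settings.php             # writable during installation only
# ...run the installer in the browser, then lock the file down again:
chmod 444 sites/default/settings.php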
See also

- See Installing Drupal distributions for more installation options using a preconfigured Drupal distribution.
- For more information about installing Drupal, see the installation guide at Drupal.org: http://drupal.org/documentation/install


Setting up the Basics for a Drupal Multilingual site: Languages and UI Translation

Packt
25 Jun 2012
10 min read
(For more resources on Drupal, see here.)

Getting up and running

Before we get started, we obviously need a Drupal 7 website to work on. This section gives you two options: roll your own or install the demo.

Using your own site

You can use your own Drupal 7 site. It can be an existing site or one you create from scratch. If you are creating a brand new site and weren't planning on using a particular installation profile, you can get a head start by using the Localized Drupal Distribution install profile at drupal.org/project/l10n_install. It is probably obvious, but it's best to run the site on a development machine and not in a production environment. Once all the basic Drupal core modules are configured, you will also want to set up the following additional modules to get the most out of the exercises:

- Panels: A tool for creating pages with custom layouts
- Pathauto: Settings for creating path aliases automatically
- Views: A tool for creating custom pages and blocks

Using the demo site

If you'd prefer a jump-start, a full demo website can be created using a special install profile. Instructions for downloading and installing the demo website are included on the Drupal project page available at drupal.org/project/multilingual_book_demo. The demo site contains additional modules, including the modules listed previously as well as the following:

- Administration Menu: Toolbar for quick access to the site configuration
- Views Bulk Operations: Extra functionality for Views forms
- Views Slideshow: Slideshows of content coming from Views

These modules provide us with a starting point. As more modules are needed for particular exercises, they will be listed so you can add them.

Roles, users, and permissions

Although you might already have multiple users on your test site, for simplicity it will be assumed that you are logged in as the super admin (user ID 1).

Working with languages

If we want a multilingual site, the logical first step is to add more languages! In this section, we will add languages to our site, configure how our languages are detected, and set up ways to switch between these languages.

Adding languages with the Locale module

Drupal has language support built into its core, but it's not fully turned on by default. If you go to your site right now and navigate to Configuration | Regional and language, you will see the Regional settings and Date and time configuration pages for setting the default country, time zone, and date/time formats. To get our languages hooked in, let's enable the core module Locale. Now go back to Configuration | Regional and language to see more options. Click on Languages and you'll see we only have English in our list so far.

Now let's add a language by clicking on the Add language link. You can add a predefined language such as German, or you can create a custom language. For our purposes, we will work with predefined languages. So choose a language and click on the Add language button. Drupal will then redirect you to the main language admin page and your new language will be added to the list. Now you can simply repeat the process for each language. In my case, I've added three new languages, namely Arabic, German, and Polish.

The overview table shows each language's name (English and native), its code, and its directionality. A language's direction can be Left to right (LTR) or Right to left (RTL), with most languages using the former.
"Right to left" just means that you start at the right side of the page and move towards the left side when you are writing. RTL languages include Arabic, Hebrew, and Syriac, which are written in their own alphabets.

You can choose which languages to enable, order them, and set the site default. Links are provided to edit and delete each language. English only has an edit link, since it is the system language and cannot be deleted, but English can be disabled if you use a non-English default. If we edit a language, we can modify all the information from the overview table except for the language's code, since we need that as a consistent reference string.

Do not change the default language once you have started translating, or translations might break. Install String Translation from the Internationalization package (drupal.org/project/i18n), go to Configuration | Regional and language | Multilingual settings | Strings, select the Source language, and click on Save configuration. Do not change this setting once it's configured.

Detecting languages

We have our languages, so now what? If you click around your site, nothing looks different. That's because we are looking at the English version of the site and we haven't told Drupal how we want to deal with the other languages. We'll do that now. Navigate to Configuration | Regional and language | Languages | Detection and selection and you'll see we have a number of choices available to us. The Default detection method is enabled for us, but we can also enable the URL, Session, User, and Browser options. If you want a cookie-based option, check out the Language Cookie and Locale Cookie modules. Let's go over the core options in more detail.

URL

If you enable this method, users can navigate to URLs such as example.com/de/news or example.com/deutsch/news (when using the path prefix option) and example.de/news, deutschexample.com/news, or deutsch.example.com/news (when using the domain option). Configuring domains requires web server changes, but using path prefixes does not. This is a common configuration for multilingual sites, and one we'll use shortly. The language's path prefix can be changed when editing the language. If you want to use path-prefixed URLs, then you should decide on your path prefixes before translating content, as changing path prefixes might break links (unless you set up proper redirects). If desired, you can choose one language that does not have any path prefix. This is common for the site's default language. For example, if German is the default language and no path prefix is used, the news page would be accessed as example.com/news, whereas other languages would be accessed using a path prefix (for example, example.com/en/news).

Session

The Session option is available if you want to store a user's language preference inside their user session. It was actually proposed by some Drupal community members that this method be removed from the set of choices, as it caused a number of issues in other code. One reason you may not want to use this option is the possible inconsistency between the content language and the URL language. For example, you could enable both the URL and Session methods and order them so that the Session method is first. If a user is at example.com/de and the session is set to French, then the user will see French content even though the URL corresponds to German. My advice is to just skip this one, or, if you need it, at least make sure that it's ordered below the URL option.
User

Once the Locale module is enabled, users can specify their preferred language when they edit their account profile. If you enable the User method in the detection settings, the user's profile language will be checked when deciding which language to display. Note that the user profile language defaults to the site's default language.

Browser

Users can configure their browsers to specify which languages they prefer. If the Browser method is enabled, Drupal will check the browser's request to find out the language setting and use it for the language choice. This option may or may not be useful depending on your site audience.

Default

The default site language is configured on the Configuration | Regional and language | Languages settings page, and is used for the Default detection method. Although you can't disable this method, you can reorder it if you choose. But it makes the most sense to keep it at the bottom of the list, as the fallback language.

Detection method order

It is important to note that the detection method order is critical to how detection works. If you were to drag the Default method to the top of the list, then none of the other methods would be used and the site would only use the default language. Similarly, if you allow a user profile language and drag User to the top of the list, then the URL method would not matter even if it's enabled. Also, if URL detection is ordered below the Session, User, and Browser options, the user might see a UI language that does not match the URL language, which could be confusing. Make sure to think carefully about the order of these settings. If you use the URL method, it's likely you will want it first. The Default method should be last. The other detection method positions depend on your preference. When using path-prefixed URLs, if one language does not have a prefix, then detection for that language will work differently. For example, if the URL method is first, then no other detection methods will trigger for URLs with no path prefix, such as example.com/news or example.com/about-us.

Our choice

For our purposes, let's stick with URL detection and use the path prefix option, as this is the easiest to configure (it doesn't require extra domains). This choice will keep our URLs in sync with our interface language, which is also user- and SEO-friendly. Check Enabled for the URL method and press the Save settings button. Now click on Configure for that method and you'll see options for Path prefix and Domain. We'll use the default option, that is, Path prefix (for example, example.com/de). Don't panic on the next step: you won't see anything different in the UI until we finish our interface translation process later in the article. Now change the URL in your browser to include the path prefix for one of your languages. In my case, I'll try German and go to example.com/de. You should be able to use the path prefixes for each of your configured languages.

Switching between languages

Most likely you don't want your users to have to manually type in a different URL to switch between languages. Drupal core provides a language switcher block that you can put somewhere convenient for your users. To use the block, navigate to Structure | Blocks, find the Language switcher (User interface text) block, position it where you'd like, and save your block settings. The order of the languages in the block is based on the order configured at Configuration | Regional and language | Languages.
Once enabled, the language switcher block looks like the following screenshot: You can now easily switch between your site languages, and the language chosen is highlighted. The UI won't look different when switching until we finish the next section. Two alternatives to the core language switcher block are provided by the Language Switcher and Language Switcher Drop-down modules. Also, if you want country flag icons added next to each language, you can install the Language Icons module.
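If you prefer the command line, the module setup from this section can be sketched with drush. The drush dl and drush en commands are standard; the machine names i18n_string and languageicons are our reading of the project pages, so double-check them before relying on this:

drush en locale -y                                    # core Locale module
drush dl i18n && drush en i18n_string -y              # Internationalization package, String translation
drush dl languageicons && drush en languageicons -y   # optional country flag icons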


WordPress: BuddyPress Courseware

Packt
19 Jun 2012
10 min read
Installing and configuring BP Courseware

As BP Courseware is a plugin that runs atop BuddyPress, we must have already installed, activated, and configured BuddyPress. To install BP Courseware, log in to the WordPress dashboard, hover over Plugins in the left sidebar, and choose Add New. Search for BuddyPress ScholarPress Courseware and install the plugin.

BuddyPress Courseware requires the Private Messaging BuddyPress component to be enabled. If you have disabled Private Messaging, you will be prompted to enable it before activating BuddyPress Courseware. To enable Private Messaging, log in to the WordPress dashboard and visit the BuddyPress Components screen.

BuddyPress Courseware settings

While BuddyPress Courseware will work immediately, there are a number of settings that can be adjusted to ensure that the plugin meets our needs. To access the BuddyPress Courseware settings, log in to the WordPress dashboard, hover over BuddyPress in the left sidebar, and select Courseware.

Global settings

BP Courseware integrates with BuddyPress groups for course management. By default, we must enable Courseware individually for each group. Checking the Enable Courseware globally checkbox will turn Courseware on for all new groups. This is useful if groups are used exclusively for course management. If you intend to use BuddyPress groups for other purposes, such as student project collaboration, the Enable Courseware globally option should remain unchecked. In this scenario, each group will require enabling Courseware manually. To do so, follow the instructions given in the Enabling BP Courseware section later in the article.

Collaboration settings

Within BP Courseware we are able to define users as either teachers or students. By enabling the Collaboration settings option, any site user with a teacher role has the ability to edit and add courses, assignments, and schedules.

Make assignment responses private

When students submit an assignment, their response is public to all site users. By enabling the Make assignment responses private feature, student responses will be visible only to the teachers and the student who completed the assignment.

Gradebook default grade format

The default BuddyPress Courseware Gradebook format is numeric. Within the Gradebook default grade format settings we are able to choose between numeric, letter, or percentage grading for assignments.

Webservices API integration

BP Courseware can integrate with the WorldCat and ISBNdb web services to aid in locating books and articles. To integrate these services with BuddyPress Courseware, follow the links from the BuddyPress Courseware settings screen to sign up for a free API key.

Customization

Cascading Style Sheets (CSS) are the files that control the look and formatting of a web page. BuddyPress Courseware allows administrators with advanced web skills to create a custom stylesheet for fine-grained control over the look of Courseware.

Renaming the groups page

BP Courseware utilizes the BuddyPress group feature for course content. While the term Groups makes sense in the context of a standard BuddyPress installation, it can be confusing when using BuddyPress Courseware as a learning management system. To prevent this confusion, I find it helpful to rename the Groups page to Courses. To rename Groups:

1. Log in to the WordPress dashboard.
2. Create a new WordPress page by hovering over Pages in the left sidebar and choosing Add New.
3. Title the page Courses, leave the page content blank, and click on the blue Publish button.
4. Adjust the BuddyPress page settings by hovering over BuddyPress in the left sidebar and selecting the Pages tab.
5. In the menu next to User Groups, select the Courses option and click on the Save button.
6. Delete the Groups page by clicking on Pages in the left sidebar. From the All Pages screen, hover over the Groups page and click on the red Trash link.

Setting Courses as the site home page

Using the BuddyPress Courseware plugin, we may wish to make our Courses page the site's front page. Doing so will allow students to quickly access course information and prevent confusion about how to find the Courseware dashboard. To set the Courses page as our site's home page:

1. Log in to the WordPress dashboard.
2. If you have not already done so, create an empty page titled Blog.
3. Hover over Settings in the left sidebar and select the Reading tab.
4. From the Reading Settings screen, for Front page displays select the A static page option.
5. Select Courses from the Front page menu choices and Blog as the Posts page.

Creating a course

When setting up a course, we must first create a BuddyPress group. To create a course group:

1. From our public-facing site, visit the Courses page (or Groups if not renamed).
2. Click on the Create a Group button.
3. From the Details screen, provide a Group Name, such as the course name and section number, and enter a Group Description, such as the course catalog information.
4. On the Privacy Options page, select This is a public group, allowing any site member to join.
5. Complete the setup by optionally adding an avatar image and inviting members.

Enabling BP Courseware

Once the course group has been created, we may enable BP Courseware. This step may be skipped if we selected Enable Courseware globally from the BuddyPress Courseware settings screen. To enable BP Courseware:

1. Visit the page of the newly created group.
2. Click on the Admin tab.
3. Click on the Courseware link in the row of links below the Admin tab.
4. Below Courseware Status, select Enable to enable BP Courseware for the group.
5. Optionally, if you wish to keep student assignment responses private, select Enable below Private Responses.
6. Click on the Save button.

Courseware dashboard

Within BP Courseware, the Courseware dashboard acts as the course home screen for both instructors and students. From the Courseware dashboard, instructors are able to add and manage course content. Students use the dashboard to access course materials and submit assignments. To access the Courseware dashboard, visit the Courses page (or Groups if unchanged) and click on the Courseware link located below the course/group description. The teacher dashboard appears as shown in the following screenshot: The student dashboard appears as shown in the following screenshot:

Adding course content

Adding course content to BP Courseware allows educators to easily organize and share course information with students. The BP Courseware plugin provides a structure for managing course lectures, assignments, quizzes, grades, resources, and schedules.

Lectures

Adding lecture information allows instructors to share course notes, resources, and slides with students in a structured format. While the term lecture implies the lecture format of university courses, I find it useful to think of lectures in terms of teaching units. The lecture pages can serve as a resource for a chunk of course content.
To add a new lecture, click on the Add a new lecture button from the course dashboard. We may then edit the content of the lecture in much the same way as a WordPress post, adding text, images, links, and embedded media. Lectures appear in the Latest Lectures section of the course dashboard, with the most recently posted lecture at the top. This allows students to quickly access the most recent course lecture.

Assignments

BP Courseware provides a means for us to post assignments and collect student responses. Assignments can take multiple formats, allowing students to respond to questions, upload a file, or embed media.

Posting assignments as a teacher

As a teacher, we have the ability to post assignments. To post an assignment, visit the course dashboard and click on the Create an assignment button. This takes us to the New Assignment screen, where we may enter the details of the assignment. Assign a title and enter the assignment description and necessary information. This works in much the same way as WordPress posts, allowing us to enter text, images, media, and links. In the right column, we may choose the corresponding lecture topic and select the assignment due date. Once the assignment information is complete, click on the Publish assignment button to make it visible to students.

Submitting assignments as a student

Students can access assignments by clicking on the All Assignments button or by viewing the Latest Assignments from the course dashboard. By clicking on the Assignment Title, students can see the detailed assignment information written by the instructor. Clicking on the Add a response button gives the student an opportunity to submit the assignment. For standard assignments, students are presented with a text input area similar to the WordPress post editor. Within this text area, students may type a written response, add links, embed media, and attach files.

Viewing student responses

Instructors have access to all student assignment responses, while students are only able to view their own. To view student assignment responses, visit the individual assignment page and click on the Responses button, located in the right sidebar. This presents us with a list of student responses. Clicking on an individual response link takes us to a screen containing that student's assignment response.

Gradebook

BP Courseware provides us with a Gradebook feature for assessing student assignments. To access the Gradebook, visit an individual assignment page and click on the Gradebook button, located in the right sidebar. This takes us to the Gradebook screen for the assignment. For each student, we are given the option to enter a grade value, a private comment, and a public comment. Once the grades and comments have been entered, we may click on the Save grades button. After an assignment has been graded, students receive a message containing the grade. Students are alerted to this message by a notification, but may also access their grades directly by visiting the individual assignment page. The grade is posted in the Assignment Meta sidebar of the assignment page.

Bibliography

The Bibliography is designed so that educators can easily maintain a list of course materials and resources. To add entries to the bibliography, visit the Courseware dashboard and click on the Manage bibliography button.
From the Bibliography page we may add entries manually or import them by pasting information from BibTeX (see the sample entry after this section). The bibliography is ideal for courses utilizing a wide range of materials. BibTeX is a tool for formatting references, typically used in conjunction with the LaTeX typesetting system. More information can be found at http://www.bibtex.org.

Schedule and calendar

The BuddyPress Schedule page functions as a course calendar, automatically containing assignment due dates as well as manually managed schedule items. To view the course calendar, visit the Courseware dashboard and click on the Schedules link. To add a schedule item, click on the Add a schedule button from the Courseware dashboard, complete the New Schedule form, and click on the Publish Schedule button. New schedule items will be added to the calendar as well as featured in a list format on the Schedules page.
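Returning to the Bibliography import for a moment: a BibTeX entry is a small labeled record that can be pasted in as-is. The citation key dickens2002 below is arbitrary:

@book{dickens2002,
  author    = {Charles Dickens},
  title     = {Oliver Twist},
  publisher = {Dover Publications},
  year      = {2002}
}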


Supporting hypervisors by OpenNebula

Packt
25 May 2012
7 min read
(For more resources on Open Source, see here.)

A host is a server that has the ability to run virtual machines using a special software component called a hypervisor, which is managed by the OpenNebula frontend. The hosts do not all need to have a homogeneous configuration; it is possible to use different hypervisors on different GNU/Linux distributions in a single OpenNebula cluster. Using different hypervisors in your infrastructure is not just a technical exercise; it assures you greater flexibility and reliability. A few examples where having multiple hypervisors would prove beneficial are as follows:

- A bug in the current release of hypervisor A does not permit the installation of a virtual machine with a particular legacy OS (let's say, for example, Windows 2000 Service Pack 4), but you can run it on hypervisor B without any problem.
- You have a production infrastructure running a closed-source, free-to-use hypervisor, and during the next year the software house developing that hypervisor will start charging for a license, or will declare bankruptcy due to an economic crisis.

The current version of OpenNebula gives you great flexibility regarding hypervisor usage, since it natively supports KVM/Xen (which are open source) and VMware ESXi. In the future it will probably support both VirtualBox (Oracle) and Hyper-V (Microsoft).

Configuring hosts

The first thing to do before starting the installation of a particular hypervisor on a host is to perform some general configuration steps. They are as follows:

1. Create a dedicated oneadmin UNIX account, which should have sudo privileges for executing particular tasks (for example, iptables/ebtables and the network hooks that we have configured).
2. Ensure the frontend's and hosts' hostnames can be resolved by a local DNS or a shared /etc/hosts file.
3. Ensure the oneadmin user on the frontend can connect remotely through SSH to the oneadmin user on the hosts without a password.
4. Configure the shared network bridge that will be used by VMs to reach the physical network.

The oneadmin account and passwordless login

Every host should have a oneadmin UNIX account that will be used by the OpenNebula frontend to connect and execute commands. If you did not create it during the operating system install, create a oneadmin user on the host using the following command:

youruser@host1 $ sudo adduser oneadmin

You can configure any password you like (even blank) because we are going to set up a passwordless login from the frontend:

oneadmin@front-end $ ssh-copy-id oneadmin@host1

Now if you connect from the oneadmin account on the frontend to the oneadmin account on the host, you should get a shell prompt without entering any password:

oneadmin@front-end $ ssh oneadmin@host1

Uniformity of the oneadmin UID number

Later, we will learn about the possible storage solutions available with OpenNebula. Keep in mind that if we are going to set up a shared storage, we need to make sure that the UID number of the oneadmin user is homogeneous between the frontend and every other host. In other words, check with the id command that the oneadmin UID is the same both on the frontend and on the hosts.

Verifying the SSH host fingerprints

The first time you connect to a remote SSH server from a particular host, the SSH client will show you the fingerprint of the remote server's key and ask for your permission to continue, with the following message:

The authenticity of host 'host01 (192.168.254.2)' can't be established.
RSA key fingerprint is 5a:65:0f:6f:21:bb:fd:6a:4a:68:cd:72:58:5c:fb:9f.
Are you sure you want to continue connecting (yes/no)?

Knowing the fingerprint of the remote SSH key and saving it to the local SSH client fingerprint cache (saved in ~/.ssh/known_hosts) should be good enough to prevent man-in-the-middle attacks. For this reason, you need to connect once from the oneadmin user on the frontend to every host, in order to save the fingerprints of the remote hosts in oneadmin's known_hosts file for the first time. Not doing this will prevent OpenNebula from connecting to the remote hosts.

In large environments, this requirement may be a slow-down when configuring new hosts. However, it is possible to bypass this check by instructing the SSH client used by OpenNebula not to verify the remote SSH key, through ~/.ssh/config:

Host *
    StrictHostKeyChecking no

If you do not have a local DNS (or you cannot/do not want to set one up), you can manually manage the /etc/hosts file on every host, using the following IP addresses:

127.0.0.1       localhost
192.168.66.90   on-front
192.168.66.97   kvm01
192.168.66.98   xen01
192.168.66.99   esx01

Now you should be able to remotely connect from one node to another by hostname, using the following command:

$ ssh oneadmin@kvm01

Configuring a simple DNS with dnsmasq

If you do not have a local DNS and manually managing a plain hosts file on every host does not excite you, you can try to install and configure dnsmasq. It is a lightweight, easy-to-configure DNS forwarder (optionally, DHCP and TFTP can be provided with it) that serves a small-scale network well. The OpenNebula frontend may be a good place to install it. For an Ubuntu/Debian installation, use the following command:

$ sudo apt-get install dnsmasq

The default configuration should be fine. You just need to make sure that /etc/resolv.conf looks similar to the following:

# dnsmasq
nameserver 127.0.0.1
# another local DNS
nameserver 192.168.0.1
# ISP or public DNS
nameserver 208.67.220.220
nameserver 208.67.222.222

The /etc/hosts file will look similar to the following:

127.0.0.1       localhost
192.168.66.90   on-front
192.168.66.97   kvm01
192.168.66.98   xen01
192.168.66.99   esx01

Configure any other hostname in the hosts file on the frontend running dnsmasq. Configure /etc/resolv.conf on the other hosts as follows:

# ip where dnsmasq is installed
nameserver 192.168.0.2

Now you should be able to remotely connect from one node to another using the plain hostname:

$ ssh oneadmin@kvm01

When you add new hosts, simply add them to /etc/hosts on the frontend and they will automatically resolve from every other host, thanks to dnsmasq.
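With name resolution in place, the fingerprint caching and UID uniformity checks described earlier can be scripted instead of performed by hand. The following is a minimal sketch, assuming the example hostnames above and that it runs as oneadmin on the frontend; adjust the host list to your environment:

for h in kvm01 xen01 esx01; do
  # Cache the remote host key so OpenNebula's SSH connections are not blocked
  ssh-keyscan -H "$h" >> ~/.ssh/known_hosts
  # The printed UID should be identical on every host (and on the frontend)
  ssh oneadmin@"$h" id -u oneadmin
done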
Configuring sudo

To give administrative privileges to the oneadmin account on the hosts, add it to the sudo or admin group, depending on your /etc/sudoers configuration:

# /etc/sudoers
Defaults env_reset
root    ALL=(ALL) ALL
%sudo   ALL=NOPASSWD: ALL

With this simple sudo configuration, every user in the sudo group can execute any command with root privileges, without having to enter the user password before each command. Now add the oneadmin user to the sudo group with the following command:

$ sudo adduser oneadmin sudo

Giving full administrative privileges to the oneadmin account might be considered inappropriate by most security-focused people. However, I can assure you that if you are taking your first steps with OpenNebula now, having full administrative privileges can save you some headaches. This is a suggested configuration, but it is not required to run OpenNebula.

Configuring network bridges

Every host should have its bridges configured with the same name. Check the following /etc/network/interfaces code as an example:

# The loopback network interface
auto lo
iface lo inet loopback

# The primary network interface
iface eth0 inet manual

auto lan0
iface lan0 inet static
    bridge_ports eth0
    bridge_stp off
    bridge_fd 0
    address 192.168.66.97
    netmask 255.255.255.0
    gateway 192.168.66.1
    dns-nameservers 192.168.66.1

You can have as many bridges as you need, bound or not bound to a physical network. By omitting the bridge_ports parameter you get a purely virtual network for your VMs, but remember that without a physical network, VMs on different hosts cannot communicate with each other.
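To verify the bridge after editing the interfaces file, a quick check from the shell helps. This is a sketch assuming the lan0 bridge defined above and the bridge-utils package installed:

$ sudo ifup lan0      # bring the bridge up if it is not already active
$ brctl show lan0     # eth0 should appear as a port of the bridge
$ ip addr show lan0   # the static address should be assigned to lan0, not eth0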
Magento: Designs and Themes

Packt
19 May 2012
13 min read
(For more resources on e-Commerce, see here.)

The Magento theme structure

The same holds true for themes. You can specify the look and feel of your stores at the Global, Website, or Store levels (themes can be applied to individual store views relating to a store) by assigning a specific theme.

In Magento, a group of related themes is referred to as a design package. Design packages contain files that control various functional elements that are common among the themes within the package. By default, Magento Community installs two design packages:

Base package: A special package that contains all the default elements for a Magento installation (we will discuss this in more detail in a moment)

Default package: This contains the layout elements of the default store (look and feel)

Themes within a design package contain the various elements that determine the look and feel of the site: layout files, templates, CSS, images, and JavaScript. Each design package must have at least one default theme, but can contain other theme variants. You can include any number of theme variants within a design package and use them, for example, for seasonal purposes (that is, holidays, back-to-school, and so on). The following image shows the relationship between design packages and themes:

A design package and theme can be specified at the Global, Website, or Store levels. Most Magento users will use the same design package for a website and all descendant stores. Usually, related stores within a website business share very similar functional elements, as well as similar style features. This is not mandatory; you are free to specify a completely different design package and theme for each store view within your website hierarchy.

The theme structure

Magento divides themes into two groups of files: templating and skin. Templating files contain the HTML, PHTML, and PHP code that determines the functional aspects of the pages in your Magento website. Skin files are made of the CSS, image, and JavaScript files that give your site its outward design. Ingeniously, Magento further separates these areas by putting them into different directories of your installation:

Templating files are stored in the app/design directory, where the extra security of this section protects the functional parts of your site design

Skin files are stored within the skin directory (at the root level of the installation), and can be granted a higher permission level, as these are the files that are delivered to a visitor's browser for rendering the page

Templating hierarchy

Frontend theme template files (the files used to produce your store's pages) are stored within three subdirectories:

layout: This contains the XML files that define the various areas of a page. These files also contain meta and encoding information.

template: This stores the PHTML files (HTML files that contain PHP code and are processed by the PHP server engine) used for constructing the visual structure of the page.

locale: Add files within this directory to provide additional language translations for site elements, such as labels and messages.

Magento has a distinct path for storing the templating files used for your website: app/design/frontend/[Design Package]/[Theme]/.
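To give a feel for what lives in the template directory, the following is a minimal, hypothetical PHTML fragment; the file path, the getTitle call, and the 'topMenu' child name are illustrative examples, not files shipped with Magento:

<?php /* template/page/html/example.phtml (hypothetical path) */ ?>
<div class="header">
    <!-- Block methods are available through $this inside a template -->
    <h1><?php echo $this->htmlEscape($this->getTitle()); ?></h1>
    <!-- Render a child block that the layout XML attaches to this block -->
    <?php echo $this->getChildHtml('topMenu'); ?>
</div>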
Skin hierarchy

The skin files for a given design package and theme are subdivided into the following:

css: This stores the CSS stylesheets and, in some cases, related image files that are called by the CSS files (this is not an acceptable convention, but I have seen some designers do this)

images: This contains the JPG, PNG, and GIF files used in the display of your site

js: This contains the JavaScript files that are specific to a theme (JavaScript files used for core functionality are kept in the js directory at the root level)

The path for the frontend skin files is: skin/frontend/[Design Package]/[Theme]/.

The concept of theme fallback

A very important and brilliant aspect of Magento is what is called the Magento theme fallback model. Basically, this concept means that when building a page, Magento first looks to the assigned theme for a store. If the theme is missing any necessary templating or skin files, Magento then looks to the required default theme within the assigned design package. If the file is not found there, Magento finally looks into the default theme of the Base design package. For this reason, the Base design package must never be altered or removed; it is the failsafe for your site. The following flowchart outlines the process by which Magento finds the necessary files for fulfilling a page rendering request.

This model also gives designers some tremendous assistance. When a new theme is created, it only has to contain those elements that differ from what is provided by the Base package. For example, if all parts of a desired site design are similar to the Base theme except for the graphic appearance of the site, a new theme can be created simply by adding new CSS and image files to the new theme (stored within the skin directory). Any new CSS files will need to be included in the local.xml file for your theme (we will discuss the local.xml file later in this article). If the design requires different layout structures, only the changed layout and template files need to be created; everything that remains the same need not be duplicated.

While previous versions of Magento were built with fallback mechanisms, only in the current versions has this become a true and complete fallback. In earlier versions, the fallback was to the default theme within a package, not to the Base design package. Therefore, each default theme within a package had to contain all the files of the Base package. If Magento base files were updated in subsequent software versions, these changes had to be redistributed manually to each additional design package within a Magento installation. With Magento CE 1.4 and above, upgrades to the Base package automatically enhance all design packages. If you are careful not to alter the Base design package, then future upgrades to the core functionality of Magento will not break your installation. You will have access to the new improvements based on your custom design package or theme, making your installation virtually upgrade proof. For the same reason, never install a custom theme inside the Base design package.
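As a sketch of how the local.xml file mentioned above pulls a new stylesheet into a theme, a minimal layout update might look like the following. The file would live at app/design/frontend/[Design Package]/[Theme]/layout/local.xml, and the custom.css name is an example:

<?xml version="1.0"?>
<layout>
    <default>
        <reference name="head">
            <!-- Resolves to skin/frontend/[Design Package]/[Theme]/css/custom.css -->
            <action method="addCss"><stylesheet>css/custom.css</stylesheet></action>
        </reference>
    </default>
</layout>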
Default installation design packages and themes

In a new, clean Magento Community installation, you are provided with the following design packages and themes. Depending on your needs, you could add additional custom design packages, or custom themes within the default design package:

If you're going to install a group of related themes, you should probably create a new design package containing a default theme as your fallback theme

On the other hand, if you're using only one or two themes based on the features of the default design package, you can install the themes within the default design package hierarchy

I like to make sure that whatever I customize can be undone, if necessary. I'm reluctant to make changes to the core, installed files; I prefer to work on duplicate copies, preserving the originals in case I need to revert back. After re-installing Magento for the umpteenth time because I had altered too many core files, I learned the hard way!

As Magento Community installs a basic variety of good theme variants from which to start, the first thing you should do before adding or altering theme components is to duplicate the default design package files, renaming the duplicate to an appropriate name, such as a description of your installation (for example, Acme or Sports), as sketched below. Any changes you make within this new design package will not alter the originally installed components, thereby allowing you to revert any or all of your themes to the originals. Your new theme hierarchy might now look like this:

When creating new packages, you also need to create new folders in the /skin directory to match your directory hierarchy in the /app/design directory. Likewise, if you decide to use one of the installed default themes as the basis for designing a new custom theme, duplicate and rename the theme to preserve the original as your fallback.

The new Blank theme

A fairly recent default installed theme is Blank. If your customization of your Magento stores is primarily one of colors and graphics, this is not a bad theme to use as a starting point. As the name implies, it has a pretty stark layout, as shown in the following screenshot. However, it does give you all the basic structures and components. Using images and CSS styles, you can go a long way toward creating a good-looking, functional website, as shown in the next screenshot for www.aviationlogs.com:

When duplicating any design package or theme, don't forget that each of them is defined by directories under /app/design/frontend/ and /skin/frontend/
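The duplication itself can be done from the command line. The following is a minimal sketch, assuming a Unix shell at the Magento root and a new package named acme (substitute your own package name); both trees must be copied so the hierarchies match:

$ cp -R app/design/frontend/default app/design/frontend/acme
$ cp -R skin/frontend/default skin/frontend/acme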
Installing third-party themes

In most cases, Magento users who are beginners will explore the hundreds of available Magento themes created by third-party designers. There are many free ones available, but most are sold by dedicated designers.

Shopping for themes

One of the great good/bad aspects of Magento is the third-party themes. The architecture of the Magento theme model gives knowledgeable theme designers tremendous ability to construct themes that are virtually upgrade proof, while possessing powerful enhancements. Unfortunately, not all designers have either upgraded older themes properly or created new themes fully honoring the fallback model. If the older fallback model is still used for current Magento versions, upgrades to the Base package could adversely affect your theme. Therefore, as you review third-party themes, take time to investigate how the designer constructs their themes. Most provide some type of site demo. As you learn more about using themes, you'll find it easier to analyze third-party themes.

Apart from a few free themes offered through the Magento website, most of them require that you install the necessary files manually, by FTP or SFTP, to your server. Every third-party theme I have ever used has included some instructions on how to install the files to your server. However, allow me to offer the following helpful guidelines:

When using FTP/SFTP to upload theme files, use the merge function so that only additional files are added to each directory, instead of replacing entire directories. If you're not sure whether your FTP client provides merge capabilities, or not sure how to configure it for merging, you will need to open each directory in the theme and upload the individual files to the corresponding directories on your server.

If you have set your CSS and JavaScript files to merge, under System | Configuration | Developer, you should turn merging off while installing and modifying your theme.

After uploading themes or any component files (for example, templates, CSS, or images), clear the Magento caches under System | Cache Management in your backend.

Disable your Magento cache while you install and configure themes. While not critical, it will allow you to see changes immediately instead of having to constantly clear the Magento cache. You can disable the cache under System | Cache Management in the backend.

If you wish to make any changes to a theme's individual file, make a duplicate of the original file before making your changes. That way, if something goes awry, you can always re-install the duplicated original.

If you have followed the earlier advice to duplicate the Default design package before customizing, instructions to install files within /app/design/frontend/default/ and /skin/frontend/default/ should be interpreted as /app/design/frontend/[your design package name]/ and /skin/frontend/[your design package name]/, respectively. As most new Magento users don't duplicate the Default design package, it's common for theme designers to instruct users to install new themes and files within the Default design package. (We know better now, don't we?)

Creating variants

Let's assume that we have created a new design package called outdoor_package. Within this design package, we duplicate the Blank theme and call it outdoor_theme. Our new design package file hierarchy, in both /app/design/ and /skin/frontend/, might resemble the following:

app/
  design/
    frontend/
      default/
        blank/
        modern/
        iphone/
      outdoor_package/
        outdoor_theme/
skin/
  frontend/
    default/
      blank/
      blue/
      french/
      german/
      modern/
      iphone/
    outdoor_package/
      outdoor_theme/

However, let's also take one more customization step here. Since Magento separates the template structure from the skin structure (the layout from the design, so to speak), we could create variations of a theme that are simply controlled by CSS and images, by creating more than one skin. We might want to have our English language store in a blue color scheme, but our French language store in a green color scheme. We could take the outdoor_theme skin directory and duplicate it, renaming the copies for the new colors:

app/
  design/
    frontend/
      default/
        blank/
        modern/
        iphone/
      outdoor_package/
        outdoor_theme/
skin/
  frontend/
    default/
      blank/
      blue/
      french/
      german/
      modern/
      iphone/
    outdoor_package/
      outdoor_blue/
      outdoor_green/

Before we continue, let's go over something that is especially relevant to what we just created.
For our outdoor theme, we created two skin variants: blue and green. However, what if the difference between the two is only one or two files? If we make changes to other files that would affect both color schemes, but which are otherwise the same for both, this would create more work to keep both color variations in sync, right? Remember, with the Magento fallback method, if your site calls on a file, it first looks into the assigned theme, then the default theme within the same design package, and, finally, within the Base design package. Therefore, in this example, you could use the default skin, under /skin/frontend/outdoor_package/default/, to contain all files common to both blue and green. Only include those files that will forever remain different within their respective skin directories.

Assigning themes

As mentioned earlier, you can assign design packages and themes at any level of the GWS hierarchy. As with any configuration, the choice depends on the level at which you wish to assign control. Global configurations affect the entire Magento installation. Website-level choices set the default for all subordinate store views, which can also have their own theme specifics, if desired. Let's walk through the process of assigning a custom design package and themes. For the sake of this exercise, let's continue with our Outdoor theme, as described earlier. Refer to the following screenshot:

We're now going to assign our Outdoor theme to an Outdoor website and store views. Our first task is to assign the design package and theme to the website as the default for all subordinate store views:

Go to System | Configuration | General | Design in your Magento backend.

In the Current Configuration Scope drop-down menu, choose Outdoor Products.

As shown in the following screenshot, enter the name of your design package, template, layout, and skin. You will have to uncheck the boxes labeled Use Default beside each field you wish to use.

Click on the Save Config button.

The reason you enter default in the fields, as shown in the previous screenshot, is to provide the fallback protection described earlier. Magento needs to know where to look for any files that may be missing from your theme files.

Microsoft Silverlight 5: Working with Services

Packt
23 Apr 2012
11 min read
(For more resources on Silverlight, see here.)

Introduction

Looking at the namespaces and classes in the Silverlight assemblies, it's easy to see that there are no ADO.NET-related classes available in Silverlight. Silverlight does not contain a DataReader, a DataSet, or any option to connect to a database directly. Thus, it's not possible to simply define a connection string for a database and let Silverlight applications connect with that database directly. The solution adds a layer on top of the database in the form of services. The services that talk directly to a database (or, more preferably, to a business and data access layer) can expose the data so that Silverlight can work with it. However, the data that is exposed in this way does not always have to come from a database. It can come from a third-party service, by reading a file, or be the result of an intensive calculation executed on the server.

Silverlight has a wide range of options to connect with services. This is important, as it's the main way of getting data into our applications. In this article, we'll look at the concepts of connecting with several types of services and external data. We'll start our journey by looking at how Silverlight connects and works with a regular service. We'll see that the concepts we use here recur for other types of service communications as well. One of these concepts is cross-domain service access. In other words, this means accessing a service on a domain that is different from the one where the Silverlight application is hosted. We'll see why Microsoft has implemented cross-domain restrictions in Silverlight and what we need to do to access externally hosted services.

Next, we'll talk about working with the Windows Azure Platform. More specifically, we'll talk about how we can get our Silverlight application to get data from a SQL Azure database, how to communicate with a service in the cloud, and even how to host the Silverlight application in the cloud, using a hosted service or serving it from Azure Storage. Finally, we'll finish this chapter by looking at socket communication. This type of communication is rare and chances are that you'll never have to use it. However, if your application needs the fastest possible access to data, sockets may provide the answer.

Connecting and reading from a standardized service

Applies to Silverlight 3, 4, and 5

If we need data inside a Silverlight application, chances are that this data resides in a database or another data store on the server. Silverlight is a client-side technology, so when we need to connect to data sources, we need to rely on services. Silverlight has a broad spectrum of services to which it can connect. In this recipe, we'll look at the concepts of connecting with services, which are usually very similar for all types of services Silverlight can connect with. We'll start by creating an ASMX web service (in other words, a regular web service). We'll then connect to this service from the Silverlight application, invoke it, and read its response.

Getting ready

In this recipe, we'll build the application from scratch. However, the completed code for this recipe can be found in the Chapter07/SilverlightJackpot_Read_Completed folder in the code bundle that is available on the Packt website.

How to do it...

We'll start to explore the usage of services with Silverlight using the following scenario.
Imagine we are building a small game application in which a unique code belonging to a user needs to be checked to find out whether or not it is a winning code for some online lottery. The collection of winning codes is present on the server, perhaps in a database or an XML file. We'll create and invoke a service that will allow us to validate the user's code against the collection on the server. The following are the steps we need to follow:

We'll build this application from scratch. Our first step is creating a new Silverlight application called SilverlightJackpot. As always, let Visual Studio create a hosting website for the Silverlight client by selecting the Host the Silverlight application in a new Web site checkbox in the New Silverlight Application dialog box. This will ensure that we have a website created for us, in which we can create the service as well.

We need to start by creating a service. For the sake of simplicity, we'll create a basic ASMX web service. To do so, right-click on the project node in the SilverlightJackpot.Web project and select Add | New Item... in the menu. In the Add New Item dialog, select the Web Service item. We'll call the new service JackpotService. Visual Studio creates an ASMX file (JackpotService.asmx) and a code-behind file (JackpotService.asmx.cs).

To keep things simple, we'll mock the data retrieval by hardcoding the winning numbers. We'll do so by creating a new class called CodesRepository.cs in the web project. This class returns a list of winning codes. In real-world scenarios, this code would go out to a database and get the list of winning codes from there. The code in this class is very easy. The following is the code for this class:

public class CodesRepository
{
    private List<string> winningCodes;

    public CodesRepository()
    {
        FillWinningCodes();
    }

    private void FillWinningCodes()
    {
        if (winningCodes == null)
        {
            winningCodes = new List<string>();
            winningCodes.Add("12345abc");
            winningCodes.Add("azertyse");
            winningCodes.Add("abcdefgh");
            winningCodes.Add("helloall");
            winningCodes.Add("ohnice11");
            winningCodes.Add("yesigot1");
            winningCodes.Add("superwin");
        }
    }

    public List<string> WinningCodes
    {
        get { return winningCodes; }
    }
}

At this point, we need only one method in our JackpotService. This method should accept the code sent from the Silverlight application, check it against the list of winning codes, and return whether or not the user is lucky enough to have a winning code. Only the methods that are marked with the WebMethod attribute are made available over the service. The following is the code for our service:

[WebService(Namespace = "http://tempuri.org/")]
[WebServiceBinding(ConformsTo = WsiProfiles.BasicProfile1_1)]
[System.ComponentModel.ToolboxItem(false)]
public class JackpotService : System.Web.Services.WebService
{
    List<string> winningCodes;

    public JackpotService()
    {
        winningCodes = new CodesRepository().WinningCodes;
    }

    [WebMethod]
    public bool IsWinningCode(string code)
    {
        if (winningCodes.Contains(code))
            return true;
        return false;
    }
}

Build the solution at this point to ensure that our service will compile and can be connected to from the client side.

Now that the service is ready and waiting to be invoked, let's focus on the Silverlight application. To make the service known to our application, we need to add a reference to it. This is done by right-clicking on the SilverlightJackpot project node and selecting the Add Service Reference... item. In the dialog that appears, we have the option to enter the address of the service ourselves.
However, we can click on the Discover button, as the service lives in the same solution as the Silverlight application. Visual Studio will search the solution for the available services. If there are no errors, our freshly created service should show up in the list. Select it and rename the Namespace: as JackpotService, as shown in the following screenshot. Visual Studio will now create a proxy class:

The UI for the application is kept quite simple. An image of the UI can be seen a little further ahead. It contains a TextBox, where the user can enter a code, a Button that will invoke a check, and a TextBlock that will display the result. This can be seen in the following code:

<StackPanel>
    <TextBox x:Name="CodeTextBox" Width="100" Height="20">
    </TextBox>
    <Button x:Name="CheckForWinButton" Content="Check if I'm a winner!"
            Click="CheckForWinButton_Click">
    </Button>
    <TextBlock x:Name="ResultTextBlock">
    </TextBlock>
</StackPanel>

In the Click event handler, we'll create an instance of the proxy class that was created by Visual Studio, as shown in the following code:

private void CheckForWinButton_Click(object sender, RoutedEventArgs e)
{
    JackpotService.JackpotServiceSoapClient client =
        new SilverlightJackpot.JackpotService.JackpotServiceSoapClient();
}

All service communications in Silverlight happen asynchronously. Therefore, we need to provide a callback method that will be invoked when the service returns:

client.IsWinningCodeCompleted +=
    new EventHandler<SilverlightJackpot.JackpotService.IsWinningCodeCompletedEventArgs>
        (client_IsWinningCodeCompleted);

To actually invoke the service, we need to call the IsWinningCodeAsync method, as shown in the following line of code. This method will make the actual call to the service. We pass in the value that the user entered:

client.IsWinningCodeAsync(CodeTextBox.Text);

Finally, in the callback method, we can work with the result of the service via the Result property of the IsWinningCodeCompletedEventArgs instance. Based on the value, we display a different message, as shown in the following code:

void client_IsWinningCodeCompleted(object sender,
    SilverlightJackpot.JackpotService.IsWinningCodeCompletedEventArgs e)
{
    bool result = e.Result;
    if (result)
        ResultTextBlock.Text = "You are a winner! Enter your data below and we will contact you!";
    else
        ResultTextBlock.Text = "You lose... Better luck next time!";
}

We now have a fully working Silverlight application that uses a service for its data needs. The following screenshot shows the result of entering a valid code:

How it works...

As it stands, the current version of Silverlight does not have support for using a local database. Silverlight thus needs to rely on external services for getting external data. Even if we had local database support, we would still need to use services in many scenarios. The sample used in this recipe is a good example of data that would need to reside in a secure location (meaning on the server). In any case, we should never store the winning codes in a local database that would be downloaded to the client side.

Silverlight has the necessary plumbing on board to connect with the most common types of services. Services such as ASMX, WCF, REST, RSS, and so on don't pose a problem for Silverlight. While the implementation of connecting with different types of services differs, the concepts are similar. In this recipe, we used a plain old web service. Only the methods that are attributed with the WebMethodAttribute are made available over the service.
This means that even if we create a public method on the service, it won't be available to clients if it's not marked as a WebMethod. In this case, we only create a single method called IsWinningCode, which retrieves a list of winning codes from a class called CodesRepository. In real-world applications, this data could be read from a database or an XML file. Thus, this service is the entry point to the data.

For Silverlight to work with the service, we need to add a reference to it. When doing so, Visual Studio will create a proxy class. Visual Studio can do this for us because the service exposes a Web Service Description Language (WSDL) file. This file contains an overview of the methods supported by the service. A proxy can be considered a copy of the server-side service class, but without the implementations. Instead, each copied method contains a call to the actual service method. The proxy creation process carried out by Visual Studio is the same as adding a service reference in a regular .NET application.

However, invoking the service is somewhat different. All communication with services in Silverlight is carried out asynchronously. If this wasn't the case, Silverlight would have had to wait for the service to return its result. In the meantime, the UI thread would be blocked and no interaction with the rest of the application would be possible. To support the asynchronous service call inside the proxy, the IsWinningCodeAsync method as well as the IsWinningCodeCompleted event is generated. The IsWinningCodeAsync method is used to make the actual call to the service. To get access to the results of a service call, we need to define a callback method. This is where the IsWinningCodeCompleted event comes in. Using this event, we define which method should be called when the service returns (in our case, the client_IsWinningCodeCompleted method). Inside this method, we have access to the results through the Result property, which is always of the same type as the return type of the service method.

See also

Apart from reading data, we also have to persist data. In the next recipe, Persisting data using a standardized service, we'll do exactly that.
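One detail the recipe glosses over: if the service call fails (for example, when the server is unreachable), reading e.Result throws an exception. A defensive version of the callback, sketched here on top of the generated proxy members shown above, checks the Error property first:

void client_IsWinningCodeCompleted(object sender,
    SilverlightJackpot.JackpotService.IsWinningCodeCompletedEventArgs e)
{
    // e.Error is non-null when the service call failed; accessing
    // e.Result in that case would rethrow the exception.
    if (e.Error != null)
    {
        ResultTextBlock.Text = "The service could not be reached. Please try again.";
        return;
    }
    ResultTextBlock.Text = e.Result
        ? "You are a winner! Enter your data below and we will contact you!"
        : "You lose... Better luck next time!";
}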

Gradebook: An Introduction

Packt
13 Apr 2012
5 min read
Getting to the gradebook

All courses in Moodle have a grades area, also known as the gradebook. A number of activities within Moodle can be graded, and these grades will automatically be captured and shown in the gradebook. To get to the gradebook, view the Settings block on the course and then click on Grades. The following screenshot shows an example of the teachers' view of a simple gradebook with a number of different graded activities within it. Let's take a quick tour of what we can see!

The top row of the screenshot shows the column headings, which are each of the assessed activities within the Moodle course. These automatically appear in the grades area. In this case, the assessed activities are:

Initial assessment
U1: Task 1
U1: Task 2
U2: Test
Evidence

On the left of the screenshot, you can see the students' names. Essentially, the name is the start of a row of information about the student. If we start with Emilie H, we can see that she received a score of 100.00 for her Initial assessment. Looking at Bayley W, we can see that his work for U1: Task 2 received a Distinction grade. Using the gradebook, we can see all the assessments and grades linked to each student from one screen.

Users with teacher, non-editing teacher, or manager roles will be able to see the grades for all students on the course. Students will only be able to see their own grades and feedback. The advantage of storing the grades within Moodle is that information can be easily shared between all teachers on the online course. Traditionally, if a course manager wanted to know how students were progressing, they would need to contact the course teacher(s) to gather this information. Now, they can log in to Moodle and view the live data (as long as they have teacher or manager rights to the course). There are also benefits to students, as they will see all their progress in one place and can start to manage their own learning by reviewing their progress to date, as shown in the following example student view:

This is Bayley W's grade report. Bayley can see each assessment on the left-hand side with his grade next to it. By default, the student grades report also shows the range of grades possible for the assessment (for example, the highest and lowest scores possible), but this can be switched off by the teacher in the Grades course settings. It also shows the equivalent percentage, as well as the written feedback given by the teacher.

Activities that work with the gradebook

There are a number of Moodle activities that can be graded and, therefore, work with the gradebook. The main ones are the following:

Quiz

Assignments: Four different core assignment types can be used to meet a range of needs within courses:

Advanced uploading of files
Online text
Upload a single file
Offline activity

(The offline assignment is particularly useful for practical qualifications or presentations where the assessment is not submitted and is assessed offline by the teacher. The offline activity allows the detail of the assessment to be provided to students in Moodle, and the grade and feedback to be stored in the gradebook, even though no work has been electronically submitted.)

Encouraging the use of the gradebook

The offline activity is often a good way to start using the gradebook to record progress, as the assessment can take place in the normal way, but the grades can be recorded centrally to benefit teachers and students.
Once confident with using the gradebook, teachers can then review their assessment processes to use other assignment types.

Forum
Lesson
SCORM package
Workshop
Glossary

It is also possible to manually set up a "graded item" within the gradebook that is not linked to an activity, but allows a grade to be recorded.

Key features of the gradebook

The gradebook primarily shows the grade or score for each graded activity within the online course. This grade can be shown in a number of ways:

Numeric grade: A numerical grade between 1 and 100. This is already set up and ready to use within all Moodle courses.

Scale: A customized grading profile that can be letters, words, statements, or numbers (such as Pass, Merit, and Distinction).

Letter grade: A grading profile that can be linked to percentages (such as 100 percent = A).

Organizing grades

With lots of activities that use grades within a course, the gradebook can show a lot of data on one page. Categories can be created to group activities, and the gradebook view can be customized for each user to see all or some categories on the screen. Think about a course that has 15 units, with three assessments in each unit. The gradebook will have 45 columns of grades, which is a lot of data! We can organize this information into categories to make it easier to use.

Setting up a BizTalk Server Environment

Packt
09 Apr 2012
18 min read
Gathering requirements by asking the right questions

Although this is not an exact recipe, asking questions to obtain requirements for your BizTalk environment is important. Having a clear view and understanding of the requirements enables you to deploy a BizTalk environment that meets the expectations of the customer. What are the right questions, you may ask yourself? Well, there is quite a large area in general you need to cover with questions. These questions will be around the following topics:

Functional BizTalk workload(s)
Non-functional requirements (high availability, scalability, and so on)
Licensing (software)
Hardware
Virtualization
Development, Test, Acceptance, and Production (DTAP) environment
Tracking/tracing
Hosting
Security

Getting ready

Organize the sessions and/or workshop(s) to discuss the BizTalk architecture (environment), functionality, and non-functional requirements, where you do a series of interviews with the appropriate stakeholders. This way you will be able to retrieve the necessary requirements and information for a BizTalk environment. You will need to focus on the business first and IT later. You will notice that each business will have a different set of requirements for the integration of data and processes. Some of these are listed as follows:

The business is able to access information from anywhere, at any time
Have the proper information to present to the proper people
Have the necessary information available when needed
Manage knowledge efficiently and be able to share it with the business
Change the information when needed
Automate the business processes that are error-prone
Automate business processes to reduce the processing time of orders, invoices, and so on

Regarding the business requirements, BizTalk will have certain workloads, and with the business you determine whether you want BizTalk to aid in automating processes, exchanging information with partners, maintaining business rules, providing visibility of physical events, and/or integrating with different systems. One important factor to reckon with when bringing BizTalk into an organization is the risk associated with transitioning to its platform. This risk can be of a technical, operational, political, and financial nature. BizTalk solutions have to operate correctly, meet the business requirements, be accepted by stakeholders within the organization, and should not be too expensive.

With IT, you focus more on the technical side of the BizTalk environment, such as, "What messages in size, format, and encoding are sent to the BizTalk system, or what does it need to output?" You should consider security when information going to or coming from trading partners is confidential; encryption and decryption of data can come into play. Questions such as, "What automated processes need to interact with internal and external systems?" or "How are you going to monitor messages that are going in and out?" matter as well. Support needs to be set up properly to keep BizTalk and its solutions healthy. Solutions need to be developed and tested, preferably using different environments such as test and acceptance. For that, you will need a deployment process agreed with IT. These are factors to reckon with, and they need to be addressed when interviewing or talking to IT stakeholders within the organization.

How to do it…

Categorize your stakeholders into two categories: business and IT. Create a communication plan and a list of questions related to the areas mentioned earlier.
With the list of questions, you can assign each question to the person you think can answer it. This way you ask the right questions to the right people. The following table shows a sample of roles belonging to business and/or IT. It could be that you identify more roles depending on your situation:

Business: CEO, CIO, Security Officer, Business Analyst, Enterprise Architect, and Solution Architect
IT: IT Manager, Enterprise Architect, Solution Architect, System/Application Architect, System Analyst, Developer, System Engineer, and DBA

Having established which roles belong to business, IT, or both, you will then need a list of questions assigned to the appropriate roles. You can find an example list of questions associated with particular roles in the following table:

Will BizTalk integrate with systems in the enterprise? Which consumers and host systems will it integrate with?
  Ask: Enterprise Architect, Solution Architect
What are the applicable workloads?
  Ask: Enterprise Architect
Is BizTalk going to be strategic for integration with internal/external systems?
  Ask: CEO, CIO, Enterprise Architect, and Business Analyst
Number of messages a day/hour
  Ask: Enterprise Architect
What are the candidate processes to automate with BizTalk?
  Ask: Business Analyst, Solution Architect
What communication protocols are required?
  Ask: Enterprise Architect, Solution Architect
Choice of Microsoft platform: Operating System, SQL Server Database
  Ask: Enterprise Architect, Security Officer, Solution Architect, System Engineer, and DBA
Encryption algorithm for data
  Ask: Enterprise Architect, Security Officer, Solution Architect, and System Engineer
Is Secure Socket Layer required for communication?
  Ask: Enterprise Architect, Security Officer, Solution Architect, and System Engineer
What kind of certificate store is there?
  Ask: Enterprise Architect, Security Officer, Solution Architect, and System Engineer
Is support for BizTalk going to be outsourced?
  Ask: CEO, IT Manager

There's more…

The best approach to gathering the requirements is to view it as a project or a part of a project. You can use a methodology such as PRINCE2.

PRINCE2

Projects in Controlled Environments (PRINCE) is a project management method. It covers the management, control, and organization of a project. PRINCE2 is the second major release of it. More information is available at http://www.prince2.com/.

Microsoft BizTalk Server website

The Microsoft BizTalk Server website provides a lot of information. The Product Information section in particular provides detailed information on system requirements, the roadmap, and the FAQs. The latter sections provide details on pricing, licensing, and so on. Go to http://www.microsoft.com/biztalk/en/us/default.aspx.

Analyzing requirements and creating a design

Analyzing requirements and creating a design for the BizTalk landscape is the next step before planning and installing. With the gathered requirements, you can make decisions on how to design the BizTalk environment(s). If BizTalk is being used for the first time in an enterprise environment, capacity planning and server allocation are things to focus on. Once you gather requirements and ask questions, you will have a clear picture of where the platform will be hosted and whether it needs to be scaled up or out. If everything gets placed on one big server, it will introduce a serious single point of failure. You should try to avoid this scenario.
Therefore, separating BizTalk from the SQL Server is the first thing you will do in your design, preferably each on separate hardware. Depending on the availability requirements, you will probably cluster the SQL Server. Besides that, you can choose to scale out BizTalk into a multiserver group, because of availability requirements or because the expected load cannot be handled by one BizTalk instance. You can opt for installing BizTalk and SQL separately first and then scale out after performing benchmark tests. You can scale vertically (scale up) by increasing the number of processors and the amount of memory each server uses, or you can scale horizontally (scale out) by adding more servers to your BizTalk Server configuration.

Other options you can consider during your design are as follows:

Having multiple MessageBox databases
Separating the BizTalk databases

These options are best visualized by the scale-out poster from Microsoft (http://www.microsoft.com/download/en/details.aspx?id=13103). Based on the requirements, you can consider isolating the BizTalk hosts to be able to manage BizTalk applications better and divide the load. By separating send, receive, and processing functionality into different hosts, you will benefit from better memory and thread management. If you expect a high load of large messages, or orchestrations that would consume large amounts of resources, you should isolate the send and/or receive adapters. Another consideration is to separate out a host to handle tracking and relieve the processing hosts of it.

So far we have discussed scalability and the design decisions you could consider. There are some other design considerations for a BizTalk environment, such as security, tracking, fault tolerance, load balancing, choice of license, and support for virtualization (http://support.microsoft.com/kb/842301). BizTalk security can be enhanced by deploying Secure Socket Layer (SSL), IPSec tunneling, the Internet Security and Acceleration (ISA) Server, and the certificate services included with Windows Server 2008. With the BizTalk Server, you can apply access control, implement least rights to limit access, and provide integrated security through Enterprise Single Sign-On (http://msdn.microsoft.com/en-us/library/aa577802%28v=bts.70%29.aspx). Furthermore, you can protect and secure applications and data by authenticating the sender of a message and authorizing the receiver of a message.

Tracking messages in BizTalk can be useful for seeing what messages come in and out of the system, or for auditing, troubleshooting, or archiving purposes. Tracking of messages within BizTalk is a process by which parts of a message, such as the body, properties, and metadata, are stored in a database. These parts can be viewed by running queries from the Group Hub page in the BizTalk Server Administration console. It is important that you decide, and take up into the design, what needs to be tracked based on the requirements. There are some considerations to make regarding tracking. Tracking everything is not the smart thing to do, as each time a message is touched in BizTalk, a copy is made and stored. Focus on scope by tracking only on specific ports, which is better for performance and keeps the database uncluttered. For the latter, it is important that the data purge and archive job is configured properly. As mentioned earlier, it is worth considering a dedicated host for tracking.
Fault tolerance and load balancing for BizTalk can be achieved through clustering, separating hosts as described earlier, implementing a Storage Area Network (SAN) to house the BizTalk Server databases, clustering the Enterprise Single Sign-On (SSO) Master Secret Server, and configuring the Internet Information Services (IIS) web server for isolated host instances and the BAM Portal web page to be highly available using Network Load Balancing (NLB) or other load balancing devices. The best way to implement this is to follow the steps in the Checklist: Providing High Availability with Fault Tolerance or Load Balancing document found on MSDN (http://msdn.microsoft.com/en-us/library/gg634479%28v=bts.70%29.aspx).

Another important topic regarding your BizTalk environment is cost, and based on the requirements you will choose the Branch, Standard, or Enterprise Edition. The editions differ not only in price, but also in functionality. The Standard Edition cannot support scenarios for high availability and fault tolerance, and is limited in CPUs and applications. The Branch Edition is even more limited and is designed for hub-and-spoke deployment scenarios, including Radio Frequency Identification (RFID). With any version, you probably want to consider whether or not to virtualize. With virtualization in mind, licensing can be difficult. With the Standard Edition, you need a license for each virtual processor used by the virtual OS environment, regardless of whether the number of virtual processors is less than or greater than the number of physical processors on the server. With the Enterprise Edition, if you license all physical CPUs on the server, you can run any number of instances in the physical or virtual OS environment. With both of these, a virtual processor is assumed to have the same number of cores as the physical processor. Using fewer than the number of cores available in the physical processor still counts as a full virtual processor (http://www.microsoft.com/biztalk/en/us/editions.aspx).

Last, but not least, you need to consider how to support your BizTalk environment. It is worth considering System Center Operations Manager to monitor your BizTalk environment, using the management packs for the SQL Server, Windows Server, and BizTalk Server 2010. The management pack for BizTalk Server 2010 provides two views, one for the enterprise IT administrator and one for the BizTalk Server administrator. The former will be monitoring the state and health of the various enterprise deployments, the machines hosting the SQL Server databases, the machines hosting the Enterprise SSO service, the host instance machines, IIS, and network services, and is interested in the overall health of the "physical deployment" of a BizTalk Server setup. The BizTalk Server administrator will be monitoring the state and health of the various BizTalk Server application artifacts, such as orchestrations, send ports, and receive locations, and is interested in monitoring and tracking the BizTalk Server's health. If necessary, he/she can carry out corrective measures to keep applications running as expected.

What you have read so far are considerations that are useful while analyzing requirements and preparing your design. You need to take a considerable amount of time for analyzing requirements to be able to create a solid design for your BizTalk environment. There is a wealth of information provided by Microsoft and in this book.
It will be worth investing the time now, as you will lose a lot of time and money if your applications do not perform or the system cripples under load while receiving or processing messages.

How to do it...

To analyze the requirements, you will need to categorize them under the topics mentioned in the Gathering requirements by asking the right questions recipe. You will then go over each requirement and decide how it can best be met. For each requirement, you will consider what the best option is and capture that in your design for the BizTalk setup. The BizTalk design will be a Word document, where you capture your design, considerations, and decisions.

How it works...

During the analysis of each requirement, you will capture your considerations and decisions in a Word document. Besides that, you will also describe the situation at the enterprise where the BizTalk environment will be deployed. You will find an example structure of a design document for a Development, Test, Acceptance, and Production (DTAP) environment as follows, where you can place all the information:

Introduction
  Purpose
Current situation
  IT landscape
Design
  Decisions
  Considerations/Issues
  Overview DTAP landscape
  Scope
  MS BizTalk and SQL Server editions
  SQL Database Server
ICT Policy
  Operating systems
  Windows Server
  Backup
  Antivirus
  Windows update
  Security Settings
Backup and Restore
  Backup procedure
  Restore procedure
Development
  Development environment
  Development server
  Developer machine
Test
  Test server
Acceptance
  SQL Server clustering
  BizTalk group
  Acceptance server
Production
  SQL Server clustering
  BizTalk group (load balancing)
  Production server
Management and security
  Groups and accounts
  SCOM
  Single Sign-On
Hosts
  In-process hosts
  Isolated hosts
  Trusted and untrusted hosts
  Hosts configuration DTAP
Resources
Appendix A: Redistributable CAB Files

Design decisions are an important part of your document. Here, you summarize all your design decisions and reference them to each corresponding chapter/section in the document where a decision is described; you also note issues around your design.

There's more...

Analyzing requirements is an important task that should not be taken lightly. Knowing architectural patterns, for instance, can help you choose the right technology and create the appropriate design. It may be that the BizTalk Server is not the right fit for the purpose. The following resources can aid you in analyzing the requirements:

Architectural Patterns: Packt has published a book called Applied Architecture Patterns on the Microsoft Platform that can aid you in analyzing the requirements by selecting the right technology.

Wiki TechNet article: Refer to the Recommendations for Installing, Sizing, Deploying, and Maintaining a BizTalk Server Solution article at http://social.technet.microsoft.com/wiki/contents/articles/666.aspx.

Microsoft BizTalk Server 2010 Operations Guide: Microsoft has created a BizTalk Server 2010 Operations Guide for anyone involved in the implementation and administration of a BizTalk solution, particularly IT professionals. You can find it online (http://msdn.microsoft.com/en-us/library/gg634499%28v=bts.70%29.aspx) or download it from http://www.microsoft.com/downloads/en/details.aspx?FamilyID=4ef9eebb-b3f4-4534-b733-3eb2cb83d867&displaylang=en.

Microsoft volume licensing brief: Licensing Microsoft Server Products in Virtual Environments is an interesting white paper from Microsoft. It describes licensing models under virtual environments for the server operating systems and server applications.
It can help you understand how to use Microsoft server products with virtualization technologies, such as Microsoft Hyper-V technology, Microsoft Virtual Server 2005 R2, or third-party virtualization solutions provided by VMware and Parallels. You can download it from http://www.microsoft.com/downloads/en/details.aspx?FamilyID=9ef7fc47-c531-40f1-a4e9-9859e593a1f1&displaylang=en.

Microsoft poster of scale-out configurations: Microsoft has published a poster (normal or interactive) that can be downloaded, describing typical scenarios and commonly used options for scaling out the BizTalk Server 2010's physical configurations. The poster clearly illustrates how to scale for achieving high availability through load balancing and fault tolerance. It also shows how to configure for high-throughput scenarios. The normal poster can be obtained from http://www.microsoft.com/downloads/en/details.aspx?FamilyID=2b70cbfc-d158-45a6-8bbd-99782d6747dc. An interactive poster created in Silverlight can be obtained from http://www.microsoft.com/downloads/en/details.aspx?FamilyID=7ef9ae69-9cc8-442a-8193-831a414dfc30.

Installing and using the BizTalk Best Practices Analyzer

The Best Practices Analyzer (BPA) examines a BizTalk Server 2010 deployment and generates a list of issues pertaining to best practice standards for BizTalk Server deployments. This tool is designed to assess the configuration of a BizTalk installation. The BPA performs configuration-level verification by gathering data from different information sources, such as Windows Management Instrumentation (WMI) classes, SQL Server databases, and registry entries, and presents a report to the user. Under the hood, it uses the data to evaluate the deployment configuration. It does not modify any system settings and is not a self-tuning tool. The tool is there to support you in achieving the most suitable configuration, and to report issues, or possible issues, that could potentially harm the BizTalk environment.

Getting ready

The latest version of the BPA tool (V1.2) can be obtained from the Microsoft download center (http://www.microsoft.com/downloads/en/details.aspx?FamilyID=93d432fe-1370-4b6d-aaa8-a0c43c30f5ab&displaylang=en) and must be installed on the BizTalk machine. As a user, you need an account that has local administrative rights, is a member of the BizTalk Server Administrators group, and is a member of the SSO Administrators group to be able to run the BPA. You may need to explicitly set some WMI permissions before you can use the BPA in a distributed environment, where the SQL Server is not installed on the same computer as the BizTalk Server. This is because when the BPA tries to connect to a remote computer running the SQL Server, WMI may not have sufficient access to determine whether the SQL Server Agent is running. This may result in incorrect BPA evaluations.

How to do it...

To run the Best Practices Analyzer, perform one of the following:

Start the BizTalk Server Best Practices Analyzer from the Start menu. Go to Start | Programs | Microsoft BizTalk Server Best Practices Analyzer.

Open Windows Explorer, navigate to the Best Practices Analyzer installation directory (by default, C:\Program Files\BizTalkBPA), and double-click on BizTalkBPA.exe.

Open a command prompt, change to the installation directory, and then enter BizTalkBPACmd.exe.

The following steps need to be performed to do the analysis:

As soon as you start the BPA, it will check for updates.
The user can decide whether or not to check for newer versions of the configuration.
If a newer version is found, you are able to download the latest updates.
The next step is to perform a scan by clicking on Start a scan.
After the scan starts, data will be gathered from the different information sources described earlier.
After the scan has been completed, the user can decide to view the report of the performed scan. You can click View a report of this Best Practices scan and the report will be generated. After generation of the report, several tabs will appear:

Critical Issues
All Issues
Non-Default Settings
Recent Changes
Baseline
Informational Items

How it works...

When the BPA is running, it gathers information and evaluates it against best practice rules from the Microsoft product group and support. A report is presented to the user, providing information on issues, non-default settings, changes, and so on. The report enables you to take action and apply the necessary changes to resolve identified issues. The BPA can be run again to verify that the environment adheres to all the necessary best practices. This shows the value of the tool when assessing the deployed BizTalk environment before it is operational. When BizTalk becomes operational, the MessageBox Viewer (MBV) has more value.

There's more...

The BPA is very useful and gives you information that helps you to tune BizTalk and to keep it healthy. There are more tools that can help in sustaining a healthy environment overall. The Microsoft SQL Server 2008 R2 BPA is a diagnostic tool that provides information about a server and a Microsoft SQL Server 2008 or Microsoft SQL Server 2008 R2 instance installed on that server. The Microsoft SQL Server 2008 R2 Best Practices Analyzer can be downloaded from http://www.microsoft.com/download/en/details.aspx?id=15289. There are a couple of analyzers provided by Microsoft that do a good job of helping you and the system engineer to put out a healthy, robust, and stable environment:

Best Practices Analyzer: http://technet.microsoft.com/en-us/library/dd759260.aspx
Microsoft Baseline Configuration Analyzer 2.0: http://www.microsoft.com/download/en/details.aspx?id=16475
Microsoft Baseline Security Analyzer 2.1.1: http://www.microsoft.com/download/en/details.aspx?id=19892
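Before scanning a distributed environment, it can also be worth verifying the WMI access issue mentioned in the Getting ready section by hand. The following command-prompt check is only a sketch: the remote server name SQLBOX and the default-instance service name SQLSERVERAGENT are assumptions, so substitute the values from your own environment:

rem Query the state of the SQL Server Agent service on the remote SQL machine.
rem Requires query rights on the remote Service Control Manager.
sc \\SQLBOX query SQLSERVERAGENT

If this reports STATE : 4 RUNNING while the BPA claims the agent is stopped, the BPA's WMI connection, rather than the service itself, is the likely culprit.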
Creating Views 3 Programmatically

Packt
21 Mar 2012
18 min read
(For more resources on Drupal, see here.)

Programming a view

Creating a view with a module is a convenient way to have a predefined view available with Drupal. As long as the module is installed and enabled, the view will be there to be used. If you have never created a module in Drupal, or even never written a line of Drupal code, you will still be able to create a simple view using this recipe.

Getting ready

Creating a module involves the creation of the following two files at a minimum:

An .info file that gives Drupal the information needed to add the module
A .module file that contains the PHP script

More complex modules will consist of more files, but those two are all we will need for now.

How to do it...

Carry out the following steps:

Create a new directory named _custom inside your contributed modules directory (so, probably sites/all/modules/_custom).
Create a subdirectory inside that directory; we will name it d7vr (Drupal 7 Views Recipes).
Open a new file with your editor and add the following lines:

; $Id:
name = Programmatic Views
description = Provides supplementary resources such as programmatic views
package = D7 Views Recipes
version = "7.x-1.0"
core = "7.x"
php = 5.2

Save the file as d7vrpv.info.
Open a new file with your editor and add the following lines (feel free to download this code from the author's web site rather than typing it, at http://theaccidentalcoder.com/content/drupal-7-views-cookbook):

<?php
/**
 * Implements hook_views_api().
 */
function d7vrpv_views_api() {
  return array(
    'api' => 2,
    'path' => drupal_get_path('module', 'd7vrpv'),
  );
}

/**
 * Implements hook_views_default_views().
 */
function d7vrpv_views_default_views() {
  return d7vrpv_list_all_nodes();
}

/**
 * Begin view
 */
function d7vrpv_list_all_nodes() {
  /*
   * View 'list_all_nodes'
   */
  $view = views_new_view();
  $view->name = 'list_all_nodes';
  $view->description = 'Provide a list of node titles, creation dates, owner and status';
  $view->tag = '';
  $view->view_php = '';
  $view->base_table = 'node';
  $view->is_cacheable = FALSE;
  $view->api_version = '3.0-alpha1';
  $view->disabled = FALSE; /* Edit this to true to make a default view disabled initially */

  /* Display: Defaults */
  $handler = $view->new_display('default', 'Defaults', 'default');
  $handler->display->display_options['title'] = 'List All Nodes';
  $handler->display->display_options['access']['type'] = 'role';
  $handler->display->display_options['access']['role'] = array(
    '3' => '3',
  );
  $handler->display->display_options['cache']['type'] = 'none';
  $handler->display->display_options['exposed_form']['type'] = 'basic';
  $handler->display->display_options['pager']['type'] = 'full';
  $handler->display->display_options['pager']['options']['items_per_page'] = '15';
  $handler->display->display_options['pager']['options']['offset'] = '0';
  $handler->display->display_options['pager']['options']['id'] = '0';
  $handler->display->display_options['style_plugin'] = 'table';
  $handler->display->display_options['style_options']['columns'] = array(
    'title' => 'title',
    'type' => 'type',
    'created' => 'created',
    'name' => 'name',
    'status' => 'status',
  );
  $handler->display->display_options['style_options']['default'] = 'created';
  $handler->display->display_options['style_options']['info'] = array(
    'title' => array(
      'sortable' => 1,
      'align' => 'views-align-left',
      'separator' => '',
    ),
    'type' => array(
      'sortable' => 1,
      'align' => 'views-align-left',
      'separator' => '',
    ),
    'created' => array(
      'sortable' => 1,
      'align' => 'views-align-left',
      'separator' => '',
    ),
    'name' => array(
      'sortable' => 1,
      'align' => 'views-align-left',
      'separator' => '',
    ),
    'status' => array(
      'sortable' => 1,
      'align' => 'views-align-left',
      'separator' => '',
    ),
  );
  $handler->display->display_options['style_options']['override'] = 1;
  $handler->display->display_options['style_options']['sticky'] = 0;
  $handler->display->display_options['style_options']['order'] = 'desc';

  /* Header: Global: Text area */
  $handler->display->display_options['header']['area']['id'] = 'area';
  $handler->display->display_options['header']['area']['table'] = 'views';
  $handler->display->display_options['header']['area']['field'] = 'area';
  $handler->display->display_options['header']['area']['empty'] = TRUE;
  $handler->display->display_options['header']['area']['content'] = '<h2>Following is a list of all non-page nodes.</h2>';
  $handler->display->display_options['header']['area']['format'] = '3';

  /* Footer: Global: Text area */
  $handler->display->display_options['footer']['area']['id'] = 'area';
  $handler->display->display_options['footer']['area']['table'] = 'views';
  $handler->display->display_options['footer']['area']['field'] = 'area';
  $handler->display->display_options['footer']['area']['empty'] = TRUE;
  $handler->display->display_options['footer']['area']['content'] = '<small>This view is brought to you courtesy of the D7 Views Recipes module</small>';
  $handler->display->display_options['footer']['area']['format'] = '3';

  /* Field: Node: Title */
  $handler->display->display_options['fields']['title']['id'] = 'title';
  $handler->display->display_options['fields']['title']['table'] = 'node';
  $handler->display->display_options['fields']['title']['field'] = 'title';
  $handler->display->display_options['fields']['title']['alter']['alter_text'] = 0;
  $handler->display->display_options['fields']['title']['alter']['make_link'] = 0;
  $handler->display->display_options['fields']['title']['alter']['trim'] = 0;
  $handler->display->display_options['fields']['title']['alter']['word_boundary'] = 1;
  $handler->display->display_options['fields']['title']['alter']['ellipsis'] = 1;
  $handler->display->display_options['fields']['title']['alter']['strip_tags'] = 0;
  $handler->display->display_options['fields']['title']['alter']['html'] = 0;
  $handler->display->display_options['fields']['title']['hide_empty'] = 0;
  $handler->display->display_options['fields']['title']['empty_zero'] = 0;
  $handler->display->display_options['fields']['title']['link_to_node'] = 0;

  /* Field: Node: Type */
  $handler->display->display_options['fields']['type']['id'] = 'type';
  $handler->display->display_options['fields']['type']['table'] = 'node';
  $handler->display->display_options['fields']['type']['field'] = 'type';
  $handler->display->display_options['fields']['type']['alter']['alter_text'] = 0;
  $handler->display->display_options['fields']['type']['alter']['make_link'] = 0;
  $handler->display->display_options['fields']['type']['alter']['trim'] = 0;
  $handler->display->display_options['fields']['type']['alter']['word_boundary'] = 1;
  $handler->display->display_options['fields']['type']['alter']['ellipsis'] = 1;
  $handler->display->display_options['fields']['type']['alter']['strip_tags'] = 0;
  $handler->display->display_options['fields']['type']['alter']['html'] = 0;
  $handler->display->display_options['fields']['type']['hide_empty'] = 0;
  $handler->display->display_options['fields']['type']['empty_zero'] = 0;
  $handler->display->display_options['fields']['type']['link_to_node'] = 0;
  $handler->display->display_options['fields']['type']['machine_name'] = 0;

  /* Field: Node: Post date */
  $handler->display->display_options['fields']['created']['id'] = 'created';
  $handler->display->display_options['fields']['created']['table'] = 'node';
  $handler->display->display_options['fields']['created']['field'] = 'created';
  $handler->display->display_options['fields']['created']['alter']['alter_text'] = 0;
  $handler->display->display_options['fields']['created']['alter']['make_link'] = 0;
  $handler->display->display_options['fields']['created']['alter']['trim'] = 0;
  $handler->display->display_options['fields']['created']['alter']['word_boundary'] = 1;
  $handler->display->display_options['fields']['created']['alter']['ellipsis'] = 1;
  $handler->display->display_options['fields']['created']['alter']['strip_tags'] = 0;
  $handler->display->display_options['fields']['created']['alter']['html'] = 0;
  $handler->display->display_options['fields']['created']['hide_empty'] = 0;
  $handler->display->display_options['fields']['created']['empty_zero'] = 0;
  $handler->display->display_options['fields']['created']['date_format'] = 'custom';
  $handler->display->display_options['fields']['created']['custom_date_format'] = 'Y-m-d';

  /* Field: User: Name */
  $handler->display->display_options['fields']['name']['id'] = 'name';
  $handler->display->display_options['fields']['name']['table'] = 'users';
  $handler->display->display_options['fields']['name']['field'] = 'name';
  $handler->display->display_options['fields']['name']['label'] = 'Author';
  $handler->display->display_options['fields']['name']['alter']['alter_text'] = 0;
  $handler->display->display_options['fields']['name']['alter']['make_link'] = 0;
  $handler->display->display_options['fields']['name']['alter']['trim'] = 0;
  $handler->display->display_options['fields']['name']['alter']['word_boundary'] = 1;
  $handler->display->display_options['fields']['name']['alter']['ellipsis'] = 1;
  $handler->display->display_options['fields']['name']['alter']['strip_tags'] = 0;
  $handler->display->display_options['fields']['name']['alter']['html'] = 0;
  $handler->display->display_options['fields']['name']['hide_empty'] = 0;
  $handler->display->display_options['fields']['name']['empty_zero'] = 0;
  $handler->display->display_options['fields']['name']['link_to_user'] = 0;
  $handler->display->display_options['fields']['name']['overwrite_anonymous'] = 0;

  /* Field: Node: Published */
  $handler->display->display_options['fields']['status']['id'] = 'status';
  $handler->display->display_options['fields']['status']['table'] = 'node';
  $handler->display->display_options['fields']['status']['field'] = 'status';
  $handler->display->display_options['fields']['status']['alter']['alter_text'] = 0;
  $handler->display->display_options['fields']['status']['alter']['make_link'] = 0;
  $handler->display->display_options['fields']['status']['alter']['trim'] = 0;
  $handler->display->display_options['fields']['status']['alter']['word_boundary'] = 1;
  $handler->display->display_options['fields']['status']['alter']['ellipsis'] = 1;
  $handler->display->display_options['fields']['status']['alter']['strip_tags'] = 0;
  $handler->display->display_options['fields']['status']['alter']['html'] = 0;
  $handler->display->display_options['fields']['status']['hide_empty'] = 0;
  $handler->display->display_options['fields']['status']['empty_zero'] = 0;
  $handler->display->display_options['fields']['status']['type'] = 'true-false';
  $handler->display->display_options['fields']['status']['not'] = 0;
  /* Sort criterion: Node: Post date */
  $handler->display->display_options['sorts']['created']['id'] = 'created';
  $handler->display->display_options['sorts']['created']['table'] = 'node';
  $handler->display->display_options['sorts']['created']['field'] = 'created';
  $handler->display->display_options['sorts']['created']['order'] = 'DESC';

  /* Filter: Node: Type */
  $handler->display->display_options['filters']['type']['id'] = 'type';
  $handler->display->display_options['filters']['type']['table'] = 'node';
  $handler->display->display_options['filters']['type']['field'] = 'type';
  $handler->display->display_options['filters']['type']['operator'] = 'not in';
  $handler->display->display_options['filters']['type']['value'] = array(
    'page' => 'page',
  );

  /* Display: Page */
  $handler = $view->new_display('page', 'Page', 'page_1');
  $handler->display->display_options['path'] = 'list-all-nodes';

  $views[$view->name] = $view;
  return $views;
}
?>

Save the file as d7vrpv.module.
Navigate to the modules admin page at admin/modules. Scroll down to the new module and activate it, as shown in the following screenshot:
Navigate to the Views Admin page (admin/structure/views) to verify that the view appears in the list:
Finally, navigate to list-all-nodes to see the view, as shown in the following screenshot:

How it works...

The module we have just created could have many other features associated with it, beyond simply a view, and enabling the module will make those features and the view available, while disabling it will hide those same features and view. When compiling the list of installed modules, Drupal looks first in its own modules directory for .info files, and then in the site's modules directories. As can be deduced from the fact that we put our .info file in a second-level directory of sites/all/modules and it was found there, Drupal will traverse the modules directory tree looking for .info files. We created a .info file that provided Drupal with the name and description of our module, its version, the version of Drupal it is meant to work with, and a list of files used by the module, in our case just one. We saved the .info file as d7vrpv.info (Drupal 7 Views Recipes programmatic view); the name of the directory in which the module files appear (d7vr) has no bearing on the module itself. The module file contains the code that will be executed, at least initially. Drupal does not "call" the module code in an active way. Instead, there are events that occur during Drupal's creation of a page, and modules can elect to register with Drupal to be notified of such events when they occur, so that the module can provide the code to be executed at that time; think of registering with a business to receive an e-mail in the event of a sale. Just as you are free to act or not, while the sales go on regardless, so too Drupal continues whether or not the module decides to do something when given the chance. Our module 'hooks' the views_api and views_default_views events in order to establish the fact that we do have a view to offer. The latter hook instructs the Views module which function in our code executes our view: d7vrpv_list_all_nodes(). The first thing it does is create a view object by calling a function provided by the Views module. Having instantiated the new object, we then proceed to provide the information it needs, such as the name of the view, its description, and all the information that we would have selected through the Views UI had we used it.
As we are specifying the view options in the code, we need to provide the information that is needed by each handler of the view functionality. The net effect of the code is that when we have cleared cache and enabled our module, Drupal then includes it in its list of modules to poll during events. When we navigate to the Views Admin page, an event occurs in which any module wishing to include a view in the list on the admin screen does so, including ours. One of the things our module does is define a path for the page display of our view, which is then used to establish a callback. When that path, list-all-nodes, is requested, it results in the function in our module being invoked, which in turn provides all the information necessary for our view to be rendered and presented.

There's more

The details of the code provided to each handler are outside the scope of this book, but you don't really need to understand it all in order to use it. You can enable the Views Bulk Export module (it comes with Views), create a view using the Views UI in admin, and choose to Bulk Export it. Give the exporter the name of your new module and it will create a file and populate it with nearly all the code necessary for you.

Handling a view field

As you may have noticed in the preceding code that you typed or pasted, Views makes tremendous use of handlers. What is a handler? It is simply a script that performs a special task on one or more elements. Think of a house being built. The person who comes in to tape, mud, and sand the wallboard is a handler. In Views, one type of handler is the field handler, which handles any number of things, from providing settings options in the field configuration dialog, to facilitating the field being retrieved from the database if it is not part of the primary record, to rendering the data. We will create a field handler in this recipe that will add to the display of a zip code a string showing how many other nodes have the same zip code, and we will add some formatting options to it in the next recipe.

Getting ready

A handler lives inside a module, so we will create one:

Create a directory in your contributed modules path for this module.
Open a new text file in your editor and paste the following code into it:

; $Id:
name = Zip Code Handler
description = Provides a view handler to format a field as a zip code
package = D7 Views Recipes
; Handler
files[] = d7vrzch_handler_field_zip_code.inc
files[] = d7vrzch_views.inc
version = "7.x-1.0"
core = "7.x"
php = 5.2

Save the file as d7vrzch.info.
Create another text file and paste the following code into it:

<?php
/**
 * Implements hook_field_views_data_alter().
 */
function d7vrzch_field_views_data_alter(&$data, $field) {
  if (array_key_exists('field_data_field_zip_code', $data)) {
    $data['field_data_field_zip_code']['field_zip_code']['field']['handler'] = 'd7vrzch_handler_field_zip_code';
  }
}

Save the file as d7vrzch.views.inc.
Create another text file and paste the following into it:

<?php
/**
 * Implements hook_views_api().
 */
function d7vrzch_views_api() {
  return array(
    'api' => 3,
    'path' => drupal_get_path('module', 'd7vrzch'),
  );
}

Save the file as d7vrzch.module.

How to do it...

Carry out the following steps:

Create another text file and paste the following into it:

<?php
// $Id: $

/**
 * Field handler to format a zip code.
 *
 * @ingroup views_field_handlers
 */
class d7vrzch_handler_field_zip_code extends views_handler_field_field {

  function option_definition() {
    $options = parent::option_definition();
    $options['display_zip_totals'] = array(
      'contains' => array(
        'display_zip_totals' => array('default' => FALSE),
      )
    );
    return $options;
  }

  /**
   * Provide a link to the page being visited.
   */
  function options_form(&$form, &$form_state) {
    parent::options_form($form, $form_state);
    $form['display_zip_totals'] = array(
      '#title' => t('Display Zip total'),
      '#description' => t('Appends in parentheses the number of nodes containing the same zip code'),
      '#type' => 'checkbox',
      '#default_value' => !empty($this->options['display_zip_totals']),
    );
  }

  function pre_render(&$values) {
    if (isset($this->view->build_info['summary']) || empty($values)) {
      return parent::pre_render($values);
    }
    static $entity_type_map;
    if (!empty($values)) {
      // Cache the entity type map for repeat usage.
      if (empty($entity_type_map)) {
        $entity_type_map = db_query('SELECT etid, type FROM {field_config_entity_type}')->fetchAllKeyed();
      }
      // Create an array mapping the Views values to their object types.
      $objects_by_type = array();
      foreach ($values as $key => $object) {
        // Derive the entity type. For some field types, etid might be empty.
        if (isset($object->{$this->aliases['etid']}) && isset($entity_type_map[$object->{$this->aliases['etid']}])) {
          $entity_type = $entity_type_map[$object->{$this->aliases['etid']}];
          $entity_id = $object->{$this->field_alias};
          $objects_by_type[$entity_type][$key] = $entity_id;
        }
      }
      // Load the objects.
      foreach ($objects_by_type as $entity_type => $oids) {
        $objects = entity_load($entity_type, $oids);
        foreach ($oids as $key => $entity_id) {
          $values[$key]->_field_cache[$this->field_alias] = array(
            'entity_type' => $entity_type,
            'object' => $objects[$entity_id],
          );
        }
      }
    }
  }

  function render($values) {
    $value = $values->_field_cache[$this->field_alias]['object']->{$this->definition['field_name']}['und'][0]['safe_value'];
    $newvalue = $value;
    if (!empty($this->options['display_zip_totals'])) {
      $result = db_query("SELECT count(*) AS recs FROM {field_data_field_zip_code} WHERE field_zip_code_value = :zip", array(':zip' => $value));
      foreach ($result as $item) {
        $newvalue .= ' (' . $item->recs . ')';
      }
    }
    return $newvalue;
  }
}

Save the file as d7vrzch_handler_field_zip_code.inc.
Navigate to admin/modules and enable the new module, which shows as the Zip Code Handler.

We will test the handler in a quick view:

Navigate to admin/structure/views.
Click on the +Add new view link, enter test as the View name, check the box for description and enter Zip code handler test; clear the Create a page checkbox, and click on the Continue & edit button.
On the Views edit page, click on the add link in the Filter Criteria pane, check the box next to Content: Type, and click on the Add and configure filter criteria button.
In the Content: Type configuration box, select Home and click on the Apply button.
Click on the add link next to Fields, check the box next to Content: Zip code, and click on the Add and configure fields button.
Check the box at the bottom of the Content: Zip code configuration box titled Display Zip total and click on the Apply button.
Click on the Save button and see the result of our custom handler in the Live preview:

How it works...

The Views field handler is simply a set of functions that provide support for populating and formatting a field for Views, much in the way a printer driver does for the operating system.
We created a module in which our handler resides, and whenever that field is requested within a view, our handler will be invoked. We also added a display option to the configuration options for our field which, when selected, takes each zip code value to be displayed, determines how many nodes have the same zip code, and appends the parenthesized total to the output. The three functions, two in the views.inc file and one in the module file, are very important. Their result is that our custom handler file will be used for field_zip_code instead of the default handler used for entity text fields. In the next recipe, we will add zip code formatting options to our custom handler.
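The database call inside render() is worth a second look, because it relies on Drupal 7's placeholder syntax, which keeps the query safe from SQL injection. The following minimal sketch isolates that pattern outside the handler; the hardcoded zip value 90210 is purely illustrative, and fetchField() is used instead of the foreach over the result set as an equivalent way to read a single aggregate value:

<?php
// Hypothetical standalone illustration of the parameterized query pattern
// used in render(); in the handler the zip value comes from the field cache.
$zip = '90210';
$count = db_query(
  "SELECT COUNT(*) FROM {field_data_field_zip_code} WHERE field_zip_code_value = :zip",
  array(':zip' => $zip)
)->fetchField();
// Prints something like "90210 (7)".
print $zip . ' (' . $count . ')';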

Customizing Look and Feel of UAG

Packt
22 Feb 2012
8 min read
(For more resources on Microsoft Forefront UAG, see here.)

Honey, I wouldn't change a thing!

We'll save the flattery for our spouses, and start by examining some key areas of interest: what you might want, and be able, to change in a UAG implementation. Typically, the end user interface is comprised of the following:

The Endpoint Components Installation page
The Endpoint Detection page
The Login page
The Portal Frame
The Portal page
The Credentials Management page
The Error pages

There is also a Web Monitor, but it is typically only used by the administrator, so we won't delve into that. The UAG management console itself and the SSL-VPN/SSTP client-component user interface are also visual, but they are compiled code, so there's not much that can be done there. The elements of these pages that you might want to adjust are the graphics, layout, and text strings. Altering a piece of HTML or editing a GIF in Photoshop to make it look different may sound trivial, but there's actually more to it than that, and the supportability of your changes should definitely be questioned on every count. You wouldn't want your changes to disappear upon the next update to UAG, would you? Nor would you want the page to suddenly become all crooked because someone decided that he wants the RDP icon to have an animation from the Smurfs.

The UI pages

Anyone familiar with UAG will know of its folder structure and the many files that make up the code and logic that is applied throughout. For those less acquainted, however, we'll start with the two most important folders you need to know—InternalSite and PortalHomePage. InternalSite contains pages that are displayed to the user as part of the login and logout process, as well as various error pages. PortalHomePage contains the files that are a part of the portal itself, shown to the user after logging in. The portal layout comes in three different flavors, depending on the client that is accessing it. The most common one is the Regular portal, which happens to be the more polished version of the three, shown to all computers. The second is the Premium portal, which is a scaled-down version designed for phones that have advanced graphic capabilities, such as Windows Mobile phones. The third is the Limited portal, which is a text-based version of the portal, shown to phones that have limited or no graphic capabilities, such as the Nokia S60 and N95 handsets. Regardless of the type, the majority of devices connecting to UAG will present a user-agent string in their request, and it is this string that determines the type of layout that UAG will use to render its pages and content. UAG takes advantage of this by allowing the administrator to choose between the various formats that are made available, on a per application basis. The results are pretty cool, and being able to cater for most known platforms and form factors provides users with the best possible experience. The following screenshot illustrates an application that is enabled for the Premium portal, and how the portal and login pages would look on both a premium device and on a limited device:

Customizing the login and admin pages

The login and admin pages themselves are simple ASP pages, which contain a lot of code, as well as some text and visual elements.
The main files in InternalSite that may be of interest to you are the following:

Login.asp
LogoffMsg.asp
InstallAndDetect.asp
Validate.asp
PostValidate.asp
InternalError.asp

In addition, UAG keeps another version of some of the preceding files for ADFS, OTP, and OWA under similarly named folders. This means that if you have enabled the OWA theme on your portal, and you wish to customize it, you should work with the files under the /InternalSite/OWA folder. Of course, there are many other files that partake in the flow of each process, but the fact is there is little need to touch either the above or the others, as most of the appearance is controlled by a CSS template and text strings stored elsewhere. Certain requirements may even involve making significant changes to the layout of the pages, and leave you with no other option but to edit the core ASP files themselves, but be careful, as this introduces risk and is not technically supported. It's likely that these pages will change with future updates to UAG, and that may cause a conflict with the older code that is in your files. The result of mixing old and new code is unpredictable, to say the least. The general appearance of the various admin pages is controlled by the file /InternalSite/CSS/template.css. This file contains about 80 different style elements, referencing some of the 50 or so images displayed in the portal pages, such as the gradient background, the footer, and command buttons, to name a few. The images themselves are stored in /InternalSite/Images. Both these folders have an OWA folder, which contains the CSS and images for the OWA theme. When editing the CSS, most of the style names will make sense, but if you are not sure, then why not copy the relevant ASP file and the CSS to your computer, so you can take a closer look with a visual editor, to better understand the structure. If you are doing this, be careful not to make any changes that may alter the code in a damaging way, as this is easily done and can waste a lot of valuable time. A very useful piece of advice for checking tweaked code is to consider the use of Internet Explorer's integrated developer tool. In case you haven't noticed, it's a simple press of F12 on the keyboard and you'll find everything you need to get debugging. IE 9 and higher versions even pack a nifty trace module that allows you to perform low-level inspection of client-server interaction, without the need for additional third-party tools. We don't intend to devote this book to CSS, but one useful CSS declaration to be familiar with is display: none;, which can be used to hide any element it's applied to. For example, if you add this to the .button element, it will hide the Login button completely. A common task is altering the part of the page where you see the Application and Network Access Portal text displayed. The text string itself can be edited using the master language files, which we will discuss shortly. The background of that part of the page, however, is built with the files headertopl.gif, headertopm.gif, and headertopr.gif. The original page design is classic HTML—it places headertopl on the left, headertopr on the right, and repeats headertopm in between to fill the space. If you need to change it, you could simply design a similar layout and put the replacement image files in /InternalSite/Images/CustomUpdate.
Alternatively, you might choose to customize the logo only by copying the /InternalSite/Samples/logo.inc file into the /InternalSite/Inc/CustomUpdate folder, as this is where the HTML code that pertains to that area is located. Another thing that's worth noting is that if you create a custom CSS file, it takes effect immediately, and there's no need to do an activation—well, at least for the purposes of testing anyway. The same applies for image file changes too, but as a general rule you should always remember to activate when finished, as any new configurations or files will need to be pushed into the TMG storage. Arrays are no exception to this rule either, and you should know that custom files are only propagated to array members during an activation, so in this scenario, you do need to activate after each change. During development, you may copy the custom files to each member node manually to save time between activations, or better still, simply stop NLB on all array members so that all client traffic is directed to the one you are working on. An equally important point is that when you test changes to the code, the browser's cache or IIS itself may still retain files from the previous test or config, so if changes you've made do not appear first time around, then start by clearing your browser's cache and even reset IIS, before assuming you messed up the code.

Customizing the portal

As we said earlier, the pages that make up a portal and its various flavors are under the PortalHomePage folder. These are all ASP.NET files (.ASPX), and the scope for making any alterations here is very limited. However, the appearance is mostly controlled via the file /InternalSite/PortalHomePage/Standard.Master, which contains many visual parameters that you can change. For example, the DIV with ID content has a section pertaining to the side bar application list. You might customize the midTopSideBarCell width setting to make the bar wider or thinner. You can even hide it completely by adding style="display: none;" to the contentLeftSideBarCell table cell. As always, make sure you copy the master file to CustomUpdate, and do not touch the original file; as with the CSS files, any changes you make take effect immediately. Additional things that you can do with the portal include removing or adding buttons to the portal toolbar. For example, you might add a button to point to a help page that describes your applications, or a procedure to contact your internal technical support in case of a problem with the site.
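To make the workflow concrete, here is a small, hypothetical stylesheet fragment built the way this article describes: copy /InternalSite/CSS/template.css into the CustomUpdate folder and edit only what you need. Only the .button selector and the display: none; trick come from the text above; the file location follows the CustomUpdate convention, and the rest of the copied file would remain unchanged:

/* Hypothetical /InternalSite/CSS/CustomUpdate/template.css override.
   The rest of the copied template.css is omitted here for brevity. */
.button {
    display: none; /* hides the Login button completely, as described above */
}

Remember that a change like this shows up immediately when you refresh the login page, but on an array you still need an activation (or a manual copy to each node) before every member serves the new file.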

Understanding Services in JBoss

Packt
07 Feb 2012
23 min read
(For more resources on JBoss, see here.)

Preparing JBoss Developer Studio

The examples in this article are based on a standard ESB application template that can be found under the Chapter3 directory within the sample downloads. We will modify this template application as we proceed through this chapter.

Time for action – opening the Chapter3 app

Follow these steps:

Click on the File menu and select Import.
Now choose Existing Projects into workspace and select the folder where the book samples have been extracted:
Then click on Finish.
Now have a look at the jboss-esb.xml file. You can see that it has a single service and action as defined in the following snippet:

<jbossesb parameterReloadSecs="5"
    xsi:schemaLocation="http://anonsvn.labs.jboss.com/labs/jbossesb/trunk/product/etc/schemas/xml/jbossesb-1.3.0.xsd
                        http://anonsvn.jboss.org/repos/labs/labs/jbossesb/trunk/product/etc/schemas/xml/jbossesb-1.3.0.xsd">
  <providers>
    <jms-provider connection-factory="ConnectionFactory" name="JBossMQ">
      <jms-bus busid="chapter3GwChannel">
        <jms-message-filter dest-name="queue/chapter3_Request_gw" dest-type="QUEUE"/>
      </jms-bus>
      <jms-bus busid="chapter3EsbChannel">
        <jms-message-filter dest-name="queue/chapter3_Request_esb" dest-type="QUEUE"/>
      </jms-bus>
    </jms-provider>
  </providers>
  <services>
    <service category="Chapter3Sample" description="A template for Chapter3" name="Chapter3Service">
      <listeners>
        <jms-listener busidref="chapter3GwChannel" is-gateway="true" name="Chapter3GwListener"/>
        <jms-listener busidref="chapter3EsbChannel" name="Chapter3Listener"/>
      </listeners>
      <actions mep="OneWay">
        <action class="org.jboss.soa.esb.actions.SystemPrintln" name="PrintBefore">
          <property name="message"/>
          <property name="printfull" value="true"/>
        </action>
      </actions>
    </service>
  </services>
</jbossesb>

Examining the structure of ESB messages

A service is an implementation of a piece of business logic which exposes a well defined service contract to consumers. The service will provide an abstract service contract which describes the functionality exposed by the service and will exhibit the following characteristics:

Self contained: The implementation of the service is independent from the context of the consumers; any implementation changes will have no impact.
Loosely coupled: The consumer invokes the service indirectly, passing messages through the bus to the service endpoint. There is no direct connection between the service and its consumers.
Reusable: The service can be invoked by any consumer requiring the functionality exposed by the service. The provider is tied to neither a particular application nor process.

Services which adhere to these criteria will be capable of evolving and scaling without affecting any consumers of that service. The consumer no longer cares which implementation of the service is being invoked, nor where it is located, provided that the exposed service contract remains compatible.

Examining the message

The structure of the message, and how it can be manipulated, plays an important part in any ESB application as a result of the message driven nature of the communication between service providers and consumers. The message is the envelope which contains all of the information relevant to a specific invocation of a service. All messages within JBoss ESB are implementations of the
org.jboss.soa.esb.message.Message interface, the major aspects of which are:

Header: Information concerning the identity, routing addresses, and correlation of the message
Context: Contextual information pertaining to the delivery of each message, such as the security context
Body: The payload and additional details as required by the service contract
Attachment: Additional information that may be referenced from within the payload
Properties: Information relating to the specific delivery of a message, usually transport specific (for example the original JMS queue name)

Time for action – printing the message structure

Let us execute the Chapter3 sample application that was opened up at the beginning of this chapter. Follow these steps:

In JBoss Developer Studio, click Run and select Run As and Run on Server. Alternatively you can press Alt + Shift + X, followed by R.
You can see the server runtime has been pre-selected. Choosing the Always use this server when running this project check box will always use this runtime and this dialog will not appear again. Click Next.
A window with the project pre-configured to run on this server is shown. Ensure that we have only our project Chapter3 selected to the right hand side. Click Finish.
The server runtime will be started up (if not already started) and the ESB file will be deployed to the server runtime.
Select the src folder and expand it till the SendJMSMessage.java file is displayed in the tree. Now click Run, select Run As and Java Application.
The entire ESB message contents will be printed in the console as follows:

INFO [STDOUT] Message structure:
INFO [STDOUT] [ message: [ JBOSS_XML ]
header: [ To: JMSEpr [ PortReference <
  <wsa:Address jms:localhost:1099#queue/chapter3_Request_esb/>,
  <wsa:ReferenceProperties jbossesb:java.naming.factory.initial : org.jnp.interfaces.NamingContextFactory/>,
  <wsa:ReferenceProperties jbossesb:java.naming.provider.url : localhost:1099/>,
  <wsa:ReferenceProperties jbossesb:java.naming.factory.url.pkgs : org.jnp.interfaces/>,
  <wsa:ReferenceProperties jbossesb:destination-type : queue/>,
  <wsa:ReferenceProperties jbossesb:destination-name : queue/chapter3_Request_esb/>,
  <wsa:ReferenceProperties jbossesb:specification-version : 1.1/>,
  <wsa:ReferenceProperties jbossesb:connection-factory : ConnectionFactory/>,
  <wsa:ReferenceProperties jbossesb:persistent : true/>,
  <wsa:ReferenceProperties jbossesb:acknowledge-mode : AUTO_ACKNOWLEDGE/>,
  <wsa:ReferenceProperties jbossesb:transacted : false/>,
  <wsa:ReferenceProperties jbossesb:type : urn:jboss/esb/epr/type/jms/> > ]
MessageID: e694a6a5-6a30-45bf-8f6d-f48363219ccf
RelatesTo: jms:correlationID#e694a6a5-6a30-45bf-8f6d-f48363219ccf ]
context: {}
body: [ objects: {org.jboss.soa.esb.message.defaultEntry=Chapter 3 says Hello!} ]
fault: [ ]
attachments: [ Named:{}, Unnamed:[] ]
properties: [ {org.jboss.soa.esb.message.transport.type=Deferred serialized value: 12d16a5, org.jboss.soa.esb.message.byte.size=2757, javax.jms.message.redelivered=false, org.jboss.soa.esb.gateway.original.queue.name=Deferred serialized value: 129bebb, org.jboss.soa.esb.message.source=Deferred serialized value: 1a8e795} ] ]

What just happened?

You have just created a Chapter3.esb file and deployed it to the ESB Runtime on the JBoss Application Server 5.1. You executed a gateway client that posted a string to the Bus. The server converted this message to an ESB message and the complete structure was printed out. Take a moment to examine the output and understand the various parts of the ESB message.
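Although the client source ships with the project, it helps to see the shape of such a gateway client in one place. The following is a minimal sketch of what a client like SendJMSMessage typically does, using only the JNDI and queue names from the jboss-esb.xml above; the shipped sample may differ in detail:

// Hypothetical sketch of a gateway client; names taken from jboss-esb.xml.
import java.util.Properties;
import javax.jms.ObjectMessage;
import javax.jms.Queue;
import javax.jms.QueueConnection;
import javax.jms.QueueConnectionFactory;
import javax.jms.QueueSender;
import javax.jms.QueueSession;
import javax.naming.Context;
import javax.naming.InitialContext;

public class SendJMSMessage {
    public static void main(String[] args) throws Exception {
        // JNDI settings matching the JMSEpr printed in the console output.
        Properties env = new Properties();
        env.put(Context.INITIAL_CONTEXT_FACTORY, "org.jnp.interfaces.NamingContextFactory");
        env.put(Context.PROVIDER_URL, "localhost:1099");
        InitialContext ctx = new InitialContext(env);

        QueueConnectionFactory qcf = (QueueConnectionFactory) ctx.lookup("ConnectionFactory");
        Queue queue = (Queue) ctx.lookup("queue/chapter3_Request_gw");

        QueueConnection conn = qcf.createQueueConnection();
        QueueSession session = conn.createQueueSession(false, QueueSession.AUTO_ACKNOWLEDGE);
        QueueSender sender = session.createSender(queue);

        // The gateway listener wraps this plain payload in a full ESB Message.
        ObjectMessage msg = session.createObjectMessage("Chapter 3 says Hello!");
        sender.send(msg);

        sender.close();
        session.close();
        conn.close();
    }
}

Note that the client posts to the gateway queue (chapter3_Request_gw), not to the ESB-aware queue; the gateway listener is what converts the raw JMS message into the ESB message structure shown above.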
Have a go hero – deploying applications

Step 1 through step 4 describe how to start the server and deploy our application from within JBoss Developer Studio. For the rest of this chapter, and throughout this book, you will be repeating these steps and will just be asked to deploy the application.

Message implementations

JBoss ESB provides two different implementations of the message interface, one which marshalls data into an XML format and a second which uses Java serialization to create a binary representation of the message. Both of these implementations will only handle Java serializable objects by default; however, it is possible to extend the XML implementation to support additional object types. Message implementations are created indirectly through the org.jboss.soa.esb.message.format.MessageFactory class. In general, any use of serializable objects can lead to a brittle application, one that is more tightly coupled between the message producer and consumer. The message implementations within JBoss ESB mitigate this by supporting a 'Just In Time' approach when accessing the data. Care must still be taken with what data is placed within the message; however, serialization/marshalling of these objects will only occur as and when required. Extending the ESB to provide alternative message implementations, and extending the current XML implementation to support additional types, is outside the scope of this book.

The body

This is the section of the message which contains the main payload information for the message, adhering to the contract exposed by the service. The payload should only consist of the data required by the service contract and should not rely on any service implementation details, as this will prevent the evolution or replacement of the service implementation at a future date. The types of data contained within the body are restricted only by the requirements imposed by the message implementation, in other words the implementation must be able to serialize or marshall the contents as part of service invocation. The body consists of:

Main payload: accessed using the following methods:

public Object get();
public void add(final Object value);

Named objects: accessed using the following methods:

public Object get(final String name);
public void add(final String name, final Object value);

Time for action – examining the main payload

Let us create another action class that simply prints the message body. We will add this action to the sample application that was opened up at the beginning of this chapter.

Right click on the src folder and choose New and select Class:
Enter the Name as "MyAction", enter the Package as "org.jboss.soa.esb.samples.chapter3", and select the Superclass as "org.jboss.soa.esb.actions.AbstractActionLifecycle":
Click Finish.
Add the following imports and the following body contents to the code:

import org.jboss.soa.esb.helpers.ConfigTree;
import org.jboss.soa.esb.message.Message;

protected ConfigTree _config;

public MyAction(ConfigTree config) {
    _config = config;
}

public Message displayMessage(Message message) throws Exception {
    System.out.println("&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&");
    System.out.println("Body: " + message.getBody().get());
    System.out.println("&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&");
    return message;
}

Click Save.
Open the jboss-esb.xml file in Tree mode and expand it till Actions is displayed in the tree.
Select Actions, click Add | Custom Action:
Enter the Name as "BodyPrinter" and choose the "MyAction" class and "displayMessage" process method:
Click Save and the application will be deployed. If the server was stopped then deploy it using the Run menu and select Run As | Run on Server:
Once the application is deployed on the server, run SendJMSMessage.java by clicking Run | Run As | Java Application.

The following can be seen displayed in the console output:

12:19:32,562 INFO [STDOUT] &&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&
12:19:32,562 INFO [STDOUT] Body: Chapter 3 says Hello!
12:19:32,562 INFO [STDOUT] &&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&

What just happened?

You have just created your own action class that used the Message API to get the main payload of the message and printed it to the console.

Have a go hero – additional body contents

Now add another miscellaneous SystemPrintln action after our BodyPrinter. Name it PrintAfter and make sure printfull is set to true. Modify the MyAction class and add additional named content using the getBody().add(name, object) method and see what gets printed on the console. Here is the actions section of the config file:

<actions mep="OneWay">
  <action class="org.jboss.soa.esb.actions.SystemPrintln" name="PrintBefore">
    <property name="message"/>
    <property name="printfull" value="true"/>
  </action>
  <action class="org.jboss.soa.esb.samples.chapter3.MyAction" name="BodyPrinter" process="displayMessage"/>
  <action class="org.jboss.soa.esb.actions.SystemPrintln" name="PrintAfter">
    <property name="message"/>
    <property name="printfull" value="true"/>
  </action>
</actions>

The following is the listing of the MyAction class's modified displayMessage method:

public Message displayMessage(Message message) throws Exception {
    System.out.println("&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&");
    System.out.println("Body: " + message.getBody().get());
    message.getBody().add("Something", "Unknown");
    System.out.println("&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&");
    return message;
}

The header

The message header contains the information relating to the identity, routing, and the correlation of messages. This information is based on, and shares much in common with, the concepts defined in the W3C WS-Addressing specification. It is important to point out that many of these aspects are normally initialized automatically by other parts of the codebase; a solid understanding of these concepts will allow the developer to create composite services using more advanced topologies.

Routing information

Every time a message is sent within the ESB it contains information which describes who sent the message, which service it should be routed to, and where any replies/faults should be sent once processing is complete. The creation of this information is the responsibility of the invoker and, once delivered, any changes made to this information, from within the target service, will be ignored by that service. The information in the header takes the form of Endpoint References (EPRs) containing a representation of the service address, often transport specific, and extensions which can contain relevant contextual information for that endpoint. This information should be treated as opaque by all parties except the party which was responsible for creating it. There are four EPRs included in the header; they are as follows:

To: This is the only mandatory EPR, representing the address of the service to which the message is being sent.
This will be initialized by ServiceInvoker with the details of the service chosen to receive the message.
From: This EPR represents the originator of the message, if present, and may be used as the address for responses if there is neither an explicit ReplyTo nor FaultTo set on the message.
ReplyTo: This EPR represents the endpoint to which all responses will be sent, if present, and may be used as the address for faults if there is no explicit FaultTo set on the message. This will normally be initialized by ServiceInvoker if a synchronous response is expected by the service consumer.
FaultTo: This EPR represents the endpoint to which all faults will be sent, if present.

When thinking about the routing information it is important to view these details from the perspective of the service consumer, as the EPRs represent the wishes of the consumer and must be adhered to. If the service implementation involves more advanced topologies, like chaining and continuations, which we will discuss later in the chapter, then care must be taken to preserve these EPRs when messages are propagated to subsequent services.

Message identity and correlation

There are two parts of the header which are related to the identity of the message and its correlation with a preceding message. These are as follows:

MessageID: A unique reference which can be used to identify the message as it progresses through the ESB. The reference is represented by a Uniform Resource Name (URN), a specialized Uniform Resource Identifier (URI) which will represent the identity of the message within a specific namespace. The creator of the message may choose to associate it with an identity which is specific to the application context within which it is being used, in which case the URN should refer to a namespace which is also application context specific. If no MessageID has been associated with the message then the ESB will assign a unique identifier when it is first sent to a service.
RelatesTo: When sending a reply, this represents the unique reference of the message representing the request. This may be used to correlate the response message with the original request.

Service action

The action header is an optional, service-specific URN that may be used to further refine the processing of the message by a service provider or service consumer. The URN should refer to an application-specific namespace. There are no restrictions on how this header is to be used by the application including, if considered appropriate, ignoring its contents.

Time for action – examining the header

Let us modify MyAction to display some of the header information that we need:

Open MyAction and edit the displayMessage method as follows:

public Message displayMessage(Message message) throws Exception {
    System.out.println("&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&");
    System.out.println("From: " + message.getHeader().getCall().getFrom());
    System.out.println("To: " + message.getHeader().getCall().getTo());
    System.out.println("MessageID: " + message.getHeader().getCall().getMessageID());
    System.out.println("&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&");
    return message;
}

Remove the PrintBefore and PrintAfter actions if they exist. Make sure that we have only the BodyPrinter action:
Click on Save. If the server was still running (and a small red button appears in the console window), then you might notice the application gets redeployed by default. If this did not happen, then deploy the application using the Run menu and select Run As | Run on Server.
The following output will be displayed in the console:

INFO [EsbDeployment] Stopping 'Chapter3.esb'
INFO [EsbDeployment] Destroying 'Chapter3.esb'
WARN [ServiceMessageCounterLifecycleResource] Calling cleanup on existing service message counters for identity ID-7
INFO [QueueService] Queue[/queue/chapter3_Request_gw] stopped
INFO [QueueService] Queue[/queue/chapter3_Request_esb] stopped
INFO [QueueService] Queue[/queue/chapter3_Request_esb] started, fullSize=200000, pageSize=2000, downCacheSize=2000
INFO [QueueService] Queue[/queue/chapter3_Request_gw] started, fullSize=200000, pageSize=2000, downCacheSize=2000
INFO [EsbDeployment] Starting ESB Deployment 'Chapter3.esb'

Run SendJMSMessage.java by clicking Run | Run As | Java Application. The following messages will be printed in the console:

INFO [STDOUT] &&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&
INFO [STDOUT] From: null
INFO [STDOUT] To: JMSEpr [ PortReference <
  <wsa:Address jms:localhost:1099#queue/chapter3_Request_esb/>,
  <wsa:ReferenceProperties jbossesb:java.naming.factory.initial : org.jnp.interfaces.NamingContextFactory/>,
  <wsa:ReferenceProperties jbossesb:java.naming.provider.url : localhost:1099/>,
  <wsa:ReferenceProperties jbossesb:java.naming.factory.url.pkgs : org.jnp.interfaces/>,
  <wsa:ReferenceProperties jbossesb:destination-type : queue/>,
  <wsa:ReferenceProperties jbossesb:destination-name : queue/chapter3_Request_esb/>,
  <wsa:ReferenceProperties jbossesb:specification-version : 1.1/>,
  <wsa:ReferenceProperties jbossesb:connection-factory : ConnectionFactory/>,
  <wsa:ReferenceProperties jbossesb:persistent : true/>,
  <wsa:ReferenceProperties jbossesb:acknowledge-mode : AUTO_ACKNOWLEDGE/>,
  <wsa:ReferenceProperties jbossesb:transacted : false/>,
  <wsa:ReferenceProperties jbossesb:type : urn:jboss/esb/epr/type/jms/> > ]
INFO [STDOUT] MessageID: 46e57744-d0ac-4f01-ad78-b1f15a3335d1
INFO [STDOUT] &&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&

What just happened?

We examined some of the header contents through the API. We printed the From, To, and the MessageID from within our MyAction class.

Have a go hero – additional header contents

Now modify the MyAction class to print the Action, ReplyTo, RelatesTo, and FaultTo contents of the header to the console. Here is the listing of the modified MyAction class's method:

public Message displayMessage(Message message) throws Exception {
    System.out.println("&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&");
    System.out.println("From: " + message.getHeader().getCall().getFrom());
    System.out.println("To: " + message.getHeader().getCall().getTo());
    System.out.println("MessageID: " + message.getHeader().getCall().getMessageID());
    System.out.println("Action: " + message.getHeader().getCall().getAction());
    System.out.println("FaultTo: " + message.getHeader().getCall().getFaultTo());
    System.out.println("RelatesTo: " + message.getHeader().getCall().getRelatesTo());
    System.out.println("ReplyTo: " + message.getHeader().getCall().getReplyTo());
    System.out.println("&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&");
    return message;
}

The context

The message context is used to transport the active contextual information when the message is sent to the target service. This may include information such as the current security context, transactional information, or even context specific to the application. This contextual information is not considered to be part of the service contract and is assumed to change between successive message deliveries.
Where the message context really becomes important is when a service pipeline is invoked through an InVM transport, as this can allow the message to be passed by reference. When the transport passes the message to the target service it will create a copy of the message header and message context, allowing each to be updated in subsequent actions without affecting the invoked service.

Have a go hero – printing message context

Modify the MyAction class to print the context of the ESB message; obtain the context through the getContext() method. You will notice that the context is empty for our sample application, as we currently have no security or transactional context attached to the message.

Message validation

The message format within JBoss ESB allows the consumer and producer to use any payload that suits the purpose of the service contract. No constraints are placed on this payload other than the fact that it must be possible to marshall the payload contents so that the messages can be transported between the consumer and producer. While this ability is useful for creating composite services, it can be a disadvantage when you need to design services that have an abstract contract, hide the details of the implementation, are loosely coupled, and can easily be reused. In order to encourage the loose coupling of services it is often advantageous to choose a payload that does not dictate implementation, for example XML. JBoss ESB provides support for enforcing the structure of XML payloads for request and response messages, through the XML schema language as defined through the W3C. An XML Schema Document (XSD) is an abstract, structural definition which can be used to formally describe an XML message and guarantee that a specific payload matches that definition through a process called validation. Enabling validation on a service is simply a matter of providing the schema associated with the request and/or response messages and specifying the validate attribute, as follows:

<actions inXsd="/request.xsd" outXsd="/response.xsd" validate="true">
  ...
</actions>

This will force the service pipeline to validate the request and response messages against the XSD files, if they are specified, with the request validation occurring before the first service action is executed and the response validation occurring immediately before the response message is sent to the consumer. If validation of the request or response message does fail then a MessageValidationException fault will be raised and sent to the consumer using the normal fault processing as defined in the MEPs and responses section. This exception can also be seen by enabling DEBUG logging through the mechanism supported by the server.

Have a go hero – enabling validation

Add a request.xsd, a response.xsd, or both to your actions in the sample application provided. Enable validation and test the output.

Configuring through the ConfigTree

JBoss ESB handles the majority of its configuration through a hierarchical structure similar to the W3C DOM, namely org.jboss.soa.esb.helpers.ConfigTree. Each node within the structure contains a name, a reference to the parent node, a set of named attributes, and references to all child nodes. This structure is used, directly and indirectly, within the implementation of the service pipeline and action processors, and will be required if you are intending to create your own action processors.
Configuring through the ConfigTree
JBoss ESB handles the majority of its configuration through a hierarchical structure similar to the W3C DOM, namely org.jboss.soa.esb.helpers.ConfigTree. Each node within the structure contains a name, a reference to the parent node, a set of named attributes, and references to all child nodes. This structure is used, directly and indirectly, within the implementation of the service pipeline and action processors, and will be required if you are intending to create your own action processors. The only exception to this is when using an annotated action class, in which case the configuring of the action will be handled by the framework instead of programmatically.

Configuring properties in the jboss-esb.xml file
The ConfigTree instance passed to an action processor is a hierarchical representation of the properties as defined within the action definition of the jboss-esb.xml file. Each property defined within an action may be interpreted as a name/value pair or as hierarchical content to be parsed by the action. For example:

<action ....>
    <!-- name/value property -->
    <property name="propertyName" value="propertyValue"/>
    <!-- Hierarchical property -->
    <property name="propertyName">
        <hierarchicalProperty attr="value">
            <inner name="myName" random="randomValue"/>
        </hierarchicalProperty>
    </property>
</action>

This will result in the corresponding ConfigTree structure being passed to the action.

Traversing the ConfigTree hierarchy
Traversing the hierarchy is simply a matter of using the following methods to obtain access to the parent or child nodes:

public ConfigTree getParent();
public ConfigTree[] getAllChildren();
public ConfigTree[] getChildren(String name);
public ConfigTree getFirstChild(String name);

Accessing attributes
Attributes are usually accessed by querying the current ConfigTree instance for the value associated with the required name, using the following methods:

public String getAttribute(String name);
public String getAttribute(String name, String defaultValue);
public long getLongAttribute(String name, long defaultValue);
public float getFloatAttribute(String name, float defaultValue);
public boolean getBooleanAttribute(String name, boolean defaultValue);
public String getRequiredAttribute(String name) throws ConfigurationException;

It is also possible to obtain the number of attributes, the names of all the attributes, or the set of key/value pairs using the following methods:

public int attributeCount();
public Set<String> getAttributeNames();
public List<KeyValuePair> attributesAsList();

Time for action – examining configuration properties
Let us add some configuration properties to our MyAction class. We will make the symbol (&) and the number of times it is printed configurable properties. Follow these steps:

1. Add two members to the MyAction class:

public String SYMBOL = "&";
public int COUNT = 48;

2. Modify the constructor as follows:

_config = config;
String symbol = _config.getAttribute("symbol");
if (symbol != null) {
    SYMBOL = symbol;
}
String count = _config.getAttribute("count");
if (count != null) {
    COUNT = Integer.parseInt(count);
}

3. Add a printLine() method:

private void printLine() {
    StringBuffer line = new StringBuffer(COUNT);
    for (int i = 0; i < COUNT; i++) {
        line.append(SYMBOL);
    }
    System.out.println(line);
}

4. Modify the printMessage() method as shown in the following snippet:

printLine();
System.out.println("Body: " + message.getBody().get());
printLine();
return message;

5. Edit the jboss-esb.xml file and select the action, BodyPrinter. Add two properties, symbol as * and count as 50 (a configuration sketch follows this walkthrough).
6. Click on Save or press Ctrl + S.
7. Deploy the application using the Run menu and select Run As | Run on Server.
8. Run SendJMSMessage.java by clicking Run | Run As | Java Application. The following output is printed:

INFO [STDOUT] **************************************************
INFO [STDOUT] Body: Chapter 3 says Hello!
INFO [STDOUT] **************************************************
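As promised in step 5, here is a sketch of how the BodyPrinter action configuration might look once the two properties are added. The class attribute is a placeholder assumption, as the actual action class name depends on your project:

<!-- Sketch only: the class attribute below is a placeholder for your action class -->
<action name="BodyPrinter" class="com.packt.chapter3.MyAction">
    <property name="symbol" value="*"/>
    <property name="count" value="50"/>
</action>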
What just happened?
You just added two properties to the MyAction class. You also retrieved these properties from the ConfigTree and used them.

Have a go hero – hierarchical properties
Experiment with the other API methods. Write a hierarchicalProperty and see how it can be retrieved.
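One way to approach this exercise is to recursively dump whatever tree the deployer builds from your action definition. The following sketch relies only on the traversal and attribute methods listed earlier; add it to MyAction and call dump(_config, "") from the constructor:

// Sketch only: prints every node name, attribute, and child in the tree
private void dump(ConfigTree node, String indent) {
    System.out.println(indent + node.getName());
    for (String attribute : node.getAttributeNames()) {
        System.out.println(indent + "  @" + attribute + " = " + node.getAttribute(attribute));
    }
    for (ConfigTree child : node.getAllChildren()) {
        dump(child, indent + "    ");
    }
}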
Common API in Liferay Portal Systems Development

Packt
01 Feb 2012
11 min read
(For more resources on Liferay, see here.)

User management
The portal defines user management with a set of entities, such as User, Contact, Address, EmailAddress, Phone, Website, and Ticket, in /portal/service.xml. In the following section, we're going to address the User entity and its associations and relationships.

Models and services
The following figure depicts these entities and their relationships. The entity User has a one-to-one association with the entity Contact, which may have many contacts as children. The entity Contact, in turn, has a one-to-one association with the entity Account, which may have many accounts as children. The entity Contact can have a many-to-many association with the entities Address, EmailAddress, Phone, Website, and Ticket. Logically, the entities Address, EmailAddress, Phone, Website, and Ticket may have a many-to-many association with other entities, such as Group, Organization, and UserGroup, as shown in the following image:

Services
The following list shows the user-related service interfaces, extensions, utilities, wrappers, and their main methods:

- UserService, UserLocalService (extends PersistedModelLocalService); utility/wrapper: User(Local)ServiceUtil, User(Local)ServiceWrapper; main methods: add*, authenticate*, check*, decrypt*, delete*, get*, has*, search, unset*, update*, and so on.
- ContactService, ContactLocalService (extends PersistedModelLocalService); utility/wrapper: Contact(Local)ServiceUtil, Contact(Local)ServiceWrapper; main methods: add*, create*, delete*, get*, update*, dynamicQuery, and so on.
- AccountService, AccountLocalService; utility/wrapper: Account(Local)ServiceUtil, Account(Local)ServiceWrapper; main methods: add*, create*, delete*, get*, update*, dynamicQuery, and so on.
- AddressService, AddressLocalService; utility/wrapper: Address(Local)ServiceUtil, Address(Local)ServiceWrapper; main methods: add*, create*, delete*, get*, update*, dynamicQuery, and so on.
- EmailAddressService, EmailAddressLocalService (extends PersistedModelLocalService); utility/wrapper: EmailAddress(Local)ServiceUtil, EmailAddress(Local)ServiceWrapper; main methods: add*, create*, delete*, get*, update*, dynamicQuery, and so on.
- PhoneService, PhoneLocalService (extends PersistedModelLocalService); utility/wrapper: Phone(Local)ServiceUtil, Phone(Local)ServiceWrapper; main methods: add*, create*, delete*, get*, update*, dynamicQuery, and so on.
- WebsiteService, WebsiteLocalService (extends PersistedModelLocalService); utility/wrapper: Website(Local)ServiceUtil, Website(Local)ServiceWrapper; main methods: add*, create*, delete*, get*, update*, dynamicQuery, and so on.
- TicketLocalService (extends PersistedModelLocalService); utility/wrapper: TicketLocalServiceUtil, TicketLocalServiceWrapper; main methods: add*, create*, delete*, get*, update*, dynamicQuery, and so on.

Relationships
The portal also defines many-to-many relationships between User and Group, User and Organization, User and Team, and User and UserGroup, as shown in the following code:

<column name="groups" type="Collection" entity="Group" mapping-table="Users_Groups" />
<column name="userGroups" type="Collection" entity="UserGroup" mapping-table="Users_UserGroups" />

You will be able to find similar definitions in /portal/service.xml.

Sample portal service portlet
The portal provides a sample portal service plugin called sample-portal-service-portlet (refer to the plugin details at /portlets/sample-portal-service-portlet). The following is the code snippet:

List<Organization> organizations = OrganizationServiceUtil.getUserOrganizations(themeDisplay.getUserId());
// add your logic

The previous code shows how to consume Liferay services through regular Java calls. These services include com.liferay.portal.service.OrganizationServiceUtil, and the model involved is com.liferay.portal.model.Organization. Similarly, you can use other services, for example, com.liferay.portal.service.UserServiceUtil and com.liferay.portal.service.GroupServiceUtil, and models, for example, com.liferay.portal.model.User and com.liferay.portal.model.Group. Of course, you can find other services and models: services are located in the com.liferay.portal.service package in the /portal-service/src folder, and models are located in the com.liferay.portal.model package in the same folder.

What's the difference between *LocalServiceUtil and *ServiceUtil? The sign * represents models, for example, Organization, User, Group, and so on. Generally speaking, *Service is the remote service interface that defines the service methods available to remote code. *ServiceUtil has an additional permission check, since its methods might be called as a remote service; it is a facade class that combines the service locator with the actual call to the service *Service. *LocalService, on the other hand, is the internal service interface, and *LocalServiceUtil is a facade class that combines the service locator with the actual call to the service *LocalService. *Service has a PermissionChecker in each method, and *LocalService usually doesn't.
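To make the distinction concrete, here is a sketch contrasting the two kinds of calls. It assumes a Liferay 6.x-style API and a valid userId; the class name ServiceCallSketch is ours, not part of the portal:

import com.liferay.portal.model.User;
import com.liferay.portal.service.UserLocalServiceUtil;
import com.liferay.portal.service.UserServiceUtil;

public class ServiceCallSketch {
    public static void printScreenNames(long userId) throws Exception {
        // Internal call: no permission check is applied
        User internal = UserLocalServiceUtil.getUser(userId);
        // Remote-style call: subject to the current permission checker
        User remote = UserServiceUtil.getUserById(userId);
        System.out.println(internal.getScreenName() + " / " + remote.getScreenName());
    }
}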
Authorization
Authorization is the process of finding out if a user, once identified, is permitted to have access to a resource. The portal implements authorization by assigning permissions via roles and checking permissions; this is called Role-Based Access Control (RBAC). The following figure depicts an overview of authorization. A user can be a member of a Group, UserGroup, Organization, or Team. A user, or a group of users (a Group, UserGroup, or Organization), can be a member of a Role. The entity Role can have many ResourcePermission entities associated with it, while the entity ResourcePermission may contain many ResourceAction entities, as shown in the following diagram:

The following list shows the entities Role, ResourcePermission, and ResourceAction:

- Role (extends RoleModel, PersistedModel); wrapper/SOAP: RoleWrapper, RoleSoap; main methods: clone, compareTo, get*, set*, toCacheModel, toEscapedModel, and so on.
- ResourceAction (extends ResourceActionModel, PersistedModel); wrapper/SOAP: ResourceActionWrapper, ResourceActionSoap; main methods: clone, compareTo, get*, set*, toCacheModel, toEscapedModel, and so on.
- ResourcePermission (extends ResourcePermissionModel, PersistedModel); wrapper/SOAP: ResourcePermissionWrapper, ResourcePermissionSoap; main methods: clone, compareTo, get*, set*, toCacheModel, toEscapedModel, and so on.

In addition, the portal specifies role constants in the class RoleConstants.
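To see RBAC from the calling side, the following sketch performs a programmatic permission check. The resource name (com.example.Book) and action ID (VIEW) are illustrative assumptions; PermissionChecker and ThemeDisplay are part of the portal API:

import com.liferay.portal.security.permission.PermissionChecker;
import com.liferay.portal.theme.ThemeDisplay;

public class PermissionSketch {
    public static boolean canView(ThemeDisplay themeDisplay, long groupId, long primKey) {
        PermissionChecker checker = themeDisplay.getPermissionChecker();
        // Returns true if the current user holds the VIEW action on the resource
        return checker.hasPermission(groupId, "com.example.Book", primKey, "VIEW");
    }
}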
The entity ResourceAction is specified with the columns name, actionId, and bitwiseValue, as follows:

<column name="name" type="String" />
<column name="actionId" type="String" />
<column name="bitwiseValue" type="long" />

The entity ResourcePermission is specified with the columns name, scope, primKey, roleId, ownerId, and actionIds, as follows:

<column name="name" type="String" />
<column name="scope" type="int" />
<column name="primKey" type="String" />
<column name="roleId" type="long" />
<column name="ownerId" type="long" />
<column name="actionIds" type="long" />

In addition, the portal specifies resource permission constants in the class ResourcePermissionConstants.

Password policy
The portal implements enterprise password policies and user account lockout using the entities PasswordPolicy and PasswordPolicyRel, as shown in the following list:

- PasswordPolicy (extends PasswordPolicyModel, PersistedModel); wrapper/SOAP: PasswordPolicyWrapper, PasswordPolicySoap; columns: name, description, minAge, minAlphanumeric, minLength, minLowerCase, minNumbers, minSymbols, minUpperCase, lockout, maxFailure, lockoutDuration, and so on.
- PasswordPolicyRel (extends PasswordPolicyRelModel, PersistedModel); wrapper/SOAP: PasswordPolicyRelWrapper, PasswordPolicyRelSoap; columns: passwordPolicyId, classNameId, and classPK. Provides the ability to associate the entity PasswordPolicy with other entities.

Passwords toolkit
The portal has defined the following properties related to the passwords toolkit in portal.properties:

passwords.toolkit=com.liferay.portal.security.pwd.PasswordPolicyToolkit
passwords.passwordpolicytoolkit.generator=dynamic
passwords.passwordpolicytoolkit.static=iheartliferay

The property passwords.toolkit defines a class name that extends com.liferay.portal.security.pwd.BasicToolkit, which is called to generate and validate passwords. If you choose to use com.liferay.portal.security.pwd.PasswordPolicyToolkit as your password toolkit, you can choose either static or dynamic password generation. Static generation is set through the property passwords.passwordpolicytoolkit.static, and dynamic generation uses the class com.liferay.util.PwdGenerator to generate the password. If you are using LDAP password syntax checking, you will also have to use the static generator, so that you can guarantee that passwords obey its rules. The password-related helper classes are addressed in detail in the following list:

- DigesterImpl (interface: Digester; utility: DigesterUtil; property: passwords.digest.encoding); main methods: digest, digestBase64, digestHex, digestRaw, and so on.
- Base64; main methods: decode, encode, fromURLSafe, objectToString, stringToObject, toURLSafe, and so on.
- PwdEncryptor (property: passwords.encryption.algorithm); main method: encrypt; default types: MD2, MD5, NONE, SHA, SHA-256, SHA-384, SSHA, UFC-CRYPT, and so on.

Authentication
Authentication is the process of determining whether someone or something is, in fact, who or what it is declared to be. The portal defines the class called PwdAuthenticator for authentication, as shown in the following code:

public static boolean authenticate(
        String login, String clearTextPassword,
        String currentEncryptedPassword) {
    String encryptedPassword = PwdEncryptor.encrypt(
        clearTextPassword, currentEncryptedPassword);
    if (currentEncryptedPassword.equals(encryptedPassword)) {
        return true;
    }
    return false;
}

As you can see, it first encrypts the clear text password into the variable encryptedPassword. It then tests whether the variable currentEncryptedPassword has the same value as the variable encryptedPassword.
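A quick sketch of this check in isolation; the login and password values are illustrative, and the package locations assume a Liferay 6.x layout:

import com.liferay.portal.security.pwd.PwdAuthenticator;
import com.liferay.portal.security.pwd.PwdEncryptor;

public class AuthenticateSketch {
    public static void main(String[] args) {
        // Illustrative values: encrypt a known password, then authenticate against it
        String currentEncryptedPassword = PwdEncryptor.encrypt("test");
        boolean authenticated = PwdAuthenticator.authenticate(
            "joe.bloggs", "test", currentEncryptedPassword);
        System.out.println("Authenticated: " + authenticated);
    }
}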
The classes UserLocalServiceImpl (the method authenticate) and EditUserAction (the method updateUser) call the class PwdAuthenticator for authentication.

A Message Authentication Code (MAC) is a short piece of information used to authenticate a message. The portal supports MAC through the following properties:

auth.mac.allow=false
auth.mac.algorithm=MD5
auth.mac.shared.key=

To use authentication with MAC, simply post to the portal login URL, passing the MAC in the password field. Make sure that the MAC gets URL encoded, since it might contain characters not allowed in a URL. Authentication with MAC also requires that you set the following property in system-ext.properties:

com.liferay.util.servlet.SessionParameters=false

This encrypts session parameters, so that browsers can't remember them.

Authentication pipeline
The portal provides the authentication pipeline framework for authentication, as shown in the following code:

auth.pipeline.pre=com.liferay.portal.security.auth.LDAPAuth
auth.pipeline.post=
auth.pipeline.enable.liferay.check=true

As you can see, the property auth.pipeline.enable.liferay.check is set to true to enable password checking by the internal portal authentication. If it is set to false, password checking is essentially delegated to the authenticators configured in the auth.pipeline.pre and auth.pipeline.post settings. The interface com.liferay.portal.security.auth.Authenticator defines the constant values that should be used as return codes by the classes implementing the interface: if authentication is successful, it returns SUCCESS; if the user exists but the passwords don't match, it returns FAILURE; and if the user doesn't exist in the system, it returns DNE. The available authenticator is com.liferay.portal.security.auth.LDAPAuth. The passwords toolkit classes discussed earlier are summarized in the following list:

- PasswordPolicyToolkit (extends BasicToolkit); involved properties: passwords.passwordpolicytoolkit.charset.lowercase, passwords.passwordpolicytoolkit.charset.numbers, passwords.passwordpolicytoolkit.charset.symbols, passwords.passwordpolicytoolkit.charset.uppercase, passwords.passwordpolicytoolkit.generator, passwords.passwordpolicytoolkit.static; main methods: generate, validate.
- RegExpToolkit (extends BasicToolkit); involved properties: passwords.regexptoolkit.pattern, passwords.regexptoolkit.charset, passwords.regexptoolkit.length; main methods: generate, validate.
- PwdToolkitUtil; involved property: passwords.toolkit; main methods: generate, validate.
- PwdGenerator; main methods: getPassword, getPinNumber.

Authentication token
The portal provides the interface com.liferay.portal.security.auth.AuthToken for the authentication token, as follows:

auth.token.check.enabled=true
auth.token.impl=com.liferay.portal.security.auth.SessionAuthToken

As shown in the previous code, the property auth.token.check.enabled is set to true to enable authentication token security checks. The checks can be disabled for specific actions via the property auth.token.ignore.actions, or for specific portlets via the init parameter check-auth-token in portlet.xml. The property auth.token.impl is set to the authentication token class; this class must implement the interface AuthToken. The class SessionAuthToken is used to prevent CSRF (Cross-Site Request Forgery) attacks.
The following list shows the LDAPAuth authenticator:

- LDAPAuth (implements Authenticator); involved properties: ldap.auth.method, ldap.referral, ldap.auth.password.encryption.algorithm, ldap.base.dn, ldap.error.user.lockout, ldap.error.password.expired, ldap.import.user.password.enabled, ldap.base.provider.url, auth.pipeline.enable.liferay.check, ldap.auth.required; main methods: authenticateByEmailAddress, authenticateByScreenName, authenticateByUserId.

JAAS
Java Authentication and Authorization Service (JAAS) is a Java security framework for user-centric security that augments Java's code-based security. The portal specifies a set of properties for JAAS, as follows:

portal.jaas.enable=false
portal.jaas.auth.type=userId
portal.impersonation.enable=true

The property portal.jaas.enable is set to false to disable JAAS security checks. Disabling JAAS speeds up login. Note that JAAS must be disabled if administrators are to impersonate other users. JAAS can authenticate users based on their e-mail address, screen name, user ID, or login, as determined by the property company.security.auth.type. By default, the class com.liferay.portal.security.jaas.PortalLoginModule loads the correct JAAS login module, based on what application server or servlet container the portal is deployed on. You can set a JAAS implementation class to override this behavior. The following list shows the interface AuthToken and its implementations:

- AuthTokenImpl (implements AuthToken; property: auth.token.impl); main methods: check, getToken.
- AuthTokenWrapper (implements AuthToken); main methods: check, getToken.
- AuthTokenUtil; main methods: check, getToken.
- SessionAuthToken (implements AuthToken; property: auth.token.shared.secret); main methods: check, getToken.

As you have noticed, the classes com.liferay.portal.kernel.security.jaas.PortalLoginModule and com.liferay.portal.security.jaas.PortalLoginModule implement the interface LoginModule, configured by the property portal.jaas.impl. As shown in the following list, the portal provides the JAAS principal and login module classes used across different application servers and servlet containers:

- ProtectedPrincipal (implements Principal; package: com.liferay.portal.kernel.servlet); main methods: getName, equals, hashCode, toString.
- PortalPrincipal (extends ProtectedPrincipal; package: com.liferay.portal.kernel.security.jaas); main method: PortalPrincipal.
- PortalRole (extends PortalPrincipal; package: com.liferay.portal.kernel.security.jaas); main method: PortalRole.
- PortalGroup (extends PortalPrincipal, implements java.security.acl.Group; package: com.liferay.portal.kernel.security.jaas); main methods: addMember, isMember, members, removeMember.
- PortalLoginModule (implements javax.security.auth.spi.LoginModule; packages: com.liferay.portal.kernel.security.jaas, com.liferay.portal.security.jaas); main methods: abort, commit, initialize, login, logout.
Ext JS 4: Working with the Grid Component

Packt
11 Jan 2012
10 min read
(For more resources on JavaScript, see here.)

Grid panel
The grid panel is one of the most-used components when developing an application, and Ext JS 4 provides some great improvements related to this component. The Ext JS 4 grid panel renders different HTML than the Ext JS 3 grid did; Sencha calls this new feature intelligent rendering. Ext JS 3 used to create the whole structure, supporting all the features. But what if someone just wanted to display a simple grid? All the extra structure being rendered would just be wasted, because no one was using it. Ext JS 4 now renders only the features the grid uses, minimizing the markup and boosting performance.

Before we examine the grid's new features and enhancements, let's take a look at how to implement a simple grid in Ext JS 4:

Ext.create('Ext.grid.Panel', {
    store: Ext.create('Ext.data.ArrayStore', {
        fields: [
            {name: 'book'},
            {name: 'author'}
        ],
        data: [['Ext JS 4: First Look', 'Loiane Groner']]
    }),
    columns: [{
        text: 'Book',
        flex: 1,
        sortable: false,
        dataIndex: 'book'
    }, {
        text: 'Author',
        width: 100,
        sortable: true,
        dataIndex: 'author'
    }],
    height: 80,
    width: 300,
    title: 'Simple Grid',
    renderTo: Ext.getBody()
});

As you can see in the preceding code, the two main parts of the grid are the store and the columns declarations. Note as well that the names of the store and model fields always have to match the column's dataIndex (if you want to display the column in the grid). So far, nothing has changed: the way we used to declare a simple grid in Ext JS 3 is the same way we do it in Ext JS 4. However, there are some changes related to plugins and the new features property. We are going to take a closer look at that in this section. Let's dive into the changes!

Columns
Ext JS 4 organizes all the column classes into a single package: the Ext.grid.column package. We will explain how to use each column type with an example.
But first, we need to declare a Model and a Store to represent and load the data:

Ext.define('Book', {
    extend: 'Ext.data.Model',
    fields: [
        {name: 'book'},
        {name: 'topic', type: 'string'},
        {name: 'version', type: 'string'},
        {name: 'released', type: 'boolean'},
        {name: 'releasedDate', type: 'date'},
        {name: 'value', type: 'number'}
    ]
});

var store = Ext.create('Ext.data.ArrayStore', {
    model: 'Book',
    data: [
        ['Ext JS 4: First Look', 'Ext JS', '4', false, null, 0],
        ['Learning Ext JS 3.2', 'Ext JS', '3.2', true, '2010/10/01', 40.49],
        ['Ext JS 3.0 Cookbook', 'Ext JS', '3', true, '2009/10/01', 44.99],
        ['Learning Ext JS', 'Ext JS', '2.x', true, '2008/11/01', 35.99]
    ]
});

Now, we need to declare a grid:

Ext.create('Ext.grid.Panel', {
    store: store,
    width: 550,
    title: 'Ext JS Books',
    renderTo: 'grid-example',
    selModel: Ext.create('Ext.selection.CheckboxModel'), //1
    columns: [
        Ext.create('Ext.grid.RowNumberer'), //2
        {
            text: 'Book', //3
            flex: 1,
            dataIndex: 'book'
        }, {
            text: 'Category', //4
            xtype: 'templatecolumn',
            width: 100,
            tpl: '{topic} {version}'
        }, {
            text: 'Already Released?', //5
            xtype: 'booleancolumn',
            width: 100,
            dataIndex: 'released',
            trueText: 'Yes',
            falseText: 'No'
        }, {
            text: 'Released Date', //6
            xtype: 'datecolumn',
            width: 100,
            dataIndex: 'releasedDate',
            format: 'm-Y'
        }, {
            text: 'Price', //7
            xtype: 'numbercolumn',
            width: 80,
            dataIndex: 'value',
            renderer: Ext.util.Format.usMoney
        }, {
            xtype: 'actioncolumn', //8
            width: 50,
            items: [{
                icon: 'images/edit.png',
                tooltip: 'Edit',
                handler: function(grid, rowIndex, colIndex) {
                    var rec = grid.getStore().getAt(rowIndex);
                    Ext.MessageBox.alert('Edit', rec.get('book'));
                }
            }, {
                icon: 'images/delete.gif',
                tooltip: 'Delete',
                handler: function(grid, rowIndex, colIndex) {
                    var rec = grid.getStore().getAt(rowIndex);
                    Ext.MessageBox.alert('Delete', rec.get('book'));
                }
            }]
        }
    ]
});

The preceding code outputs the following grid:

The first column is declared through selModel, which, in this example, is going to render a checkbox, so we can select rows from the grid. To add this column to a grid, simply declare the selModel (also known as sm in Ext JS 3) as the checkbox selection model, as highlighted in the code (comment 1 in the code).

The second column that we declared is the RowNumberer column. This column automatically adds a row number to the grid.

In the third column (with text: 'Book'), we did not specify a column type; this means the column will display the data itself as a string.

In the fourth column, we declared a column with xtype as templatecolumn. This column will display the data from the store, specified by an XTemplate, as declared in the tpl property. In this example, we are saying we want to display the topic (name of the technology) and its version.

The fifth column is declared as booleancolumn. This column displays a true or false value. But, if we do not want to display true or false in the grid, we can specify the values that we want displayed instead. In this example, we display the value as Yes (for true values) and No (for false values), as declared in the trueText and falseText properties.

The sixth column we declared is datecolumn, which is used to display dates. We can also declare the date format we want displayed. In this example, we want to display only the month and the year. The format follows the same rules as PHP date formats.

The seventh column we declared is numbercolumn. This column is used to display numbers, such as quantities, money, and so on.
If we want to display the number in a particular format, we can use one of the Ext JS renderers, or create a customized one. And the last column we declared is the actioncolumn. In this column, we can display icons that are going to execute an action, such as delete or edit. We declare the icons we want to display in the items property.

Feature support
In Ext JS 3, when we wanted to add new functionality to a grid, we used to create a plugin or extend the GridPanel class; there was no default way to do it. Ext JS 4 introduces the Ext.grid.feature.Feature class, which contains common methods and properties to create a plugin. Inside the Ext.grid.feature package, we will find seven classes: AbstractSummary, Chunking, Feature, Grouping, GroupingSummary, RowBody, and Summary. A feature is very simple to use; we need to add it inside the features declaration of the grid:

features: [{
    groupHeaderTpl: 'Publisher: {name}',
    ftype: 'groupingsummary'
}]

Let's take a look at how to use some of these native grid features.

Ext.grid.feature.Grouping
Grouping rows in Ext JS 4 has changed. Now, Grouping is a feature and can be applied to a grid through the features property. The following code displays a grid grouped by book topic:

Ext.define('Book', {
    extend: 'Ext.data.Model',
    fields: ['name', 'topic']
});

var Books = Ext.create('Ext.data.Store', {
    model: 'Book',
    groupField: 'topic',
    data: [
        {name: 'Learning Ext JS', topic: 'Ext JS'},
        {name: 'Learning Ext JS 3.2', topic: 'Ext JS'},
        {name: 'Ext JS 3.0 Cookbook', topic: 'Ext JS'},
        {name: 'Expert PHP 5 Tools', topic: 'PHP'},
        {name: 'NetBeans IDE 7 Cookbook', topic: 'Java'},
        {name: 'iReport 3.7', topic: 'Java'},
        {name: 'Python Multimedia', topic: 'Python'},
        {name: 'NHibernate 3.0 Cookbook', topic: '.NET'},
        {name: 'ASP.NET MVC 2 Cookbook', topic: '.NET'}
    ]
});

Ext.create('Ext.grid.Panel', {
    renderTo: Ext.getBody(),
    frame: true,
    store: Books,
    width: 350,
    height: 400,
    title: 'Books',
    features: [Ext.create('Ext.grid.feature.Grouping', {
        groupHeaderTpl: 'topic: {name} ({rows.length} Book{[values.rows.length > 1 ? "s" : ""]})'
    })],
    columns: [{
        text: 'Name',
        flex: 1,
        dataIndex: 'name'
    }, {
        text: 'Topic',
        flex: 1,
        dataIndex: 'topic'
    }]
});

In the groupHeaderTpl attribute, we declared a template to be displayed in the grouping row. We are going to display one of the following customized strings, depending on the number of books belonging to the topic:

topic: {name} ({rows.length} Book)
topic: {name} ({rows.length} Books)

The string comprises the topic name ({name}) and the count of the books for the topic ({rows.length}). In Ext JS 3, we still had to declare a grouping field in the store; but, instead of a Grouping feature, we used to declare a GroupingView, as follows:

view: new Ext.grid.GroupingView({
    forceFit: true,
    groupTextTpl: '{text} ({[values.rs.length]} {[values.rs.length > 1 ? "Books" : "Book"]})'
})

If we execute the grouping grid, we will get the following output:

Ext.grid.feature.GroupingSummary
The GroupingSummary feature also groups rows with a field in common, but it also adds a summary row at the bottom of each group.
Let's change the preceding example to use the GroupingSummary feature:

Ext.create('Ext.grid.Panel', {
    renderTo: Ext.getBody(),
    frame: true,
    store: Books,
    width: 350,
    height: 400,
    title: 'Books',
    features: [{
        groupHeaderTpl: 'Topic: {name}',
        ftype: 'groupingsummary'
    }],
    columns: [{
        text: 'Name',
        flex: 1,
        dataIndex: 'name',
        summaryType: 'count',
        summaryRenderer: function(value) {
            return Ext.String.format('{0} book{1}', value, value !== 1 ? 's' : '');
        }
    }, {
        text: 'Topic',
        flex: 1,
        dataIndex: 'topic'
    }]
});

We highlighted two pieces in the preceding code. The first is the feature declaration: in the previous example (Grouping), we created the feature using an Ext.create declaration; but if we do not want to explicitly create the feature every time we declare it, we can use the ftype property, which is groupingsummary in this example. The second is the summary configuration that we added to the grid's name column. We declared a summaryType property and set its value to count. Declaring the summaryType as count means we want to display the number of books in that particular topic/category; it is going to count how many records we have for a particular category in the grid. It is very similar to the count function of PL/SQL. Other summary types we can declare are sum, min, max, and average (these are self-explanatory).

In this example, we want to customize the text that will be displayed in the summary, so we use the summaryRenderer function. It receives a value argument, which is the count of the name column, and we return a customized string that displays the count (token {0}) and the string book or books, depending on the count (if it is more than 1, we append s to the string book). Ext.String.format is a function that allows you to define a tokenized string and pass an arbitrary number of arguments to replace the tokens. Each token must be unique and must increment in the format {0}, {1}, and so on. The preceding code will output the following grid:

Ext.grid.feature.Summary
The GroupingSummary feature adds a row at the bottom of each group. The Summary feature adds a row at the bottom of the grid to display summary information. The property configuration is very similar to that of GroupingSummary, because both classes are subclasses of AbstractSummary (a class that provides common properties and methods for summary features).

Ext.create('Ext.grid.Panel', {
    renderTo: Ext.getBody(),
    frame: true,
    store: Books,
    width: 350,
    height: 300,
    title: 'Books',
    features: [{
        ftype: 'summary'
    }],
    columns: [{
        text: 'Name',
        flex: 1,
        dataIndex: 'name',
        summaryType: 'count',
        summaryRenderer: function(value) {
            return Ext.String.format('{0} book{1}', value, value !== 1 ? 's' : '');
        }
    }, {
        text: 'Topic',
        flex: 1,
        dataIndex: 'topic'
    }]
});

The only difference from the GroupingSummary feature is the feature declaration itself; the summaryType and summaryRenderer properties work in a similar way. The preceding code will output the following grid:

Ext.grid.feature.RowBody
The RowBody feature adds a new tr > td > div at the bottom of the row that we can use to display additional information.
Here is how to use it:

Ext.create('Ext.grid.Panel', {
    renderTo: Ext.getBody(),
    frame: true,
    store: Books,
    width: 350,
    height: 300,
    title: 'Books',
    features: [{
        ftype: 'rowbody',
        getAdditionalData: function(data, idx, record, orig) {
            return {
                rowBody: Ext.String.format('->topic: {0}', data.topic)
            };
        }
    }, {
        ftype: 'rowwrap'
    }],
    columns: [{
        text: 'Name',
        flex: 1,
        dataIndex: 'name'
    }]
});

In the preceding code, we are not only displaying the name of the book; we are also using the rowbody feature to display the topic of the book. The first step is to declare the rowbody feature. One very important thing to note is that the rowbody will initially be hidden unless you override the getAdditionalData method. If we execute the preceding code, we will get the following output: