How-To Tutorials - Web Development

phpList 2 E-mail Campaign Manager: Personalizing E-mail Body

Packt
26 Jul 2011
5 min read
Enhancing messages using built-in placeholders

Even for simple functionality's sake, we generally want our phpList messages to contain at least a small amount of customization. For example, the default footer, which phpList attaches to messages, contains three placeholders, customizing each message for each recipient:

```
--
If you do not want to receive any more newsletters, [UNSUBSCRIBE]
To update your preferences and to unsubscribe, visit [PREFERENCES]
Forward a Message to Someone [FORWARD]
```

The placeholders [UNSUBSCRIBE], [PREFERENCES], and [FORWARD] will be replaced with unique URLs per subscriber, allowing any subscriber to immediately unsubscribe, adjust their preferences, or forward a message to a friend simply by clicking on a link.

There's a complete list of available placeholders documented on phpList's wiki page at http://docs.phplist.com/Placeholders. Here are some of the most frequently used ones:

- [CONTENT]: Use this while creating standard message templates. You can design a styled template which is re-used for every mailing, and the [CONTENT] placeholder will be replaced with the unique content for that particular message.
- [EMAIL]: This is replaced by the user's e-mail address. It can be very helpful in the footer of an e-mail, so that subscribers know which e-mail address they used to sign up for the list subscription.
- [LISTS]: The lists to which a member is subscribed. Having this information attached to system confirmation messages makes it easy for subscribers to manage their own subscriptions. Note that this placeholder is only applicable in system messages, not in general list messages.
- [UNSUBSCRIBEURL]: Almost certainly, you'll want to include some sort of "click here to unsubscribe" link in your messages, either as a pre-requisite for sending bulk mail (perhaps imposed by your ISP) or to avoid users inadvertently reporting you for spamming.
- [UNSUBSCRIBE]: This placeholder generates the entire hyperlink for you (including the link text, "unsubscribe"), whereas the [UNSUBSCRIBEURL] placeholder simply generates the URL. You would use the URL only if you wanted to link an image to the unsubscription page, as opposed to a simple text link, or if you wanted the HTML link text to be something other than "unsubscribe".
- [USERTRACK]: This inserts an invisible tracker image into HTML messages, helping you to measure the effectiveness of your newsletter.

You might combine several of these placeholders to add a standard signature to your messages, as follows:

```
--
You ([EMAIL]) are receiving this message because you subscribed to one
or more of our mailing lists. We only send messages to subscribers who
have requested and confirmed their subscription (double opt-in). You can
adjust your list membership at any time by clicking on [PREFERENCES] or
unsubscribe altogether by clicking on [UNSUBSCRIBE].
--
```
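The mechanics of this substitution are easy to picture outside phpList. Here is a minimal Python sketch of per-subscriber placeholder replacement; it is not phpList's actual code, and the example.com URLs and uid field are invented for illustration:

```python
# A minimal sketch of per-subscriber placeholder substitution; this is
# not phpList's implementation, and the URLs/uid below are invented.
def render_message(body: str, subscriber: dict) -> str:
    values = {
        "[EMAIL]": subscriber["email"],
        "[UNSUBSCRIBEURL]": "https://example.com/?p=unsubscribe&uid=" + subscriber["uid"],
        "[PREFERENCES]": "https://example.com/?p=preferences&uid=" + subscriber["uid"],
    }
    for placeholder, value in values.items():
        body = body.replace(placeholder, value)
    return body

print(render_message(
    "You ([EMAIL]) can unsubscribe at any time: [UNSUBSCRIBEURL]",
    {"email": "bart@example.com", "uid": "abc123"},
))
```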
Placeholders in confirmation messages

Some placeholders (such as [LISTS]) are only applicable in confirmation messages (that is, "thank you for subscribing to the following lists..."). These placeholders allow you to customize the following:

- Request to confirm: sent initially to users when they subscribe, confirming their e-mail address and subscription request
- Confirmation of subscription: sent to users to confirm that they've been successfully added to the requested lists (after they've confirmed their e-mail address)
- Confirmation of preferences update: sent to users to confirm their updates when they change their list subscriptions/preferences themselves
- Confirmation of unsubscription: sent to users after they've unsubscribed, to confirm that their e-mail address will no longer receive messages from phpList

Personalizing messages using member attributes

Apart from the built-in placeholders, you can also use any member attributes to further personalize your messages. Say you captured the following attributes from your new members:

- First Name
- Last Name
- Hometown
- Favorite Food

You could craft a personalized message as follows:

```
Dear [FIRST NAME],
Hello from your friends at the Funky Town Restaurant. We hope the
[LAST NAME] family is well in the friendly town of [HOMETOWN]. If you're
ever in the mood for a fresh [FAVORITE FOOD], please drop in - we'd be
happy to have you!
```

This would appear to different subscribers as:

```
Dear Bart,
Hello from your friends at the Funky Town Restaurant. We hope the
Simpson family is well in the friendly town of Springfield. If you're
ever in the mood for a fresh pizza, please drop in - we'd be happy to
have you!
```

Or:

```
Dear Clark,
Hello from your friends at the Funky Town Restaurant. We hope the Kent
family is well in the friendly town of Smallville. If you're ever in
the mood for a fresh Krypto-Burger, please drop in - we'd be happy to
have you!
```

If a user doesn't have an attribute for a particular placeholder, it will be replaced with a blank space. For example, if user "Mary" hadn't entered any attributes, her message would look like:

```
Dear ,
Hello from your friends at the Funky Town Restaurant. We hope the
family is well in the friendly town of . If you're ever in the mood for
a fresh , please drop in - we'd be happy to have you!
```

If the attributes on your subscription form are optional, try to structure your content in such a way that a blank placeholder substitution won't ruin the text. For example, the following text will look awkward with blank substitutions:

```
Your name is [FIRST NAME], your favorite food is [FAVORITE FOOD], and
your last name is [LAST NAME]
```

Whereas the following text would at least "degrade gracefully":

```
Your name: [FIRST NAME]
Your favorite food: [FAVORITE FOOD]
Your last name: [LAST NAME]
```
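The blank-substitution behavior is equally simple to model. Here is a hedged Python sketch (again, not phpList's code) showing attribute placeholders falling back to an empty string when the subscriber never provided a value:

```python
import re

# Sketch: attribute placeholders fall back to an empty string when the
# subscriber never provided a value, mirroring the behavior above.
def render_attributes(body: str, attrs: dict) -> str:
    return re.sub(r"\[([A-Z ]+)\]", lambda m: attrs.get(m.group(1), ""), body)

print(render_attributes("Your name: [FIRST NAME]", {}))                 # degrades gracefully
print(render_attributes("Dear [FIRST NAME],", {"FIRST NAME": "Bart"}))  # Dear Bart,
```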

Using Additional Solr Functionalities

Packt
26 Jul 2011
9 min read
Apache Solr 3.1 Cookbook: Over 100 recipes to discover new ways to work with Apache's Enterprise Search Server

Getting more documents similar to those returned in the results list

Let's imagine a situation where you have an e-commerce library shop and you want to show users books similar to the ones they found while using your application. This recipe will show you how to do that.

How to do it...

Let's assume that we have the following index structure (just add this to your schema.xml file's fields section):

```xml
<field name="id" type="string" indexed="true" stored="true" required="true" />
<field name="name" type="text" indexed="true" stored="true" termVectors="true" />
```

The test data looks like this:

```xml
<add>
 <doc>
  <field name="id">1</field>
  <field name="name">Solr Cookbook first edition</field>
 </doc>
 <doc>
  <field name="id">2</field>
  <field name="name">Solr Cookbook second edition</field>
 </doc>
 <doc>
  <field name="id">3</field>
  <field name="name">Solr by example first edition</field>
 </doc>
 <doc>
  <field name="id">4</field>
  <field name="name">My book second edition</field>
 </doc>
</add>
```

Let's assume that our hypothetical user wants to find books that have first in their names. However, we also want to show him the similar books. To do that, we send the following query:

```
http://localhost:8983/solr/select?q=name:edition&mlt=true&mlt.fl=name&mlt.mintf=1&mlt.mindf=1
```

The results returned by Solr are as follows:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<response>
 <lst name="responseHeader">
  <int name="status">0</int>
  <int name="QTime">1</int>
  <lst name="params">
   <str name="mlt.mindf">1</str>
   <str name="mlt.fl">name</str>
   <str name="q">name:edition</str>
   <str name="mlt.mintf">1</str>
   <str name="mlt">true</str>
  </lst>
 </lst>
 <result name="response" numFound="1" start="0">
  <doc>
   <str name="id">3</str>
   <str name="name">Solr by example first edition</str>
  </doc>
 </result>
 <lst name="moreLikeThis">
  <result name="3" numFound="3" start="0">
   <doc>
    <str name="id">1</str>
    <str name="name">Solr Cookbook first edition</str>
   </doc>
   <doc>
    <str name="id">2</str>
    <str name="name">Solr Cookbook second edition</str>
   </doc>
   <doc>
    <str name="id">4</str>
    <str name="name">My book second edition</str>
   </doc>
  </result>
 </lst>
</response>
```

Now let's see how it works.

How it works...

As you can see, the index structure and the data are really simple. One thing to notice is that the termVectors attribute is set to true in the name field definition. It is a nice thing to have when using the more like this component, and it should be used when possible in the fields on which we plan to use the component.

Now let's take a look at the query. As you can see, we added some additional parameters besides the standard q one. The parameter mlt=true says that we want to add the more like this component to the result processing. Next, the mlt.fl parameter specifies which fields we want to use with the more like this component; in our case, we will use the name field. The mlt.mintf parameter tells Solr to ignore terms from the source document (the ones from the original result list) with a term frequency below the given value; in our case, we don't want to include terms with a frequency lower than 1. The last parameter, mlt.mindf, tells Solr that words which appear in fewer documents than the value of the parameter should be ignored; in our case, we want to consider words that appear in at least one document.
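If you would rather script the request than paste the URL into a browser, here is a small Python sketch using only the standard library; it assumes Solr is running on localhost:8983 as in the recipe:

```python
# Sketch: issuing the same MoreLikeThis query from Python; the parameter
# names are the MLT parameters discussed above.
from urllib.parse import urlencode
from urllib.request import urlopen

params = urlencode({
    "q": "name:edition",
    "mlt": "true",
    "mlt.fl": "name",   # field used to find similar documents
    "mlt.mintf": 1,     # ignore source terms below this term frequency
    "mlt.mindf": 1,     # ignore terms appearing in fewer documents than this
})
with urlopen("http://localhost:8983/solr/select?" + params) as resp:
    print(resp.read().decode("utf-8"))
```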
Finally, let's take a look at the search results. As you can see, there is an additional section (<lst name="moreLikeThis">) that is responsible for showing us the more like this component results. For each document in the results, one more section of similar documents is added to the response. In our case, Solr added a section for the document with the unique identifier 3 (<result name="3" numFound="3" start="0">) and three similar documents were found. The value of the name attribute is the unique identifier of the document for which the similar documents were calculated.

Presenting search results in a fast and easy way

Imagine a situation where you have to show a prototype of your brilliant search algorithm made with Solr to the client. But the client doesn't want to wait another four weeks to see the potential of the algorithm; he/she wants to see it very soon. On the other hand, you don't want to show a raw XML results page. What to do then? This recipe will show you how you can use the Velocity response writer (a.k.a. Solritas) to present a prototype fast.

How to do it...

Let's assume that we have the following index structure (just add this to your schema.xml file's fields section):

```xml
<field name="id" type="string" indexed="true" stored="true" required="true" />
<field name="name" type="text" indexed="true" stored="true" />
```

The test data looks like this:

```xml
<add>
 <doc>
  <field name="id">1</field>
  <field name="name">Solr Cookbook first edition</field>
 </doc>
 <doc>
  <field name="id">2</field>
  <field name="name">Solr Cookbook second edition</field>
 </doc>
 <doc>
  <field name="id">3</field>
  <field name="name">Solr by example first edition</field>
 </doc>
 <doc>
  <field name="id">4</field>
  <field name="name">My book second edition</field>
 </doc>
</add>
```

We need to add the response writer definition. To do this, you should add this to your solrconfig.xml file (actually, this should already be in the configuration file):

```xml
<queryResponseWriter name="velocity" class="org.apache.solr.request.VelocityResponseWriter"/>
```

Now let's set up the Velocity response writer. To do that, we add the following section to the solrconfig.xml file (again, this should already be in the configuration file):

```xml
<requestHandler name="/browse" class="solr.SearchHandler">
 <lst name="defaults">
  <str name="wt">velocity</str>
  <str name="v.template">browse</str>
  <str name="v.layout">layout</str>
  <str name="title">Solr cookbook example</str>
  <str name="defType">dismax</str>
  <str name="q.alt">*:*</str>
  <str name="rows">10</str>
  <str name="fl">*,score</str>
  <str name="qf">name</str>
 </lst>
</requestHandler>
```

Now you can run Solr and type the following URL address:

```
http://localhost:8983/solr/browse
```

You should see the generated search page.

How it works...

As you can see, the index structure and the data are really simple, so I'll skip discussing this part of the recipe. The first thing in configuring the solrconfig.xml file is adding the Velocity response writer definition. By adding it, we tell Solr that we will be using Velocity templates to render the view.

Next, we add the search handler that uses the Velocity response writer. Of course, we could pass the parameters with every query, but we don't want to do that; we want them to be added by Solr automatically. Let's go through the parameters:

- wt: The response writer type; in our case, we will use the Velocity response writer.
- v.template: The template that will be used for rendering the view; in our case, the template Velocity will use is in the browse.vm file (the vm postfix is added by Velocity automatically). This parameter tells Velocity which file is responsible for rendering the actual page contents.
- v.layout: The layout that will be used for rendering the view; in our case, the template Velocity will use is in the layout.vm file (the vm postfix is added by Velocity automatically). This parameter specifies how all the web pages rendered by Solritas will look.
- title: The title of the page.
- defType: The parser that we want to use.
- q.alt: The alternate query for the dismax parser, used in case the q parameter is not defined.
- rows: The maximum number of documents that should be returned.
- fl: The fields that should be listed in the results.
- qf: The fields that should be searched.

Of course, the page generated by the Velocity response writer is just an example. To modify the page, you should modify the Velocity files, but this is beyond the scope of this article.

There's more...

If you are still using Solr 1.4.1 or 1.4, there is one more thing that can be useful.

Running Solritas on Solr 1.4.1 or 1.4

Because the Velocity response writer is a contrib module in Solr 1.4.1, we need to do the following operations to use it. Copy the following libraries from the /contrib/velocity/src/main/solr/lib directory to the /lib directory of your Solr instance:

- apache-solr-velocity-1.4.dev.jar
- commons-beanutils-1.7.0.jar
- commons-collections-3.2.1.jar
- velocity-1.6.1.jar
- velocity-tools-2.0-beta3.jar

Then copy the /velocity directory (with its contents) from the code examples to your Solr configuration directory.
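As noted above, the handler defaults could instead be passed with every query. Purely as an illustration (and assuming the velocity writer is registered as shown), the following Python sketch requests the same rendered page explicitly, without relying on the /browse defaults:

```python
# Sketch: passing the Velocity parameters per-request instead of relying
# on the /browse handler defaults; assumes the writer registration above.
from urllib.parse import urlencode
from urllib.request import urlopen

params = urlencode({
    "defType": "dismax",
    "q.alt": "*:*",
    "qf": "name",
    "wt": "velocity",          # use the Velocity response writer
    "v.template": "browse",    # render with browse.vm
    "v.layout": "layout",      # inside layout.vm
})
with urlopen("http://localhost:8983/solr/select?" + params) as resp:
    print(resp.read().decode("utf-8")[:300])  # first part of the rendered HTML
```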

Being Cross-platform with haXe

Packt
26 Jul 2011
10 min read
haXe 2 Beginner's Guide: Develop exciting applications with this multi-platform programming language

What is cross-platform in the library

The standard library includes a lot of classes and methods which you will need most of the time in a web application. So, let's have a look at the different features. What we call the standard library is simply the set of objects that is available when you install haXe.

Object storage

The standard library offers several structures in which you can store objects. In haXe, you will see that, compared to many other languages, there are not many structures to store objects. This choice has been made because developers should be able to do what they need to do with their objects, instead of dealing with a lot of structures which they have to cast all the time.

The basic structures are array, list, and hash. All of these have a different utility:

- Objects stored in an array can be directly accessed by using their index
- Objects in a list are linked together in an ordered way
- Objects in a hash are tied to what are called "keys"; but instead of being Ints, keys are Strings

There is also the IntHash structure, which is a hash-table using Int values as keys. These structures can be used seamlessly on all targets. This is also why the hash only supports Strings as indexes: some platforms would require a complex class, which would impair performance, to support any kind of object as an index.

The Std class

The Std class contains methods allowing you to do some basic tasks, such as parsing a Float or an Int from a String, transforming a Float to an Int, or obtaining a randomly generated number. This class can be used on all targets without any problems.

The haxe package

The haxe package (notice the lower-case x) contains a lot of classes specific to haXe, such as the haxe.Serializer class, which allows one to serialize any object to the haXe serialization format, and its twin class haxe.Unserializer, which allows one to unserialize objects (that is, "reconstruct" them). This is basically a package offering extended cross-platform functionality. The classes in the haxe package can be used on all platforms most of the time.

The haxe.remoting package

The haxe package also contains a remoting package with several classes that allow us to use the haXe remoting protocol. This protocol allows several programs supporting it to communicate easily. Some classes in this package are only available for certain targets because of those targets' limitations. For example, a browser environment won't allow one to open a TCP socket, and Flash won't allow one to create a server. Remoting will be discussed later, as it is a very interesting feature of haXe.

The haxe.rtti package

There's also the rtti package. RTTI means Run-Time Type Information. A class can hold information about itself, such as what fields it contains and their declared types. This can be really interesting in some cases; for example, if you want to create automatically generated editors for some objects.

The haxe.Http class

The haxe.Http class is one you are certainly going to use quite often. It allows you to make HTTP requests and retrieve the answer pretty easily, without having to deal with the HTTP protocol by yourself. If you don't know what HTTP is, you should just know that it is the protocol used between web browsers and servers. On a side note, the ability to make HTTPS requests depends on the platform.
For example, at the moment, Neko doesn't provide any way to make one, whereas it's not a problem at all on JS because this functionality is provided by the browser. Also, some methods in this class are only available on some platforms. That's why, if you are writing a cross-platform library or program, you should pay attention to which methods you can use on all the platforms you want to target.

You should note that on JS, the haxe.Http class uses HttpRequest objects and, as such, suffers from their security restrictions, the most important one being the same-domain policy. This is something that you should keep in mind when thinking about your solution's architecture.

You can make a simple synchronous request by writing the following:

```haxe
var answer = Http.requestUrl("http://www.benjamindasnois.com");
```

It is also possible to make asynchronous requests, as follows:

```haxe
var myRequest = new Http("http://www.benjamindasnois.com");
myRequest.onData = function (d : String) {
    Lib.println(d);
}
myRequest.request(false);
```

This method also allows you to get more information about the answer, such as the headers and the return code. The following is an example displaying the answer's headers:

```haxe
import haxe.Http;
#if neko
import neko.Lib;
#elseif php
import php.Lib;
#end

class Main {
    static function main() {
        var myRequest = new Http("http://www.benjamindasnois.com");
        myRequest.onData = function (d : String) {
            for (k in myRequest.responseHeaders.keys()) {
                Lib.println(k + " : " + myRequest.responseHeaders.get(k));
            }
        };
        myRequest.request(false);
    }
}
```

The following is what it displays:

```
X-Cache : MISS from rack1.tumblr.com
X-Cache-Lookup : MISS from rack1.tumblr.com:80
Via : 1.0 rack1.tumblr.com:80 (squid/2.6.STABLE6)
P3P : CP="ALL ADM DEV PSAi COM OUR OTRo STP IND ONL"
Set-Cookie : tmgioct=h6NSbuBBgVV2IH3qzPEPPQLg; expires=Thu, 02-Jul-2020 23:30:11 GMT; path=/; httponly
ETag : f85901c583a154f897ba718048d779ef
Link : <http://assets.tumblr.com/images/default_avatar_16.gif>; rel=icon
Vary : Accept-Encoding
Content-Type : text/html; charset=UTF-8
Content-Length : 30091
Server : Apache/2.2.3 (Red Hat)
Date : Mon, 05 Jul 2010 23:31:10 GMT
X-Tumblr-Usec : D=78076
X-Tumblr-User : pignoufou
X-Cache-Auto : hit
Connection : close
```

Regular expressions and XML handling

haXe offers a cross-platform API for regular expressions and XML that can be used on most targets.

Regular expressions

The regular expression API is implemented as the EReg class. You can use this class on any platform to match a regular expression, split a string according to a regular expression, or do some replacement. This class is available on all targets, but on Flash it is only available starting from Flash 9.

The following is an example of a simple function that returns true or false depending on whether a regular expression matches the string given as a parameter:

```haxe
public static function matchesHello(str : String) : Bool {
    var helloRegExp = ~/.*hello.*/;
    return helloRegExp.match(str);
}
```

One can also replace what is matched by the regular expression and return the resulting value. The following simply replaces the word "hello" with "bye"; it's a bit of an overkill to use a regular expression to do that, and you will find some more useful ways to use this capability when writing real programs. Now, at least, you will know how to do it:

```haxe
public static function replaceHello(str : String) : String {
    var helloRegExp = ~/hello/;
    helloRegExp.match(str);
    return helloRegExp.replace(str, "bye");
}
```

XML handling

The XML class is available on all platforms. It allows you to parse and emit XML the same way on many targets.
Unfortunately, it is implemented using regular expressions on most platforms, and therefore can become quite slow on big files. Such problems have already been raised on the JS target, particularly on some browsers; you should keep in mind that different browsers perform completely differently. On the Flash platform, this API now uses the internal Flash XML libraries, which results in some incompatibilities.

The following is an example of a simple XML document to create:

```xml
<pages>
    <page id="page1"/>
    <page id="page2"/>
</pages>
```

Now, the haXe code to generate it:

```haxe
var xmlDoc : Xml;
var xmlRoot : Xml;
xmlDoc = Xml.createDocument();        //Create the document
xmlRoot = Xml.createElement("pages"); //Create the root node
xmlDoc.addChild(xmlRoot);             //Add the root node to the document

var page1 : Xml;
page1 = Xml.createElement("page");    //Create the first page node
page1.set("id", "page1");
xmlRoot.addChild(page1);              //Add it to the root node

var page2 : Xml;
page2 = Xml.createElement("page");
page2.set("id", "page2");
xmlRoot.addChild(page2);

trace(xmlDoc.toString());             //Print the generated XML
```

Input and output

Input and output are certainly among the most important parts of an application; indeed, without them, an application is almost useless. If you think about how the different targets supported by haXe work and how the user may interact with them, you will quickly come to the conclusion that they use different ways of interacting with the user:

- JavaScript in the browser uses the DOM
- Flash has its own API to draw on screen and handle events
- Neko uses the classic input/output streams (stdin, stdout, stderr), and so do PHP and C++

So, we have three different main interfaces: the DOM, Flash, and classic streams.

The DOM interface

The implementation of the DOM interface is available in the js package. This interface is implemented through typedefs. Unfortunately, the API doesn't provide any way to abstract the differences between browsers, and you will have to deal with them yourself in most cases. This API simply tells the compiler what objects exist in the DOM environment; so, if you know how to manipulate the DOM in JavaScript, you will be able to manipulate it in haXe. One thing you should know is that the document object can be accessed through js.Lib.document. The js package is accessible only when compiling to JS.

The Flash interface

The Flash interface is implemented in the flash and flash9 packages, in a way similar to how the js package implements the DOM interface. Reading this, you may wonder why there are two packages. The reason is pretty simple: the Flash APIs pre and post Flash 9 are different. You also have to pay attention to the fact that, when compiling to Flash 9, the flash9 package is accessible through the flash path and not through flash9. Also, at the time of writing, the documentation for the flash and flash9 packages on haxe.org is almost non-existent; if you need some documentation, you can refer to the official documentation.

The standard input/output interface

The standard input/output interface refers to the three basic streams that exist on most systems:

- stdin (most of the time, the keyboard)
- stdout (the standard output, which is most of the time the console or, when running as a web application, the stream sent to the client)
- stderr (the standard error output, which is most of the time directed to the console or a log file)

Neko, PHP, and C++ all make use of this kind of interface.
Now, there are two pieces of news for you: one good and one bad. The bad one is that the API for each platform is located in a platform-specific package. So, for example, when targeting Neko, you will have to use the neko package, which is not available on PHP or C++. The good news is that there is a workaround. Well, indeed, there are three. You just have to continue reading through this article, and I'll tell you how to handle that.
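The headers example earlier already hinted at the shape of such a workaround: select the platform-specific package behind a shared call site, as the #if neko / #elseif php conditional compilation did there. As a language-neutral illustration only, here is a Python sketch of that selection idea; the platform names are just labels, and in haXe the choice happens at compile time rather than at run time:

```python
import sys

# Sketch: one platform-neutral println() chosen from platform-specific
# back ends, mirroring the #if neko / #elseif php pattern shown earlier.
def make_println(platform: str):
    if platform in ("neko", "php", "cpp"):
        # these targets write to the classic stdout stream
        return lambda s: sys.stdout.write(s + "\n")
    raise ValueError("no stream-style output on " + platform)

println = make_println("neko")
println("Hello from a platform-neutral wrapper")
```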

haXe 2: Using Templates

Packt
25 Jul 2011
10 min read
haXe 2 Beginner's Guide: Develop exciting applications with this multi-platform programming language

Introduction to the haxe.Template class

As developers, our job is to create programs that allow the manipulation of data. That's the basis of our job, but beyond that, we must also be able to present that data to the user. Programs that don't have a user interface exist, but since you are reading this article about haXe, there is a greater chance that you are mostly interested in web applications, and almost all web applications have a user interface of some kind. Templates, however, can also be used to create XML documents, for example.

The haXe library comes with the haxe.Template class. This class allows for basic, yet quite powerful, templating: as we will see, it is not only possible to pass some data to it, but also possible to call some code from a template. Templates are particularly useful when you have to present data. You can, for example, define a template to display data about a user and then iterate over a list of users, displaying this template for each one. We will see how this is possible during this article, and we will see what else you can do with templates. We will also see that it is possible to change what is displayed depending on the data, and that it is easy to do some quite common things, such as having a different style for one row out of two in a table.

The haxe.Template class is really easy to use: you just have to create an instance of it, passing it a String that contains your template's code as a parameter. Then it is as easy as calling the execute method and giving it some data to display. Let's see a simple example:

```haxe
class TestTemplate {
    public static function main(): Void {
        var myTemplate = new haxe.Template("Hi. ::user::");
        neko.Lib.println(myTemplate.execute({user : "Benjamin"}));
    }
}
```

This simple code will output "Hi. Benjamin". This is because we have passed an anonymous object as a context, with a "user" property that has "Benjamin" as its value. Obviously, you can pass objects with several properties; moreover, as we will see, it is even possible to pass complex structures and use them. In addition, we certainly won't be hard-coding our templates into our haXe code. Most of the time, you will want to load them from a resource compiled into your executable by calling haxe.Resource.getString, or to load them directly from the filesystem or from a database.

Printing a value

As we've seen in the preceding sample, we have to surround an expression with :: in order to print its value. Expressions can take several forms:

- ::variableName:: prints the value of the variable.
- ::(123):: prints the integer 123. Note that only integers are allowed.
- ::e1 operator e2:: applies the operator to e1 and e2 and prints the resulting value. The syntax doesn't manage operator precedence, so you should wrap expressions inside parentheses.
- ::e.field:: accesses the field and prints its value. Be warned that this doesn't work with properties' getters and setters, as these are a compile-time-only feature.

Branching

The syntax offers if, else, and elseif:

```haxe
class TestTemplate {
    public static function main(): Void {
        var templateCode = "::if (sex==0):: Male ::elseif (sex==1):: Female ::else:: Unknown ::end::";
        var myTemplate = new haxe.Template(templateCode);
        neko.Lib.print(myTemplate.execute({user : "Benjamin", sex:0}));
    }
}
```

Here the output will be Male.
But if the sex property of the context was set to 1, it would print Female; if it is something else, it will print Unknown. Note that our keywords are surrounded by :: (so the interpreter won't think they are just raw text to be printed). Also note the end keyword, which is needed since we do not use braces.

Using lists, arrays, and other iterables

The template engine allows one to iterate over an iterable and repeat a part of the template for each object in it. This is done using the ::foreach:: keyword. When iterating, the context is modified and becomes the object currently selected in the iterable. It is also possible to access this object (indeed, the context's value) by using the __current__ variable. Let's see an example:

```haxe
class Main {
    public static function main() {
        //Let's create two departments:
        var itDep = new Department("Information Technologies Dept.");
        var financeDep = new Department("Finance Dept.");

        //Create some users and add them to their department
        var it1 = new Person();
        it1.lastName = "Par";
        it1.firstName = "John";
        it1.age = 22;

        var it2 = new Person();
        it2.lastName = "Bear";
        it2.firstName = "Caroline";
        it2.age = 40;

        itDep.workers.add(it1);
        itDep.workers.add(it2);

        var fin1 = new Person();
        fin1.lastName = "Ha";
        fin1.firstName = "Trevis";
        fin1.age = 43;

        var fin2 = new Person();
        fin2.lastName = "Camille";
        fin2.firstName = "Unprobable";
        fin2.age = 70;

        financeDep.workers.add(fin1);
        financeDep.workers.add(fin2);

        //Put our departments inside a List:
        var depts = new List<Department>();
        depts.add(itDep);
        depts.add(financeDep);

        //Load our template from Resource:
        var templateCode = haxe.Resource.getString("DeptsList");

        //Execute it
        var template = new haxe.Template(templateCode);
        neko.Lib.print(template.execute({depts: depts}));
    }
}

class Person {
    public var lastName : String;
    public var firstName : String;
    public var age : Int;

    public function new() {
    }
}

class Department {
    public var name : String;
    public var workers : List<Person>;

    public function new(name : String) {
        workers = new List<Person>();
        this.name = name;
    }
}
```

In this part of the code, we are simply creating two departments and some persons, and adding those persons to those departments. Now, we want to display the list of departments and all of the employees who work in them. So, let's write a simple template (you can save this file as DeptsList.template):

```html
<html>
    <head>
        <title>Workers</title>
    </head>
    <body>
        ::foreach depts::
        <h1>::name::</h1>
        <table>
            ::foreach workers::
            <tr>
                <td>::firstName::</td>
                <td>::lastName::</td>
                <td>::if (age < 35)::Junior::elseif (age < 58)::Senior::else::Retired::end::</td>
            </tr>
            ::end::
        </table>
        ::end::
    </body>
</html>
```

When compiling your code, you should add the following directive:

```
-resource DeptsList.template@DeptsList
```

The following is the output you will get:

```html
<html>
    <head>
        <title>Workers</title>
    </head>
    <body>
        <h1>Information Technologies Dept.</h1>
        <table>
            <tr>
                <td>John</td>
                <td>Par</td>
                <td>Junior</td>
            </tr>
            <tr>
                <td>Caroline</td>
                <td>Bear</td>
                <td>Senior</td>
            </tr>
        </table>
        <h1>Finance Dept.</h1>
        <table>
            <tr>
                <td>Trevis</td>
                <td>Ha</td>
                <td>Senior</td>
            </tr>
            <tr>
                <td>Unprobable</td>
                <td>Camille</td>
                <td>Retired</td>
            </tr>
        </table>
    </body>
</html>
```

As you can see, this is indeed pretty simple once you have your data structure in place.

Time for action – Executing code from a template

Even though templates can't contain haXe code, they can make calls to so-called "template macros".
Macros are defined by the developer and, just like data, they are passed to the template's execute function; in fact, they are passed in exactly the same way, but as the second parameter. Calling them is quite easy: instead of surrounding them with ::, we simply prefix them with $$, and we can also pass them parameters inside parentheses.

So, let's take our preceding sample and add a macro to display the number of workers in a department. First, let's add the function to our Main class:

```haxe
public static function displayNumberOfWorkers(resolve : String->Dynamic, department : Department) {
    return department.workers.length + " workers";
}
```

Note that the first argument the macro receives is a function that takes a String and returns a Dynamic. This function allows you to retrieve the value of an expression in the context from which the macro has been called. The other parameters are simply the parameters that the template passes to the macro. So, let's add a call to our macro:

```html
<html>
    <head>
    </head>
    <body>
        ::foreach depts::
        <h1>::name:: ($$displayNumberOfWorkers(::__current__::))</h1>
        <table>
            ::foreach workers::
            <tr>
                <td>::firstName::</td>
                <td>::lastName::</td>
                <td>::if (sex==0)::M::elseif (sex==1)::F::else::?::end::</td>
            </tr>
            ::end::
        </table>
        ::end::
    </body>
</html>
```

As you can see, we pass the current department to the macro when calling it to display the number of workers. So, here is what you get:

```html
<html>
    <head>
    </head>
    <body>
        <h1>Information Technologies Dept. (2 workers)</h1>
        <table>
            <tr>
                <td>John</td>
                <td>Par</td>
                <td>M</td>
            </tr>
            <tr>
                <td>Caroline</td>
                <td>Bear</td>
                <td>F</td>
            </tr>
        </table>
        <h1>Finance Dept. (2 workers)</h1>
        <table>
            <tr>
                <td>Trevis</td>
                <td>Ha</td>
                <td>M</td>
            </tr>
            <tr>
                <td>Unprobable</td>
                <td>Camille</td>
                <td>?</td>
            </tr>
        </table>
    </body>
</html>
```

What just happened?

We have written the displayNumberOfWorkers macro and added a call to it in the template. As a result, we've been able to display the number of workers in a department.

Integrating subtemplates

Sub-templates do not exist as such in the templating system, but you can include sub-templates in a main template, which is a common need; some frameworks, and not only in haXe, have even made this standard behavior. There are two ways of doing this (a minimal sketch of the first approach appears at the end of this article):

- Execute the sub-template, store its return value, and pass it as a property to the main template when executing it.
- Create a macro that executes the sub-template and returns its value. This way, you just have to call the macro whenever you want to include your sub-template in your main template.

Creating a blog's front page

In this section, we are going to create a front page for a blog by using the haxe.Template class. We will also use the SPOD system to retrieve posts from the database.
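As promised, here is a minimal sketch of the first sub-template approach. It uses Python's string.Template as a stand-in for haxe.Template (so $name plays the role of ::name::), since the mechanics are the same whatever the engine: render the sub-template first, then pass its output to the main template as an ordinary property.

```python
from string import Template  # $name stands in for haxe.Template's ::name::

# Sketch of the first approach: execute the sub-template, store its
# return value, and pass it as a property to the main template.
sub = Template("<li>$firstName $lastName</li>")
main = Template("<ul>$workers</ul>")

rows = "".join(sub.substitute(w) for w in [
    {"firstName": "John", "lastName": "Par"},
    {"firstName": "Caroline", "lastName": "Bear"},
])
print(main.substitute(workers=rows))
```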

Extending MooTools

Packt
25 Jul 2011
8 min read
MooTools 1.3 Cookbook: Over 100 highly effective recipes to turbo-charge the user interface of any web-enabled Internet application and web page

The reader can benefit from the previous article on MooTools: Extending and Implementing Elements.

Making a Corvette out of a car - extending the base class

The "base class" is a function, a method, that allows extension. Just what does extending a class entail? Buckle up and let us take a drive.

Getting ready

Just to show the output of our work, create a DIV that will be our canvas.

```html
<div id="mycanvas"></div>
```

How to do it...

Creating a class from the base class is as rudimentary as this: var Car = new Class();. That is not very instructive, so at the least, we add the constructor method to call at the time of instantiation: initialize.

```js
<script type="text/javascript">
var Car = new Class({
    initialize: function(owner) {
        this.owner = owner;
    }
});
```

The constructor method takes the form of a property named initialize and must be a function; however, it does not have to be the first property declared in the class.

How it works...

So far in our recipe, we have created an instance of the base class and assigned it to the variable Car. We like things to be sporty, of course. Let's mutate the Car into a Corvette using Extends, passing it the name of the class to make a copy of and extend into a new class.

```js
var Corvette = new Class({
    Extends: Car,
    mfg: 'Chevrolet',
    model: 'Corvette',
    setColor: function(color) {
        this.color = color;
    }
});
```

Our Corvette is ready for purchase. An instantiation of the extended class will provide some new-owner happiness for 5 years or 50,000 miles, whichever comes first. Make the author's red, please.

```js
var little_red = new Corvette('Jay Johnston');
little_red.setColor('red');
$('mycanvas').set('text', little_red.owner + "'s little " + little_red.color + ' ' + little_red.model + ' made by ' + little_red.mfg);
</script>
```

There's more...

This entire example will work identically if Corvette Implements rather than Extends Car.

Whether to Extend or to Implement

Extending a class changes the prototype, creating a copy in which the this.parent property allows the overridden parent class method to be referenced within the extended class's current method. To derive a mutation that takes class properties from multiple classes, we use Implements. Be sure to place the Extends or Implements property first, before all other methods and properties; and if both extending and implementing, the Implements property follows the Extends property.

See also

See how Moo can muster so much class: http://mootools.net/docs/core/Class/Class#Class.

Giving a Corvette a supercharger - Implements versus Extends

Be ready to watch for several things in this recipe. Firstly, note how the extended Corvette's methods can use this.parent. Secondly, note how the implemented Corvette, the ZR1, can implement multiple classes.

Getting ready

Create a canvas to display some output.

```html
<h1>Speed Indexes:</h1><div id="mycanvas"></div>
```

How to do it...

Here we create a class to represent a car. This car does not have an engine until it goes through further steps of manufacturing, so if we ask what its speed is, the output is zero. Next, we create a class to represent a sporty engine, which has an arbitrary speed index of 10.

```js
// create two classes from the base Class
var Car = new Class({
    showSpeed: function() {
        return 0;
    }
});
var SportyEngine = new Class({
    speed: 10
});
```

Now we get to work.
First, we begin by manufacturing Corvettes, a process which is an extension of Car. They are faster than an empty chassis, of course, so we have them report their speed as an index rating one more than the parent class.

```js
// Extend one, Implement the other
var Corvette = new Class({
    Extends: Car,
    showSpeed: function() {
        // this.parent calls the overridden class
        return this.parent()+1;
    }
});
```

Secondly, we implement both Car and SportyEngine simultaneously as ZR1. We cannot use this.parent, so we return the speed if asked. Of course, the ZR1 would not have a speed if it were only a mutation of Car, but since it is also a mutation of SportyEngine, it has the speed index of that class.

```js
var ZR1 = new Class({
    // multiple classes may be implemented
    Implements: [Car, SportyEngine], // yep
    showSpeed: function() {
        // this.parent is not available
        //return this.parent()+1; // nope
        return this.speed;
    }
});
```

How it works...

When an instantiation of Corvette is created and its showSpeed() method called, it reports the speed of the parent class, Car, adding 1 to it. This is thanks to the magic Extends provides via this.parent().

```js
var corvette = new Corvette();
var zr1 = new ZR1();
$('mycanvas').set('html', '<table>'+
    '<tr><th>Corvette:</th>'+
    '<td>'+corvette.showSpeed()+'</td></tr>'+
    '<tr><th>ZR1:</th>'+
    '<td>'+zr1.showSpeed()+'</td></tr>'+
    '</table>');
```

And so, the output of this would be:

```
Corvette: 1
ZR1: 10
```

An instantiation of ZR1 has the properties of all classes passed to Implements. When showSpeed() is called, the value conjured by this.speed comes from the property defined within SportyEngine.

Upgrading some Corvettes - Extends versus Implements

Now that we have reviewed some of the reasons to extend versus implement, we are ready to examine more closely how the inheritance within Extends can be useful in our scripting.

Getting ready

Create a display area for the output of our manufacturing plant.

```html
<h1>Speeds Before</h1><div id="before"></div><h1>Speeds After</h1><div id="after"></div>
```

How to do it...

Create two classes, one that represents all car chassis with no engine and one that represents a fast engine that can be ordered as an upgrade. This section is identical to the last recipe; if necessary, review it once more before continuing, as the gist will be to alter our instantiations to show how inheritance patterns affect them.

```js
// create two classes from the base Class
var Car = new Class({
    showSpeed: function() {
        return 0;
    }
});
var SportyEngine = new Class({
    speed: 10
});

// Extend one, Implement the other
var Corvette = new Class({
    Extends: Car,
    speed: 1,
    showSpeed: function() {
        // this.parent calls the overridden class
        return this.parent()+1;
    }
});
var ZR1 = new Class({
    // multiple classes may be implemented
    Implements: [Car, SportyEngine], // yep
    showSpeed: function() {
        // this.parent is not available
        //return this.parent()+1; // nope
        return this.speed;
    }
});
```

Note that the output before mutation is identical to the end of the previous recipe.

```js
var corvette = new Corvette();
var zr1 = new ZR1();
$('before').set('html', '<table>'+
    '<tr><th>Corvette:</th>'+
    '<td>'+corvette.showSpeed()+'</td></tr>'+
    '<tr><th>ZR1</th>'+
    '<td>'+zr1.showSpeed()+'</td></tr>'+
    '</table>');
```

Here is what happens when the manufacturing plant decides to start putting engines in the base car chassis, giving them a speed where they did not have one previously. Mutate the base class by having it return an index of five rather than zero.
```js
// the mfg changes base Car speed to be +5 faster
Car = Car.implement({
    showSpeed: function() {
        return 5;
    }
});
// but SportyEngine doesn't use the parent method
$('after').set('html', '<table>'+
    '<tr><th>New Corvette:</th>'+
    '<td>'+corvette.showSpeed()+'</td></tr>'+
    '<tr><th>New ZR1</th>'+
    '<td>'+zr1.showSpeed()+'</td></tr>'+
    '</table>');
```

How it works...

The zr1 instantiation did not mutate; the corvette instantiation did. Since zr1 used Implements, there is no inheritance that lets it call the parent method. In our example, this makes perfect sense: the base chassis now comes with an engine rated with a speed of five, but the ZR1 model, during manufacturing/instantiation, is given a completely different engine (a completely different property), so any change or recall of the original chassis would not be applicable to that model. For the naysayer, the next recipe shows how to effect a manufacturer recall that will alter all Corvettes, even the ZR1s.

There's more...

There is an interesting syntax used to mutate the new version of Car: Class.implement(). That same syntax is not available to extend elements.

See also

Here is a link to the MooTools documentation for Class.implement(): http://mootools.net/docs/core/Class/Class#Class:implement.
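For readers more at home outside JavaScript, the Extends/Implements split maps onto a familiar distinction: subclassing (with a live link to the parent) versus copying members in (with no parent link left behind). This Python sketch is an analogy only, not MooTools code:

```python
# Analogy sketch: Extends ~ subclassing with a parent link; Implements ~
# copying members onto the class, leaving no parent to call back into.
class Car:
    def show_speed(self):
        return 0

class SportyEngine:
    speed = 10

class Corvette(Car):                      # ~ Extends: parent link kept
    def show_speed(self):
        return super().show_speed() + 1   # ~ this.parent()

class ZR1:                                # ~ Implements: members copied in
    pass

def implement(target, *sources):
    for src in sources:
        for name, member in vars(src).items():
            if not name.startswith("__"):
                setattr(target, name, member)

implement(ZR1, Car, SportyEngine)
print(Corvette().show_speed())  # 1: parent method plus one
print(ZR1().speed)              # 10: taken from SportyEngine
```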

MooTools: Extending and Implementing Elements

Packt
25 Jul 2011
7 min read
MooTools 1.3 Cookbook: Over 100 highly effective recipes to turbo-charge the user interface of any web-enabled Internet application and web page

The reader can benefit from the previous article on Extending MooTools.

Extending elements - preventing multiple form submissions

Imagine a scenario where click-happy visitors may undo normalcy by double-clicking the submit button, or perhaps an otherwise normal, albeit impatient, user might click it a second time. Submit buttons frequently need to be disabled or removed using client-side code for just such a reason.

Users that double-click everything

It is not entirely known where double-clicking users originated from. Some believe that single-clicking users needed to be able to double-click to survive in the wild; they therefore began to grow gills and double-click links, buttons, and menu items. Others maintain that there was a sudden, large explosion in the vapors of nothingness that resulted in hordes of users who could not fathom a link, button, or menu item that could be opened with just a single click. Either way, they are out there, they mostly use Internet Explorer, and they are quickly identifiable by how they type valid URLs into search bars and then swear the desired website is no longer on the Inter-nets.

How to do it...

Extending elements uses the same syntax as extending classes. Add a method that can be called when appropriate. Our example, the following code, could be used in a library associated with every page so that no submit button could ever again be clicked twice; at least, not without first removing the attribute that disables it:

```js
Element.implement({
    better_submit_buttons: function() {
        if (this.get('tag')=='input' && this.getProperty('type')=='submit') {
            this.addEvent('click', function(e) {
                this.set({
                    'disabled':'disabled',
                    'value':'Form Submitted!'
                });
            });
        }
    }
});
window.addEvent('load',function() {
    $$('input[type=submit]').better_submit_buttons();
});
```

How it works...

The MooTools class Element extends DOM elements referenced by the single-dollar selector, the double-dollars selector, and the document.id selector. In the onLoad event, $$('input[type=submit]').better_submit_buttons(); extends all INPUT elements that have a type equal to submit with the Element class methods and properties. Of course, before that infusion of Moo-goodness takes place, we have already implemented a new method that prevents those elements from being clicked twice by adding the property that disables the element.

There's more...

In our example, we disable the submit button permanently and return false upon submission. The only way to get the submit button live again is to click the Try again button that calls the page again. Note that reloading the page via refresh in some browsers may not clear the disabled attribute; however, calling the page again from the URL or by clicking a link will.

On pages that submit a form to a second page for processing, the semi-permanently disabled button is desirable outright. If our form is processed via Ajax, then we can use the Ajax status events to manually remove the disabled property and reset the value of the button.

See also

Read the documentation on the MooTools Request class, which shows the various statuses that could be used in conjunction with this extended element: http://mootools.net/docs/core/Request/Request.

Extending elements - prompt for confirmation on submit

Launching off the last extension, the forms on our site may also need to ask for confirmation.
It is not unthinkable that a slip of the carriage return could accidentally submit a form before a user is ready. It certainly happens to all of us occasionally, and perhaps to some of us regularly.

How to do it...

Mutate the HTML DOM FORM elements to act upon the onSubmit event and prompt whether to continue with the submission.

```js
Element.implement({
    polite_forms: function() {
        if (this.get('tag')=='form') {
            this.addEvent('submit',function(e) {
                if(!confirm('Okay to submit form?')) {
                    e.stop();
                }
            });
        }
    }
});
```

How it works...

The polite_forms() method is added to all HTML DOM elements, but the execution is restricted to elements whose tag is form: if (this.get('tag')=='form') {...}. The onSubmit event of the form is bound to a function that prompts users via the raw JavaScript confirm() dialog, which returns true for a positive response and false otherwise. If false, we prevent the event from continuing by calling the MooTools-implemented Event.stop().

There's more...

In order to mix the submit button enhancement with the polite form enhancement, only a few small changes to the syntax are necessary. To stop our submit button from showing "in process..." if the form submission is canceled by the polite form prompt, we create a proprietary reset event that can be called via Element.fireEvent() and chained to the collection of INPUT children that match our double-dollar selector.

```js
// extend all elements with the polite form methods
Element.implement({
    better_submit_buttons: function() {
        if (this.get('tag')=='input' && this.getProperty('type')=='submit') {
            this.addEvents({
                'click':function(e) {
                    this.set({'disabled':'disabled','value':'in process...'});
                },
                'reset':function() {
                    this.set({'disabled':false,'value':'Submit!'});
                }
            });
        }
    },
    polite_forms: function() {
        if (this.get('tag')=='form') {
            this.addEvent('submit',function(e) {
                if(!confirm('Okay to submit form?')) {
                    e.stop();
                    this.getChildren('input[type=submit]').fireEvent('reset');
                }
            });
        }
    }
});
// enhance the forms
window.addEvent('load',function() {
    $$('input[type=submit]').better_submit_buttons();
    $$('form').polite_forms();
});
```

Extending typeOf, fixing undefined var testing

We could not properly return the type of an undeclared variable. This oddity has its roots in the fact that undefined, undeclared variables cannot be dereferenced during a function call; in short, undeclared variables cannot be used as arguments to a function.

Getting ready

Get ready to see how we can still extend MooTools' typeOf function by passing a missing variable using the global scope:

```js
// will throw a ReferenceError
myfunction(oops_var);
// will not throw a ReferenceError
myfunction(window.oops_var);
```

How to do it...

Extend the typeOf function with a new method and call that rather than the parent method.

```js
// it is possible to extend functions with new methods
typeOf.extend('defined', function(item) {
    if (typeof(item)=='undefined') return 'undefined';
    else return typeOf(item);
});

//var oops_var; // commented out "on purpose"
function report_typeOf(ismoo) {
    if (ismoo==0) {
        document.write('oops_var1 is: '+typeof(oops_var)+'<br/>');
    } else {
        // reference via window to avoid an error from
        // dereferencing an undeclared var
        document.write('oops_var2 is: '+typeOf.defined(window.oops_var)+'<br/>');
    }
}
```

The output from calling typeof() and typeOf.defined() is identical for an undefined, undeclared variable passed via the global scope to avoid a reference error.
```html
<h2>without moo:</h2>
<script type="text/javascript">
    report_typeOf(0);
</script>
<h2><strong>with</strong> moo:</h2>
<script type="text/javascript">
    report_typeOf(1);
</script>
```

The output is:

```
without moo:
oops_var1 is: undefined
with moo:
oops_var2 is: undefined
```

How it works...

The prototype for the typeOf function object has been extended with a new method. The original method is still applied when the function is executed. However, we are now able to call the property defined, which is itself a function that can still reference and call the original function.

There's more...

For those not satisfied with the new turn of syntax, the proxy pattern should suffice to keep us using a much more familiar syntax.

```js
// proxying in raw javascript is cleaner in this case
var oldTypeOf = typeOf;
var typeOf = function(item) {
    if (typeof(item)=='undefined') return 'undefined';
    else return oldTypeOf(item);
};
```

The old typeOf function has been renamed using the proxy pattern but is still available. Meanwhile, all calls to typeOf are now handled by the new version.

See also

The Proxy Pattern: The proxy pattern is one of many JavaScript design patterns. Here is one good link to follow for more information: http://www.summasolutions.net/blogposts/design-patterns-javascript-part-1.

Undeclared and Undefined Variables: It can be quite daunting to have to deal with multiple layers of development. When we are unable to work alone and be sure all our variables are declared properly, testing every one can really cause code bloat. Certainly, the best practice is to always declare variables. Read more about it at http://javascriptweblog.wordpress.com/2010/08/16/understanding-undefined-and-preventing-referenceerrors/.
Oracle WebCenter 11g PS3: Working with Navigation Models and Page Hierarchies

Packt
25 Jul 2011
3 min read
Oracle WebCenter 11g PS3 Administration Cookbook: Over 100 advanced recipes to secure, support, manage, and administer Oracle WebCenter 11g

Creating a navigation model at runtime

Lots of administrators will not have access to JDeveloper, but they will still need to manage navigation models. In WebCenter, you can easily create and manage navigation models at runtime. In this recipe, we will show how you can add navigation models at runtime.

Getting ready

For this recipe, you need a WebCenter Portal application.

How to do it...

1. Run your portal application.
2. Log in as an administrator.
3. Go to the administration page.
4. Select Navigations from the Resource tab.
5. Press the Create button.
6. Specify a name, for example, hr.
7. Specify a description, for example, Navigation model for HR users.
8. Leave Copy From empty. In this list, you can select an existing navigation model so that the newly created model will copy the content from the selected model.
9. Press the Create button.

The navigation model is now created, and you can add components to it.

How it works...

When you add a navigation model at runtime, an XML file is generated in the background, and the navigation model is stored in the MDS. You can request the path to the actual XML file by selecting Edit properties from the Edit menu when you select a navigation model. In the properties window, you will find a field called Metadata file; this is the complete directory path to the actual XML file.

There's more...

Even at runtime, you can modify the actual XML representation of the navigation model. This allows you to be completely flexible. Not everything is possible at runtime, but when you know what XML to add, you can do so by modifying the XML of the navigation model. This can be done by selecting Edit Source from the Edit menu. This way, you get the same XML representation of the navigation model as in JDeveloper.

Adding a folder to a navigation model

A folder is the simplest resource you can add to your navigation model. It does not link to a specific resource; a folder is only intended to organize your navigation model in a logical way. In this recipe, we will add a folder for the HR resources.

Getting ready

We will add the folder to the default navigation model, so you only need the default WebCenter Portal application for this recipe.

How to do it...

1. Open default-navigation-model.xml from Web Content/oracle/Webcenter/portalapp/navigations.
2. Press the Add button and select Folder from the context menu.
3. Specify an id for the folder. The id should be unique for each resource in the navigation model.
4. Specify an expression language value for the Visible attribute.

How it works...

Adding a folder to a navigation model adds a folder tag to the XML with the metadata specified:

```xml
<folder visible="#{true}" id="hr">
    <attributes>
        <attribute isKey="false" value="folder" attributeId="Title"/>
    </attributes>
    <contents/>
</folder>
```

The folder tag has a contents tag as a child. This means that when you add a resource to the folder, it will be added as a child of the contents tag.

There's more...

You can also add a folder to a navigation model at runtime. This is done by selecting your navigation model and selecting Edit from the Edit menu. From the Add menu, you can select Folder. You are able to set the id, description, Visible attribute, and iconUrl.
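To make the structure of that snippet concrete, here is a small Python sketch that rebuilds the same folder tag with the standard ElementTree module; it is purely illustrative and says nothing about how WebCenter itself writes the file:

```python
import xml.etree.ElementTree as ET

# Sketch: rebuilding the folder snippet shown above, to illustrate the
# tag structure stored in MDS (illustrative only).
folder = ET.Element("folder", {"visible": "#{true}", "id": "hr"})
attributes = ET.SubElement(folder, "attributes")
ET.SubElement(attributes, "attribute",
              {"isKey": "false", "value": "folder", "attributeId": "Title"})
ET.SubElement(folder, "contents")  # child resources are added under here

print(ET.tostring(folder, encoding="unicode"))
```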

Apache Solr: Analyzing your Text Data

Packt
22 Jul 2011
13 min read
Apache Solr 3.1 Cookbook

Introduction

A type's behavior can be defined in the context of the indexing process, in the context of the query process, or both. Furthermore, a type definition is composed of tokenizers and filters (both token filters and character filters). The tokenizer specifies how your data will be preprocessed after it is sent to the appropriate field; the analyzer operates on the whole of the data sent to the field. Types can have only one tokenizer. The result of the tokenizer's work is a stream of objects called tokens.

Next in the analysis chain are the filters. They operate on the tokens in the token stream, and they can do anything with the tokens: changing them, removing them, or, for example, making them lowercase. Types can have multiple filters.

One additional type of filter is the character filter. Character filters do not operate on tokens from the token stream; they operate on the data that is sent to the field, and they are invoked before the data is sent to the tokenizer.

This article focuses on data analysis and how to handle common day-to-day analysis questions and problems.

Storing additional information using payloads

Imagine that you have a powerful preprocessing tool that can extract information about all the words in a text. Your boss would like you to use it with Solr, or at least store the information it returns in Solr. So what can you do? We can use something called a payload to store that data. This recipe will show you how to do it.

How to do it...

I assume that we already have an application that takes care of recognizing the part of speech in our text data. Now we need to add that information to the Solr index. To do that, we will use payloads, a kind of metadata that can be stored with each occurrence of a term.

First of all, you need to modify the index structure. For this, we will add a new field type to the schema.xml file:

<fieldtype name="partofspeech" class="solr.TextField">
  <analyzer>
    <tokenizer class="solr.WhitespaceTokenizerFactory"/>
    <filter class="solr.DelimitedPayloadTokenFilterFactory" encoder="integer" delimiter="|"/>
  </analyzer>
</fieldtype>

Now add the field definitions to the schema.xml file:

<field name="id" type="string" indexed="true" stored="true" required="true" />
<field name="text" type="text" indexed="true" stored="true" />
<field name="speech" type="partofspeech" indexed="true" stored="true" multiValued="true" />

Now let's look at what the example data looks like (I named it ch3_payload.xml):

<add>
  <doc>
    <field name="id">1</field>
    <field name="text">ugly human</field>
    <field name="speech">ugly|3 human|6</field>
  </doc>
  <doc>
    <field name="id">2</field>
    <field name="text">big book example</field>
    <field name="speech">big|3 book|6 example|1</field>
  </doc>
</add>

Let's index our data. To do that, we run the following command from the exampledocs directory (put the ch3_payload.xml file there):

java -jar post.jar ch3_payload.xml
How it works...

What information can a payload hold? It may hold any information that is compatible with the encoder type you define for the solr.DelimitedPayloadTokenFilterFactory filter. In our case, we don't need to write our own encoder; we will use the supplied one to store integers. We will use it to store the boost of the term. For example, nouns will be given a token boost value of 6, while adjectives will be given a boost value of 3.

First we have the type definition. We defined a new type in the schema.xml file, named partofspeech, based on the Solr text field (attribute class="solr.TextField"). Our tokenizer splits the given text on whitespace characters. Then we have a new filter which handles our payloads. The filter defines an encoder, which in our case is an integer (attribute encoder="integer"). Furthermore, it defines a delimiter which separates the term from the payload. In our case, the separator is the pipe character |.

Next we have the field definitions. In our example, we only define three fields:

Identifier
Text
Recognized speech part with payload

Now let's take a look at the example data. We have two simple fields: id and text. The one that we are interested in is the speech field. Look how it is defined: it contains pairs made of a term, a delimiter, and a boost value, for example, book|6. In the example, I decided to boost the nouns with a boost value of 6 and the adjectives with a boost value of 3. I also decided that words that cannot be identified by my part-of-speech application will be given a boost of 1. Pairs are separated with a space character, which in our case is used to split the pairs; this is the task of the tokenizer we defined earlier.

To index the documents, we use the simple post tools provided with the example deployment of Solr. To use them, we invoke the command shown in the example. The post tools send the data to the default update handler found at http://localhost:8983/solr/update. The parameter is the file that is going to be sent to Solr. You can also post a list of files, not just a single one.

That is how you index payloads in Solr. In the 1.4.1 version of Solr, there is no further support for payloads. Hopefully this will change, but for now, you need to write your own query parser and similarity class (or extend the ones present in Solr) to use them.
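Before moving on, here is a rough sketch (illustrative only, not actual Solr output) of what the analysis chain above produces when the value ugly|3 human|6 is indexed into the speech field:

field value:   "ugly|3 human|6"
token stream:  [ugly]  with integer payload 3
               [human] with integer payload 6

The whitespace tokenizer splits the pairs apart, and the delimited payload filter peels off everything after the | delimiter into each token's payload.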
Eliminating XML and HTML tags from the text

There are many real-life situations when you have to clean your data. Let's assume that you want to index web pages that your client sends you. You don't know anything about the structure of those pages; the one thing you know is that you must provide a search mechanism that enables searching through the content of the pages. Of course, you could index the whole page by splitting it by whitespace, but then you would probably hear the clients complain about the HTML tags being searchable, and so on. So, before we enable searching on the contents of the page, we need to clean the data. In this example, we need to remove the HTML tags. This recipe will show you how to do it with Solr.

How to do it...

Let's suppose our data looks like this (the ch3_html.xml file):

<add>
  <doc>
    <field name="id">1</field>
    <field name="html"><![CDATA[<html><head><title>My page</title></head><body><p>This is a <b>my</b> <i>sample</i> page</body></html>]]></field>
  </doc>
</add>

Now let's take care of the schema.xml file. First, add the type definition:

<fieldType name="html_strip" class="solr.TextField">
  <analyzer>
    <charFilter class="solr.HTMLStripCharFilterFactory"/>
    <tokenizer class="solr.WhitespaceTokenizerFactory"/>
    <filter class="solr.LowerCaseFilterFactory"/>
  </analyzer>
</fieldType>

And now, add the following to the field definition part of the schema.xml file:

<field name="id" type="string" indexed="true" stored="true" required="true" />
<field name="html" type="html_strip" indexed="true" stored="false"/>

Let's index our data. To do that, we run the following command from the exampledocs directory (put the ch3_html.xml file there):

java -jar post.jar ch3_html.xml

If there were no errors, you should see a response like this:

SimplePostTool: version 1.2
SimplePostTool: WARNING: Make sure your XML documents are encoded in UTF-8, other encodings are not currently supported
SimplePostTool: POSTing files to http://localhost:8983/solr/update..
SimplePostTool: POSTing file ch3_html.xml
SimplePostTool: COMMITting Solr index changes..

How it works...

First of all, we have the data example. In the example, we see one file with two fields: the identifier and some HTML data nested in a CDATA section. You must remember to surround the HTML data in CDATA tags if they are full pages and start with HTML tags like our example; otherwise Solr will have problems parsing the data. However, if you only have some tags present in the data, you shouldn't worry.

Next, we have the html_strip type definition. It is based on solr.TextField to enable full-text searching. Following that, we have a character filter which handles the stripping of HTML and XML tags. This is something new in Solr 1.4. Character filters are invoked before the data is sent to the tokenizer; this way they operate on untokenized data. In our case, the character filter strips the HTML and XML tags, attributes, and so on, and then sends the data to the tokenizer, which splits the data by whitespace characters. The one and only filter defined in our type makes the tokens lowercase to simplify searching.

To index the documents, we use the simple post tools provided with the example deployment of Solr. To use them, we invoke the command shown in the example. The post tools send the data to the default update handler found at http://localhost:8983/solr/update. The parameter of the command is the file that is going to be sent to Solr. You can also post a list of files, not just a single one. As you can see, the sample response from the post tools is rather informative: it provides information about the update handler address, the files that were sent, and the commits being performed.

If you want to check how your data was indexed, remember not to be misled when you choose to store the field contents (attribute stored="true"). The stored value is the original one sent to Solr, so you won't be able to see the filters in action there. If you wish to check the actual data structures, please take a look at the Luke utility (a utility that lets you see the index structure and field values, and operate on the index). Luke can be found at the following address: http://code.google.com/p/luke

Solr also provides a tool that lets you see how your data is analyzed. That tool is part of the Solr administration pages.

Copying the contents of one field to another

Imagine that you have many big XML files that hold information about books stored on library shelves. There is not much data: just a unique identifier, the name of the book, and the name of the author. One day your boss comes to you and says: "Hey, we want to facet and sort on the basis of the book author". You could change your XML and add two fields, but why do that when you can use Solr to do it for you? Well, Solr won't modify your data, but it can copy the data from one field to another. This recipe will show you how to do that.

How to do it...
Let's assume that our data looks like this:

<add>
  <doc>
    <field name="id">1</field>
    <field name="name">Solr Cookbook</field>
    <field name="author">John Kowalsky</field>
  </doc>
  <doc>
    <field name="id">2</field>
    <field name="name">Some other book</field>
    <field name="author">Jane Kowalsky</field>
  </doc>
</add>

We want the contents of the author field to be present in the fields named author, author_facet, and author_sort. So let's define the copy fields in the schema.xml file (place the following right after the fields section):

<copyField source="author" dest="author_facet"/>
<copyField source="author" dest="author_sort"/>

And that's all; Solr will take care of the rest. The field definition part of the schema.xml file could look like this:

<field name="id" type="string" indexed="true" stored="true" required="true"/>
<field name="author" type="text" indexed="true" stored="true" multiValued="true"/>
<field name="name" type="text" indexed="true" stored="true"/>
<field name="author_facet" type="string" indexed="true" stored="false"/>
<field name="author_sort" type="alphaOnlySort" indexed="true" stored="false"/>

Let's index our data. To do that, we run the following command from the exampledocs directory (put the data.xml file there):

java -jar post.jar data.xml

How it works...

As you can see in the example, we have only three fields defined in our sample data XML file. There are two fields which we are not particularly interested in: id and name. The field that interests us the most is the author field. As mentioned earlier, we want to place the contents of that field in three fields:

author (the actual field that will be holding the data)
author_sort
author_facet

To do that, we use copy fields. Those instructions are defined in the schema.xml file, right after the field definitions, that is, after the closing tag of the fields section. To define a copy field, we need to specify a source field (attribute source) and a destination field (attribute dest). After definitions like those in the example, Solr will copy the contents of the source fields to the destination fields during the indexing process. There is one thing that you have to be aware of: the content is copied before the analysis process takes place. This means that the data is copied as it is stored in the source.

There's more...

There are a few things worth noting when talking about copying the contents of one field to another.

Copying the contents of dynamic fields to one field

You can also copy the content of multiple fields to one field. To do that, you define a copy field like this:

<copyField source="*_author" dest="authors"/>

A definition like the one above copies all of the fields that end with _author to one field named authors. Remember that if you copy multiple fields to one field, the destination field should be defined as multiValued.

Limiting the number of characters copied

There may be situations where you only need to copy a defined number of characters from one field to another. To do that, we add the maxChars attribute to the copy field definition. It can look like this:

<copyField source="author" dest="author_facet" maxChars="200"/>

The above definition tells Solr to copy up to 200 characters from the author field to the author_facet field. This attribute can be very useful when copying the content of multiple fields to one field.
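As a follow-up sketch (assuming the example schema above and a Solr instance on the default port; this query is illustrative and not from the book), faceting and sorting can then use the derived fields via the standard facet and sort parameters:

http://localhost:8983/solr/select?q=*:*&facet=true&facet.field=author_facet&sort=author_sort+asc

The string-typed author_facet field yields exact, untokenized facet values, while the alphaOnlySort-typed author_sort field gives predictable alphabetic ordering; this is exactly why the data was copied into two differently analyzed fields.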

Alfresco 3: Web Scripts

Packt
21 Jul 2011
6 min read
Alfresco 3 Cookbook: over 70 recipes for implementing the most important functionalities of Alfresco.

Introduction

You all know about Web Services, which took the web development world by storm a few years ago. Web Services have been instrumental in constructing web APIs (Application Programming Interfaces) and making web applications work in a Service-Oriented Architecture. In the new Web 2.0 world, however, many criticisms arose around traditional Web Services, and thus RESTful services came into the picture. REST (Representational State Transfer) attempts to expose APIs using HTTP or a similar protocol, with interfaces using well-known, lightweight, and standard methods such as GET, POST, PUT, DELETE, and so on.

Alfresco Web Scripts provide RESTful APIs for the repository services and functions. Traditionally, ECM systems have exposed their interfaces using RPC (Remote Procedure Call), but it gradually turned out that RPC-based APIs are not particularly suitable in the wide Internet arena, where multiple environments and technologies reside together and must talk seamlessly. With Web Scripts, RESTful services overcome these problems, and integration with an ECM repository has never been so easy and secure. Alfresco Web Scripts were introduced in 2006, and since then they have been quite popular with the developer and system integrator community for implementing services on top of the Alfresco repository and for amalgamating Alfresco with other systems.

What is a Web Script?

A Web Script is simply a URI bound to a service using standard HTTP methods such as GET, POST, PUT, or DELETE. Web Scripts can be written using just the Alfresco JavaScript API and Freemarker templates, and optionally the Java API as well, with or without a Freemarker template.

For example, the URL http://localhost:8080/alfresco/service/api/search/person.html?q=admin&p=1&c=10 invokes the search service and returns the output in HTML. Internally, a script written using the JavaScript API (or Java API) performs the search, and a FreeMarker template renders the search output in a structured HTML format.

All Web Scripts are exposed as services and are generally prefixed with http://<<server-url>>/<<context-path>>/<<service-path>>. In a standard scenario, this is http://localhost:8080/alfresco/service

Web Script architecture

Alfresco Web Scripts strictly follow the MVC architecture.

Controller: Written using the Alfresco Java or JavaScript API. You implement the business requirements of your Web Script in this layer, and you prepare the data model that is returned to the view layer. The controller code interacts with the repository via the APIs and other services and processes the business logic.

View: Written using Freemarker templates. You implement exactly what your Web Script returns. For data Web Scripts you construct your JSON or XML data using the template; for presentation Web Scripts you build your output HTML. The view can be implemented using Freemarker templates, or using Java-backed Web Script classes.

Model: Normally constructed in the controller layer (in Java or JavaScript). These values are automatically available in the view layer.

Types of Web Scripts

Depending on their purpose and output, Web Scripts can be categorized into two types:

Data Web Scripts: These Web Scripts mostly return data after processing business requirements.
Such Web Scripts are mostly used to retrieve, update, and create content in the repository, or to query the repository.

Presentation Web Scripts: When you want to build a user interface using Web Scripts, you use these. They mostly return HTML output and are typically used for creating dashlets in Alfresco Explorer or Alfresco Share, or for creating JSR-168 portlets.

Note that this categorization of Web Scripts is not a technical one; it is just a logical separation. Data Web Scripts and presentation Web Scripts are not technically dissimilar; only their usage and purpose differ.

Web Script files

Defining and creating a Web Script in Alfresco requires creating certain files in particular folders. These files are:

Web Script Descriptor: The descriptor is an XML file used to define the Web Script: the name of the script, the URL(s) on which the script can be invoked, the authentication mechanism of the script, and so on. The name of the descriptor file should be of the form <<service-id>>.<<http-method>>.desc.xml; for example, helloworld.get.desc.xml.

Freemarker Template Response file(s) (optional): The Freemarker template output file is the FTL file which is returned as the result of the Web Script. The names of the template files should be of the form <<service-id>>.<<http-method>>.<<response-format>>.ftl; for example, helloworld.get.html.ftl and helloworld.get.json.ftl.

Controller JavaScript file (optional): The controller JavaScript file is the business layer of your Web Script. The name of the JavaScript file should be of the form <<service-id>>.<<http-method>>.js; for example, helloworld.get.js.

Controller Java file (optional): You can write your business implementations in Java classes as well, instead of using the JavaScript API.

Configuration file (optional): You can optionally include a configuration XML file. The name of the file should be of the form <<service-id>>.<<http-method>>.config.xml; for example, helloworld.get.config.xml.

Resource Bundle file (optional): These are standard message bundle files that can be used to localize Web Script responses. The names of the message files should be of the form <<service-id>>.<<http-method>>.properties; for example, helloworld.get.properties.

The naming conventions of Web Script files are fixed; they follow particular semantics. Alfresco, by default, provides a quite rich list of built-in Web Scripts, which can be found in the tomcat/webapps/alfresco/WEB-INF/classes/alfresco/templates/webscripts/org/alfresco folder.

There are a few locations where you can store your Web Scripts:

Classpath folder: tomcat/webapps/alfresco/WEB-INF/classes/alfresco/templates/webscripts
Classpath folder (extension): tomcat/webapps/alfresco/WEB-INF/classes/alfresco/extension/templates/webscripts
Repository folder: /Company Home/Data Dictionary/Web Scripts
Repository folder (extension): /Company Home/Data Dictionary/Web Scripts Extensions

It is not advisable to keep your Web Scripts in the org/alfresco folder; this folder is reserved for Alfresco's default Web Scripts. Create your own folders instead, or better, create your Web Scripts in the extension folders.

Web Script parameters

You will of course need to pass parameters to your Web Script and execute your business implementations around them. You can pass parameters via the query string for GET Web Scripts.
For example:

http://localhost:8080/alfresco/service/api/search/person.html?q=admin&p=1&c=10

In this script, we have passed three parameters: q (for the search query), p (for the page index), and c (for the number of items per page).

You can also pass parameters bound in HTML form data in the case of POST Web Scripts. One example of such a Web Script is one that uploads a file.
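To make the file conventions above concrete, here is a minimal hello-world sketch following the naming scheme described earlier. The file contents and the /sample/helloworld URL are illustrative assumptions, not taken from the book.

helloworld.get.desc.xml (the descriptor):

<webscript>
  <shortname>Hello World</shortname>
  <description>Minimal sample Web Script</description>
  <url>/sample/helloworld</url>
  <format default="html">extension</format>
  <authentication>none</authentication>
</webscript>

helloworld.get.js (the controller; values put into model become available in the view):

// Prepare the model for the Freemarker template
model.greeting = "Hello from a Web Script";

helloworld.get.html.ftl (the view, rendering the model):

<html><body><p>${greeting}</p></body></html>

Placed in one of the extension folders listed above and registered with the repository, such a script would be reachable at http://localhost:8080/alfresco/service/sample/helloworld.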

Play Framework: Data Validation Using Controllers

Packt
21 Jul 2011
15 min read
Play Framework Cookbook: over 60 incredibly effective recipes to take you under the hood and leverage advanced concepts of the Play framework.

Introduction

This article will help you to keep your controllers as clean as possible, with a well-defined boundary to your model classes. Always remember that controllers are really only a thin layer to ensure that your data from the outside world is valid before handing it over to your models, or to adapt something specifically to HTTP.

URL routing using annotation-based configuration

If you do not like the routes file, you can also describe your routes programmatically by adding annotations to your controllers. This has the advantage of not having an additional config file, but it also poses the problem of your URLs being dispersed throughout your code. You can find the source code of this example in the examples/chapter2/annotationcontroller directory.

How to do it...

Go to your project and install the router module via conf/dependencies.yml:

require:
    - play
    - play -> router head

Then run play deps, and the router module should be installed in the modules/ directory of your application. Change your controller like this:

@StaticRoutes({
    @ServeStatic(value="/public/", directory="public")
})
public class Application extends Controller {

    @Any(value="/", priority=100)
    public static void index() {
        forbidden("Reserved for administrator");
    }

    @Put(value="/", priority=2, accept="application/json")
    public static void hiddenIndex() {
        renderText("Secret news here");
    }

    @Post("/ticket")
    public static void getTicket(String username, String password) {
        String uuid = UUID.randomUUID().toString();
        renderJSON(uuid);
    }
}

How it works...

Installing and enabling the module should not leave any open questions for you at this point. As you can see, the controller is now filled with annotations that resemble the entries in the routes.conf file, which you could possibly have deleted by now for this example. However, your application will not start without it, so you have to keep at least an empty file.

The @ServeStatic annotation replaces the static command in the routes file. The @StaticRoutes annotation is just used for grouping several @ServeStatic annotations and could be left out in this example. Each controller call now has to have an annotation in order to be reachable. The name of the annotation is the HTTP method, or @Any if it should match all HTTP methods. Its only mandatory parameter is the value, which resembles the URI (the second field in routes.conf). All other parameters are optional. Especially interesting is the priority parameter, which can be used to give certain methods precedence. This allows a lower-prioritized catch-all controller like in the preceding example, while special handling occurs if the URI is called with the PUT method. You can easily check the correct behavior by using curl, a very practical command-line HTTP client:

curl -v localhost:9000/

This command should give you a result similar to this:
> GET / HTTP/1.1
> User-Agent: curl/7.21.0 (i686-pc-linux-gnu) libcurl/7.21.0 OpenSSL/0.9.8o zlib/1.2.3.4 libidn/1.18
> Host: localhost:9000
> Accept: */*
>
< HTTP/1.1 403 Forbidden
< Server: Play! Framework;1.1;dev
< Content-Type: text/html; charset=utf-8
< Set-Cookie: PLAY_FLASH=;Path=/
< Set-Cookie: PLAY_ERRORS=;Path=/
< Set-Cookie: PLAY_SESSION=0c7df945a5375480993f51914804284a3bbca726-%00___ID%3A70963572-b0fc-4c8c-b8d5-871cb842c5a2%00;Path=/
< Cache-Control: no-cache
< Content-Length: 32
<
<h1>Reserved for administrator</h1>

You can see the HTTP error message and the content returned. You can trigger a PUT request in a similar fashion:

curl -X PUT -v localhost:9000/

> PUT / HTTP/1.1
> User-Agent: curl/7.21.0 (i686-pc-linux-gnu) libcurl/7.21.0 OpenSSL/0.9.8o zlib/1.2.3.4 libidn/1.18
> Host: localhost:9000
> Accept: */*
>
< HTTP/1.1 200 OK
< Server: Play! Framework;1.1;dev
< Content-Type: text/plain; charset=utf-8
< Set-Cookie: PLAY_FLASH=;Path=/
< Set-Cookie: PLAY_ERRORS=;Path=/
< Set-Cookie: PLAY_SESSION=f0cb6762afa7c860dde3fe1907e88473476e2564-%00___ID%3A6cc88736-20bb-43c1-9d43-42af47728132%00;Path=/
< Cache-Control: no-cache
< Content-Length: 16

Secret news here

As you can see, the priority parameter has selected the controller method for the PUT request, which is chosen and returned.

There's more...

The router module is a small but handy module, which is perfectly suited to take a first look at modules and to understand how the routing mechanism of the Play framework works at its core. You should take a look at the source if you need to implement custom mechanisms of URL routing.

Mixing the configuration file and annotations is possible

You can use the router module and the routes file together; this is needed when using modules, as they cannot be specified in annotations. However, keep in mind that this can be pretty confusing. You can find more information about the router module at http://www.playframework.org/modules/router.

Basics of caching

Caching is quite a complex and multi-faceted technique, when implemented correctly. However, implementing caching in your application should not be complex; rather, the mindwork beforehand, where you think about what and when to cache, should be. There are many different aspects, layers, and types (and combinations thereof) of caching in any web application. This recipe will give a short overview of the different types of caching and how to use them. You can find the source code of this example in the chapter2/caching-general directory.

Getting ready

First, it is important that you understand where caching can happen: inside and outside of your Play application. So let's start by looking at the caching possibilities of the HTTP protocol. HTTP sometimes looks like a simple protocol, but it is tricky in the details. However, it is one of the most proven protocols on the Internet, and thus it is always useful to rely on its functionality. HTTP allows the caching of contents by setting specific headers in the response. There are several headers which can be set:

Cache-Control: This is a header which must be parsed and used by the client and also by all the proxies in between.

Last-Modified: This adds a timestamp, explaining when the requested resource was changed the last time. On the next request the client may send an If-Modified-Since header with this date. Now the server may just return an HTTP 304 code without sending any data back.

ETag: An ETag is basically the same as a Last-Modified header, except that it has a semantic meaning. It is actually a calculated hash value resembling the resource behind the requested URL, instead of a timestamp. This means the server can decide when a resource has changed and when it has not.
This could also be used for some type of optimistic locking.

These headers give the requesting client some influence over the caching. There are also other forms of caching which are purely on the server side. In most other Java web frameworks, the HttpSession object is a classic example of this case. Play has a cache mechanism on the server side. It should be used to store big session data, meaning any data exceeding the 4 KB maximum cookie size. Be aware that there is a semantic difference between a cache and a session: you should not rely on the data being in the cache, and thus you need to handle cache misses.

You can use the Cache class in your controller and model code. The great thing about it is that it is an abstraction of a concrete cache implementation. If you only use one node for your application, you can use the built-in Ehcache for caching. As soon as your application needs more than one node, you can configure memcached in your application.conf, and there is no need to change any of your code.

Furthermore, you can also cache snippets of your templates. For example, there is no need to re-render the portal page of a user on every request when you can cache it for 10 minutes.

This also leads to a very simple truth: caching gives you a lot of speed and might even lower your database load in some cases, but it is not free. Caching means you need RAM, lots of RAM in most cases. So make sure the system you are caching on never needs to swap; otherwise you could be reading the data from disk anyway. This can be a particular problem in cloud deployments, as there are often limitations on available RAM.

The following examples show how to utilize the different caching techniques. We will show four different use cases of caching in the accompanying test:

public class CachingTest extends FunctionalTest {

    @Test
    public void testThatCachingPagePartsWork() {
        Response response = GET("/");
        String cachedTime = getCachedTime(response);
        assertEquals(getUncachedTime(response), cachedTime);
        response = GET("/");
        String newCachedTime = getCachedTime(response);
        assertNotSame(getUncachedTime(response), newCachedTime);
        assertEquals(cachedTime, newCachedTime);
    }

    @Test
    public void testThatCachingWholePageWorks() throws Exception {
        Response response = GET("/cacheFor");
        String content = getContent(response);
        response = GET("/cacheFor");
        assertEquals(content, getContent(response));
        Thread.sleep(6000);
        response = GET("/cacheFor");
        assertNotSame(content, getContent(response));
    }

    @Test
    public void testThatCachingHeadersAreSet() {
        Response response = GET("/proxyCache");
        assertIsOk(response);
        assertHeaderEquals("Cache-Control", "max-age=3600", response);
    }

    @Test
    public void testThatEtagCachingWorks() {
        Response response = GET("/etagCache/123");
        assertIsOk(response);
        assertContentEquals("Learn to use etags, dumbass!", response);

        Request request = newRequest();
        String etag = String.valueOf("123".hashCode());
        Header noneMatchHeader = new Header("if-none-match", etag);
        request.headers.put("if-none-match", noneMatchHeader);
        DateTime ago = new DateTime().minusHours(12);
        String agoStr = Utils.getHttpDateFormatter().format(ago.toDate());
        Header modifiedHeader = new Header("if-modified-since", agoStr);
        request.headers.put("if-modified-since", modifiedHeader);
        response = GET(request, "/etagCache/123");
        assertStatus(304, response);
    }

    private String getUncachedTime(Response response) {
        return getTime(response, 0);
    }

    private String getCachedTime(Response response) {
        return getTime(response, 1);
    }

    private String getTime(Response response, int pos) {
        assertIsOk(response);
        String content = getContent(response);
        return content.split("\n")[pos];
    }
}
The first test checks a very nice feature: since Play 1.1, you can cache parts of a page, or more exactly, parts of a template. This test opens a URL, and the page returns the current date and the date of such a cached template part, which is cached for about 10 seconds. On the first request, when the cache is empty, both dates are equal. If you repeat the request, the first date is current while the second date is the cached one.

The second test puts the whole response in the cache for 5 seconds. In order to ensure that expiration works as well, this test waits for six seconds and retries the request.

The third test ensures that the correct headers for proxy-based caching are set.

The fourth test uses an HTTP ETag for caching. If the If-Modified-Since and If-None-Match headers are not supplied, it returns a string. On adding these headers with the correct ETag (in this case the hashCode of the string 123) and a date from 12 hours before, a 304 Not-Modified response should be returned.

How to do it...

Add four simple routes to the configuration as shown in the following code:

GET     /                    Application.index
GET     /cacheFor            Application.indexCacheFor
GET     /proxyCache          Application.proxyCache
GET     /etagCache/{name}    Application.etagCache

The application class features the following controllers:

public class Application extends Controller {

    public static void index() {
        Date date = new Date();
        render(date);
    }

    @CacheFor("5s")
    public static void indexCacheFor() {
        Date date = new Date();
        renderText("Current time is: " + date);
    }

    public static void proxyCache() {
        response.cacheFor("1h");
        renderText("Foo");
    }

    @Inject
    private static EtagCacheCalculator calculator;

    public static void etagCache(String name) {
        Date lastModified = new DateTime().minusDays(1).toDate();
        String etag = calculator.calculate(name);
        if (!request.isModified(etag, lastModified.getTime())) {
            throw new NotModified();
        }
        response.cacheFor(etag, "3h", lastModified.getTime());
        renderText("Learn to use etags, dumbass!");
    }
}

As you can see in the controller, the class to calculate ETags is injected into the controller. This is done on startup with a small job, as shown in the following code:

@OnApplicationStart
public class InjectionJob extends Job implements BeanSource {

    private Map<Class, Object> clazzMap = new HashMap<Class, Object>();

    public void doJob() {
        clazzMap.put(EtagCacheCalculator.class, new EtagCacheCalculator());
        Injector.inject(this);
    }

    public <T> T getBeanOfType(Class<T> clazz) {
        return (T) clazzMap.get(clazz);
    }
}

The calculator itself is as simple as possible:

public class EtagCacheCalculator implements ControllerSupport {

    public String calculate(String str) {
        return String.valueOf(str.hashCode());
    }
}

The last piece needed is the template of the index() controller, which looks like this:

Current time is: ${date}
#{cache 'mainPage', for:'5s'}
Current time is: ${date}
#{/cache}

How it works...

Let's check the functionality per controller call. The index() controller has no special treatment inside the controller.
The current date is put into the template, and that's it. However, the caching logic lives in the template here, because not the whole page but only a part of the returned data should be cached, and for that a #{cache} tag is used. The tag requires two arguments to be passed. The for parameter allows you to set the expiry of the cache entry, while the first parameter defines the key used inside the cache. This allows pretty interesting things: whenever you are on a page where something is rendered exclusively for one user (like his portal entry page), you could cache it with a key which includes the user name or the session ID, like this:

#{cache 'home-' + connectedUser.email, for:'15min'}
${user.name}
#{/cache}

This kind of caching is completely transparent to the user, as it happens exclusively on the server side.

The same applies to the indexCacheFor() controller. Here, the whole page gets cached instead of parts inside the template. This is a pretty good fit for non-personalized, high-performance delivery of pages, which often make up only a very small portion of your application. However, you still have to think about caching beforehand: if you do a time-consuming JPA calculation and then reuse the cached result in the template, you have still wasted CPU cycles and only saved some rendering time.

The third controller call, proxyCache(), is actually the simplest of all. It just sets the proxy expiry header, Cache-Control. Setting this in your code is optional, because Play is configured to set it as well when the http.cacheControl parameter in your application.conf is set. Be aware that this works only in production mode, not in development mode.

The most complex controller is the last one. The first action is to find out the last modified date of the data you want to return; in this case, it is 24 hours ago. Then the ETag needs to be created somehow. In this case, the calculator gets a String passed. In a real-world application, you would more likely pass in the entity, and the service would extract some of its properties and use them to calculate the ETag with a reasonably collision-safe hash algorithm. After both values have been calculated, you can check the request to see whether the client needs to get new data or may use the old data. This is what happens in the request.isModified() method. If the client either did not send all required headers or an older timestamp was used, real data is returned; in this case, a simple string advising you to use an ETag the next time. Furthermore, the calculated ETag and a maximum expiry time are added to the response via response.cacheFor().

A final specialty in the etagCache() controller is the use of the EtagCacheCalculator. The implementation does not matter in this case, except that it must implement the ControllerSupport interface. However, the initialization of the injected class is still worth a mention. If you take a look at the InjectionJob class, you will see the creation of the instance in the doJob() method on startup, where it is put into a local map. The Injector.inject() call then does the magic of injecting the EtagCacheCalculator instance into the controllers. As a result of implementing the BeanSource interface, the getBeanOfType() method fetches the corresponding instance out of the map. The map should ensure that only one instance of this class exists.

There's more...

Caching is deeply integrated into the Play framework, as it is built with the HTTP protocol in mind.
If you want to find out more about it, you will have to examine the core classes of the framework.

More information in the ActionInvoker

If you want to know more details about how the @CacheFor annotation works in Play, you should take a look at the ActionInvoker class.

Be thoughtful with ETag calculation

ETag calculation is costly, especially if you are calculating more than the last-modified stamp. You should think about performance here. Perhaps it would be useful to calculate the ETag after saving the entity and to store it directly with the entity in the database. It is worth running some tests if you are using ETags to ensure high performance. In case you want to know more about ETag functionality, you should read RFC 2616. You can also disable the creation of ETags entirely by setting http.useETag=false in your application.conf.

Use a plugin instead of a job

The job that implements the BeanSource interface is not a very clean solution to the problem of calling Injector.inject() on startup of an application. It would be better to use a plugin in this case.
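To exercise the etagCache() controller from the command line, you can mirror what the functional test above does with curl. This is a sketch: the ETag for the name 123 is String.valueOf("123".hashCode()), which works out to 48690, and the date placeholder must be replaced with an HTTP-formatted date that is newer than the resource's last-modified time (24 hours ago in this recipe) for a 304 to be returned:

curl -v localhost:9000/etagCache/123 \
     -H 'If-None-Match: 48690' \
     -H 'If-Modified-Since: <an HTTP-formatted date within the last 24 hours>'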

Tips for Deploying Sakai

Packt
19 Jul 2011
10 min read
Sakai CLE Courseware Management: The Official Guide

The benefits of knowing that frameworks exist

Sakai is built on top of numerous third-party open source libraries and frameworks. Why write code for converting XML text files to Java objects, or for connecting to and managing databases, when others have specialized in these technical problems and found appropriate and consistent solutions? This reuse of code saves effort and decreases the complexity of creating new functionality. Using third-party frameworks has other benefits as well: you can choose the best from a series of external libraries, increasing the quality of your own product. The external frameworks have their own communities, who test them actively. Outsourcing generic requirements, such as the rudiments of generating indexes for searching, allows the Sakai community to concentrate on higher-level goals, such as building new tools.

For developers, but also for course instructors and system administrators, it is useful background to know, roughly, what the underlying frameworks do:

For a developer, it makes sense to look at reuse first. Why re-invent the wheel? Why write your own framework X for manipulating XML files when other developers have already extensively tried, tested, and are running framework Y? Knowing what others have done saves time. This knowledge is especially handy for new-to-Sakai developers who could be tempted to write from scratch.

For the system administrator, each framework has its own strengths, weaknesses, and terminology. Understanding the terminology and technologies gives you a head start in debugging glitches and communicating with the developers.

For a manager, knowing that Sakai has chosen solid and well-respected open source libraries should help influence buying decisions in favor of this platform.

For the course instructor, knowing which frameworks exist and what their potential is helps inform the debate about adding interesting new features. Knowing what Sakai uses and what is possible sharpens the instructor's focus and the ability to define realistic requirements.

For the software engineering student, Sakai represents a collection of best practices and frameworks that will make the student more saleable in the labor market.

Using the third-party frameworks

This section details frameworks that Sakai is heavily dependent on: Spring (http://www.springsource.org/), Hibernate (http://www.hibernate.org/), and numerous Apache projects (http://www.apache.org/). Java application builders generally understand these frameworks, which makes it relatively easy to hire programmers with experience. All of these projects are open source, and their use does not clash with Sakai's open source license (http://www.opensource.org/licenses/ecl2.php).

The benefit of using Spring

Spring is a tightly architected set of frameworks designed to support the main goals of building modern business applications. Spring has a broad set of abilities, from connecting to databases, to transaction management, business logic, validation, security, and remote access. It fully supports the most modern architectural design patterns. The framework takes away a lot of drudgery for a programmer and enables pieces of code to be plugged in or removed by editing XML configuration files rather than refactoring the raw code base itself. You can see this for yourself in the user provider within Sakai.
When you log in, you may want to validate the user credentials using a piece of code that connects to a directory service such as LDAP, replace that code with another piece that gets credentials from an external database, or even read them from a text file. Thanks to Sakai's services relying on Spring, you can hand (inject) the wanted code to a service manager, which then calls the code when needed.

In Sakai terminology, within a running application a service manager manages services for a particular type of data. For example, a course service manager allows programmers to add, modify, or delete courses; a user service manager does the same for users. Spring is responsible for deciding which pieces of code it injects into which service manager, so developers do not need to program the heavy lifting, only the configuration. The advantage is that later, as part of adapting Sakai to a specific organization, system administrators can also reconfigure authentication or many other services to match local preferences, without recompilation.

Spring also abstracts away the underlying differences between databases. This allows you to program once for MySQL, Oracle, and so on, without taking the databases' differences into account. Spring can sit on top of Hibernate and more limited frameworks, such as JDBC (yet another standard for connecting to databases). This adaptability gives architects more freedom to change and refactor (the process of changing the structure of the code to improve it) without affecting other parts of the code. As Sakai grows in code size, Spring and good architectural design patterns diminish the chance of breaking older code.

To sum up, the Spring framework makes programming more efficient. Sakai relies heavily on this framework; many tasks that programmers would previously have hard-coded are now delegated to XML configuration files.
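As a rough, hypothetical sketch of what such configuration-driven injection looks like (the bean and class names here are invented for illustration and are not Sakai's actual configuration), a Spring XML file might wire a credential provider into a service like this:

<!-- hypothetical bean names, for illustration only -->
<bean id="userDirectoryProvider" class="org.example.LdapUserDirectoryProvider">
    <property name="ldapHost" value="ldap.example.edu"/>
</bean>

<bean id="userDirectoryService" class="org.example.UserDirectoryService">
    <!-- swap the provider reference here to change authentication without recompiling -->
    <property name="provider" ref="userDirectoryProvider"/>
</bean>

Switching from LDAP to a database-backed or file-backed provider is then a matter of editing the ref attribute, not changing and recompiling code.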
Hibernate for database coupling

Hibernate is all about coupling databases to the code. Hibernate is a powerful, high-performance object/relational persistence and query service. That is to say, a designer describes Java objects in a specific structure within XML files; after reading these files, Hibernate gains the ability to save or load instances of the objects from the database. Hibernate supports complex data structures, such as Java collections and arrays of objects. Again, it is a choice of an external framework that does the programmer's dog work, mostly via XML configuration.

The many Apache frameworks

Sakai is rightfully biased towards projects associated with the Apache Software Foundation (ASF) (http://www.apache.org/). Sakai instances run within a Tomcat server, and many institutions place an Apache web server in front of the Tomcat server to deal with dishing out static content (content that does not change, such as an ordinary web page), SSL/TLS, ease of configuration, and log parsing. Further, individual internal and external frameworks make use of the Apache Commons frameworks (http://commons.apache.org/), which provide reusable libraries for all kinds of specific needs, such as validation, encoding, e-mailing, uploading files, and so on. Even if a developer does not use the Commons libraries directly, they are often called by other frameworks and have a significant impact on the wellbeing, for example the security, of a Sakai instance.

To ensure look-and-feel consistency, designers use common technologies such as Apache Velocity, Apache Wicket, Apache MyFaces (an implementation of JavaServer Faces), Reasonable Server Faces (RSF), and plain old Java Server Pages (JSP). Apache Velocity places much of the look and feel in text templates that non-programmers can manipulate with text editors. The use of Velocity is mostly superseded by JSF. However, as Sakai moves forward, technologies such as RSF and Wicket (http://wicket.apache.org/) are playing a predominant role.

Sakai uses XML as the format of choice to support much of its functionality, from configuration files, to the backing up of sites, the storage of internal data representations, RSS feeds, and so on. A lot of runtime effort goes into converting to and from XML and translating XML into other formats. Here are the gory technical details. There are two main methods for parsing XML:

You can parse (another word for process) XML into a Document Object Model (DOM) in memory, which you can later traverse and manipulate programmatically.

XML can also be parsed via an event-driven mechanism, where Java methods are called, for example, when an XML tag begins or ends, or when a tag has body content. The Simple API for XML (SAX) libraries support this second approach in Java.

Generally, it is easier to program with DOM than with SAX, but since DOM needs a model of the XML in memory, it is, by its nature, more memory-intensive. Why does that matter? In large-scale deployments, the amount of memory tends to limit a Sakai instance's performance, rather than the computational power of the servers. Therefore, as Sakai heavily uses XML, whenever possible a developer should consider using SAX and avoid keeping the whole model of an XML document in memory, as the short example below illustrates.
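Here is a minimal sketch of event-driven parsing with the SAX API that ships with the JDK. It counts the elements in a document without ever building an in-memory tree, so memory use stays flat no matter how large the file is:

import java.io.File;
import javax.xml.parsers.SAXParser;
import javax.xml.parsers.SAXParserFactory;
import org.xml.sax.Attributes;
import org.xml.sax.helpers.DefaultHandler;

public class CountElements {
    public static void main(String[] args) throws Exception {
        SAXParser parser = SAXParserFactory.newInstance().newSAXParser();
        final int[] count = {0};
        // The handler receives a callback per tag; no DOM is ever constructed
        parser.parse(new File(args[0]), new DefaultHandler() {
            @Override
            public void startElement(String uri, String localName,
                                     String qName, Attributes attributes) {
                count[0]++;
            }
        });
        System.out.println("Elements seen: " + count[0]);
    }
}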
Looking at dependencies

As Sakai adapts and expands its feature set, expect the range of external libraries to expand. The following list mentions the libraries used, links to the relevant home pages, and a very brief description of each library's functionality.

Apache Axis (http://ws.apache.org/axis/): SOAP web services.
Apache Axis2 (http://ws.apache.org/axis2): SOAP and REST web services; a total rewrite of Apache Axis. However, it is not currently used within Entity Broker, a Sakai-specific component.
Apache Commons (http://commons.apache.org): Lower-level utilities.
Batik (http://xmlgraphics.apache.org/batik/): A Java-based toolkit for applications or applets that want to use images in the Scalable Vector Graphics (SVG) format.
Commons-beanutils (http://commons.apache.org/beanutils/): Methods for Java bean manipulation.
Commons-codec (http://commons.apache.org/codec): Implementations of common encoders and decoders, such as Base64, Hex, Phonetic, and URLs.
Commons-digester (http://commons.apache.org/digester): Common methods for initializing objects from XML configuration.
Commons-httpclient (http://hc.apache.org/httpcomponents-client): Supports HTTP-based standards with the client side in mind.
Commons-logging (http://commons.apache.org/logging/): Logging support.
Commons-validator (http://commons.apache.org/validator): Support for verifying the integrity of received data.
Excalibur (http://excalibur.apache.org): Utilities.
FOP (http://xmlgraphics.apache.org/fop): Print formatting, ready for conversion to PDF and a number of other formats.
Hibernate (http://www.hibernate.org): ORM database framework.
Log4j (http://logging.apache.org/log4j): Logging.
Jackrabbit (http://jackrabbit.apache.org, http://jcp.org/en/jsr/detail?id=170): Content repository; a hierarchical content store with support for structured and unstructured content, full-text search, versioning, transactions, observation, and more.
James (http://james.apache.org): A mail server.
Java Server Faces (http://java.sun.com/javaee/javaserverfaces): Simplifies building user interfaces for JavaServer applications.
Lucene (http://lucene.apache.org): Indexing.
MyFaces (http://myfaces.apache.org): JSF implementation with implementation-specific widgets.
Pluto (http://portals.apache.org/pluto): The reference implementation of the Java Portlet Specification.
Quartz (http://www.opensymphony.com/quartz): Scheduling.
Reasonable Server Faces (RSF) (http://www2.caret.cam.ac.uk/rsfwiki): Built on the Spring framework; simplifies the building of views via XHTML.
ROME (https://rome.dev.java.net): A set of open source Java tools for parsing, generating, and publishing RSS and Atom feeds.
SAX (http://www.saxproject.org): Event-based XML parser.
Struts (http://struts.apache.org/): Heavyweight MVC framework; not used in the core of Sakai, but some components are used as part of the occasional tool.
Spring (http://www.springsource.org): Used extensively within the code base of Sakai; a broad framework designed to make building business applications simpler.
Tomcat (http://tomcat.apache.org): Servlet container.
Velocity (http://velocity.apache.org): Templating.
Wicket (http://wicket.apache.org): Web app development framework.
Xalan (http://xml.apache.org/xalan-j): An XSLT (Extensible Stylesheet Language Transformations) processor for transforming XML documents into HTML, text, or other XML document types.
Xerces (http://xerces.apache.org/xerces-j): XML parser.

For the reader who has downloaded and built Sakai from source code, you can automatically generate a list of current external dependencies via Maven. First, you will need to build the binary version and then print out the dependency report. To achieve this from within the top-level directory of the source code, you can run the following commands:

mvn -Ppack-demo install
mvn dependency:list

The list above is based on an abbreviated version of the dependency report, generated from the source code in March 2009. For those wishing to dive into the depths of Sakai, you can search the home pages mentioned above.

In summary, Spring is the most important underlying third-party framework, and Sakai spends a lot of its time manipulating XML.

Alice 3: Controlling the Behavior of Animations

Packt
18 Jul 2011
11 min read
Alice 3 Cookbook: 79 recipes to harness the power of Alice 3 for teaching students to build attractive and interactive 3D scenes and videos.

Introduction

You need to organize the statements that request the different actors to perform actions. Alice 3 provides blocks that allow us to configure the order in which many statements should be executed. This article provides many tasks that will allow us to start controlling the behavior of animations with many actors performing different actions. We will execute many actions in a specific order. We will use counters to run one or more statements many times. We will execute actions for many actors of the same class. We will run code for different actors at the same time to render complex animations.

Performing many statements in order

In this recipe, we will execute many statements for an actor in a specific order. We will add eight statements to control a sequence of movements for a bee.

Getting ready

We have to be working on a project with at least one actor, so we will create a new project and set up a simple scene with a few actors.

1. Select File | New... in the main menu to start a new project. A dialog box will display the six predefined templates with their thumbnail previews in the Templates tab.
2. Select GrassyProject.a3p as the desired template for the new project and click on OK. Alice will display a grassy ground with a light blue sky.
3. Click on Edit Scene, at the lower-right corner of the scene preview. Alice will show a bigger preview of the scene and will display the Model Gallery at the bottom.
4. Add an instance of the Bee class to the scene, and enter bee for the name of this new instance. First, Alice will create the MyBee class to extend Bee. Then, Alice will create an instance of MyBee named bee. Follow the steps explained in the Creating a new instance from a class in a gallery recipe, in the article Alice 3: Making Simple Animations with Actors.
5. Add an instance of the PurpleFlower class, and enter purpleFlower for the name of this new instance.
6. Add another instance of the PurpleFlower class, and enter purpleFlower2 for the name of this new instance. The additional flower may be placed on top of the previously added flower.
7. Add an instance of the ForestSky class to the scene.
8. Place the bee and the two flowers as shown in the next screenshot.

How to do it...

Follow these steps to execute many statements for the bee in a specific order:

1. Open an existing project with one actor added to the scene.
2. Click on Edit Code, at the lower-right corner of the big scene preview. Alice will show a smaller preview of the scene and will display the Code Editor on a panel located at the right-hand side of the main window.
3. Click on the class: MyScene drop-down list, and the list of classes that are part of the scene will appear. Select MyScene | Edit run.
4. Select the desired actor in the instance drop-down list located at the left-hand side of the main window, below the small scene preview. For example, you can select bee.
5. Make sure that part: none is selected in the drop-down list located at the right-hand side of the chosen instance.
6. Activate the Procedures tab. Alice will display the procedures for the previously selected actor.
7. Drag the pointAt procedure and drop it in the drop statement here area located below the do in order label, inside the run tab.
Because the instance name is bee, the pointAt statement contains the this.bee and pointAt labels followed by the target parameter and its question marks, ???. A list with all the possible instances to pass to the first parameter will appear. Click on this.purpleFlower. The following code will be displayed, as shown in the next screenshot:

this.bee.pointAt(this.purpleFlower)

Drag the moveTo procedure and drop it below the previously dropped procedure call. A list with all the possible instances to pass to the first parameter will appear. Select this.purpleFlower getPart ??? and then IStemMiddle_IStemTop_IHPistil_IHPetal01, as shown in the following screenshot.

Click on the more... drop-down menu button that appears at the right-hand side of the recently dropped statement. Click on duration and then on 1.0 in the cascade menu that appears. Click on the new more... drop-down menu that appears. Click on style and then on BEGIN_AND_END_ABRUPTLY. The following code will be displayed as the second statement:

this.bee.moveTo(this.purpleFlower.getPart(IStemMiddle_IStemTop_IHPistil_IHPetal01), duration: 1.0, style: BEGIN_AND_END_ABRUPTLY)

Drag the delay procedure and drop it below the previously dropped procedure call. A list with the predefined duration values to pass to the first parameter will appear. Select 2.0, and the following code will be displayed as the third statement:

this.bee.delay(2.0)

Drag the moveAwayFrom procedure and drop it below the previously dropped procedure call. Select 0.25 for the first parameter. Click on the more... drop-down menu button that appears and select this.purpleFlower getPart ??? and then IStemMiddle_IStemTop_IHPistil_IHPetal01. Click on the additional more... drop-down menu button, on duration, and then on 1.0 in the cascade menu that appears. Click on the new more... drop-down menu that appears, on style, and then on BEGIN_ABRUPTLY_AND_END_GENTLY. The following code will be displayed as the fourth statement:

this.bee.moveAwayFrom(0.25, this.purpleFlower.getPart(IStemMiddle_IStemTop_IHPistil_IHPetal01), duration: 1.0, style: BEGIN_ABRUPTLY_AND_END_GENTLY)

Drag the turnToFace procedure and drop it below the previously dropped procedure call. Select this.purpleFlower2 getPart ??? and then IStemMiddle_IStemTop_IHPistil_IHPetal05. Click on the additional more... drop-down menu button, on duration, and then on 1.0 in the cascade menu that appears. Click on the new more... drop-down menu that appears, on style, and then on BEGIN_ABRUPTLY_AND_END_GENTLY. The following code will be displayed as the fifth statement:

this.bee.turnToFace(this.purpleFlower2.getPart(IStemMiddle_IStemTop_IHPistil_IHPetal05), duration: 1.0, style: BEGIN_ABRUPTLY_AND_END_GENTLY)

Drag the moveTo procedure and drop it below the previously dropped procedure call. Select this.purpleFlower2 getPart ??? and then IStemMiddle_IStemTop_IHPistil_IHPetal05. Click on the additional more... drop-down menu button, on duration, and then on 1.0 in the cascade menu that appears. Click on the new more... drop-down menu that appears, on style, and then on BEGIN_AND_END_ABRUPTLY. The following code will be displayed as the sixth statement:

this.bee.moveTo(this.purpleFlower2.getPart(IStemMiddle_IStemTop_IHPistil_IHPetal05), duration: 1.0, style: BEGIN_AND_END_ABRUPTLY)

Drag the delay procedure and drop it below the previously dropped procedure call. A list with the predefined duration values to pass to the first parameter will appear.
Select 2.0 and the following code will be displayed as the seventh statement:

this.bee.delay(2.0)

15. Drag the move procedure and drop it below the previously dropped procedure call. Select FORWARD and then 10.0. Click on the more... drop-down menu button, on duration and then on 10.0 in the cascade menu that appears. Click on the additional more... drop-down menu that appears, on asSeenBy and then on this.bee. Click on the new more... drop-down menu that appears, on style and then on BEGIN_ABRUPTLY_AND_END_GENTLY. The following code will be displayed as the eighth and final statement. The following screenshot shows the eight statements that compose the run procedure:

this.bee.move(FORWARD, 10.0, duration: 10.0, asSeenBy: this.bee, style: BEGIN_ABRUPTLY_AND_END_GENTLY)

16. Select File | Save as... from Alice's main menu and give a new name to the project. Then you can make changes to the project according to your needs.

How it works...

When we run a project, Alice creates the scene instance, creates and initializes all the instances that compose the scene, and finally executes the run method defined in the MyScene class. By default, the statements we add to a procedure are included within the do in order block. We added eight statements to the do in order block, and therefore Alice will begin with the first statement:

this.bee.pointAt(this.purpleFlower)

Once the bee finishes executing the pointAt procedure, the execution flow goes on with the next statement specified in the do in order block. Thus, Alice will execute the following second statement after the first one finishes:

this.bee.moveTo(this.purpleFlower.getPart(IStemMiddle_IStemTop_IHPistil_IHPetal01), duration: 1.0, style: BEGIN_AND_END_ABRUPTLY)

The do in order statement encapsulates a group of statements with synchronous execution. When we add many statements within a do in order block, these statements run one after the other. Each statement requires its previous statement to finish before starting its execution, and therefore we can use the do in order block to group statements that must run in a specific order.

The moveTo procedure moves the 3D model that represents the actor until it reaches the position of the other actor. The value for the target parameter is the instance of the other actor. We want the bee to move to one of the petals of the first flower, purpleFlower, and therefore we passed this value to the target parameter:

this.purpleFlower.getPart(IStemMiddle_IStemTop_IHPistil_IHPetal01)

We called the getPart function for purpleFlower with IStemMiddle_IStemTop_IHPistil_IHPetal01 as the name of the part to return. This function allows us to retrieve one petal of the flower as an instance. We used the resulting instance as the target parameter for the moveTo procedure, which makes the bee move to that specific petal of the flower.

Once the bee finishes executing the moveTo procedure, the execution flow goes on with the next statement specified in the do in order block. Thus, Alice will execute the following third statement after the second one finishes:

this.bee.delay(2.0)

The delay procedure puts the actor to sleep in its current position for the specified number of seconds. The next statement specified in the do in order block will run after waiting for two seconds.

The statements added to the run procedure will perform the following visible actions in the specified order:

1. Point the bee at purpleFlower.
2. Begin and end abruptly a movement for the bee from its position to the petal named IStemMiddle_IStemTop_IHPistil_IHPetal01 of purpleFlower. The total duration of the animation must be 1 second.
3. Make the bee stay in its position for 2 seconds.
4. Move the bee away 0.25 units from the position of the petal named IStemMiddle_IStemTop_IHPistil_IHPetal01 of purpleFlower. Begin the movement abruptly but end it gently. The total duration of the animation must be 1 second.
5. Turn the bee to face the petal named IStemMiddle_IStemTop_IHPistil_IHPetal05 of purpleFlower2. Begin the movement abruptly but end it gently. The total duration of the animation must be 1 second.
6. Begin and end abruptly a movement for the bee from its position to the petal named IStemMiddle_IStemTop_IHPistil_IHPetal05 of purpleFlower2. The total duration of the animation must be 1 second.
7. Make the bee stay in its position for 2 seconds.
8. Move the bee forward 10 units. Begin the movement abruptly but end it gently. The total duration of the animation must be 10 seconds. The bee will disappear from the scene.

A consolidated listing of the whole procedure appears at the end of this recipe. The following image shows six frames of the rendered animation.

There's more...

When you work with the Alice code editor, you can temporarily disable statements. Alice doesn't execute the disabled statements, but you can enable them again later. It is useful to disable one or more statements when you want to test the results of running the project without them, and then enable them again to compare the results.

To disable a statement, right-click on it and deactivate the IsEnabled option, as shown in the following screenshot. The disabled statements will appear with diagonal lines and won't be considered at run-time. To enable a disabled statement, right-click on it and activate the IsEnabled option.
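For reference, here is the complete run procedure assembled in this recipe, written out as a single listing. This is a sketch in pseudocode form: the do in order braces are our own notation to make the sequencing explicit, since inside the Alice editor the do in order block is shown graphically rather than as text.

    do in order {
        this.bee.pointAt(this.purpleFlower)
        this.bee.moveTo(this.purpleFlower.getPart(IStemMiddle_IStemTop_IHPistil_IHPetal01), duration: 1.0, style: BEGIN_AND_END_ABRUPTLY)
        this.bee.delay(2.0)
        this.bee.moveAwayFrom(0.25, this.purpleFlower.getPart(IStemMiddle_IStemTop_IHPistil_IHPetal01), duration: 1.0, style: BEGIN_ABRUPTLY_AND_END_GENTLY)
        this.bee.turnToFace(this.purpleFlower2.getPart(IStemMiddle_IStemTop_IHPistil_IHPetal05), duration: 1.0, style: BEGIN_ABRUPTLY_AND_END_GENTLY)
        this.bee.moveTo(this.purpleFlower2.getPart(IStemMiddle_IStemTop_IHPistil_IHPetal05), duration: 1.0, style: BEGIN_AND_END_ABRUPTLY)
        this.bee.delay(2.0)
        this.bee.move(FORWARD, 10.0, duration: 10.0, asSeenBy: this.bee, style: BEGIN_ABRUPTLY_AND_END_GENTLY)
    }

Reading the listing from top to bottom matches the visible actions listed above, which is exactly the guarantee the do in order block provides.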
Drupal 7: Customizing an Existing Theme

Packt
15 Jul 2011
9 min read
Drupal 7 Themes

Create new themes for your Drupal 7 site with a clean layout and powerful CSS styling

With the arrival of Drupal 6, sub-theming really came to the forefront of theme design. While previously many people copied themes and then re-worked them to achieve their goals, that process became less attractive as sub-themes came into favor. This article focuses on sub-theming and how it should be used to customize an existing theme. We'll start by looking at how to set up a workspace for Drupal theming.

Setting up the workspace

Before you get too far into attempting to modify your theme files, you should put some thought into your tools. Several software applications can make your work modifying themes more efficient. Though no specific tools are required to work with Drupal themes (you could do it all with just a text editor), there are a couple of applications that you might want to consider adding to your tool kit.

The first item to consider is browser selection. Firefox has a variety of extensions that make working with themes easier. The Web Developer extension, for example, is hugely helpful when dealing with CSS and related issues. We recommend the combination of Firefox and the Web Developer extension to anyone working with Drupal themes. Another extension popular with many developers is Firebug, which is very similar to the Web Developer extension, and is indeed more powerful in several regards. Pick up Web Developer, Firebug, and other popular Firefox add-ons at https://addons.mozilla.org/en-US/firefox/.

There are also certain utilities you can add into your Drupal installation that will assist with theming the site. Two modules you definitely will want to install are Devel and Theme developer. Theme developer can save you untold hours of digging around trying to find the right function or template. When the module is active, all you need to do is click on an element and the Theme developer pop-up window will show you what is generating the element, along with other useful information such as potential template suggestions. The Devel module performs a number of functions and is a prerequisite for running Theme developer.

Download Devel from: http://drupal.org/project/devel. You can find Theme developer at: http://drupal.org/project/devel_themer.

Note that neither Devel nor Theme developer is suitable for use in a production environment: you don't want these modules installed and enabled on a client's public site, as they can present a security risk.

When it comes to working with PHP files and the various theme files, you will also need a good code editor. There's a whole world of options out there, and the right choice for you is really a personal decision. Suffice it to say: as long as you are comfortable with it, it's probably the right choice.

Setting up a local development server

Another key component of your workspace is the ability to preview your work, preferably locally. As a practical matter, previewing Drupal themes requires the use of a server; themes are difficult to preview with any accuracy without a server to execute the PHP code. While you can work on a remote server on your webhost, this is often undesirable due to latency or simple lack of availability. A quick solution to this problem is to set up a local server using something like the XAMPP package (or the MAMP package for Mac OS X). XAMPP provides a one-step installer containing everything you need to set up a server environment on your local machine (Apache, MySQL, PHP, phpMyAdmin, and more).
Visit http://www.ApacheFriends.org to download XAMPP, and you can have your own dev server set up on your local machine in no time at all. Follow these steps to acquire the XAMPP installation package and get it set up on your local machine:

1. Connect to the Internet and direct your browser to http://www.apachefriends.org.
2. Select XAMPP from the main menu.
3. Click the link labeled XAMPP for Windows.
4. Click the .zip option under the heading XAMPP for Windows. Note that you will be re-directed to the SourceForge site for the actual download.
5. When the pop-up prompts you to save the file, click OK and the installer will download to your computer.
6. Locate the downloaded archive (.zip) package on your local machine, and double-click it to extract its contents.
7. Double-click the extracted file to start the installer.
8. Follow the steps in the installer and then click Finish to close the installer.

That's all there is to it. You now have all the elements you need for your own local development server. To begin, simply open the XAMPP application and you will see buttons that allow you to start the servers. To create a new website, simply copy the files into a directory placed inside the /htdocs directory. You can then access your new site by opening the URL in your browser, as follows: http://localhost/sitedirectoryname.

As a final note, you may also want to have access to a graphics program to handle editing any image files that might be part of your theme. Again, there is a world of options out there and the right choice is up to you.

Planning the modifications

A proper dissertation on site planning and usability is beyond the scope of this article. Similarly, this article is neither an HTML nor a CSS tutorial; accordingly, we are going to focus on identifying the issues and delineating the process involved in customizing an existing theme, rather than on design techniques or coding-specific changes.

Any time you set off down the path of transforming an existing theme into something new, you need to spend some time planning. The principle here is the same as in many other areas: a little time spent planning at the front end of a project can pay off big in savings later.

When it comes to planning your theming efforts, the very first question you have to answer is whether you are going to customize an existing theme or create a new theme. In either event, it is recommended that you work with sub-themes. The key difference is the nature of the base theme you select, that is, the theme you are going to choose as your starting point.

In sub-theming, the base theme is the starting point. Sub-themes inherit the parent theme's resources; hence, the base theme you select will shape your theme building. Some base themes are extremely simple, designed to impose the fewest restrictions on the themer; others are designed to give you the widest range of resources to assist your efforts. However, since you can use any theme as a base theme, the reality is that most themes fall somewhere in between, at least in terms of their suitability for use as a base theme.

Another way to think of the relationship between a base theme and a sub-theme is in terms of a parent-child relationship. The child (sub-theme) inherits its characteristics from its parent (the base theme). There is no limit to the ability to chain together multiple parent-child relationships; a sub-theme can be the child of another sub-theme.
When it comes to customizing an existing theme, the reality is that the selection of the base theme will often be dictated by the theme's default appearance and feature set; in other words, you are likely to select the theme that is already the closest to what you want. That said, don't limit yourself to a shallow surface examination of the theme. In order to make the best decision, you need to look carefully at the underlying theme's files and structures and see if it truly is the best choice. While the original theme may be fairly close to what you want, it may also have limitations that require work to overcome. Sometimes it is actually faster to start with a more generic theme that you already know and can work with easily. Learning someone else's code is always a bit of a chore, and themes are like any other code: some are great, some are poor, most are simply okay. A best-practices theme makes your life easier.

In simplest terms, the process of customizing an existing theme can be broken into three steps:

1. Select your base theme.
2. Create a sub-theme from your base theme (a minimal sub-theme definition is sketched at the end of this section).
3. Make the changes to your new sub-theme.

Why is it not recommended to simply modify the theme directly? There are two reasons:

1. First, best practices say not to touch the original files; leave them intact so you can upgrade them without losing your customizations.
2. Second, as a matter of theming philosophy, it's better to leave the things you don't need to change in the base theme and focus your sub-theme on only the things you want to change. This approach to theming is more manageable and makes for much easier testing as you go.

Selecting a base theme

For the sake of simplicity, in this article we are going to work with the default Bartik theme. We'll take Bartik, create a new sub-theme, and then modify the sub-theme to create the customized theme. Let's call the new theme "JeanB". Note that while we've named the theme "JeanB", when it comes to naming the theme's directory, we will use "jeanb", as the system only supports lowercase letters and underscores.

In order to make the example easier to follow and to avoid the need to install a variety of third-party extensions, the modifications we will make in this article will be done using only the default components. Arguably, when you are building a site like this for deployment in the real world (rather than simply for skills development), you might wish to consider implementing one or more specialized third-party extensions to handle certain tasks.
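To make step 2 of the process concrete, here is a minimal sketch of the sub-theme definition file for JeanB. In Drupal 7, a sub-theme is declared with a .info file whose base theme entry names the parent theme; the directory path, description text, and optional stylesheet line below are illustrative choices of our own, not values taken from this article.

    ; sites/all/themes/jeanb/jeanb.info
    name        = JeanB
    description = A customized sub-theme based on Bartik.
    core        = 7.x
    base theme  = bartik

    ; Optional: register a stylesheet whose rules override the parent's CSS.
    stylesheets[all][] = css/jeanb.css

Once this file is in place, enabling JeanB on the Appearance page makes Drupal treat it as a full theme that inherits everything it does not override from Bartik.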
Drupal 7 Fields/CCK: Field Display Management

Packt
12 Jul 2011
6 min read
Drupal 7 Fields/CCK Beginner's Guide

Explore Drupal 7 fields/CCK and master their use

Field display

The purpose of managing the field display is not only to beautify the visual representation of fields, but also to affect how people read the information on a web page and the usability of a website. The design of a field display has to seem logical to users and be easy to understand. Consider an online application form where the first name field is positioned in between the state and country fields. Although the application can gather the information just fine, this would be very illogical and bothersome to our users. It goes without saying that the first name should be in the personal details section, while the state and country should go in the personal address section of the form.

Time for action – a first look at the field display settings

In this section, we will learn where to find the field display settings in Drupal:

1. Click on the Structure link on the administration menu at the top of the page.
2. Click on the Content types link on this page.
3. On the right of the table, click on the manage display link to go to the manage display administration page, where you can adjust the order and positioning of the field labels. Click on the manage display link to adjust the field display for the Recipe content type. This page lists all of the field display settings that are related to the content type we selected.

If we click on the select list for any of the labels, there are three options that we can select: Above, Inline, and <Hidden>. If we click on the select list for any of the formats, there are five options that we can select from, namely, Default, Plain text, Trimmed, Summary or trimmed, and <Hidden>. However, the options will vary with field types. In the case of the Difficulty field, a multiple values field, clicking on the select list for Format shows three options: Default, Key, and <Hidden>.

What just happened?

We have learned where to find the field display settings in Drupal, and we have taken a look at the options for the field display.

When we click on the select list for labels, there are three options that we can use to control the display of the field label:

- Above: The label will be positioned above the field widget.
- Inline: The label will be positioned to the left of the field widget.
- <Hidden>: The label will not be displayed on the page.

When we click on the select list for formats, the available options depend on the field type we select. For the Body field, we have five options that we can choose from to control the body field display:

- Default: The field content will be displayed as we specified when we created the field.
- Plain text: The field content will be displayed as plain text, ignoring any HTML tags the content contains.
- Trimmed: The field content will be truncated to a specified number of characters.
- Summary or trimmed: The summary of the field will be displayed; if there is no summary entered, the content of the field will be trimmed to a specified number of characters.
- <Hidden>: The field content will not be displayed.

Formatting field display in the Teaser view

The teaser view of content is usually the first piece of information people will see on a homepage or a landing page, so it is useful that there are options that control the display in teaser view.
For example, for the yummy recipe website, the client would like to limit the number of characters displayed in teaser view to 300, because they do not want to display too much text for each post on the homepage.

Time for action – formatting the Body field display in teaser view

In this section, we will format the Body field of the Recipe content type in teaser view:

1. Click on the Structure link on the administration menu at the top of the page.
2. Click on the Content types link on the following page.
3. Click on the manage display link to adjust the field display for the Recipe content type.
4. At the top-right of the page there are two buttons; the first one is Default, the second one is Teaser. Click on the Teaser button. This page lists all the available fields for the teaser view of the Recipe content type.
5. Click on the gear icon to update the trim length settings. Clicking on the gear icon displays the Trim length setting. The default value of Trim length is 600; change it to 300, and then click on the Update button to confirm the entered value.
6. Click on the Save button at the bottom of the page to store the value in Drupal.

If we go back to the homepage, we will see the recipe content in teaser view. It is now truncated to 300 characters.

What just happened?

We have formatted the Body field of the Recipe content type in Teaser view. Currently there are two view modes: one is the Default view mode, and the other is the Teaser view mode. When we need to format the field content in Teaser view, we have to switch to the Teaser view mode on the Manage display administration page to modify these settings.

Moreover, when entering data or updating the field display settings, we have to remember to click on the Save button at the bottom of the page to permanently store the values in Drupal. Clicking the Update button alone does not store the value in Drupal; it only confirms the value we entered. Therefore, we always need to remember to click on the Save button after updating any settings.

Furthermore, there are other fields positioned in the hidden section at the bottom of the page, which means those fields will not be shown in Teaser view. In our case, only the Body field is shown in Teaser view. We can drag and drop a field into the hidden section to hide it, or drag and drop a field above the hidden section to show it on the screen. The same trim-length change can also be applied in code, as the sketch after this section shows.
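For those who prefer to script this kind of change (for example, in a module's update hook or an installation profile), the same teaser setting can be applied through Drupal 7's Field API. The following is a minimal sketch, assuming the Recipe content type's machine name is recipe and that it uses the standard body field; field_info_instance() and field_update_instance() are core Field API functions.

    <?php
    // Load the instance definition of the body field on the Recipe content type.
    $instance = field_info_instance('node', 'body', 'recipe');

    // In the teaser view mode, use the "Summary or trimmed" formatter
    // and trim the displayed text to 300 characters.
    $instance['display']['teaser'] = array(
      'label' => 'hidden',
      'type' => 'text_summary_or_trimmed',
      'settings' => array('trim_length' => 300),
    );

    // Persist the updated display settings, as the Save button does.
    field_update_instance($instance);

Scripting the change this way keeps the configuration repeatable across development, staging, and production copies of the site.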
Drupal 7 Fields/CCK: Using the Image Field Modules

Packt
11 Jul 2011
6 min read
Drupal 7 Fields/CCK Beginner's Guide

Explore Drupal 7 fields/CCK and master their use

Adding image fields to content types

We have learned how to add file fields to content types. In this section, we will learn how to add image fields to content types so that we can attach images to our content.

Time for action – adding an image field to the Recipe content type

In this section, we will add an image field to the Recipe content type. Follow these steps:

1. Click on the Structure link in the administration menu at the top of the page.
2. Click on the Content types link to go to the content types administration page.
3. Click on the manage fields link on the Recipe row, because we would like to add an image field to the Recipe content type.
4. Locate the Add new field section. In the Label field enter "Image", and in the Field name field enter "image".
5. In the field type select list, select "Image" as the field type; the field widget will automatically switch to Image as the field widget. After the values are entered, click on Save.

What just happened?

We added an image field to the Recipe content type. The process of adding an image field to the Recipe content type is similar to the process of adding a file field, except that we selected Image as the field type and Image as the field widget. We will configure the image field in the next section.

Configuring image field settings

We have already added the image field. In this section, we will configure the image field, learn how to configure the image field settings, and understand how those settings are reflected in the image output.

Time for action – configuring an image field for the Recipe content type

In this section, we will configure the image field settings in the Recipe content type. Follow these steps:

1. After clicking on the Save button, Drupal directs us to the next page, which provides the field settings for the image field. The Upload destination option is the same as in the file field settings, and lets us decide whether image files should be public or private. In our case, we select Public files. The last option is the Default image field. We will leave this option for now, and click on the Save field settings button to go to the next step.
2. This page contains all the settings for the image field. The most common field settings are the Label field, the Required field, and the Help text field. We will leave these fields at their defaults.
3. The Allowed file extensions section is similar to the one in the file field we have already learned about. We will use the default value in this field, so we don't need to enter anything here.
4. The File directory section is also the same as the setting in the file field. Enter "image_files" in this field.
5. Enter "640" x "480" in the Maximum image resolution field and in the Minimum image resolution field, and enter "2MB" in the Maximum upload size field.
6. Check the Enable Alt field and the Enable Title field checkboxes.
7. Select thumbnail in the Preview image style select list, and select Throbber in the Progress indicator section.
8. The bottom part of this page, the image field settings section, is the same as on the previous page we just saved, so we don't need to re-enter the values. Click on the Save settings button at the bottom of the page to store all the values we entered on this page.
9. After clicking on the Save settings button, Drupal sends us back to the Manage fields administration page. The image field is now added to the Recipe content type.

What just happened?

We have added and configured an image field for the Recipe content type.

We left the default values in the Label field, the Required field, and the Help text field; they are the most common settings in fields. The Allowed file extensions section is similar to the file field's, and lets us enter the file extensions of the images that are allowed to be uploaded. The File directory field is the same as the one in the file field, and gives us the option of saving the uploaded files to a directory other than the default location of the file directory.

The Maximum image resolution field allows us to specify the maximum width and height of the images that can be uploaded. If an uploaded image is bigger than the resolution we specified, Drupal resizes it to that size. If we do not specify a size, no restriction is applied to the images. The Minimum image resolution field is the opposite: we specify the minimum width and height that an uploaded image must have. If we upload an image smaller than the minimum resolution we specified, Drupal displays an error message and rejects the upload.

The Enable Alt field and the Enable Title field checkboxes allow site administrators to enter the ALT and Title attributes of the img tag in XHTML, which can improve the accessibility and usability of a website when using images.

The Preview image style select list allows us to select which image style will be used for display while editing content. Currently it provides three image styles: thumbnail, medium, and large. The thumbnail image style is used by default. We will learn how to create a custom image style in the next section. For those who script their site builds, a code equivalent of these settings is sketched after the challenge below.

Have a go hero – adding an image field to the Cooking Tip content type

It's time for another challenge. We have added an image field to the Recipe content type. We can use the same method we have learned here to add and configure an image field for the Cooking Tip content type. Apply the same steps used to create image fields for the Recipe content type and try to understand the differences between the settings on the image field settings administration page.
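As with the trim length example in the previous article, the image field configured above can also be created in code, for example in a custom module's hook_install() implementation. The following is a minimal sketch: field_create_field() and field_create_instance() are core Field API functions, the setting keys mirror the options filled in on the settings form, and the machine names field_image and recipe are assumptions based on this walkthrough rather than values confirmed by it.

    <?php
    // Define the field itself: an image field using the default storage.
    field_create_field(array(
      'field_name' => 'field_image',
      'type' => 'image',
    ));

    // Attach the field to the Recipe content type with the settings
    // chosen on the administration pages above.
    field_create_instance(array(
      'field_name' => 'field_image',
      'entity_type' => 'node',
      'bundle' => 'recipe',
      'label' => 'Image',
      'settings' => array(
        'file_directory' => 'image_files',   // File directory
        'max_resolution' => '640x480',       // Maximum image resolution
        'min_resolution' => '640x480',       // Minimum image resolution
        'max_filesize' => '2MB',             // Maximum upload size
        'alt_field' => 1,                    // Enable Alt field
        'title_field' => 1,                  // Enable Title field
      ),
      'widget' => array(
        'type' => 'image_image',
        'settings' => array(
          'preview_image_style' => 'thumbnail',  // Preview image style
          'progress_indicator' => 'throbber',    // Progress indicator
        ),
      ),
    ));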