How-To Tutorials - Front-End Web Development

341 Articles

Scripty2 in Action

Packt
30 Apr 2010
13 min read
(For more resources on Scripty2, see here.)

Introduction to Scripty2

Some years ago the web watched, with great amazement, the birth of many JavaScript libraries. Most of them genuinely helped us developers work with JavaScript code, making life easier and coding more fun and effective. The names that usually come to mind are jQuery, MooTools, Dojo, Prototype, and surely many, many others. Among them is one called script.aculo.us. It's not only about the cool name, but also the great features it brings along with it. Now Thomas Fuchs, the developer behind script.aculo.us, has announced a newer version, this time called Scripty2. Most interesting of all, this is a full rewrite, every line of code is new, with a big emphasis on making things better, faster, and still easy to use. The three parts into which Scripty2 is divided are:

Core
Fx
Ui

We are going to take a quick glance at Fx and Ui, so you can see some of their impressive features.

First steps: downloading and placing the necessary code

In order to download the library we need to go to http://scripty2.com/, where we will see an image just like the next one. Clicking on it will result in the file being downloaded, and when the download finishes, we will be able to unzip it. Inside there are three folders, plus the necessary license documents. These folders are:

dist → the files we will need for this article.
doc → the documentation for the library, equal to the online one, but available offline. However, it's advisable to check the online documentation when possible, as it will be more up to date.
src → the source files for the library, with each part of the library in a separate file.

For the Scripty2 library to work, we will also need to include the Prototype library, which is also included in the package. But we have another option: a file called prototype.s2.min.js, which bundles both libraries. Summarizing, if we want to use the Scripty2 library we have two options: include both libraries separately or simply include prototype.s2.min.js:

<script type="text/javascript" src="js/prototype.js"></script>
<script type="text/javascript" src="js/s2.js"></script>

Note that we are including the Prototype one first, as Scripty2 needs it. The second option, and the one we are going to follow in this article, is:

<script type="text/javascript" src="js/prototype.s2.min.js"></script>

Now, let's take a look at what will be our base structure. First we will have an index.html file, with this code in it:

<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN"
  "http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd">
<html xml:lang="es-ES" lang="es-ES">
<head>
  <meta http-equiv="content-type" content="text/html; charset=utf-8" />
  <title>Scripty2</title>
  <link rel="stylesheet" href="css/reset.css" type="text/css" />
  <link rel="stylesheet" href="css/styles.css" type="text/css" />
</head>
<body>
  <script type="text/javascript" src="js/prototype.s2.min.js"></script>
</body>
</html>

Note that we have our JavaScript file placed at the end of the code; this is usually better for performance. This JavaScript file is located in a folder called js. We have also included two CSS files: styles.css, which is empty for now, and reset.css. I like to use Eric Meyer's CSS reset, which you can find here: http://meyerweb.com/eric/tools/css/reset/. And that's all we need for now. We are ready to go; we will start with the UI part.
Scripty2 UI

Scripty2 includes some very interesting UI controls, like accordion, tabs, autocompleter, and many others. I think we could start with the tabs control, as it will make a nice example.

Tabs

For this we will need to add some code to our index.html file; a good start would be something like this:

<body>
<div id="tabs_panel">
  <ul>
    <li><a href="#tab1">Sample tab</a></li>
    <li><a href="#tab2">Another tab</a></li>
    <li><a href="#tab3">And another one</a></li>
  </ul>
  <div id="tab1">1.- This will be the content for the first tab.</div>
  <div id="tab2">2.- And here we can find the content for the second one.</div>
  <div id="tab3">3.- Of course we are adding some content to the third one.</div>
</div>

What have we done here? Just three things:

First, we have created a tabs_panel div, inside which we will place the necessary code for the tabs. It will be our container.
Next, we have placed three list items, with a link element inside each one, targeting a div. Inside each link we find the title for the tab.
Finally, we have placed the final divs, with ids corresponding to the ones targeted by the previous links. It is in these divs that we will place the content for each tab.

Once we have this code in place we need something more to do the work, as this alone won't do anything. We need to make the necessary Scripty2 function call:

...
<div id="tab3">3.- Of course we are adding some content to the third one.</div>
</div>
<script type="text/javascript" src="./js/prototype.s2.min.js"></script>
<script type="text/javascript">
  new S2.UI.Tabs('tabs_panel');
</script>

Easy, isn't it? We just need to add the call new S2.UI.Tabs('tabs_panel'); which targets our previously created div. Would this be enough? Let's take a look: it seems nothing has happened, but that's far from true; if we check our page using Firebug, we will see something like the next image.

Want to learn more about Firebug? Check this Packt article: http://www.packtpub.com/article/installation-and-getting-started-with-firebug.

As we can see in the image, a whole bunch of CSS classes have been added to our quite simple HTML code. These classes are responsible for making the tabs work. Does that mean that we have to create all of them? Well, not really. Luckily for us, we can use jQuery UI themes for this. Just go to this URL: http://jqueryui.com/themeroller/ and download your favourite one from the gallery panel. For example, I'm going to download the Hot sneaks one.

Once downloaded, we will be able to find the styles we need inside the packaged file. If we unzip the file we will see these folders:

css
development-bundle
js
index.html

Opening the css folder we will see a folder called hot-sneaks, or the name of the theme you have downloaded. We will copy the entire folder into our own css folder, so we will have this structure:

css
  hot-sneaks
  reset.css
  styles.css

Inside the hot-sneaks folder there's a file called jquery-ui-1.8.custom.css; we need to link this file in our index.html, so we will add these modifications:

...
<title>Scripty2</title>
<link rel="stylesheet" href="css/reset.css" type="text/css" />
<link rel="stylesheet" href="css/hot-sneaks/jquery-ui-1.8.custom.css" type="text/css" />
<link rel="stylesheet" href="css/styles.css" type="text/css" />
...

But before taking a look at the result of these changes, we still need to make some modifications, this time in our own styles.css file:

body{
  padding: 10px;
}
#tabs_panel{
  width: 350px;
  font-size: 12px;
}

And we are done!
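For reference, here is roughly how the whole index.html could look at this point, assembling the snippets above into one file. This is a sketch only; it assumes the js and css folder layout described earlier in the article:

<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN"
  "http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd">
<html xml:lang="es-ES" lang="es-ES">
<head>
  <meta http-equiv="content-type" content="text/html; charset=utf-8" />
  <title>Scripty2</title>
  <link rel="stylesheet" href="css/reset.css" type="text/css" />
  <link rel="stylesheet" href="css/hot-sneaks/jquery-ui-1.8.custom.css" type="text/css" />
  <link rel="stylesheet" href="css/styles.css" type="text/css" />
</head>
<body>
  <div id="tabs_panel">
    <ul>
      <li><a href="#tab1">Sample tab</a></li>
      <li><a href="#tab2">Another tab</a></li>
      <li><a href="#tab3">And another one</a></li>
    </ul>
    <div id="tab1">1.- This will be the content for the first tab.</div>
    <div id="tab2">2.- And here we can find the content for the second one.</div>
    <div id="tab3">3.- Of course we are adding some content to the third one.</div>
  </div>
  <!-- Scripts at the end of the body, as recommended above -->
  <script type="text/javascript" src="js/prototype.s2.min.js"></script>
  <script type="text/javascript">
    new S2.UI.Tabs('tabs_panel');
  </script>
</body>
</html>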
Our site will look mostly like this: In the image, we can see the three possible states of the tabs:

Normal tab
Active tab
Hover tab

It was easy to achieve, wasn't it? The next example will be a text autocompleter, so stay with us!

Text autocompleter

In this example we are going to use another of Scripty2's nice features, this time to build a text autocompleter. This can be used to enhance site search, and it's pretty easy to achieve, thanks to Scripty2. First we need to add the necessary markup to our index.html file:

...
<div id="tab3">3.- Of course we are adding some content to the third one.</div>
</div>
<br/><br/>
<div id="text_autocompleter">
  <input type="text" name="demo" />
</div>
...

Not much added here, just another container div and an input, so we can write in it. We now need our JavaScript code to make this work:

new S2.UI.Tabs('tabs_panel');
var favourite = [
  'PHP', 'Ruby', 'Python', '.NET', 'JavaScript', 'CSS', 'HTML', 'Java'
];
new S2.UI.Autocompleter('text_autocompleter', {
  choices: favourite
});
</script>
...

First we create an array of possible values, and then we call the Autocompleter method with two parameters: the div we are targeting, and an options object containing the array of values. We are also going to modify our styles.css file, just to add some styling to our text_autocompleter div:

...
#tabs_panel, #text_autocompleter{
  width: 350px;
...

If we check our page after these changes, it will look like this: If we try to enter some text, like a p in the example, we will see how options appear in the box under the input. If we click on an option, the input box will be filled. Just after we select our desired option the suggestions panel will disappear, as it will if we click outside the input box. Note that if the theme we are using lacks the ui-helper-hidden class, the suggestions panel won't disappear. But don't worry, solving this is as easy as adding this class to our styles.css file:

.ui-helper-hidden{
  visibility: hidden;
}

And we are done. Now let's see an example of the accordion control.

Accordion

This is quite similar to the tabs example, and quite easy too. First, as always, we are going to add some HTML markup to our index.html file:

<div id="accordion">
  <h3><a href="#">Sample tab</a></h3>
  <div>
    1.- This will be the content for the first tab.
  </div>
  <h3><a href="#">Another tab</a></h3>
  <div>
    2.- And here we can find the content for the second one.
  </div>
  <h3><a href="#">And another one</a></h3>
  <div>
    3.- Of course we are adding some content to the third one.
  </div>
</div>

Good, this will be enough for now. We have a container div where we place the necessary elements: each h3 element, with a link inside, will be a heading, and the divs will be the contents for each tab. Let's add some styles in our styles.css file:

#tabs_panel, #text_autocompleter, #accordion{
  width: 350px;
  font-size: 12px;
}
#accordion h3 a{
  padding-left: 30px;
}

Now to the JavaScript code, which we will add in our index.html file:

...
new S2.UI.Accordion('accordion');
</script>
...

The first parameter is the id of our container div; for now, we don't need anything more. How does all this look? Just take a look: Clicking on each one of the headings will close the current tab and open the clicked one. But what if we want to be able to open each clicked tab without closing the others? Well, thanks to Scripty2 we can also achieve that.
We only need to make a small modification to the JavaScript call:

...
new S2.UI.Accordion('accordion', {
  multiple: true
});
</script>
...

As we can see, the second parameter of our accordion call can receive some options; this time we are setting multiple to true. This way our accordion tabs won't close. In the previous image we can see all our tabs open, but there are some more options, so let's see them.

The first one will help us define our preferred header selector. As our code stands now, we are using h3 elements:

<h3><a href="#">Sample tab</a></h3>
<div>
  1.- This will be the content for the first tab.
</div>

But what if we wanted to use h1 elements? Well, it won't be very hard, just a tiny addition to our JavaScript code:

new S2.UI.Accordion('accordion', {
  multiple: true,
  headerSelector: 'h1'
});

The last option we are going to see is the icons one. By default, this option uses these values:

icons: {
  header: 'ui-icon-triangle-1-e',
  headerSelected: 'ui-icon-triangle-1-s'
}

Where do these icons come from? These little icons are from the theme we downloaded, and we have plenty of them to use. If we open the theme package, the one we downloaded at the start of the article, and click on its index.html file, we will be able to see a demo of all the styles included in the package. More or less at the bottom we will see a group of tiny icons. If we hover over these little icons, their names appear, and those are the names we can use to change our options. So in our index.html file, we could change our JavaScript code like this:

new S2.UI.Accordion('accordion', {
  multiple: true,
  headerSelector: 'h1',
  icons: {
    header: 'ui-icon-circle-plus',
    headerSelected: 'ui-icon-circle-minus'
  }
});

We define one icon for the headers and another for the selected one. With this option we have seen all three available options; with them we can customize our accordion as we wish.

Summarizing, we have found that the Scripty2 library includes some very useful UI controls. We have seen some of them, but there are many others, such as:

Buttons → Scripty2 helps us create good-looking buttons; not only normal buttons, but also buttons that behave as checkboxes or even radio buttons.
Dialog → There are also functions in the Scripty2 library that will help us create modal dialog boxes with the contents we want.
Slider → If at any time we need to create a slider, be it for moving the contents of a div, for creating an image gallery, or for building an interesting price filter, it is pretty easy with Scripty2.
Progress bar → This one is pretty interesting, as it will help us develop an animated progress bar. Very nice!

Now we will take a look at another interesting part of the library, the FX one.


haXe 2: Using Templates

Packt
25 Jul 2011
10 min read
haXe 2 Beginner's Guide: Develop exciting applications with this multi-platform programming language

Introduction to the haxe.Template class

As developers, our job is to create programs that allow the manipulation of data. That's the basis of our job, but beyond that, we must also be able to present that data to the user. Programs without a user interface do exist, but since you are reading this article about haXe, there is a good chance that you are mostly interested in web applications, and almost all web applications have a user interface of some kind. Templates can, however, also be used to create XML documents, for example.

The haXe library comes with the haxe.Template class. This class allows for basic, yet quite powerful, templating: as we will see, it is not only possible to pass some data to it, but also possible to call some code from a template. Templates are particularly useful when you have to present data; you can, for example, define a template to display data about a user and then iterate over a list of users, displaying this template for each one. We will see how this is possible during this article, and we will see what else you can do with templates. We will also see that it is possible to change what is displayed depending on the data, and that it is easy to do some quite common things, such as having a different style for one row out of two in a table.

The haxe.Template class is really easy to use: you just have to create an instance of it, passing it a String that contains your template's code as a parameter. Then it is as easy as calling the execute method and giving it some data to display. Let's see a simple example:

class TestTemplate
{
  public static function main(): Void
  {
    var myTemplate = new haxe.Template("Hi. ::user::");
    neko.Lib.println(myTemplate.execute({user : "Benjamin"}));
  }
}

This simple code will output "Hi. Benjamin". This is because we have passed an anonymous object as a context, with a "user" property that has "Benjamin" as its value. Obviously, you can pass objects with several properties. Moreover, as we will see, it is even possible to pass complex structures and use them. In addition, we certainly won't be hard-coding our templates into our haXe code. Most of the time, you will want to load them from a resource compiled into your executable by calling haxe.Resource.getString, or load them directly from the filesystem or from a database.

Printing a value

As we've seen in the preceding sample, we have to surround an expression with :: in order to print its value. Expressions can take several forms:

::variableName:: → the value of the variable.
::(123):: → the integer 123. Note that only integers are allowed.
::e1 operator e2:: → applies the operator to e1 and e2 and returns the resulting value. The syntax doesn't manage operator precedence, so you should wrap expressions inside parentheses.
::e.field:: → accesses the field and returns its value. Be warned that this doesn't work with properties' getters and setters, as these are a compile-time-only feature.

Branching

The syntax offers if, else, and elseif:

class TestTemplate
{
  public static function main(): Void
  {
    var templateCode = "::if (sex==0):: Male ::elseif (sex==1):: Female ::else:: Unknown ::end::";
    var myTemplate = new haxe.Template(templateCode);
    neko.Lib.print(myTemplate.execute({user : "Benjamin", sex:0}));
  }
}

Here the output will be Male.
But if the sex property of the context were set to 1 it would print Female; if it is something else, it will print "Unknown". Note that our keywords are surrounded by :: (so the interpreter won't think they are just some raw text to be printed). Also note the "end" keyword, which is needed since we do not use braces.

Using lists, arrays, and other iterables

The template engine allows us to iterate over an iterable and repeat a part of the template for each object in it. This is done using the ::foreach:: keyword. When iterating, the context is modified and becomes the object that is currently selected in the iterable. It is also possible to access this object (indeed, the context's value) by using the __current__ variable. Let's see an example:

class Main
{
  public static function main()
  {
    //Let's create two departments:
    var itDep = new Department("Information Technologies Dept.");
    var financeDep = new Department("Finance Dept.");

    //Create some users and add them to their department
    var it1 = new Person();
    it1.lastName = "Par";
    it1.firstName = "John";
    it1.age = 22;

    var it2 = new Person();
    it2.lastName = "Bear";
    it2.firstName = "Caroline";
    it2.age = 40;

    itDep.workers.add(it1);
    itDep.workers.add(it2);

    var fin1 = new Person();
    fin1.lastName = "Ha";
    fin1.firstName = "Trevis";
    fin1.age = 43;

    var fin2 = new Person();
    fin2.lastName = "Camille";
    fin2.firstName = "Unprobable";
    fin2.age = 70;

    financeDep.workers.add(fin1);
    financeDep.workers.add(fin2);

    //Put our departments inside a List:
    var depts = new List<Department>();
    depts.add(itDep);
    depts.add(financeDep);

    //Load our template from Resource:
    var templateCode = haxe.Resource.getString("DeptsList");

    //Execute it
    var template = new haxe.Template(templateCode);
    neko.Lib.print(template.execute({depts: depts}));
  }
}

class Person
{
  public var lastName : String;
  public var firstName : String;
  public var age : Int;

  public function new()
  {
  }
}

class Department
{
  public var name : String;
  public var workers : List<Person>;

  public function new(name : String)
  {
    workers = new List<Person>();
    this.name = name;
  }
}

In this part of the code we are simply creating two departments and some persons, and adding those persons to those departments. Now we want to display the list of departments and all of the employees that work in them. So, let's write a simple template (you can save this file as DeptsList.template):

<html>
<head>
<title>Workers</title>
</head>
<body>
::foreach depts::
<h1>::name::</h1>
<table>
::foreach workers::
<tr>
<td>::firstName::</td>
<td>::lastName::</td>
<td>::if (age < 35)::Junior::elseif (age < 58)::Senior::else::Retired::end::</td>
</tr>
::end::
</table>
::end::
</body>
</html>

When compiling your code you should add the following directive:

-resource DeptsList.template@DeptsList

The following is the output you will get:

<html>
<head>
<title>Workers</title>
</head>
<body>
<h1>Information Technologies Dept.</h1>
<table>
<tr>
<td>John</td>
<td>Par</td>
<td>Junior</td>
</tr>
<tr>
<td>Caroline</td>
<td>Bear</td>
<td>Senior</td>
</tr>
</table>
<h1>Finance Dept.</h1>
<table>
<tr>
<td>Trevis</td>
<td>Ha</td>
<td>Senior</td>
</tr>
<tr>
<td>Unprobable</td>
<td>Camille</td>
<td>Retired</td>
</tr>
</table>
</body>
</html>

As you can see, this is indeed pretty simple once you have your data structure in place.

Time for action – executing code from a template

Even though templates can't contain haXe code, they can make calls to so-called "template macros".
Macros are defined by the developer and, just like data, they are passed to the template's execute function. In fact, they are passed in exactly the same way, but as the second parameter. Calling them is quite easy: instead of surrounding them with :: we simply prefix them with $$, and we can pass them parameters inside parentheses. So, let's take our preceding sample and add a macro to display the number of workers in a department. First, let's add the function to our Main class:

public static function displayNumberOfWorkers(resolve : String->Dynamic, department : Department)
{
  return department.workers.length + " workers";
}

Note that the first argument the macro receives is a function that takes a String and returns a Dynamic. This function allows you to retrieve the value of an expression in the context from which the macro has been called. The other parameters are simply the parameters that the template passes to the macro. So, let's add a call to our macro:

<html>
<head>
</head>
<body>
::foreach depts::
<h1>::name:: ($$displayNumberOfWorkers(::__current__::))</h1>
<table>
::foreach workers::
<tr>
<td>::firstName::</td>
<td>::lastName::</td>
<td>::if (sex==0)::M::elseif (sex==1)::F::else::?::end::</td>
</tr>
::end::
</table>
::end::
</body>
</html>

As you can see, we pass the current department to the macro when calling it to display the number of workers. Here is what you get:

<html>
<head>
</head>
<body>
<h1>Information Technologies Dept. (2 workers)</h1>
<table>
<tr>
<td>John</td>
<td>Par</td>
<td>M</td>
</tr>
<tr>
<td>Caroline</td>
<td>Bear</td>
<td>F</td>
</tr>
</table>
<h1>Finance Dept. (2 workers)</h1>
<table>
<tr>
<td>Trevis</td>
<td>Ha</td>
<td>M</td>
</tr>
<tr>
<td>Unprobable</td>
<td>Camille</td>
<td>?</td>
</tr>
</table>
</body>
</html>

What just happened?

We wrote the displayNumberOfWorkers macro and added a call to it in the template. As a result, we've been able to display the number of workers in a department.

Integrating subtemplates

Sub-templates do not exist as such in the templating system, but you can include sub-templates in a main template, which is not a rare process. Some frameworks, and not only in haXe, have even made this standard behavior. There are two ways of doing this:

Execute the sub-template, store its return value, and pass it as a property to the main template when executing it.
Create a macro to execute the sub-template and return its value. This way you just have to call the macro whenever you want to include your sub-template in your main template.

A small sketch of the first approach appears at the end of this excerpt.

Creating a blog's front page

In this section, we are going to create a front page for a blog by using the haxe.Template class. We will also use the SPOD system to retrieve posts from the database.
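As promised above, here is a small sketch of the first way of integrating a sub-template: execute it on its own, then pass its output to the main template as an ordinary property. The template strings and names here are illustrative only, not taken from the book:

class SubTemplateDemo
{
  public static function main() : Void
  {
    // Render the sub-template on its own...
    var header = new haxe.Template("<h1>::title::</h1>");
    var headerHtml = header.execute({title : "Workers"});

    // ...then hand the resulting String to the main template as a plain property.
    var page = new haxe.Template("<body>::header:: <p>::content::</p></body>");
    neko.Lib.println(page.execute({header : headerHtml, content : "Main content goes here."}));
  }
}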


So, what is Play?

Packt
14 Jun 2013
11 min read
(For more resources related to this topic, see here.)

Quick start – Creating your first Play application

Now that we have a working Play installation in place, we will see how easy it is to create and run a new application with just a few keystrokes. Besides walking through the structure of our Play application, we will also look at what we can do with the command-line interface of Play and how quickly modifications of our application are made visible. Finally, we will take a look at the setup of integrated development environments (IDEs).

Step 1 – Creating a new Play application

So, let's create our first Play application. In fact, we create two applications: because Play comes with APIs for both Java and Scala, the sample accompanying us in this book is implemented twice, once in each language. Please note that it is generally possible to use both languages in one project. Following the DRY principle, we will show code only once if it is the same for the Java and the Scala application; in such cases we will use the play-starter-scala project.

First, we create the Java application. Open a command line and change to a directory where you want to place the project contents. Run the play script with the new command followed by the application name (which is used as the directory name for our project):

$ play new play-starter-java

We are asked to provide two additional pieces of information:

The application name, for display purposes. Just press the Enter key here to use the same name we passed to the play script. You can change the name later by editing the appName variable in play-starter-java/project/Build.scala.
The template we want to use for the application. Here we choose 2 for Java.

Repeat these steps for our Scala application, but now choose 1 for the Scala template. Please note the difference in the application name:

$ play new play-starter-scala

The following screenshot shows the output of the play new command. On our way through the next sections, we will build an ongoing example step by step. We will see Java and Scala code side by side, so create both projects if you want to find out more about the differences between Java- and Scala-based Play applications.

Structure of a Play application

Physically, a Play application consists of a series of folders containing source code, configuration files, and web page resources. The play new command creates the standardized directory structure for these files:

/path/to/play-starter-scala
├── app           source code
│   ├── controllers   HTTP request processors
│   └── views         templates for HTML files
├── conf          configuration files
├── project       sbt project definition
├── public        folder containing static assets
│   ├── images        images
│   ├── javascripts   JavaScript files
│   └── stylesheets   CSS style sheets
└── test          source code of test cases

During development, Play generates several other directories, which can be ignored, especially when using a version control system:

/path/to/play-starter-scala
├── dist          releases in .zip format
├── logs          log files
├── project       THIS FOLDER IS NEEDED
│   ├── project       but this...
│   └── target        ...and this can be ignored
└── target        generated sources and binaries

There are more folders that can be found in a Play application depending on the IDE we use. In particular, a Play project has optional folders on more involved topics we do not discuss in this book. Please refer to the Play documentation for more details.

The app/ folder

The app/ folder contains the source code of our application.
According to the MVC architectural pattern, we have three separate components in the form of the following directories:

app/models/: This directory is not generated by default, but it is very likely to be present in a Play application. It contains the business logic of the application, for example, querying or calculating data.
app/views/: In this directory we find the view templates. Play's view templates are basically HTML files with dynamic parts.
app/controllers/: The controllers contain the application-specific logic, for example, processing HTTP requests and error handling.

The default directory (or package) names models, views, and controllers can be changed if needed.

The conf/ directory

The conf/ directory is the place where the application's configuration files are placed. There are two main configuration files:

application.conf: This file contains standard configuration parameters.
routes: This file defines the HTTP interface of the application (a sample routes file is sketched a little further below).

The application.conf file is the best place to add more configuration options if needed for our application. Configuration files for third-party libraries should also be put in the conf/ directory or an appropriate sub-directory of it.

The project/ folder

Play builds applications with the Simple Build Tool (SBT). The project/ folder contains the SBT build definitions:

Build.scala: This is the application's build script, executed by SBT.
build.properties: This file contains properties such as the SBT version.
plugins.sbt: This file contains the SBT plugins used by the project.

The public/ folder

Static web resources are placed in the public/ folder. Play offers standard sub-directories for images, CSS stylesheets, and JavaScript files. Use these directories to keep your Play applications consistent. Create additional sub-directories of public/ for third-party libraries, for clear resource management and to avoid file name clashes.

The test/ folder

Finally, the test/ folder contains unit tests or functional tests. This code is not distributed with a release of our application.

Step 2 – Using the Play console

Play provides a command-line interface (CLI), the so-called Play console. It is based on SBT and provides several commands to manage our application's development cycle.

Starting our application

To enter the Play console, open a shell, change to the root directory of one of our Play projects, and run the play script:

$ cd /path/to/play-starter-scala
$ play

On the Play console, type run to run our application in development (DEV) mode:

[play-starter-scala] $ run

Use ~run instead of run to enable automatic compilation of file changes. This gives us an additional performance boost when accessing our application during development, and it is recommended by the author. All console commands can be called directly on the command line by running play <command>. Multiple arguments have to be put in quotation marks, for example, play "~run 9001".

A web server is started by Play, which will listen for HTTP requests on localhost:9000 by default. Now open a web browser and go to this location. The page displayed by the web browser is the default implementation of a new Play application. To return to our shell, press Ctrl + D to stop the web server and get back to the Play console.
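Here is the sample routes file promised above. A freshly generated application ships with something roughly like this (a sketch; the exact comments and entries vary between Play versions, and the Scala template omits the trailing parentheses on the controller call):

# Home page
GET     /                           controllers.Application.index()

# Map static resources from the /public folder to the /assets URL path
GET     /assets/*file               controllers.Assets.at(path="/public", file)

Each line maps an HTTP verb and a URL pattern to a controller action. Play compiles this file, so mistakes in it surface as compilation errors rather than runtime surprises.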
Play console commands

Besides run, we typically use the following console commands during development:

clean: This command deletes cached files, generated sources, and compiled classes.
compile: This command compiles the current application.
test: This command executes unit tests and functional tests.

We get a list of available commands by typing help play in the Play console. A release of an application is started with the start command in production (PROD) mode; in contrast to the DEV mode, no internal state is displayed in the case of an error. There are also commands of the play script that are available only on the command line:

clean-all: This command deletes all generated directories, including the logs.
debug: This command runs the Play console in debug mode, listening on the JPDA port 9999. Setting the environment variable JPDA_PORT changes the port.
stop: This command stops an application that is running in production mode.

Closing the console

We exit the Play console and get back to the command line with the exit command or by simply pressing Ctrl + D.

Step 3 – Modifying our application

We now come to the part that we love the most as impatient developers: the rapid development turnaround cycles. In the following sections, we will make some changes to the given code of our new application and see how quickly they become visible.

Fast turnaround – change your code and hit reload!

First we have to ensure that our applications are running. In the root of each of our Java and Scala projects, we start the Play console. We start our Play applications in parallel on two different ports, to compare them side by side, with the commands ~run and ~run 9001. We go to the browser and load both locations, localhost:9000 and localhost:9001. Then we open the default controller, app/controllers/Application.java and app/controllers/Application.scala respectively, which was created along with the application, in a text editor of our choice, and change the message to be displayed. In the Java code:

public class Application extends Controller {
  public static Result index() {
    return ok(index.render("Look ma! No restart!"));
  }
}

and then in the Scala code:

object Application extends Controller {
  def index = Action {
    Ok(views.html.index("Look ma! No restart!"))
  }
}

Finally, we reload our web pages and immediately see the changes. That's it. We don't have to restart our server or re-deploy our application. The code changes take effect by simply reloading the page.

Step 4 – Setting up your preferred IDE

Play takes care of automatically compiling the modifications we make to our source code. That is why we don't need a full-blown IDE to develop Play applications; we can use a simple text editor instead. However, using an IDE has many advantages, such as code completion, refactoring assistance, and debugging capabilities. It also makes it very easy to navigate through the code. Therefore, Play has built-in project generation support for two of the most popular IDEs: IntelliJ IDEA and Eclipse.

IntelliJ IDEA

The free edition, IntelliJ IDEA Community, can be used to develop Play projects. However, the commercial release, IntelliJ IDEA Ultimate, includes Play 2.0 support for Java and Scala.
Currently, it offers the most sophisticated features compared to other IDEs. More information can be found here: http://www.jetbrains.com/idea and also here: http://confluence.jetbrains.com/display/IntelliJIDEA/Play+Framework+2.0

We generate the required IntelliJ IDEA project files by typing the idea command on the Play console or by running it on the command line:

$ play idea

We can also download the available source JAR files by running idea with-source=true on the console or on the command line:

$ play "idea with-source=true"

After that, the project can be imported into IntelliJ IDEA. Make sure you have the IDE plugins Scala, SBT, and Play 2 (if available) installed. The project files have to be regenerated by running play idea every time the classpath changes, for example, when adding or changing project dependencies. IntelliJ IDEA will recognize the changes and reload the project automatically. The generated files should not be checked into a version control system, as they are specific to the current environment.

Eclipse

Eclipse is also supported by Play. The Eclipse Classic edition is fine, and can be downloaded here: http://www.eclipse.org/downloads. It is recommended to install the Scala IDE plugin, which comes with great features for Scala developers and can be downloaded here: http://scala-ide.org. You need to download version 2.1.0 (milestone) or higher to get Scala 2.10 support for Play 2.1. A Play 2 plugin also exists for Eclipse, but it is at a very early stage; it will be available in a future release of the Scala IDE. More information can be found here: https://github.com/scala-ide/scala-ide-play2/wiki

The best way to edit Play templates with Eclipse currently is by associating HTML files with the Scala Script Editor. You get this editor by installing the Scala Worksheet plugin, which is bundled with the Scala IDE. We generate the required Eclipse project files by typing the eclipse command on the Play console or by running it on the command line:

$ play eclipse

Analogous to the previous code, we can also download the available source JAR files by running eclipse with-source=true on the console or on the command line:

$ play "eclipse with-source=true"

As before, don't check generated project files into a version control system, and regenerate the project files whenever dependencies change. Eclipse (Juno) recognizes the changed project files automatically.

Other IDEs

Other IDEs are not supported by Play out of the box. There are a couple of plugins which can be configured manually. For more information on this topic, please consult the Play documentation.

Summary

We saw how easy it is to create and run a new application with just a few keystrokes. Besides walking through the structure of our Play application, we also looked at what we can do with the command-line interface of Play and how quickly modifications of our application are made visible. Finally, we looked at the setup of integrated development environments (IDEs).

Resources for Article: Further resources on this subject: Play! Framework 2 – Dealing with Content [Article], Play Framework: Data Validation Using Controllers [Article], Play Framework: Binding and Validating Objects and Rendering JSON Output [Article]


User Authentication with Codeigniter 1.7 using Facebook Connect

Packt
21 May 2010
8 min read
(Read more interesting articles on CodeIgniter 1.7 Professional Development here.)

Registering a Facebook application

You need to register a new Facebook application so that you can get an API key and an Application Secret key. Head on over to www.facebook.com/developers/ and click on the Set up New Application button in the upper right-hand corner. This process is very similar to setting up a new Twitter application, which we covered in the previous article, so I won't bore you with all of the details. Once you've done that, you should have your API key and Application Secret key. These two things will enable Facebook to recognize your application.

Download the client library

When you are on your application's page showing all of its information, scroll down the page to find a link to download the client library. Once you've downloaded it, simply untar it. There are two folders inside the facebook-platform folder, footprints and php. We are only going to be using the php folder. Open up the php folder; there are two files here that we don't need, facebook_desktop.php and facebook_mobile.php, so you can delete them. Finally, we can copy this folder into our application. Place it in the system/application/libraries folder, and then rename the folder to facebook. This helps us to keep our code tidy and properly sorted.

Our CodeIgniter wrapper

Before we start coding, we need to know what we have to build in order to make the Facebook client library work with our CodeIgniter installation. Our wrapper library needs to instantiate the Facebook class with our API key and Application Secret key. We'll also want it to create a session for the user when they are logged in. If a session is found but the user is not authenticated, we will need to destroy the session. You should create a new file in the system/application/libraries/ folder, called Facebook_connect.php. This is where the library code given next should be placed.

Base class

The base class for our Facebook Connect wrapper library is very simple:

<?php
require_once(APPPATH . 'libraries/facebook/facebook.php');

class Facebook_connect
{
    var $CI;
    var $connection;
    var $api_key;
    var $secret_key;
    var $user;
    var $user_id;
    var $client;
}
?>

The first thing that our library needs to do is to load the Facebook library (the one we downloaded from facebook.com). We build the path for this by using APPPATH, a constant defined by CodeIgniter as the path of the application folder. Then, in our class we have a set of variables. The $CI variable is the variable in which we will store the CodeIgniter super object; this allows us to load CodeIgniter resources (libraries, models, views, and so on) in our library. We'll only be using this to load and use the CodeIgniter Session library, however. The $connection variable will contain the instance of the Facebook class. This will allow us to grab any necessary user data and perform any operations that we like, such as updating a user's status or sending a message to one of their friends. The next few variables are pretty self-explanatory: they will hold our API key and Secret key. The $user variable will be used to store all of the information about our user, including general details such as their profile URL and their name. The $user_id variable will be used to store the user ID of our user.
Finally, the $client variable is used to store general information about our connection to Facebook, including the username of the user currently using the connection, among other things such as the server addresses to query for things like photos.

Class constructor

Our class constructor has to do a few things in order to allow us to authenticate our users using Facebook Connect. Here's the code:

function Facebook_connect($data)
{
    $this->CI =& get_instance();
    $this->CI->load->library('session');

    $this->api_key    = $data['api_key'];
    $this->secret_key = $data['secret_key'];

    $this->connection = new Facebook($this->api_key, $this->secret_key);

    $this->client  = $this->connection->api_client;
    $this->user_id = $this->connection->get_loggedin_user();

    $this->_session();
}

The first line in our function may be new to everyone reading this article. The function get_instance() allows us to assign the CodeIgniter super object by reference to a local variable. This allows us to use all of CodeIgniter's syntax for loading libraries and so on, but instead of using $this->load we use $this->CI->load. And of course it doesn't just give us the Loader; it allows us to use any CodeIgniter resource, as we normally would inside a controller or a model. The next line of code gives us a brilliant example of this: we're loading the Session library using the variable $this->CI rather than the usual $this.

The next two lines simply set the values of the API key and Application Secret key into class variables so that we can reference them throughout the whole class. The $data array is passed into the constructor when we load the library in our controller; more on that when we get there (a rough sketch of that controller code is included at the end of this excerpt).

Next up, we create a new instance of the Facebook class (contained within the Facebook library that we include before our own class code) and we pass the API key and Application Secret key through to the class instance. This is assigned to the class variable $this->connection, so that we can easily refer to it anywhere in the class. The next two lines pick out specific parts of the overall Facebook instance. All of the client details and the data that help us when using the connection are stored in a class variable, in order to make them more accessible; we store the client details in the variable $this->client. The next line of code stores the logged-in user's ID, provided to us by the Facebook class. We store this in a class variable for the same reason as storing the client data: it makes it easier to get to. We store this data in $this->user_id.

The last line of the constructor calls a function inside our class. The underscore at the beginning tells CodeIgniter that we only want to be able to use this function inside this class, so you couldn't use it in a controller, for example. I'll go over this function shortly.

_session()

This function manages the user's CodeIgniter session.
Take a look at the following code:

function _session()
{
    $user = $this->CI->session->userdata('facebook_user');

    if($user === FALSE && $this->user_id !== NULL)
    {
        $profile_data = array('uid', 'first_name', 'last_name', 'name',
                              'locale', 'pic_square', 'profile_url');
        $info = $this->connection->api_client->
                    users_getInfo($this->user_id, $profile_data);
        $user = $info[0];

        $this->CI->session->set_userdata('facebook_user', $user);
    }
    elseif($user !== FALSE && $this->user_id === NULL)
    {
        $this->CI->session->sess_destroy();
    }

    if($user !== FALSE)
    {
        $this->user = $user;
    }
}

This function initially creates a variable and sets its value to the session data held by the CodeIgniter Session library. Then we check whether the session is empty while $this->user_id is set; this means the user has been authenticated by Facebook Connect but does not have a CodeIgniter session yet. So we create an array of the fields that we want to get back from the Facebook class, use the users_getInfo() function provided by the class to fetch them, store this data in the $user variable, and create a new session for the user. The next check we do is the reverse: if the $user variable is not empty but $this->user_id is, then the user is no longer authenticated on Facebook's side, so we should destroy the session. We do this using sess_destroy(), a function built into the Session library. Finally, we check whether the $user variable is not equal to FALSE; if it passes this check, we set the $this->user class variable to the value of the local $user variable.
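As promised above, here is a rough sketch of how the wrapper could be loaded from a controller. This part is not shown in the excerpt, so the controller name and key values are hypothetical; only the loader call and the property names follow the code above:

<?php
class Welcome extends Controller {

    function index()
    {
        // Hypothetical keys: use the API key and Application Secret key
        // obtained when registering the Facebook application.
        $data = array(
            'api_key'    => 'YOUR_API_KEY',
            'secret_key' => 'YOUR_APPLICATION_SECRET_KEY'
        );

        // CodeIgniter 1.7 passes the second argument of load->library()
        // to the library's constructor, which is what Facebook_connect expects.
        $this->load->library('facebook_connect', $data);

        if($this->facebook_connect->user_id !== NULL)
        {
            echo 'Connected as Facebook user #' . $this->facebook_connect->user_id;
        }
        else
        {
            echo 'Not connected to Facebook yet.';
        }
    }
}
?>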


Learning Informatica PowerCenter 9.x

Packt
08 May 2015
3 min read
Informatica Corporation (Informatica), a multi-million dollar company established in February 1993, is an independent provider of enterprise data integration and data quality software and services. Informatica PowerCenter is the most widely used Informatica tool across the globe for various data integration processes. The Informatica PowerCenter tool helps integrate data from almost any business system in almost any format. This flexibility of PowerCenter to handle almost any data makes it the most widely used tool in the data integration world. (For more resources related to this topic, see here.)

Key features

Learn the functionality of each component in the Informatica PowerCenter tool and deploy it to accomplish executive reporting using logical data stores
Learn the core features of the Informatica PowerCenter tool along with its administration and architectural aspects
Develop skills to extract data and efficiently utilize it with the help of the world's most widely used integration tool, and make a promising career in Informatica PowerCenter

Difference in approach

The simple thought behind this book is to put together all the essential ingredients of Informatica, starting from basic things such as downloads, extraction, and installation, through working with the client tools, up to higher-level aspects such as scheduling, migration, and so on. There are multiple blogs available across the Internet that talk about the Informatica tool, but none presents end-to-end answers. We have tried to put all the steps and processes in a systematic manner to help you start learning easily. In this book, you will get a step-by-step procedure for every aspect of the Informatica PowerCenter tool. While writing this book, the author has kept in mind the importance of live, practical exposure to the graphical interface of the tool, and hence you will notice a lot of screenshots illustrating the steps to help you understand and follow the process. The chapters are arranged in such a way that all the aspects of the Informatica PowerCenter tool are covered, in a proper flow, in order to achieve the functionality. Here is a gist of the significant aspects of the book:

Installation of Informatica and information regarding the administrator console of the PowerCenter tool
The basic and advanced topics of the Designer screen
Implementation of the different types of Slowly Changing Dimensions
Understanding of the Workflow Manager
Monitoring the code
Implementation of mappings using different types of transformations
Classification of transformations
Usage of the Repository Manager

Required skills

Before you make up your mind about learning Informatica, it is always recommended that you have a basic understanding of SQL and Unix. Though these are not mandatory, and you can easily use 90 percent of the Informatica PowerCenter tool without knowledge of them, the confidence to work on real-time SQL and Unix projects is a must-have in your kitty. People who know SQL will easily understand that ETL tools are essentially a graphical representation of SQL. Unix is used with Informatica PowerCenter for its scripting aspect, which makes your life easy in some scenarios.

Summary

Informatica PowerCenter has emerged as one of the most useful ETL tools employed to build enterprise data warehouses. The PowerCenter tool can make your life easy and can offer you a great career path if learnt properly. This book will thereby help you get the know-how of PowerCenter.
Resources for Article: Further resources on this subject: Transition to Redshift [article] Cloudera Hadoop and HP Vertica [article] Learning to Fly with Force.com [article]


Plone 4 Development: Understanding Zope Security

Packt
30 Aug 2011
6 min read
Security primitives

Zope's security is declarative: views, actions, and attributes on content objects are declared to be protected by permissions. Zope takes care of verifying that the current user has the appropriate access rights for a resource. If not, an AccessControl.Unauthorized exception will be raised. This is caught by an error handler, which will either redirect the user to a login screen or show an access denied error page. Permissions are not granted to users directly. Instead, they are assigned to roles. Users can be given any number of roles, either site-wide or in the context of a particular folder, in which case they are referred to as local roles. Global and local roles can also be assigned to groups, in which case all users in that group will have the particular role. (In fact, Zope considers users and groups largely interchangeable, and refers to them more generally as principals.) This makes security settings much more flexible than if they were assigned to individual users.

Users and groups

Users and groups are kept in user folders, which are found in the ZMI with the name acl_users. There is one user folder at the root of the Zope instance, typically containing only the default Zope-wide administrator that is created by our development buildout the first time it is run. There is also an acl_users folder inside Plone, which manages Plone's users and groups. Plone employs the Pluggable Authentication Service (PAS), a particularly flexible kind of user folder. In PAS, users, groups, their roles, their properties, and other security-related policy are constructed using various interchangeable plugins. For example, an LDAP plugin could allow users to authenticate against an LDAP repository. In day-to-day administration, users and groups are normally managed from Plone's Users and Groups control panel.

Permissions

Plone relies on a large number of permissions to control various aspects of its functionality. Permissions can be viewed from the Security tab in the ZMI, which lets us assign permissions to roles at a particular object. Note that most permissions are set to Acquire (the default), meaning that they cascade down from the parent folder. Role assignments are additive when permissions are set to acquire. Sometimes it is appropriate to change permission settings at the root of the Plone site (which can be done using the rolemap.xml GenericSetup import step; a small sketch appears at the end of this excerpt), but managing permissions from the Security tab anywhere else is almost never a good idea. Keeping track of which security settings are made where in a complex site can be a nightmare.

Permissions are the most granular piece of the security puzzle, and can be seen as a consequence of a user's roles in a particular context. Security-aware code should almost always check permissions rather than roles, because roles can change depending on the current folder and the security policy of the site, or even based on an external source such as an LDAP or Active Directory repository. Permissions can be logically divided into three main categories:

Those that relate to basic content operations, such as View and Modify portal content. These are used by almost all content types, and are defined as constants in the module Products.CMFCore.permissions. Core permissions are normally managed by workflow.
Those that control the creation of particular types of content, such as ATContentTypes: Add Image. These are usually set at the Plone site root to apply to the whole site, but they may be managed by workflow on folders.
Those that control site-wide policy. For example, the Portlets: Manage portlets permission is usually given to the Manager and Site Administrator roles, because this is typically an operation that only the site's administrator will need to perform. These permissions are usually set at the site root and acquired everywhere else. Occasionally, it may be appropriate to change them there. For example, the Add portal member permission controls whether anonymous users can add themselves (that is, "join" the site) or not. Note that there is a control panel setting for this, under Security in Site Setup.

Developers can create new permissions when necessary, although they are encouraged to reuse the ones in Products.CMFCore.permissions if possible. The most commonly used permissions are listed below, in the form: permission → constant → Zope Toolkit name → what it controls.

Access contents information → AccessContentsInformation → zope2.AccessContentsInformation → Low-level Zope permission controlling access to objects
View → View → zope2.View → Access to the main view of a content object
List folder contents → ListFolderContents → cmf.ListFolderContents → Ability to view folder listings
Modify portal content → ModifyPortalContent → cmf.ModifyPortalContent → Edit operations on content
Change portal events → N/A → N/A → Modification of the Event content type (largely a historical accident)
Manage portal → ManagePortal → cmf.ManagePortal → Operations typically restricted to the Manager role
Request review → RequestReview → cmf.RequestReview → Ability to submit content for review in many workflows
Review portal content → ReviewPortalContent → cmf.ReviewPortalContent → Ability to approve or reject items submitted for review in many workflows
Add portal content → AddPortalContent → cmf.AddPortalContent → Ability to add new content in a folder. Note that most content types have their own "add" permissions; in this case, both this permission and the type-specific permission are required.

The constant column refers to constants defined in Products.CMFCore.permissions. The Zope Toolkit name column lists the equivalent names found in ZCML files in packages such as Products.CMFCore, Products.Five, and (at least from Zope 2.13) AccessControl. They contain directives such as:

<permission id="zope2.View" title="View" />

This is how permissions are defined in the Zope Toolkit. Custom permissions can also be created in this way. Sometimes, we will use ZCML directives which expect a permission attribute, such as:

<browser:page
    name="some-view"
    class=".someview.SomeView"
    for="*"
    permission="zope2.View"
    />

The permission attribute here must be a Zope Toolkit permission ID. The title of the <permission /> directive is used to map the Zope 2-style permissions (which are really just strings) to Zope Toolkit permission IDs. To declare that a particular view or other resource defined in ZCML should not be subject to security checks, we can use the special permission zope.Public.
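As referenced earlier, permission-to-role assignments at the site root are usually shipped as a rolemap.xml GenericSetup import step rather than configured by hand in the ZMI. Here is a minimal sketch; the permission and roles shown are just an illustration, not a recommendation:

<?xml version="1.0"?>
<rolemap>
  <permissions>
    <!-- Keep self-registration off: only these roles may add members. -->
    <permission name="Add portal member" acquire="False">
      <role name="Manager"/>
      <role name="Site Administrator"/>
    </permission>
  </permissions>
</rolemap>

This file lives in a GenericSetup profile of a policy package and is applied when that profile is imported.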

Build your own Application to access Twitter using Java and NetBeans: Part 4

Packt
28 Jun 2010
7 min read
In the third part of this article series, we:

Added a Tabbed Pane component to your SwingAndTweet application, to show your own timeline on one tab and your friends' timeline on another tab
Used a JScrollPane component to add vertical and horizontal scrollbars to your friends' timeline list
Used the getFriendsTimeline() method from the Twitter4J API to get the 20 most recent tweets from your friends' timeline
Applied font styles to your JLabel components via the Font class
Added a black border to separate each individual tweet by using the BorderFactory and Color classes
Added the date and time of creation of each individual tweet by using the getCreatedAt() method from the twitter4j.Status interface, along with the Date class

All the things we learned in the third part of the series were a big improvement for our Twitter client, but wouldn't it be cool if you could click on the URL links from your friends' timeline and a web browser window would open automatically to show you the related web page? Well, after reading this part of the article series, you'll be able to integrate this functionality into your own Twitter client, among other things. Here are the links to the earlier articles of this series:

Read Build your own Application to access Twitter using Java and NetBeans: Part 1
Read Build your own Application to access Twitter using Java and NetBeans: Part 2
Read Build your own Application to access Twitter using Java and NetBeans: Part 3

Using a JEditorPane component

Until now, we've been working with JPanel objects to show your Twitter information inside the JTabbedPane component. But as you can see from your friends' tweets, the URL links that show up aren't clickable. And how can we make them clickable? Fortunately for us, there's a Swing component called JEditorPane that lets us use HTML markup, so the URL hyperlinks will show up as if you were on a web page. Cool, huh? Now let's start with the dirty job...

Open your NetBeans IDE along with your SwingAndTweet project, and make sure you're in the Source view. Scroll up to the import declarations section and type import javax.swing.JEditorPane; right below the last import declaration, so your code looks as shown below. Now scroll down to the last line of code, JLabel statusUser;, and type JEditorPane statusPane; just below that line, as shown in the following screenshot.

The next step is to add statusPane to the jTabbedPane1 component in your application. Scroll through the code until you locate the //code for the Friends timeline line and the try-block code below that line; then type the following code block just above the for statement:

String paneContent = new String();
statusPane = new JEditorPane();
statusPane.setContentType("text/html");
statusPane.setEditable(false);

The following screenshot shows how your code must look after inserting the above block of code (the red square indicates the lines you must add). Now scroll down through the code inside the for statement and type

paneContent = paneContent + statusUser.getText() + "<br>" + statusText.getText() + "<hr>";

right after the jPanel1.add(individualStatus); line, as shown below. Then add the following two lines of code after the closing brace of the try block:

statusPane.setText(paneContent);
jTabbedPane1.add("Friends - Enhanced", statusPane);

The following screenshot shows how your code must look after the insertion. Run your application and log into your Twitter account.
A new tab will appear in your Twitter client, and if you click on it you'll see your friends' latest tweets. If you take a good look at the screen, you'll notice that the new tab you added with the JEditorPane component doesn't show a vertical scrollbar, so you can't scroll up and down to see the complete list. That's pretty easy to fix:

8. Add the import javax.swing.JScrollPane; line to the import declarations section.
9. Replace the jTabbedPane1.add("Friends - Enhanced", statusPane); line you added in step 6 with the following lines:

JScrollPane editorScrollPane = new JScrollPane(statusPane,
    JScrollPane.VERTICAL_SCROLLBAR_AS_NEEDED,  // vertical bar policy
    JScrollPane.HORIZONTAL_SCROLLBAR_NEVER);   // horizontal bar policy
jTabbedPane1.add("Friends - Enhanced", editorScrollPane);

Run your Twitter application again and this time you'll see the vertical scrollbar.

Let's stop for a while to review our progress so far. In step 2, you added an import declaration to tell the Java compiler that we need to use the JEditorPane class. In step 3, you added a JEditorPane object called statusPane to your application. This object acts as a container for your friends' tweets. In case you're wondering why we didn't use a regular JPanel object, just remember that we want to make the URL links in your friends' tweets clickable, so that when you click on one of them, a web browser window pops up to show the web page associated with that hyperlink.

Now let's get back to our exercise. In step 4, you added four lines to your application's code. The first line:

String paneContent = new String();

creates a String variable called paneContent to store the username and text of each individual tweet from your friends' timeline. The next three lines:

statusPane = new JEditorPane();
statusPane.setContentType("text/html");
statusPane.setEditable(false);

create a JEditorPane object called statusPane, set its content type to text/html so we can include HTML markup, and make statusPane non-editable, so nothing gets messed up when showing your friends' timeline. Now that we have statusPane ready to roll, we need to fill it with the information related to each individual tweet from your friends. That's why we need the paneContent variable. In step 5, you inserted the following line inside the for block:

paneContent = paneContent + statusUser.getText() + "<br>" + statusText.getText() + "<hr>";

It adds the username and the text of each individual tweet to the paneContent variable. The <br> HTML tag inserts a line break so the username appears on one line and the text of each tweet appears on another line. The <hr> HTML tag inserts a horizontal line to separate one tweet from the next. Once the for loop ends, we need to add the information from the paneContent variable to the JEditorPane object called statusPane. That's why in step 6 you added the following line:

statusPane.setText(paneContent);

and then the jTabbedPane1.add("Friends - Enhanced", statusPane); line created a new tab in the jTabbedPane1 component and added the statusPane component to it, so you can see the friends timeline with HTML markup. In steps 8 and 9, you learned how to create a JScrollPane object called editorScrollPane to add scrollbars to your statusPane component and integrate them into the jTabbedPane1 container.
In this example, the JScrollPane constructor takes three arguments: the statusPane component, the vertical scrollbar policy, and the horizontal scrollbar policy. There are three options you can choose for your vertical and horizontal scrollbars: show them as needed, never show them, or always show them. In this specific case, we need the vertical scrollbar to show up as needed, in case the list of your friends' tweets doesn't fit the screen, so we use the JScrollPane.VERTICAL_SCROLLBAR_AS_NEEDED policy. And since we don't need the horizontal bar to show up, because the statusPane component adjusts its horizontal size to fit your application's window, we use the JScrollPane.HORIZONTAL_SCROLLBAR_NEVER policy. The last line of code from step 9 adds the editorScrollPane component to the jTabbedPane1 container instead of adding the statusPane component directly, because the JEditorPane component is now contained within the JScrollPane component. Now let's see how to convert the URL links to real hyperlinks.
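To see the whole pattern in one place, here is a minimal, self-contained sketch of an HTML-enabled JEditorPane wrapped in a JScrollPane and added to a JTabbedPane. It only illustrates the technique described above; the class name, the sample tweets, and the frame setup are ours, not part of the SwingAndTweet project:

import javax.swing.*;

public class EditorPaneSketch {
    public static void main(String[] args) {
        SwingUtilities.invokeLater(new Runnable() {
            public void run() {
                JEditorPane statusPane = new JEditorPane();
                statusPane.setContentType("text/html");
                statusPane.setEditable(false);

                // Build the HTML content the same way the for loop above does.
                String paneContent = "";
                String[][] tweets = {
                    {"alice", "Hello <a href=\"http://example.com\">world</a>"},
                    {"bob", "Another sample tweet"}
                };
                for (String[] tweet : tweets) {
                    paneContent = paneContent + tweet[0] + "<br>" + tweet[1] + "<hr>";
                }
                statusPane.setText(paneContent);

                // Wrap the editor pane so long lists get a vertical scrollbar.
                JScrollPane editorScrollPane = new JScrollPane(statusPane,
                        JScrollPane.VERTICAL_SCROLLBAR_AS_NEEDED,
                        JScrollPane.HORIZONTAL_SCROLLBAR_NEVER);

                JTabbedPane tabbedPane = new JTabbedPane();
                tabbedPane.add("Friends - Enhanced", editorScrollPane);

                JFrame frame = new JFrame("JEditorPane sketch");
                frame.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
                frame.add(tabbedPane);
                frame.setSize(400, 300);
                frame.setVisible(true);
            }
        });
    }
}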

jQuery User Interface Plugins: Tooltip Plugins

Packt
27 Oct 2010
6 min read
jQuery Plugin Development Beginner's Guide — build powerful, interactive plugins to implement jQuery in the best way possible:

- Utilize jQuery's plugin framework to create a wide range of useful jQuery plugins from scratch
- Understand development patterns and best practices and move up the ladder to master plugin development
- Discover the ins and outs of some of the most popular jQuery plugins in action
- A Beginner's Guide packed with examples and step-by-step instructions to quickly get your hands dirty in developing high quality jQuery plugins

Read more about this book

(For more resources on jQuery, see here.)

Before we get started, there is another little thing worth mentioning: tooltips provide many different opportunities to introduce new concepts and ideas, even while keeping the complexity of the whole plugin at a minimum. We can now go on to create our plugin, starting with basic functionality and subsequently adjusting its goals. We will add new, improved functionality that, however, does not make the whole code look too difficult to understand—even after some time, or for someone who's just starting out with jQuery.

Tooltip plugins in general

A lot has been said about tooltip plugins, but it's worth repeating the most important points, with particular regard to the way tooltips are supposed to work and how we want our tooltip to behave. First of all, we might want to get an idea of what tooltips look like and a sample of what we will accomplish by the end of this article. With some more work and the proper application of effects, images, and other relatively advanced techniques, we can also obtain something more complex and nicer looking, giving the user the chance to specify the style and behavior of the tooltip.

The idea is actually very simple. The elements we have selected will trigger an event every time we hover the mouse pointer over them. The tooltip will then pop out, right at the mouse cursor position, retrieving its text from the title attribute of the said element. Finally, whenever we move the mouse over the same element, the plugin will move and follow the mouse cursor until it goes off the boundaries of the element.

Positioning the tooltip

The first problem we have to face is, of course, how to make the tooltip appear in the right position. It would be no trouble at all if we just had to make some text, image, or anything else show up. We've done it many times and it's no problem at all—just make the positioning absolute and set the right top and side distances. However, we need to take into account the fact that we don't know exactly where the mouse cursor might be and, as such, we need to calculate distances based upon the mouse cursor position itself.

So, how can we do it? It's simple enough; we can use some of the JavaScript event properties to obtain the position. Unfortunately, Internet Explorer always tries to put a spoke in our wheel. In fact, that browser does not support pageX and pageY (according to this quite accurate table: http://www.quirksmode.org/dom/w3c_cssom.html#mousepos), which would normally return the mouse coordinates relative to the document. So a workaround is needed for Internet Explorer; fortunately, jQuery (from version 1.0.4 onwards) normalizes these event properties according to the W3C standards (http://api.jquery.com/category/events/event-object/), so inside a jQuery event handler we can rely on e.pageX and e.pageY.
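As a quick, illustrative sketch (not the book's code), this is how a handler could read the document-relative mouse position; the manual fallback shows what a plugin would otherwise have to compute from clientX/clientY plus the scroll offsets discussed next. The #tooltip element is an assumption made for the example:

$('a.tooltip').bind('mouseenter mousemove', function (e) {
  // jQuery's normalized event object exposes document-relative coordinates.
  var x = e.pageX;
  var y = e.pageY;

  if (x === undefined) {
    // Manual fallback: viewport coordinates plus the current scroll offsets.
    var doc = document.documentElement, body = document.body;
    x = e.clientX + (doc.scrollLeft || body.scrollLeft || 0);
    y = e.clientY + (doc.scrollTop || body.scrollTop || 0);
  }

  // The tooltip element follows the cursor with a small offset.
  $('#tooltip').css({ position: 'absolute', left: x + 10, top: y + 10 });
});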
The following diagram (also provided in the code bundle) should clarify what the visible viewport is (that is, the browser window—the red box). Whenever we scroll down, different parts of the document (blue) are shown through the browser window and hidden due to space constraints. The scroll height (green) is the part of the document currently not displayed.

Custom jQuery selectors

Suppose we have a page with some text in it, which also contains a few links to both internal pages (that is, pages on the same server) and external websites. We are presented with different choices in terms of which elements to apply the tooltip to (referring to links as an example, but the same applies to any kind of element), as follows:

- All the links
- All the links with a specific class (for example, tooltip)
- All the links with the title attribute not empty
- All the links pointing to internal pages
- All the links pointing to external websites
- Combinations of the above

We can easily combine the first three conditions with the others (and with themselves) using CSS selectors appropriately. For example:

- $("a"), all the links
- $("a.tooltip"), links having a tooltip class
- $("a[title]"), links with a title attribute (we still have to check whether it is empty)
- $("a.tooltip[title]"), links with a tooltip class and a title attribute

As for internal and external pages, we have to work with custom jQuery selectors instead.

Time for action – creating custom jQuery selectors

Although jQuery makes it easy to select elements using standard CSS selectors, as well as some other selectors, jQuery's own selectors are the ones that help the developer to write and read code. Examples of custom selectors are :odd, :animated, and so on. jQuery also lets you create your own selectors! The syntax is as follows:

// definition
$.expr[':'].customselector = function(object, index, properties, list) {
  // code goes here
};

// call
$("a:customselector")

The parameters are all optional except for the first one (of course!), which is required to perform some basic operations on the selected object:

- object: Reference to the current HTML DOM element (not a jQuery object, beware!)
- index: Zero-based loop index within the array
- properties: Array of metadata about the selector (the 4th element contains the string passed to the jQuery selector)
- list: Array of DOM elements to loop through

The return value can be either:

- true: Include the current element
- false: Exclude the current element

Our selector (for external link detection) will then look, very simply, like the following code:

$.expr[':'].external = function(object) {
  if (object.hostname) // is defined
    return (object.hostname != location.hostname);
  else
    return false;
};

Also note that, to access the jQuery object, we have to use the following (since object refers to the DOM element only!):

$.expr[':'].sample = function(object) {
  alert('$(obj).attr(): ' + $(object).attr("href") + ' obj.href: ' + object.href);
};

Merging pieces together

We have slowly created different parts of the plugin, which we need to merge in order to create a working piece of code that actually makes tooltips visible. So far we have understood how positioning works and how we can easily place an element in a determined position. Also, we have found out that we can create our own jQuery selectors, and have developed a simple yet useful custom selector with which we are able to select links pointing to either internal or external pages.
This selector code needs to be placed at the top of the plugin code, inside the closure, as we make use of the dollar symbol ($), which may conflict with other libraries.
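For reference, a minimal sketch of that closure wrapper (the standard jQuery plugin idiom, not the book's exact code) would look like this:

(function ($) {

  // The custom selector is defined safely inside the closure, even if the
  // global $ has been claimed by another library.
  $.expr[':'].external = function (object) {
    if (object.hostname)
      return (object.hostname != location.hostname);
    return false;
  };

  // ...the rest of the tooltip plugin goes here...

})(jQuery);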

A look into the high-level programming operations for the PHP language

Packt
11 Mar 2013
3 min read
(For more resources related to this topic, see here.)

Accessing documentation

PhpStorm offers four different operations that will help you to access documentation: Quick Definition, Quick Documentation, Parameter Info, and External Documentation. The first one, Quick Definition, presents the definition of a given symbol. You can use it for a variable, function, method, or class. Quick Documentation allows easy access to DocBlocks. It can be used for all kinds of symbols: variables, functions, methods, and classes. The next operation, Parameter Info, presents simplified information about a function or method interface. Finally, External Documentation will help you to access the official PHP documentation available at php.net.

Their shortcuts are as follows:

- Quick Definition (Ctrl + Shift + I, Mac: alt + Space bar)
- Quick Documentation (Ctrl + Q, Mac: F1)
- Parameter Info (Ctrl + P, Mac: command + P)
- External Documentation (Shift + F1, Mac: shift + F1)

The Esc (Mac: shift + esc) hotkey will close any of the previous windows.

If you place the cursor inside the parentheses of $s = str_replace(); and run Parameter Info (Ctrl + P, Mac: command + P), you will get a hint showing all the parameters of the str_replace() function. If that is not enough, place the cursor inside the str_replace function name and press Shift + F1 (Mac: shift + F1). You will get the manual page for the function.

If you want to test the next operation, open the project created in the Quick start – your first PHP application section and place the cursor inside the class name Controller in the src/My/HelloBundle/Controller/DefaultController.php file. The place where you should place the cursor is denoted with the bar | in the following code:

class DefaultController extends Cont|roller
{
}

The Quick Definition operation will show you the class definition, and the Quick Documentation operation will show you the documentation defined with PhpDoc blocks. PhpDoc is a formal standard for commenting on PHP code; the official documentation is available at http://www.phpdoc.org.

Generators

PhpStorm enables you to do the following:

- Implement magic methods
- Override inherited methods
- Generate constructor, getters, setters, and docblocks

All of these operations are available in Code | Generate (Alt + Insert, Mac: command + N). Perform the following steps:

1. Create a new class Person and place the cursor at the position of |:

class Person {
  |
}

The Generate dialog box will contain the operations listed above, and the Implement Methods dialog box contains all available magic methods.

2. Create the class with two private properties:

class Lorem {
  private $ipsum;
  private $dolor;
}

3. Then go to Code | Generate | Getters and Setters. In the dialog box select both properties and press OK. PhpStorm will generate the following methods:

class Lorem {
  private $ipsum;
  private $dolor;

  public function setDolor($dolor) {
    $this->dolor = $dolor;
  }

  public function getDolor() {
    return $this->dolor;
  }

  public function setIpsum($ipsum) {
    $this->ipsum = $ipsum;
  }

  public function getIpsum() {
    return $this->ipsum;
  }
}

4. Next, go to Code | Generate | DocBlocks and in the dialog box select all the properties and methods. PhpStorm will generate docblocks for each of the selected properties and methods, for example:

/**
 * @param $dolor
 */
public function setDolor($dolor) {
  $this->dolor = $dolor;
}

Summary

We just learned that in some cases you don't have to type the code at all, as it can be generated automatically.
The generators discussed in this article lift the burden of typing setters, getters, and magic functions from your shoulders. We also dived into the different ways of accessing documentation.

Resources for Article:

Further resources on this subject:

- Installing PHP-Nuke [Article]
- An Introduction to PHP-Nuke [Article]
- Creating Your Own Theme—A Wordpress Tutorial [Article]

Play! Framework 2 – Dealing with Content

Packt
20 May 2013
15 min read
(For more resources related to this topic, see here.)

In order to keep the article short and to the point, we'll only look at the Java part. Keep in mind that the Scala version is a little different at this level of detail.

Body parsing for better reactivity

As noted earlier, the way to manage content in Play! 2 is to use instances of body parsers. In brief, a body parser is a component that is responsible for parsing the body of an HTTP request as a stream, to be converted into a predefined structure. This has a common-sense ring to it; however, their strength lies in the way they consume the stream—in a reactive fashion. Reactivity, in this context, describes a process where an application won't block on a task that is actually idle. As a stream consumption task is idle when no bytes are incoming, a body parser behaves the same way. It reads and constructs an internal representation of the incoming bytes, but it can also decide at any time that it has read enough to terminate and return the representation. On the other hand, if no more bytes are coming into the stream, it can relax its thread in favor of another request; it pauses its work until new bytes are received.

Thinking about an HTTP request that is sending a bunch of XML content, the underlying action can use the XML-related body parser to handle it correctly (read reactively); that is, by parsing it and providing a DOM representation.

To understand what a body parser actually is, we'll first look at how they are used—in the actions. An action in Play! 2 represents the piece of software that is able to handle an HTTP request; therefore, actions are the right place to use a body parser. In the Java API, an action can be annotated with the Of annotation available in the BodyParser class. This annotation declares the expected type of request routed to it, and it requires a parameter that is the class of the parser that will be instantiated to parse the incoming request's body. Isn't this helpful? We've gone from a request to a W3C document in a single line.

Functionally speaking, this works because an action is semantically a higher-order function that takes a body parser and generates a function that takes a request (and so its body) and results in an HTTP response (result). This result will then be used by Play! 2 to construct the HTTP response. In Java, it is not all that obvious how to create a higher-order function; a good way to achieve this, however, is to add an annotation, which can be processed at runtime in order to execute the right body parser (in this case). In the Scala version, it is easy to see that an action really is a function from a request to a response.

There are plenty of predefined body parsers that can be used to handle our requests, and they are all defined in the BodyParser class as static inner classes. Each one applies a specific behavior to its expected request body, and even though a body parser has to be implemented in Scala, a Java coder can simply extend these existing implementations. Actually, they already provide enough control to cover all custom use cases. So, we have in our hands tools to handle the following content types:

- JSON
- XML
- URL form encoded
- Multipart (for uploading files)
- Text
- Raw (fallback)

As we can see from the previous list, there is, obviously, an implementation for the x-www-form-urlencoded content type.
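As a hedged illustration of what such an annotated action looks like in the Java API (the controller and action names here are ours, not the book's), the TolerantXml parser hands the action a ready-made DOM document:

import org.w3c.dom.Document;
import play.mvc.BodyParser;
import play.mvc.Controller;
import play.mvc.Result;

public class XmlEcho extends Controller {

    // Tell Play! 2 to parse the incoming body as XML before the action runs.
    @BodyParser.Of(BodyParser.TolerantXml.class)
    public static Result echo() {
        Document dom = request().body().asXml();
        if (dom == null) {
            return badRequest("Expecting XML data");
        }
        return ok("Root element is: " + dom.getDocumentElement().getNodeName());
    }
}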
Indeed, this is the parser we've used so far to retrieve data from the client side—for example, using POST requests from HTML forms. But wait, we never had to add such annotations to our actions and, moreover, we never looked at the parsed result. That's true; Play! 2, being a great framework, already does a lot of the work for us. And that's because it's a web framework: it takes advantage of HTTP—in this case, the Content-Type header. Based on this hint, Play! Framework 2 looks at the header to find the right parser to apply. So annotations are not mandatory—but where was the parsed result used previously, then? In the bindFromRequest method, of course. Let's see how.

We have used form instances, and we fed them some data through the client. Those instances were applied to the request using the bindFromRequest method, and this method's job was to look for data according to the provided content type. And, of course, this content type was set in the header by the HTML forms themselves. Indeed, an HTTP GET will send data in the request URL (query string), whereas an HTTP POST will be sent with a body that contains all data encoded by default as URL parameters (that is, x-www-url-encoded). So, we can now give an overview of what the bindFromRequest method does. When we ask a form to be filled in with data, this method will:

- Gather data as URL-form encoded data, if any
- Gather data from parts (if the content type is multipart-data)
- Gather data as JSON-encoded data, if any
- Gather data from the query string (that's why GET requests were working as well)
- Fill in the form's data with all of them (and validate)

You might be wondering what such annotations are worth, then. The quick answer is that they allow new types of parsers, but they can also enforce that certain actions' requests match a given content type. Another advantage is that they allow us to extend or narrow the length of the body that can be handled. By default, 100 K are accepted, and this can be either configured (parsers.text.maxLength=42K) or passed as an argument to the annotation.

With all of this in mind, we are now ready to implement these concepts in our code. What we're going to do is update our code base to create a kind of forum—a forum where one can log in, initiate a chat, reply to non-closed ones (based on their date), or even attach files to them.

Creating a forum

In this section, we'll refactor our existing application in order to enable it to act as a forum. Chances are high that it won't be necessary to learn anything new; we'll just re-use the skills gathered so far, but we'll also use the parsing commodities that Play! 2 offers us.

Reorganizing and logging in

The very first thing we have to do is to enable a user to log in; this ability was already created in the Data controller. For that, we'll update our Application controller a bit, to create a new index action that checks whether a user is logged in or not. So, index is now the new entry point of the application and can be routed from / in the routes file. It is solely meant to check whether a user has logged in, and this check is based on the session content: we simply check whether a user's e-mail is present in the session. We haven't yet looked at what a session is in Play! 2, but we saw that Play! 2 is completely stateless. A session in Play! 2 is only an encrypted map of values stored in the cookie; thus it cannot be that big, and definitely cannot contain full data.
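The book shows the new index action only as a screenshot. As a rough, hedged reconstruction of what it could look like (the route and view names below are assumptions, not the book's code):

import play.mvc.Controller;
import play.mvc.Result;

public class Application extends Controller {

    public static Result index() {
        String email = session().get("email");
        if (email != null) {
            // A user is logged in: send them to the chatroom.
            return redirect(routes.Chats.allChats());
        }
        // No e-mail in the session cookie: 401 with the login page.
        return unauthorized(views.html.login.render());
    }
}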
If the user is present, we redirect the request to the chatroom by calling redirect with the expected action. This prevents the browser from posting the request again if the user reloads the page; this pattern is known as POST-redirect-GET. Otherwise, we respond with an Unauthorized HTTP response (401) that contains the HTML login page.

The two remaining actions are so simple that we won't cover them further, except for a single line: session().clear(). It simply revokes the cookie's content, which forces the subsequent request to create a new one that no longer contains the previously stored e-mail. And finally, enter shows how a request's body can easily be handled using the relevant method: asFormUrlEncoded. Indeed, one would normally use a form, which would retrieve this information for us behind the scenes; but in this case we have only a single parameter to retrieve, so a form would be overkill.

So far, so good; we are now able to create a user, log in, and use a login page. Aiming for cleaner code, it is worth splitting the Data controller into several pieces (a matter of good separation of concerns). Hence, the Users controller is created, into which the user-related actions taken out of Data are placed. Now, we'll move back to something we saw earlier but didn't cover—the routes file and the Chats.allChats() action call.

Chatting

In the previous section, we were introduced to the Chats controller and its allChats action. While the names are self-descriptive, the underlying code isn't quite as obvious. First of all, we're now dealing with Chat instances that must be persisted somewhere in a database, along with their underlying items. But we'll also prepare for the next section, which relates to multipart data (helpful, for instance, for file upload). That's why we'll add a brand new type, Image, which is also linked to Chat. Having said that, it is worth checking our new chat implementation.

Before we cover the Item and Image types, we'll first go to the Chats controller to see what's going on. There we can see our allChats action; it simply renders all existing instances within a template. Even the rest of the controller is simple; everything is done in templates, which are left as exercises (we're so good at them now!). However, there's still the loadChat action that contains something related to this article:

Long chatId = Long.parseLong(queryString.get("chatid")[0]);

This action handles requests asking to show a particular Chat instance, which is a resource and thus should be served using a GET request. This implies that the parameter value is stored in the query string (or in the URL itself) rather than in the request body. Regarding query string access, it's more interesting to analyze the following line:

Map<String,String[]> queryString = request().queryString();

In fact, all actions contextually refer to a request object, which is accessible using the request() method. This request object declares a queryString() method that returns a map of strings to arrays of strings. What comes next is trivial; we just get chatid out of this map (OK... in a very unsafe way).

Until now, we have been able to log in and access the chatroom, where we can create or show chat instances. But we're still unable to reply to a chat. That's what will be tackled now.
For that, we need to create an action that will, based on a chat ID, post a new message linked to the logged-in user, and then attach this message as an item of the underlying Chat instance. For this, we must update the Item class with persistence information. Afterwards, we'll be able to update the Chats controller in order to create instances. The Item class ends up like a beefed-up POJO, so let's jump into the action that creates Item instances.

The workflow to post a message starts by enabling the user to participate in a chat. This is done by loading the chat (using the loadChat action), where the user will then be able to post a new message (an overview of the UI is presented at the end of this article for illustration only). Observe how the user is recovered using the session. Still, there is nothing cumbersome to review here; we've just re-used a lot of what we've already covered. The action receives a POST request in which the information about the message is given; we can then bind the request to itemForm and finally save to the database the item contained in the resulting form. At most, we should notice that we're still free to encode the body as we want, and also that the chat ID is not a part of the form but a part of the action signature—that's because it is a part of the URL (routing).

We've almost finished our forum; the only thing left is to enable users to post images.

Handling multipart content types

The HTTP protocol is ready to accept a lot of data, and/or large chunks of data, from a client at once. A way to achieve this is to use a specific encoding type: multipart/form-data. Such requests have a body that can hold several data pieces, formatted differently and attributed with different names. Play! 2 is a web framework that fits into HTTP as much as possible; that's why it handles such requests well and provides an API that hides almost all of the tricky parts. In this section, we'll see how one could upload an image along with some caption text that will be attached to a specific chat.

Before diving into the workflow, let's first create the holding structure: Image. This newly introduced type is not hard to understand either; only two things should be pointed out:

- The pic() method relies on the filePath field to recover the file itself. It uses a File instance to memoize subsequent calls.
- The enum type prepares the action logic to filter the incoming files based on the given MIME type. This logic could also be defined in the validate method.

These instances are always tied to the connected user who uploaded them and will be added to a Chat instance. This allows a chatroom to display all attached images with their captions beside the messages themselves.

Now we're ready to look at the file upload itself by paying some attention to the last action of the Chats controller, that is, receiveImage. As we are used to simplifying the code (Play! 2 is there to ease our work, after all) and getting straight to the point, we reflected this in our receiveImage action. In very few lines, we declared a new action that expects requests to be multipart encoded and to contain at least two parts: the first is a map of data (no matter how this map is encoded) used to fill in imageForm (essentially a caption), and the second is the image part.
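The receiveImage action itself appears in the book only as a screenshot; the following is a hedged guess at its shape, illustrating the multipart API described next. The form, field, and route names are assumptions, and an Ebean-style Image model is assumed as well:

import java.io.File;
import play.data.Form;
import play.mvc.Controller;
import play.mvc.Http;
import play.mvc.Result;

public class Chats extends Controller {

    static Form<Image> imageForm = Form.form(Image.class);

    public static Result receiveImage(Long chatId) {
        Form<Image> filledForm = imageForm.bindFromRequest();

        // Grab the image part out of the multipart body.
        Http.MultipartFormData body = request().body().asMultipartFormData();
        Http.MultipartFormData.FilePart picture = (body == null) ? null : body.getFile("picture");

        if (filledForm.hasErrors() || picture == null) {
            return badRequest("A caption and an image are both required");
        }

        // Check the declared content type against the accepted image MIME types.
        String contentType = picture.getContentType();
        if (contentType == null || !contentType.startsWith("image/")) {
            return badRequest("Only images are accepted");
        }

        Image image = filledForm.get();
        File uploaded = picture.getFile();           // lives in the temp directory
        image.filePath = uploaded.getAbsolutePath(); // a real app would move it elsewhere
        image.save();
        return redirect(routes.Chats.allChats());
    }
}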
After binding the request to the form and verifying that no errors have occurred, we can move to the body content in order to recover the binary data that was sent, along with its metadata: the file content, its content type, its length, and so on. That is quite an intuitive thing to do—we ask for the body to be parsed as multipart/form-data and get it as an Http.MultipartFormData object, which has a getFile method that returns an Http.MultipartFormData.FilePart value. To understand why we didn't have to specify a body parser, recall that Play! 2 is able, most of the time, to discover by itself which parser fits best.

The Http.MultipartFormData.FilePart type not only allows us to recover the content as a file, but also its key in the multipart body, its filename header, and (especially) its content type. Having all of this in hand, we are now able to check the content-type validity against the image's enum, and to store the image by getting the file path of the provided file. This file path will target the Temp directory of your machine. In the real world, the file should be relocated to a dedicated folder or maybe an S3 repository.

Et voilà! We have now learned about some of the features needed to provide a very simple forum. The screenshots in the original article show what it could look like (without any effort on the design, of course): first, the forms to show and enter archived and active chats; then, on entering an active chat—say, the one named Today—a page where, using the Attach an image form, we can select an image on our filesystem to be sent to the server.

Until now, we have spoken about handling various content types coming from the outside world, but what about our application having to render content other than HTML? That's what we're about to see next.

MooTools: Extending and Implementing Elements

Packt
25 Jul 2011
7 min read
MooTools 1.3 Cookbook — over 100 highly effective recipes to turbo-charge the user interface of any web-enabled Internet application and web page. The reader can benefit from the previous article on Extending MooTools.

Extending elements—preventing multiple form submissions

Imagine a scenario where click-happy visitors may undo normalcy by double-clicking the submit button, or perhaps an otherwise normal albeit impatient user might click it a second time. Submit buttons frequently need to be disabled or removed using client-side code for just such a reason.

Users that double-click everything

It is not entirely known where double-clicking users originated from. Some believe that single-clicking users needed to be able to double-click to survive in the wild. They therefore began to grow gills and double-click links, buttons, and menu items. Others maintain that there was a sudden, large explosion in the vapors of nothingness that resulted in hordes of users that could not fathom the ability of a link, button, or menu item that could be opened with just a single click. Either way, they are out there, they mostly use Internet Explorer, and they are quickly identifiable by how they type valid URLs into search bars and then swear the desired website is no longer on the Inter-nets.

How to do it...

Extending elements uses the same syntax as extending classes. Add a method that can be called when appropriate. Our example, the following code, could be used in a library that is associated with every page so that no submit button could ever again be clicked twice—at least, not without first removing the attribute that has it disabled:

Element.implement({
    better_submit_buttons: function() {
        if (this.get('tag')=='input' && this.getProperty('type')=='submit') {
            this.addEvent('click', function(e) {
                this.set({
                    'disabled':'disabled',
                    'value':'Form Submitted!'
                });
            });
        }
    }
});

window.addEvent('load',function() {
    $$('input[type=submit]').better_submit_buttons();
});

How it works...

The MooTools class Element extends DOM elements referenced by the single-dollar selector, the double-dollars selector, and the document.id selector. In the onLoad event, $$('input[type=submit]').better_submit_buttons(); extends all INPUT elements that have a type equal to submit with the Element class methods and properties. Of course, before that infusion of Moo-goodness takes place, we have already implemented a new method that prevents those elements from being clicked twice by adding the property that disables the element.

There's more...

In our example, we disable the submit button permanently and return false upon submission. The only way to get the submit button live again is to click the Try again button that calls the page again. Note that reloading the page via refresh in some browsers may not clear the disabled attribute; however, calling the page again from the URL or by clicking a link will. On pages that submit a form to a second page for processing, the semi-permanently disabled button is desirable outright. If our form is processed via Ajax, we can use the Ajax status events to manually remove the disabled property and reset the value of the button, as the sketch after the See also section shows.

See also

Read the documentation on the MooTools Request class, which shows the various statuses that could be used in conjunction with this extended element: http://mootools.net/docs/core/Request/Request.
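As an illustrative sketch only (not the book's code; the form id myform is an assumption), here is how the Request class's events could re-enable the button when the form is submitted via Ajax:

window.addEvent('load', function() {
    $$('input[type=submit]').better_submit_buttons();

    $('myform').addEvent('submit', function(e) {
        e.stop(); // cancel the normal submission; we send it via Ajax instead
        var form = this;
        new Request({
            url: form.get('action'),
            onComplete: function() {
                // The Ajax call has finished: re-enable the submit button.
                form.getElements('input[type=submit]').set({
                    'disabled': false,
                    'value': 'Submit!'
                });
            }
        }).post(form.toQueryString());
    });
});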
Extending elements—prompt for confirmation on submit

Launching off the last extension, the forms on our site may also need to ask for confirmation. It is not unthinkable that a slip of the carriage return could accidentally submit a form before a user is ready. It certainly happens to all of us occasionally, and perhaps to some of us regularly.

How to do it...

Mutate the HTML DOM FORM elements to act upon the onSubmit event and prompt whether to continue with the submission:

Element.implement({
    polite_forms: function() {
        if (this.get('tag')=='form') {
            this.addEvent('submit',function(e) {
                if(!confirm('Okay to submit form?')) {
                    e.stop();
                }
            });
        }
    }
});

How it works...

The polite_forms() method is added to all HTML DOM elements, but the execution is restricted to elements whose tag is form: if (this.get('tag')=='form') {...}. The onSubmit event of the form is bound to a function that prompts users via the raw JavaScript confirm() dialog, which returns either true for a positive response or false otherwise. If false, we prevent the event from continuing by calling the MooTools-implemented Event.stop().

There's more...

In order to mix the submit button enhancement with the polite form enhancement, only a few small changes to the syntax are necessary. To stop our submit button from showing "in process..." if the form submission is canceled by the polite form prompt, we create a proprietary reset event that can be called via Element.fireEvent() and chained to the collection of INPUT children that match our double-dollar selector:

// extend all elements with the better submit buttons and polite forms methods
Element.implement({
    better_submit_buttons: function() {
        if (this.get('tag')=='input' && this.getProperty('type')=='submit') {
            this.addEvents({
                'click':function(e) {
                    this.set({'disabled':'disabled','value':'in process...'});
                },
                'reset':function() {
                    this.set({'disabled':false,'value':'Submit!'});
                }
            });
        }
    },
    polite_forms: function() {
        if (this.get('tag')=='form') {
            this.addEvent('submit',function(e) {
                if(!confirm('Okay to submit form?')) {
                    e.stop();
                    this.getChildren('input[type=submit]').fireEvent('reset');
                }
            });
        }
    }
});

// enhance the forms
window.addEvent('load',function() {
    $$('input[type=submit]').better_submit_buttons();
    $$('form').polite_forms();
});

Extending typeOf, fixing undefined var testing

We could not properly return the type of an undeclared variable. This oddity has its roots in the fact that undefined, undeclared variables cannot be dereferenced during a function call. In short, undeclared variables cannot be used as arguments to a function.

Getting ready

Get ready to see how we can still extend MooTools' typeOf function by passing a missing variable using the global scope:

// will throw a ReferenceError
myfunction(oops_var);

// will not throw a ReferenceError
myfunction(window.oops_var);

How to do it...

Extend the typeOf function with a new method and call that rather than the parent method:

// it is possible to extend functions with new methods
typeOf.extend('defined',function(item) {
    if (typeof(item)=='undefined') return 'undefined';
    else return typeOf(item);
});

//var oops_var; // commented out "on purpose"
function report_typeOf(ismoo) {
    if (ismoo==0) {
        document.write('oops_var1 is: '+typeof(oops_var)+'<br/>');
    } else {
        // pass via the global object to avoid an error from dereferencing an undeclared var
        document.write('oops_var2 is: '+typeOf.defined(window.oops_var)+'<br/>');
    }
}

The output from calling typeof() and typeOf.defined() is identical for an undefined, undeclared variable passed via the global scope to avoid a reference error.
<h2>without moo:</h2>
<script type="text/javascript">
    report_typeOf(0);
</script>

<h2><strong>with</strong> moo:</h2>
<script type="text/javascript">
    report_typeOf(1);
</script>

The output is:

without moo:
oops_var1 is: undefined

with moo:
oops_var2 is: undefined

How it works...

The prototype of the typeOf function object has been extended with a new method. The original method is still applied when the function is executed. However, we are now able to call the property defined, which is itself a function that can still reference and call the original function.

There's more...

For those not satisfied with the new turn of syntax, the proxy pattern should suffice to let us keep using a very similar syntax:

// proxying in raw javascript is cleaner in this case
var oldTypeOf = typeOf;
var typeOf = function(item) {
    if (typeof(item)=='undefined') return 'undefined';
    else return oldTypeOf(item);
};

The old typeOf function has been renamed using the proxy pattern but is still available. Meanwhile, all calls to typeOf are now handled by the new version.

See also

The Proxy Pattern: the proxy pattern is one of many JavaScript design patterns. Here is one good link to follow for more information: http://www.summasolutions.net/blogposts/design-patterns-javascript-part-1.

Undeclared and Undefined Variables: it can be quite daunting to have to deal with multiple layers of development. When we are unable to work alone and be sure all our variables are declared properly, testing every one can really cause code bloat. Certainly, the best practice is to always declare variables. Read more about it at http://javascriptweblog.wordpress.com/2010/08/16/understanding-undefined-and-preventing-referenceerrors/.

User Interaction and Email Automation in Symfony 1.3: Part1

Packt
18 Nov 2009
14 min read
The signup module

We want to provide the users with the functionality to enter their name, email address, and how they found our web site. We want all this stored in a database and to have an email automatically sent out to the users thanking them for signing up. To start things off, we must first add some new tables to our existing database schema. The structure of our newsletter table will be straightforward. We will need one table to capture the users' information and a related table that will hold the names of all the places where we advertised our site. I have constructed an entity relationship diagram to show you a visual relationship of the tables. All the code used in this article can be accessed here.

Let's translate this diagram into XML and place it in the config/schema.xml file:

<table name="newsletter_adverts" idMethod="native" phpName="NewsletterAds">
  <column name="newsletter_adverts_id" type="INTEGER" required="true" autoIncrement="true" primaryKey="true" />
  <column name="advertised" type="VARCHAR" size="30" required="true" />
</table>
<table name="newsletter_signups" idMethod="native" phpName="NewsletterSignup">
  <column name="id" type="INTEGER" required="true" autoIncrement="true" primaryKey="true" />
  <column name="first_name" type="VARCHAR" size="20" required="true" />
  <column name="surname" type="VARCHAR" size="20" required="true" />
  <column name="email" type="VARCHAR" size="100" required="true" />
  <column name="activation_key" type="VARCHAR" size="100" required="true" />
  <column name="activated" type="BOOLEAN" default="0" required="true" />
  <column name="newsletter_adverts_id" type="INTEGER" required="true"/>
  <foreign-key foreignTable="newsletter_adverts" onDelete="CASCADE">
    <reference local="newsletter_adverts_id" foreign="newsletter_adverts_id" />
  </foreign-key>
  <column name="created_at" type="TIMESTAMP" required="true" />
  <column name="updated_at" type="TIMESTAMP" required="true" />
</table>

We will need to populate the newsletter_adverts table with some test data as well. Therefore, I have also appended the following data to the fixtures.yml file located in the data/fixtures/ directory:

NewsletterAds:
  nsa1:
    advertised: Internet Search
  nsa2:
    advertised: High Street
  nsa3:
    advertised: Poster

With the database schema and the test data ready to be inserted into the database, we can once again use the Symfony tasks. As we have added two new tables to the schema, we will have to rebuild everything to generate the models using the following command:

$/home/timmy/workspace/milkshake>symfony propel:build-all-load --no-confirmation

Now we have populated the tables in the database, and the models and forms have been generated for use too.

Binding a form to a database table

Symfony contains a whole framework just for the development of forms. The forms framework makes building forms easier by applying object-oriented methods to their development. Each form class is based on its related table in the database. This includes the fields, the validators, and the way in which the forms and fields are rendered.

A look at the generated base class

Rather than starting off with a simple form, we are going to look at the base form class that has already been generated for us as a part of the build task we executed earlier. Because the code is generated, it will be easier for you to see the initial flow of a form. So let's open the base class for the NewsletterSignupForm form.
The file is located at lib/form/base/BaseNewsletterSignupForm.class.php:

class BaseNewsletterSignupForm extends BaseFormPropel
{
  public function setup()
  {
    $this->setWidgets(array(
      'id'                    => new sfWidgetFormInputHidden(),
      'first_name'            => new sfWidgetFormInput(),
      'surname'               => new sfWidgetFormInput(),
      'email'                 => new sfWidgetFormInput(),
      'activation_key'        => new sfWidgetFormInput(),
      'activated'             => new sfWidgetFormInputCheckbox(),
      'newsletter_adverts_id' => new sfWidgetFormPropelChoice(array('model' => 'NewsletterAds', 'add_empty' => false)),
      'created_at'            => new sfWidgetFormDateTime(),
      'updated_at'            => new sfWidgetFormDateTime(),
    ));

    $this->setValidators(array(
      'id'                    => new sfValidatorPropelChoice(array('model' => 'NewsletterSignup', 'column' => 'id', 'required' => false)),
      'first_name'            => new sfValidatorString(array('max_length' => 20)),
      'surname'               => new sfValidatorString(array('max_length' => 20)),
      'email'                 => new sfValidatorString(array('max_length' => 100)),
      'activation_key'        => new sfValidatorString(array('max_length' => 100)),
      'activated'             => new sfValidatorBoolean(),
      'newsletter_adverts_id' => new sfValidatorPropelChoice(array('model' => 'NewsletterAds', 'column' => 'newsletter_adverts_id')),
      'created_at'            => new sfValidatorDateTime(),
      'updated_at'            => new sfValidatorDateTime(),
    ));

    $this->widgetSchema->setNameFormat('newsletter_signup[%s]');

    $this->errorSchema = new sfValidatorErrorSchema($this->validatorSchema);

    parent::setup();
  }
}

There are five areas in this base class that are worth noting:

1. This base class extends the BaseFormPropel class, which is an empty class. All base classes extend this class, which allows us to add global settings to all our forms.
2. All of the columns in our table are treated as fields in the form, and are referred to as widgets. All of these widgets are attached to the form by adding them to the setWidgets() method. Looking over the widgets in the array, you will see that most are pretty standard, such as sfWidgetFormInputHidden() and sfWidgetFormInput().
3. There is, however, one widget that follows the relationship between the newsletter_signups table and the newsletter_adverts table: the sfWidgetFormPropelChoice widget. Because there is a 1:M relation between the tables, the default behavior is to use this widget, which creates an HTML drop-down box populated with the values from the newsletter_adverts table. As part of its attributes, you will see that the model needed to retrieve the values is set to NewsletterAds, and the newsletter_adverts_id column supplies the actual values of the drop-down box.
4. All the widgets on the form must be validated by default. To do this, we call the setValidators() method and add the validation requirements to each widget. At the moment, the generated validators reflect the attributes of our database as set in the schema. For example, the first_name field in the statement 'first_name' => new sfValidatorString(array('max_length' => 20)) demonstrates that the validator checks that the maximum length is 20. In our schema too, the first_name column is set to 20 characters.
5. The final part calls the parent's setup() function.

The base class BaseNewsletterSignupForm contains all the components needed to generate the form for us. So let's get the form on a page and take a look at how to customize it. There are many widgets that Symfony provides for us; you can find their classes inside the widget/ directory of your Symfony installation.
The Symfony propel task always generates a form class and its corresponding base class. Of course, not all of our tables will need to have a form bound to them; therefore, delete all the form classes that are not needed.

Rendering the form

Rendering this basic form requires us to instantiate the form object in the action. Assigning the form object to the global $this variable means that we can pass the form object to the template just like any other variable. So let's start by implementing the newsletter signup module. In your terminal window, execute the generate:module task like this:

$/home/timmy/workspace/milkshake>symfony generate:module frontend signup

Now we can start with the application logic. Open the action class for the signup module from apps/frontend/modules/signup/actions/actions.class.php and add the following logic inside the index action:

public function executeIndex(sfWebRequest $request)
{
  $this->form = new NewsletterSignupForm();

  return sfView::SUCCESS;
}

As mentioned earlier, the form class deals with the form validation and rendering. For the time being, we are going to stick to the default layout by allowing the form object to render itself. Using this method initially will allow us to create rapid prototypes. Let's open the apps/frontend/modules/signup/templates/indexSuccess.php template and add the following view logic:

<form action="<?php echo url_for('signup/submit') ?>" method="POST">
  <table><?php echo $form ?></table>
  <input type="submit" />
</form>

The form class is responsible for rendering the form elements only. Therefore, we have to include the <form> and submit HTML tags that wrap around the form. Also, the default format of the form is set to 'table', so we must also add the opening and closing <table> tags.

At this stage, we would normally be able to view the form in the browser. But doing so will raise a Symfony exception. The cause of this is that the results retrieved from the newsletter_adverts table are an array of objects. These results need to populate the select box widget, but in their current form this is not possible. Therefore, we have to convert each object into its string equivalent. To do this, we need to create the PHP magic method __toString() in the DAO class NewsletterAds. The DAO class for NewsletterAds is located at lib/model/NewsletterAds.php, just like all the other models. Here we need to represent each object by its name, which is the value in the advertised column. Remember that we need to add this method to the DAO class, as it represents a row within the results, unlike the peer class that represents the entire result set. Let's add the function to the NewsletterAds class:

class NewsletterAds extends BaseNewsletterAds
{
  public function __toString()
  {
    return $this->getAdvertised();
  }
}

We are now ready to view the completed form. In your web browser, enter the URL http://milkshake/frontend_dev.php/signup. As you can see, although the form is rendered according to our table structure, the fields which we do not want the user to fill in are also included. Of course, we can change this quite easily. But before we take a look at the layout of the form, let's customize the widgets and widget validators. Then we can begin working on the application logic for submitting the form.

Customizing form widgets and validators

All of the generated form classes are located in the lib/form and lib/form/base directories.
The latter is where the default generated classes are located, and the former is where the customizable classes are located. This follows the same structure as the models. Each custom form class inherits from its parent; therefore, we have to override some of the functions to customize the form. Let's customize the widgets and validators for NewsletterSignupForm. Open the lib/form/NewsletterSignupForm.class.php file and paste the following code inside the configure() method:

//Remove unneeded widgets
unset(
  $this['created_at'],
  $this['updated_at'],
  $this['activation_key'],
  $this['activated'],
  $this['id']
);

//Modify widgets
$this->widgetSchema['first_name'] = new sfWidgetFormInput();
$this->widgetSchema['newsletter_adverts_id'] = new sfWidgetFormPropelChoice(array('model' => 'NewsletterAds', 'add_empty' => true, 'label' => 'Where did you find us?'));
$this->widgetSchema['email'] = new sfWidgetFormInput(array('label' => 'Email Address'));

//Add validation
$this->setValidators(array(
  'first_name' => new sfValidatorString(array('required' => true), array('required' => 'Enter your firstname')),
  'surname'    => new sfValidatorString(array('required' => true), array('required' => 'Enter your surname')),
  'email'      => new sfValidatorEmail(array('required' => true), array('invalid' => 'Provide a valid email', 'required' => 'Enter your email')),
  'newsletter_adverts_id' => new sfValidatorPropelChoice(array('model' => 'NewsletterAds', 'column' => 'newsletter_adverts_id'), array('required' => 'Select where you found us')),
));

//Set post validators
$this->validatorSchema->setPostValidator(
  new sfValidatorPropelUnique(array('model' => 'NewsletterSignup', 'column' => array('email')), array('invalid' => 'Email address is already registered'))
);

//Set form name
$this->widgetSchema->setNameFormat('newsletter_signup[%s]');

//Set the form format
$this->widgetSchema->setFormFormatterName('list');

Let's take a closer look at the code.

Removing unneeded fields

To remove the fields that we do not want to be rendered, we must call the PHP unset() function and pass in the fields to unset. As mentioned earlier, all of the fields that are rendered need a corresponding validator, unless we unset them. Here we do not want the created_at, updated_at, activation_key, activated, and id fields to be handled by the user, so the unset() call contains the following:

unset(
  $this['created_at'],
  $this['updated_at'],
  $this['activation_key'],
  $this['activated'],
  $this['id']
);

Modifying the form widgets

Although it would be fine to use the remaining widgets as they are, let's have a look at how we can modify them:

//Modify widgets
$this->widgetSchema['first_name'] = new sfWidgetFormInput();
$this->widgetSchema['newsletter_adverts_id'] = new sfWidgetFormPropelChoice(array('model' => 'NewsletterAds', 'add_empty' => true, 'label' => 'Where did you find us?'));
$this->widgetSchema['email'] = new sfWidgetFormInput(array('label' => 'Email Address'));

There are several types of widgets available, but our form requires only two of them. Here we have used the sfWidgetFormInput() and sfWidgetFormPropelChoice() widgets. Each of these can be initialized with several values. We have initialized the email and newsletter_adverts_id widgets with a label, which renders the label associated with the widget on the form. We do not have to include a label, because Symfony adds one based on the column name by default.
Adding form validators

Let's add the validators in a similar way to the widgets:

//Add validation
$this->setValidators(array(
  'first_name' => new sfValidatorString(array('required' => true), array('required' => 'Enter your firstname')),
  'surname'    => new sfValidatorString(array('required' => true), array('required' => 'Enter your surname')),
  'email'      => new sfValidatorEmail(array('required' => true), array('invalid' => 'Provide a valid email', 'required' => 'Enter your email')),
  'newsletter_adverts_id' => new sfValidatorPropelChoice(array('model' => 'NewsletterAds', 'column' => 'newsletter_adverts_id'), array('required' => 'Select where you found us')),
));

//Set post validators
$this->validatorSchema->setPostValidator(
  new sfValidatorPropelUnique(array('model' => 'NewsletterSignup', 'column' => array('email')), array('invalid' => 'Email address is already registered'))
);

Our form needs four different types of validators:

- sfValidatorString: This checks the validity of a string against given criteria. It takes four options—required, trim, min_length, and max_length.
- sfValidatorEmail: This validates the input against the pattern of an email address.
- sfValidatorPropelChoice: This validates the value against the values in the newsletter_adverts table. It needs the model and column that are to be used.
- sfValidatorPropelUnique: This validator checks the value against the values in a given table column for uniqueness. In our case, we want to use the NewsletterSignup model to test that the email column is unique.

As mentioned earlier, all the fields must have a validator. Although it's not recommended, you can allow extra parameters to be passed in. To achieve this, there are two steps:

1. You must disable the default option of having all fields validated, using $this->validatorSchema->setOption('allow_extra_fields', true).
2. Although the above step allows the values to bypass validation, they will be filtered out of the results. To prevent this, you will have to set $this->validatorSchema->setOption('filter_extra_fields', false).

Form naming convention and setting its style

The final part we added is the naming convention for the HTML attributes and the style in which we want the form rendered. The HTML output will use our naming convention. For example, in the following code, we have set the convention to newsletter_signup[fieldname] for each input field's name:

//Set form name
$this->widgetSchema->setNameFormat('newsletter_signup[%s]');

//Set the form format
$this->widgetSchema->setFormFormatterName('list');

Two formats ship with Symfony that we can use to render our form: an HTML table or an unordered list. As we have seen, the default is an HTML table, but by setting this to list, the form is now rendered as an unordered HTML list. (Of course, I had to replace the <table> tags with the <ul> tags in the template, as sketched below.)
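The article doesn't show the updated template, so purely as an illustrative sketch (the exact markup below is an assumption), indexSuccess.php would now wrap the form output in <ul> tags instead of <table> tags:

<form action="<?php echo url_for('signup/submit') ?>" method="POST">
  <ul>
    <?php echo $form ?>
    <li><input type="submit" /></li>
  </ul>
</form>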
Communicating with Server using Google Web Toolkit RPC

Packt
19 Jan 2011
5 min read
Google Web Toolkit 2 Application Development Cookbook

Over 70 simple but incredibly effective practical recipes to develop web applications using GWT with JPA, MySQL, and iReport:

Create impressive, complex browser-based web applications with GWT 2
Learn the most effective ways to create reports with parameters, variables, and subreports using iReport
Create Swing-like web-based GUIs using the Ext GWT class library
Develop applications using browser quirks, JavaScript, and HTML scriptlets from scratch
Part of Packt's Cookbook series: each recipe is a carefully organized sequence of instructions to complete the task as efficiently as possible

The Graphical User Interface (GUI) resides on the client side of the application. This article introduces the communication between the server and the client, where the client (GUI) sends a request to the server, and the server responds accordingly. In GWT, the interaction between the server and the client is handled through the RPC mechanism. RPC stands for Remote Procedure Call. The concept is that some methods live on the server side and are called by the client from a remote location. The client calls a method by passing the necessary arguments, the server processes them, and then it returns the result to the client. GWT RPC allows the server and the client to pass Java objects back and forth. RPC involves the following steps:

Defining the GWTService interface: Not all of the server's methods are called by the client. The methods that are called remotely by the client are declared in an interface, which is called GWTService.

Defining the GWTServiceAsync interface: Based on the GWTService interface, another interface is defined, which is the asynchronous version of the GWTService interface. By calling the asynchronous methods, the caller (the client) is not blocked until the method completes the operation.

Implementing the GWTService interface: A class is created in which the abstract methods of the GWTService interface are overridden.

Calling the methods: The client calls the remote methods to get the server's response.

Creating DTO classes

In this application, the server and the client will pass Java objects back and forth. For example, the BranchForm will ask the server to persist a Branch object: the Branch object is created and passed to the server by the client, and the server persists the object in the server database. In another example, the client will pass the branch ID (as an int), the server will find the particular branch's information, and then it will send the Branch object back to the client to be displayed in the branch form. So, both the server and the client need to send and receive Java objects. We have already created the JPA entity classes and the JPA controller classes to manage the entities using the Entity Manager. But the JPA objects are not transferable over the network using RPC; the JPA classes will be used only by the server on the server side. For the client side (to send and receive objects), DTO classes are used. DTO stands for Data Transfer Object: a transfer object that simply encapsulates the business data and carries it across the network.

Getting ready

Create a package com.packtpub.client.dto, and create all the DTO classes in this package.

How to do it...

The steps required to complete the task are as follows:

Create a class BranchDTO that implements the Serializable interface:

public class BranchDTO implements Serializable

Declare the attributes.
You can copy the attribute declarations from the entity classes, but in this case, do not include the annotations:

private Integer branchId;
private String name;
private String location;

Define the constructors, as shown in the following code:

public BranchDTO(Integer branchId, String name, String location) {
    this.branchId = branchId;
    this.name = name;
    this.location = location;
}

public BranchDTO(Integer branchId, String name) {
    this.branchId = branchId;
    this.name = name;
}

public BranchDTO(Integer branchId) {
    this.branchId = branchId;
}

public BranchDTO() {
}

To generate the constructors automatically in NetBeans, right-click on the code, select Insert Code | Constructor, and then click on Generate after selecting the attribute(s).

Define the getters and setters:

public Integer getBranchId() {
    return branchId;
}

public void setBranchId(Integer branchId) {
    this.branchId = branchId;
}

public String getLocation() {
    return location;
}

public void setLocation(String location) {
    this.location = location;
}

public String getName() {
    return name;
}

public void setName(String name) {
    this.name = name;
}

To generate the setters and getters automatically in NetBeans, right-click on the code, select Insert Code | Getter and Setter…, and then click on Generate after selecting the attribute(s).

Mapping entity classes and DTOs

In RPC, the client will send and receive DTOs, but the server needs pure JPA objects to be used by the Entity Manager. That's why we need to transform DTOs into JPA entity objects and vice versa. In this recipe, we will learn how to map an entity class to its DTO.

Getting ready

Create the entity and DTO classes.

How to do it...

Open the Branch entity class and define a constructor that takes a parameter of type BranchDTO. The constructor reads the properties from the DTO and sets them on its own properties:

public Branch(BranchDTO branchDTO) {
    setBranchId(branchDTO.getBranchId());
    setName(branchDTO.getName());
    setLocation(branchDTO.getLocation());
}

This constructor will be used to create the Branch entity object from a BranchDTO object. The BranchDTO object is built from the entity class object in the same way, but in that case no constructor is defined; instead, the conversion is done wherever a DTO needs to be constructed from the entity class.

There's more...

Some third-party libraries are available for automatically mapping entity classes and DTOs, such as Dozer and Gilead. For details, you may visit http://dozer.sourceforge.net/ and http://noon.gilead.free.fr/gilead/.

Creating the GWT RPC Service

In this recipe, we are going to create the GWTService interface, which will contain an abstract method to add a Branch object to the database.

Getting ready

Create the Branch entity class and the DTO class.
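The excerpt ends just as the GWTService interface is introduced, so here is a rough sketch of where the recipe is heading. The method name, return type, and relative path below are assumptions for illustration, not the book's actual code; only the GWTService/GWTServiceAsync names, the BranchDTO class, and the com.packtpub.client package come from the article.

// GWTService.java: the synchronous interface (client package)
package com.packtpub.client;

import com.google.gwt.user.client.rpc.RemoteService;
import com.google.gwt.user.client.rpc.RemoteServiceRelativePath;
import com.packtpub.client.dto.BranchDTO;

@RemoteServiceRelativePath("gwtService")
public interface GWTService extends RemoteService {
    // persist the branch described by the DTO and report success or failure
    boolean addBranch(BranchDTO branchDTO);
}

// GWTServiceAsync.java: the asynchronous counterpart used by the client
package com.packtpub.client;

import com.google.gwt.user.client.rpc.AsyncCallback;
import com.packtpub.client.dto.BranchDTO;

public interface GWTServiceAsync {
    void addBranch(BranchDTO branchDTO, AsyncCallback<Boolean> callback);
}

On the client side, GWT.create(GWTService.class) returns a GWTServiceAsync proxy, and the AsyncCallback's onSuccess()/onFailure() methods receive the result; on the server side, a servlet extending RemoteServiceServlet implements GWTService. Treat this as an illustration of the pattern rather than the book's exact listing.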
Using Themes in LWUIT 1.1: Part 1

Packt
30 Sep 2009
6 min read
Working with theme files

A theme file is conceptually similar to CSS, while its implementation is like that of a Java properties file. Essentially, a theme is a list of key-value pairs, with an attribute being the key and its value being the second part of the pair. An entry in the list may be Form.bgColor=555555. This entry specifies that the background color of all forms in the application will be (hex) 555555 in the RGB format. The list is implemented as a hashtable.

Viewing a theme file

A theme is packaged into a resource file that can also hold, as we have already seen, other items like images, animations, bitmap fonts, and so on. The fact that a theme is an element in a resource bundle means it can be created, viewed, and edited using the LWUIT Designer. The following screenshot shows a theme file viewed through the LWUIT Designer:

The first point to note is that there are five entries at the bottom, which appear in bold letters. All such entries are the defaults. To take an example, the only component-specific font setting in the theme shown above is for the soft button. The font for the form title, as well as that for the strings in other components, is not defined. These strings will be rendered with the default font. A theme file can contain images, animations, and fonts (both bitmap and system) as values. Depending on the type of key, values can be numbers, filenames, or descriptions along with thumbnails where applicable.

Editing a theme file

In order to modify an entry in the theme file, select the row and click on the Edit button. The edit dialog will open, as shown in the following screenshot:

Clicking on the browse button (the button with three dots, marked by the arrow) will open a color chooser from which the value of the selected color will be entered directly into the edit dialog. The edit dialog has fields corresponding to the various keys, and depending on the one selected for editing, the relevant field will be enabled. Once a value is edited, click on the OK button to enter the new value into the theme file. In order to abort editing, click on the Cancel button.

Populating a theme

We shall now proceed to build a new theme file and see how it affects the appearance of a screen. The application used here is DemoTheme, and the code snippet below shows that we have set up a form with a label, a button, and a radio button.
//create a new form
Form demoForm = new Form("Theme Demo");
//demoForm.setLayout(new BorderLayout());
demoForm.setLayout(new BoxLayout(BoxLayout.Y_AXIS));
//create and add 'Exit' command to the form
//the command id is 0
demoForm.addCommand(new Command("Exit", 1));
//this MIDlet is the listener for the form's command
demoForm.setCommandListener(this);
//label
Label label = new Label("This is a Label");
//button
Button button = new Button("An ordinary Button");
//radiobutton
RadioButton rButton = new RadioButton("Just a RadioButton");
//timeteller -- a custom component
//TimeTeller timeTeller = new TimeTeller();
//set style for timeLabel and titleLabel (in TimeViewer)
//these parts of TimeTeller cannot be themed
//because they belong to TimeViewer which does not
//have any UIID
/*
Style tStyle = new Style();
tStyle.setBgColor(0x556b3f);
tStyle.setFgColor(0xe8dd21);
tStyle.setBorder(Border.createRoundBorder(5, 5));
timeTeller.setTitleStyle(tStyle);
Style tmStyle = timeTeller.getTimeStyle();
tmStyle.setBgColor(0xff0000);
tmStyle.setFgColor(0xe8dd21);
tmStyle.setBgTransparency(80);
tmStyle.setBorder(Border.createRoundBorder(5, 5));
*/
//add the widgets to demoForm
demoForm.addComponent(label);
demoForm.addComponent(button);
demoForm.addComponent(rButton);
//demoForm.addComponent(timeTeller);
//show the form
demoForm.show();

The statements for TimeTeller have been commented out. They will have to be uncommented to produce the screenshots in the section dealing with setting a theme for a custom component. The basic structure of the code is the same as that in the examples that we have come across so far, but with one difference: we do not have any statement for style setting this time around. That is because we intend to use theming to control the look of the form and the components on it. If we compile and run the code in its present form, then we get the following (expected) look. All the components have now been rendered with default attributes.

In order to change the way the form looks, we are going to build a theme file, SampleTheme, that will contain the attributes required. We start by opening the LWUIT Designer through the SWTK. Had a resource file been present in the res folder of the project, we could have opened it in the LWUIT Designer by double-clicking on that file in the SWTK screen. In this case, as there is no such file, we launch the LWUIT Designer through the SWTK menu. The following screenshot shows the result of selecting Themes, and then clicking on the Add button:

The name of the theme is typed in, as shown in the previous screenshot. Clicking on the OK button now creates an empty theme file, which is shown under Themes. Our first target for styling will be the form, including the title and menu bars. If we click on the Add button in the right panel, the Add dialog will open. We can see this dialog below with the drop-down list for the Component field. Form is selected from this list. Similarly, the drop-down list for Attribute shows all the attributes that can be set. From this list we select bgImage, and we are prompted to enter the name for the image, which is bgImage in our case. The next step is to close the Add Image dialog by clicking on the OK button. As we have not added any image to this resource file as yet, the Image field above is blank. In order to select an image, we have to click on the browse button on the right of the Image field to display the following dialog. Again, the browse button has to be used to locate the desired image file.
We confirm our selection through the successive dialogs to add the image as the one to be shown on the background of the form.
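Part 1 stops before the theme is actually loaded by the application, but it helps to keep the end goal in mind. The following is a minimal sketch (not taken from this part of the article) of how a packaged theme is typically applied in LWUIT; the resource file name DemoTheme.res is an assumption, while SampleTheme is the theme name created above:

import com.sun.lwuit.Display;
import com.sun.lwuit.plaf.UIManager;
import com.sun.lwuit.util.Resources;

// inside the MIDlet's startApp(), before any Form is created
Display.init(this);
try {
    // open the resource bundle packaged in the JAR (assumed file name)
    Resources res = Resources.open("/DemoTheme.res");
    // install the key-value pairs of the SampleTheme theme as the current look
    UIManager.getInstance().setThemeProps(res.getTheme("SampleTheme"));
} catch (java.io.IOException e) {
    // if the theme cannot be loaded, the default look and feel is used
    e.printStackTrace();
}

Once setThemeProps() has run, forms and components created afterwards pick up the themed attributes, which is why the DemoTheme form above needs no per-component style code.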
Starting Small and Growing in a Modular Way

Packt
02 Mar 2015
27 min read
This article, written by Carlo Russo, author of the book KnockoutJS Blueprints, describes how RequireJS gives us a simplified format to require many dependencies and to avoid parameter mismatches, using the CommonJS require format; for example, an equivalent way to write the previous code is:

define(function(require) {
  var $ = require("jquery"),
      ko = require("knockout"),
      viewModel = {};
  $(function() {
      ko.applyBindings(viewModel);
  });
});

(For more resources related to this topic, see here.)

In this way, we skip the dependencies definition, and RequireJS will scan the function body for require('xxx') calls and add them to the dependency list. The second way is better because it is cleaner and you cannot mismatch dependency names with named function arguments. For example, imagine you have a long list of dependencies; you add one or remove one, and you forget to remove the corresponding function parameter. You now have a hard-to-find bug. And, in case you think that the r.js optimizer behaves differently, I just want to assure you that it's not so; you can use both ways without any concern regarding optimization.

Just to remind you, you cannot use this form if you want to load scripts dynamically or depending on a variable value; for example, this code will not work:

var mod = require(someCondition ? "a" : "b");
if (someCondition) {
   var a = require('a');
} else {
   var a = require('a1');
}

You can learn more about this compatibility problem at this URL: http://www.requirejs.org/docs/whyamd.html#commonjscompat. You can see more about this sugar syntax at this URL: http://www.requirejs.org/docs/whyamd.html#sugar.

Now that you know the basic way to use RequireJS, let's look at the next concept.

Component binding handler

The component binding handler is one of the new features introduced in Version 3.2 of KnockoutJS. Inside the documentation of KnockoutJS, we find the following explanation:

Components are a powerful, clean way of organizing your UI code into self-contained, reusable chunks. They can represent individual controls/widgets, or entire sections of your application.

The main idea behind their inclusion was to create full-featured, reusable components, with one or more points of extensibility. A component is a combination of HTML and JavaScript. There are cases where you can use just one of them, but normally you'll use both. You can get a first simple example about this here: http://knockoutjs.com/documentation/component-binding.html.

The best way to create self-contained components is with the use of an AMD module loader, such as RequireJS; put the View Model and the template of the component inside two different files, and then you can use it from your code really easily.

Creating the bare bones of a custom module

Writing a custom module of KnockoutJS with RequireJS is a 4-step process:

Creating the JavaScript file for the View Model.
Creating the HTML file for the template of the View.
Registering the component with KnockoutJS.
Using it inside another View.

We are going to build the bare bones of the Search Form component, just to move forward with our project; in any case, this is the starting code we should use for each component that we write from scratch. Let's cover all of these steps.

Creating the JavaScript file for the View Model

We start with the View Model of this component.
Create a new empty file with the name BookingOnline/app/components/search.js and put this code inside it: define(function(require) {var ko = require("knockout"),     template = require("text!./search.html");function Search() {}return {   viewModel: Search,   template: template};}); Here, we are creating a constructor called Search that we will fill later. We are also using the text plugin for RequireJS to get the template search.html from the current folder, into the argument template. Then, we will return an object with the constructor and the template, using the format needed from KnockoutJS to use as a component. Creating the HTML file for the template of the View In the View Model we required a View called search.html in the same folder. At the moment, we don't have any code to put inside the template of the View, because there is no boilerplate code needed; but we must create the file, otherwise RequireJS will break with an error. Create a new file called BookingOnline/app/components/search.html with the following content: <div>Hello Search</div> Registering the component with KnockoutJS When you use components, there are two different ways to give KnockoutJS a way to find your component: Using the function ko.components.register Implementing a custom component loader The first way is the easiest one: using the default component loader of KnockoutJS. To use it with our component you should just put the following row inside the BookingOnline/app/index.js file, just before the row $(function () {: ko.components.register("search", {require: "components/search"}); Here, we are registering a module called search, and we are telling KnockoutJS that it will have to find all the information it needs using an AMD require for the path components/search (so it will load the file BookingOnline/app/components/search.js). You can find more information and a really good example about a custom component loader at: http://knockoutjs.com/documentation/component-loaders.html#example-1-a-component-loader-that-sets-up-naming-conventions. Using it inside another View Now, we can simply use the new component inside our View; put the following code inside our Index View (BookingOnline/index.html), before the script tag:    <div data-bind="component: 'search'"></div> Here, we are using the component binding handler to use the component; another commonly used way is with custom elements. We can replace the previous row with the following one:    <search></search> KnockoutJS will use our search component, but with a WebComponent-like code. If you want to support IE6-8 you should register the WebComponents you are going to use before the HTML parser can find them. Normally, this job is done inside the ko.components.register function call, but, if you are putting your script tag at the end of body as we have done until now, your WebComponent will be discarded. Follow the guidelines mentioned here when you want to support IE6-8: http://knockoutjs.com/documentation/component-custom-elements.html#note-custom-elements-and-internet-explorer-6-to-8 Now, you can open your web application and you should see the text, Hello Search. We put that markup only to check whether everything was working here, so you can remove it now. Writing the Search Form component Now that we know how to create a component, and we put the base of our Search Form component, we can try to look for the requirements for this component. A designer will review the View later, so we need to keep it simple to avoid the need for multiple changes later. 
From our analysis, we find that our competitors use these components: Autocomplete field for the city Calendar fields for check-in and check-out Selection field for the number of rooms, number of adults and number of children, and age of children This is a wireframe of what we should build (we got inspired by Trivago): We could do everything by ourselves, but the easiest way to realize this component is with the help of a few external plugins; we are already using jQuery, so the most obvious idea is to use jQuery UI to get the Autocomplete Widget, the Date Picker Widget, and maybe even the Button Widget. Adding the AMD version of jQuery UI to the project Let's start downloading the current version of jQuery UI (1.11.1); the best thing about this version is that it is one of the first versions that supports AMD natively. After reading the documentation of jQuery UI for the AMD (URL: http://learn.jquery.com/jquery-ui/environments/amd/) you may think that you can get the AMD version using the download link from the home page. However, if you try that you will get just a package with only the concatenated source; for this reason, if you want the AMD source file, you will have to go directly to GitHub or use Bower. Download the package from https://github.com/jquery/jquery-ui/archive/1.11.1.zip and extract it. Every time you use an external library, remember to check the compatibility support. In jQuery UI 1.11.1, as you can see in the release notes, they removed the support for IE7; so we must decide whether we want to support IE6 and 7 by adding specific workarounds inside our code, or we want to remove the support for those two browsers. For our project, we need to put the following folders into these destinations: jquery-ui-1.11.1/ui -> BookingOnline/app/ui jquery-ui-1.11.1/theme/base -> BookingOnline/css/ui We are going to apply the widget by JavaScript, so the only remaining step to integrate jQuery UI is the insertion of the style sheet inside our application. We do this by adding the following rows to the top of our custom style sheet file (BookingOnline/css/styles.css): @import url("ui/core.css");@import url("ui/menu.css");@import url("ui/autocomplete.css");@import url("ui/button.css");@import url("ui/datepicker.css");@import url("ui/theme.css") Now, we are ready to add the widgets to our web application. You can find more information about jQuery UI and AMD at: http://learn.jquery.com/jquery-ui/environments/amd/ Making the skeleton from the wireframe We want to give to the user a really nice user experience, but as the first step we can use the wireframe we put before to create a skeleton of the Search Form. Replace the entire content with a form inside the file BookingOnline/components/search.html: <form data-bind="submit: execute"></form> Then, we add the blocks inside the form, step by step, to realize the entire wireframe: <div>   <input type="text" placeholder="Enter a destination" />   <label> Check In: <input type="text" /> </label>   <label> Check Out: <input type="text" /> </label>   <input type="submit" data-bind="enable: isValid" /></div> Here, we built the first row of the wireframe; we will bind data to each field later. We bound the execute function to the submit event (submit: execute), and a validity check to the button (enable: isValid); for now we will create them empty. 
Update the View Model (search.js) by adding this code inside the constructor: this.isValid = ko.computed(function() {return true;}, this); And add this function to the Search prototype: Search.prototype.execute = function() { }; This is because the validity of the form will depend on the status of the destination field and of the check-in date and check-out date; we will update later, in the next paragraphs. Now, we can continue with the wireframe, with the second block. Here, we should have a field to select the number of rooms, and a block for each room. Add the following markup inside the form, after the previous one, for the second row to the View (search.html): <div>   <fieldset>     <legend>Rooms</legend>     <label>       Number of Room       <select data-bind="options: rangeOfRooms,                           value: numberOfRooms">       </select>     </label>     <!-- ko foreach: rooms -->       <fieldset>         <legend>           Room <span data-bind="text: roomNumber"></span>         </legend>       </fieldset>     <!-- /ko -->   </fieldset></div> In this markup we are asking the user to choose between the values found inside the array rangeOfRooms, to save the selection inside a property called numberOfRooms, and to show a frame for each room of the array rooms with the room number, roomNumber. When developing and we want to check the status of the system, the easiest way to do it is with a simple item inside a View bound to the JSON of a View Model. Put the following code inside the View (search.html): <pre data-bind="text: ko.toJSON($data, null, 2)"></pre> With this code, you can check the status of the system with any change directly in the printed JSON. You can find more information about ko.toJSON at http://knockoutjs.com/documentation/json-data.html Update the View Model (search.js) by adding this code inside the constructor: this.rooms = ko.observableArray([]);this.numberOfRooms = ko.computed({read: function() {   return this.rooms().length;},write: function(value) {   var previousValue = this.rooms().length;   if (value > previousValue) {     for (var i = previousValue; i < value; i++) {       this.rooms.push(new Room(i + 1));     }   } else {     this.rooms().splice(value);     this.rooms.valueHasMutated();   }},owner: this}); Here, we are creating the array of rooms, and a property to update the array properly. If the new value is bigger than the previous value it adds to the array the missing item using the constructor Room; otherwise, it removes the exceeding items from the array. To get this code working we have to create a module, Room, and we have to require it here; update the require block in this way:    var ko = require("knockout"),       template = require("text!./search.html"),       Room = require("room"); Also, add this property to the Search prototype: Search.prototype.rangeOfRooms = ko.utils.range(1, 10); Here, we are asking KnockoutJS for an array with the values from the given range. ko.utils.range is a useful method to get an array of integers. Internally, it simply makes an array from the first parameter to the second one; but if you use it inside a computed field and the parameters are observable, it re-evaluates and updates the returning array. Now, we have to create the View Model of the Room module. 
Create a new file BookingOnline/app/room.js with the following starting code: define(function(require) {var ko = require("knockout");function Room(roomNumber) {   this.roomNumber = roomNumber;}return Room;}); Now, our web application should appear like so: As you can see, we now have a fieldset for each room, so we can work on the template of the single room. Here, you can also see in action the previous tip about the pre field with the JSON data. With KnockoutJS 3.2 it is harder to decide when it's better to use a normal template or a component. The rule of thumb is to identify the degree of encapsulation you want to manage: Use the component when you want a self-enclosed black box, or the template if you want to manage the View Model directly. What we want to show for each room is: Room number Number of adults Number of children Age of each child We can update the Room View Model (room.js) by adding this code into the constructor: this.numberOfAdults = ko.observable(2);this.ageOfChildren = ko.observableArray([]);this.numberOfChildren = ko.computed({read: function() {   return this.ageOfChildren().length;},write: function(value) {   var previousValue = this.ageOfChildren().length;   if (value > previousValue) {     for (var i = previousValue; i < value; i++) {       this.ageOfChildren.push(ko.observable(0));     }   } else {     this.ageOfChildren().splice(value);     this.ageOfChildren.valueHasMutated();   }},owner: this});this.hasChildren = ko.computed(function() {return this.numberOfChildren() > 0;}, this); We used the same logic we have used before for the mapping between the count of the room and the count property, to have an array of age of children. We also created a hasChildren property to know whether we have to show the box for the age of children inside the View. We have to add—as we have done before for the Search View Model—a few properties to the Room prototype: Room.prototype.rangeOfAdults = ko.utils.range(1, 10);Room.prototype.rangeOfChildren = ko.utils.range(0, 10);Room.prototype.rangeOfAge = ko.utils.range(0, 17); These are the ranges we show inside the relative select. Now, as the last step, we have to put the template for the room in search.html; add this code inside the fieldset tag, after the legend tag (as you can see here, with the external markup):      <fieldset>       <legend>         Room <span data-bind="text: roomNumber"></span>       </legend>       <label> Number of adults         <select data-bind="options: rangeOfAdults,                            value: numberOfAdults"></select>       </label>       <label> Number of children         <select data-bind="options: rangeOfChildren,                             value: numberOfChildren"></select>       </label>       <fieldset data-bind="visible: hasChildren">         <legend>Age of children</legend>         <!-- ko foreach: ageOfChildren -->           <select data-bind="options: $parent.rangeOfAge,                               value: $rawData"></select>         <!-- /ko -->       </fieldset>     </fieldset>     <!-- /ko --> Here, we are using the properties we have just defined. We are using rangeOfAge from $parent because inside foreach we changed context, and the property, rangeOfAge, is inside the Room context. Why did I use $rawData to bind the value of the age of the children instead of $data? The reason is that ageOfChildren is an array of observables without any container. 
If you use $data, KnockoutJS will unwrap the observable, making it one-way bound; but if you use $rawData, you will skip the unwrapping and get the two-way data binding we need here. In fact, if we use the one-way data binding our model won't get updated at all. If you really don't like that the fieldset for children goes to the next row when it appears, you can change the fieldset by adding a class, like this: <fieldset class="inline" data-bind="visible: hasChildren"> Now, your application should appear as follows: Now that we have a really nice starting form, we can update the three main fields to use the jQuery UI Widgets. Realizing an Autocomplete field for the destination As soon as we start to write the code for this field we face the first problem: how can we get the data from the backend? Our team told us that we don't have to care about the backend, so we speak to the backend team to know how to get the data. After ten minutes we get three files with the code for all the calls to the backend; all we have to do is to download these files (we already got them with the Starting Package, to avoid another download), and use the function getDestinationByTerm inside the module, services/rest. Before writing the code for the field let's think about which behavior we want for it: When you put three or more letters, it will ask the server for the list of items Each recurrence of the text inside the field into each item should be bold When you select an item, a new button should appear to clear the selection If the current selected item and the text inside the field are different when the focus exits from the field, it should be cleared The data should be taken using the function, getDestinationByTerm, inside the module, services/rest The documentation of KnockoutJS also explains how to create custom binding handlers in the context of RequireJS. The what and why about binding handlers All the bindings we use inside our View are based on the KnockoutJS default binding handler. The idea behind a binding handler is that you should put all the code to manage the DOM inside a component different from the View Model. Other than this, the binding handler should be realized with reusability in mind, so it's always better not to hard-code application logic inside. The KnockoutJS documentation about standard binding is already really good, and you can find many explanations about its inner working in the Appendix, Binding Handler. When you make a custom binding handler it is important to remember that: it is your job to clean after; you should register event handling inside the init function; and you should use the update function to update the DOM depending on the change of the observables. 
This is the standard boilerplate code when you use RequireJS: define(function(require) {var ko = require("knockout"),     $ = require("jquery");ko.bindingHandlers.customBindingHandler = {   init: function(element, valueAccessor,                   allBindingsAccessor, data, context) {     /* Code for the initialization… */     ko.utils.domNodeDisposal.addDisposeCallback(element,       function () { /* Cleaning code … */ });   },   update: function (element, valueAccessor) {     /* Code for the update of the DOM… */   }};}); And inside the View Model module you should require this module, as follows: require('binding-handlers/customBindingHandler'); ko.utils.domNodeDisposal is a list of callbacks to be executed when the element is removed from the DOM; it's necessary because it's where you have to put the code to destroy the widgets, or remove the event handlers. Binding handler for the jQuery Autocomplete widget So, now we can write our binding handler. We will define a binding handler named autocomplete, which takes the observable to put the found value. We will also define two custom bindings, without any logic, to work as placeholders for the parameters we will send to the main binding handler. Our binding handler should: Get the value for the autoCompleteOptions and autoCompleteEvents optional data bindings. Apply the Autocomplete Widget to the item using the option of the previous step. Register all the event listeners. Register the disposal of the Widget. We also should ensure that if the observable gets cleared, the input field gets cleared too. So, this is the code of the binding handler to put inside BookingOnline/app/binding-handlers/autocomplete.js (I put comments between the code to make it easier to understand): define(function(require) {var ko = require("knockout"),     $ = require("jquery"),     autocomplete = require("ui/autocomplete");ko.bindingHandlers.autoComplete = {   init: function(element, valueAccessor, allBindingsAccessor, data, context) { Here, we are giving the name autoComplete to the new binding handler, and we are also loading the Autocomplete Widget of jQuery UI: var value = ko.utils.unwrapObservable(valueAccessor()),   allBindings = ko.utils.unwrapObservable(allBindingsAccessor()),   options = allBindings.autoCompleteOptions || {},   events = allBindings.autoCompleteEvents || {},   $element = $(element); Then, we take the data from the binding for the main parameter, and for the optional binding handler; we also put the current element into a jQuery container: autocomplete(options, $element);if (options._renderItem) {   var widget = $element.autocomplete("instance");   widget._renderItem = options._renderItem;}for (var event in events) {   ko.utils.registerEventHandler(element, event, events[event]);} Now we can apply the Autocomplete Widget to the field. If you are questioning why we used ko.utils.registerEventHandler here, the answer is: to show you this function. If you look at the source, you can see that under the wood it uses $.bind if jQuery is registered; so in our case we could simply use $.bind or $.on without any problem. But I wanted to show you this function because sometimes you use KnockoutJS without jQuery, and you can use it to support event handling of every supported browser. 
The source code of the function _renderItem is (looking at the file ui/autocomplete.js):

_renderItem: function( ul, item ) {
  return $( "<li>" ).text( item.label ).appendTo( ul );
},

As you can see, for security reasons, it uses the text function to avoid any possible code injection. It is important to know that you should validate data each time you get it from an external source and put it in the page. In this case, the source of the data is already secured (because we manage it), so we override the normal behavior to also show the HTML tags for the bold parts of the text.

In the last three rows we loop over the events object and register each event handler. The standard way to register for events is with the event binding handler. The only reason to use a custom helper here is to give the developer of the View a way to register events more than once.

Then, we add the disposal code to the init function:

// handle disposal
ko.utils.domNodeDisposal.addDisposeCallback(element, function() {
  $element.autocomplete("destroy");
});

Here, we use the destroy function of the widget. It's really important to clean up after the use of any jQuery UI Widget or you'll create a really bad memory leak; it's not a big problem with simple applications, but it will be a really big problem if you build an SPA.

Now, we can add the update function:

  },
  update: function(element, valueAccessor) {
    var value = valueAccessor(),
        $element = $(element),
        data = value();
    if (!data)
      $element.val("");
  }
};
});

Here, we read the value of the observable, and clean the field if the observable is empty. The update function is executed as a computed observable, so we must be sure that we subscribe to the observables required inside it. So, pay attention if you put conditional code before the subscription, because your update function might not be called anymore.

Now that the binding is ready, we should require it inside our form; update the View search.html by modifying the following row:

<input type="text" placeholder="Enter a destination" />

Into this:

<input type="text" placeholder="Enter a destination"
       data-bind="autoComplete: destination,
                  autoCompleteEvents: destination.events,
                  autoCompleteOptions: destination.options" />

If you try the application you will not see any error; the reason is that KnockoutJS ignores any data binding not registered inside the ko.bindingHandlers object, and we didn't require the binding handler autocomplete module. So, the last step to get everything working is to update the View Model of the component; add these rows at the top of search.js, with the other require(…) rows:

     Room = require("room"),
     rest = require("services/rest");
require("binding-handlers/autocomplete");

We need a reference to our new binding handler, and a reference to the rest object to use it as the source of data.
Now, we must declare the properties we used inside our data binding; add all these properties to the constructor as shown in the following code:

this.destination = ko.observable();
this.destination.options = {
  minLength: 3,
  source: rest.getDestinationByTerm,
  select: function(event, data) {
    this.destination(data.item);
  }.bind(this),
  _renderItem: function(ul, item) {
    return $("<li>").append(item.label).appendTo(ul);
  }
};
this.destination.events = {
  blur: function(event) {
    if (this.destination() && (event.currentTarget.value !==
                               this.destination().value)) {
      this.destination(undefined);
    }
  }.bind(this)
};

Here, we are defining the container (destination) for the data selected inside the field, an object (destination.options) with any property we want to pass to the Autocomplete Widget (you can check all the documentation at: http://api.jqueryui.com/autocomplete/), and an object (destination.events) with any event we want to apply to the field. Here, we are clearing the field if the text inside the field and the content of the saved data (inside destination) are different.

Have you noticed .bind(this) in the previous code? You can check by yourself that the value of this inside these functions is the input field. As you can see, in our code we put references to the destination property of this, so we have to update the context to be the object itself; the easiest way to do this is with a simple call to the bind function.

Summary

In this article, we have seen some of the functionalities of KnockoutJS (core). The application we realized was simple enough, but we used it to learn better how to use components and custom binding handlers. If you think we put too much code into such a small project, try to think about the differences you have seen between the first and the second component: the more component and binding handler code you write, the less you will have to write in the future. The most important point about components and custom binding handlers is that you have to build them with future reuse in mind; the more good code you write, the better it will be for you later. The core point of this article was AMD and RequireJS: how to use them inside a KnockoutJS project, and why you should do it.

Resources for Article:

Further resources on this subject:

Components [article]
Web Application Testing [article]
Top features of KnockoutJS [article]