
How-To Tutorials - Front-End Web Development

341 Articles

YUI 2.8: Rich Text Editor

Packt · 16 Jul 2010 · 9 min read
Long gone are the days when we struggled to highlight a word in an e-mail message for lack of underlining or boldfacing. The rich graphic environment of the web has extended to everything we do on it; plain text is no longer fashionable. YUI includes a Rich Text Editor (RTE) component in two varieties: the basic YAHOO.widget.SimpleEditor and the full YAHOO.widget.Editor. Both editors are very simple to include in a web page, and they enable our visitors to enter richly formatted documents that we can easily read and use in our applications. Beyond that, the RTE is highly customizable and allows us to tailor the editor we show the user in any way we want. In this article we'll see:

- What each of the two editors offers
- How to create either of them
- Ways to retrieve the text entered
- How to add toolbar commands

The two editors

Nothing comes for free; features take bandwidth. The RTE component therefore has two versions: SimpleEditor, which provides the basic editing functionality, and Editor, a subclass of SimpleEditor that adds several features at a cost of close to 40% more size, plus several more dependencies, which we might have already loaded and so might not add to the total. A look at their toolbars helps to show the differences.

The standard toolbar of SimpleEditor allows selection of fonts, sizes, and styles, lets us pick the color of both the text and the background, and can create lists and insert links and pictures. The full editor adds subscript and superscript, remove formatting, show source, and undo and redo commands to the top toolbar, and text alignment, <Hn> paragraph styles, and indenting commands to the bottom toolbar. Beyond the dependencies common to both, the full editor requires Button and Menu so that the regular HTML <select> boxes can be replaced by fancier ones.

Finally, while SimpleEditor simply calls window.prompt() when we insert an image or a link, showing a standard input box that asks for the URL of the image or the link destination, the full editor can show a more elaborate dialog box, such as the one used for the Insert Image command.

A simple e-mail editor

It is high time we did some coding; however, I hope nobody gets frustrated at how little we'll do because, even though the RTE is quite a complex component and does wonderful things, there is amazingly little we have to do to get one up and running. This is the HTML for the example:

    <form method="get" action="#" id="form1">
      <div class="fieldset"><label for="to">To:</label>
        <input type="text" name="to" id="to"/></div>
      <div class="fieldset"><label for="from">From:</label>
        <input type="text" name="from" id="from" value="me" /></div>
      <div class="fieldset"><label for="subject">Subject:</label>
        <input type="text" name="subject" id="subject"/></div>
      <textarea id="msgBody" name="msgBody" rows="20" cols="75">
        Lorem ipsum dolor sit amet, and so on
      </textarea>
      <input type="submit" value=" Send Message " />
    </form>

This simple code, assisted by a little CSS, produces pretty much the finished page, except for the editing toolbar. This is by design: RTE uses Progressive Enhancement to turn the <textarea> into a fully featured editing window, so if you don't have JavaScript enabled, you'll still be able to get your text edited, though it will be plain text.
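Incidentally, if we wanted the full editor's richer toolbar instead, creation would look almost identical. The following is a minimal sketch (an assumption based on the API described in this article, not code from it), and it presumes the extra Button and Menu dependencies mentioned above, plus whatever they themselves require, have also been loaded:

    // A minimal sketch, assuming the full editor's extra dependencies
    // (Button, Menu and their requirements) are loaded alongside the
    // common RTE files. The rest of this article sticks with SimpleEditor.
    YAHOO.util.Event.onDOMReady(function () {
        var myFullEditor = new YAHOO.widget.Editor('msgBody', {
            height: '300px',
            width: '740px',
            handleSubmit: true
        });
        myFullEditor.render();
    });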
The form should really have its method set to "post", since the body of the message might be quite long and exceed the browser limit for a "get" request, but using "get" in this demo lets us see in the location bar of the browser what would actually get transmitted to the server.

Our page requires the following dependencies: yahoo-dom-event.js, element-min.js, and simpleeditor-min.js, along with its CSS file, simpleeditor.css. In a <script> tag right before the closing </body> we will have:

    YAHOO.util.Event.onDOMReady(function () {
        var myEditor = new YAHOO.widget.SimpleEditor('msgBody', {
            height: '300px',
            width: '740px',
            handleSubmit: true
        });
        myEditor.get('toolbar').titlebar = false;
        myEditor.render();
    });

This is all the code we need to turn that <textarea> into an RTE; we simply create an instance of SimpleEditor, giving the id of the <textarea> and a series of options. In this case we set the size of the editor and tell it that it should take care of submitting the data in the RTE along with the rest of the form. When this option is true, the RTE sets a listener for the form submission and dumps the contents of the editor window back into the <textarea> so it gets submitted along with the rest of the form. The RTE normally shows a title bar over the toolbar; we don't want this in our application, and we eliminate it simply by setting the titlebar property in the toolbar configuration attribute to false. Alternatively, we could have set it to any HTML string we wanted shown in that area. Finally, we simply render the editor. That is all we need to do; the RTE will take care of all editing chores and, when the form is about to be submitted, it will send its data along with the rest.

Filtering the data

The RTE will not send the data unfiltered; it will process the HTML in its editing area to make sure it is clean, safe, and compliant. Why would we expect our data to contain anything invalid? If all the text were written from within the RTE, there would be no problem at all, as the RTE won't generate anything wrong, but that is not always the case. Plenty of text will be cut from somewhere else and pasted into the RTE, and that text brings plenty of existing markup with it. To clean up the text, the RTE considers the idiosyncrasies of a few user agents and the settings of a couple of configuration attributes. The filterWord configuration attribute makes sure that the extra markup introduced by text pasted into the editor from MS Word does not get through. The markup configuration attribute has four possible settings:

- semantic: This is the default setting; it favors semantic tags over styling tags. For example, it changes <b> into <strong>, <i> into <em>, and <font> into <span style="font: …">.
- css: This favors CSS style attributes, for example, changing <b> into <span style="font-weight:bold">.
- default: This does the minimum amount of changes required for safety and compliance.
- xhtml: Among other changes, this makes sure all tags are closed, such as <br />, <img />, and <input />.

The default setting (which, despite its name, is not the default value) represents the least filtering, which is done in all cases: it makes sure tags have their matching closing tags, extra whitespace is stripped off, and the tags and attributes are in lowercase. It also drops several tags that don't fit in an HTML fragment, such as <html> or <body>, that are unsafe, such as <script> or <iframe>, or that would involve actions, such as <form> or form input elements.
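As a minimal sketch of the two configuration attributes just described (the particular values chosen here are only an illustration, not part of the article's example), both can be passed at construction time:

    // A minimal sketch: requesting XHTML-style output and Word filtering.
    var strictEditor = new YAHOO.widget.SimpleEditor('msgBody', {
        height: '300px',
        width: '740px',
        markup: 'xhtml',   // one of: semantic, css, default, xhtml
        filterWord: true   // strip extra markup pasted in from MS Word
    });
    strictEditor.render();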
The list of invalid tags is stored in the .invalidHTML property and can be freely changed.

More validation

We can further validate what the RTE sends in the form; instead of letting the RTE handle the data submission automatically, we can handle it ourselves by simply changing the previous code to this:

    YAHOO.util.Event.onDOMReady(function () {
        var Dom = YAHOO.util.Dom,
            Event = YAHOO.util.Event;
        var myEditor = new YAHOO.widget.SimpleEditor('msgBody', {
            height: '300px',
            width: '740px'
        });
        myEditor.get('toolbar').titlebar = false;
        myEditor.render();
        Event.on('form1', 'submit', function (ev) {
            var html = myEditor.getEditorHTML();
            html = myEditor.cleanHTML(html);
            if (html.search(/<strong/gi) > -1) {
                alert("Don't shout at me!");
                Event.stopEvent(ev);
            }
            this.msgBody.innerHTML = html;
        });
    });

We have dropped the handleSubmit configuration attribute when creating the SimpleEditor instance, as we want to handle submission ourselves. We listen for the form's submit event, and in the listener we read the actual rich text from the RTE via .getEditorHTML(). We may or may not want to clean it; in this example, we do so by calling .cleanHTML(). In fact, if we call .cleanHTML() with no arguments, we get the cleaned-up rich text directly; we don't need to call .getEditorHTML() first. Then we can do any validation we want on that string and on any of the other values. We use the Event utility's .stopEvent() method to prevent the form from submitting if an error is found, but if everything checks out fine, we save the HTML we recovered from the RTE into the <textarea>, just as if we had the handleSubmit configuration attribute set, except that now we actually control what goes there.

In the case of text in boldface, it would seem easy to filter it out by simply adding this line:

    myEditor.invalidHTML.strong = true;

However, this erases the tag and all the content in between, which is probably not what we wanted. Likewise, we could have set .invalidHTML.em to true to drop italics, but other elements are not so easy. RTE replaces a <u> (long deprecated) with <span style="text-decoration:underline;">, which is impossible to drop in this way. Besides, these replacements depend on the setting of the markup configuration attribute.

This example has also served to show how data can be read from the RTE and cleaned if desired. Data can also be sent to the RTE by using .setEditorHTML(), but not before the editorContentLoaded event has fired, as the RTE would not be ready to receive it. In the example, we wanted to manipulate the editor contents, so we read them and saved them back into the <textarea> in separate steps; otherwise, we could have used the .saveHTML() method to send the data back to the <textarea> directly. In fact, this is what the RTE itself does when we set handleSubmit to true.
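A short sketch of that last point (an assumption following the API described above; the markup string is only an illustration): writing content into the editor has to wait for editorContentLoaded, so we subscribe before rendering.

    // A minimal sketch: pre-loading content into the editor.
    // setEditorHTML() must not be called before editorContentLoaded fires.
    myEditor.on('editorContentLoaded', function () {
        myEditor.setEditorHTML('<p>Dear reader,</p><p>Lorem ipsum…</p>');
    });
    myEditor.render();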


Aggregators, File Exchange over FTP/FTPS, Social Integration, and Enterprise Messaging

Packt · 20 Feb 2015 · 26 min read
In this article by Chandan Pandey, the author of Spring Integration Essentials, we will explore the out-of-the-box capabilities that the Spring Integration framework provides for a seamless flow of messages across heterogeneous components, and see what Spring Integration has in the box when it comes to real-world integration challenges. We will cover Spring Integration's support for external components, looking at the following topics in detail:

- Aggregators
- File exchange over FTP/FTPS
- Social integration
- Enterprise messaging

Aggregators

Aggregators are the opposite of splitters: they combine multiple messages and present them as a single message to the next endpoint. This is a very complex operation, so let's start with a real-life scenario. A news channel might have many correspondents who can upload articles and related images. It might happen that the text of an article arrives much sooner than the associated images, but the article must be sent for publishing only when all relevant images have also arrived. This scenario throws up a lot of challenges: partial articles should be stored somewhere, there should be a way to correlate incoming components with existing ones, and there should also be a way to identify the completion of a message. Aggregators handle all of these aspects; the relevant concepts are MessageStore, CorrelationStrategy, and ReleaseStrategy. Let's start with a code sample and then dive down to explore each of these concepts in detail:

    <int:aggregator
      input-channel="fetchedFeedChannelForAggregatior"
      output-channel="aggregatedFeedChannel"
      ref="aggregatorSoFeedBean"
      method="aggregateAndPublish"
      release-strategy="sofeedCompletionStrategyBean"
      release-strategy-method="checkCompleteness"
      correlation-strategy="soFeedCorrelationStrategyBean"
      correlation-strategy-method="groupFeedsBasedOnCategory"
      message-store="feedsMySqlStore"
      expire-groups-upon-completion="true">
      <int:poller fixed-rate="1000"></int:poller>
    </int:aggregator>

A pretty big declaration! And why not: a lot of things combine to act as an aggregator. Let's quickly glance at all the attributes used:

- int:aggregator: This specifies the Spring framework's namespace element for the aggregator.
- input-channel: This is the channel from which messages will be consumed.
- output-channel: This is the channel on which messages will be dropped after aggregation.
- ref: This specifies the bean having the method that is called on the release of messages.
- method: This specifies the method that is invoked when messages are released.
- release-strategy: This specifies the bean having the method that decides whether aggregation is complete or not.
- release-strategy-method: This is the method having the logic to check for completeness of the message.
- correlation-strategy: This specifies the bean having the method to correlate the messages.
- correlation-strategy-method: This is the method having the actual logic to correlate the messages.
- message-store: This specifies the message store where messages are temporarily stored until they have been correlated and are ready to be released. This can be in memory (the default) or a persistence store. If a persistence store is configured, message delivery will be resumed across a server crash.
A Java class can be defined as an aggregator and, as described in the previous bullet points, the method and ref parameters decide which method of the bean (referred to by ref) is invoked when messages have been aggregated as per the CorrelationStrategy and released after fulfilment of the ReleaseStrategy. In the following example, we are just printing the messages before passing them on to the next consumer in the chain:

    public class SoFeedAggregator {
      public List<SyndEntry> aggregateAndPublish(List<SyndEntry> messages) {
        //Do some pre-processing before passing on to next channel
        return messages;
      }
    }

Let's get to the details of the three most important components that complete the aggregator.

Correlation strategy

The aggregator needs to group the messages, but how does it decide the groups? In simple words, CorrelationStrategy decides how to correlate the messages. The default is based on a header named CORRELATION_ID: all messages having the same value for the CORRELATION_ID header are put in one bracket. Alternatively, we can designate any Java class and its method to define a custom correlation strategy, or we can implement the Spring Integration framework's CorrelationStrategy interface to define it. If the CorrelationStrategy interface is implemented, then the getCorrelationKey() method should be implemented. Let's see the correlation strategy in our feeds example:

    public class CorrelationStrategy {
      public Object groupFeedsBasedOnCategory(Message<?> message) {
        if (message != null) {
          SyndEntry entry = (SyndEntry) message.getPayload();
          List<SyndCategoryImpl> categories = entry.getCategories();
          if (categories != null && categories.size() > 0) {
            for (SyndCategoryImpl category : categories) {
              //for simplicity, let's consider the first category
              return category.getName();
            }
          }
        }
        return null;
      }
    }

So how are we correlating our messages? We are correlating the feeds based on the category name. The method must return an object that can be used for correlating the messages. If a user-defined object is returned, it must satisfy the requirements for a key in a map, such as defining hashCode() and equals(). The return value must not be null. Alternatively, if we had wanted to implement it with framework support, it would have looked like this:

    public class SoFeedCorrelationStrategy implements CorrelationStrategy {
      public Object getCorrelationKey(Message<?> message) {
        if (message != null) {
          …
          return category.getName();
        }
        return null;
      }
    }

Release strategy

We have been grouping messages based on the correlation strategy, but when do we release the group to the next component? This is decided by the release strategy. Similar to the correlation strategy, any Java POJO can define the release strategy, or we can extend framework support. Here is an example using a Java POJO class:

    public class CompletionStrategy {
      public boolean checkCompleteness(List<SyndEntry> messages) {
        if (messages != null) {
          if (messages.size() > 2) {
            return true;
          }
        }
        return false;
      }
    }

The argument of the method must be a collection of messages, and it must return a Boolean indicating whether to release the accumulated messages or not. For simplicity, we have just checked the number of messages from the same category; if it's greater than two, we release the messages.

Message store

Until an aggregated message fulfils the release criteria, the aggregator needs to store the messages temporarily.
This is where message stores come into the picture. Message stores are of two types: in-memory and persistence stores. The default is in-memory; if this is to be used, there is no need to declare the attribute at all. If a persistent message store needs to be used, then it must be declared and its reference given to the message-store attribute. A MySQL message store can be declared and referenced as follows:

    <bean id="feedsMySqlStore"
      class="org.springframework.integration.jdbc.JdbcMessageStore">
      <property name="dataSource" ref="feedsSqlDataSource"/>
    </bean>

The data source is the Spring framework's standard JDBC data source. The greatest advantage of using a persistence store is recoverability: if the system recovers from a crash, the aggregated messages will not be lost. Another advantage is capacity: memory is limited and can accommodate only a limited number of messages for aggregation, but a database can offer much more space.

FTP/FTPS

FTP, or File Transfer Protocol, is used to transfer files across networks. FTP communication consists of two parts: server and client. The client establishes a session with the server, after which it can download or upload files. Spring Integration provides components that act as a client and connect to the FTP server to communicate with it. What about the server, which server will it connect to? If you have access to any public or hosted FTP server, use it. Otherwise, the easiest way to try out the examples in this section is to set up a local instance of an FTP server. FTP setup is out of the scope of this article.

Prerequisites

To use Spring Integration components for FTP/FTPS, we need to add a namespace to our configuration file and then add the Maven dependency entry in the pom.xml file. With those entries in place, a session factory is defined as follows:

    <bean id="ftpClientSessionFactory"
      class="org.springframework.integration.ftp.session.DefaultFtpSessionFactory">
      <property name="host" value="localhost"/>
      <property name="port" value="21"/>
      <property name="username" value="testuser"/>
      <property name="password" value="testuser"/>
    </bean>

The DefaultFtpSessionFactory class is at work here, and it takes the following parameters: the host that is running the FTP server, the port at which it's running, the username, and the password for the server. A pool of sessions is maintained for the factory, and an instance is returned when required. Spring takes care of validating that a stale session is never returned.

Downloading files from the FTP server

Inbound adapters can be used to read files from the server. The most important aspect is the session factory that we just discussed in the preceding section. The following code snippet configures an FTP inbound adapter that downloads a file from a remote directory and makes it available for processing:

    <int-ftp:inbound-channel-adapter
      channel="ftpOutputChannel"
      session-factory="ftpClientSessionFactory"
      remote-directory="/"
      local-directory="C:\Chandan\Projects\siexample\ftp\ftplocalfolder"
      auto-create-local-directory="true"
      delete-remote-files="true"
      filename-pattern="*.txt"
      local-filename-generator-expression="#this.toLowerCase() + '.trns'">
      <int:poller fixed-rate="1000"/>
    </int-ftp:inbound-channel-adapter>

Let's quickly go through the tags used in this code:

- int-ftp:inbound-channel-adapter: This is the namespace support for the FTP inbound adapter.
- channel: This is the channel on which the downloaded files will be put as messages.
- session-factory: This is the factory instance that encapsulates the details for connecting to the server.
- remote-directory: This is the directory on the server where the adapter should listen for the arrival of new files.
- local-directory: This is the local directory where the downloaded files should be dumped.
- auto-create-local-directory: If enabled, this will create the local directory structure if it's missing.
- delete-remote-files: If enabled, this will delete the files in the remote directory after they have been downloaded successfully. This helps in avoiding duplicate processing.
- filename-pattern: This can be used as a filter; only files matching the specified pattern will be downloaded.
- local-filename-generator-expression: This can be used to generate a local filename.

An inbound adapter is a special listener that listens for events on the remote directory, for example, an event fired on the creation of a new file. At that point, it initiates the file transfer. It creates a payload of type Message<File> and puts it on the output channel. By default, the filename is retained and a file with the same name as the remote file is created in the local directory. This can be overridden by using local-filename-generator-expression.

Incomplete files

On the remote server, there could be files that are still in the process of being written. Typically, their extension is different, for example, filename.actualext.writing. The best way to avoid reading incomplete files is to use a filename pattern that will copy only those files that have been written completely.

Uploading files to the FTP server

Outbound adapters can be used to write files to the server. The following code snippet reads a message from a specified channel and writes it to the FTP server's remote directory. The remote server session is determined as usual by the session factory. Make sure the username configured in the session object has the necessary permission to write to the remote directory. The following configuration sets up an FTP adapter that can upload files to the specified directory:

    <int-ftp:outbound-channel-adapter channel="ftpOutputChannel"
      remote-directory="/uploadfolder"
      session-factory="ftpClientSessionFactory"
      auto-create-directory="true">
    </int-ftp:outbound-channel-adapter>

Here is a brief description of the tags used:

- int-ftp:outbound-channel-adapter: This is the namespace support for the FTP outbound adapter.
- channel: This is the name of the channel whose payload will be written to the remote server.
- remote-directory: This is the remote directory where files will be put. The user configured in the session factory must have the appropriate permission.
- session-factory: This encapsulates the details for connecting to the FTP server.
- auto-create-directory: If enabled, this will automatically create the remote directory if it's missing, provided the given user has sufficient permission.

The payload on the channel need not necessarily be a file type; it can be one of the following:

- java.io.File: A Java File object
- byte[]: A byte array that represents the file contents
- java.lang.String: Text that represents the file contents

Avoiding partially written files

Files on the remote server must be made available only when they have been written completely, not while they are still partial. Spring uses the mechanism of writing the files to a temporary location and publishing their availability only once they have been completely written.
By default, a temporary-file suffix is appended; it can be changed using the temporary-file-suffix property, and the behavior can be completely disabled by setting use-temporary-file-name to false.

FTP outbound gateway

A gateway, by definition, is a two-way component: it accepts input and provides a result for further processing. So what are the input and output in the case of FTP? The gateway issues commands to the FTP server and returns the result of the command. The following snippet issues an ls command with the option -1 to the server. The result is a list of string objects containing the name of each file, which is put on the reply channel. The code is as follows:

    <int-ftp:outbound-gateway id="ftpGateway"
      session-factory="ftpClientSessionFactory"
      request-channel="commandInChannel"
      command="ls"
      command-options="-1"
      reply-channel="commandOutChannel"/>

The tags are pretty simple:

- int-ftp:outbound-gateway: This is the namespace support for the FTP outbound gateway.
- session-factory: This is the wrapper for the details needed to connect to the FTP server.
- command: This is the command to be issued.
- command-options: These are the options for the command.
- reply-channel: The response of the command is put on this channel.

FTPS support

For FTPS support, all that is needed is to change the factory class: an instance of org.springframework.integration.ftp.session.DefaultFtpsSessionFactory should be used. Note the s in DefaultFtpsSessionFactory. Once the session is created with this factory, it's ready to communicate over a secure channel. Here is an example of a secure session factory configuration:

    <bean id="ftpSClientFactory"
      class="org.springframework.integration.ftp.session.DefaultFtpsSessionFactory">
      <property name="host" value="localhost"/>
      <property name="port" value="22"/>
      <property name="username" value="testuser"/>
      <property name="password" value="testuser"/>
    </bean>

Although it is obvious, I would remind you that the FTP server must be configured to support a secure connection and have the appropriate port open.

Social integration

Any application in today's context is incomplete if it does not provide support for social messaging. Spring Integration provides in-built support for many social interfaces, such as e-mail, Twitter feeds, and so on. Let's discuss the implementation of Twitter in this section. Prior to version 2.1, Spring Integration depended on the Twitter4J API for Twitter support, but it now leverages Spring's social module for Twitter integration. Spring Integration provides interfaces for receiving and sending tweets, as well as for searching and publishing the search results in messages. Twitter uses OAuth for authentication purposes, so an app must be registered before we start Twitter development on it.

Prerequisites

Let's look at the steps that need to be completed before we can use a Twitter component in our Spring Integration example. A Twitter account is needed; perform the following steps to get the keys that will allow the user to use Twitter via the API:

1. Visit https://apps.twitter.com/.
2. Sign in to your account.
3. Click on Create New App.
4. Enter details such as Application name, Description, Website, and so on. All fields are self-explanatory, and appropriate help is also provided. The value for the Website field need not be a valid one; put an arbitrary website name in the correct format.
5. Click on the Create your application button.
6. If the application is created successfully, a confirmation message will be shown and the Application Management page will appear.
7. Go to the Keys and Access Tokens tab and note the details for Consumer Key (API Key) and Consumer Secret (API Secret) under Application Settings.
8. You need additional access tokens so that applications can use Twitter via the API. Click on Create my access token; it takes a while to generate these tokens. Once they are generated, note down the values of Access Token and Access Token Secret.
9. Go to the Permissions tab and grant the Read, Write and Access direct messages permission.

After performing all these steps, with the required keys and access tokens, we are ready to use Twitter. Let's store them in the twitterauth.properties property file:

    twitter.oauth.apiKey= lnrDlMXSDnJumKLFRym02kHsy
    twitter.oauth.apiSecret= 6wlriIX9ay6w2f6at6XGQ7oNugk6dqNQEAArTsFsAU6RU8F2Td
    twitter.oauth.accessToken= 158239940-FGZHcbIDtdEqkIA77HPcv3uosfFRnUM30hRix9TI
    twitter.oauth.accessTokenSecret= H1oIeiQOlvCtJUiAZaachDEbLRq5m91IbP4bhg1QPRDeh

The next step towards Twitter integration is the creation of a Twitter template. This is similar to the data source or connection factory for databases, JMS, and so on: it encapsulates the details needed to connect to a social platform. Here is the code snippet:

    <context:property-placeholder location="classpath:twitterauth.properties"/>
    <bean id="twitterTemplate"
      class="org.springframework.social.twitter.api.impl.TwitterTemplate">
      <constructor-arg value="${twitter.oauth.apiKey}"/>
      <constructor-arg value="${twitter.oauth.apiSecret}"/>
      <constructor-arg value="${twitter.oauth.accessToken}"/>
      <constructor-arg value="${twitter.oauth.accessTokenSecret}"/>
    </bean>

As mentioned, the template encapsulates all the values. Here is the order of the arguments: apiKey, apiSecret, accessToken, and accessTokenSecret.

With all the setup in place, let's now do some real work. An inbound channel adapter can be configured as follows:

    <int-twitter:inbound-channel-adapter
      twitter-template="twitterTemplate"
      channel="twitterChannel">
    </int-twitter:inbound-channel-adapter>

The components in this code are covered in the following bullet points:

- int-twitter:inbound-channel-adapter: This is the namespace support for Twitter's inbound channel adapter.
- twitter-template: This is the most important aspect. The Twitter template encapsulates which account to use to poll the Twitter site. The details given in the preceding code snippet are fake and should be replaced with real connection parameters.
- channel: Messages are dumped on this channel.

These adapters are further used for other applications, such as searching messages, retrieving direct messages, and retrieving tweets that mention your account. Let's have a quick look at the code snippets for these adapters. I will not go into detail for each one; they are almost similar to what has been discussed previously.

- Search: This adapter helps to search the tweets for the parameter configured in the query tag. The code is as follows:

    <int-twitter:search-inbound-channel-adapter id="testSearch"
      twitter-template="twitterTemplate"
      query="#springintegration"
      channel="twitterSearchChannel">
    </int-twitter:search-inbound-channel-adapter>

- Retrieving direct messages: This adapter allows us to receive direct messages for the account in use (the account configured in the Twitter template).
The code is as follows:

    <int-twitter:dm-inbound-channel-adapter
      id="testdirectMessage"
      twitter-template="twitterTemplate"
      channel="twitterDirectMessageChannel">
    </int-twitter:dm-inbound-channel-adapter>

- Retrieving mention messages: This adapter allows us to receive messages that mention the configured account via the @user tag (the account configured in the Twitter template). The code is as follows:

    <int-twitter:mentions-inbound-channel-adapter
      id="testmentionMessage"
      twitter-template="twitterTemplate"
      channel="twitterMentionMessageChannel">
    </int-twitter:mentions-inbound-channel-adapter>

Sending tweets

Twitter exposes outbound adapters to send messages. Here is a sample code:

    <int-twitter:outbound-channel-adapter
      twitter-template="twitterTemplate"
      channel="twitterSendMessageChannel"/>

Whatever message is put on the twitterSendMessageChannel channel is tweeted by this adapter. Similar to the inbound gateway, the outbound gateway provides support for sending direct messages. Here is a simple example of an outbound adapter:

    <int-twitter:dm-outbound-channel-adapter
      twitter-template="twitterTemplate"
      channel="twitterSendDirectMessage"/>

Any message that is put on the twitterSendDirectMessage channel is sent to the user directly. But where is the name of the user to whom the message will be sent? It is decided by a header in the message, TwitterHeaders.DM_TARGET_USER_ID. This must be populated either programmatically or by using enrichers or SpEL. For example, it can be added programmatically as follows:

    Message message = MessageBuilder.withPayload("Chandan")
      .setHeader(TwitterHeaders.DM_TARGET_USER_ID, "test_id").build();

Alternatively, it can be populated by using a header enricher, as follows:

    <int:header-enricher input-channel="twitterIn"
      output-channel="twitterOut">
      <int:header name="twitter_dmTargetUserId" value="test_id"/>
    </int:header-enricher>

Twitter search outbound gateway

As gateways provide a two-way window, the search outbound gateway can be used to issue dynamic search commands and receive the results as a collection. If no result is found, the collection is empty. Let's configure a search outbound gateway, as follows:

    <int-twitter:search-outbound-gateway id="twitterSearch"
      request-channel="searchQueryChannel"
      twitter-template="twitterTemplate"
      search-args-expression="#springintegration"
      reply-channel="searchQueryResultChannel"/>

And here is what the tags in this code mean:

- int-twitter:search-outbound-gateway: This is the namespace for the Twitter search outbound gateway.
- request-channel: This is the channel that is used to send search requests to this gateway.
- twitter-template: This is the Twitter template reference.
- search-args-expression: This is used as the argument for the search.
- reply-channel: This is the channel on which search results are populated.

This gives us enough to get started with the social integration aspects of the Spring framework.

Enterprise messaging

The enterprise landscape is incomplete without JMS; it is one of the most commonly used mediums of enterprise integration. Spring provides very good support for it, and Spring Integration builds on that support, providing adapters and gateways to receive and consume messages from many middleware brokers such as ActiveMQ, RabbitMQ, Redis, and so on. Spring Integration provides inbound and outbound adapters to send and receive messages, along with gateways that can be used in a request/reply scenario. Let's walk through these implementations in a little more detail.
A basic understanding of the JMS mechanism and its concepts is expected; it is not possible to cover even an introduction to JMS here. Let's start with the prerequisites.

Prerequisites

To use Spring Integration messaging components, the namespace support and the relevant Maven dependency should be added. The Maven entry can be provided using the following code snippet:

    <dependency>
      <groupId>org.springframework.integration</groupId>
      <artifactId>spring-integration-jms</artifactId>
      <version>${spring.integration.version}</version>
    </dependency>

After adding these dependencies, we are ready to use the components. But before we can use an adapter, we must configure an underlying message broker. Let's configure ActiveMQ. Add the following in pom.xml:

    <dependency>
      <groupId>org.apache.activemq</groupId>
      <artifactId>activemq-core</artifactId>
      <version>${activemq.version}</version>
      <exclusions>
        <exclusion>
          <artifactId>spring-context</artifactId>
          <groupId>org.springframework</groupId>
        </exclusion>
      </exclusions>
    </dependency>
    <dependency>
      <groupId>org.springframework</groupId>
      <artifactId>spring-jms</artifactId>
      <version>${spring.version}</version>
      <scope>compile</scope>
    </dependency>

After this, we are ready to create a connection factory and a JMS queue that will be used by the adapters to communicate. First, create a session factory. As you will notice, this is wrapped in Spring's CachingConnectionFactory, but the underlying provider is ActiveMQ:

    <bean id="connectionFactory"
      class="org.springframework.jms.connection.CachingConnectionFactory">
      <property name="targetConnectionFactory">
        <bean class="org.apache.activemq.ActiveMQConnectionFactory">
          <property name="brokerURL" value="vm://localhost"/>
        </bean>
      </property>
    </bean>

Let's create a queue that can be used to retrieve and put messages:

    <bean
      id="feedInputQueue"
      class="org.apache.activemq.command.ActiveMQQueue">
      <constructor-arg value="queue.input"/>
    </bean>

Now we are ready to send and retrieve messages from the queue. Let's look at each case one by one.

Receiving messages – the inbound adapter

Spring Integration provides two ways of receiving messages: polling and event listening. Both are based on the underlying Spring framework's comprehensive support for JMS: JmsTemplate is used by the polling adapter, while MessageListener is used by the event-driven adapter. As the name suggests, a polling adapter keeps polling the queue for the arrival of new messages and puts a message on the configured channel when it finds one. In the case of the event-driven adapter, on the other hand, it's the responsibility of the server to notify the configured adapter.
The polling adapter

Let's start with a code sample:

    <int-jms:inbound-channel-adapter
      connection-factory="connectionFactory"
      destination="feedInputQueue"
      channel="jmsProcessedChannel">
      <int:poller fixed-rate="1000" />
    </int-jms:inbound-channel-adapter>

This code snippet contains the following components:

- int-jms:inbound-channel-adapter: This is the namespace support for the JMS inbound adapter.
- connection-factory: This is the encapsulation of the underlying JMS provider setup, such as ActiveMQ.
- destination: This is the JMS queue on which the adapter is listening for incoming messages.
- channel: This is the channel on which incoming messages should be put.

There is a poller element, so it's obviously a polling-based adapter. It can be configured in one of two ways: by providing a JMS template, or by using a connection factory along with a destination. I have used the latter approach. The preceding adapter polls the queue mentioned in the destination and, once it gets any message, puts it on the channel configured in the channel attribute.

The event-driven adapter

Similar to polling adapters, event-driven adapters also need either a reference to an implementation of AbstractMessageListenerContainer or a connection factory and destination. Again, I will use the latter approach. Here is a sample configuration:

    <int-jms:message-driven-channel-adapter
      connection-factory="connectionFactory"
      destination="feedInputQueue"
      channel="jmsProcessedChannel"/>

There is no poller sub-element here. As soon as a message arrives at its destination, the adapter is invoked, and it puts the message on the configured channel.

Sending messages – the outbound adapter

Outbound adapters convert messages on the channel to JMS messages and put them on the configured queue. To convert Spring Integration messages to JMS messages, the outbound adapter uses JmsSendingMessageHandler, an implementation of MessageHandler. Outbound adapters should be configured with either a JmsTemplate or a connection factory and destination queue. In keeping with the preceding examples, we will take the latter approach, as follows:

    <int-jms:outbound-channel-adapter
      connection-factory="connectionFactory"
      channel="jmsChannel"
      destination="feedInputQueue"/>

This adapter receives the Spring Integration message from jmsChannel, converts it to a JMS message, and puts it on the destination.

Gateway

A gateway provides request/reply behavior instead of a one-way send or receive. For example, after sending a message we might expect a reply, or we may want to send an acknowledgement after receiving a message.

The inbound gateway

Inbound gateways provide an alternative to inbound adapters when request-reply capabilities are expected. An inbound gateway is an event-based implementation that listens for a message on the queue, converts it to a Spring message, and puts it on the channel. Here is a sample code:

    <int-jms:inbound-gateway
      request-destination="feedInputQueue"
      request-channel="jmsProcessedChannel"/>

However, this is what an inbound adapter does; even the configuration is similar, except for the namespace. So what is the difference? The difference lies in replying to the reply destination. Once the message is put on the channel, it will be propagated down the line, and at some stage a reply will be generated and sent back as an acknowledgement. The inbound gateway, on receiving this reply, will create a JMS message and put it back on the reply destination queue.
Then, where is the reply destination? The reply destination is decided in one of the following ways:

1. The original message has a JMSReplyTo property; if it's present, it has the highest precedence.
2. The inbound gateway looks for a configured default-reply-destination, which can be configured either as a name or as a direct reference to a channel. For specifying the channel as a direct reference, the default-reply-destination tag should be used.

An exception will be thrown by the gateway if it does not find the reply destination in either of the preceding ways.

The outbound gateway

Outbound gateways should be used in scenarios where a reply is expected for the sent messages. Let's start with an example:

    <int-jms:outbound-gateway
      request-channel="jmsChannel"
      request-destination="feedInputQueue"
      reply-channel="jmsProcessedChannel" />

The preceding configuration will send messages to the request-destination. When an acknowledgement is received, it can be fetched from the configured reply-destination. If no reply-destination has been configured, JMS TemporaryQueues will be created.

Summary

In this article, we covered out-of-the-box components provided by the Spring Integration framework, such as the aggregator. This article also showcased the simplicity and abstraction that Spring Integration provides when it comes to handling complicated integrations, be they file-based, HTTP, JMS, or any other integration mechanism.


Building a Color Picker with Hex RGB Conversion

Packt · 02 Mar 2015 · 18 min read
In this article by Vijay Joshi, author of the book Mastering jQuery UI, we are going to create a color selector, or color picker, that will allow users to change the text and background color of a page using the slider widget. We will also use the spinner widget to represent individual colors. Any change in colors using the sliders will update the spinners and vice versa. The hex values of both the text and background colors will also be displayed dynamically on the page.

Setting up the folder structure

To set up the folder structure, follow this simple procedure:

1. Create a folder named Article inside the MasteringjQueryUI folder.
2. Directly inside this folder, create an HTML file and name it index.html.
3. Copy the js and css folders inside the Article folder as well.
4. Go inside the js folder and create a JavaScript file named colorpicker.js.

With the folder setup complete, let's start to build the project.

Writing markup for the page

The index.html page will consist of two sections. The first section will be a text block with some text inside it, and the second section will have our color picker controls. We will create separate controls for the text color and the background color. Inside the index.html file, write the following HTML code to build the page skeleton:

    <html>
    <head>
      <link rel="stylesheet" href="css/ui-lightness/jquery-ui-1.10.4.custom.min.css">
    </head>
    <body>
      <div class="container">
        <div class="ui-state-highlight" id="textBlock">
          <p>
            Lorem ipsum dolor sit amet, consectetur adipisicing elit, sed do eiusmod tempor incididunt ut labore et dolore magna aliqua. Ut enim ad minim veniam, quis nostrud exercitation ullamco laboris nisi ut aliquip ex ea commodo consequat. Duis aute irure dolor in reprehenderit in voluptate velit esse cillum dolore eu fugiat nulla pariatur. Excepteur sint occaecat cupidatat non proident, sunt in culpa qui officia deserunt mollit anim id est laborum.
          </p>
          <p>
            Lorem ipsum dolor sit amet, consectetur adipisicing elit, sed do eiusmod tempor incididunt ut labore et dolore magna aliqua. Ut enim ad minim veniam, quis nostrud exercitation ullamco laboris nisi ut aliquip ex ea commodo consequat. Duis aute irure dolor in reprehenderit in voluptate velit esse cillum dolore eu fugiat nulla pariatur. Excepteur sint occaecat cupidatat non proident, sunt in culpa qui officia deserunt mollit anim id est laborum.
          </p>
          <p>
            Lorem ipsum dolor sit amet, consectetur adipisicing elit, sed do eiusmod tempor incididunt ut labore et dolore magna aliqua. Ut enim ad minim veniam, quis nostrud exercitation ullamco laboris nisi ut aliquip ex ea commodo consequat. Duis aute irure dolor in reprehenderit in voluptate velit esse cillum dolore eu fugiat nulla pariatur. Excepteur sint occaecat cupidatat non proident, sunt in culpa qui officia deserunt mollit anim id est laborum.
          </p>
        </div>
        <div class="clear">&nbsp;</div>
        <ul class="controlsContainer">
          <li class="left">
            <div id="txtRed" class="red slider" data-spinner="sptxtRed" data-type="text"></div>
            <input type="text" value="0" id="sptxtRed" data-slider="txtRed" readonly="readonly" />
            <div id="txtGreen" class="green slider" data-spinner="sptxtGreen" data-type="text"></div>
            <input type="text" value="0" id="sptxtGreen" data-slider="txtGreen" readonly="readonly" />
            <div id="txtBlue" class="blue slider" data-spinner="sptxtBlue" data-type="text"></div>
            <input type="text" value="0" id="sptxtBlue" data-slider="txtBlue" readonly="readonly" />
            <div class="clear">&nbsp;</div>
            Text Color : <span>#000000</span>
          </li>
          <li class="right">
            <div id="bgRed" class="red slider" data-spinner="spBgRed" data-type="bg"></div>
            <input type="text" value="255" id="spBgRed" data-slider="bgRed" readonly="readonly" />
            <div id="bgGreen" class="green slider" data-spinner="spBgGreen" data-type="bg"></div>
            <input type="text" value="255" id="spBgGreen" data-slider="bgGreen" readonly="readonly" />
            <div id="bgBlue" class="blue slider" data-spinner="spBgBlue" data-type="bg"></div>
            <input type="text" value="255" id="spBgBlue" data-slider="bgBlue" readonly="readonly" />
            <div class="clear">&nbsp;</div>
            Background Color : <span>#ffffff</span>
          </li>
        </ul>
      </div>
      <script src="js/jquery-1.10.2.js"></script>
      <script src="js/jquery-ui-1.10.4.custom.min.js"></script>
      <script src="js/colorpicker.js"></script>
    </body>
    </html>

We started by including the jQuery UI CSS file inside the head section. Proceeding to the body section, we created a div with the container class, which will act as the parent div for all the page elements. Inside this div, we created another div with the id value textBlock and the ui-state-highlight class, and put some text content inside it. For this example, we have made three paragraph elements, each having some random text inside it.

After div#textBlock, there is an unordered list with the controlsContainer class. This ul element has two list items inside it. The first list item has the CSS class left applied to it, and the second has the CSS class right. Inside li.left, we created three div elements. Each of these three divs will be converted to a jQuery UI slider and will represent the red (R), green (G), and blue (B) color value, respectively. Next to each of these divs is an input element where the current color value will be displayed. Each input will be converted to a spinner as well.

Let's look at the first slider div and the input element next to it. The div has the id txtRed and two CSS classes, red and slider, applied to it. The red class will be used to style the slider, and the slider class will be used in our colorpicker.js file. Note that this div also has two data attributes attached to it: the first is data-spinner, whose value is the id of the input element next to the slider div, sptxtRed; the second is data-type, whose value is text. The purpose of the data-type attribute is to let us know whether this slider will be used for changing the text color or the background color.

Moving on to the input element next to the slider, we have set its id as sptxtRed, which matches the value of the data-spinner attribute on the slider div. It has another attribute named data-slider, which contains the id of the slider it is related to; hence, its value is txtRed.

Similarly, all the slider elements have been created inside div.left, and each slider has an input next to it.
The data-type attribute will have the value text for all sliders inside div.left, and all input elements have been assigned a value of 0, as the initial text color will be black. The same pattern followed for elements inside div.left is also followed for elements inside div.right. The only difference is that the data-type value will be bg for the slider divs, and each input element is given a value of 255, as the background color is white in the beginning. In this manner, all six sliders and six input elements have been defined. Note that each element has a unique id.

Finally, there is a span element inside both div.left and div.right, where the hex color code will be displayed. We have placed #000000 as the default value inside the span for the text color and #ffffff as the default value inside the span for the background color. Lastly, we have included the jQuery source file, the jQuery UI source file, and the colorpicker.js file. With the markup ready, we can now write the properties for the CSS classes that we used here.

Styling the content

To make the page presentable and structured, we need to add CSS properties for the different elements. We will do this inside the head section. Go to the head section in the index.html file and write these CSS properties for the different elements:

    <style type="text/css">
      body{
        color:#025c7f;
        font-family:Georgia,arial,verdana;
        width:700px;
        margin:0 auto;
      }
      .container{
        margin:0 auto;
        font-size:14px;
        position:relative;
        width:700px;
        text-align:justify;
      }
      #textBlock{
        color:#000000;
        background-color: #ffffff;
      }
      .ui-state-highlight{
        padding: 10px;
        background: none;
      }
      .controlsContainer{
        border: 1px solid;
        margin: 0;
        padding: 0;
        width: 100%;
        float: left;
      }
      .controlsContainer li{
        display: inline-block;
        float: left;
        padding: 0 0 0 50px;
        width: 299px;
      }
      .controlsContainer div.ui-slider{
        margin: 15px 0 0;
        width: 200px;
        float:left;
      }
      .left{
        border-right: 1px solid;
      }
      .clear{
        clear: both;
      }
      .red .ui-slider-range{ background: #ff0000; }
      .green .ui-slider-range{ background: #00ff00; }
      .blue .ui-slider-range{ background: #0000ff; }
      .ui-spinner{
        height: 20px;
        line-height: 1px;
        margin: 11px 0 0 15px;
      }
      input[type=text]{
        margin-top: 0;
        width: 30px;
      }
    </style>

First, we defined some general rules for the page body and div.container, and then the initial text and background colors for the div with id textBlock. Next, we defined the CSS properties for the unordered list ul.controlsContainer and its list items, giving some padding and a width to each list item. We also specified the width and other properties for the sliders. Since the class ui-slider is added by jQuery UI to a slider element after it is initialized, we added our properties in the .controlsContainer div.ui-slider rule. To make the sliders attractive, we then defined the background colors for each of the slider bars via the red, green, and blue classes. Lastly, CSS rules have been defined for the spinner and the input box.

We can now check our progress by opening the index.html page in a browser. It is obvious that the sliders and spinners will not be displayed yet.
This is because we have not written the JavaScript code required to initialize those widgets. Our next section will take care of them.

Implementing the color picker

In order to implement the required functionality, we first need to initialize the sliders and spinners. Whenever a slider is changed, we need to update its corresponding spinner as well, and conversely, if someone changes the value of a spinner, we need to update the slider to the correct value. If any value changes, we then recalculate the current color and update the text or background color, depending on the context.

Defining the object structure

We will organize our code using an object literal. We will define an init method, which will be the entry point, and all event handlers will also be applied inside this method. To begin, go to the js folder and open the colorpicker.js file for editing. In this file, write the code that defines the object structure and a call to it:

    var colorPicker = {
      init : function ()
      {
      },
      setColor : function(slider, value)
      {
      },
      getHexColor : function(sliderType)
      {
      },
      convertToHex : function (val)
      {
      }
    }

    $(function() {
      colorPicker.init();
    });

An object named colorPicker has been defined with four methods. Let's see what these methods will do:

- init: This method is the entry point, where we will initialize all components and add any event handlers that are required.
- setColor: This is the main method, which takes care of updating the text and background colors. It also updates the value of the spinner whenever the slider moves. It has two parameters: the slider that was moved, and its current value.
- getHexColor: This method is called from within setColor, and it returns the hex code based on the RGB values in the spinners. It takes a sliderType parameter, based on which we decide which color has to be changed, that is, the text color or the background color. The actual hex code is calculated by the next method.
- convertToHex: This method converts an RGB color value into its corresponding hex value and returns it to getHexColor.

This was an overview of the methods we are going to use; we will now implement them one by one, and you will understand them in detail. After the object definition comes jQuery's $(document).ready() event handler, which calls the init method of our object.

The init method

In the init method, we will initialize the sliders and the spinners and set their default values as well. Write the following code for the init method in the colorpicker.js file:

    init : function () {
      var t = this;
      $( ".slider" ).slider({
        range: "min",
        max: 255,
        slide : function (event, ui)
        {
          t.setColor($(this), ui.value);
        },
        change : function (event, ui)
        {
          t.setColor($(this), ui.value);
        }
      });

      $('input').spinner({
        min : 0,
        max : 255,
        spin : function (event, ui)
        {
          var sliderRef = $(this).data('slider');
          $('#' + sliderRef).slider("value", ui.value);
        }
      });

      $( "#txtRed, #txtGreen, #txtBlue" ).slider('value', 0);
      $( "#bgRed, #bgGreen, #bgBlue" ).slider('value', 255);
    }

In the first line, we stored the current scope value, this, in a local variable named t. Next, we initialize the sliders. Since we have used the CSS class slider on each slider element, we can simply use the .slider selector to select all of them.
During initialization, we provide four options for the sliders: range, max, slide, and change. Note the value for max, which has been set to 255; since the value for R, G, or B can only be between 0 and 255, we set max to 255. We do not need to specify min, as it is 0 by default.

The slide callback is invoked every time the slider handle moves; it calls the setColor method with an instance of the current slider and the slider's current value. The setColor method will be explained in the next section. Besides slide, the change callback is also defined, and it too calls setColor with an instance of the current slider and its value. We use both the slide and change callbacks because change is called once the user has stopped sliding the slider handle and the slider value has changed, whereas slide is called each time the user drags the slider handle. Since we want to change colors while sliding as well, we define both.

It is time to initialize the spinners now. The spinner widget is initialized with three options: min and max, which are set to 0 and 255 respectively, and spin. Every time the up/down button on a spinner is clicked or the up/down arrow key is used, the spin method is called. Inside this method, $(this) refers to the current spinner. We find the slider related to this spinner by reading the spinner's data-slider attribute. Once we have the exact slider, we set its value using the value method of the slider widget. Note that calling the value method also invokes the change callback of the slider; this is the primary reason we defined a callback for the change event while initializing the sliders.

Lastly, we set the default values for the sliders: for sliders inside div.left the value is set to 0, and for sliders inside div.right the value is set to 255. You can now check the page in your browser. You will find that the slider and spinner elements are initialized with the values we specified. You will also see that changing a spinner value using either the mouse or the keyboard updates the value of the related slider as well. However, changing a slider will not update its spinner yet. We will handle this in the next section, where we will change the colors as well.

Changing colors and updating the spinner

The setColor method is called each time a slider or spinner value changes. We will now define this method to change the color based on whether the slider's or spinner's value changed. Go to the setColor method declaration and write the following code:

    setColor : function(slider, value) {
      var t = this;
      var spinnerRef = slider.data('spinner');
      $('#' + spinnerRef).spinner("value", value);

      var sliderType = slider.data('type')
      var hexColor = t.getHexColor(sliderType);
      if(sliderType == 'text')
      {
        $('#textBlock').css({'color' : hexColor});
        $('.left span:last').text(hexColor);
      }
      else
      {
        $('#textBlock').css({'background-color' : hexColor});
        $('.right span:last').text(hexColor);
      }
    }

In the preceding code, we receive the current slider and its value as parameters. First we get the spinner related to this slider using the data attribute spinner, and we set the value of that spinner to the current value of the slider.
Now we find out the type of slider for which setColor is being called and store it in the sliderType variable. The value for sliderType will either be text, in case of sliders inside div.left, or bg, in case of sliders inside div.right. In the next line, we will call the getHexColor method and pass the sliderType variable as its argument. The getHexColor method will return the hex color code for the selected color. Next, based on the sliderType value, we set the color of div#textBlock. If the sliderType is text, we set the color CSS property of div#textBlock and display the selected hex code in the span inside div.left. If the sliderType value is bg, we set the background color for div#textBlock and display the hex code for the background color in the span inside div.right. The getHexColor method In the preceding section, we called the getHexColor method with the sliderType argument. Let's define it first, and then we will go through it in detail. Write the following code to define the getHexColor method: getHexColor : function(sliderType) {   var t = this;   var allInputs;   var hexCode = '#';   if(sliderType == 'text')   {     //text color     allInputs = $('.left').find('input[type=text]');   }   else   {     //background color     allInputs = $('.right').find('input[type=text]');   }   allInputs.each(function (index, element) {     hexCode+= t.convertToHex($(element).val());   });     return hexCode; } The local variable t has stored this to point to the current scope. Another variable allInputs is declared, and lastly a variable to store the hex code has been declared, whose value has been set to # initially. Next comes the if condition, which checks the value of parameter sliderType. If the value of sliderType is text, it means we need to get all the spinner values to change the text color. Hence, we use jQuery's find selector to retrieve all input boxes inside div.left. If the value of sliderType is bg, it means we need to change the background color. Therefore, the else block will be executed and all input boxes inside div.right will be retrieved. To convert the color to hex, individual values for red, green, and blue will have to be converted to hex and then concatenated to get the full color code. Therefore, we iterate in inputs using the .each method. Another method convertToHex is called, which converts the value of a single input to hex. Inside the each method, we keep concatenating the hex value of the R, G, and B components to a variable hexCode. Once all iterations are done, we return the hexCode to the parent function where it is used. Converting to hex convertToHex is a small method that accepts a value and converts it to the hex equivalent. Here is the definition of the convertToHex method: convertToHex : function (val) {   var x  = parseInt(val, 10).toString(16);   return x.length == 1 ? "0" + x : x; } Inside the method, firstly we will convert the received value to an integer using the parseInt method and then we'll use JavaScript's toString method to convert it to hex, which has base 16. In the next line, we will check the length of the converted hex value. Since we want the 6-character dash notation for color (such as #ff00ff), we need two characters each for red, green, and blue. Hence, we check the length of the created hex value. If it is only one character, we append a 0 to the beginning to make it two characters. The hex value is then returned to the parent function. With this, our implementation is complete and we can check it on a browser. 
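As a quick sanity check of the conversion logic (the values are purely illustrative), the helpers behave like this:

// Sketch: expected output of the conversion helpers
console.log(colorPicker.convertToHex('255')); // "ff"
console.log(colorPicker.convertToHex('7'));   // "07" (padded to two characters)
// With all three text sliders at 0, getHexColor('text') returns "#000000"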
Load the page in your browser and play with the sliders and spinners. You will see the text or background color changing, based on their value: You will also see the hex code displayed below the sliders. Also note that changing the sliders will change the value of the corresponding spinner and vice versa. Improving the Colorpicker This was a very basic tool that we built. You can add many more features to it and enhance its functionality. Here are some ideas to get you started: Convert it into a widget where all the required DOM for sliders and spinners is created dynamically Instead of two sliders, incorporate the text and background changing ability into a single slider with two handles, but keep two spinners as usual Summary In this article, we created a basic color picker/changer using sliders and spinners. You can use it to view and change the colors of your pages dynamically. Resources for Article: Further resources on this subject: Testing Ui Using WebdriverJs? [article] Important Aspect Angularjs Ui Development [article] Kendo Ui Dataviz Advance Charting [article]
AngularJS Performance
Packt · 04 Mar 2015 · 20 min read
In this article by Chandermani, the author of AngularJS by Example, we focus our discussion on the performance aspect of AngularJS. For most scenarios, we can all agree that AngularJS is insanely fast. For standard size views, we rarely see any performance bottlenecks. But many views start small and then grow over time. And sometimes the requirement dictates we build large pages/views with a sizable amount of HTML and data. In such a case, there are things that we need to keep in mind to provide an optimal user experience. Take any framework, and the performance discussion on the framework always requires one to understand the internal working of the framework. When it comes to Angular, we need to understand how Angular detects model changes. What are watches? What is a digest cycle? What roles do scope objects play? Without a conceptual understanding of these subjects, any performance guidance is merely a checklist that we follow without understanding the why part. Let's look at some pointers before we begin our discussion on the performance of AngularJS:

The live binding between the view elements and model data is set up using watches. When a model changes, one or many watches linked to the model are triggered. Angular's view binding infrastructure uses these watches to synchronize the view with the updated model value.
Model change detection only happens when a digest cycle is triggered. Angular does not track model changes in real time; instead, on every digest cycle, it runs through every watch to compare the previous and new values of the model to detect changes.
A digest cycle is triggered when $scope.$apply is invoked. A number of directives and services internally invoke $scope.$apply: directives such as ng-click and ng-mouse* do it on user action; services such as $http and $resource do it when a response is received from the server; $timeout and $interval call $scope.$apply when they lapse.
A digest cycle tracks the old value of the watched expression and compares it with the new value to detect if the model has changed. Simply put, the digest cycle is a workflow used to detect model changes.
A digest cycle runs multiple times till the model data is stable and no watch is triggered.

Once you have a clear understanding of the digest cycle, watches, and scopes, we can look at some performance guidelines that can help us manage views as they start to grow. (For more resources related to this topic, see here.)

Performance guidelines

When building any Angular app, any performance optimization boils down to:

Minimizing the number of binding expressions and hence watches
Making sure that binding expression evaluation is quick
Optimizing the number of digest cycles that take place

The next few sections provide some useful pointers in this direction. Remember, a lot of these optimizations may only be necessary if the view is large.

Keeping the page/view small

The sanest advice is to keep the amount of content available on a page small. The user cannot interact with or process too much data on the page, so remember that screen real estate is at a premium; only keep necessary details on a page. The less the content, the smaller the number of binding expressions; hence, fewer watches and less processing are required during the digest cycle. Remember, each watch adds to the overall execution time of the digest cycle. The time required for a single watch can be insignificant but, after combining hundreds and maybe thousands of them, they start to matter.
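To make these mechanics concrete, here is a minimal sketch (the module, controller, and property names are illustrative, not from the book's examples) showing a watch being registered and a digest being triggered indirectly through $timeout:

// Sketch: one watch, and a digest triggered by Angular's $timeout wrapper
angular.module('app').controller('DemoCtrl', function ($scope, $timeout) {
  $scope.counter = 0;

  // This adds one more watch to the list evaluated on every digest cycle
  $scope.$watch('counter', function (newVal, oldVal) {
    console.log('counter: ' + oldVal + ' -> ' + newVal);
  });

  // $timeout invokes $scope.$apply when it lapses, so the watch
  // above fires without any manual digest bookkeeping
  $timeout(function () { $scope.counter++; }, 1000);
});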
Angular's data binding infrastructure is insanely fast and relies on a rudimentary dirty check that compares the old and the new values. Check out the Stack Overflow (SO) post (http://stackoverflow.com/questions/9682092/databinding-in-angularjs), where Misko Hevery (creator of Angular) talks about how data binding works in Angular. Data binding also adds to the memory footprint of the application. Each watch has to track the current and previous value of a data-binding expression to compare and verify if data has changed. Keeping a page/view small may not always be possible, and the view may grow. In such a case, we need to make sure that the number of bindings does not grow exponentially (linear growth is OK) with the page size. The next two tips can help minimize the number of bindings in the page and should be seriously considered for large views.

Optimizing watches for read-once data

In any Angular view, there is always content that, once bound, does not change. Any read-only data on the view can fall into this category. This implies that once the data is bound to the view, we no longer need watches to track model changes, as we don't expect the model to update. Is it possible to remove the watch after one-time binding? Angular itself does not have something inbuilt, but a community project, bindonce (https://github.com/Pasvaz/bindonce), is there to fill this gap. Angular 1.3 has added support for bind and forget in the native framework. Using the syntax {{::title}}, we can achieve one-time binding. If you are on Angular 1.3, use it!

Hiding (ng-show) versus conditional rendering (ng-if/ng-switch) content

You have learned two ways to conditionally render content in Angular. The ng-show/ng-hide directive shows/hides the DOM element based on the expression provided, and ng-if/ng-switch creates and destroys the DOM based on an expression. For some scenarios, ng-if can be really beneficial as it can reduce the number of binding expressions/watches for the DOM content not rendered. Consider the following example:

<div ng-if="user.isAdmin">
  <div ng-include="'admin-panel.html'"></div>
</div>

The snippet renders an admin panel if the user is an admin. With ng-if, if the user is not an admin, the ng-include directive template is neither requested nor rendered, saving us all the bindings and watches that are part of the admin-panel.html view. From the preceding discussion, it may seem that we should get rid of all ng-show/ng-hide directives and use ng-if. Well, not really! It again depends; for small pages, ng-show/ng-hide works just fine. Also, remember that there is a cost to creating and destroying the DOM. If the expression to show/hide flips too often, this will mean too many DOM create-and-destroy cycles, which are detrimental to the overall performance of the app.

Expressions being watched should not be slow

Since watches are evaluated so often, the expression being watched should return results fast. The first way we can make sure of this is by using properties instead of functions to bind expressions. These expressions are as follows:

{{user.name}}
ng-show='user.Authorized'

The preceding code is always better than this:

{{getUserName()}}
ng-show='isUserAuthorized(user)'

Try to minimize function expressions in bindings. If a function expression is required, make sure that the function returns a result quickly.
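For instance, here is a sketch (the property and model names are illustrative) of precomputing a bound value once in the controller instead of binding a function expression:

// Sketch: bind {{userName}} instead of a {{getUserName()}} function expression
$scope.userName = user.firstName + ' ' + user.lastName; // computed once, watched cheaply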
Make sure a function being watched does not:

Make any remote calls
Use $timeout/$interval
Perform sorting/filtering
Perform DOM manipulation (this can happen inside a directive implementation)
Or perform any other time-consuming operation

Be sure to avoid such operations inside a bound function. To reiterate, Angular will evaluate a watched expression multiple times during every digest cycle just to know if the return value (a model) has changed and the view needs to be synchronized.

Minimizing the deep model watch

When using $scope.$watch to watch for model changes in controllers, be careful while setting the third $watch function parameter to true. The general syntax of watch looks like this:

$watch(watchExpression, listener, [objectEquality]);

In the standard scenario, Angular does an object comparison based on the reference only. But if objectEquality is true, Angular does a deep comparison between the last value and the new value of the watched expression. This can have an adverse memory and performance impact if the object is large.

Handling large datasets with ng-repeat

The ng-repeat directive is undoubtedly the most useful directive Angular has. But it can cause the most performance-related headaches. The reason is not the directive's design, but the fact that it is the only directive that allows us to generate HTML on the fly. There is always the possibility of generating enormous HTML just by binding ng-repeat to a big model list. Some tips that can help us when working with ng-repeat are:

Page the data and use limitTo: Implement a server-side paging mechanism when the number of items returned is large. Also use the limitTo filter to limit the number of items rendered. Its syntax is as follows:

<tr ng-repeat="user in users | limitTo:pageSize">…</tr>

Look at modules such as ngInfiniteScroll (http://binarymuse.github.io/ngInfiniteScroll/) that provide an alternate mechanism to render large lists.

Use the track by expression: For performance, the ng-repeat directive tries to make sure it does not unnecessarily create or delete HTML nodes when items are added, updated, deleted, or moved in the list. To achieve this, it adds a $$hashKey property to every model item, allowing it to associate the DOM node with the model item. We can override this behavior and provide our own item key using the track by expression, such as:

<tr ng-repeat="user in users track by user.id">…</tr>

This allows us to use our own mechanism to identify an item. Using your own track by expression has a distinct advantage over the default hash key approach. Consider an example where you make an initial AJAX call to get users:

$scope.getUsers().then(function(users){
  $scope.users = users;
});

Later, refresh the data from the server and assign something similar again:

$scope.users = users;

With user.id as a key, Angular is able to determine which elements were added, deleted, or moved, and it creates or deletes DOM nodes only for those elements. The remaining elements are not touched by ng-repeat (internal bindings are still evaluated). This saves a lot of CPU cycles for the browser as fewer DOM elements are created and destroyed.

Do not bind ng-repeat to a function expression: Using a function's return value for ng-repeat can also be problematic, depending upon how the function is implemented.
Consider a repeat with this:

<tr ng-repeat="user in getUsers()">…</tr>

And consider the controller getUsers function with this:

$scope.getUsers = function() {
  var orderBy = $filter('orderBy');
  return orderBy($scope.users, predicate);
}

Angular is going to evaluate this expression and hence call this function every time the digest cycle takes place. A lot of CPU cycles are wasted sorting user data again and again. It is better to use scope properties and presort the data before binding.

Minimize filters in views, use filter elements in the controller: Filters defined on ng-repeat are also evaluated every time the digest cycle takes place. For large lists, if the same filtering can be implemented in the controller, we can avoid constant filter evaluation. This holds true for any filter function that is used with arrays, including filter and orderBy.

Avoiding mouse-movement tracking events

The ng-mousemove, ng-mouseenter, ng-mouseleave, and ng-mouseover directives can just kill performance. If an expression is attached to any of these event directives, Angular triggers a digest cycle every time the corresponding event occurs, and for events like mouse move, this can be a lot. We have already seen this behavior when working with 7 Minute Workout, when we tried to show a pause overlay on the exercise image when the mouse hovers over it. Avoid them at all costs. If we just want to trigger some style changes on mouse events, CSS is a better tool.

Avoiding calling $scope.$apply

Angular is smart enough to call $scope.$apply at appropriate times without us explicitly calling it. This can be confirmed from the fact that the only place we have seen and used $scope.$apply is within directives. The ng-click and updateOnBlur directives use $scope.$apply to transition from a DOM event handler execution to an Angular execution context. Even when wrapping a jQuery plugin, we may need to perform a similar transition for an event raised by the jQuery plugin. Other than this, there is no reason to use $scope.$apply. Remember, every invocation of $apply results in the execution of a complete digest cycle. The $timeout and $interval services take a Boolean argument, invokeApply. If set to false, the lapsed $timeout/$interval services do not call $scope.$apply or trigger a digest cycle. Therefore, if you are going to perform background operations that do not require $scope and the view to be updated, set the last argument to false. Always use Angular wrappers over standard JavaScript objects/functions such as $timeout and $interval to avoid manually calling $scope.$apply. These wrapper functions internally call $scope.$apply. Also, understand the difference between $scope.$apply and $scope.$digest. $scope.$apply triggers $rootScope.$digest, which evaluates all application watches, whereas $scope.$digest only performs dirty checks on the current scope and its children. If we are sure that the model changes are not going to affect anything other than the child scopes, we can use $scope.$digest instead of $scope.$apply.

Lazy-loading, minification, and creating multiple SPAs

I hope you are not assuming that the apps that we have built will continue to use the numerous small script files that we have created to separate modules and module artefacts (controllers, directives, filters, and services). Any modern build system has the capability to concatenate and minify these files and replace the original file reference with a unified and minified version.
Therefore, like any JavaScript library, use minified script files for production. The problem with the Angular bootstrapping process is that it expects all Angular application scripts to be loaded before the application can bootstrap. We cannot load modules, controllers, or in fact, any of the other Angular constructs on demand. This means we need to provide every artefact required by our app, upfront. For small applications, this is not a problem as the content is concatenated and minified; also, the Angular application code itself is far more compact as compared to the traditional JavaScript of jQuery-based apps. But, as the size of the application starts to grow, it may start to hurt when we need to load everything upfront. There are at least two possible solutions to this problem; the first one is about breaking our application into multiple SPAs. Breaking applications into multiple SPAs This advice may seem counterintuitive as the whole point of SPAs is to get rid of full page loads. By creating multiple SPAs, we break the app into multiple small SPAs, each supporting parts of the overall app functionality. When we say app, it implies a combination of the main (such as index.html) page with ng-app and all the scripts/libraries and partial views that the app loads over time. For example, we can break the Personal Trainer application into a Workout Builder app and a Workout Runner app. Both have their own start up page and scripts. Common scripts such as the Angular framework scripts and any third-party libraries can be referenced in both the applications. On similar lines, common controllers, directives, services, and filters too can be referenced in both the apps. The way we have designed Personal Trainer makes it easy to achieve our objective. The segregation into what belongs where has already been done. The advantage of breaking an app into multiple SPAs is that only relevant scripts related to the app are loaded. For a small app, this may be an overkill but for large apps, it can improve the app performance. The challenge with this approach is to identify what parts of an application can be created as independent SPAs; it totally depends upon the usage pattern of the application. For example, assume an application has an admin module and an end consumer/user module. Creating two SPAs, one for admin and the other for the end customer, is a great way to keep user-specific features and admin-specific features separate. A standard user may never transition to the admin section/area, whereas an admin user can still work on both areas; but transitioning from the admin area to a user-specific area will require a full page refresh. If breaking the application into multiple SPAs is not possible, the other option is to perform the lazy loading of a module. Lazy-loading modules Lazy-loading modules or loading module on demand is a viable option for large Angular apps. But unfortunately, Angular itself does not have any in-built support for lazy-loading modules. Furthermore, the additional complexity of lazy loading may be unwarranted as Angular produces far less code as compared to other JavaScript framework implementations. Also once we gzip and minify the code, the amount of code that is transferred over the wire is minimal. 
If we still want to try our hands on lazy loading, there are two libraries that can help: ocLazyLoad (https://github.com/ocombe/ocLazyLoad): This is a library that uses script.js to load modules on the fly angularAMD (http://marcoslin.github.io/angularAMD): This is a library that uses require.js to lazy load modules With lazy loading in place, we can delay the loading of a controller, directive, filter, or service script, until the page that requires them is loaded. The overall concept of lazy loading seems to be great but I'm still not sold on this idea. Before we adopt a lazy-load solution, there are things that we need to evaluate: Loading multiple script files lazily: When scripts are concatenated and minified, we load the complete app at once. Contrast it to lazy loading where we do not concatenate but load them on demand. What we gain in terms of lazy-load module flexibility we lose in terms of performance. We now have to make a number of network requests to load individual files. Given these facts, the ideal approach is to combine lazy loading with concatenation and minification. In this approach, we identify those feature modules that can be concatenated and minified together and served on demand using lazy loading. For example, Personal Trainer scripts can be divided into three categories: The common app modules: This consists of any script that has common code used across the app and can be combined together and loaded upfront The Workout Runner module(s): Scripts that support workout execution can be concatenated and minified together but are loaded only when the Workout Runner pages are loaded. The Workout Builder module(s): On similar lines to the preceding categories, scripts that support workout building can be combined together and served only when the Workout Builder pages are loaded. As we can see, there is a decent amount of effort required to refactor the app in a manner that makes module segregation, concatenation, and lazy loading possible. The effect on unit and integration testing: We also need to evaluate the effect of lazy-loading modules in unit and integration testing. The way we test is also affected with lazy loading in place. This implies that, if lazy loading is added as an afterthought, the test setup may require tweaking to make sure existing tests still run. Given these facts, we should evaluate our options and check whether we really need lazy loading or we can manage by breaking a monolithic SPA into multiple smaller SPAs. Caching remote data wherever appropriate Caching data is the one of the oldest tricks to improve any webpage/application performance. Analyze your GET requests and determine what data can be cached. Once such data is identified, it can be cached from a number of locations. Data cached outside the app can be cached in: Servers: The server can cache repeated GET requests to resources that do not change very often. This whole process is transparent to the client and the implementation depends on the server stack used. Browsers: In this case, the browser caches the response. Browser caching depends upon the server sending HTTP cache headers such as ETag and cache-control to guide the browser about how long a particular resource can be cached. Browsers can honor these cache headers and cache data appropriately for future use. 
If server and browser caching is not available, or if we also want to incorporate some amount of caching in the client app, we do have some choices:

Cache data in memory: A simple Angular service can cache the HTTP response in memory. Since Angular is SPA, the data is not lost unless the page refreshes. This is how a service function looks when it caches data:

var workouts;
service.getWorkouts = function () {
  if (workouts) return $q.resolve(workouts);
  return $http.get("/workouts").then(function (response) {
    workouts = response.data;
    return workouts;
  });
};

The implementation caches the list of workouts in the workouts variable for future use. The first request makes an HTTP call to retrieve the data, but subsequent requests just return the cached data as a promise. The usage of $q.resolve makes sure that the function always returns a promise.

Angular $http cache: Angular's $http service comes with a configuration option, cache. When set to true, $http caches the response of the particular GET request into a local cache (again, an in-memory cache). Here is how we cache a GET request:

$http.get(url, { cache: true });

Angular keeps this cache for the lifetime of the app, and clearing it is not easy. We need to get hold of the cache dedicated to caching HTTP responses and clear the cache key manually.

The caching strategy of an application is never complete without a cache invalidation strategy. With a cache, there is always a possibility that it gets out of sync with the actual data store. We cannot affect the server-side caching behavior from the client; consequently, let's focus on how to perform cache invalidation (clearing) for the two client-side caching mechanisms described earlier. If we use the first approach to cache data, we are responsible for clearing the cache ourselves. In the case of the second approach, the default $http service does not support clearing the cache. We either need to get hold of the underlying $http cache store and clear the cache key manually (as shown here) or implement our own cache that manages cache data and invalidates it based on some criteria:

var cache = $cacheFactory.get('$http');
cache.remove("http://myserver/workouts"); // full url

Using Batarang to measure performance

Batarang (a Chrome extension), as we have already seen, is an extremely handy tool for Angular applications. Using Batarang to visualize app usage is like looking at an X-ray of the app. It allows us to:

View the scope data, scope hierarchy, and how the scopes are linked to HTML elements
Evaluate the performance of the application
Check the application dependency graph, helping us understand how components are linked to each other and to other framework components

If we enable Batarang and then play around with our application, Batarang captures performance metrics for all watched expressions in the app. This data is nicely presented as a graph available on the Performance tab inside Batarang. That is pretty sweet! When building an app, use Batarang to gauge the most expensive watches and take corrective measures, if required. Play around with Batarang and see what other features it has. This is a very handy tool for Angular applications. This brings us to the end of the performance guidelines that we wanted to share in this article.
Summary

In this article, we looked at the ever-so-important topic of performance, where you learned ways to optimize an Angular app's performance.

Resources for Article:

Further resources on this subject:
Role of AngularJS [article]
The First Step [article]
Recursive directives [article]
Getting Started with Ext GWT
Packt · 28 Dec 2010 · 8 min read
Ext GWT 2.0: Beginner's Guide

Take the user experience of your website to a new level with Ext GWT:

Explore the full range of features of the Ext GWT library through practical, step-by-step examples
Discover how to combine simple building blocks into powerful components
Create powerful Rich Internet Applications with features normally only found in desktop applications
Learn how to structure applications using MVC for maximum reliability and maintainability

What is GWT missing?

GWT is a toolkit as opposed to a full development framework, and for most projects, it forms part of a solution rather than the whole solution. Out of the box, GWT comes with only a basic set of widgets and lacks a framework to enable developers to structure larger applications. Fortunately, GWT is both open and extensible, and as a result, a range of complementary projects have grown up around it. Ext GWT is one of those projects.

What does Ext GWT offer?

Ext GWT sets out to build upon the strengths of GWT by enabling developers to give their users an experience more akin to that of a desktop application. Ext GWT provides the GWT developer with a comprehensive component library similar to that used when developing for desktop environments. In addition to being a component library, it provides powerful features for working with local and remote data. It also features a model view controller framework, which can be used to structure larger applications.

How is Ext GWT licensed?

Licensing is always an important consideration when choosing technology to use in a project. At the time of writing, Ext GWT is offered with a dual license. The first license is an open source license compatible with the GNU GPL license v3. If you wish to use this license, you do not have to pay a fee for using Ext GWT, but in return you have to make your source code available under an open source license. This means you have to contribute all the source code of your project to the open source community and give everyone the right to modify or redistribute it. If you cannot meet the obligations of the open source license, for example, you are producing a commercial product or simply do not want to share your source code, you have to purchase a commercial license for Ext GWT. It is a good idea to check the current licensing requirements on the Sencha website, http://www.sencha.com, and take that into account when planning your project.

Alternatives to Ext GWT

Ext GWT is one of the many products produced by the company Sencha. Sencha was previously named Ext JS and started off developing a JavaScript library by the same name. Ext GWT is closely related to the Ext JS product in terms of functionality. Both Ext GWT and Ext JS share the same look and feel as well as a similar API structure. However, Ext GWT is a native GWT implementation, written almost entirely in Java, rather than a wrapper around the JavaScript-based Ext JS.

GWT-Ext

Before Ext GWT, there was GWT-Ext: http://code.google.com/p/gwt-ext/. This library was developed by Sanjiv Jeevan as a GWT wrapper around an earlier, 2.0.2 version of Ext JS. Being based on Ext JS, it has a very similar look and feel to Ext GWT. However, after the license of Ext JS changed from LGPL to GPL in 2008, active development came to an end. Apart from no longer being developed or supported, developing with GWT-Ext is more difficult than with Ext GWT. This is because the library is a wrapper around JavaScript and the Java debugger cannot help when there is a problem in the JavaScript code.
Manual debugging is required.

Smart GWT

When development of GWT-Ext came to an end, Sanjiv Jeevan started a new project named Smart GWT: http://www.smartclient.com/smartgwt/. This is an LGPL framework that wraps the Smart Client JavaScript library in a similar way that GWT-Ext wraps Ext JS. Smart GWT has the advantage that it is still being actively developed. Being LGPL-licensed, it can also be used commercially without the need to pay the license fee that is required for Ext GWT. Smart GWT still has the debugging problems of GWT-Ext, and its components are often regarded as not being as visually pleasing as Ext GWT's. This could be down to personal taste, of course.

Vaadin

Vaadin, http://vaadin.com, is a third alternative to Ext GWT. Vaadin is a server-side framework that uses a set of precompiled GWT components. Although you can write your own components if required, Vaadin is really designed so that you can build applications by combining the ready-made components. In Vaadin, the browser client is just a dumb view of the server components, and any user interaction is sent to the server for processing, much like traditional Java web frameworks. This can be slow depending on the speed of the connection between the client and the server. The main disadvantage of Vaadin is the dependency on the server. GWT or Ext GWT's JavaScript can run in a browser without needing to communicate with a server. This is not possible in Vaadin.

Ext GWT or GXT?

To avoid confusion with GWT-Ext and to make it easier to write, Ext GWT is commonly abbreviated to GXT. We will use GXT synonymously with Ext GWT throughout the rest of this article.

Working with GXT: A different type of web development

If you are a web developer coming to GXT or GWT for the first time, it is very important to realize that working with this toolset is not like traditional web development. In traditional web development, most of the work is done on the server, and the part the browser plays is little more than a view, making requests and receiving responses. When using GWT, especially GXT, at times it is easier if you suspend your web development thinking and think more like a desktop rich-client developer. Java Swing developers, for example, may find themselves at home.

How GXT fits into GWT

GXT is simply a library that plugs into any GWT project. If we have an existing GWT project set up, all we need to do to use it is:

Download the GXT SDK from the Sencha website
Add the library to the project and reference it in the GWT configuration
Copy a set of resource files to the project

If you haven't got a GWT project set up, don't worry. We will now work through getting GXT running from the beginning.

Downloading what you need

Before we can start working with GXT, we first need to download the toolkit and set up our development environment. Here is the list of what you need to download for running the examples:

Sun JDK 6, the Java development kit: http://java.sun.com/javase/downloads/widget/jdk6.jsp
Eclipse IDE for Java EE Developers 3.6, the Eclipse IDE for Java developers, which also includes some useful web development tools: http://www.eclipse.org/downloads/
Ext GWT 2.2.0 SDK for GWT 2.0, the GXT SDK itself: http://www.sencha.com/products/gwt/download.php

Google supplies a useful plugin that integrates GWT into Eclipse. However, there is no reason that you cannot use an alternative development environment, if you prefer.
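For the "reference it in the GWT configuration" step listed above, the project's module descriptor (a *.gwt.xml file) typically gains an inherits entry. The following is only a sketch; the module name and entry-point class are assumptions to be checked against the SDK version you download:

<!-- Sketch: a GWT module descriptor referencing GXT (names are assumptions) -->
<module>
  <inherits name="com.google.gwt.user.User" />
  <inherits name="com.extjs.gxt.ui.GXT" />
  <entry-point class="com.example.client.MyAppEntryPoint" />
</module>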
Eclipse setup

There are different versions of Eclipse, and although Eclipse for Java EE developers is not strictly required, it contains some useful tools for editing web-specific files such as CSS. These tools will be useful for GXT development, so it is strongly recommended. We will not cover the details of installing Eclipse here, as this is covered more than adequately on the Eclipse website. For that reason, we make the assumption that you already have a fresh installation of Eclipse ready to go.

GWT setup

You may have noticed that GWT is not included in the list of downloads. This is because since version 2.0.0, GWT has been available within an Eclipse plugin, which we will now set up.

Time for action – setting up GWT

In Eclipse, select Help | Install New Software. The installation dialog will appear.
Click on the Add button to add a new site.
Enter the name and location in the respective fields, as shown in the following screenshot, and click on the OK button.
Select Google Plugin for Eclipse from the plugin section and Google Web Toolkit SDK from the SDKs section. Click on Next.
The following dialog will appear. Click on Next to proceed.
Click on the radio button to accept the license. Click on Finish.
Eclipse will now download the Google Web Toolkit and configure the plugin. Restart when prompted.
On restarting, if GWT and the Google Eclipse Plugin are installed successfully, you will notice the following three new icons in your toolbar.

What just happened?

You have now set up GWT in your Eclipse IDE. You are now ready to create GWT applications. However, before we can create GXT applications, there is a bit more work to do.
Handling Long-running Requests in Play
Packt · 22 Sep 2014 · 18 min read
In this article by Julien Richard-Foy, author of Play Framework Essentials, we will dive into the framework internals and explain how to leverage its reactive programming model to manipulate data streams. (For more resources related to this topic, see here.) Firstly, I would like to mention that the code called by controllers must be thread-safe. We also noticed that the result of calling an action has the type Future[Result] rather than just Result. This article explains these subtleties and gives answers to questions such as "How are concurrent requests processed by Play applications?" More precisely, this article presents the challenges of stream processing and the way the Play framework solves them. You will learn how to consume, produce, and transform data streams in a non-blocking way using the Iteratee library. Then, you will leverage these skills to stream results and push real-time notifications to your clients. By the end of the article, you will be able to do the following:

Produce, consume, and transform streams of data
Process a large request body chunk by chunk
Serve HTTP chunked responses
Push real-time notifications using WebSockets or server-sent events
Manage the execution context of your code

Play application's execution model

The streaming programming model provided by Play has been influenced by the execution model of Play applications, which itself has been influenced by the nature of the work a web application performs. So, let's start from the beginning: what does a web application do? For now, our example application does the following: the HTTP layer invokes some business logic via the service layer, and the service layer does some computations by itself and also calls the database layer. It is worth noting that in our configuration, the database system runs on the same machine as the web application, but this is not a requirement. In fact, in real-world projects there is a good chance that your database system is decoupled from your HTTP layer and that both run on different machines. It means that while a query is executed on the database, the web layer does nothing but wait for the response. Actually, the HTTP layer is often waiting for some response coming from another system; it could, for example, retrieve some data from an external web service, or the business layer itself could be located on a remote machine. Decoupling the HTTP layer from the business layer or the persistence layer gives finer control over how to scale the system (more details about that are given further in this article). Anyway, the point is that the HTTP layer may essentially spend time waiting. With that in mind, consider the following diagram showing how concurrent requests could be executed by a web application using a threaded execution model, that is, a model where each request is processed in its own thread.

Threaded execution model

Several clients (shown on the left-hand side in the preceding diagram) perform queries that are processed by the application's controller. On the right-hand side of the controller, the figure shows an execution thread corresponding to each action's execution. The filled rectangles represent the time spent performing computations within a thread (for example, for processing data or computing a result), and the lines represent the time waiting for some remote data. Each action's execution is distinguished by a particular color.
In this fictive example, the action handling the first request may execute a query to a remote database, hence the line (illustrating that the thread waits for the database result) between the two pink rectangles (illustrating that the action performs some computation before querying the database and after getting the database result). The action handling the third request may perform a call to a distant web service and then a second one, after the response of the first one has been received; hence, the two lines between the green rectangles. And the action handling the last request may perform a call to a distant web service that streams a response of an infinite size, hence, the multiple lines between the purple rectangles. The problem with this execution model is that each request requires the creation of a new thread. Threads have an overhead at creation, because they consume memory (essentially because each thread has its own stack), and during execution, when the scheduler switches contexts. However, we can see that these threads spend a lot of time just waiting. If we could use the same thread to process another request while the current action is waiting for something, we could avoid the creation of threads, and thus save resources. This is exactly what the execution model used by Play—the evented execution model—does, as depicted in the following diagram: Evented execution model Here, the computation fragments are executed on two threads only. Note that the same action can have its computation fragments run by different threads (for example, the pink action). Also note that several threads are still in use, that's why the code must be thread-safe. The time spent waiting between computing things is the same as before, and you can see that the time required to completely process a request is about the same as with the threaded model (for instance, the second pink rectangle ends at the same position as in the earlier figure, same for the third green rectangle, and so on). A comparison between the threaded and evented models can be found in the master's thesis of Benjamin Erb, Concurrent Programming for Scalable Web Architectures, 2012. An online version is available at http://berb.github.io/diploma-thesis/. An attentive reader may think that I have cheated; the rectangles in the second figure are often thinner than their equivalent in the first figure. That's because, in the first model, there is an overhead for scheduling threads and, above all, even if you have a lot of threads, your machine still has a limited number of cores effectively executing the code of your threads. More precisely, if you have more threads than your number of cores, you necessarily have threads in an idle state (that is, waiting). This means, if we suppose that the machine executing the application has only two cores, in the first figure, there is even time spent waiting in the rectangles! Scaling up your server The previous section raises the question of how to handle a higher number of concurrent requests, as depicted in the following diagram: A server under an increasing load The previous section explained how to avoid wasting resources to leverage the computing power of your server. But actually, there is no magic; if you want to compute even more things per unit of time, you need more computing power, as depicted in the following diagram: Scaling using more powerful hardware One solution could be to have a more powerful server. 
But you could be smarter than that and avoid buying expensive hardware by studying the shape of the workload and make appropriate decisions at the software-level. Indeed, there are chances that your workload varies a lot over time, with peaks and holes of activity. This information suggests that if you wanted to buy more powerful hardware, its performance characteristics would be drawn by your highest activity peak, even if it occurs very occasionally. Obviously, this solution is not optimal because you would buy expensive hardware even if you actually needed it only one percent of the time (and more powerful hardware often also means more power-consuming hardware). A better way to handle the workload elasticity consists of adding or removing server instances according to the activity level, as depicted in the following diagram: Scaling using several server instances This architecture design allows you to finely (and dynamically) tune your server capacity according to your workload. That's actually the cloud computing model. Nevertheless, this architecture has a major implication on your code; you cannot assume that subsequent requests issued by the same client will be handled by the same server instance. In practice, it means that you must treat each request independently of each other; you cannot for instance, store a counter on a server instance to count the number of requests issued by a client (your server would miss some requests if one is routed to another server instance). In a nutshell, your server has to be stateless. Fortunately, Play is stateless, so as long as you don't explicitly have a mutable state in your code, your application is stateless. Note that the first implementation I gave of the shop was not stateless; indeed the state of the application was stored in the server's memory. Embracing non-blocking APIs In the first section of this article, I claimed the superiority of the evented execution model over the threaded execution model, in the context of web servers. That being said, to be fair, the threaded model has an advantage over the evented model: it is simpler to program with. Indeed, in such a case, the framework is responsible for creating the threads and the JVM is responsible for scheduling the threads, so that you don't even have to think about this at all, yet your code is concurrently executed. On the other hand, with the evented model, concurrency control is explicit and you should care about it. Indeed, the fact that the same execution thread is used to run several concurrent actions has an important implication on your code: it should not block the thread. Indeed, while the code of an action is executed, no other action code can be concurrently executed on the same thread. What does blocking mean? It means holding a thread for too long a duration. It typically happens when you perform a heavy computation or wait for a remote response. However, we saw that these cases, especially waiting for remote responses, are very common in web servers, so how should you handle them? You have to wait in a non-blocking way or implement your heavy computations as incremental computations. In all the cases, you have to break down your code into computation fragments, where the execution is managed by the execution context. In the diagram illustrating the evented execution model, computation fragments are materialized by the rectangles. 
You can see that rectangles of different colors are interleaved; you can find rectangles of another color between two rectangles of the same color. However, by default, the code you write forms a single block of execution instead of several computation fragments. It means that, by default, your code is executed sequentially; the rectangles are not interleaved! This is depicted in the following diagram:

Evented execution model running blocking code

The previous figure still shows both the execution threads. The second one handles the blue action and then the purple infinite action, so that all the other actions can only be handled by the first execution context. This figure illustrates the fact that while the evented model can potentially be more efficient than the threaded model, it can also have negative consequences on the performances of your application: infinite actions block an execution thread forever and the sequential execution of actions can lead to much longer response times.

So, how can you break down your code into blocks that can be managed by an execution context? In Scala, you can do so by wrapping your code in a Future block:

Future {
  // This is a computation fragment
}

The Future API comes from the standard Scala library. For Java users, Play provides a convenient wrapper named play.libs.F.Promise:

Promise.promise(() -> {
  // This is a computation fragment
});

Such a block is a value of type Future[A] or, in Java, Promise<A> (where A is the type of the value computed by the block). We say that these blocks are asynchronous because they break the execution flow; you have no guarantee that the block will be sequentially executed before the following statement. When the block is effectively evaluated depends on the execution context implementation that manages it. The role of an execution context is to schedule the execution of computation fragments. In the figure showing the evented model, the execution context consists of a thread pool containing two threads (represented by the two lines under the rectangles). Actually, each time you create an asynchronous value, you have to supply the execution context that will manage its evaluation. In Scala, this is usually achieved using an implicit parameter of type ExecutionContext. You can, for instance, use an execution context provided by Play that consists, by default, of a thread pool with one thread per processor:

import play.api.libs.concurrent.Execution.Implicits.defaultContext

In Java, this execution context is automatically used by default, but you can explicitly supply another one:

Promise.promise(() -> { ... }, myExecutionContext);

Now that you know how to create asynchronous values, you need to know how to manipulate them. For instance, a sequence of several Future blocks is concurrently executed; how do we define an asynchronous computation depending on another one?
You can eventually schedule a computation after an asynchronous value has been resolved using the foreach method:

val futureX = Future { 42 }
futureX.foreach(x => println(x))

In Java, you can perform the same operation using the onRedeem method:

Promise<Integer> futureX = Promise.promise(() -> 42);
futureX.onRedeem((x) -> System.out.println(x));

More interestingly, you can eventually transform an asynchronous value using the map method:

val futureIsEven = futureX.map(x => x % 2 == 0)

The map method exists in Java too:

Promise<Boolean> futureIsEven = futureX.map((x) -> x % 2 == 0);

If the function you use to transform an asynchronous value returned an asynchronous value too, you would end up with an inconvenient Future[Future[A]] value (or a Promise<Promise<A>> value, in Java). So, use the flatMap method in that case:

val futureIsEven = futureX.flatMap(x => Future { x % 2 == 0 })

The flatMap method is also available in Java:

Promise<Boolean> futureIsEven =
  futureX.flatMap((x) -> Promise.promise(() -> x % 2 == 0));

The foreach, map, and flatMap functions (or their Java equivalents) all have in common that they set a dependency between two asynchronous values; the computation they take as a parameter is always evaluated after the asynchronous computation they are applied to. Another method that is worth mentioning is zip:

val futureXY: Future[(Int, Int)] = futureX.zip(futureY)
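These combinators compose naturally. As an aside, here is a sketch of how a Scala for-comprehension expresses the same chaining; it desugars to the flatMap and map calls shown above (futureY is assumed to be another Future[Int] value):

// Sketch: a for-comprehension desugars to flatMap/map on Future values
val futureSum: Future[Int] = for {
  x <- futureX
  y <- futureY
} yield x + y
// equivalent to futureX.flatMap(x => futureY.map(y => x + y))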
We saw that the map and flatMap methods make it possible to manipulate the future A value, but you still end up with a Future[SomethingElse] value (or a Promise<SomethingElse>, in Java). It means that if your action's code calls an asynchronous API, it will end up with a Future[Result] value rather than a Result value. In that case, you have to use Action.async instead of Action, as illustrated in this typical code example:

val asynchronousAction = Action.async { implicit request =>
  service.asynchronousComputation().map(result => Ok(result))
}

In Java, there is nothing special to do; simply make your method return a Promise<Result> object:

public static Promise<Result> asynchronousAction() {
  return service.asynchronousComputation().map((result) -> ok(result));
}

Managing execution contexts

Because Play uses explicit concurrency control, controllers are also responsible for using the right execution context to run their action's code. Generally, as long as your actions do not invoke heavy computations or blocking APIs, the default execution context should work fine. However, if your code is blocking, it is recommended to use a distinct execution context to run it.

An application with two execution contexts (represented by the black and grey arrows). You can specify in which execution context each action should be executed, as explained in this section.

Unfortunately, there is no non-blocking standard API for relational database communication (JDBC is blocking). It means that all our actions that invoke code executing database queries should be run in a distinct execution context so that the default execution context is not blocked. This distinct execution context has to be configured according to your needs. In the case of JDBC communication, your execution context should be a thread pool with as many threads as your maximum number of connections. The following diagram illustrates such a configuration:

The preceding diagram shows two execution contexts, each with two threads. The execution context at the top of the figure runs database code, while the default execution context (at the bottom) handles the remaining (non-blocking) actions. In practice, it is convenient to use Akka to define your execution contexts as they are easily configurable. Akka is a library used for building concurrent, distributed, and resilient event-driven applications. This article assumes that you have some knowledge of Akka; if that is not the case, do some research on it. Play integrates Akka and manages an actor system that follows your application's life cycle (that is, it is started and shut down with the application). For more information on Akka, visit http://akka.io. Here is how you can create an execution context with a thread pool of 10 threads, in your application.conf file:

jdbc-execution-context {
  thread-pool-executor {
    core-pool-size-factor = 10.0
    core-pool-size-max = 10
  }
}

You can use it as follows in your code:

import play.api.libs.concurrent.Akka
import play.api.Play.current

implicit val jdbc =
  Akka.system.dispatchers.lookup("jdbc-execution-context")

The Akka.system expression retrieves the actor system managed by Play. Then, the execution context is retrieved using Akka's API.
Finally, forcing the use of a specific execution context for a given action can be achieved as follows (provided that my.execution.context is an implicit execution context):

    import my.execution.context

    val myAction = Action.async {
      Future { … }
    }

The Java equivalent code is as follows:

    public static Promise<Result> myAction() {
      return Promise.promise(
          () -> { … },
          HttpExecutionContext.fromThread(myExecutionContext));
    }

Does this feel like clumsy code? Buy the book to learn how to reduce the boilerplate!

Summary

This article detailed a lot of things on the internals of the framework. You now know that Play uses an evented execution model to process requests and serve responses, and that this implies that your code should not block the execution thread. You know how to use future blocks and promises to define computation fragments that can be concurrently managed by Play's execution context, and how to define your own execution context with a different threading policy, for example, if you are constrained to use a blocking API.

Resources for Article:

Further resources on this subject:
Play! Framework 2 – Dealing with Content [article]
So, what is Play? [article]
Play Framework: Introduction to Writing Modules [article]

So, what is Apache Wicket?

Packt
04 Sep 2013
7 min read
(For more resources related to this topic, see here.) Wicket is a component-based Java web framework that uses just Java and HTML. Here, you will see the main advantages of using Apache Wicket in your projects. Using Wicket, you will not have mutant HTML pages. Most of the Java web frameworks require the insertion of special syntax to the HTML code, making it more difficult for Web designers. On the other hand, Wicket adopts HTML templates by using a namespace that follows the XHTML standard. It consists of an id attribute in the Wicket namespace (wicket:id). You won't need scripts to generate messy HTML code. Using Wicket, the code will be clearer, and refactoring and navigating within the code will be easier. Moreover, you can utilize any HTML editor to edit the HTML files, and web designers can work with little knowledge of Wicket in the presentation layer without worrying about business rules and other developer concerns. The advantages for developers are as follows: All code is written in Java No XML configuration files POJO-centric programming No Back-button problems (that is, unexpected and undesirable results on clicking on the browser's Back button) Ease of creating bookmarkable pages Great compile-time and runtime problem diagnosis Easy testability of components Another interesting thing is that concepts such as generics and anonymous subclasses are widely used in Wicket, leveraging the Java programming language to the max. Wicket is based on components. A component is an object that interacts with other components and encapsulates a set of functionalities. Each component should be reusable, replaceable, extensible, encapsulated, and independent, and it does not have a specific context. Wicket provides all these principles to developers because it has been designed taking into account all of them. In particular, the most remarkable principle is reusability. Developers can create custom reusable components in a straightforward way. For instance, you could create a custom component called SearchPanel (by extending the Panel class, which is also a component) and use it in all your other Wicket projects. Wicket has many other interesting features. Wicket also aims to make the interaction of the stateful server-side Java programming language with the stateless HTTP protocol more natural. Wicket's code is safe by default. For instance, it does not encode state in URLs. Wicket is also efficient (for example, it is possible to do a tuning of page-state replication) and scalable (Wicket applications can easily work on a cluster). Last, but not least, Wicket has support for frameworks like EJB and Spring. Installation In seven easy steps, you can build a Wicket "Hello World" application. Step 1 – what do I need? Before you start to use Apache Wicket 6, you will need to check if you have all of the required elements, listed as follows: Wicket is a Java framework, so you need to have Java virtual machine (at least Version 6) installed on your machine. Apache Maven is required. Maven is a tool that can be used for building and managing Java projects. Its main purpose is to make the development process easier and more structured. More information on how to install and configure Maven can be found at http://maven.apache.org. The examples of this book use the Eclipse IDE Juno version, but you can also use other versions or other IDEs, such as NetBeans. In case you are using other versions, check the link for installing the plugins to the version you have; the remaining steps will be the same. 
In case of other IDEs, you will need to follow some tutorial to install other equivalent plugins or not use them at all. Step 2 – installing the m2eclipse plugin The steps for installing the m2eclipse plugin are as follows: Go to Help | Install New Software. Click on Add and type in m2eclipse in the Name field; copy and paste the link https://repository.sonatype.org/content/repositories/forge-sites/m2e/1.3.0/N/LATEST onto the Location field. Check all options and click on Next. Conclude the installation of the m2eclipse plugin by accepting all agreements and clicking on Finish. Step 3 – creating a new Maven application The steps for creating a new Maven application are as follows: Go to File | New | Project. Then go to Maven | Maven Project. Click on Next and type wicket in the next form. Choose the wicket-archetype-quickstart maven Archetype and click on Next. Fill the next form according to the following screenshot and click on Finish: Step 4 – coding the "Hello World" program In this step, we will build the famous "Hello World" program. The separation of concerns will be clear between HTML and Java code. In this example, and in most cases, each HTML file has a corresponding Java class (with the same name). First, we will analyse the HTML template code. The content of the HomePage.html file must be replaced by the following code: <!DOCTYPE html> <html > <body> <span wicket_id="helloWorldMessage">Test</span> </body> </html> It is simple HTML code with the Wicket template wicket:id="helloWorldMessage". It indicates that in the Java code related to this page, a method will replace the message Test by another message. Now, let's edit the corresponding Java class; that is, HomePage. package com.packtpub.wicket.hello_world; import org.apache.wicket.markup.html.WebPage; import org.apache.wicket.markup.html.basic.Label; public class HomePage extends WebPage { public HomePage() { add(new Label("helloWorldMessage", "Hello world!!!")); } } The class HomePage extends WebPage; that is, it inherits some of the WebPage class's methods and attributes, and it becomes a WebPage subtype. One of these inherited methods is the method add(), where a Label object can be passed as a parameter. A Label object can be built by passing two parameters: an identifier and a string. The method add() is called in the HomePage class's constructor and will change the message in wicket:id="helloWorldMessage" with Hello world!!!. The resulting HTML code will be as shown in the following code snippet: <!DOCTYPE html> <html > <body> <span>Hello world!!!</span> </body> </html> Step 5 – compile and run! The steps to compile and run the project are as follows: To compile, right-click on the project and go to Run As | Maven install. Verify if the compilation was successful. If not, Wicket provides good error messages, so you can try to fix what is wrong. To run the project, right-click on the class Start and go to Run As | Java application. The class Start will run an embedded Jetty instance that will run the application. Verify if the server has started without any problems. Open a web browser and enter this in the address field: http://localhost:8080. In case you have changed the port, enter http://localhost:<port>. The browser should show Hello world!!!. The most common problem that can occur is that port 8080 is already in use. In this case, you can go into the Java Start class (found at src/test/java) and set another port by replacing 8080 in connector. setPort(8080) (line 21) by another number (for example, 9999). 
To stop the server, you can either click on Console and press any key, or click on the red square on the console, which indicates termination. And that's it! By this point, you should have a working Wicket "Hello World" application and are free to play around and discover more about it.

Summary

This article describes how to create a simple "Hello World" application using Apache Wicket 6.

Resources for Article:

Further resources on this subject:
Tips for Deploying Sakai [Article]
OSGi life cycle [Article]
Apache Wicket: Displaying Data Using DataTable [Article]

Touch Events

Packt
26 Nov 2013
7 min read
(For more resources related to this topic, see here.) The why and when of touch events These touch events come in a few different flavors. There are taps, swipes, rotations, pinches, and more complicated gestures available for you to make your users' experience more enjoyable. Each type of touch event has an established convention which is important to know, so your user has minimal friction when using your website for the first time: The tap is analogous to the mouse click in a traditional desktop environment, whereas, the rotate, pinch, and related gestures have no equivalence. This presents an interesting problem—what can those events be used for? Pinch events are typically used to scale views (have a look at the following screenshot of Google Maps accessed from an iPad, which uses pinch events to control the map's zoom level), while rotate events generally rotate elements on screen. Swipes can be used to move content from left to right or top to bottom. Multifigure gestures can be different as per your website's use cases, but it's still worthwhile examining if a similar interaction already exists, and thus an established convention. Web applications can benefit greatly from touch events. For example, imagine a paint-style web application. It could use pinches to control zoom, rotations to rotate the canvas, and swipes to move between color options. There is almost always an intuitive way of using touch events to enhance a web application. In order to get comfortable with touch events, it's best to get out there and start using them practically. To this end, we will be building a paint tool with a difference, that is, instead of using the traditional mouse pointer as the selection and drawing tool, we will exclusively use touch events. Together, we can learn just how awesome touch events are and the pleasure they can afford your users. Touch events are the way of the future, and adding them to your developer tool belt will mean that you can stay at the cutting edge. While MC Hammer may not agree with these events, the rest of the world will appreciate touching your applications. Testing while keeping your sanity (Simple) Touch events are awesome on touch-enabled devices. Unfortunately, your development environment is generally not touch-enabled, which means testing these events can be trickier than normal testing. Thankfully, there are a bunch of methods to test these events on non-touch enabled devices, so you don't have to constantly switch between environments. In saying that though, it's still important to do the final testing on actual devices, as there can be subtle differences. How to do it... Learn how to do initial testing with Google Chrome. Debug remotely with Mobile Safari and Safari. Debug further with simulators and emulators. Use general tools to debug your web application. How it works… As mentioned earlier, testing touch events generally isn't as straightforward as the normal update-reload-test flow in web development. Luckily though, the traditional desktop browser can still get you some of the way when it comes to testing these events. While you generally are targeting Mobile Safari on iOS and Chrome, or Browser on Android, it's best to do development and initial testing in Chrome on the desktop. Chrome has a handy feature in its debugging tools called Emulate touch events. This allows a lot of your mouse-related interactions to trigger their touch equivalents. 
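To have something to exercise with that emulation, a page needs at least one touch handler wired up; here is a minimal JavaScript sketch (the paint-canvas element id is hypothetical, standing in for the drawing surface of the paint tool described above):

    // Minimal touchstart handler to exercise with Chrome's "Emulate touch events"
    var canvas = document.getElementById('paint-canvas');
    canvas.addEventListener('touchstart', function (e) {
      e.preventDefault(); // keep the emulated mouse events from firing as well
      var touch = e.touches[0];
      console.log('touch started at', touch.pageX, touch.pageY);
    }, false);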
You can find this setting by opening the Web Inspector, clicking on the Settings cog, switching to the Overrides tab, and enabling Emulate touch events (if the option is grayed out, simply click on Enable at the top of the screen). Using this emulation, we are able to test many touch interactions; however, there are certain patterns (such as scaling and rotating) that aren't really possible just using emulated events and the mouse pointer. To correctly test these, we need to find the middle ground between testing on a desktop browser and testing on actual devices. There's more… Remember the following about testing with simulators and browsers: Device simulators While testing with Chrome will get you some of the way, eventually you will want to use a simulator or emulator for the target device, be it a phone, tablet, or other touch-enabled device. Testing in Mobile Safari The iOS Simulator, for testing Mobile Safari on iPhone, iPod, and iPad, is available only on OS X. You must first download Xcode from the App Store. Once you have Xcode, go to Xcode | Preferences | Downloads | Components, where you can download the latest iOS Simulator. You can also refer to the following screenshot: You can then open the iOS Simulator from Xcode by accessing Xcode | Open Developer Tool | iOS Simulator. Alternatively, you can make a direct alias by accessing Xcode's package contents by accessing the Xcode icon's context menu in the Applications folder and selecting Show Package Contents. You can then access it at Xcode.app/Contents/Applications. Here you can create an alias, and drag it anywhere using Finder. Once you have the simulator (or an actual device); you can then attach Safari's Web Inspector to it, which makes debugging very easy. Firstly, open Safari and then launch Mobile Safari in either the iOS Simulator or on your actual device (ensure it's plugged in via its USB cable). Ensure that Web Inspector is enabled in Mobile Safari's settings via Settings | Safari | Advanced | Web Inspector. You can refer to the following screenshot: Then, in Safari, go to Develop | <your device name> | <tab name>. You can now use the Web Inspector to inspect your remote Mobile Safari instance as shown in the following screenshot: Testing in Mobile Safari in the iOS Simulator allows you to use the mouse to trigger all the relevant touch events. In addition, you can hold down the Alt key to test pinch gestures. Testing on Android's Chrome Testing on the Android on desktop is a little more complicated, due to the nature of the Android emulator. Being an emulator and not a simulator, its performance suffers (in the name of accuracy). It also isn't so straightforward to attach a debugger. You can download the Android SDK from http://developer.android.com/sdk. Inside the tools directory, you can launch the Android Emulator by setting up a new Android Virtual Device (AVD). You can then launch the browser to test the touch events. Keep in mind to use 10.0.2.2 instead of localhost to access local running servers. To debug with a real device and Google Chrome, official documentation can be followed at https://developers.google.com/chrome-developer-tools/docs/remote-debugging. When there isn't a built-in debugger or inspector, you can use generic tools such as Web Inspector Remote (Weinre). This tool is platform-agnostic, so you can use it for Mobile Safari, Android's Browser, Chrome for Android, Windows Phone, Blackberry, and so on. It can be downloaded from http://people.apache.org/~pmuellr/weinre. 
To start using Weinre, you need to include a JavaScript file on the page you want to test, and then run the testing server. The URL of this JavaScript file is given to you when you run the weinre command from its downloaded directory. When you have Weinre running on your desktop, you can use its Web Inspector-like interface to do remote debugging and inspect your touch events; you can send messages to console.log() and use a familiar interface for other debugging.

Summary

In this article, the author discussed touch events and highlighted the limitations such events face when an unfavorable environment is encountered. He also covered methods for testing these events on non-touch enabled devices.

Resources for Article:

Further resources on this subject:
Web Design Principles in Inkscape [Article]
Top Features You Need to Know About – Responsive Web Design [Article]
Sencha Touch: Layouts Revisited [Article]

WPF 4.5 Application and Windows

Packt
24 Sep 2012
14 min read
Creating a window Windows are the typical top level controls in WPF. By default, a MainWindow class is created by the application wizard and automatically shown upon running the application. In this recipe, we'll take a look at creating and showing other windows that may be required during the lifetime of an application. Getting ready Make sure Visual Studio is up and running. How to do it... We'll create a new class derived from Window and show it when a button is clicked: Create a new WPF application named CH05.NewWindows. Right-click on the project node in Solution explorer, and select Add | Window…: In the resulting dialog, type OtherWindow in the Name textbox and click on Add. A file named OtherWindow.xaml should open in the editor. Add a TextBlock to the existing Grid, as follows: <TextBlock Text="This is the other window" FontSize="20" VerticalAlignment="Center" HorizontalAlignment="Center" /> Open MainWindow.xaml. Add a Button to the Grid with a Click event handler: <Button Content="Open Other Window" FontSize="30" Click="OnOpenOtherWindow" /> In the Click event handler, add the following code: void OnOpenOtherWindow(object sender, RoutedEventArgs e) { var other = new OtherWindow(); other.Show(); } Run the application, and click the button. The other window should appear and live happily alongside the main window: How it works... A Window is technically a ContentControl, so can contain anything. It's made visible using the Show method. This keeps the window open as long as it's not explicitly closed using the classic close button, or by calling the Close method. The Show method opens the window as modeless—meaning the user can return to the previous window without restriction. We can click the button more than once, and consequently more Window instances would show up. There's more... The first window shown can be configured using the Application.StartupUri property, typically set in App.xaml. It can be changed to any other window. For example, to show the OtherWindow from the previous section as the first window, open App.xaml and change the StartupUri property to OtherWindow.xaml: StartupUri="OtherWindow.xaml" Selecting the startup window dynamically Sometimes the first window is not known in advance, perhaps depending on some state or setting. In this case, the StartupUri property is not helpful. We can safely delete it, and provide the initial window (or even windows) by overriding the Application.OnStartup method as follows (you'll need to add a reference to the System.Configuration assembly for the following to compile): protected override void OnStartup(StartupEventArgs e) {    Window mainWindow = null;    // check some state or setting as appropriate          if(ConfigurationManager.AppSettings["AdvancedMode"] == "1")       mainWindow = new OtherWindow();    else       mainWindow = new MainWindow();    mainWindow.Show(); } This allows complete flexibility in determining what window or windows should appear at application startup. Accessing command line arguments The WPF application created by the New Project wizard does not expose the ubiquitous Main method. WPF provides this for us – it instantiates the Application object and eventually loads the main window pointed to by the StartupUri property. The Main method, however, is not just a starting point for managed code, but also provides an array of strings as the command line arguments passed to the executable (if any). As Main is now beyond our control, how do we get the command line arguments? 
Fortunately, the same OnStartup method provides a StartupEventArgs object, in which the Args property is mirrored from Main. The downloadable source for this chapter contains the project CH05.CommandLineArgs, which shows an example of its usage. Here's the OnStartup override: protected override void OnStartup(StartupEventArgs e) { string text = "Hello, default!"; if(e.Args.Length > 0) text = e.Args[0]; var win = new MainWindow(text); win.Show(); } The MainWindow instance constructor has been modified to accept a string that is later used by the window. If a command line argument is supplied, it is used. Creating a dialog box A dialog box is a Window that is typically used to get some data from the user, before some operation can proceed. This is sometimes referred to as a modal window (as opposed to modeless, or non-modal). In this recipe, we'll take a look at how to create and manage such a dialog box. Getting ready Make sure Visual Studio is up and running. How to do it... We'll create a dialog box that's invoked from the main window to request some information from the user: Create a new WPF application named CH05.Dialogs. Add a new Window named DetailsDialog.xaml (a DetailsDialog class is created). Visual Studio opens DetailsDialog.xaml. Set some Window properties: FontSize to 16, ResizeMode to NoResize, SizeToContent to Height, and make sure the Width is set to 300: ResizeMode="NoResize" SizeToContent="Height" Width="300" FontSize="16" Add four rows and two columns to the existing Grid, and add some controls for a simple data entry dialog as follows: <Grid.RowDefinitions> <RowDefinition Height="Auto" /> <RowDefinition Height="Auto"/> <RowDefinition Height="Auto"/> <RowDefinition Height="Auto"/> </Grid.RowDefinitions> <Grid.ColumnDefinitions> <ColumnDefinition Width="Auto" /> <ColumnDefinition /> </Grid.ColumnDefinitions> <TextBlock Text="Please enter details:" Grid.ColumnSpan="2" Margin="4,4,4,20" HorizontalAlignment="Center"/> <TextBlock Text="Name:" Grid.Row="1" Margin="4"/> <TextBox Grid.Column="1" Grid.Row="1" Margin="4" x_Name="_name"/> <TextBlock Text="City:" Grid.Row="2" Margin="4"/> <TextBox Grid.Column="1" Grid.Row="2" Margin="4" x_Name="_city"/> <StackPanel Grid.Row="3" Orientation="Horizontal" Margin="4,20,4,4" Grid.ColumnSpan="2" HorizontalAlignment="Center"> <Button Content="OK" Margin="4"  /> <Button Content="Cancel" Margin="4" /> </StackPanel> This is how it should look in the designer: The dialog should expose two properties for the name and city the user has typed in. Open DetailsDialog.xaml.cs. Add two simple properties: public string FullName { get; private set; } public string City { get; private set; } We need to show the dialog from somewhere in the main window. Open MainWindow.xaml, and add the following markup to the existing Grid: <Grid.RowDefinitions>     <RowDefinition Height="Auto" />     <RowDefinition /> </Grid.RowDefinitions> <Button Content="Enter Data" Click="OnEnterData"         Margin="4" FontSize="16"/> <TextBlock FontSize="24" x_Name="_text" Grid.Row="1"     VerticalAlignment="Center" HorizontalAlignment="Center"/> In the OnEnterData handler, add the following: private void OnEnterData(object sender, RoutedEventArgs e) { var dlg = new DetailsDialog(); if(dlg.ShowDialog() == true) { _text.Text = string.Format( "Hi, {0}! I see you live in {1}.", dlg.FullName, dlg.City); } } Run the application. Click the button and watch the dialog appear. The buttons don't work yet, so your only choice is to close the dialog using the regular close button. 
Clearly, the return value from ShowDialog is not true in this case. When the OK button is clicked, the properties should be set accordingly. Add a Click event handler to the OK button, with the following code: private void OnOK(object sender, RoutedEventArgs e) { FullName = _name.Text; City = _city.Text; DialogResult = true; Close(); } The Close method dismisses the dialog, returning control to the caller. The DialogResult property indicates the returned value from the call to ShowDialog when the dialog is closed. Add a Click event handler for the Cancel button with the following code: private void OnCancel(object sender, RoutedEventArgs e) { DialogResult = false; Close(); } Run the application and click the button. Enter some data and click on OK: You will see the following window: How it works... A dialog box in WPF is nothing more than a regular window shown using ShowDialog instead of Show. This forces the user to dismiss the window before she can return to the invoking window. ShowDialog returns a Nullable (can be written as bool? in C#), meaning it can have three values: true, false, and null. The meaning of the return value is mostly up to the application, but typically true indicates the user dismissed the dialog with the intention of making something happen (usually, by clicking some OK or other confirmation button), and false means the user changed her mind, and would like to abort. The null value can be used as a third indicator to some other application-defined condition. The DialogResult property indicates the value returned from ShowDialog because there is no other way to convey the return value from the dialog invocation directly. That's why the OK button handler sets it to true and the Cancel button handler sets it to false (this also happens when the regular close button is clicked, or Alt + F4 is pressed). Most dialog boxes are not resizable. This is indicated with the ResizeMode property of the Window set to NoResize. However, because of WPF's flexible layout, it certainly is relatively easy to keep a dialog resizable (and still manageable) where it makes sense (such as when entering a potentially large amount of text in a TextBox – it would make sense if the TextBox could grow if the dialog is enlarged). There's more... Most dialogs can be dismissed by pressing Enter (indicating the data should be used) or pressing Esc (indicating no action should take place). This is possible to do by setting the OK button's IsDefault property to true and the Cancel button's IsCancel property to true. The default button is typically drawn with a heavier border to indicate it's the default button, although this eventually depends on the button's control template. If these settings are specified, the handler for the Cancel button is not needed. Clicking Cancel or pressing Esc automatically closes the dialog (and sets DiaglogResult to false). The OK button handler is still needed as usual, but it may be invoked by pressing Enter, no matter what control has the keyboard focus within the Window. The CH05.DefaultButtons project from the downloadable source for this chapter demonstrates this in action. Modeless dialogs A dialog can be show as modeless, meaning it does not force the user to dismiss it before returning to other windows in the application. This is done with the usual Show method call – just like any Window. The term dialog in this case usually denotes some information expected from the user that affects other windows, sometimes with the help of another button labelled "Apply". 
The problem here is mostly logical—how to convey the information change. The best way would be using data binding, rather than manually modifying various objects. We'll take an extensive look at data binding in the next chapter. Using the common dialog boxes Windows has its own built-in dialog boxes for common operations, such as opening files, saving a file, and printing. Using these dialogs is very intuitive from the user's perspective, because she has probably used those dialogs before in other applications. WPF wraps some of these (native) dialogs. In this recipe, we'll see how to use some of the common dialogs. Getting ready Make sure Visual Studio is up and running. How to do it... We'll create a simple image viewer that uses the Open common dialog box to allow the user to select an image file to view: Create a new WPF Application named CH05.CommonDialogs. Open MainWindow.xaml. Add the following markup to the existing Grid: <Grid.RowDefinitions>     <RowDefinition Height="Auto" />     <RowDefinition /> </Grid.RowDefinitions> <Button Content="Open Image" FontSize="20" Click="OnOpenImage"         HorizontalAlignment="Center" Margin="4" /> <Image Grid.Row="1" x_Name="_img" Stretch="Uniform" /> Add a Click event handler for the button. In the handler, we'll first create an OpenFileDialog instance and initialize it (add a using to the Microsoft.Win32 namespace): void OnOpenImage(object sender, RoutedEventArgs e) { var dlg = new OpenFileDialog { Filter = "Image files|*.png;*.jpg;*.gif;*.bmp", Title = "Select image to open", InitialDirectory = Environment.GetFolderPath( Environment.SpecialFolder.MyPictures) }; Now we need to show the dialog and use the selected file (if any): if(dlg.ShowDialog() == true) { try { var bmp = new BitmapImage(new Uri(dlg.FileName)); _img.Source = bmp; } catch(Exception ex) { MessageBox.Show(ex.Message, "Open Image"); } } Run the application. Click the button and navigate to an image file and select it. You should see something like the following: How it works... The OpenFileDialog class wraps the Win32 open/save file dialog, providing easy enough access to its capabilities. It's just a matter of instantiating the object, setting some properties, such as the file types (Filter property) and then calling ShowDialog. This call, in turn, returns true if the user selected a file and false otherwise (null is never returned, although the return type is still defined as Nullable for consistency). The look of the Open file dialog box may be different in various Windows versions. This is mostly unimportant unless some automated UI testing is done. In this case, the way the dialog looks or operates may have to be taken into consideration when creating the tests. The filename itself is returned in the FileName property (full path). Multiple selections are possible by setting the MultiSelect property to true (in this case the FileNames property returns the selected files). There's more... WPF similarly wraps the Save As common dialog with the SaveFileDialog class (in the Microsoft.Win32 namespace as well). Its use is very similar to OpenFileDialog (in fact, both inherit from the abstract FileDialog class). What about folder selection (instead of files)? The WPF OpenFileDialog does not support that. One solution is to use Windows Forms' FolderBrowseDialog class. Another good solution is to use the Windows API Code Pack described shortly. Another common dialog box WPF wraps is PrintDialog (in System.Windows.Controls). 
This shows the familiar print dialog, with options to select a printer, orientation, and so on. The most straightforward way to print would be calling PrintVisual (after calling ShowDialog), providing anything that derives from the Visual abstract class (which include all elements). General printing is a complex topic and is beyond the scope of this book. What about colors and fonts? Windows also provides common dialogs for selecting colors and fonts. However, these are not wrapped by WPF. There are several alternatives: Use the equivalent Windows Forms classes (FontDialog and ColorDialog, both from System.Windows.Forms) Wrap the native dialogs yourself Look for alternatives on the Web The first option is possible, but has two drawbacks: first, it requires adding reference to the System.Windows.Forms assembly; this adds a dependency at compile time, and increases memory consumption at run time, for very little gain. The second drawback has to do with the natural mismatch between Windows Forms and WPF. For example, ColorDialog returns a color as a System.Drawing.Color, but WPF uses System.Windows.Media.Color. This requires mapping a GDI+ color (WinForms) to WPF's color, which is cumbersome at best. The second option of doing your own wrapping is a non-trivial undertaking and requires good interop knowledge. The other downside is that the default color and font common dialogs are pretty old (especially the color dialog), so there's much room for improvement. The third option is probably the best one. There are more than a few good candidates for color and font pickers. For a color dialog, for example, you can use the ColorPicker or ColorCanvas provided with the Extended WPF toolkit library on CodePlex (http://wpftoolkit.codeplex.com/). Here's how these may look (ColorCanvas on the left-hand side, and one of the possible views of ColorPicker on the right-hand side): The Windows API Code Pack The Windows API Code Pack is a Microsoft project on CodePlex (http://archive.msdn.microsoft.com/WindowsAPICodePack) that provides many .NET wrappers to native Windows features, in various areas, such as shell, networking, Windows 7 features (this is less important now as WPF 4 added first class support for Windows 7), power management, and DirectX. One of the Shell features in the library is a wrapper for the Open dialog box that allows selecting a folder instead of a file. This has no dependency on the WinForms assembly.
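As a minimal sketch of that folder-selection wrapper (assuming the Code Pack's Microsoft.WindowsAPICodePack.Shell assembly is referenced in the project), the Open dialog can be switched into folder-picking mode as follows:

    // Folder selection via the Windows API Code Pack (no WinForms dependency)
    using Microsoft.WindowsAPICodePack.Dialogs;

    var dlg = new CommonOpenFileDialog
    {
        IsFolderPicker = true,
        Title = "Select a folder"
    };
    if (dlg.ShowDialog() == CommonFileDialogResult.Ok)
    {
        string selectedFolder = dlg.FileName; // full path of the chosen folder
    }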

Load, Validate, and Submit Forms using Ext JS 3.0: Part 2

Packt
19 Nov 2009
4 min read
Creating validation functions for URLs, email addresses, and other types of data Ext JS has an extensive library of validation functions. This is how it can be used to validate URLs, email addresses, and other types of data. The following screenshot shows email address validation in action: This screenshot displays URL validation in action: How to do it... Initialize the QuickTips singleton: Ext.QuickTips.init(); Create a form with fields that accept specific data formats: Ext.onReady(function() { var commentForm = new Ext.FormPanel({ frame: true, title: 'Send your comments', bodyStyle: 'padding:5px', width: 550, layout: 'form', defaults: { msgTarget: 'side' }, items: [ { xtype: 'textfield', fieldLabel: 'Name', name: 'name', anchor: '95%', allowBlank: false }, { xtype: 'textfield', fieldLabel: 'Email', name: 'email', anchor: '95%', vtype: 'email' }, { xtype: 'textfield', fieldLabel: 'Web page', name: 'webPage', vtype: 'url', anchor: '95%' }, { xtype: 'textarea', fieldLabel: 'Comments', name: 'comments', anchor: '95%', height: 150, allowBlank: false }], buttons: [{ text: 'Send' }, { text: 'Cancel' }] }); commentForm.render(document.body);}); How it works... The vtype configuration option specifies which validation function will be applied to the field. There's more... Validation types in Ext JS include alphanumeric, numeric, URL, and email formats. You can extend this feature with custom validation functions, and virtually, any format can be validated. For example, the following code shows how you can add a validation type for JPG and PNG files: Ext.apply(Ext.form.VTypes, { Picture: function(v) { return /^.*.(jpg|JPG|png|PNG)$/.test(v); }, PictureText: 'Must be a JPG or PNG file';}); If you need to replace the default error text provided by the validation type, you can do so by using the vtypeText configuration option: { xtype: 'textfield', fieldLabel: 'Web page', name: 'webPage', vtype: 'url', vtypeText: 'I am afraid that you did not enter a URL', anchor: '95%'} See also... The Specifying the required fields in a form recipe, covered earlier in this article, explains how to make some form fields required The Setting the minimum and maximum length allowed for a field's value recipe, covered earlier in this article, explains how to restrict the number of characters entered in a field The Changing the location where validation errors are displayed recipe, covered earlier in this article, shows how to relocate a field's error icon Refer to the previous recipe, Deferring field validation until form submission, to know how to validate all fields at once upon form submission, instead of using the default automatic field validation The next recipe, Confirming passwords and validating dates using relational field validation, explains how to perform validation when the value of one field depends on the value of another field The Rounding up your validation strategy with server-side validation of form fields recipe (covered later in this article) explains how to perform server-side validation Confirming passwords and validating dates using relational field validation Frequently, you face scenarios where the values of two fields need to match, or the value of one field depends on the value of another field. Let's examine how to build a registration form that requires the user to confirm his or her password when signing up. 
How to do it… Initialize the QuickTips singleton: Ext.QuickTips.init(); Create a custom vtype to handle the relational validation of the password: Ext.apply(Ext.form.VTypes, { password: function(val, field) { if (field.initialPassField) { var pwd = Ext.getCmp(field.initialPassField); return (val == pwd.getValue()); } return true; }, passwordText: 'What are you doing?<br/>The passwords entered do not match!'}); Create the signup form: var signupForm = { xtype: 'form', id: 'register-form', labelWidth: 125, bodyStyle: 'padding:15px;background:transparent', border: false, url: 'signup.php', items: [ { xtype: 'box', autoEl: { tag: 'div', html: '<div class="app-msg"><img src="img/businessman add.png" class="app-img" /> Register for The Magic Forum</div>' } }, { xtype: 'textfield', id: 'email', fieldLabel: 'Email', allowBlank: false, minLength: 3, maxLength: 64,anchor:'90%', vtype:'email' }, { xtype: 'textfield', id: 'pwd', fieldLabel: 'Password', inputType: 'password',allowBlank: false, minLength: 6, maxLength: 32,anchor:'90%', minLengthText: 'Password must be at least 6 characters long.' }, { xtype: 'textfield', id: 'pwd-confirm', fieldLabel: 'Confirm Password', inputType: 'password', allowBlank: false, minLength: 6, maxLength: 32,anchor:'90%', minLengthText: 'Password must be at least 6 characters long.', vtype: 'password', initialPassField: 'pwd' }],buttons: [{ text: 'Register', handler: function() { Ext.getCmp('register-form').getForm().submit(); }},{ text: 'Cancel', handler: function() { win.hide(); } }]} Create the window that will host the signup form: Ext.onReady(function() { win = new Ext.Window({ layout: 'form', width: 340, autoHeight: true, closeAction: 'hide', items: [signupForm] }); win.show();

Working with Rails – ActiveRecord, Migrations, Models, Scaffolding, and Database Completion

Packt
22 Oct 2009
8 min read
ActiveRecord, Migrations, and Models

ActiveRecord is the ORM layer (see the section Connecting Rails to a Database in the previous article) used in Rails. It is used by controllers as a proxy to the database tables. What's really great about this is that it protects you against having to code SQL. Writing SQL is one of the least desirable aspects of developing with other web-centric languages (like PHP): having to manually build SQL statements, remembering to correctly escape quotes, and creating labyrinthine join statements to pull data from multiple tables. ActiveRecord does away with all of that (most of the time), instead presenting database tables through classes (a class which wraps around a database table is called a model) and instances of those classes (model instances). The best way to illustrate the beauty of ActiveRecord is to start using it.

Model == Table

The base concept in ActiveRecord is the model. Each model class is stored in the app/models directory inside your application, in its own file. So, if you have a model called Person, the file holding that model is in app/models/person.rb, and the class for that model, defined in that file, is called Person.

Each model will usually correspond to a table in the database. The name of the database table is, by convention, the pluralized (in the English language), lower-case form of the model's class name. In the case of our Intranet application, the models are organized as follows:

    Table      | Model class | File containing class definition (in app/models)
    people     | Person      | person.rb
    companies  | Company     | company.rb
    addresses  | Address     | address.rb

We haven't built any of these yet, but we will shortly.

Which Comes First: The Model or The Table?

To get going with our application, we need to generate the tables to store data into, as shown in the previous section. It used to be at this point where we would reach for a MySQL client and create the database tables using a SQL script. (This is typically how you would code a database for a PHP application.) However, things have moved on in the Rails world. The Rails developers came up with a pretty good (not perfect, but pretty good) mechanism for generating databases without the need for SQL: it's called migrations, and it is a part of ActiveRecord.

Migrations enable a developer to generate a database structure using a series of Ruby script files (each of which is an individual migration) to define database operations. The "operations" part of that last sentence is important: migrations are not just for creating tables, but also for dropping tables, altering them, and even adding data to them. It is this multi-faceted aspect of migrations which makes them useful, as they can effectively be used to version a database (in much the same way as Subversion can be used to version code). A team of developers can use migrations to keep their databases in sync: when a change to the database is made by one of the team and coded into a migration, the other developers can apply the same migration to their database, so they are all working with a consistent structure.

When you run a migration, the Ruby script is converted into the SQL code appropriate to your database server and executed over the database connection. However, migrations don't work with every database adapter in Rails: check the Database Support section of the ActiveRecord::Migration documentation to find out whether your adapter is supported.
At the time of writing, MySQL, PostgreSQL, SQLite, SQL Server, Sybase, and Oracle were all supported by migrations. Another way to check whether your database supports migrations is to run the following command in the console (the output shown below is the result of running this using the MySQL adapter): >> ActiveRecord::Base.connection.supports_migrations? => true We're going to use migrations to develop our database, so we'll be building the model first. The actual database table will be generated from a migration attached to the model. Building a Model with Migrations In this section, we're going to develop a series of migrations to recreate the database structure outlined in Chapter 2 of the book Ruby on Rails Enterprise Application Development: Plan, Program, Extend. First, we'll work on a model and migration for the people table. Rails has a generate script for generating a model and its migration. (This script is in the script directory, along with the other Rails built-in scripts.) The script builds the model, a base migration for the table, plus scripts for testing the model. Run it like this: $ ruby script/generate model Person exists app/models/  exists test/unit/    exists test/fixtures/    create app/models/person.rb    create test/unit/person_test.rb    create test/fixtures/people.yml    exists db/migrate    create db/migrate/001_create_people.rb Note that we passed the singular, uppercase version of the table name ("people" becomes "Person") to the generate script. This generates a Person model in the file app/models/person.rb; and a corresponding migration for a people table (db/ migrate/001_create_people.rb). As you can see, the script enforces the naming conventions, which connects the table to the model. The migration name is important, as it contains sequencing information: the "001" part of the name indicates that running this migration will bring the database schema up to version 1; subsequent migrations will be numbered "002...", "003..." etc., each specifying the actions required to bring the database schema up to that version from the previous one. The next step is to edit the migration so that it will create the people table structure. At this point, we can return to Eclipse to do our editing. (Remember that you need to refresh the file list in Eclipse to see the files you just generated). Once, you have started Eclipse, open the file db/migrate/001_create_people.rb. It should look like this:     class CreatePeople < ActiveRecord::Migration        def self.up            create_table :people do |t|                # t.column :name, :string            end        end        def self.down            drop_table :people        end    end This is a migration class with two class methods, self.up and self.down. The self.up method is applied when migrating up one database version number: in this case, from version 0 to version 1. The self.down method is applied when moving down a version number (from version 1 to 0). You can leave self.down as it is, as it simply drops the database table. This migration's self.up method is going to add our new table using the create_table method, so this is the method we're going to edit in the next section. Ruby syntaxExplaining the full Ruby syntax is outside the scope of this book. For our purposes, it suffices to understand the most unusual parts. For example, in the create_table method call shown above:,     create_table :people do |t|        t.column :title, :string        ...    
    end

The first unusual part of this is the block construct, a powerful technique for creating nameless functions. In the example code above, the block is initialized by the do keyword; this is followed by a list of parameters to the block (in this case, just t); and closed by the end keyword. The statements in-between the do and end keywords are run within the context of the block. Blocks are similar to lambda functions in Lisp or Python, providing a mechanism for passing a function as an argument to another function. In the case of the example, a block accepting a single argument, t, is passed to the create_table :people method call, and t has methods called on it within the body of the block. When create_table is called, the resulting table object is "yielded" to the block; effectively, the object is passed into the block as the argument t, and has its column method called multiple times.

One other oddity is the symbol: that's what the words prefixed with a colon are. A symbol is the name of a variable. However, in much of Rails, it is used in contexts where it is functionally equivalent to a string, to make the code look more elegant. In fact, in migrations, strings can be used interchangeably with symbols.
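The excerpt ends before the people table's columns are actually filled in. As a rough sketch only (these column names are assumptions for illustration, not the book's actual schema), a completed migration in the same old-style t.column syntax might look like this:

    class CreatePeople < ActiveRecord::Migration
      def self.up
        create_table :people do |t|
          # assumed columns, for illustration only
          t.column :title,      :string
          t.column :first_name, :string
          t.column :last_name,  :string
          t.column :email,      :string
          t.column :company_id, :integer  # foreign key to the companies table
        end
      end

      def self.down
        drop_table :people
      end
    end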

Feeds in Facebook Applications

Packt
16 Oct 2009
7 min read
{literal} What Are Feeds? Feeds are the way to publish news in Facebook. As we have already mentioned before, there are two types of feeds in Facebook, News feed and Mini feed. News feed instantly tracks activities of a user's online friends, ranging from changes in relationship status to added photos to wall comments. Mini feed appears on individuals' profiles and highlights recent social activity. You can see your news feed right after you log in, and point your browser to http://www.facebook.com/home.php. It looks like the following, which is, in fact, my news feed. Mini feeds are seen in your profile page, displaying your recent activities and look like the following one: Only the last 10 entries are being displayed in the mini feed section of the profile page. But you can always see the complete list of mini feeds by going to http://www.facebook.com/minifeed.php. Also the mini feed of any user can be accessed from http://www.facebook.com/minifeed.php?id=userid. There is another close relation between news feed and mini feed. When applications publish a mini feed in your profile, it will also appear in your friend's news feed page. How to publish Feeds Facebook provides three APIs to publish mini feeds and news feeds. But these are restricted to call not more than 10 times for a particular user in a 48 hour cycle. This means you can publish a maximum of 10 feeds in a specific user's profile within 48 hours. The following three APIs help to publish feeds: feed_publishStoryToUser—this function publishes the story to the news feed of any user (limited to call once every 12 hours). feed_publishActionOfUser—this one publishes the story to a user's mini feed, and to his or her friend's news feed (limited to call 10 times in a rolling 48 hour slot). feed_publishTemplatizedAction—this one also publishes mini feeds and news feeds, but in an easier way (limited to call 10 times in a rolling 48 hour slot). You can test this API also from http://developers.facebook.com/tools.php?api, and by choosing Feed Preview Console, which will give you the following interface: And once you execute the sample, like the previous one, it will preview the sample of your feed. Sample application to play with Feeds Let's publish some news to our profile, and test how the functions actually work. In this section, we will develop a small application (RateBuddies) by which we will be able to send messages to our friends, and then publish our activities as a mini feed. The purpose of this application is to display friends list and rate them in different categories (Awesome, All Square, Loser, etc.). 
Here is the code of our application: index.php<?include_once("prepend.php"); //the Lib and key container?><div style="padding:20px;"><?if (!empty($_POST['friend_sel'])){ $friend = $_POST['friend_sel']; $rating = $_POST['rate']; $title = "<fb:name uid='{$fbuser}' useyou='false' /> just <a href='http://apps.facebook.com/ratebuddies/'>Rated</a> <fb:name uid='{$friend}' useyou='false' /> as a '{$rating}' "; $body = "Why not you also <a href='http://apps.facebook.com/ratebuddies/'>rate your friends</a>?";try{//now publish the story to user's mini feed and on his friend's news feed $facebook->api_client->feed_publishActionOfUser($title, $body, null, $null,null, null, null, null, null, null, 1); } catch(Exception $e) { //echo "Error when publishing feeds: "; echo $e->getMessage(); }}?> <h1>Welcome to RateBuddies, your gateway to rate your friends</h1> <div style="padding-top:10px;"> <form method="POST"> Seect a friend: <br/><br/> <fb:friend-selector uid="<?=$fbuser;?>" name="friendid" idname="friend_sel" /> <br/><br/><br/> And your friend is: <br/> <table> <tr> <td valign="middle"><input name="rate" type="radio" value="funny" /></td> <td valign="middle">Funny</td> </tr> <tr> <td valign="middle"><input name="rate" type="radio" value="hot tempered" /></td> <td valign="middle">Hot Tempered</td> </tr> <tr> <td valign="middle"><input name="rate" type="radio" value="awesome" /></td> <td valign="middle">Awesome</td> </tr> <tr> <td valign="middle"><input name="rate" type="radio" value="naughty professor" /></td> <td valign="middle">Naughty Professor</td> </tr> <tr> <td valign="middle"><input name="rate" type="radio" value="looser" /></td> <td valign="middle">Looser</td> </tr> <tr> <td valign="middle"><input name="rate" type="radio" value="empty veseel" /></td> <td valign="middle">Empty Vessel</td> </tr> <tr> <td valign="middle"><input name="rate" type="radio" value="foxy" /></td> <td valign="middle">Foxy</td> </tr> <tr> <td valign="middle"><input name="rate" type="radio" value="childish" /></td> <td valign="middle">Childish</td> </tr> </table> &nbsp; <input type="submit" value="Rate Buddy"/> </form> </div></div> index.php includes another file called prepend.php. In that file, we initialized the facebook api client using the API key and Secret key of the current application. It is a good practice to keep them in separate file because we need to use them throughout our application, in as many pages as we have. Here is the code of that file: prepend.php<?php// this defines some of your basic setupinclude 'client/facebook.php'; // the facebook API library// Get these from ?http://www.facebook.com/developers/apps.phphttp://www.facebook.com/developers/apps.php$api_key = 'your api key';//the api ket of this application$secret = 'your secret key'; //the secret key$facebook = new Facebook($api_key, $secret); //catch the exception that gets thrown if the cookie has an invalid session_key in it try { if (!$facebook->api_client->users_isAppAdded()) { $facebook->redirect($facebook->get_add_url()); } } catch (Exception $ex) { //this will clear cookies for your application and redirect them to a login prompt $facebook->set_user(null, null); $facebook->redirect($appcallbackurl); }?> The client is a standard Facebook REST API client, which is available directly from Facebook. If you are not sure about these API keys, then point your browser to http://www.facebook.com/developers/apps.php and collect the API key and secret key from there. 
Here is a screenshot of that page: Just collect your API key and Secret Key from this page when you develop your own application. Now, when you point your browser to http://apps.facebook.com/ratebuddies and successfully add that application, it will look like this: To see how this app works, type a friend's name into the Select a friend box and click on any rating, such as Funny or Foxy. Then click on the Rate Buddy button. As soon as the page submits, open your profile page and you will see that a mini feed has been published in your profile.

Kendo UI DataViz – Advance Charting

Packt
23 Jun 2014
10 min read
(For more resources related to this topic, see here.) Creating a chart to show stock history The Kendo UI library provides a specialized chart widget that can be used to display the stock price data for a particular stock over a period of time. In this recipe, we will take a look at creating a Stock chart and customizing it. Getting started Include the CSS files, kendo.dataviz.min.css and kendo.dataviz.default.min.css, in the head section. These files are used in styling some of the parts of a stock history chart. How to do it… A Stock chart is made up of two charts: a pane that shows you the stock history and another pane that is used to navigate through the chart by changing the date range. The stock price for a particular stock on a day can be denoted by the following five attributes: Open: This shows you the value of the stock when the trading starts for the day Close: This shows you the value of the stock when the trading closes for the day High: This shows you the highest value the stock was able to attain on the day Low: This shows you the lowest value the stock reached on the day Volume: This shows you the total number of shares of that stock traded on the day Let's assume that a service returns this data in the following format: [ { "Date" : "2013/01/01", "Open" : 40.11, "Close" : 42.34, "High" : 42.5, "Low" : 39.5, "Volume": 10000 } . . . ] We will use the preceding data to create a Stock chart. The kendoStockChart function is used to create a Stock chart, and it is configured with a set of options similar to the area chart or Column chart. In addition to the series data, you can specify the navigator option to show a navigation pane below the chart that contains the entire stock history: $("#chart").kendoStockChart({ title: { text: 'Stock history' }, dataSource: { transport: { read: '/services/stock?q=ADBE' } }, dateField: "Date", series: [{ type: "candlestick", openField: "Open", closeField: "Close", highField: "High", lowField: "Low" }], navigator: { series: { type: 'area', field: 'Volume' } } }); In the preceding code snippet, the DataSource object refers to the remote service that would return the stock data for a set of days. The series option specifies the series type as candlestick; a candlestick chart is used here to indicate the stock price for a particular day. The mappings for openField, closeField, highField, and lowField are specified; they will be used in plotting the chart and also to show a tooltip when the user hovers over it. The navigator option is specified to create an area chart, which uses volume data to plot the chart. The dateField option is used to specify the mapping between the date fields in the chart and the one in the response. How it works… When you load the page, you will see two panes being shown; the navigator is below the main chart. By default, the chart displays data for all the dates in the DataSource object, as shown in the following screenshot: In the preceding screenshot, a candlestick chart is created and it shows you the stock price over a period of time. Also, notice that in the navigator pane, all date ranges are selected by default, and hence, they are reflected in the chart (candlestick) as well. When you hover over the series, you will notice that the stock quote for the selected date is shown. This includes the date and other fields such as Open, High, Low, and Close. The area of the chart is adjusted to show you the stock price for various dates such that the dates are evenly distributed. 
In the previous case, the dates range from January 1, 2013 to January 31, 2013. However, when you hover over the series, you will notice that some of the dates are omitted. To overcome this, you can either increase the width of the chart area or use the navigator to reduce the date range. The former option is not advisable if the date range spans across several months and years. To reduce the date range in the navigator, move the two date range selectors towards each other to narrow down the dates, as shown in the following screenshot:

When you try to narrow down the dates, you will see a tooltip in the chart, indicating the date range that you are trying to select. The candlestick chart is adjusted to show you the stock price for the selected date range. Also, notice that the opacity of the selected date range in the navigator remains the same while the rest of the area's opacity is reduced. Once the date range is selected, the selected pane can be moved in the navigator.

There's more…

There are several options available to you to customize the behavior and the look and feel of the Stock Chart widget.

Specifying the date range in the navigator when initializing the chart

By default, all date ranges in the chart are selected and the user will have to narrow them down in the navigator pane. When you work with a large dataset, you will want to show the stock data for a specific range of dates when the chart is rendered. To do this, specify the select option in navigator:

navigator: {
  series: {
    type: 'area',
    field: 'Volume'
  },
  select: {
    from: '2013/01/07',
    to: '2013/01/14'
  }
}

In the preceding code snippet, the from and to date ranges are specified. Now, when you render the page, you will see that the same dates are selected in the navigator pane.

Customizing the look and feel of the Stock Chart widget

There are various options available to customize the navigator pane in the Stock Chart widget. Let's increase the height of the pane and also include a title text for it:

navigator: {
  . .
  pane: {
    height: '50px',
    title: {
      text: 'Stock Volume'
    }
  }
}

Now when you render the page, you will see that the title and height of the navigator pane have been increased.

Using the Radial Gauge widget

The Radial Gauge widget allows you to build a dashboard-like application wherein you want to indicate a value that lies in a specific range. For example, a car's dashboard can contain a couple of Radial Gauge widgets that can be used to indicate the current speed and RPM.

How to do it…

To create a Radial Gauge widget, invoke the kendoRadialGauge function on the selected DOM element. A Radial Gauge widget contains some components, and it can be configured by providing options, as shown in the following code snippet:

$("#chart").kendoRadialGauge({
  scale: {
    startAngle: 0,
    endAngle: 180,
    min: 0,
    max: 180
  },
  pointer: {
    value: 20
  }
});

Here the scale option is used to configure the range for the Radial Gauge widget. The startAngle and endAngle options are used to indicate the angle at which the Radial Gauge widget's range should start and end. By default, their values are 30 and 210, respectively. The other two options, that is, min and max, are used to indicate the range values over which the value can be plotted. The pointer option is used to indicate the current value in the Radial Gauge widget. There are several options available to configure the Radial Gauge widget; these include positioning of the labels and configuring the look and feel of the widget.
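The recipe does not show the markup the widget attaches to. Assuming a plain placeholder div with the id chart (matching the selector used above), a minimal page fragment might look like this:

<!-- Minimal sketch: the gauge only needs a placeholder element.
     jQuery, the Kendo DataViz script, and the dataviz CSS files from the
     "Getting started" section are assumed to be included already. -->
<div id="chart"></div>
<script>
  $(function () {
    $("#chart").kendoRadialGauge({
      scale: { startAngle: 0, endAngle: 180, min: 0, max: 180 },
      pointer: { value: 20 }
    });
  });
</script>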
How it works…

When you render the page, you will see a Radial Gauge widget that shows you the scale from 0 to 180 and the pointer pointing to the value 20. Here, the values from 0 to 180 are evenly distributed, that is, the major ticks are in terms of 20. There are 10 minor ticks, that is, ticks between two major ticks. The widget shows values in the clockwise direction. Also, the pointer value 20 is selected in the scale.

There's more…

The Radial Gauge widget can be customized to a great extent by including various options when initializing the widget.

Changing the major and minor unit values

Specify the majorUnit and minorUnit options in the scale:

scale: {
  startAngle: 0,
  endAngle: 180,
  min: 0,
  max: 180,
  majorUnit: 30,
  minorUnit: 10
}

The scale option specifies the majorUnit value as 30 (instead of the default 20) and minorUnit as 10. This will now add labels at every 30 units and show you two minor ticks between the two major ticks, each at a distance of 10 units, as shown in the following screenshot:

The ticks shown in the preceding screenshot can also be customized:

scale: {
  . .
  minorTicks: {
    size: 30,
    width: 1,
    color: 'green'
  },
  majorTicks: {
    size: 100,
    width: 2,
    color: 'red'
  }
}

Here, the size option is used to specify the length of the tick marker, width is used to specify the thickness of the tick, and the color option is used to change the color of the tick. Now when you render the page, you will see the changes for the major and minor ticks.

Changing the color of the radial using the ranges option

The scale attribute can include the ranges option to specify a radial color for the various ranges on the Radial Gauge widget:

scale: {
  . .
  ranges: [
    { from: 0, to: 60, color: '#00F' },
    { from: 60, to: 130, color: '#0F0' },
    { from: 130, to: 200, color: '#F00' }
  ]
}

In the preceding code snippet, the ranges array contains three objects that specify the color to be applied on the circumference of the widget. The from and to values are used to specify the range of tick values for which the color should be applied. Now when you render the page, you will see the Radial Gauge widget showing the colors for various ranges along the circumference of the widget, as shown in the following screenshot:

In the preceding screenshot, the startAngle and endAngle fields are changed to 10 and 250, respectively. The widget can be further customized by moving the labels outside. This can be done by specifying the labels attribute with position as outside. In the preceding screenshot, the labels are positioned outside, hence, the radial appears inside.

Updating the pointer value using a Slider widget

The pointer value is set when the Radial Gauge widget is initialized. It is possible to change the pointer value of the widget at runtime using a Slider widget. The changes in the Slider widget can be observed, and the pointer value of the Radial Gauge can be updated accordingly. Let's use the Radial Gauge widget. A Slider widget is created using an input element:

<input id="slider" value="0" />

The next step is to initialize the previously mentioned input element as a Slider widget:

$('#slider').kendoSlider({
  min: 0,
  max: 200,
  showButtons: false,
  smallStep: 10,
  tickPlacement: 'none',
  change: updateRadialGauge
});

The min and max values specify the range of values that can be set for the slider. The smallStep attribute specifies the minimum increment value of the slider. The change attribute specifies the function that should be invoked when the slider value changes.
The updateRadialGauge function should then update the value of the pointer in the Radial Gauge widget:

function updateRadialGauge() {
  $('#chart').data('kendoRadialGauge')
    .value($('#slider').val());
}

The function gets the instance of the widget and then sets its value to the value obtained from the Slider widget. Here, when the slider value is changed to 100, you will notice that it is reflected in the Radial Gauge widget.
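The same value() method can be driven from any source, not just a slider. As an illustration only (the timer and the random readings below are invented for the sketch), the pointer could be updated periodically to simulate live data:

// Illustrative sketch: push a new pseudo-random reading to the gauge
// every second, e.g. to mock a live sensor feed during development.
setInterval(function () {
  var gauge = $('#chart').data('kendoRadialGauge');
  var reading = Math.round(Math.random() * 200);  // pretend sensor value in [0, 200]
  gauge.value(reading);
}, 1000);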
Using Google Maps APIs with Knockout.js

Packt
22 Sep 2015
7 min read
This article by Adnan Jaswal, the author of the book KnockoutJS by Example, shows how to render a map in the application and allow users to place markers on it. Users will also be able to get directions between two addresses, both as a description and as a route on the map.

(For more resources related to this topic, see here.)

Placing markers on the map

This feature is about placing markers on the map for the selected addresses. To implement this feature, we will:

Update the address model to hold the marker
Create a method to place a marker on the map
Create a method to remove an existing marker
Register subscribers to trigger the removal of the existing markers when an address changes
Update the module to add a marker to the map

Let's get started by updating the address model. Open the MapsApplication module and locate the AddressModel variable. Add an observable to this model to hold the marker, like this:

/* generic model for address */
var AddressModel = function() {
  this.marker = ko.observable();
  this.location = ko.observable();
  this.streetNumber = ko.observable();
  this.streetName = ko.observable();
  this.city = ko.observable();
  this.state = ko.observable();
  this.postCode = ko.observable();
  this.country = ko.observable();
};

Next, we create a method that will create and place the marker on the map. This method should take the location and the address model as parameters. The method will also store the marker in the address model. Use the google.maps.Marker class to create and place the marker. Our implementation of this method looks similar to this:

/* method to place a marker on the map */
var placeMarker = function (location, value) {
  // create and place marker on the map
  var marker = new google.maps.Marker({
    position: location,
    map: map
  });
  //store the newly created marker in the address model
  value().marker(marker);
};

Now, create a method that checks for an existing marker in the address model and removes it from the map. Name this method removeMarker. It should look similar to this:

/* method to remove old marker from the map */
var removeMarker = function(address) {
  if(address != null) {
    address.marker().setMap(null);
  }
};

The next step is to register subscribers that will trigger when an address changes. We will use these subscribers to trigger the removal of the existing markers. We will use the beforeChange event of the subscribers so that we have access to the existing markers in the model. Add subscribers to the fromAddress and toAddress observables to trigger on the beforeChange event, and remove the existing markers on the trigger. To achieve this, I created a method called registerSubscribers. This method is called from the init method of the module. The method registers the two subscribers that trigger calls to removeMarker. Our implementation looks similar to this:

/* method to register subscribers */
var registerSubscribers = function () {
  //fire before from address is changed
  mapsModel.fromAddress.subscribe(function(oldValue) {
    removeMarker(oldValue);
  }, null, "beforeChange");

  //fire before to address is changed
  mapsModel.toAddress.subscribe(function(oldValue) {
    removeMarker(oldValue);
  }, null, "beforeChange");
};

We are now ready to bring the methods we created together and place a marker on the map. Create a method called updateAddress. This method should take two parameters: the place object and the value binding. The method should call populateAddress to extract and populate the address model, and placeMarker to place a new marker on the map.
Our implementation looks similar to this:

/* method to update the address model */
var updateAddress = function(place, value) {
  populateAddress(place, value);
  placeMarker(place.geometry.location, value);
};

Call the updateAddress method from the event listener in the addressAutoComplete custom binding:

google.maps.event.addListener(autocomplete, 'place_changed', function() {
  var place = autocomplete.getPlace();
  console.log(place);
  updateAddress(place, value);
});

Open the application in your browser. Select from and to addresses. You should now see markers appear for the two selected addresses. In our browser, the application looks similar to the following screenshot:

Displaying a route between the markers

The last feature of the application is to draw a route between the two address markers. To implement this feature, we will:

Create and initialize the directions service
Request routing information from the directions service and draw the route
Update the view to add a button to get directions

Let's get started by creating and initializing the directions service. We will use the google.maps.DirectionsService class to get the routing information and the google.maps.DirectionsRenderer class to draw the route on the map. Create two attributes in the MapsApplication module: one for the directions service and the other for the directions renderer:

/* the directions service */
var directionsService;

/* the directions renderer */
var directionsRenderer;

Next, create a method to create and initialize the preceding attributes:

/* initialise the direction service and display */
var initDirectionService = function () {
  directionsService = new google.maps.DirectionsService();
  directionsRenderer = new google.maps.DirectionsRenderer({suppressMarkers: true});
  directionsRenderer.setMap(map);
};

Call this method from the mapPanel custom binding handler after the map has been created and centered. The updated mapPanel custom binding should look similar to this:

/* custom binding handler for maps panel */
ko.bindingHandlers.mapPanel = {
  init: function(element, valueAccessor){
    map = new google.maps.Map(element, {
      zoom: 10
    });
    centerMap(localLocation);
    initDirectionService();
  }
};

The next step is to create a method that will build and fire a request to the directions service to fetch the direction information. The direction information will then be used by the directions renderer to draw the route on the map. Our implementation of this method looks similar to this:

/* method to get directions and display route */
var getDirections = function () {
  //create request for directions
  var routeRequest = {
    origin: mapsModel.fromAddress().location(),
    destination: mapsModel.toAddress().location(),
    travelMode: google.maps.TravelMode.DRIVING
  };

  //fire request to route based on request
  directionsService.route(routeRequest, function(response, status) {
    if (status == google.maps.DirectionsStatus.OK) {
      directionsRenderer.setDirections(response);
    } else {
      console.log("No directions returned ...");
    }
  });
};

We create a routing request in the first part of the method. The request object consists of origin, destination, and travelMode. The origin and destination values are set to the locations of the from and to addresses. The travelMode is set to google.maps.TravelMode.DRIVING, which, as the name suggests, specifies that we require a driving route. Add the getDirections method to the return statement of the module, as we will bind it to a button in the view.
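The recipe renders the route graphically; the introduction also promised a textual description, and the same DirectionsResult carries step-by-step instructions. Here is a sketch of how they could be surfaced, assuming a hypothetical directions observable array on mapsModel that the view would render with a foreach binding:

// Sketch only: surface the turn-by-turn text alongside the drawn route.
// "directions" is a hypothetical ko.observableArray on mapsModel, not part
// of the recipe; the response structure is Google's standard DirectionsResult.
// This would go inside the directionsService.route callback, after
// directionsRenderer.setDirections(response):
var steps = response.routes[0].legs[0].steps;
var instructions = [];
for (var i = 0; i < steps.length; i++) {
  instructions.push(steps[i].instructions);  // each entry is an HTML snippet
}
mapsModel.directions(instructions);  // the view could render this with foreach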
One last step before we can work on the view is to clear the route on the map when the user selects a new address. This can be achieved by adding an instruction to clear the route information in the subscribers we registered earlier. Update the subscribers in the registerSubscribers method to clear the routes on the map:

/* method to register subscribers */
var registerSubscribers = function () {
  //fire before from address is changed
  mapsModel.fromAddress.subscribe(function(oldValue) {
    removeMarker(oldValue);
    directionsRenderer.set('directions', null);
  }, null, "beforeChange");

  //fire before to address is changed
  mapsModel.toAddress.subscribe(function(oldValue) {
    removeMarker(oldValue);
    directionsRenderer.set('directions', null);
  }, null, "beforeChange");
};

The last step is to update the view. Open the view and add a button under the address input components. Add a click binding to the button and bind it to the getDirections method of the module. Add an enable binding to make the button clickable only after the user has selected the two addresses. The button should look similar to this:

<button type="button" class="btn btn-default"
        data-bind="enable: MapsApplication.mapsModel.fromAddress &&
                   MapsApplication.mapsModel.toAddress,
                   click: MapsApplication.getDirections">
  Get Directions
</button>

Open the application in your browser and select the From address and To address. The address details and markers should appear for the two selected addresses. Click on the Get Directions button. You should see the route drawn on the map between the two markers. In our browser, the application looks similar to the following screenshot:

Summary

In this article, we walked through placing markers on the map and displaying the route between the markers.

The Kendo MVVM Framework

Packt
06 Sep 2013
19 min read
(For more resources related to this topic, see here.)

Understanding MVVM – basics

MVVM stands for Model (M), View (V), and View-Model (VM). It is part of a family of design patterns related to system architecture that separate responsibilities into distinct units. Some other related patterns are Model-View-Controller (MVC) and Model-View-Presenter (MVP). These differ on what each portion of the framework is responsible for, but they all attempt to manage complexity through the same underlying design principles. Without going into unnecessary details here, suffice it to say that these patterns are good for developing reliable and reusable code and they are something that you will undoubtedly benefit from if you have implemented them properly. Fortunately, the good JavaScript MVVM frameworks make it easy by wiring up the components for you and letting you focus on the code instead of the "plumbing".

In the MVVM pattern for JavaScript through Kendo UI, you will need to create a definition for the data that you want to display and manipulate (the Model), the HTML markup that structures your overall web page (the View), and the JavaScript code that handles user input, reacts to events, and transforms the static markup into dynamic elements (the View-Model). Another way to put it is that you will have data (Model), presentation (View), and logic (View-Model).

In practice, the Model is the most loosely-defined portion of the MVVM pattern and is not always even present as a unique entity in the implementation. The View-Model can assume the role of both Model and View-Model by directly containing the Model data properties within itself, instead of referencing them as a separate unit. This is acceptable and is also seen within ASP.NET MVC when a View uses the ViewBag or the ViewData collections instead of referencing a strongly-typed Model class. Don't let it bother you if the Model isn't as well defined as the View-Model and the View. The implementation of any pattern should be filtered down to what actually makes sense for your application.

Simple data binding

As an introductory example, consider that you have a web page that needs to display a table of data, and also provide the users with the ability to interact with that data, by clicking specifically on a single row or element. The data is dynamic, so you do not know beforehand how many records will be displayed. Also, any change should be reflected immediately on the page instead of waiting for a full page refresh from the server. How do you make this happen?

A traditional approach would involve using special server-side controls that can dynamically create tables from a data source and can even wire up some JavaScript interactivity. The problem with this approach is that it usually requires some complicated extra communication between the server and the web browser either through "view state", hidden fields, or long and ugly query strings. Also, the output from these special controls is rarely easy to customize or manipulate in significant ways and reduces the options for how your site should look and behave.

Another choice would be to create special JavaScript functions to asynchronously retrieve data from an endpoint, generate HTML markup within a table, and then wire up events for buttons and links. This is a good solution, but requires a lot of coding and complexity, which means that it will likely take longer to debug and refine. It may also be beyond the skill set of a given developer without significant research.
The third option, available through a JavaScript MVVM like Kendo UI, strikes a balance between these two positions by reducing the complexity of the JavaScript but still providing powerful and simple data binding features inside of the page.

Creating the view

Here is a simple HTML page to show how a view basically works:

<!DOCTYPE html>
<html>
<head>
  <title>MVVM Demo 1</title>
  <script src="/Scripts/kendo/jquery.js"></script>
  <script src="/Scripts/kendo/kendo.all.js"></script>
  <link href="/Content/kendo/kendo.common.css" rel="stylesheet" />
  <link href="/Content/kendo/kendo.default.css" rel="stylesheet" />
  <style type="text/css">
    th { width: 135px; }
  </style>
</head>
<body>
  <table>
    <caption>People Data</caption>
    <thead>
      <tr>
        <th>Name</th>
        <th>Hair Color</th>
        <th>Favorite Food</th>
      </tr>
    </thead>
    <tbody data-template="row-template" data-bind="source: people"></tbody>
  </table>
</body>
</html>

Here we have a simple table element with three columns, but instead of the body containing any tr elements, there are some special HTML5 data-* attributes indicating that something special is going on here. These data-* attributes do nothing by themselves, but Kendo UI reads them (as you will see below) and interprets their values in order to link the View with the View-Model. The data-bind attribute indicates to Kendo UI that this element should be bound to a collection of objects called people. The data-template attribute tells Kendo UI that the people objects should be formatted using a Kendo UI template. Here is the code for the template:

<script id="row-template" type="text/x-kendo-template">
  <tr>
    <td data-bind="text: name"></td>
    <td data-bind="text: hairColor"></td>
    <td data-bind="text: favoriteFood"></td>
  </tr>
</script>

This is a simple template that defines a tr structure for each row within the table. The td elements also have a data-bind attribute on them so that Kendo UI knows to insert the value of a certain property as the "text" of the HTML element, which in this case means placing the value in between <td> and </td> as simple text on the page.

Creating the Model and View-Model

In order to wire this up, we need a View-Model that performs the data binding. Here is the View-Model code for this View:

<script type="text/javascript">
  var viewModel = kendo.observable({
    people: [
      { name: "John", hairColor: "Blonde", favoriteFood: "Burger" },
      { name: "Bryan", hairColor: "Brown", favoriteFood: "Steak" },
      { name: "Jennifer", hairColor: "Brown", favoriteFood: "Salad" }
    ]
  });
  kendo.bind($("body"), viewModel);
</script>

A Kendo UI View-Model is declared through a call to kendo.observable(), which creates an observable object that is then used for the data binding within the View. An observable object is a special object that wraps a normal JavaScript variable with events that fire any time the value of that variable changes. These events notify the MVVM framework to update any data bindings that are using that variable's value, so that they can update immediately and reflect the change. These data bindings also work both ways, so that if a field bound to an observable object variable is changed, the variable bound to that field is also changed in real time.

In this case, I created an array called people that contains three objects with properties about some people. This array, then, operates as the Model in this example since it contains the data and the definition of how the data is structured.
At the end of this code sample, you can see the call to kendo.bind($("body"), viewModel), which is how Kendo UI actually performs its MVVM wiring. I passed a jQuery selector for the body tag to the first parameter since this viewModel object applies to the full body of my HTML page, not just a portion of it.

With everything combined, here is the full source for this simplified example:

<!DOCTYPE html>
<html>
<head>
  <title>MVVM Demo 1</title>
  <script src="/Scripts/kendo/jquery.js"></script>
  <script src="/Scripts/kendo/kendo.all.js"></script>
  <link href="/Content/kendo/kendo.common.css" rel="stylesheet" />
  <link href="/Content/kendo/kendo.default.css" rel="stylesheet" />
  <style type="text/css">
    th { width: 135px; }
  </style>
</head>
<body>
  <table>
    <caption>People Data</caption>
    <thead>
      <tr>
        <th>Name</th>
        <th>Hair Color</th>
        <th>Favorite Food</th>
      </tr>
    </thead>
    <tbody data-template="row-template" data-bind="source: people"></tbody>
  </table>

  <script id="row-template" type="text/x-kendo-template">
    <tr>
      <td data-bind="text: name"></td>
      <td data-bind="text: hairColor"></td>
      <td data-bind="text: favoriteFood"></td>
    </tr>
  </script>

  <script type="text/javascript">
    var viewModel = kendo.observable({
      people: [
        { name: "John", hairColor: "Blonde", favoriteFood: "Burger" },
        { name: "Bryan", hairColor: "Brown", favoriteFood: "Steak" },
        { name: "Jennifer", hairColor: "Brown", favoriteFood: "Salad" }
      ]
    });
    kendo.bind($("body"), viewModel);
  </script>
</body>
</html>

Here is a screenshot of the page in action. Note how the data from the JavaScript people array is populated into the table automatically:

Even though this example contains a Model, a View, and a View-Model, all three units appear in the same HTML file. You could separate the JavaScript into other files, of course, but it is also acceptable to keep them together like this. Hopefully you are already seeing what sort of things this MVVM framework can do for you.

Observable data binding

Binding data into your HTML web page (View) using declarative attributes is great, and very useful, but the MVVM framework offers some much more significant functionality that we didn't see in the last example. Instead of simply attaching data to the View and leaving it at that, the MVVM framework maintains a running copy of all of the View-Model's properties, and keeps references to those properties up to date in real time. This is why the View-Model is created with a function called "observable". The properties inside, being observable, report changes back up the chain so that the data-bound fields always reflect the latest data. Let's see some examples.

Adding data dynamically

Building on the example we just saw, add this horizontal rule and form just below the table in the HTML page:

<hr />
<form>
  <header>Add a Person</header>
  <input type="text" name="personName" placeholder="Name" data-bind="value: personName" /><br />
  <input type="text" name="personHairColor" placeholder="Hair Color" data-bind="value: personHairColor" /><br />
  <input type="text" name="personFavFood" placeholder="Favorite Food" data-bind="value: personFavFood" /><br />
  <button type="button" data-bind="click: addPerson">Add</button>
</form>

This adds a form to the page so that a user can enter data for a new person that should appear in the table. Note that we have added some data-bind attributes, but this time we are binding the value of the input fields, not the text.
Note also that we have added a data-bind attribute to the button at the bottom of the form that binds the click event of that button with a function inside our View-Model. By binding the click event to the addPerson JavaScript method, the addPerson method will be fired every time this button is clicked. These bindings keep the value of those input fields linked with the View-Model object at all times. If the value in one of these input fields changes, such as when a user types something in the box, the View-Model object will immediately see that change and update its properties to match; it will also update any areas of the page that are bound to the value of that property so that they match the new data as well.

The binding for the button is special because it allows the View-Model object to attach its own event handler to the click event for this element. Binding an event handler to an event is nothing special by itself, but it is important to do it this way (through the data-bind attribute) so that the specific running View-Model instance inside of the page has attached one of its functions to this event, giving the code inside the event handler access to this specific View-Model's data properties and values. It also allows for a very specific context to be passed to the event that would be very hard to access otherwise.

Here is the code I added to the View-Model just below the people array. The first three properties in this example are what make up the Model; they contain the data that is observed and bound to the rest of the page:

personName: "",      // Model property
personHairColor: "", // Model property
personFavFood: "",   // Model property
addPerson: function () {
  this.get("people").push({
    name: this.get("personName"),
    hairColor: this.get("personHairColor"),
    favoriteFood: this.get("personFavFood")
  });
  this.set("personName", "");
  this.set("personHairColor", "");
  this.set("personFavFood", "");
}

The first several properties you see are the same properties that we are binding to in the input form above. They start with an empty value because the form should not have any values when the page is first loaded. It is still important to declare these empty properties inside the View-Model so that their value can be tracked when it changes.

The function after the data properties, addPerson, is what we have bound to the click event of the button in the input form. Here in this function we are accessing the people array and adding a new record to it based on what the user has supplied in the form fields. Notice that we have to use the this.get() and this.set() functions to access the data inside of our View-Model. This is important because the properties in this View-Model are special observable properties, so accessing their values directly may not give you the results you would expect.

The most significant thing that you should notice about the addPerson function is that it is interacting with the data on the page through the View-Model properties. It is not using jQuery, document.querySelector, or any other DOM interaction to read the value of the elements! Since we declared a data-bind attribute on the values of the input elements to the properties of our View-Model, we can always get the value from those elements by accessing the View-Model itself. The values are tracked at all times. This allows us to both retrieve and then change those View-Model properties inside the addPerson function, and the HTML page will show the changes right as they happen.
By calling this.set() on the properties and changing their values to an empty string, the HTML page will clear the values that the user just typed and added to the table. Once again, we change the View-Model properties without needing access to the HTML ourselves. Here is the complete source of this example:

<!DOCTYPE html>
<html>
<head>
  <title>MVVM Demo 2</title>
  <script src="/Scripts/kendo/jquery.js"></script>
  <script src="/Scripts/kendo/kendo.all.js"></script>
  <link href="/Content/kendo/kendo.common.css" rel="stylesheet" />
  <link href="/Content/kendo/kendo.default.css" rel="stylesheet" />
  <style type="text/css">
    th { width: 135px; }
  </style>
</head>
<body>
  <table>
    <caption>People Data</caption>
    <thead>
      <tr>
        <th>Name</th>
        <th>Hair Color</th>
        <th>Favorite Food</th>
      </tr>
    </thead>
    <tbody data-template="row-template" data-bind="source: people"></tbody>
  </table>
  <hr />
  <form>
    <header>Add a Person</header>
    <input type="text" name="personName" placeholder="Name" data-bind="value: personName" /><br />
    <input type="text" name="personHairColor" placeholder="Hair Color" data-bind="value: personHairColor" /><br />
    <input type="text" name="personFavFood" placeholder="Favorite Food" data-bind="value: personFavFood" /><br />
    <button type="button" data-bind="click: addPerson">Add</button>
  </form>

  <script id="row-template" type="text/x-kendo-template">
    <tr>
      <td data-bind="text: name"></td>
      <td data-bind="text: hairColor"></td>
      <td data-bind="text: favoriteFood"></td>
    </tr>
  </script>

  <script type="text/javascript">
    var viewModel = kendo.observable({
      people: [
        { name: "John", hairColor: "Blonde", favoriteFood: "Burger" },
        { name: "Bryan", hairColor: "Brown", favoriteFood: "Steak" },
        { name: "Jennifer", hairColor: "Brown", favoriteFood: "Salad" }
      ],
      personName: "",
      personHairColor: "",
      personFavFood: "",
      addPerson: function () {
        this.get("people").push({
          name: this.get("personName"),
          hairColor: this.get("personHairColor"),
          favoriteFood: this.get("personFavFood")
        });
        this.set("personName", "");
        this.set("personHairColor", "");
        this.set("personFavFood", "");
      }
    });
    kendo.bind($("body"), viewModel);
  </script>
</body>
</html>

And here is a screenshot of the page in action. You will see that one additional person has been added to the table by filling out the form. Try it out yourself to see the immediate interaction that you get with this code:

Using observable properties in the View

We just saw how simple it is to add new data to observable collections in the View-Model, and how this causes any data-bound elements to immediately show that new data. Let's add some more functionality to illustrate working with individual elements and see how their observable values can update content on the page.

To demonstrate this new functionality, I have added some columns to the table:

<table>
  <caption>People Data</caption>
  <thead>
    <tr>
      <th>Name</th>
      <th>Hair Color</th>
      <th>Favorite Food</th>
      <th></th>
      <th>Live Data</th>
    </tr>
  </thead>
  <tbody data-template="row-template" data-bind="source: people"></tbody>
</table>

The first new column has no heading text but will contain a button on the page for each of the table rows. The second new column will display the value of the "live data" in the View-Model for each of the objects displayed in the table.
Here is the updated row template:

<script id="row-template" type="text/x-kendo-template">
  <tr>
    <td><input type="text" data-bind="value: name" /></td>
    <td><input type="text" data-bind="value: hairColor" /></td>
    <td><input type="text" data-bind="value: favoriteFood" /></td>
    <td><button type="button" data-bind="click: deletePerson">Delete</button></td>
    <td><span data-bind="text: name"></span>&nbsp;-&nbsp;
        <span data-bind="text: hairColor"></span>&nbsp;-&nbsp;
        <span data-bind="text: favoriteFood"></span></td>
  </tr>
</script>

Notice that I have replaced all of the simple text data-bind attributes with input elements and value data-bind attributes. I also added a button with a click data-bind attribute and a column that displays the text of the three properties so that you can see the observable behavior in real time. The View-Model gets a new method for the delete button:

deletePerson: function (e) {
  var person = e.data;
  var people = this.get("people");
  var index = people.indexOf(person);
  people.splice(index, 1);
}

When this function is called through the binding that Kendo UI has created, it passes an event argument, here called e, into the function that contains a data property. This data property is a reference to the model object that was used to render the specific row of data. In this function, I created a person variable for a reference to the person in this row and a reference to the people array; we then use the index of this person to splice it out of the array. When you click on the Delete button, you can observe the table reacting immediately to the change.

Here is the full source code of the updated template and View-Model:

<script id="row-template" type="text/x-kendo-template">
  <tr>
    <td><input type="text" data-bind="value: name" /></td>
    <td><input type="text" data-bind="value: hairColor" /></td>
    <td><input type="text" data-bind="value: favoriteFood" /></td>
    <td><button type="button" data-bind="click: deletePerson">Delete</button></td>
    <td><span data-bind="text: name"></span>&nbsp;-&nbsp;
        <span data-bind="text: hairColor"></span>&nbsp;-&nbsp;
        <span data-bind="text: favoriteFood"></span></td>
  </tr>
</script>

<script type="text/javascript">
  var viewModel = kendo.observable({
    people: [
      { name: "John", hairColor: "Blonde", favoriteFood: "Burger" },
      { name: "Bryan", hairColor: "Brown", favoriteFood: "Steak" },
      { name: "Jennifer", hairColor: "Brown", favoriteFood: "Salad" }
    ],
    personName: "",
    personHairColor: "",
    personFavFood: "",
    addPerson: function () {
      this.get("people").push({
        name: this.get("personName"),
        hairColor: this.get("personHairColor"),
        favoriteFood: this.get("personFavFood")
      });
      this.set("personName", "");
      this.set("personHairColor", "");
      this.set("personFavFood", "");
    },
    deletePerson: function (e) {
      var person = e.data;
      var people = this.get("people");
      var index = people.indexOf(person);
      people.splice(index, 1);
    }
  });
  kendo.bind($("body"), viewModel);
</script>

Here is a screenshot of the new page:

Click on the Delete button to see an entry disappear. You can also see that I have added a new person to the table and that I have made changes in the input boxes of the table, and that those changes immediately show up on the right-hand side. This indicates that the View-Model is keeping track of the live data and updating its bindings accordingly.
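As a closing illustration (not part of the original example), the observable behavior described above can also be watched programmatically: kendo.data.ObservableObject instances expose a change event that reports which property was modified. A minimal sketch:

// Illustrative sketch: log every change the observable View-Model reports.
// "change" is the standard event on kendo.data.ObservableObject instances.
viewModel.bind("change", function (e) {
  console.log("Property changed: " + e.field);
});

// Reads and writes still go through get()/set(); setting a value fires
// the change handler above and refreshes any bound elements on the page.
viewModel.set("personName", "Alice");     // logs: Property changed: personName
console.log(viewModel.get("personName")); // "Alice"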