
How-To Tutorials - Web Development


The Importance of Securing Web Services

Packt
23 Jul 2014
10 min read
In the upcoming sections of this article, we will briefly explain several concepts about the importance of securing web services.

The importance of security

Security management is one of the main aspects to consider when designing applications. Neither the functionality nor the information of an organization can be exposed to all users without any kind of restriction. Consider a human resources application that allows you to look up employee salaries: if the company manager needs to know the salary of one of the employees, that is not a problem. But in the same context, imagine that one of the employees wants to know the salary of their colleagues; if access to this information is completely open, it could generate problems among employees with different salaries.

Security management options

Java provides several options for security management. We will explain some of them here and demonstrate how to implement them. All authentication methods are essentially based on the delivery of credentials from the client to the server. There are several methods to perform this:

- BASIC authentication
- DIGEST authentication
- CLIENT CERT authentication
- Using API keys

Security management in applications built with Java, including those with RESTful web services, always relies on JAAS.

Basic authentication by providing user credentials

This is possibly one of the most widely used techniques in all kinds of applications. Before gaining access to the application's functionality, the user is asked to enter a username and password, both of which are validated to verify that the credentials are correct (that is, they belong to an application user). We are 99 percent sure you have used this technique at least once, maybe through a customized mechanism, or, if you used the JEE platform, probably through JAAS. This kind of control is known as basic authentication.

In order to have a working example, let's start our application server, JBoss AS 7, go to the bin directory, and execute the add-user.bat file (the .sh file for UNIX users). Finally, we will create a new user. As a result, we will have a new user in the JBOSS_HOME/standalone/configuration/application-users.properties file.

JBoss already ships with a default security domain called other, which uses the information stored in the file we just mentioned in order to authenticate. Now we are going to configure the application to use this security domain. Inside the WEB-INF folder of the resteasy-examples project, let's create a file named jboss-web.xml with the following content:

<?xml version="1.0" encoding="UTF-8"?>
<jboss-web>
  <security-domain>other</security-domain>
</jboss-web>

Alright, let's configure the web.xml file in order to add the security constraints.
The following block of code shows the complete web.xml with the security constraints added:

<?xml version="1.0" encoding="UTF-8"?>
<web-app version="3.0"
  xmlns="http://java.sun.com/xml/ns/javaee"
  xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
  xsi:schemaLocation="http://java.sun.com/xml/ns/javaee
    http://java.sun.com/xml/ns/javaee/web-app_3_0.xsd">
  <!-- Roles -->
  <security-role>
    <description>Any role</description>
    <role-name>*</role-name>
  </security-role>
  <!-- Resource / Role Mapping -->
  <security-constraint>
    <display-name>Area secured</display-name>
    <web-resource-collection>
      <web-resource-name>protected_resources</web-resource-name>
      <url-pattern>/services/*</url-pattern>
      <http-method>GET</http-method>
      <http-method>POST</http-method>
    </web-resource-collection>
    <auth-constraint>
      <description>User with any role</description>
      <role-name>*</role-name>
    </auth-constraint>
  </security-constraint>
  <login-config>
    <auth-method>BASIC</auth-method>
  </login-config>
</web-app>

From a terminal, let's go to the home folder of the resteasy-examples project and execute mvn jboss-as:redeploy. Now we are going to test our web service as we did earlier by using SoapUI, performing a request with the POST method to the service URL. SoapUI shows us an HTTP 401 error; this means that the request wasn't authorized, because we performed the request without delivering the credentials to the server.
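Once the credentials are supplied, the request succeeds. As a minimal sketch of a client that delivers BASIC credentials (the endpoint path and the username:password pair are placeholders for whatever you created with add-user.bat, GET is used for brevity, and Java 8's java.util.Base64 is assumed):

import java.net.HttpURLConnection;
import java.net.URL;
import java.util.Base64;

public class BasicAuthClient {
    public static void main(String[] args) throws Exception {
        // Hypothetical service URL; replace with your deployed endpoint
        URL url = new URL("http://localhost:8080/resteasy-examples/services/customers");

        // The user created with add-user.bat
        String credentials = "username:password";
        String encoded = Base64.getEncoder()
                .encodeToString(credentials.getBytes("UTF-8"));

        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        conn.setRequestMethod("GET");
        // BASIC authentication: credentials travel Base64-encoded, not encrypted
        conn.setRequestProperty("Authorization", "Basic " + encoded);

        // 200 means the credentials were accepted; 401 means they were not
        System.out.println("HTTP status: " + conn.getResponseCode());
    }
}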
Digest access authentication

This authentication method makes use of a hash function to encrypt the password entered by the user before sending it to the server. This makes it much safer than the BASIC authentication method, in which the user's password travels in plain text and can easily be read by anyone who intercepts it. To overcome such drawbacks, digest MD5 authentication applies a hash function to the combination of the username, the realm of the application security, and the password. As a result, we obtain an encrypted string that can hardly be interpreted by an intruder.

In order to perform what we just explained, we need to generate a password for our example user from those same parameters: username, realm, and password. Let's go into the JBOSS_HOME/modules/org/picketbox/main/ directory from a terminal and type the following:

java -cp picketbox-4.0.7.Final.jar org.jboss.security.auth.callback.RFC2617Digest username MyRealmName password

We will obtain the following result:

RFC2617 A1 hash: 8355c2bc1aab3025c8522bd53639c168

Through this process we obtain the encrypted password and use it in our password storage file (JBOSS_HOME/standalone/configuration/application-users.properties). We must replace the password for the user username in that file. We have to replace it because the old password doesn't contain the realm name information of the application.

Next, we have to modify the web.xml file: in the auth-method tag, change the value BASIC to DIGEST, and set the application realm name. All these changes are applied in the login-config tag, this way:

<login-config>
  <auth-method>DIGEST</auth-method>
  <realm-name>MyRealmName</realm-name>
</login-config>

Now, let's create a new security domain in JBoss so we can manage the DIGEST authentication mechanism. In the JBOSS_HOME/standalone/configuration/standalone.xml file, in the <security-domains> section, let's add the following entry:

<security-domain name="domainDigest" cache-type="default">
  <authentication>
    <login-module code="UsersRoles" flag="required">
      <module-option name="usersProperties" value="${jboss.server.config.dir}/application-users.properties"/>
      <module-option name="rolesProperties" value="${jboss.server.config.dir}/application-roles.properties"/>
      <module-option name="hashAlgorithm" value="MD5"/>
      <module-option name="hashEncoding" value="RFC2617"/>
      <module-option name="hashUserPassword" value="false"/>
      <module-option name="hashStorePassword" value="true"/>
      <module-option name="passwordIsA1Hash" value="true"/>
      <module-option name="storeDigestCallback" value="org.jboss.security.auth.callback.RFC2617Digest"/>
    </login-module>
  </authentication>
</security-domain>

Finally, in the application, change the security domain name in the jboss-web.xml file as shown in the following snippet:

<?xml version="1.0" encoding="UTF-8"?>
<jboss-web>
  <security-domain>java:/jaas/domainDigest</security-domain>
</jboss-web>

Now, restart the application server and redeploy the application on JBoss by executing the following command in the terminal:

mvn jboss-as:redeploy

Authentication through certificates

This is a mechanism in which a trust agreement is established between the server and the client through certificates. The certificates must be signed by an agency established to ensure that the certificate presented for authentication is legitimate; this agency is known as a certificate authority (CA). This security mechanism requires our application server to use HTTPS as the communication protocol, so we must enable HTTPS. Let's add a connector in the standalone.xml file; look for the following line:

<connector name="http"

Add the following block of code:

<connector name="https" protocol="HTTP/1.1" scheme="https" socket-binding="https" secure="true">
  <ssl password="changeit"
    certificate-key-file="${jboss.server.config.dir}/server.keystore"
    verify-client="want"
    ca-certificate-file="${jboss.server.config.dir}/server.truststore"/>
</connector>

Next, we add the security domain:

<security-domain name="RequireCertificateDomain">
  <authentication>
    <login-module code="CertificateRoles" flag="required">
      <module-option name="securityDomain" value="RequireCertificateDomain"/>
      <module-option name="verifier" value="org.jboss.security.auth.certs.AnyCertVerifier"/>
      <module-option name="usersProperties" value="${jboss.server.config.dir}/my-users.properties"/>
      <module-option name="rolesProperties" value="${jboss.server.config.dir}/my-roles.properties"/>
    </login-module>
  </authentication>
  <jsse keystore-password="changeit"
    keystore-url="file:${jboss.server.config.dir}/server.keystore"
    truststore-password="changeit"
    truststore-url="file:${jboss.server.config.dir}/server.truststore"/>
</security-domain>

As you can see, we need two files, my-users.properties and my-roles.properties; both are empty and located in the JBOSS_HOME/standalone/configuration path.
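The configuration above also expects server.keystore and server.truststore to exist in the configuration directory, and the article doesn't show how to create them. As a hedged sketch using the JDK's keytool (the alias, validity, and self-signed setup are assumptions suitable only for a test environment):

# Generate a self-signed key pair in the keystore
keytool -genkeypair -alias server -keyalg RSA -validity 365 -keystore server.keystore -storepass changeit

# Export the certificate and import it into the truststore
keytool -exportcert -alias server -keystore server.keystore -storepass changeit -file server.cer
keytool -importcert -alias server -file server.cer -keystore server.truststore -storepass changeit -noprompt

In production, the server certificate would instead be signed by a real CA, as described above.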
We are going to add the <user-data-constraint> tag in web.xml this way:

<security-constraint>
  ...
  <user-data-constraint>
    <transport-guarantee>CONFIDENTIAL</transport-guarantee>
  </user-data-constraint>
</security-constraint>

Then, change the authentication method to CLIENT-CERT:

<login-config>
  <auth-method>CLIENT-CERT</auth-method>
</login-config>

And finally, change the security domain in the jboss-web.xml file in the following way:

<?xml version="1.0" encoding="UTF-8"?>
<jboss-web>
  <security-domain>RequireCertificateDomain</security-domain>
</jboss-web>

Now, restart the application server and redeploy the application with Maven:

mvn jboss-as:redeploy

API keys

With the advent of cloud computing, it is not difficult to imagine applications that integrate with many others available in the cloud. It's easy to see how applications interact with Flickr, Facebook, Twitter, Tumblr, and so on through the use of API keys. This authentication method is used primarily when we need to authenticate from another application but do not want to access the private user data hosted in that application; on the contrary, if you do want to access that information, you must use OAuth.

Today it is very easy to get an API key: simply sign up with one of the many cloud providers and obtain credentials consisting of a key and a secret, which are needed to interact with the authenticating service provider. Keep in mind that when creating an API key, you accept the terms of the supplier, which clearly state what you can and cannot do, protecting the provider against abusive users trying to affect their services. The following chart shows how this authentication mechanism works.

Summary

In this article, we went through several models of authentication. We can apply them to any web service functionality we create. As you can see, it is important to choose the correct security management; otherwise, information is exposed and can easily be intercepted and used by third parties. Therefore, tread carefully.

Resources for Article:

Further resources on this subject:
RESTful Java Web Services Design [Article]
Debugging REST Web Services [Article]
RESTful Services JAX-RS 2.0 [Article]


Indexes

Packt
23 Jul 2014
8 min read
As a database administrator (DBA) or developer, one of your most important goals is to ensure that query times are consistent with the service-level agreement (SLA) and meet user expectations. Along with other performance enhancement techniques, creating indexes for your queries on underlying tables is one of the most effective and common ways to achieve this objective.

The indexes of underlying relational tables are very similar in purpose to the index section at the back of a book. For example, instead of flipping through each page of the book, you use the index section at the back to quickly find particular information or a topic within the book. In the same way, instead of scanning each individual row on the data page, SQL Server uses indexes to quickly find the data for a qualifying query. Therefore, by indexing an underlying relational table, you can significantly enhance the performance of your database. Indexing affects the processing speed for both OLTP and OLAP workloads and helps you achieve optimum query performance and response time.

The cost associated with indexes

SQL Server uses indexes to optimize overall query performance. However, there is also a cost associated with indexes: they slow down insert, update, and delete operations. Therefore, it is important to weigh the costs and benefits of indexes when you plan your indexing strategy.

How SQL Server uses indexes

A table that doesn't have a clustered index is stored in a set of data pages called a heap. Initially, the data in a heap is stored in the order in which rows are inserted into the table. However, the SQL Server Database Engine moves the data around the heap to store the rows efficiently. Therefore, you cannot predict the order of the rows in a heap, because data pages are not sequenced in any particular order. The only way to guarantee the order of the rows returned from a heap is to use the SELECT statement with the ORDER BY clause.

Access without an index

When you access the data, SQL Server first determines whether a suitable index is available for the submitted SELECT statement. If no suitable index is found, SQL Server retrieves the data by scanning the entire table. The database engine begins scanning at the physical beginning of the table and scans through the full table, page by page and row by row, looking for the qualifying data specified in the submitted SELECT statement. Then, it extracts and returns the rows that meet the criteria in the format specified in the submitted SELECT statement.

Access with an index

The process is improved when indexes are present. If an appropriate index is available, SQL Server uses it to locate the data. An index improves the search process by sorting data on key columns. The database engine begins scanning from the first page of the index and only scans those pages that potentially contain qualifying data, based on the index structure and key columns. Finally, it retrieves the data rows, or the pointers that contain the locations of the data rows, to allow direct row retrieval.

The structure of indexes

In SQL Server, all indexes except full-text, XML, memory-optimized, and columnstore indexes are organized as a balanced tree (B-tree).
This is because full-text indexes use their own engine to manage and query full-text catalogs, XML indexes are stored as internal SQL Server tables, memory-optimized indexes use the Bw-tree structure, and columnstore indexes utilize SQL Server in-memory technology.

In the B-tree structure, each page is called a node. The top page of the B-tree structure is called the root node. Non-leaf nodes, also referred to as intermediate levels, are hierarchical tree nodes that comprise the index sort order. Non-leaf nodes point to other non-leaf nodes one step below them in the B-tree hierarchy, until the leaf nodes are reached. Leaf nodes are at the bottom of the B-tree hierarchy. The following diagram illustrates the typical B-tree structure:

Index types

In SQL Server 2014, you can create several types of indexes. They are explored in the next sections.

Clustered indexes

A clustered index sorts table or view rows in the order based on the clustered index key column values. In short, a leaf node of a clustered index contains the data pages, and scanning them returns the actual data rows. Therefore, a table can have only one clustered index. Unless you explicitly specify a nonclustered index, SQL Server automatically creates a clustered index when you define a PRIMARY KEY constraint on a table.

When should you have a clustered index on a table? Although it is not mandatory to have a clustered index per table, according to the TechNet article Clustered Index Design Guidelines, with few exceptions, every table should have a clustered index defined on the column or columns used as follows:

- The table is large and does not have a nonclustered index. The presence of a clustered index improves performance, because without it, all rows of the table have to be read if any row needs to be found.
- A column or columns are frequently queried, and data is returned in sorted order. The presence of a clustered index on the sorting column or columns avoids a separate sort operation and returns the data in sorted order.
- A column or columns are frequently queried, and data is grouped together. As data must be sorted before it is grouped, the presence of a clustered index on the sorting column or columns avoids a separate sort operation.
- A column or columns are frequently used in queries to search data ranges in the table. The presence of a clustered index on the range column helps avoid sorting the entire table data.

Nonclustered indexes

Nonclustered indexes do not sort or store the data of the underlying table. This is because the leaf nodes of a nonclustered index are index pages that contain pointers to data rows. SQL Server automatically creates nonclustered indexes when you define a UNIQUE KEY constraint on a table. A table can have up to 999 nonclustered indexes.

You can use the CREATE INDEX statement to create clustered and nonclustered indexes; a brief sketch follows this paragraph. A detailed discussion on the CREATE INDEX statement and its parameters is beyond the scope of this article. For help with this, refer to the CREATE INDEX (Transact-SQL) article at http://msdn.microsoft.com/en-us/library/ms188783.aspx. SQL Server 2014 also supports new inline index creation syntax for standard, disk-based database tables, temp tables, and table variables. For more information, refer to the CREATE TABLE (SQL Server) article at http://msdn.microsoft.com/en-us/library/ms174979.aspx.
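As a minimal sketch of both statements (the table and column names here are hypothetical, not taken from the article):

-- Hypothetical example table
CREATE TABLE dbo.Employee
(
    EmployeeID INT NOT NULL,
    LastName   NVARCHAR(50) NOT NULL,
    Salary     MONEY NOT NULL
);
GO

-- A clustered index: the table rows themselves are sorted by EmployeeID
CREATE CLUSTERED INDEX CIX_Employee_EmployeeID
    ON dbo.Employee (EmployeeID ASC);
GO

-- A nonclustered index: a separate B-tree whose leaf nodes point to the rows
CREATE NONCLUSTERED INDEX IX_Employee_LastName
    ON dbo.Employee (LastName ASC);
GO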
Single-column indexes

As the name implies, single-column indexes are based on a single key column. You can define them as either clustered or nonclustered. You cannot drop the index key column or change the data type of the underlying table column without dropping the index first. Single-column indexes are useful for queries that search data based on a single column value.

Composite indexes

Composite indexes include two or more columns from the same table. You can define composite indexes as either clustered or nonclustered. You can use composite indexes when you have two or more columns that need to be searched together. You typically place the most selective key (the key with the highest degree of uniqueness) first in the key list. For example, examine the following query, which returns a list of account numbers and names from the Purchasing.Vendor table, where both the name and the account number start with the character A:

USE [AdventureWorks2012];
SELECT [AccountNumber], [Name]
FROM [Purchasing].[Vendor]
WHERE [AccountNumber] LIKE 'A%'
  AND [Name] LIKE 'A%';
GO

If you look at the execution plan of this query without modifying the existing indexes of the table, you will notice that the SQL Server query optimizer uses the table's clustered index to retrieve the query result, as shown in the following screenshot:

As our search is based on the Name and AccountNumber columns, the presence of the following composite index will improve the query execution time significantly:

USE [AdventureWorks2012];
GO
CREATE NONCLUSTERED INDEX [AK_Vendor_AccountNumber_Name]
ON [Purchasing].[Vendor] ([AccountNumber] ASC, [Name] ASC)
ON [PRIMARY];
GO

Now, examine the execution plan of the query once again, after creating the previous composite index on the Purchasing.Vendor table, as shown in the following screenshot:

As you can see, SQL Server performs a seek operation on this composite index to retrieve the qualifying data.

Summary

In this article, we learned what indexes are, how SQL Server uses indexes, the structure of indexes, and some of the types of indexes.

Resources for Article:

Further resources on this subject:
Easily Writing SQL Queries with Spring Python [article]
Manage SQL Azure Databases with the Web Interface 'Houston' [article]
VB.NET Application with SQL Anywhere 10 database: Part 1 [article]


Tuning Solr JVM and Container

Packt
22 Jul 2014
6 min read
Some of these JVMs are commercially optimized for production usage; you can find comparison studies at http://dior.ics.muni.cz/~makub/java/speed.html. Some JVM implementations provide server versions, which are more appropriate for Solr than the normal ones.

Since Solr runs in a JVM, all the standard JVM optimizations for applications apply to it. It starts with choosing the right heap size for your JVM. The heap size depends upon the following aspects:

- Use of facets and sorting options
- Size of the Solr index
- Update frequency on Solr
- Solr cache

The heap size for the JVM can be controlled by the following parameters:

-Xms: the minimum heap size required during JVM initialization, that is, for the container
-Xmx: the maximum heap size up to which the JVM or J2EE container can consume memory

Deciding heap size

The heap is a major factor when optimizing the performance of any JVM-based system. The JVM uses the heap to store its objects, as well as its own content. Poor allocation of JVM heap results in a Java heap space OutOfMemoryError thrown at runtime, crashing the application. When the heap is allocated too little memory, the application takes longer to initialize, and the execution speed of the Java process slows down at runtime. Similarly, a higher heap size may underutilize expensive memory, which could otherwise have been used by other applications.

The JVM starts with its initial heap size, and as demand grows, it tries to resize the heap to accommodate new space requirements. If a demand for memory crosses the maximum limit, the JVM throws an Out of Memory exception. Objects that expire or are unused needlessly consume memory in the JVM; this memory can be taken back by releasing these objects through a process called garbage collection (GC).

Although it's tricky to find out whether you should increase or reduce the heap size, there are simple ways that can help you out. In a memory graph, typically, when you start the Solr server and run your first query, memory usage increases, and based on subsequent queries and memory size, the graph may increase or remain constant. When garbage collection is run automatically by the JVM container, it sharply brings the usage down. If it's difficult to trace GC execution from the memory graph, you can run Solr with the following additional parameters:

-Xloggc:<some file> -verbose:gc -XX:+PrintGCTimeStamps -XX:+PrintGCDetails

If you monitor the heap usage continuously, you will find a graph that increases and decreases (a sawtooth pattern); the increases are due to querying, which consistently demands more memory for your Solr cache, and the decreases are due to GC execution. In a running environment, the average heap size should not grow over time, and the number of GC runs should be less than the number of queries executed on Solr. If that's not the case, you will need more memory.

Features such as Solr faceting and sorting require more memory on top of a traditional search. If memory is unavailable, the operating system needs to perform hot swapping with the storage media, increasing the response time; users then experience huge latency while searching on large indexes. Many operating systems allow users to control the swapping of programs.
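Combining the heap settings with the GC logging flags discussed above, a start command could look like the following (the heap values and the Jetty start.jar launch are illustrative assumptions for a typical Solr 4.x example install, not recommendations):

java -Xms512m -Xmx2048m -Xloggc:gc.log -verbose:gc -XX:+PrintGCTimeStamps -XX:+PrintGCDetails -jar start.jar

Here -Xms and -Xmx bound the sawtooth described above, and gc.log records each collection so you can compare the number of GC runs against the number of queries served.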
How can we optimize the JVM?

Whenever a facet query is run in Solr, memory is used to store each unique element in the index for each field. So, for example, a search over a small set of facet values (a year from 1980 to 2014) will consume less memory than a search over a larger set of facet values, such as people's names (which vary from person to person). To reduce the memory usage, you may set the term index divisor to 2 (the default is 4) by setting the following in solrconfig.xml:

<indexReaderFactory name="IndexReaderFactory" class="solr.StandardIndexReaderFactory">
  <int name="setTermIndexDivisor">2</int>
</indexReaderFactory>

From Solr 4.x onwards, the ability to set the min and max (term index divisor) block size is no longer available. Setting the divisor to 2 will reduce the memory used for storing all the terms to half; however, it will double the seek time for terms and will have a small impact on your search runtime.

One of the causes of a large heap is the size of the index, so one solution is to introduce SolrCloud and distribute the large index into multiple shards. This will not reduce your memory requirement, but it will spread it across the cluster. You can look at some optimized GC parameters described at http://wiki.apache.org/solr/ShawnHeisey#GC_Tuning. Similarly, Oracle provides a GC tuning guide for advanced development stages at http://www.oracle.com/technetwork/java/javase/gc-tuning-6-140523.html. Additionally, you can look at common Solr performance problems at http://wiki.apache.org/solr/SolrPerformanceProblems.

Optimizing the JVM container

JVM containers serve user requests in threads, which enables the JVM to support concurrent sessions created for different users connecting at the same time. The concurrency can, however, be controlled to reduce the load on the search server. If you are using Apache Tomcat, you can modify the connector entries in server.xml to change the number of concurrent connections. Similarly, in Jetty, you can control the number of connections held by modifying jetty.xml, and for other containers, the corresponding configuration files can be changed appropriately.

Many containers provide a cache on top of the application to avoid server hits. This cache can be utilized for static pages such as the search page. Containers such as WebLogic provide a development mode versus a production mode. Typically, the development mode runs with 15 threads and a limited JDBC pool size by default, whereas for the production mode, this can be increased.

For tuning containers, besides standard optimization, the following container-specific performance-tuning guides should be followed:

Jetty: http://wiki.eclipse.org/Jetty/Howto/High_Load
Tomcat: http://www.mulesoft.com/tcat/tomcat-performance and http://javamaster.wordpress.com/2013/03/13/apache-tomcat-tuning-guide/
JBoss: https://access.redhat.com/site/documentation/en-US/JBoss_Enterprise_Application_Platform/5/pdf/Performance_Tuning_Guide/JBoss_Enterprise_Application_Platform-5-Performance_Tuning_Guide-en-US.pdf
WebLogic: http://docs.oracle.com/cd/E13222_01/wls/docs92/perform/WLSTuning.html
WebSphere: http://www.ibm.com/developerworks/websphere/techjournal/0909_blythe/0909_blythe.html

Apache Solr works best with the default container it ships with, Jetty, since it offers a smaller footprint compared to other containers such as JBoss and Tomcat, for which the memory requirement is a little higher.

Summary

In this article, we learned about tuning the JVM that Apache Solr runs on and the J2EE containers that host it.
Resources for Article:

Further resources on this subject:
Apache Solr: Spellchecker, Statistics, and Grouping Mechanism [Article]
Getting Started with Apache Solr [Article]
Apache Solr PHP Integration [Article]


Linking Dynamic Content from External Websites

Packt
22 Jul 2014
5 min read
Introduction to the YouTube API

YouTube provides three different APIs for a client application to access. The following figure shows the three different APIs provided by YouTube:

Configuring a YouTube API

In the Google Developers Console, we need to create a client project. We will create a new project called PacktYoutubeapi. The URL for the Google Developers Console is https://console.developers.google.com. The following screenshot shows the pop-up window that appears when you create a new client project in the Developers Console:

After the successful creation of the new client project, it will be available in the Console's project list. The following screenshot shows our new client project listed in the Developers Console:

There is an option available to enable access to the YouTube API for our application. The following screenshot shows the YouTube API listed in the Developers Console. By default, the status of this API is OFF for the application; to enable it, we need to toggle the STATUS button to ON. The following screenshot shows the status of the YouTube API, which is now ON for our application:

To access the YouTube API methods, we need to create an API key for our client application. You can find the option to create a public API key in the APIs & auth section. The following screenshot shows the Credentials subsection, where you can create an API key:

In the preceding screenshot, you can see a button to create a new API key. After clicking on this button, you are given some choices for the kind of API key to create, and after the successful creation of the key, it will be listed in the Credentials section. The following screenshot shows the API key generated for our application:

Searching for a YouTube video

In this section, we will learn about integrating a YouTube search for related videos. YouTube Data API version 3.0 is the new API for accessing YouTube data. It requires the API key that was created in the previous section. The main steps that we have to follow to perform a YouTube search are:

1. After adding the YouTube Search button, click on it to trigger the search process. The script reads the data-booktitle attribute to get the title, which serves as the keyword for the search. Check the following screenshot for the HTML markup showing the data-booktitle attribute:
2. The handler then creates an AJAX request to make an asynchronous call to the YouTube API, which returns a promise object.
3. After the successful completion of the AJAX call, the promise object is resolved.
4. Once the data is available, we fetch the jQuery template for the search results, compile it with a script function, link it to the search data returned by the AJAX call, and generate the HTML markup for rendering.

The base URL for a YouTube search uses the secure HTTPS protocol: https://www.googleapis.com/youtube/v3/search. It takes different parameters as input for the search and filter criteria. Two of the important parameters are part and fields.

The part parameter

The part parameter selects which resource components to retrieve from the YouTube API. It helps the application fetch only the resource components that it actually uses. The following figure shows some of the resource components:

The fields parameter

The fields parameter is used to filter out the exact fields that are needed by the client application. This is really helpful in reducing the size of the response. For example, fields=items(id,snippet(title)) will result in a small response footprint containing only an ID and a title.
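Putting the key and both parameters together, a minimal sketch of the search call might look like the following (the query value, maxResults, variable names, and the YOUR_API_KEY placeholder are illustrative assumptions; the endpoint and the part, fields, and key parameters are the ones described above):

var request = $.ajax({
  url: "https://www.googleapis.com/youtube/v3/search",
  dataType: "json",
  data: {
    part: "snippet",                     // resource components to retrieve
    q: "Learning jQuery",                // search keyword, e.g. the book title
    maxResults: 5,
    fields: "items(id,snippet(title))",  // trim the response to id and title
    key: "YOUR_API_KEY"                  // the public API key from the Developers Console
  }
});

// The returned promise resolves with the search data
request.done(function(data) {
  $.each(data.items, function(i, item) {
    console.log(item.snippet.title);
  });
});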
The YouTube button markup

We have added a button to our jQuery product template to display the search option in the product. The following code shows the updated template:

<script id="aProductTemplate" type="text/x-jquery-tmpl">
  <div class="ts-product panel panel-default">
    <div class="panel-head">
      <div class="fb-like" data-href="${url}" data-layout="button_count"
        data-action="like" data-show-faces="true" data-share="true">
      </div>
    </div>
    <div class="panel-body">
      <span class="glyphicon glyphicon-certificate ts-costicon">
        <label>${cost}$</label>
      </span>
      <img class="img-responsive" src="${url}">
      <h5>${title}</h5>
    </div>
    <div class="panel-footer">
      <button type="button" class="btn btn-danger btn-block packt-youtube-button"
        data-bookTitle="${title}">YouTube Search</button>
      <button type="button" class="btn btn-info btn-block">Buy</button>
      <button type="button" class="btn btn-info btn-block twitme"
        data-bookTitle="${title}" data-imgURI="${url}">Tweet</button>
      <div class="g-plus-button">
        <div class="g-plusone" data-width="180" data-href="${url}"></div>
      </div>
    </div>
  </div>
</script>

The following screenshot shows the updated product markup with a YouTube button added to the product template.


Keeping the Site Secure

Packt
17 Jul 2014
9 min read
Choosing a web host that meets your security requirements

In this article, you'll learn what you, as the site administrator, can do to keep your site safe. However, there are also some basic but critical security measures that your web hosting company should take. You'll probably have a shared hosting account, where multiple sites are hosted on one server computer and each site has access to its part of the available disk space and resources. Although this is much cheaper than hiring a dedicated server to host your site, it does involve some security risks. Good web hosting companies take precautions to minimize these risks. When selecting your web host, it's worth checking whether they have a good reputation for keeping their security up to standard. The official Joomla resources site (http://resources.joomla.org/directory/support-services/hosting.html) features hosting companies that fully meet the security requirements of a typical Joomla-powered site.

Tip 1 – Download from reliable sources

To avoid installing corrupted versions, it's a good idea to download the Joomla software itself only from the official website (www.joomla.org) or from reliable local Joomla community sites. This is also true when downloading third-party extensions. Use only extensions that have a good reputation; you can check the reviews at www.extensions.joomla.org. Preferably, download extensions only from the original developer's website or from Joomla community websites with a good reputation.

Tip 2 – Update regularly

The Joomla development team regularly releases updates to fix bugs and security issues. Fortunately, Joomla makes keeping your site up to date effortless. In the backend control panel, you'll find a MAINTENANCE section in the quick icons column displaying the current status of both the Joomla software itself and the installed extensions. This is shown in the following screenshot:

If updates are found, the quick icon text will prompt you to update by adding an Update now! link, as shown in the following screenshot:

Clicking on this link takes you to the Joomla! Update component (which can also be found by navigating to Components | Joomla! Update). In this window, you'll see the details of the update. Just click on Install the update; the process is fully automated.

Before you upgrade Joomla to an updated version, it's a good idea to create a backup of your current site. If anything goes wrong, you can quickly have it up and running again. See the Tip 6 – Protect files and directories section in this article for more information on creating backups.

If updates for installed extensions are available, you'll also see a notice stating this in the MAINTENANCE section of the control panel. However, you can also check for updates manually; in the backend, navigate to Extensions | Extension Manager and click on Update in the menu on the left-hand side. Click on Find Updates to search for updates. After you've clicked on the Find Updates button, you'll see a notice informing you whether updates are available. Select the update you want to install and click on the Update button. Be patient; you may not see much happening for a while. After completion, a message is displayed stating that the available updates have been installed successfully. The update functionality only works for extensions that support it.
It's to be expected that this feature will be widely supported by extension developers, but for other extensions, you'll still have to check for updates manually by visiting the extension developer's website.

The Joomla update packages are stored in your website's tmp directory before the update is installed on your site. After installation, you can remove these files from the tmp directory to avoid running into disk space problems.

Tip 3 – Choose a safe administrator username

When you install Joomla, you also choose and enter a username for your login account (the critical account of the almighty Super User). Although you can enter any administrator username when installing Joomla, many people enter a generic username, for example, admin. However, this username is far too common and therefore poses a security risk; hackers only have to guess your password to gain access to your site. If you haven't come up with something different during the installation process, you can change the administrator username later on, using the following steps:

1. In the backend of the site, navigate to Users | User Manager.
2. Select the Super User user record.
3. In the Edit Profile screen, enter a new Login Name. Be creative!
4. Click on Save & Close to apply the changes.
5. Log out and log in to the backend with the new username.

Tip 4 – Pick a strong password

Pick an administrator password that isn't easy to guess. It's best to have a password that's not too short; 8 or more characters is fine. Use a combination of uppercase letters, lowercase letters, numbers, and special characters. This should guarantee a strong password. Don't use the same username and password you use for other online accounts, and change your password regularly. You can create a new password anytime in the backend User Manager section, in the same way as you enter or change a username (see Tip 3 – Choose a safe administrator username).

Tip 5 – Use Two-Factor authentication

By default, logging in to the administrative interface of your site just requires the correct combination of username and password. A much more secure way to log in is to use the Two-Factor authentication system, a recent addition to Joomla. It requires you to log in not just with your username and password, but also with a six-digit security code. To get this code, you need the Google Authenticator app, which is available for most Android, iOS, BlackBerry, Windows 8, and Windows Mobile devices. The app doesn't store the six-digit code; it changes the code every 30 seconds.

Two-Factor authentication is a great solution, but it does require a little extra effort every time you log in: you need to have the app ready to generate a new access code. However, you can selectively choose which users require this system. You can decide, for example, that only the site administrator has to log in using Two-Factor authentication.

Enabling the Two-Factor authentication system of Joomla!

To enable Joomla's Two-Factor authentication, where a user has to enter an additional secret key when logging in to the site, follow these steps:

1. First, enable the Two-Factor authentication plugin that comes with Joomla. In the Joomla backend, navigate to Extensions | Plugin Manager, select Two Factor Authentication – Google Authenticator, and enable it.
2. Next, download and install the Google Authenticator app for your device.
3. Once this is set up, you can set the authentication procedure for any user in the Joomla backend.
In the User Manager section, click on the account name. Then, click on the Two Factor Authentication tab and select Google Authenticator from the Authentication method dropdown. This is shown in the following screenshot:

Joomla will display the account name and the key for this account. Enter these account details in the Google Authenticator app on your device. The app will generate a security code. In the Joomla backend, enter this code in the Security Code field, as shown in the following screenshot:

Save your changes. From now on, the Joomla login screen will ask you for your username, password, and a secret key. This is shown in the following screenshot:

There are other secure login systems available besides the Google Authenticator app. Joomla also supports Yubico's YubiKey, a USB stick that generates an additional password every time it is used. After entering their usual password, the user inserts the YubiKey into the USB port of the computer and presses the YubiKey button; the extra one-time password is entered automatically in Joomla's Secret Key field. For more information on YubiKey, visit http://www.yubico.com.

Tip 6 – Protect files and directories

Obviously, you don't want everybody to be able to access the Joomla files and folders on the web server. You can protect files and folders by setting access permissions using the Change Mode (CHMOD) commands. Basically, the CHMOD settings tell the web server who has access to a file or folder and who is allowed to read it, write to it, or execute it (run it as a program). Once your Joomla site is set up and everything works fine, you can use CHMOD to change permissions. You don't use Joomla to change the CHMOD settings; these are set with FTP software, as follows:

1. In your FTP program, right-click on the name of the file or directory you want to protect. In this example, we'll use the open source FTP program FileZilla.
2. In the right-click menu, select File Permissions. You'll be presented with a pop-up screen, where you can check permissions and change them by selecting the appropriate options, as shown in the following screenshot:

As you can see, it's possible to set permissions for the file owner (that's you), for group members (that's likely to be only you too), and for the public (everyone else). The public permissions are the tricky part; you should restrict public permissions as much as possible. When you change the permission settings, the file permissions number (the value in the Numeric value: field in the previous screenshot) changes accordingly. Every combination of settings has its own number; in the previous example, this number is 644. Click on OK to execute the CHMOD command and set the file permissions.

Setting file permissions

Which files should you protect, and which CHMOD settings should you choose? Here are a few pointers:

- By default, permissions for files are set to 644. Don't change this; it's a safe value.
- For directories, a safe setting is 750 (which doesn't allow any public access). However, some extensions may need access to certain directories, and the 750 setting may then result in error messages. In this case, set permissions to 755.
- Never leave permissions for a file or directory set to 777. This allows everybody to write data to it.
You can also block direct access to critical directories using a .htaccess file. This is a special file containing instructions for the Apache web server. Among other things, it tells the web server who is allowed to access the directory contents. You can add a .htaccess file to any folder on the server by using specific instructions; this is another way to instruct the web server to restrict access. Check the Joomla security documentation at www.joomla.org for instructions.
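As a minimal sketch of such a file (these directives are generic Apache 2.2-era syntax, not the specific rules from the Joomla documentation, so check there for the recommended set):

# .htaccess - deny all direct web access to this directory
Order deny,allow
Deny from all

On Apache 2.4 and later, the single directive Require all denied achieves the same effect.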


Creating the maze and animating the cube

Packt
07 Jul 2014
9 min read
A maze is a rather simple shape that consists of a number of walls and a floor. So, what we need is a way to create these shapes. Three.js, not very surprisingly, doesn't have a standard geometry that allows you to create a maze, so we need to create this maze by hand. To do this, we need to take two different steps:

1. Find a way to generate the layout of the maze so that not all the mazes look the same.
2. Convert that layout to a set of cubes (THREE.BoxGeometry) that we can use to render the maze in 3D.

There are many different algorithms that we can use to generate a maze, and luckily there are also a number of open source JavaScript libraries that implement such algorithms. So, we don't have to start from scratch. For the example in this book, I've used the random-maze-generator project that you can find on GitHub at the following link: https://github.com/felipecsl/random-maze-generator

Generating a maze layout

Without going into too much detail, this library allows you to generate a maze and render it on an HTML5 canvas. The result of this library looks something like the following screenshot:

You can generate this by just using the following JavaScript:

var maze = new Maze(document, 'maze');
maze.generate();
maze.draw();

Even though this is a nice looking maze, we can't use it directly to create a 3D maze. What we need to do is change the code the library uses to write to the canvas, and make it create Three.js objects instead. This library draws the lines on the canvas in a function called drawLine:

drawLine: function(x1, y1, x2, y2) {
  self.ctx.beginPath();
  self.ctx.moveTo(x1, y1);
  self.ctx.lineTo(x2, y2);
  self.ctx.stroke();
}

If you're familiar with the HTML5 canvas, you can see that this function draws lines based on the input arguments. Now that we've got this maze, we need to convert it to a number of 3D shapes so that we can render them in Three.js.

Converting the layout to a 3D set of objects

To change this library to create Three.js objects, all we have to do is change the drawLine function to the following code snippet:

drawLine: function(x1, y1, x2, y2) {
  var lengthX = Math.abs(x1 - x2);
  var lengthY = Math.abs(y1 - y2);

  // since only 90 degrees angles, so one of these is always 0
  // to add a certain thickness to the wall, set to 0.5
  if (lengthX === 0) lengthX = 0.5;
  if (lengthY === 0) lengthY = 0.5;

  // create a cube to represent the wall segment
  var wallGeom = new THREE.BoxGeometry(lengthX, 3, lengthY);
  var wallMaterial = new THREE.MeshPhongMaterial({
    color: 0xff0000,
    opacity: 0.8,
    transparent: true
  });

  // and create the complete wall segment
  var wallMesh = new THREE.Mesh(wallGeom, wallMaterial);

  // finally position it correctly
  wallMesh.position = new THREE.Vector3(
    x1 - ((x1 - x2) / 2) - (self.height / 2),
    wallGeom.height / 2,
    y1 - ((y1 - y2)) / 2 - (self.width / 2));

  self.elements.push(wallMesh);
  scene.add(wallMesh);
}

In this new drawLine function, instead of drawing on the canvas, we create a THREE.BoxGeometry object whose length and depth are based on the supplied arguments. Using this geometry, we create a THREE.Mesh object and use the position attribute to place the mesh at a specific point with the x, y, and z coordinates. Before we add the mesh to the scene, we add it to the self.elements array. Now we can just use the following code snippet to create a 3D maze:

var maze = new Maze(scene, 17, 100, 100);
maze.generate();
maze.draw();

As you can see, we've also changed the input arguments.
These arguments now define the scene to which the maze should be added and the size of the maze. The result of these changes can be seen in the following screenshot:

Every time you refresh, you'll see a newly generated random maze. Now that we've got our generated maze, the next step is to add the object that we'll move through the maze.

Animating the cube

Before we dive into the code, let's first look at the result, as shown in the following screenshot:

Using the controls at the top-right corner, you can move the cube around. What you'll see is that the cube rotates around its edges, not around its center. In this section, we'll show you how to create that effect. Let's first look at the default rotation (along an object's central axis) and the translation behavior of Three.js.

The standard Three.js rotation behavior

Let's first look at the properties and functions you can use on THREE.Mesh:

- position: the position of an object, relative to the position of its parent. In all our examples so far, the parent is THREE.Scene.
- rotation: defines the rotation of THREE.Mesh around its own x, y, or z axis.
- scale: scales the object along its own x, y, and z axes.
- translateX(amount): moves the object by the specified amount over the x axis.
- translateY(amount): moves the object by the specified amount over the y axis.
- translateZ(amount): moves the object by the specified amount over the z axis.

If we want to rotate a mesh around one of its own axes, we can just call the following line of code:

plane.rotation.x = -0.5 * Math.PI;

We've used this to rotate the ground area from a horizontal position to a vertical one. It is important to know that this rotation is done around the object's own internal axis, not the x, y, or z axis of the scene. So, if you perform a number of rotations one after another, you have to keep track of the orientation of your mesh to make sure you get the required effect. Another point to note is that rotation is done around the center of the object, in this case the center of the cube.

If we look at the effect we want to accomplish, we run into the following two problems:

- First, we don't want to rotate around the center of the object; we want to rotate around one of its edges to create a walking-like animation.
- Second, if we use the default rotation behavior, we have to continuously keep track of our orientation, since we're rotating around our own internal axis.

In the next section, we'll explain how you can solve these problems by using matrix-based transformations.

Creating an edge rotation using matrix-based transformation

If we want to perform edge rotations, we have to take the following steps:

1. To rotate around an edge, change the center point of the object to the edge we want to rotate around.
2. Since we don't want to keep track of all the rotations we've done, make sure that after each rotation, the vertices of the cube represent the correct position.
3. Finally, after rotating around the edge, do the inverse of the first step, so that the center point of the object is back in the center of the cube, ready for the next step.

So, the first thing we need to do is change the center point of the cube. The approach we use is to offset the position of all individual vertices and then change the position of the cube in the opposite way.
The following example allows us to make a step to the right-hand side:

cubeGeometry.applyMatrix(new THREE.Matrix4().makeTranslation(0, width / 2, width / 2));
cube.position.y += -width / 2;
cube.position.z += -width / 2;

With the cubeGeometry.applyMatrix function, we can change the position of the individual vertices of our geometry. In this example, we create a translation (using makeTranslation) that offsets all the y and z coordinates by half the width of the cube. The result is that it looks like the cube moved a bit to the right-hand side and then up, but the actual center of the cube is now positioned at one of its lower edges. Next, we use the cube.position property to position the cube back at the ground plane, since the individual vertices were offset by the makeTranslation function.

Now that the edge of the object is positioned correctly, we can rotate the object. For rotation, we could use the standard rotation property, but then we would have to constantly keep track of the orientation of our cube. So, for rotations, we once again use a matrix transformation on the vertices of our cube:

cube.geometry.applyMatrix(new THREE.Matrix4().makeRotationX(amount));

As you can see, we use the makeRotationX function, which changes the position of our vertices. Now we can easily rotate our cube without having to worry about its orientation. The final step we need to take is to reset the cube to its original position; taking into account that we've moved a step to the right, we can then take the next step:

cube.position.y += width / 2; // is the inverse + width
cube.position.z += -width / 2;
cubeGeometry.applyMatrix(new THREE.Matrix4().makeTranslation(0, -width / 2, width / 2));

As you can see, this is the inverse of the first step; we've added the width of the cube to position.y and subtracted the width from the second argument of the translation to compensate for the step to the right-hand side we've taken. If we use the preceding code snippet, we will only see the result of the step to the right.

Summary

In this article, we have seen how to create a maze and animate a cube.

Resources for Article:

Further resources on this subject:
Working with the Basic Components That Make Up a Three.js Scene [article]
3D Websites [article]
Rich Internet Application (RIA) – Canvas [article]

Component Communication in React.js

Richard Feldman
30 Jun 2014
5 min read
You can get a long way in React.js solely by having parent components create child components with varying props, and having each component deal only with its own state. But what happens when a child wants to affect its parent's state or props? Or when a child wants to inspect that parent's state or props? Or when a parent wants to inspect its child's state? With the right techniques, you can handle communication between React components without introducing unnecessary coupling.

Child Elements Altering Parents

Suppose you have a list of buttons, and when you click one, a label elsewhere on the page updates to reflect which button was most recently clicked. Although any button's click handler can alter that button's state, the handler has no intrinsic knowledge of the label that we need to update. So how can we give it access to do what we need? The idiomatic approach is to pass a function through props, like so:

var ExampleParent = React.createClass({
  getInitialState: function() {
    return {lastLabelClicked: "none"};
  },
  render: function() {
    var me = this;
    var setLastLabel = function(label) {
      me.setState({lastLabelClicked: label});
    };

    return <div>
      <p>Last clicked: {this.state.lastLabelClicked}</p>
      <LabeledButton label="Alpha Button" setLastLabel={setLastLabel}/>
      <LabeledButton label="Beta Button" setLastLabel={setLastLabel}/>
      <LabeledButton label="Delta Button" setLastLabel={setLastLabel}/>
    </div>;
  }
});

var LabeledButton = React.createClass({
  handleClick: function() {
    this.props.setLastLabel(this.props.label);
  },
  render: function() {
    return <button onClick={this.handleClick}>{this.props.label}</button>;
  }
});

Note that this does not actually affect the label's state directly; rather, it affects the parent component's state, and doing so will cause the parent to re-render the label as appropriate.

What if we wanted to avoid using state here, and instead modify the parent's props? Since props are externally specified, this would be a lot of extra work. Rather than telling the parent to change, the child would necessarily have to tell its parent's parent (its grandparent, in other words) to change that grandparent's child. This is not a route worth pursuing; besides being less idiomatic, there is no real benefit to changing the parent's props when you could change its state instead.

Inspecting Props

Once created, the only way for a child's props to "change" is for the child to be recreated when the parent's render method is called again. This helpfully guarantees that the parent's render method has all the information needed to determine the child's props, not only in the present, but for the indefinite future as well. Thus, if another of the parent's methods needs to know the child's props, for example a click handler, it's simply a matter of making sure that data is available outside the parent's render method. An easy way to do this is to record it in the parent's state:

var ExampleComponent = React.createClass({
  handleClick: function() {
    var buttonStatus = this.state.buttonStatus;
    // ...do something based on buttonStatus
  },
  render: function() {
    // Pretend it took some effort to determine this value
    var buttonStatus = "btn-disabled";
    this.setState({buttonStatus: buttonStatus});

    return <button className={buttonStatus} onClick={this.handleClick}>
      Click this button!
    </button>;
  }
});

It's even easier to let a child know about its parent's props: simply have the parent pass along whatever information is necessary when it creates the child. It's cleaner to pass along only what the child needs to know, but if all else fails, you can go as far as to pass in the parent's entire set of props:

var ParentComponent = React.createClass({
  render: function() {
    return <ChildComponent parentProps={this.props} />;
  }
});

Inspecting State

State is trickier to inspect, because it can change on the fly. But is it ever strictly necessary for components to inspect each other's states, or might there be a universal workaround? Suppose you have a child whose click handler cares about its parent's state. Is there any way we could refactor things such that the child always knows that value, without having to ask the parent directly? Absolutely! Simply have the parent pass the current value of its state to the child as a prop. Whenever the parent's state changes, it will re-run its render method, so the child (including its click handler) will automatically be recreated with the new prop. Now the child's click handler will always have up-to-date knowledge of the parent's state, just as we wanted.
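As a minimal sketch of that pattern (the component names, the counter state, and the onIncrement prop are invented for illustration, not taken from the text above):

var StatefulParent = React.createClass({
  getInitialState: function() {
    return {clickCount: 0};
  },
  handleIncrement: function() {
    this.setState({clickCount: this.state.clickCount + 1});
  },
  render: function() {
    // The parent's state travels down as an ordinary prop...
    return <CountAwareChild count={this.state.clickCount}
      onIncrement={this.handleIncrement} />;
  }
});

var CountAwareChild = React.createClass({
  handleClick: function() {
    // ...so the click handler always sees the parent's current state
    console.log("Parent count is " + this.props.count);
    this.props.onIncrement();
  },
  render: function() {
    return <button onClick={this.handleClick}>
      Clicked {this.props.count} times
    </button>;
  }
});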
It's cleaner to pass along only what the child needs to know, but if all else fails, you can go as far as to pass in the parent's entire set of props:

var ParentComponent = React.createClass({
  render: function() {
    return <ChildComponent parentProps={this.props} />;
  }
});

Inspecting State

State is trickier to inspect, because it can change on the fly. But is it ever strictly necessary for components to inspect each other's states, or might there be a universal workaround?

Suppose you have a child whose click handler cares about its parent's state. Is there any way we could refactor things such that the child could always know that value, without having to ask the parent directly? Absolutely! Simply have the parent pass the current value of its state to the child as a prop. Whenever the parent's state changes, it will re-run its render method, so the child (including its click handler) will automatically be recreated with the new prop. Now the child's click handler will always have up-to-date knowledge of the parent's state, just as we wanted.

Suppose instead that we have a parent that cares about its child's state. As we saw earlier with the buttons-and-labels example, children can affect their parents' states, so we can use that technique again here to refactor our way to a solution. Simply include in the child's props a function that updates the parent's state, and have the child incorporate that function into its relevant state changes. With the child thus keeping the parent's state up to speed on relevant changes to the child's state, the parent can obtain whatever information it needs simply by inspecting its own state.

Takeaways

Idiomatic communication between parent and child components can be easily accomplished by passing state-altering functions through props. When it comes to inspecting props and state, a combination of passing props on a need-to-know basis and refactoring state changes can ensure that the relevant parties have all the information they need, whenever they need it.

About the Author

Richard Feldman is a functional programmer who specializes in pushing the limits of browser-based UIs. He has built a framework that performantly renders hundreds of thousands of shapes in the HTML5 canvas, a writing web app that functions like a desktop app in the absence of an Internet connection, and much more in between.

Using React.js without JSX

Richard Feldman
30 Jun 2014
6 min read
React.js was clearly designed with JSX in mind; however, there are plenty of good reasons to use React without it. Using React as a standalone library lets you evaluate the technology without having to spend time learning a new syntax. Some teams—including my own—prefer to have their entire frontend code base in one compile-to-JavaScript language, such as CoffeeScript or TypeScript. Others might find that adding another JavaScript library to their dependencies is no big deal, but adding a compilation step to the build chain is a deal-breaker.

There are two primary drawbacks to eschewing JSX. One is that it makes using React significantly more verbose. The other is that the React docs use JSX everywhere; examples demonstrating vanilla JavaScript are few and far between. Fortunately, both drawbacks are easy to work around.

Translating documentation

The first code sample you see in the React documentation includes this JSX snippet:

/** @jsx React.DOM */
React.renderComponent(
  <h1>Hello, world!</h1>,
  document.getElementById('example')
);

Suppose we want to see the vanilla JS equivalent. Although the code samples on the React homepage include a helpful Compiled JS tab, the samples in the docs—not to mention React examples you find elsewhere on the Web—will not. Fortunately, React's Live JSX Compiler can help. To translate the above JSX into vanilla JS, simply copy and paste it into the left side of the Live JSX Compiler. The output on the right should look like this:

/** @jsx React.DOM */
React.renderComponent(
  React.DOM.h1(null, "Hello, world!"),
  document.getElementById('example')
);

Pretty similar, right? We can discard the comment; it is a directive that matters only to the JSX compiler. When writing React in vanilla JS, it's just another comment that will be disregarded as usual.

Take a look at the call to React.renderComponent. Here we have a plain old two-argument function, which takes a React DOM element (in this case, the one returned by React.DOM.h1) as its first argument, and a regular DOM element (in this case, the one returned by document.getElementById('example')) as its second. jQuery users should note that the second argument will not accept jQuery objects, so you will have to extract the underlying DOM element with $("#example")[0] or something similar.

The React.DOM object has a method for every supported tag. In this case we're using h1, but we could just as easily have used h2, div, span, input, a, p, or any other supported tag. The first argument to these methods is optional; it can either be null (as in this case) or an object specifying the element's attributes. This argument is how you specify things like class, ID, and so on. The second argument is either a string, in which case it specifies the element's text content, or a list of child React DOM elements.
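To make that concrete, here is a small sketch of our own (not taken from the docs) that combines an attributes object with child elements:

React.DOM.div({className: "greeting"},
  React.DOM.h1(null, "Hello!"),
  React.DOM.p(null, "Rendered without any JSX.")
);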
Let's put this together with a more advanced example, starting with the vanilla JS:

React.DOM.form({className:"commentForm"},
  React.DOM.input({type:"text", placeholder:"Your name"}),
  React.DOM.input({type:"text", placeholder:"Say something..."}),
  React.DOM.input({type:"submit", value:"Post"})
)

For the most part, the attributes translate as you would expect: type, value, and placeholder do exactly what they would do if used in HTML. The one exception is className, which you use in place of the usual class. The above is equivalent to the following JSX:

/** @jsx React.DOM */
<form className="commentForm">
  <input type="text" placeholder="Your name" />
  <input type="text" placeholder="Say something..." />
  <input type="submit" value="Post" />
</form>

This JSX is a snippet found elsewhere in the React docs, and again you can view its vanilla JS equivalent by pasting it into the Live JSX Compiler. Note that you can include pure JSX here without any surrounding JavaScript code (unlike the JSX playground), but you do need the /** @jsx React.DOM */ comment at the top of the JSX side. Without the comment, the compiler will simply output the JSX you put in.

Simple DSLs to make things concise

Although these two implementations are functionally identical, the JSX version is clearly more concise. How can we make the vanilla JS version less verbose? A very quick improvement is to alias the React.DOM object:

var R = React.DOM;
R.form({className:"commentForm"},
  R.input({type:"text", placeholder:"Your name"}),
  R.input({type:"text", placeholder:"Say something..."}),
  R.input({type:"submit", value:"Post"}))

You can take it even further with a tiny bit of DSL:

var R = React.DOM;
var form = R.form;
var input = R.input;

form({className:"commentForm"},
  input({type:"text", placeholder:"Your name"}),
  input({type:"text", placeholder:"Say something..."}),
  input({type:"submit", value:"Post"})
)

This is more verbose in terms of lines of code, but if you have a large DOM to set up, the extra up-front declarations can make the rest of the file much nicer to read.

In CoffeeScript, a DSL like this can tidy things up even further:

{form, input} = React.DOM

form {className:"commentForm"}, [
  input type: "text", placeholder:"Your name"
  input type:"text", placeholder:"Say something..."
  input type:"submit", value:"Post"
]

Note that in this example, the form's children are passed as an array rather than as a list of extra arguments (which, in CoffeeScript, allows you to omit the commas after each line). React DOM element constructors support either approach. (Also note that CoffeeScript coders who don't mind mixing languages can use the coffee-react compiler, or set up a custom build chain that allows for inline JSX in CoffeeScript sources instead.)

Takeaways

No matter your particular use case, there are plenty of ways to use React effectively without JSX. Thanks to the Live JSX Compiler's ability to quickly translate documentation code samples, and the ease with which you can set up a simple DSL to reduce verbosity, there really is very little overhead to using React as a JavaScript library like any other.

About the author

Richard Feldman is a functional programmer who specializes in pushing the limits of browser-based UIs. He has built a framework that performantly renders hundreds of thousands of shapes in the HTML5 canvas, a writing web app that functions like a desktop app in the absence of an Internet connection, and much more in between.

Introduction to MapReduce

Packt
25 Jun 2014
10 min read
(For more resources related to this topic, see here.)

The Hadoop platform

Hadoop can be used for a lot of things. However, when you break it down to its core parts, the primary features of Hadoop are the Hadoop Distributed File System (HDFS) and MapReduce.

HDFS stores read-only files by splitting them into large blocks and distributing and replicating them across a Hadoop cluster. Two services are involved with the filesystem. The first service, the NameNode, acts as a master and keeps the directory tree of all file blocks that exist in the filesystem and tracks where the file data is kept across the cluster. The actual data of the files is stored in multiple DataNode nodes, the second service.

MapReduce is a programming model for processing large datasets with a parallel, distributed algorithm in a cluster. The most prominent trait of Hadoop is that it brings processing to the data; MapReduce executes tasks closest to the data, as opposed to the data travelling to where the processing is performed. Two services are involved in a job execution. A job is submitted to the JobTracker service, which first discovers the location of the data. It then orchestrates the execution of the map and reduce tasks. The actual tasks are executed in multiple TaskTracker nodes.

Hadoop handles infrastructure failures such as network issues and node or disk failures automatically. Overall, it provides a framework for distributed storage within its distributed filesystem and execution of jobs. Moreover, it provides the ZooKeeper service to maintain configuration and distributed synchronization.

Many projects surround Hadoop and complete the ecosystem of available Big Data processing tools, such as utilities to import and export data, NoSQL databases, and event/real-time processing systems. The technologies that move Hadoop beyond batch processing focus on in-memory execution models. Overall, multiple projects, from batch to hybrid and real-time execution, exist.

MapReduce

Massive parallel processing of large datasets is a complex process. MapReduce simplifies this by providing a design pattern that instructs algorithms to be expressed in map and reduce phases. Map can be used to perform simple transformations on data, and reduce is used to group data together and perform aggregations. By chaining together a number of map and reduce phases, sophisticated algorithms can be achieved. The shared-nothing architecture of MapReduce prohibits communication between map tasks of the same phase or between reduce tasks of the same phase. Any communication that's required happens at the end of each phase. The simplicity of this model allows Hadoop to translate each phase, depending on the amount of data that needs to be processed, into tens or even hundreds of tasks being executed in parallel, thus achieving scalable performance.

Internally, the map and reduce tasks follow a simplistic data representation. Everything is a key or a value. A map task receives key-value pairs and applies basic transformations, emitting new key-value pairs. Data is then partitioned, and different partitions are transmitted to different reduce tasks. A reduce task also receives key-value pairs, groups them based on the key, and applies basic transformations to those groups.

A MapReduce example

To illustrate how MapReduce works, let's look at an example of a log file of total size 1 GB with the following format:

INFO MyApp - Entering application.
WARNING com.foo.Bar - Timeout accessing DB - Retrying
ERROR com.foo.Bar - Did it again!
INFO MyApp - Exiting application

Once this file is stored in HDFS, it is split into eight 128 MB blocks and distributed in multiple Hadoop nodes. In order to build a MapReduce job to count the number of INFO, WARNING, and ERROR log lines in the file, we need to think in terms of map and reduce phases.

In the map phase, we can read local blocks of the file and map each line to a key and a value. We can use the log level as the key and the number 1 as the value. After it is completed, data is partitioned based on the key and transmitted to the reduce tasks. MapReduce guarantees that the input to every reducer is sorted by key. Shuffle is the process of sorting and copying the output of the map tasks to the reducers to be used as input. By setting the value to 1 in the map phase, we can easily calculate the total in the reduce phase. Reducers receive input sorted by key, aggregate counters, and store results.

In the following diagram, every green block represents an INFO message, every yellow block a WARNING message, and every red block an ERROR message:

Implementing the preceding MapReduce algorithm in Java requires the following three classes (a sketch follows the list):

- A Map class to map lines into <key,value> pairs; for example, <"INFO",1>
- A Reduce class to aggregate counters
- A Job configuration class to define input and output types for all <key,value> pairs and the input and output files
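As a rough illustration only, the three classes might look like the following with the org.apache.hadoop.mapreduce API. The class names and the line parsing are our own simplifications, not the book's code:

import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class LogLevelCount {

  // Map: emit <level, 1> for every log line
  public static class LevelMapper extends Mapper<LongWritable, Text, Text, IntWritable> {
    private static final IntWritable ONE = new IntWritable(1);
    private final Text level = new Text();

    protected void map(LongWritable key, Text value, Context ctx)
        throws IOException, InterruptedException {
      String line = value.toString();
      level.set(line.split(" ")[0]); // assumes the level is the first token
      ctx.write(level, ONE);
    }
  }

  // Reduce: sum the 1s for each level
  public static class LevelReducer extends Reducer<Text, IntWritable, Text, IntWritable> {
    protected void reduce(Text key, Iterable<IntWritable> values, Context ctx)
        throws IOException, InterruptedException {
      int sum = 0;
      for (IntWritable v : values) sum += v.get();
      ctx.write(key, new IntWritable(sum));
    }
  }

  // Job configuration: input/output types and files
  public static void main(String[] args) throws Exception {
    Job job = Job.getInstance(new Configuration(), "log level count");
    job.setJarByClass(LogLevelCount.class);
    job.setMapperClass(LevelMapper.class);
    job.setReducerClass(LevelReducer.class);
    job.setOutputKeyClass(Text.class);
    job.setOutputValueClass(IntWritable.class);
    FileInputFormat.addInputPath(job, new Path(args[0]));
    FileOutputFormat.setOutputPath(job, new Path(args[1]));
    System.exit(job.waitForCompletion(true) ? 0 : 1);
  }
}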
MapReduce abstractions

This simple MapReduce example requires more than 50 lines of Java code (mostly because of infrastructure and boilerplate code). In SQL, a similar implementation would just require the following:

SELECT level, count(*) FROM table GROUP BY level

Hive is a technology originating from Facebook that translates SQL commands, such as the preceding one, into sets of map and reduce phases. SQL offers convenient ubiquity, and it is known by almost everyone. However, SQL is declarative and expresses the logic of a computation without describing its control flow. So, there are use cases that are awkward to implement in SQL, and some problems are too complex to be expressed in relational algebra. For example, SQL handles joins naturally, but it has no built-in mechanism for splitting data into streams and applying different operations to each substream.

Pig is a technology originating from Yahoo that offers a relational data-flow language. It is procedural, supports splits, and provides useful operators for joining and grouping data. Code can be inserted anywhere in the data flow and is appealing because it is easy to read and learn. However, Pig is a purpose-built language; it excels at simple data flows, but it is inefficient for implementing non-trivial algorithms. In Pig, the same example can be implemented as follows:

LogLine = load 'file.logs' as (level, message);
LevelGroup = group LogLine by level;
Result = foreach LevelGroup generate group, COUNT(LogLine);
store Result into 'Results.txt';

Both Pig and Hive support extra functionality through loadable user-defined functions (UDFs) implemented in Java classes.

Cascading is implemented in Java and designed to be expressive and extensible. It is based on the design pattern of pipelines that many other technologies follow. The pipeline is inspired by the original chain of responsibility design pattern and allows ordered lists of actions to be executed. It provides a Java-based API for data-processing flows. Developers with functional programming backgrounds quickly introduced new domain-specific languages that leverage its capabilities.
Scalding, Cascalog, and PyCascading are popular implementations on top of Cascading, implemented in programming languages such as Scala, Clojure, and Python.

Introducing Cascading

Cascading is an abstraction that empowers us to write efficient MapReduce applications. The API provides a framework for developers who want to think at higher levels and follow Behavior Driven Development (BDD) and Test Driven Development (TDD) to provide more value and quality to the business.

Cascading is a mature library that was released as an open source project in early 2008. It is a paradigm shift and introduces new notions that are easier to understand and work with. In Cascading, we define reusable pipes where operations on data are performed. Pipes connect with other pipes to create a pipeline. At each end of a pipeline, a tap is used. Two types of taps exist: a source, where input data comes from, and a sink, where the data gets stored.

In the preceding image, three pipes are connected to a pipeline, and two input sources and one output sink complete the flow. A complete pipeline is called a flow, and multiple flows bind together to form a cascade. In the following diagram, three flows form a cascade:

The Cascading framework translates the pipes, flows, and cascades into sets of map and reduce phases. The flow and cascade planners ensure that no flow or cascade is executed until all of its dependencies are satisfied.

The preceding abstraction makes it easy to use a whiteboard to design and discuss data-processing logic. We can now work at a productive, higher level of abstraction and build complex applications for ad targeting, logfile analysis, bioinformatics, machine learning, predictive analytics, web content mining, and extract, transform, and load (ETL) jobs. By abstracting away the complexity of key-value pairs and the map and reduce phases of MapReduce, Cascading provides an API that many other technologies are built on.

What happens inside a pipe

Inside a pipe, data flows in small containers called tuples. A tuple is like a fixed-size ordered list of elements and is a base element in Cascading. Unlike an array or list, a tuple can hold objects with different types.

Tuples stream within pipes. Each specific stream is associated with a schema. The schema evolves over time; at one point in a pipe, a tuple of size one can receive an operation and transform into a tuple of size three. To illustrate this concept, we will use a JSON transformation job. Each line is originally stored in tuples of size one with the schema 'jsonLine. An operation transforms these tuples into new tuples of size three: 'time, 'user, and 'action. Finally, we extract the epoch, and then the pipe contains tuples of size four: 'epoch, 'time, 'user, and 'action.

Pipe assemblies

Transformation of tuple streams occurs by applying one of the five types of operations, also called pipe assemblies:

- Each: To apply a function or a filter to each tuple
- GroupBy: To create a group of tuples by defining which element to use and to merge pipes that contain tuples with similar schemas
- Every: To perform aggregations (count, sum) and buffer operations on every group of tuples
- CoGroup: To apply SQL-type joins, for example, Inner, Outer, Left, or Right joins
- SubAssembly: To chain multiple pipe assemblies into a pipe

To implement the pipe for the logfile example with the INFO, WARNING, and ERROR levels, three assemblies are required: an Each assembly generates a tuple with two elements (level/message), a GroupBy assembly is used on the level, and then an Every assembly is applied to perform the count aggregation. We also need a source tap to read from a file and a sink tap to store the results in another file. Implementing this in Cascading requires about 20 lines of code (a rough sketch follows); in Scala/Scalding, the boilerplate is reduced to just the following:

TextLine(inputFile)
  .mapTo('line -> ('level, 'message)) { line: String => tokenize(line) }
  .groupBy('level) { _.size }
  .write(Tsv(outputFile))
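For comparison, here is a rough sketch of what those 20 lines of raw Cascading (Java) might look like. This is our own illustration against the Cascading 2.x API, not the book's code; details such as the " - " separator and the field names are assumptions:

import cascading.flow.FlowDef;
import cascading.flow.hadoop.HadoopFlowConnector;
import cascading.operation.aggregator.Count;
import cascading.operation.regex.RegexSplitter;
import cascading.pipe.Each;
import cascading.pipe.Every;
import cascading.pipe.GroupBy;
import cascading.pipe.Pipe;
import cascading.scheme.hadoop.TextLine;
import cascading.tap.Tap;
import cascading.tap.hadoop.Hfs;
import cascading.tuple.Fields;

public class LogLevelFlow {
  public static void main(String[] args) {
    // Source and sink taps: where data comes from and where it gets stored
    Tap source = new Hfs(new TextLine(new Fields("line")), "file.logs");
    Tap sink = new Hfs(new TextLine(), "results");

    // Each: split every line into (level, message)
    Pipe pipe = new Each("logs", new Fields("line"),
        new RegexSplitter(new Fields("level", "message"), " - "));

    // GroupBy on the level, then Every to count each group
    pipe = new GroupBy(pipe, new Fields("level"));
    pipe = new Every(pipe, new Count(new Fields("count")));

    FlowDef flowDef = FlowDef.flowDef()
        .addSource(pipe, source)
        .addTailSink(pipe, sink);

    new HadoopFlowConnector().connect(flowDef).complete();
  }
}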
Cascading is the framework that provides the notions and abstractions of tuple streams and pipe assemblies. Scalding is a domain-specific language (DSL) that specializes in the particular domain of pipeline execution and further minimizes the amount of code that needs to be typed.

Cascading extensions

Cascading offers multiple extensions that can be used as taps to either read data from or write data to, such as SQL, NoSQL, and several other distributed technologies that fit nicely with the MapReduce paradigm. A data-processing application, for example, can use taps to collect data from a SQL database and some more from the Hadoop file system. It can then process the data, use a NoSQL database, and complete a machine learning stage. Finally, it can store some resulting data in another SQL database and update a memcached application.

Summary

This article explained the core technologies used in the distributed model of Hadoop.

Various subsystem configurations

Packt
25 Jun 2014
8 min read
(For more resources related to this topic, see here.)

In a high-performance environment, every costly resource instantiation needs to be minimized. This can be done effectively using pools. The different subsystems in WildFly often use various pools of resources to minimize the cost of creating new ones. These resources are often threads or various connection objects. Another benefit is that the pools work as gatekeepers, hindering the underlying system from being overloaded. This is performed by preventing client calls from reaching their target if a limit has been reached. In the upcoming sections of this article, we will provide an overview of the different subsystems and their pools.

The thread pool executor subsystem

The thread pool executor subsystem was introduced in JBoss AS 7. Other subsystems can reference thread pools configured in this one. This makes it possible to normalize and manage the thread pools via native WildFly management mechanisms, and it allows you to share thread pools across subsystems. The following code is an example taken from the WildFly Administration Guide (https://docs.jboss.org/author/display/WFLY8/Admin+Guide) that describes how the Infinispan subsystem may use the subsystem, setting up four different pools:

<subsystem>
  <thread-factory name="infinispan-factory" priority="1"/>
  <bounded-queue-thread-pool name="infinispan-transport">
    <core-threads count="1"/>
    <queue-length count="100000"/>
    <max-threads count="25"/>
    <thread-factory name="infinispan-factory"/>
  </bounded-queue-thread-pool>
  <bounded-queue-thread-pool name="infinispan-listener">
    <core-threads count="1"/>
    <queue-length count="100000"/>
    <max-threads count="1"/>
    <thread-factory name="infinispan-factory"/>
  </bounded-queue-thread-pool>
  <scheduled-thread-pool name="infinispan-eviction">
    <max-threads count="1"/>
    <thread-factory name="infinispan-factory"/>
  </scheduled-thread-pool>
  <scheduled-thread-pool name="infinispan-repl-queue">
    <max-threads count="1"/>
    <thread-factory name="infinispan-factory"/>
  </scheduled-thread-pool>
</subsystem>
...
<cache-container name="web" default-cache="repl" listener-executor="infinispan-listener" eviction-executor="infinispan-eviction" replication-queue-executor="infinispan-repl-queue">
  <transport executor="infinispan-transport"/>
  <replicated-cache name="repl" mode="ASYNC" batching="true">
    <locking isolation="REPEATABLE_READ"/>
    <file-store/>
  </replicated-cache>
</cache-container>

The following thread pools are available:

- unbounded-queue-thread-pool
- bounded-queue-thread-pool
- blocking-bounded-queue-thread-pool
- queueless-thread-pool
- blocking-queueless-thread-pool
- scheduled-thread-pool

The details of these thread pools are described in the following sections.

unbounded-queue-thread-pool

The unbounded-queue-thread-pool thread pool executor has a maximum size and an unlimited queue. If the number of running threads is less than the maximum size when a task is submitted, a new thread will be created. Otherwise, the task is placed in a queue. This queue is allowed to grow infinitely. The configuration properties are as follows:

max-threads: This specifies the maximum number of threads allowed to run simultaneously.
keepalive-time: This specifies the amount of time that pool threads should be kept running when idle. (If not specified, threads will run until the executor is shut down.)
thread-factory: This specifies the thread factory to use to create worker threads.
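As an illustration, such a pool could also be created from the CLI with something like the following command. The pool name and value here are our own, and the exact parameter syntax should be verified against your WildFly version:

/subsystem=threads/unbounded-queue-thread-pool=my-pool:add(max-threads=100)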
bounded-queue-thread-pool

The bounded-queue-thread-pool thread pool executor has a core size, a maximum size, and a specified queue length. If the number of running threads is less than the core size when a task is submitted, a new thread will be created; otherwise, the task will be put in the queue. If the queue's maximum size has been reached and the maximum number of threads hasn't been reached, a new thread is also created. If max-threads is hit, the call will be sent to the handoff-executor. If no handoff-executor is configured, the call will be discarded. The configuration properties are as follows:

core-threads: This is optional and should be less than max-threads.
queue-length: This specifies the maximum size of the queue.
max-threads: This specifies the maximum number of threads allowed to run simultaneously.
keepalive-time: This specifies the amount of time that pool threads should be kept running when idle. (If not specified, threads will run until the executor is shut down.)
handoff-executor: This specifies an executor to which tasks will be delegated in the event that a task cannot be accepted.
allow-core-timeout: This specifies whether core threads may time out; if false, only threads above the core size will time out.
thread-factory: This specifies the thread factory to use to create worker threads.

blocking-bounded-queue-thread-pool

The blocking-bounded-queue-thread-pool thread pool executor has a core size, a maximum size, and a specified queue length. If the number of running threads is less than the core size when a task is submitted, a new thread will be created; otherwise, the task will be put in the queue. If the queue's maximum size has been reached, a new thread is created unless max-threads has also been reached, in which case the call is blocked. The configuration properties are as follows:

core-threads: This is optional and should be less than max-threads.
queue-length: This specifies the maximum size of the queue.
max-threads: This specifies the maximum number of threads allowed to run simultaneously.
keepalive-time: This specifies the amount of time that pool threads should be kept running when idle. (If not specified, threads will run until the executor is shut down.)
allow-core-timeout: This specifies whether core threads may time out; if false, only threads above the core size will time out.
thread-factory: This specifies the thread factory to use to create worker threads.

queueless-thread-pool

The queueless-thread-pool thread pool is a thread pool executor without any queue. If the number of running threads is less than max-threads when a task is submitted, a new thread will be created; otherwise, the handoff-executor will be called. If no handoff-executor is configured, the call will be discarded. The configuration properties are as follows:

max-threads: This specifies the maximum number of threads allowed to run simultaneously.
keepalive-time: This specifies the amount of time that pool threads should be kept running when idle. (If not specified, threads will run until the executor is shut down.)
handoff-executor: This specifies an executor to which tasks will be delegated in the event that a task cannot be accepted.
thread-factory: This specifies the thread factory to use to create worker threads.
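To tie the preceding descriptions together, a hypothetical configuration (our own sketch, modeled on the Infinispan example earlier; the keepalive-time and handoff-executor element names are assumptions to verify against your server's schema) might wire a bounded pool to a queueless overflow pool like this:

<queueless-thread-pool name="overflow-pool">
  <max-threads count="5"/>
</queueless-thread-pool>
<bounded-queue-thread-pool name="main-pool">
  <core-threads count="10"/>
  <queue-length count="200"/>
  <max-threads count="50"/>
  <keepalive-time time="60" unit="seconds"/>
  <handoff-executor name="overflow-pool"/>
</bounded-queue-thread-pool>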
blocking-queueless-thread-pool

The blocking-queueless-thread-pool thread pool executor has no queue. If the number of running threads is less than max-threads when a task is submitted, a new thread will be created. Otherwise, the caller will be blocked. The configuration properties are as follows:

max-threads: This specifies the maximum number of threads allowed to run simultaneously.
keepalive-time: This specifies the amount of time that pool threads should be kept running when idle. (If not specified, threads will run until the executor is shut down.)
thread-factory: This specifies the thread factory to use to create worker threads.

scheduled-thread-pool

The scheduled-thread-pool thread pool is used by tasks that are scheduled to trigger at a certain time. The configuration properties are as follows:

max-threads: This specifies the maximum number of threads allowed to run simultaneously.
keepalive-time: This specifies the amount of time that pool threads should be kept running when idle. (If not specified, threads will run until the executor is shut down.)
thread-factory: This specifies the thread factory to use to create worker threads.

Monitoring

All of the pools just mentioned can be administered and monitored using both the CLI and JMX. (The Admin Console can be used to administer the pools, but it does not show any live data.) The following example and screenshots show access to an unbounded-queue-thread-pool called test.

Using the CLI, run the following command:

/subsystem=threads/unbounded-queue-thread-pool=test:read-resource(include-runtime=true)

The response to the preceding command is as follows:

{
  "outcome" => "success",
  "result" => {
    "active-count" => 0,
    "completed-task-count" => 0L,
    "current-thread-count" => 0,
    "keepalive-time" => undefined,
    "largest-thread-count" => 0,
    "max-threads" => 100,
    "name" => "test",
    "queue-size" => 0,
    "rejected-count" => 0,
    "task-count" => 0L,
    "thread-factory" => undefined
  }
}

Using JMX (query and result in the JConsole UI), run the following query:

jboss.as:subsystem=threads,unbounded-queue-thread-pool=test

An example thread pool, as seen through JMX, is shown in the following screenshot:

The following screenshot shows the corresponding information in the Admin Console:

The future of the thread subsystem

According to the official JIRA case WFLY-462 (https://issues.jboss.org/browse/WFLY-462), the central thread pool configuration has been targeted for removal in future versions of the application server. It is, however, uncertain whether all subprojects will adhere to this. The actual configuration will then be moved out to the subsystems themselves. This seems to be the way the general architecture of WildFly is moving in terms of pools—moving away from generic pools and making them subsystem-specific. The different types of pools described here are still valid, though.

Note that, contrary to previous releases, stateless EJBs are no longer pooled by default. More information on this is available in the JIRA case WFLY-1383, which can be found at https://issues.jboss.org/browse/WFLY-1383.

Serving and processing forms

Packt
24 Jun 2014
13 min read
(For more resources related to this topic, see here.)

Spring supports different view technologies, but if we are using JSP-based views, we can make use of the Spring tag library tags to make up our JSP pages. These tags provide many useful, common functionalities such as form binding, evaluating errors, outputting internationalized messages, and so on. In order to use these tags, we must add references to this tag library in our JSP pages as follows:

<%@ taglib prefix="form" uri="http://www.springframework.org/tags/form" %>
<%@ taglib prefix="spring" uri="http://www.springframework.org/tags" %>

The data transfer takes place from model to view via the controller. The following line is a typical example of how we put data into the model from a controller:

model.addAttribute("greeting", "Welcome");

Similarly, the next line shows how we retrieve that data in the view using a JSTL expression:

<p> ${greeting} </p>

JavaServer Pages Standard Tag Library (JSTL) is also a tag library, provided by Oracle. It is a collection of useful JSP tags that encapsulate the core functionality common to many JSP pages. We can add a reference to the JSTL tag library in our JSP pages as <%@ taglib prefix="c" uri="http://java.sun.com/jsp/jstl/core"%>.

However, what if we want to put data into the model from the view? How do we retrieve that data from the controller? For example, consider a scenario where an admin of our store wants to add new product information to our store by filling in and submitting an HTML form. How can we collect the values filled in the HTML form elements and process them in the controller? This is where the Spring tag library tags help us to bind the HTML tag elements' values to a form-backing bean in the model. Later, the controller can retrieve the form-backing bean from the model using the @ModelAttribute annotation (org.springframework.web.bind.annotation.ModelAttribute).

Form-backing beans (sometimes called form beans) are used to store form data. We can even use our domain objects as form beans; this works well when there's a close match between the fields on the form and the properties of our domain object. Another approach is to create separate classes for form beans, which are sometimes called Data Transfer Objects (DTOs).
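As a quick illustration of the DTO approach (this class is our own sketch, not part of the book's project), a form bean might simply mirror the form's fields:

// A hypothetical form bean that mirrors the fields of a product form
public class ProductForm {

    private String productId;
    private String name;
    private String unitPrice; // kept as a String so binding errors can be reported cleanly

    public String getProductId() { return productId; }
    public void setProductId(String productId) { this.productId = productId; }

    public String getName() { return name; }
    public void setName(String name) { this.name = name; }

    public String getUnitPrice() { return unitPrice; }
    public void setUnitPrice(String unitPrice) { this.unitPrice = unitPrice; }
}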
Time for action – serving and processing forms

The Spring tag library provides some special <form> and <input> tags that are more or less similar to the HTML form and input tags, but have some special attributes to bind the form elements' data with the form-backing bean. Let's create a Spring web form in our application to add new products to our product list by performing the following steps:

1. We open our ProductRepository interface and add one more method declaration in it as follows:

void addProduct(Product product);

2. We then add an implementation for this method in the InMemoryProductRepository class as follows:

public void addProduct(Product product) {
  listOfProducts.add(product);
}

3. We open our ProductService interface and add one more method declaration in it as follows:

void addProduct(Product product);

4. We add an implementation for this method in the ProductServiceImpl class as follows:

public void addProduct(Product product) {
  productRepository.addProduct(product);
}

5. We open our ProductController class and add two more request mapping methods as follows:

@RequestMapping(value = "/add", method = RequestMethod.GET)
public String getAddNewProductForm(Model model) {
  Product newProduct = new Product();
  model.addAttribute("newProduct", newProduct);
  return "addProduct";
}

@RequestMapping(value = "/add", method = RequestMethod.POST)
public String processAddNewProductForm(@ModelAttribute("newProduct") Product newProduct) {
  productService.addProduct(newProduct);
  return "redirect:/products";
}

6. Finally, we add one more JSP view file called addProduct.jsp under src/main/webapp/WEB-INF/views/ and add the following tag reference declarations in it as the very first lines:

<%@ taglib prefix="c" uri="http://java.sun.com/jsp/jstl/core"%>
<%@ taglib prefix="form" uri="http://www.springframework.org/tags/form" %>

Now, we add the following code snippet under the tag declaration lines and save addProduct.jsp (note that I have skipped the <form:input> binding tags for some of the fields of the product domain object, but I strongly encourage you to add binding tags for the skipped fields when you try out this exercise; a sketch of one such binding follows the listing):

<html>
<head>
  <meta http-equiv="Content-Type" content="text/html; charset=ISO-8859-1">
  <link rel="stylesheet" href="//netdna.bootstrapcdn.com/bootstrap/3.0.0/css/bootstrap.min.css">
  <title>Products</title>
</head>
<body>
  <section>
    <div class="jumbotron">
      <div class="container">
        <h1>Products</h1>
        <p>Add products</p>
      </div>
    </div>
  </section>
  <section class="container">
    <form:form modelAttribute="newProduct" class="form-horizontal">
      <fieldset>
        <legend>Add new product</legend>
        <div class="form-group">
          <label class="control-label col-lg-2 col-lg-2" for="productId">Product Id</label>
          <div class="col-lg-10">
            <form:input id="productId" path="productId" type="text" class="form:input-large"/>
          </div>
        </div>
        <!-- Similarly bind <form:input> tags for the name, unitPrice, manufacturer, category, unitsInStock, and unitsInOrder fields -->
        <div class="form-group">
          <label class="control-label col-lg-2" for="description">Description</label>
          <div class="col-lg-10">
            <form:textarea id="description" path="description" rows="2"/>
          </div>
        </div>
        <div class="form-group">
          <label class="control-label col-lg-2" for="discontinued">Discontinued</label>
          <div class="col-lg-10">
            <form:checkbox id="discontinued" path="discontinued"/>
          </div>
        </div>
        <div class="form-group">
          <label class="control-label col-lg-2" for="condition">Condition</label>
          <div class="col-lg-10">
            <form:radiobutton path="condition" value="New" />New
            <form:radiobutton path="condition" value="Old" />Old
            <form:radiobutton path="condition" value="Refurbished" />Refurbished
          </div>
        </div>
        <div class="form-group">
          <div class="col-lg-offset-2 col-lg-10">
            <input type="submit" id="btnAdd" class="btn btn-primary" value="Add"/>
          </div>
        </div>
      </fieldset>
    </form:form>
  </section>
</body>
</html>
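For instance, a sketch of what the skipped binding for the name field might look like, following the same Bootstrap markup as the listing (the exercise leaves the exact markup to you):

<div class="form-group">
  <label class="control-label col-lg-2" for="name">Name</label>
  <div class="col-lg-10">
    <form:input id="name" path="name" type="text" class="form:input-large"/>
  </div>
</div>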
Now, we run our application and enter the URL http://localhost:8080/webstore/products/add. We will be able to see a web page that displays a web form where we can add the product information, as shown in the following screenshot:

Add the product's web form

Now, we enter all the information related to the new product that we want to add and click on the Add button; we will see the new product added to the product listing page under the URL http://localhost:8080/webstore/products.

What just happened?

In the whole sequence, steps 5 and 6 are very important steps that need to be observed carefully. First, I will give you a brief note on what we have done in steps 1 to 4. In step 1, we created a method declaration, addProduct, in our ProductRepository interface to add new products. In step 2, we implemented the addProduct method in our InMemoryProductRepository class; the implementation simply updates the existing listOfProducts by adding a new product to the list. Steps 3 and 4 are just a service layer extension for ProductRepository. In step 3, we declared a similar method, addProduct, in our ProductService interface, and we implemented it in step 4 to add products to the repository via the productRepository reference.

Okay, coming back to the important steps; in step 5, we added two request mapping methods, namely getAddNewProductForm and processAddNewProductForm, as follows:

@RequestMapping(value = "/add", method = RequestMethod.GET)
public String getAddNewProductForm(Model model) {
  Product newProduct = new Product();
  model.addAttribute("newProduct", newProduct);
  return "addProduct";
}

@RequestMapping(value = "/add", method = RequestMethod.POST)
public String processAddNewProductForm(@ModelAttribute("newProduct") Product productToBeAdded) {
  productService.addProduct(productToBeAdded);
  return "redirect:/products";
}

If you observe these methods carefully, you will notice a peculiar thing: both methods have the same URL mapping value in their @RequestMapping annotation (value = "/add"). So, if we enter the URL http://localhost:8080/webstore/products/add in the browser, which method will Spring MVC map that request to?

The answer lies in the second attribute of the @RequestMapping annotation (method = RequestMethod.GET and method = RequestMethod.POST). Even though both methods have the same URL mapping, they differ in the request method. What happens behind the scenes is that when we enter the URL http://localhost:8080/webstore/products/add in the browser, it is treated as a GET request, so Spring MVC maps it to the getAddNewProductForm method. Within this method, we simply attach a new empty Product domain object to the model under the attribute name newProduct:

Product newProduct = new Product();
model.addAttribute("newProduct", newProduct);

So, in the view addProduct.jsp, we can access this model object, newProduct. Before jumping into the processAddNewProductForm method, let's review the addProduct.jsp view file for some time so that we are able to understand the form processing flow without confusion. In addProduct.jsp, we have added a <form:form> tag from the Spring tag library using the following line of code:

<form:form modelAttribute="newProduct" class="form-horizontal">

Since this special <form:form> tag comes from the Spring tag library, we need to add a reference to this tag library in our JSP file.
That's why we added the following line at the top of the addProduct.jsp file in step 6:

<%@ taglib prefix="form" uri="http://www.springframework.org/tags/form" %>

In the Spring <form:form> tag, one of the important attributes is modelAttribute. In our case, we assigned the value newProduct as the value of modelAttribute in the <form:form> tag. If you recall, this value of modelAttribute and the attribute name we used to store the newProduct object in the model from our getAddNewProductForm method are the same. So, the newProduct object that we attached to the model in the controller method (getAddNewProductForm) is now bound to the form. This object is called the form-backing bean in Spring MVC.

Okay, now notice each <form:input> tag inside the <form:form> tag, shown in the following code. You will observe that there is a common attribute in every tag. This attribute's name is path:

<form:input id="productId" path="productId" type="text" class="form:input-large"/>

The path attribute just indicates the field name relative to the form-backing bean. So, the value that is entered in this input box at runtime will be bound to the corresponding field of the form bean.

Okay, now is the time to come back and review our processAddNewProductForm method. When will this method be invoked? It will be invoked once we press the submit button of our form. Since every form submission is considered a POST request, this time the browser will send a POST request to the same URL, that is, http://localhost:8080/webstore/products/add. So, this time, the processAddNewProductForm method gets invoked, since it is a POST request. Inside the processAddNewProductForm method, we simply call the addProduct service method to add the new product to the repository, as follows:

productService.addProduct(productToBeAdded);

However, the interesting question here is: how is the productToBeAdded object populated with the data that we entered in the form? The answer lies within the @ModelAttribute annotation (org.springframework.web.bind.annotation.ModelAttribute). Note the method signature of the processAddNewProductForm method, shown in the following line of code:

public String processAddNewProductForm(@ModelAttribute("newProduct") Product productToBeAdded)

Here, if you notice the value attribute of the @ModelAttribute annotation, you will observe a pattern: the value of the @ModelAttribute annotation and the value of modelAttribute from the <form:form> tag are the same. So, Spring MVC knows that it should assign the form-bound newProduct object to the productToBeAdded parameter of the processAddNewProductForm method.

The @ModelAttribute annotation is not only used to retrieve an object from the model; if we want to, we can even use it to add objects to the model. For instance, we can rewrite our getAddNewProductForm method to something like the following code with the use of the @ModelAttribute annotation:

@RequestMapping(value = "/add", method = RequestMethod.GET)
public String getAddNewProductForm(@ModelAttribute("newProduct") Product newProduct) {
  return "addProduct";
}

You can see that we haven't created any new empty Product domain object and attached it to the model. All we have done is added a parameter of the type Product and annotated it with the @ModelAttribute annotation, so that Spring MVC knows it should create an object of Product and attach it to the model under the name newProduct.
One more thing that needs to be observed in the processAddNewProductForm method is the logical view name it returns: redirect:/products. So, what are we trying to tell Spring MVC by returning the string redirect:/products?

To get the answer, observe the logical view name string carefully. If we split this string at the : (colon) symbol, we get two parts; the first part is the prefix redirect, and the second part is something that looks like a request path, /products. So, instead of returning a view name, we simply instruct Spring to issue a redirect request to the request path /products, which is the request path for the list method of our ProductController class. So, after submitting the form, we list the products using the list method of ProductController.

As a matter of fact, when we return any request path with the redirect: prefix from a request mapping method, Spring uses a special view object, RedirectView (org.springframework.web.servlet.view.RedirectView), to issue the redirect command behind the screen. Instead of landing on a web page after the successful submission of the web form, we spawn a new request to the request path /products with the help of RedirectView. This pattern is called Redirect After Post, and it is a common pattern to use with web-based forms. We use this pattern to avoid double submission of the same form: if we press the browser's refresh or back button after submitting the form, there is a chance that the same form will be resubmitted.

Summary

This article introduced you to the Spring and Spring form tag libraries in web form handling. You also learned how to bind domain objects with views and how to use message bundles to externalize label caption texts.

Kendo UI DataViz – Advanced Charting

Packt
23 Jun 2014
10 min read
(For more resources related to this topic, see here.)

Creating a chart to show stock history

The Kendo UI library provides a specialized chart widget that can be used to display the stock price data for a particular stock over a period of time. In this recipe, we will take a look at creating a Stock chart and customizing it.

Getting started

Include the CSS files kendo.dataviz.min.css and kendo.dataviz.default.min.css in the head section. These files are used in styling some of the parts of a stock history chart.

How to do it…

A Stock chart is made up of two charts: a pane that shows you the stock history and another pane that is used to navigate through the chart by changing the date range. The stock price for a particular stock on a day can be denoted by the following five attributes:

- Open: This shows you the value of the stock when the trading starts for the day
- Close: This shows you the value of the stock when the trading closes for the day
- High: This shows you the highest value the stock was able to attain on the day
- Low: This shows you the lowest value the stock reached on the day
- Volume: This shows you the total number of shares of that stock traded on the day

Let's assume that a service returns this data in the following format:

[
  {
    "Date": "2013/01/01",
    "Open": 40.11,
    "Close": 42.34,
    "High": 42.5,
    "Low": 39.5,
    "Volume": 10000
  }
  .
  .
  .
]

We will use the preceding data to create a Stock chart. The kendoStockChart function is used to create a Stock chart, and it is configured with a set of options similar to the area chart or Column chart. In addition to the series data, you can specify the navigator option to show a navigation pane below the chart that contains the entire stock history:

$("#chart").kendoStockChart({
  title: {
    text: 'Stock history'
  },
  dataSource: {
    transport: {
      read: '/services/stock?q=ADBE'
    }
  },
  dateField: "Date",
  series: [{
    type: "candlestick",
    openField: "Open",
    closeField: "Close",
    highField: "High",
    lowField: "Low"
  }],
  navigator: {
    series: {
      type: 'area',
      field: 'Volume'
    }
  }
});

In the preceding code snippet, the DataSource object refers to the remote service that would return the stock data for a set of days. The series option specifies the series type as candlestick; a candlestick chart is used here to indicate the stock price for a particular day. The mappings for openField, closeField, highField, and lowField are specified; they will be used in plotting the chart and also to show a tooltip when the user hovers over it. The navigator option is specified to create an area chart, which uses volume data to plot the chart. The dateField option is used to specify the mapping between the date field in the chart and the one in the response.
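If you want to try the widget without a backing service, the DataSource can also be fed a local array. Here is a quick sketch of our own (the sample values are invented):

$("#chart").kendoStockChart({
  title: { text: 'Stock history (local data)' },
  dataSource: {
    // Inline data in the same shape the service would return
    data: [
      { "Date": "2013/01/01", "Open": 40.11, "Close": 42.34, "High": 42.5, "Low": 39.5, "Volume": 10000 },
      { "Date": "2013/01/02", "Open": 42.34, "Close": 41.80, "High": 43.0, "Low": 41.5, "Volume": 12000 }
    ]
  },
  dateField: "Date",
  series: [{ type: "candlestick", openField: "Open", closeField: "Close", highField: "High", lowField: "Low" }],
  navigator: { series: { type: 'area', field: 'Volume' } }
});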
How it works…

When you load the page, you will see two panes being shown; the navigator is below the main chart. By default, the chart displays data for all the dates in the DataSource object, as shown in the following screenshot:

In the preceding screenshot, a candlestick chart is created, and it shows the stock price over a period of time. Also, notice that in the navigator pane, all date ranges are selected by default, and hence they are reflected in the candlestick chart as well. When you hover over the series, you will notice that the stock quote for the selected date is shown. This includes the date and other fields such as Open, High, Low, and Close. The area of the chart is adjusted to show the stock price for various dates such that the dates are evenly distributed.

In the previous case, the dates range from January 1, 2013 to January 31, 2013. However, when you hover over the series, you will notice that some of the dates are omitted. To overcome this, you can either increase the width of the chart area or use the navigator to reduce the date range. The former option is not advisable if the date range spans several months or years. To reduce the date range in the navigator, move the two date range selectors towards each other to narrow down the dates, as shown in the following screenshot:

When you try to narrow down the dates, you will see a tooltip in the chart, indicating the date range that you are trying to select. The candlestick chart is adjusted to show the stock price for the selected date range. Also, notice that the opacity of the selected date range in the navigator remains the same while the rest of the area's opacity is reduced. Once the date range is selected, the selected pane can be moved in the navigator.

There's more…

There are several options available to customize the behavior and the look and feel of the Stock Chart widget.

Specifying the date range in the navigator when initializing the chart

By default, all date ranges in the chart are selected, and the user has to narrow them down in the navigator pane. When you work with a large dataset, you will want to show the stock data for a specific range of dates when the chart is rendered. To do this, specify the select option in navigator:

navigator: {
  series: {
    type: 'area',
    field: 'Volume'
  },
  select: {
    from: '2013/01/07',
    to: '2013/01/14'
  }
}

In the preceding code snippet, the from and to date ranges are specified. Now, when you render the page, you will see that the same dates are selected in the navigator pane.

Customizing the look and feel of the Stock Chart widget

There are various options available to customize the navigator pane in the Stock Chart widget. Let's increase the height of the pane and also include a title text for it:

navigator: {
  . .
  pane: {
    height: '50px',
    title: {
      text: 'Stock Volume'
    }
  }
}

Now when you render the page, you will see that the title has been added and the height of the navigator pane has been increased.

Using the Radial Gauge widget

The Radial Gauge widget allows you to build a dashboard-like application wherein you want to indicate a value that lies in a specific range. For example, a car's dashboard can contain a couple of Radial Gauge widgets that can be used to indicate the current speed and RPM.

How to do it…

To create a Radial Gauge widget, invoke the kendoRadialGauge function on the selected DOM element. A Radial Gauge widget contains some components, and it can be configured by providing options, as shown in the following code snippet:

$("#chart").kendoRadialGauge({
  scale: {
    startAngle: 0,
    endAngle: 180,
    min: 0,
    max: 180
  },
  pointer: {
    value: 20
  }
});

Here the scale option is used to configure the range for the Radial Gauge widget. The startAngle and endAngle options are used to indicate the angles at which the Radial Gauge widget's range should start and end. By default, their values are 30 and 210, respectively. The other two options, min and max, are used to indicate the range of values over which the value can be plotted. The pointer option is used to indicate the current value in the Radial Gauge widget. There are several options available to configure the Radial Gauge widget; these include positioning the labels and configuring the look and feel of the widget.
How it works…

When you render the page, you will see a Radial Gauge widget that shows the scale from 0 to 180 and the pointer pointing to the value 20. Here, the values from 0 to 180 are evenly distributed, that is, the major ticks are in steps of 20. There are 10 minor ticks, that is, ticks between two major ticks. The widget shows values in the clockwise direction, and the pointer value 20 is indicated on the scale.

There's more…

The Radial Gauge widget can be customized to a great extent by including various options when initializing the widget.

Changing the major and minor unit values

Specify the majorUnit and minorUnit options in the scale:

scale: {
  startAngle: 0,
  endAngle: 180,
  min: 0,
  max: 180,
  majorUnit: 30,
  minorUnit: 10
}

The scale option specifies the majorUnit value as 30 (instead of the default 20) and minorUnit as 10. This will now add labels at every 30 units and show two minor ticks between two major ticks, each at a distance of 10 units, as shown in the following screenshot:

The ticks shown in the preceding screenshot can also be customized:

scale: {
  . .
  minorTicks: {
    size: 30,
    width: 1,
    color: 'green'
  },
  majorTicks: {
    size: 100,
    width: 2,
    color: 'red'
  }
}

Here, the size option is used to specify the length of the tick marker, width is used to specify the thickness of the tick, and the color option is used to change the color of the tick. Now when you render the page, you will see the changes for the major and minor ticks.

Changing the color of the radial using the ranges option

The scale attribute can include the ranges option to specify a radial color for the various ranges on the Radial Gauge widget:

scale: {
  . .
  ranges: [
    { from: 0, to: 60, color: '#00F' },
    { from: 60, to: 130, color: '#0F0' },
    { from: 130, to: 200, color: '#F00' }
  ]
}

In the preceding code snippet, the ranges array contains three objects that specify the color to be applied along the circumference of the widget. The from and to values are used to specify the range of tick values for which the color should be applied. Now when you render the page, you will see the Radial Gauge widget showing the colors for the various ranges along its circumference, as shown in the following screenshot:

In the preceding screenshot, the startAngle and endAngle fields are changed to 10 and 250, respectively. The widget can be further customized by moving the labels outside. This can be done by specifying the labels attribute with position set to outside. In the preceding screenshot, the labels are positioned outside; hence, the radial appears inside.

Updating the pointer value using a Slider widget

The pointer value is set when the Radial Gauge widget is initialized. It is possible to change the pointer value of the widget at runtime using a Slider widget. The changes in the Slider widget can be observed, and the pointer value of the Radial Gauge can be updated accordingly. Let's use the Radial Gauge widget from earlier. A Slider widget is created using an input element:

<input id="slider" value="0" />

The next step is to initialize the previously mentioned input element as a Slider widget:

$('#slider').kendoSlider({
  min: 0,
  max: 200,
  showButtons: false,
  smallStep: 10,
  tickPlacement: 'none',
  change: updateRadialGauge
});

The min and max values specify the range of values that can be set for the slider. The smallStep attribute specifies the minimum increment value of the slider. The change attribute specifies the function that should be invoked when the slider value changes.
The updateRadialGauge function should then update the value of the pointer in the Radial Gauge widget:

function updateRadialGauge() {
  $('#chart').data('kendoRadialGauge')
    .value($('#slider').val());
}

The function gets the instance of the widget and then sets its value to the value obtained from the Slider widget. When the slider value is changed to 100, for instance, you will notice that the change is reflected in the Radial Gauge widget.
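Putting the recipe together, a minimal page wiring the slider to the gauge might look like the following sketch; it assumes the jQuery and Kendo UI DataViz script and stylesheet includes are already in place, and it widens the gauge scale to 200 to match the slider's range:

<div id="chart"></div>
<input id="slider" value="20" />
<script>
  $(function() {
    $('#chart').kendoRadialGauge({
      scale: { startAngle: 0, endAngle: 180, min: 0, max: 200 },
      pointer: { value: 20 }
    });

    // Push the current slider value into the gauge's pointer.
    function updateRadialGauge() {
      $('#chart').data('kendoRadialGauge')
        .value($('#slider').val());
    }

    $('#slider').kendoSlider({
      min: 0,
      max: 200,
      showButtons: false,
      smallStep: 10,
      tickPlacement: 'none',
      change: updateRadialGauge
    });
  });
</script>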
Adding a developer with Django forms

Packt
18 Jun 2014
8 min read
(For more resources related to this topic, see here.)

When displaying the form, Django will generate the contents of the form template. We may change the type of field that the object sends to the template if needed. While receiving the data, the object will check the contents of each form element. If there is an error, the object will send a clear error to the client. If there is no error, we are certain that the form data is correct.

CSRF protection

Cross-Site Request Forgery (CSRF) is an attack that targets a user who is loading a page that contains a malicious request. The malicious script uses the authentication of the victim to perform unwanted actions, such as changing data or accessing sensitive data. The following steps are executed during a CSRF attack:

Script injection by the attacker.
An HTTP query is performed to get a web page.
Downloading of the web page that contains the malicious script.
Malicious script execution.

In this kind of attack, the hacker can also modify information that may be critical for the users of the website. Therefore, it is important for a web developer to know how to protect their site from this kind of attack, and Django will help with this.

To re-enable CSRF protection, we must edit the settings.py file and uncomment the following line:

'django.middleware.csrf.CsrfViewMiddleware',

This protection ensures that the data that has been sent was really sent from the expected page of our site. You can check this in two easy steps:

When creating an HTML or Django form, we insert a CSRF token that the server will also store. When the form is sent, the CSRF token is sent too.
When the server receives the request from the client, it checks the CSRF token. If it is valid, it validates the request.

Do not forget to add the CSRF token in all the forms of the site where protection is enabled. HTML forms are also involved, and the one we have just made does not include the token. For the previous form to work with CSRF protection, we need to add the following line between the <form> and </form> tags:

{% csrf_token %}

The view with a Django form

We will first write the view that contains the form, because the template will display the form defined in the view. Django forms can be stored in separate files, such as a forms.py file at the root of the project. We include the form directly in our view because it will only be used on this page. Depending on the project, you must choose which architecture suits you best. We will create our view in the views/create_developer.py file with the following lines:

from django.shortcuts import render
from django.http import HttpResponse
from TasksManager.models import Supervisor, Developer
# This line imports the Django forms package
from django import forms

# This class creates the form with four fields. It is an object that inherits
# from forms.Form. It contains attributes that define the form fields.
class Form_inscription(forms.Form):
    name = forms.CharField(label="Name", max_length=30)
    login = forms.CharField(label="Login", max_length=30)
    password = forms.CharField(label="Password", widget=forms.PasswordInput)
    supervisor = forms.ModelChoiceField(label="Supervisor", queryset=Supervisor.objects.all())

# View for create_developer
def page(request):
    if request.POST:
        # The form has been posted, so we create a form filled with the data sent by POST
        form = Form_inscription(request.POST)
        if form.is_valid():
            # This checks that the data sent by the user is consistent with
            # the fields defined in the form.
            # cleaned_data returns values filtered by the clean() method, so
            # the recovered data is safe.
            name = form.cleaned_data['name']
            login = form.cleaned_data['login']
            password = form.cleaned_data['password']
            # The supervisor variable is of the Supervisor type; cleaned_data
            # directly returns a model instance for this field.
            supervisor = form.cleaned_data['supervisor']
            new_developer = Developer(name=name, login=login, password=password, email="", supervisor=supervisor)
            new_developer.save()
            return HttpResponse("Developer added")
        else:
            # To send the form to the template, send it like any other variable.
            # We send it when the form is not valid in order to display the
            # errors to the user.
            return render(request, 'en/public/create_developer.html', {'form': form})
    else:
        # The user has not yet submitted the form; we instantiate it with no data
        form = Form_inscription()
        return render(request, 'en/public/create_developer.html', {'form': form})

When the posted data is invalid, the form is displayed again with an error message next to the fields in error.

Template of a Django form

We set the template for this view. The template will be much shorter:

{% extends "base.html" %}
{% block title_html %}
  Create Developer
{% endblock %}
{% block h1 %}
  Create Developer
{% endblock %}
{% block article_content %}
  <form method="post" action="{% url "create_developer" %}" >
    {% csrf_token %} <!-- This line inserts a CSRF token. -->
    <table>
      {{ form.as_table }} <!-- This line displays the rows of the form. -->
    </table>
    <p><input type="submit" value="Create" /></p>
  </form>
{% endblock %}

As the complete form operation is in the view, the template simply executes the as_table method to generate the HTML form. The previous code displays the data in tabular form. The three methods to generate an HTML form structure are as follows:

as_table: This displays the fields in <tr> <td> tags
as_ul: This displays the form fields in <li> tags
as_p: This displays the form fields in <p> tags

So, we quickly wrote a secure form with error handling and CSRF protection through Django forms.

The form based on a model

ModelForms are Django forms based on models. The fields of these forms are automatically generated from the model that we have defined. Indeed, developers are often required to create forms with fields that correspond to those in the database, even for a non-MVC website. These particular forms have a save() method that will save the form data in a new record.

The supervisor creation form

To broach ModelForms, we will take, as an example, the addition of a supervisor. For this, we will create a new page with the following URL:

url(r'^create-supervisor$', 'TasksManager.views.create_supervisor.page', name="create_supervisor"),

Our view will contain the following code:

from django.shortcuts import render
from TasksManager.models import Supervisor
from django import forms
from django.http import HttpResponseRedirect
from django.core.urlresolvers import reverse

def page(request):
    if len(request.POST) > 0:
        form = Form_supervisor(request.POST)
        if form.is_valid():
            # If the form is valid, save() stores the form data in a new record
            form.save(commit=True)
            # Redirect to the URL named public_index. We use the reverse()
            # function to get the URL from the name defined in urls.py.
            return HttpResponseRedirect(reverse('public_index'))
        else:
            return render(request, 'en/public/create_supervisor.html', {'form': form})
    else:
        form = Form_supervisor()
        return render(request, 'en/public/create_supervisor.html', {'form': form})

# Here we create a class that inherits from ModelForm.
class Form_supervisor(forms.ModelForm):
    # We extend the Meta class of the ModelForm. This class allows us to
    # define the properties of the ModelForm.
    class Meta:
        # We define the model that the form is based on.
        model = Supervisor
        # We exclude certain fields from this form. It would also have been
        # possible to do the opposite, that is, to list the desired fields
        # with the fields property.
        exclude = ('date_created', 'last_connexion', )

As seen in the line exclude = ('date_created', 'last_connexion', ), it is possible to restrict the form fields. The exclude and fields properties must be used appropriately. Both receive, as an argument, a tuple of the fields to exclude or include. They can be described as follows:

exclude: This is used when the form is reserved for the administrator, because if you add a field to the model, it is automatically included in the form.
fields: This is used when the form is accessible to ordinary users, because a field added to the model later will not be visible to them.

For example, take a website selling royalty-free images with a registration form based on a ModelForm. If the administrator adds a credits field to the extended user model, and the developer has used the exclude property without adding credits to it, the new field appears in the form and a user will be able to grant himself as many credits as he/she wants. (A short sketch contrasting the two properties appears at the end of this article.)

We will reuse our previous template, changing only the URL in the action attribute of the <form> tag:

{% url "create_supervisor" %}

This example shows us that ModelForms can save a lot of time in development by generating forms that can still be customized (by modifying the validation, for example).

Summary

This article discussed Django forms: how to create forms with Django and how to process them.

Resources for Article:

Further resources on this subject:
So, what is Django? [article]
Creating an Administration Interface in Django [article]
Django Debugging Overview [article]
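As promised above, here is a minimal sketch contrasting the fields property with the exclude property used in this article. The field names listed are assumptions about what the Supervisor model defines and are purely illustrative:

class Form_supervisor(forms.ModelForm):
    class Meta:
        model = Supervisor
        # Only the listed fields appear in the form; a field added to the
        # model later stays out of the form until it is listed here.
        fields = ('name', 'login', 'password')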
Working with Live Data and AngularJS

Packt
12 Jun 2014
14 min read
(For more resources related to this topic, see here.)

Big Data is a new field that is growing every day. HTML5 and JavaScript applications are being used to showcase these large volumes of data in many new and interesting ways. Some of the latest client implementations are built with libraries such as AngularJS, because of its ability to efficiently handle and organize data in many forms.

Making business-level decisions based on real-time data is a revolutionary concept. Humans have only been able to fathom metrics from large-scale systems, in real time, for the last decade at most. During this time, the technology to collect large amounts of data has grown tremendously, but the high-level applications that use this data are only just catching up. Anyone can collect large amounts of data with today's complex distributed systems. Displaying this data in formats that allow any level of user to digest and understand its meaning is currently the leading edge of the technology. There are many different formats in which raw data can be displayed; the trick is to figure out the most efficient ways to showcase patterns and trends, which allow for more accurate business-level decisions to be made.

We live in a fast-paced world where everyone wants something done in real time. Load times must be in milliseconds, new features are requested daily, and deadlines get shorter and shorter. The Web gives companies the ability to generate revenue from a completely new market, and AngularJS is on the leading edge. This new market creates many new requirements for HTML5 applications. JavaScript applications are becoming commonplace in major companies, which use JavaScript to showcase many different types of data, from inward-facing to outward-facing products.

Working with live data sets in client-side applications is common practice and is the real-world standard. Most applications today use some type of live data to accomplish a given set of tasks, and these tasks rely on the data to render views that the user can visualize and interact with. There are many advantages of working with the Web for data visualization, and we are going to showcase how these tie into an AngularJS application.

AngularJS offers different methods to build views that elegantly display large amounts of data in flexible and snappy formats. Some of these methods feed directives data that has been requested and resolved, while others allow the directive to maintain control of the requests. We will go over these different techniques for efficiently getting live data into the view layer by creating different real-world examples. We will also go over how to properly test directives that rely on live data to achieve their view successfully.

Techniques that drive directives

Most standard data requirements for a modern application involve an entire view that depends on a set of data, and this data should depend on the current state of the application. The state can be determined in different ways. A common tactic is to build URLs that replicate a snapshot of the application's state. This can be done with a combination of URL paths and parameters. URL paths and parameters are what you will commonly see change when you visit a website and start clicking around. An AngularJS application is made up of different route configurations that use the URL to determine which action to take.
Each configuration has an associated controller, template, and other options. These configurations work in unison to get data into the application in the most efficient ways. AngularUI also offers its own routing system. This UI-Router is a simple system built on complex concepts, which allows nested views to be controlled by different state options. This concept yields the same result as ngRoute, which is to get data into the controller; however, UI-Router does it in a more eloquent way, which creates more options. AngularJS 2.0 will contain a hybrid router that utilizes the best of each.

Once the controller gets the data, it feeds the retrieved data to the template views. The template is what holds the directives that are created to perform the view layer functionality. The controller feeds directives data, which forces the directives to rely on the controllers to be in charge of said data. This data can either be fed immediately after the route configurations are executed, or the application can wait for the data to be resolved.

AngularJS gives you the ability to make sure that data requests have completed successfully before any controller logic is executed. The method is called resolving data, and it is utilized by adding resolve functions to the route configurations. This allows you to write the business logic in the controller in a synchronous manner, without having to write callbacks, which can be counter-intuitive.

The XHR extensions of AngularJS are built using promise objects. These promise objects are basically a way to ensure that data has been successfully retrieved or to verify whether an error has occurred. Since JavaScript embraces callbacks at its core, there are many points of failure with respect to the timing of when data is ready to be worked with. This is where libraries such as the Q library come into play. The promise object allows the execution thread to resemble a more synchronous flow, which reduces complexity and increases readability.

The $q library

The $q factory is a lite instantiation of the formally accepted Q library (https://github.com/kriskowal/q). This lite package contains only the functions that are needed to defer JavaScript callbacks asynchronously, based on the specifications provided by the Q library. The benefits of using this object are immense when working with live data. Basically, the $q library allows a JavaScript application to mimic synchronous behavior when dealing with asynchronous data requests or methods that are not thread blocked by nature. This means that we can now successfully write our application's logic in a way that follows a synchronous flow.

ES6 (ECMAScript 6) incorporates promises at its core. This will eventually alleviate the need for many functions inside the $q library, or the entire library itself, in AngularJS 2.0.

The core AngularJS service that is related to CRUD operations is called $http. This service uses the $q library internally to allow the power of promises to be used anywhere a data request is made. Here is an example of a service that uses the $q object in order to create an easy way to resolve data in a controller. Refer to the following code:

this.getPhones = function() {
  var request = $http.get('phones.json'),
      promise;
  promise = request.then(function(response) {
    return response.data;
  }, function(errorResponse) {
    return errorResponse;
  });
  return promise;
};

Here, we can see that the phoneService function uses the $http service to request all of the phones.
The getPhones method creates a request object whose then function returns a promise object. This promise object is returned synchronously. Once the data is ready, the then function is called and the correct data response is returned. This service is best showcased when used in conjunction with a resolve function that feeds data into a controller. The resolve function accepts the promise object being returned and only allows the controller to be executed once all of the phones have been resolved or rejected.

The rest of the code that is needed for this example is the application's configuration code. The config process is executed on the initialization of the application. This is where the resolve function is supposed to be implemented. Refer to the following code:

var app = angular.module('angularjs-promise-example', ['ngRoute']);

app.config(function($routeProvider) {
  $routeProvider.when('/', {
    controller: 'PhoneListCtrl',
    templateUrl: 'phoneList.tpl.html',
    resolve: {
      phones: function(phoneService) {
        return phoneService.getPhones();
      }
    }
  }).otherwise({
    redirectTo: '/'
  });
});

app.controller('PhoneListCtrl', function($scope, phones) {
  $scope.phones = phones;
});

A live example of this basic application can be found at http://plnkr.co/edit/f4ZDCyOcud5WSEe9L0GO?p=preview.

Directives take over once the controller executes its initial context. This is where the $compile function goes through all of its stages and links directives to the controller's template. The controller will still be in charge of driving the data that sits inside the template view. This is why it is important for directives to know what to do when their data changes.

How should data be watched for changes?

Most directives are on a need-to-know basis about the details of how they receive the data that is in charge of their view. This is a separation of logic that reduces cyclomatic complexity in an application. The controllers should be in charge of requesting data and passing this data to directives through their associated $scope object. Directives should be in charge of creating DOM based on the data they receive and on when the data changes. There are an infinite number of possibilities that a directive can try to achieve once it receives its data. Our goal is to showcase how to watch live data for changes and how to make sure that this works at scale, so that our directives have the opportunity to fulfill their specific tasks.

There are three built-in ways to watch data in AngularJS. Directives use the following methods to carry out specific tasks based on the different conditions set in the source of the program:

Watching an object's identity for changes
Recursively watching all of the object's properties for changes
Watching just the top level of an object's properties for changes

Each of these methods has its own specific purpose. The first method can be used if the variable that is being watched is a primitive type. The second method is used for deep comparisons between objects. The third is used to do a shallow watch on an array of any type or on a normal object.

Let's look at an example that shows the last two watcher types. This example uses jsPerf to showcase our logic. We are leaving the first watcher out because it only watches primitive types and we will be watching many objects for different levels of equality.
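For reference, the first method, an identity watch on a primitive, is a one-line setup. A minimal sketch, where the counter scope variable is purely illustrative:

$scope.$watch('counter', function(newVal, oldVal) {
  // Fires whenever the identity of $scope.counter changes.
});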
This example sets the $scope variable in the app's run function because we want to make sure that the jsPerf test resets each data set upon initialization. Refer to the following code:

app.run(function($rootScope) {
  $rootScope.data = [
    {'bob': true}, {'frank': false}, {'jerry': 'hey'}, {'bargle': false},
    {'bob': true}, {'bob': true}, {'frank': false}, {'jerry': 'hey'},
    {'bargle': false}, {'bob': true}, {'bob': true}, {'frank': false}
  ];
});

This run function sets up the data object that we will watch for changes. It stays constant throughout every test we run and is reset to this form at the beginning of each test.

Doing a deep watch on $rootScope.data

This watch function does a deep watch on the data object. The true flag is the key to setting off a deep watch. The purpose of a deep comparison is to go through every object property and compare it for changes on every digest. This is an expensive function and should be used only when necessary. Refer to the following code:

app.service('Watch', function($rootScope) {
  return {
    run: function() {
      $rootScope.$watch('data', function(newVal, oldVal) {
      }, true);
      // The digest is here because of the jsPerf test. We are using this
      // run function to mimic a real environment.
      $rootScope.$digest();
    }
  };
});

Doing a shallow watch on $rootScope.data

The shallow watch is called whenever a top-level object is changed in the data object. This is less expensive because the application does not have to traverse n levels of data. Refer to the following code:

app.service('WatchCollection', function($rootScope) {
  return {
    run: function() {
      $rootScope.$watchCollection('data', function(n, o) {
      });
      $rootScope.$digest();
    }
  };
});

During each individual test, we get each watcher service and call its run function. This fires the watcher on initialization, and then we push another test object to the data array, which fires the watch's trigger function again. That is the end of the test. We are using jsperf.com to show the results. Note that the watchCollection function is much faster and should be used in cases where it is acceptable to shallow watch an object. The example can be found at http://jsperf.com/watchcollection-vs-watch/5.

This test implies that the watchCollection function is a better choice for watching an array of objects that can be shallow watched for changes. The same holds for an array of strings, integers, or floats. This brings up more interesting points, such as the following:

Does our directive depend on a deep watch of the data?
Do we want to use the $watch function, even though it is slow and memory taxing?
Is it possible to use the $watch function if we are using large data objects?

The directives that have been used in this book have used the watch function to watch data directly, but there are other methods to update the view if our directives depend on deep watchers and very large data sets.

Directives can be in charge

There are some libraries that believe that elements can be in charge of when they should request data. Polymer (http://www.polymer-project.org/) is a JavaScript library that allows DOM elements to control how data is requested, in a declarative format. This is a slight shift from the processes that have been covered so far in this article, when thinking about what directives are meant for and how they should receive data. Let's come up with an actual use case that could possibly allow this type of behavior. Let's consider a page that has many widgets on it.
A widget is a directive that needs a set of large data objects to render its view. To be more specific, let's say we want to show a catalog of phones. Each phone has a very large amount of data associated with it, and we want to display this data in a very clean, simple way. Since watching large data sets can be very expensive, what will allow directives to always have the data they require, depending on the state of the application?

One option is to not use the controller to resolve the Big Data and inject it into a directive, but rather to use the controller to request directive configurations that tell the directive to request certain data objects. Some people would say this goes against normal conventions, but I say it's necessary when dealing with many widgets in the same view, which individually deal with large amounts of data. This method of using directives to determine when data requests should be made is only suggested if many widgets on a page depend on large data sets.

To create this in a real-life example, let's take the phoneService function, which was created earlier, and add a new method to it called getPhone. Refer to the following code:

this.getPhone = function(config) {
  return $http.get(config.url);
};

Now, instead of requesting all the details on the initial call, the original getPhones method only needs to return phone objects with a name and an id value. This allows the application to request the details on demand. To do this, we do not need to alter the getPhones method that was created earlier; we only need to alter the data that is supplied when the request is made. It should be noted that any directive that requests data should be tested to prove that it is requesting the correct data at the right time.

Testing directives that control data

Since the controller is usually in charge of how data is incorporated into the view, many directives do not have to be coupled with logic related to how that data is retrieved. Keeping things separate is always good and is encouraged, but in some cases, it is necessary that directives and XHR logic be used together. When these use cases reveal themselves in production, it is important to test them properly.

The tests in the book use two very generic steps to prove business logic. These steps are as follows:

Create, compile, and link DOM to the AngularJS digest cycle
Test scope variables and DOM interactions for correct outputs

Now, we will add one more step to the process. This step will lie in the middle of the two steps. The new step is as follows:

Make sure all data communication is fired correctly

AngularJS makes it very simple to add resource-related logic to tests, because it ships with a built-in backend service mock, which allows many different ways to create fake endpoints that return structured data. The service is called $httpBackend.
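A minimal sketch of how the three steps fit together with $httpBackend (provided by the ngMock module); the phoneWidget directive and the phone-details.json URL are illustrative, not part of the example application:

describe('phoneWidget', function() {
  var $httpBackend, $compile, $rootScope;

  beforeEach(module('angularjs-promise-example'));

  beforeEach(inject(function(_$httpBackend_, _$compile_, _$rootScope_) {
    $httpBackend = _$httpBackend_;
    $compile = _$compile_;
    $rootScope = _$rootScope_;
  }));

  it('requests phone details when it is linked', function() {
    // Step 2 (the new one): the test fails if this GET is never fired.
    $httpBackend.expectGET('phone-details.json').respond({ name: 'Phone 1' });

    // Step 1: create, compile, and link DOM to the digest cycle.
    var element = $compile('<phone-widget></phone-widget>')($rootScope);
    $rootScope.$digest();
    $httpBackend.flush();

    // Step 3: assert on scope variables and DOM here, then verify
    // that every expected request was made.
    $httpBackend.verifyNoOutstandingExpectation();
  });
});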
Building a Web Application with PHP and MariaDB - Introduction to caching

Packt
11 Jun 2014
4 min read
Let's begin with database caching. All the data for our application is stored in MariaDB. When a request is made to retrieve the list of available students, we run a query on our course_registry database. Running a single query at a time is simple, but as the application gets popular, we will have more concurrent users. As the number of concurrent connections to the database increases, we have to make sure that our database server is optimized to handle that load. In this section, we will look at the different types of caching that can be performed in the database.

Let's start with query caching. Query caching is available by default on MariaDB; to verify that the installation has a query cache, we check the have_query_cache global variable with the SHOW VARIABLES command. Once we know a query cache exists, we verify that it is active by reading the query_cache_type global variable, which confirms that the query cache is turned on. Next, we look at the memory allocated for the query cache via the query_cache_size variable. The query cache size is currently set to 64 MB; let's raise it to 128 MB. We use the SET GLOBAL syntax to set the value of query_cache_size, and we verify the change by reloading the value of query_cache_size.

Now that we have the query cache turned on and working, let's look at a few statistics that give us an idea of how often queries are being cached. To retrieve this information, we query the Qcache status variables. One variable to check is Qcache_not_cached, which is high for our database due to the use of prepared statements; prepared statements are not cached by MariaDB. Another important variable to keep an eye on is Qcache_lowmem_prunes, which gives us an idea of the number of queries that were deleted due to low memory; a high value indicates that the query cache size has to be increased. From these statistics, we understand that as long as we use prepared statements, our queries will not be cached on the database server. So, we should use a combination of prepared statements and raw SQL statements, depending on our use cases.

Now that we understand a good bit about query caches, let's look at the other caches that MariaDB provides, such as the table open cache, the join cache, and the memory storage cache. The table open cache allows us to define the number of tables that can be left open by the server to allow faster look-ups. This is very helpful when there is a huge number of requests for a table, as the table need not be reopened for every request. The join buffer cache is commonly used for queries that perform a full join, wherein there are no indexes to be used for finding rows for the next table; normally, indexes help us avoid these problems. The memory storage cache, previously known as the heap cache, is commonly used for read-only caches of data from other tables or for temporary work areas.
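The checks and changes described in this section come down to a handful of statements. Here is a sketch, where the 128 MB figure matches the resize performed above:

-- Verify that the server has a query cache and that it is active
SHOW VARIABLES LIKE 'have_query_cache';
SHOW VARIABLES LIKE 'query_cache_type';

-- Inspect the allocated memory, then raise it from 64 MB to 128 MB
SHOW VARIABLES LIKE 'query_cache_size';
SET GLOBAL query_cache_size = 128 * 1024 * 1024;
SHOW VARIABLES LIKE 'query_cache_size';

-- Query cache statistics, including Qcache_not_cached and Qcache_lowmem_prunes
SHOW STATUS LIKE 'Qcache%';

-- Variables behind the other caches discussed in this section
SHOW VARIABLES LIKE 'table_open_cache';
SHOW VARIABLES LIKE 'join_buffer_size';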
Database caching is a very important step towards making our application scalable. However, it is important to understand when to cache, the correct caching techniques, and the right size for each cache. Memory for caching has to be allocated very carefully, as the application can run out of memory if too much space is allocated. A good method to allocate memory for caching is to run benchmarks to see how the queries perform, and to keep a list of popular queries that run often, so that we can begin by caching and optimizing the database for those queries. Now that we have a good understanding of database caching, let's proceed to application-level caching.

Resources for Article:

Introduction to Kohana PHP Framework
Creating and Consuming Web Services in CakePHP 1.3
Installing MariaDB on Windows and Mac OS X