
How-To Tutorials - Front-End Web Development

341 Articles

RabbitMQ Acknowledgements

Packt
16 Sep 2013
3 min read
(For more resources related to this topic, see here.)

Acknowledgements (Intermediate)

This task examines reliable message delivery from the RabbitMQ server to a consumer.

Getting ready

If a consumer takes a message/order from our queue and then dies, the unprocessed message/order dies with it. To make sure a message is never lost, RabbitMQ supports message acknowledgements, or acks. When a consumer has received and processed a message, it sends an acknowledgement to RabbitMQ, informing RabbitMQ that it is free to delete the message. If a consumer dies without sending an acknowledgement, RabbitMQ will redeliver the message to another consumer.

How to do it...

Navigate to our source code examples folder and locate the folder Message-Acknowledgement. Take a look at the consumer.js script and examine the changes we have made to support acks.

We pass the {ack:true} option to the q.subscribe function, which tells the queue that messages should be acknowledged before being removed:

    q.subscribe({ack:true}, function(message) {

When our message has been processed, we call q.shift, which informs RabbitMQ that the message has been processed and can now be removed from the queue:

    q.shift();

You can also use the prefetchCount option to increase the window of how many messages the server will send you before you need to send an acknowledgement. {ack:true, prefetchCount:1} is the default and will only send you one message before you acknowledge. Setting prefetchCount to 0 makes that window unlimited. A low value will impact performance, so it may be worth considering a higher value.

Let's demonstrate this concept. Edit the consumer.js script located in the folder Message-Acknowledgement. Simply comment out the line q.shift(), which will stop the consumer from acknowledging the messages.

Open a command-line console and start RabbitMQ:

    rabbitmq-server

Now open a command-line console, navigate to our source code examples folder, and locate the folder Message-Acknowledgement. Execute the following command:

    Message-Acknowledgement> node producer

Let the producer create several messages/orders; press Ctrl + C in the command-line console to stop the producer creating orders. Now execute the following to begin consuming messages:

    Message-Acknowledgement> node consumer

Let's open another command-line console and run list_queues:

    rabbitmqctl list_queues messages_ready messages_unacknowledged

The response should display our shop queue; the details include the name, the number of messages ready to be processed, and one message that has not been acknowledged:

    Listing queues ...
    shop.queue 9 1
    ...done.

If you press Ctrl + C in the command-line console to stop the consumer script, and then list the queues again, you will notice the message has returned to the queue:

    Listing queues ...
    shop.queue 10 0
    ...done.

If you revert the change we made to the consumer.js script and re-run these steps, the application will work correctly, consuming messages one at a time and sending an acknowledgement to RabbitMQ when each message has been processed.
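Putting the pieces above together, a complete consumer.js would look roughly like the following. This is a sketch based on the node-amqp library that the book's examples use; the connection options and queue name are assumptions for illustration:

    var amqp = require('amqp');

    var connection = amqp.createConnection({ host: 'localhost' });

    connection.on('ready', function() {
      connection.queue('shop.queue', { autoDelete: false }, function(q) {
        // {ack:true} tells RabbitMQ to keep each message until we
        // explicitly acknowledge it; prefetchCount:1 delivers one at a time.
        q.subscribe({ ack: true, prefetchCount: 1 }, function(message) {
          console.log('Processing order: ' + JSON.stringify(message));
          // ... process the message/order here ...
          q.shift(); // acknowledge, so RabbitMQ can delete the message
        });
      });
    });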
Summary

This article explained the reliable message delivery process in RabbitMQ using acknowledgements. It also listed the steps that will get acknowledgements working in a messaging application using the RabbitMQ example scripts.


Scaling your Application Across Nodes with Spring Python's Remoting

Packt
24 May 2010
5 min read
(For more resources on Spring, see here.)

With the explosion of the Internet into e-commerce in recent years, companies are under pressure to support lots of simultaneous customers. With users wanting richer interfaces that perform more business functions, this leads to a constant need for more computing power than ever before, whether an application is web-based or a thick client. Seeing the slowdown of growth in total CPU horsepower, people are looking to multi-core CPUs, 64-bit chips, or adding more servers to their enterprise in order to meet their growing needs.

Developers face the challenge of designing applications in the simple environment of a desktop scaled back for cost savings, and must then be able to deploy into multi-core, multi-server environments in order to meet their companies' business demands. Different technologies have been developed to support this, and different protocols have been drafted to help nodes communicate. The debate rages on about whether talking across the network should be visible in the API or abstracted away, and technologies for remotely connecting client processes with server processes are under constant development.

Introduction to Pyro (Python Remote Objects)

Pyro is an open source project (pyro.sourceforge.net) that provides an object-oriented form of RPC. As stated on the project's site, it resembles Java's Remote Method Invocation (RMI). It is less similar to CORBA (http://www.corba.org), a technology-neutral wire protocol used to link multiple processes together, because it doesn't require an interface definition language, nor is it oriented towards linking different languages together. Pyro supports Python-to-Python communications, and thanks to the power of Jython, it is easy to link Java-to-Python and vice versa. Python Remote Objects is not to be confused with the Python Robotics open source project (also named Pyro).

Pyro is very easy to use out of the box with existing Python applications, and the ability to publish services isn't hard to add. Pyro uses its own protocol for RPC communication. Fundamentally, a Pyro-based application involves launching a Pyro daemon thread and then registering your server component with this thread. From that point on, the thread, along with your server code, is in stand-by mode, waiting to process client calls. The next step involves creating a Pyro client proxy that is configured to find the daemon thread and then forward client calls to the server. From a high-level perspective, this is very similar to what Java RMI and CORBA offer. However, thanks to the dynamic nature of Python, the configuration steps are much easier, and there is no requirement to extend any classes or implement any interfaces.

As simple as Pyro is to use, there is still the requirement to write some minimal code to instantiate your objects and then register them. You must also code up the clients, making them aware of Pyro as well. Since the intent of this article is to dive into using Spring Python, we will skip writing a pure Pyro application. Instead, let's see how to use Spring Python's out-of-the-box Pyro-based components, eliminating the need to write any Pyro glue code. This lets us delegate everything to our IoC container so that it can do all the integration steps by itself, reducing the cost of making our application distributed to zero.
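For context, the glue code that Spring Python lets us skip looks roughly like the following. This is a from-memory sketch of the classic Pyro 3 API, not code from the book; the service name and URI are illustrative:

    # server.py -- register an object with a Pyro daemon and serve calls
    import Pyro.core

    class Service(Pyro.core.ObjBase):
        def ping(self):
            return "pong"

    Pyro.core.initServer()
    daemon = Pyro.core.Daemon()
    daemon.connect(Service(), "service")  # register under a name
    daemon.requestLoop()                  # blocks, serving client calls

    # client.py -- look up the remote object and call it like a local one
    # proxy = Pyro.core.getProxyForURI("PYROLOC://localhost:7766/service")
    # print proxy.ping()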
Converting a simple application into a distributed one on the same machine

For this example, let's develop a simple service that processes some data and produces a response, and then convert it into a distributed service. First, let's create a simple service that returns an array of strings representing the Happy Birthday song with someone's name embedded in it:

    class Service(object):
        def happy_birthday(self, name):
            results = []
            for i in range(4):
                if i == 2:
                    results.append("Happy Birthday Dear %s!" % name)
                else:
                    results.append("Happy Birthday to you!")
            return results

Our service isn't too elaborate. Instead of printing the data directly to the screen, it collects it together and returns it to the caller. This allows the caller to print it, test it, store it, or do whatever it wants with the result. In the following screen text, we see a simple client taking the results and printing them with a little formatting inside the Python shell. As we can see, we have defined a simple service and can call it directly. In our case, we are simply joining the list together with a newline character and printing it to the screen.

Fetching the service from an IoC container

    from springpython.config import *
    from simple_service import *

    class HappyBirthdayContext(PythonConfig):
        def __init__(self):
            PythonConfig.__init__(self)

        @Object
        def service(self):
            return Service()

Creating a client to call the service

Now let's write a client script that will create an instance of this IoC container, fetch the service, and use it:

    from springpython.context import *
    from simple_service_ctx import *

    if __name__ == "__main__":
        ctx = ApplicationContext(HappyBirthdayContext())
        s = ctx.get_object("service")
        print "\n".join(s.happy_birthday("Greg"))

Running this client script neatly creates an instance of our IoC container, fetches the service, and calls it with the same arguments shown earlier.
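From here, distributing the service is a matter of adding Spring Python's Pyro components to the container. The following is a sketch based on Spring Python's documented PyroServiceExporter and PyroProxyFactory classes; the service name and the default Pyro port (7766) are assumptions that may differ in your setup:

    from springpython.config import Object, PythonConfig
    from springpython.remoting.pyro import PyroServiceExporter, PyroProxyFactory
    from simple_service import Service

    class HappyBirthdayContext(PythonConfig):
        def __init__(self):
            PythonConfig.__init__(self)

        @Object
        def service(self):
            return Service()

        @Object
        def service_exporter(self):
            # Publish the plain service over Pyro; Service itself stays
            # completely unaware of the remoting layer.
            exporter = PyroServiceExporter()
            exporter.service_name = "service"
            exporter.service = self.service()
            return exporter

    # Client side: the proxy looks and behaves like the real service.
    service = PyroProxyFactory()
    service.service_url = "PYROLOC://127.0.0.1:7766/service"
    print "\n".join(service.happy_birthday("Greg"))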


Indexes

Packt
23 Jul 2014
8 min read
(For more resources related to this topic, see here.)

As a database administrator (DBA) or developer, one of your most important goals is to ensure that query times are consistent with the service-level agreement (SLA) and meet user expectations. Along with other performance enhancement techniques, creating indexes for your queries on underlying tables is one of the most effective and common ways to achieve this objective.

The indexes of underlying relational tables are very similar in purpose to the index section at the back of a book. For example, instead of flipping through each page of the book, you use the index section to quickly find a particular piece of information or topic within the book. In the same way, instead of scanning each individual row on the data page, SQL Server uses indexes to quickly find the data for a qualifying query. Therefore, by indexing an underlying relational table, you can significantly enhance the performance of your database. Indexing affects the processing speed for both OLTP and OLAP and helps you achieve optimum query performance and response time.

The cost associated with indexes

SQL Server uses indexes to optimize overall query performance. However, there is also a cost associated with indexes: they slow down insert, update, and delete operations. Therefore, it is important to weigh the costs and benefits of indexes when you plan your indexing strategy.

How SQL Server uses indexes

A table that doesn't have a clustered index is stored in a set of data pages called a heap. Initially, the data in a heap is stored in the order in which the rows are inserted into the table. However, the SQL Server Database Engine moves the data around the heap to store the rows efficiently. Therefore, you cannot predict the order of the rows in a heap, because the data pages are not sequenced in any particular order. The only way to guarantee the order of the rows returned from a heap is to use the SELECT statement with the ORDER BY clause.

Access without an index

When you access the data, SQL Server first determines whether a suitable index is available for the submitted SELECT statement. If no suitable index is found, SQL Server retrieves the data by scanning the entire table. The database engine begins scanning at the physical beginning of the table and scans through the full table, page by page and row by row, looking for the qualifying data specified in the submitted SELECT statement. Then, it extracts and returns the rows that meet the criteria, in the format specified in the statement.

Access with an index

The process is improved when an index is present. If an appropriate index is available, SQL Server uses it to locate the data. An index improves the search process by sorting the data on the key columns. The database engine begins scanning from the first page of the index and scans only those pages that potentially contain qualifying data, based on the index structure and key columns. Finally, it retrieves either the data rows themselves or pointers that contain the locations of the data rows, allowing direct row retrieval.
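To see the difference in practice, you can compare the execution plans of the same query before and after creating an index. The following sketch uses a hypothetical table; the names are illustrative, not from the article:

    -- A heap: no clustered index yet.
    CREATE TABLE dbo.Orders (
        OrderID   INT         NOT NULL,
        Customer  VARCHAR(50) NOT NULL,
        OrderDate DATE        NOT NULL
    );

    -- Without an index, this query produces a full table scan:
    SELECT OrderID, Customer FROM dbo.Orders WHERE OrderID = 42;

    -- Create an index on the search column...
    CREATE CLUSTERED INDEX IX_Orders_OrderID ON dbo.Orders (OrderID);

    -- ...and the same query can now seek directly to the qualifying rows.
    SELECT OrderID, Customer FROM dbo.Orders WHERE OrderID = 42;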
The structure of indexes

In SQL Server, all indexes—except full-text, XML, in-memory optimized, and columnstore indexes—are organized as a balanced tree (B-tree). This is because full-text indexes use their own engine to manage and query full-text catalogs, XML indexes are stored as internal SQL Server tables, in-memory optimized indexes use the Bw-tree structure, and columnstore indexes utilize SQL Server in-memory technology.

In the B-tree structure, each page is called a node. The top page of the B-tree structure is called the root node. Non-leaf nodes, also referred to as intermediate levels, are hierarchical tree nodes that comprise the index sort order. Non-leaf nodes point to other non-leaf nodes one step below them in the B-tree hierarchy, until the leaf nodes are reached. Leaf nodes are at the bottom of the B-tree hierarchy. The following diagram illustrates the typical B-tree structure:

Index types

In SQL Server 2014, you can create several types of indexes. They are explored in the next sections.

Clustered indexes

A clustered index sorts table or view rows in order of the clustered index key column values. In short, the leaf nodes of a clustered index contain the data pages, and scanning them will return the actual data rows. Therefore, a table can have only one clustered index. Unless you explicitly specify nonclustered, SQL Server automatically creates the clustered index when you define a PRIMARY KEY constraint on a table.

When should you have a clustered index on a table? Although it is not mandatory to have a clustered index per table, according to the TechNet article Clustered Index Design Guidelines, with few exceptions every table should have a clustered index defined on the column or columns that are used as follows:

- The table is large and does not have a nonclustered index. The presence of a clustered index improves performance, because without it, all rows of the table have to be read if any row needs to be found.
- A column or columns are frequently queried, and data is returned in sorted order. The presence of a clustered index on the sorting column or columns avoids the sorting operation and returns the data in sorted order.
- A column or columns are frequently queried, and data is grouped together. As data must be sorted before it is grouped, the presence of a clustered index on the sorting column or columns avoids the sorting operation.
- A column or columns are frequently used in queries to search ranges of data in the table. The presence of a clustered index on the range column will help avoid sorting the entire table's data.

Nonclustered indexes

Nonclustered indexes do not sort or store the data of the underlying table. This is because the leaf nodes of a nonclustered index are index pages that contain pointers to the data rows. SQL Server automatically creates a nonclustered index when you define a UNIQUE KEY constraint on a table. A table can have up to 999 nonclustered indexes.

You can use the CREATE INDEX statement to create both clustered and nonclustered indexes. A detailed discussion of the CREATE INDEX statement and its parameters is beyond the scope of this article. For help with this, refer to the CREATE INDEX (Transact-SQL) article at http://msdn.microsoft.com/en-us/library/ms188783.aspx. SQL Server 2014 also supports new inline index creation syntax for standard, disk-based database tables, temp tables, and table variables. For more information, refer to the CREATE TABLE (SQL Server) article at http://msdn.microsoft.com/en-us/library/ms174979.aspx.
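Tying together the two automatic behaviors described above, the following hypothetical sketch (not an example from the article) shows how defining constraints creates both kinds of index without an explicit CREATE INDEX:

    CREATE TABLE dbo.Customers (
        CustomerID INT          NOT NULL PRIMARY KEY,  -- backed by the clustered index
        Email      VARCHAR(100) NOT NULL UNIQUE        -- backed by a nonclustered index
    );

    -- Roughly equivalent explicit statements:
    -- CREATE UNIQUE CLUSTERED INDEX PK_Customers ON dbo.Customers (CustomerID);
    -- CREATE UNIQUE NONCLUSTERED INDEX UQ_Customers_Email ON dbo.Customers (Email);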
Single-column indexes

As the name implies, single-column indexes are based on a single key column. You can define them as either clustered or nonclustered. You cannot drop the index key column or change the data type of the underlying table column without dropping the index first. Single-column indexes are useful for queries that search data based on a single column value.

Composite indexes

Composite indexes include two or more columns from the same table. You can define composite indexes as either clustered or nonclustered. You can use composite indexes when you have two or more columns that need to be searched together. You typically place the most unique key (the key with the highest degree of selectivity) first in the key list. For example, examine the following query, which returns a list of account numbers and names from the Purchasing.Vendor table where both the name and the account number start with the character A:

    USE [AdventureWorks2012];
    SELECT [AccountNumber], [Name]
    FROM [Purchasing].[Vendor]
    WHERE [AccountNumber] LIKE 'A%'
      AND [Name] LIKE 'A%';
    GO

If you look at the execution plan of this query without modifying the existing indexes of the table, you will notice that the SQL Server query optimizer uses the table's clustered index to retrieve the query result, as shown in the following screenshot:

As our search is based on the Name and AccountNumber columns, the presence of the following composite index will improve the query execution time significantly:

    USE [AdventureWorks2012];
    GO
    CREATE NONCLUSTERED INDEX [AK_Vendor_AccountNumber_Name]
    ON [Purchasing].[Vendor] ([AccountNumber] ASC, [Name] ASC)
    ON [PRIMARY];
    GO

Now, examine the execution plan of this query once again, after creating the composite index on the Purchasing.Vendor table, as shown in the following screenshot:

As you can see, SQL Server performs a seek operation on this composite index to retrieve the qualifying data.

Summary

We have learned what indexes are, how SQL Server uses indexes, the structure of indexes, and some of the types of indexes.


Customizing and Automating Google Applications

Packt
27 Jan 2016
7 min read
In this article by Ramalingam Ganapathy, the author of the book Learning Google Apps Script, we will see how to create new projects in Sheets and send an email with an inline image and attachments. You will also learn to create clickable buttons, a custom menu, and a sidebar. (For more resources related to this topic, see here.)

Creating new projects in Sheets

Open any newly created Google spreadsheet (Sheets). You will see a number of menu items at the top of the window. Point your mouse to the menu bar and click on Tools, then click on Script editor, as shown in the following screenshot:

A new browser tab or window with a new project selection dialog will open. Click on Blank Project or close the dialog. You have now created a new untitled project with one script file (Code.gs), which has one default empty function (myFunction). To rename the project, click on the project title (at the top left-hand side of the window); a rename dialog will open. Enter your preferred project name, and then click on the OK button.

Creating clickable buttons

1. Open the script editor in a newly created or any existing Google sheet.
2. Select the cell B3 or any other cell. Click on Insert and Drawing, as shown in the following screenshot:
3. A drawing editor window will open. Click on the Textbox icon and click anywhere on the canvas area. Type Click Me. Resize the object so as to only enclose the text, as shown in the screenshot here:
4. Click on Save & Close to exit from the drawing editor. Now, the Click Me image will be inserted at the top of the active cell (B3), as shown in the following screenshot:

You can drag this image anywhere around the spreadsheet. In Google Sheets, images are not anchored to a particular cell; they can be dragged or moved around.

5. Right-click on the image, and a drop-down arrow at the top right corner of the image will become visible. Click on the Assign script menu item. A script assignment window will open, as shown here:
6. Type "greeting" or any other name as you like, but remember the name (so as to create a function with the same name in the next steps). Click on the OK button.
7. Now, open the script editor in the same spreadsheet. When you open the script editor, the project selector dialog will open; close it or select a blank project. A default function called myFunction will be there in the editor. Delete everything in the editor and insert the following code:

    function greeting() {
      Browser.msgBox("Greeting", "Hello World!", Browser.Buttons.OK);
    }

8. Click on the save icon and enter a project name if asked. You have completed coding your greeting function.
9. Activate the spreadsheet tab/window, and click on your button labeled Click Me. An authorization window will open; click on Continue. In the successive Request for Permission window, click on the Allow button. As soon as you click on Allow and the permission dialog is disposed of, your actual greeting message box will open, as shown here:

Click on OK to dispose of the message box. Whenever you click on your button, this message box will open.

Creating a custom menu

Can you execute the greeting function without the help of the button? Yes: in the script editor, there is a Run menu. If you click on Run and then greeting, the greeting function will be executed and the message box will open. However, creating a button for every function may not be feasible. Although you cannot alter or add items to the application's standard menus (except the Add-on menu), such as File, Edit, and View, you can add a custom menu and its items.
For this task, create a new Google Docs document or open any existing document. Open the script editor and type these two functions:

    function createMenu() {
      DocumentApp.getUi()
          .createMenu("PACKT")
          .addItem("Greeting", "greeting")
          .addToUi();
    }

    function greeting() {
      var ui = DocumentApp.getUi();
      ui.alert("Greeting", "Hello World!", ui.ButtonSet.OK);
    }

In the first function, you use the DocumentApp class, invoke the getUi method, and then consecutively invoke the createMenu, addItem, and addToUi methods by method chaining. The second function is the one you created in the previous task, but this time it uses the DocumentApp class and its associated methods.

Now, run the function called createMenu and flip to the document window/tab. You will notice a new menu item called PACKT added next to the Help menu. You can see the custom menu PACKT with an item Greeting, as shown next. The item labeled Greeting is associated with the function called greeting:

The menu item called Greeting works the same way as the button created in the previous task. The drawback with this method of inserting a custom menu is that you need to run createMenu from within the script editor every time to make the menu show up. Imagine how your user could use this greeting function if he/she doesn't know about GAS and the script editor — your user might not be a programmer like you. To enable your users to execute the selected GAS functions, you should create a custom menu and make it visible as soon as the application is opened. To do so, simply rename the createMenu function to onOpen — that's it.

Creating a sidebar

A sidebar is a static dialog box included on the right-hand side of the document editor window. To create a sidebar, type the following code in your editor:

    function onOpen() {
      var htmlOutput = HtmlService
          .createHtmlOutput('<button onclick="alert(\'Hello World!\');">Click Me</button>')
          .setTitle('My Sidebar');
      DocumentApp.getUi()
          .showSidebar(htmlOutput);
    }

In the preceding code, you use HtmlService, invoke its createHtmlOutput method, and then consecutively invoke the setTitle method. To test this code, run the onOpen function or reload the document. The sidebar will open on the right-hand side of the document window, as shown in the following screenshot. The sidebar layout size is fixed, which means you cannot change, alter, or resize it:

The button in the sidebar is an HTML element, not a GAS element; if clicked, it opens the browser interface's alert box.

Sending an email with an inline image and attachments

To embed images, such as a logo, in your email message, you may use HTML code instead of plain text. Upload your image to Google Drive, then get and use the file ID in the code:

    function sendEmail(){
      var file = SpreadsheetApp.getActiveSpreadsheet()
          .getAs(MimeType.PDF);
      var image = DriveApp.getFileById("[[image file's id in Drive]]").getBlob();
      var to = "[[receiving email id]]";
      var message = '<p><img src="cid:logo" /> Embedding inline image example.</p>';

      MailApp.sendEmail(to, "Email with inline image and attachment", "", {
        htmlBody: message,
        inlineImages: { logo: image },
        attachments: [file]
      });
    }
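The same onOpen pattern works in Sheets as well; only the UI entry point changes. The following is a sketch (not from the book) using SpreadsheetApp in place of DocumentApp:

    // Runs automatically when the spreadsheet is opened.
    function onOpen() {
      SpreadsheetApp.getUi()
          .createMenu('PACKT')
          .addItem('Greeting', 'greeting')
          .addToUi();
    }

    function greeting() {
      var ui = SpreadsheetApp.getUi();
      ui.alert('Greeting', 'Hello World!', ui.ButtonSet.OK);
    }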
Summary

In this article, you learned how to customize and automate Google applications with a few examples. Many more useful and interesting applications are described in the book itself.


Making Progress with Menus and Toolbars using Ext JS 3.0: Part 2

Packt
19 Nov 2009
6 min read
Embedding a progress bar in a status bar

This topic explains how to embed a progress bar in a panel's status bar, a scenario found in countless user interfaces.

How to do it

1. Create a click handler that will simulate a long-running activity and update the progress bar:

    Ext.onReady(function() {
        var loadFn = function(btn, statusBar) {
            btn = Ext.getCmp(btn);
            btn.disable();
            Ext.fly('statusTxt').update('Saving...');
            pBar.wait({
                interval: 200,
                duration: 5000,
                increment: 15,
                fn: function() {
                    btn.enable();
                    Ext.fly('statusTxt').update('Done');
                }
            });
        };

2. Create an instance of the progress bar:

    var pBar = new Ext.ProgressBar({
        id: 'pBar',
        width: 100
    });

3. Create a host panel and embed the progress bar in the bbar of the panel. Also, add a button that will start the progress bar updates:

    var pnl = new Ext.Panel({
        title: 'Status bar with progress bar',
        renderTo: 'pnl1',
        width: 400,
        height: 200,
        bodyStyle: 'padding:10px;',
        items: [{
            xtype: 'button',
            id: 'btn',
            text: 'Save',
            width: '75',
            handler: loadFn.createCallback('btn', 'sBar')
        }],
        bbar: {
            id: 'sBar',
            items: [{
                xtype: 'tbtext',
                text: '',
                id: 'statusTxt'
            }, '->', pBar]
        }
    });

How it works

The first step consists of creating loadFn, a function that simulates a long-running operation, so that we can see the progress bar animation when the button is clicked. The heart of loadFn is a call to ProgressBar.wait(…), which starts the progress bar in auto-update mode. And this is how the status bar is embedded in the bbar of the panel:

    bbar: {
        id: 'sBar',
        items: [{
            xtype: 'tbtext',
            text: '',
            id: 'statusTxt'
        }, '->', pBar]
    }

Observe how the progress bar is sent to the rightmost location in the status bar with the help of a Toolbar.Fill instance, declared with '->'.

Creating a custom look for the status bar items

Customizing the look of toolbar items is relatively simple. In this recipe, you will learn how to create toolbar items with the sunken look found in many desktop applications.

How to do it

1. Create the styles that will provide the custom look of the status bar text items:

    .custom-status-text-panel {
        border-top: 1px solid #99BBE8;
        border-right: 1px solid #fff;
        border-bottom: 1px solid #fff;
        border-left: 1px solid #99BBE8;
        padding: 1px 2px 2px 1px;
    }

2. Create a host panel:

    Ext.onReady(function() {
        var pnl = new Ext.Panel({
            title: 'Status bar with sunken text items',
            renderTo: 'pnl1',
            width: 400,
            height: 200,
            bodyStyle: 'padding:10px;',

3. Define the panel's bbar with the text items:

            bbar: {
                id: 'sBar',
                items: [
                    { id: 'cachedCount', xtype: 'tbtext', text: 'Cached: 15' }, ' ',
                    { id: 'uploadedCount', xtype: 'tbtext', text: 'Uploaded: 7' }, ' ',
                    { id: 'invalidCount', xtype: 'tbtext', text: 'Invalid: 2' }
                ]
            },

4. Now, add a handler for the afterrender event and use it to modify the styles of the text items:

            listeners: {
                'afterrender': {
                    fn: function() {
                        Ext.fly(Ext.getCmp('cachedCount').getEl()).parent()
                            .addClass('custom-status-text-panel');
                        Ext.fly(Ext.getCmp('uploadedCount').getEl()).parent()
                            .addClass('custom-status-text-panel');
                        Ext.fly(Ext.getCmp('invalidCount').getEl()).parent()
                            .addClass('custom-status-text-panel');
                    },
                    delay: 500
                }
            }

How it works

The actual look of the items is defined by the style in the custom-status-text-panel CSS class. After the host panel and toolbar are created and rendered, the look of the items is changed by applying the style to each of the TD elements that contain the items. For example:

    Ext.fly(Ext.getCmp('uploadedCount').getEl()).parent()
        .addClass('custom-status-text-panel');
See also

The previous recipe, Embedding a progress bar in a status bar, explains how a progress bar can be embedded in a panel's status bar.

Using a progress bar to indicate that your application is busy

In this topic, you will learn how to use a progress bar to indicate that your application is busy performing an operation. The next screenshot shows a progress bar built using this recipe:

How to do it

1. Define the progress bar:

    Ext.onReady(function() {
        Ext.QuickTips.init();
        var pBar = new Ext.ProgressBar({
            id: 'pBar',
            width: 300,
            renderTo: 'pBarDiv'
        });

2. Add a handler for the update event and use it to update the wait message:

    pBar.on('update', function(val) {
        // Handle this event if you need to
        // execute code at each progress interval.
        Ext.fly('pBarText').dom.innerHTML += '.';
    });

3. Create a click handler for the button that will simulate a long-running activity:

    var btn = Ext.get('btn');
    btn.on('click', function() {
        Ext.fly('pBarText').update('Please wait');
        btn.dom.disabled = true;
        pBar.wait({
            interval: 200,
            duration: 5000,
            increment: 15,
            fn: function() {
                btn.dom.disabled = false;
                Ext.fly('pBarText').update('Done');
            }
        });
    });

4. Add the button to the page:

    <button id="btn">Start long-running operation</button>

How it works

After creating the progress bar, the handler for its update event is created. While this handler is used here simply to update the text message, you can use it to execute some other code every time a progress interval occurs. The click handler for the button calls the progress bar's wait(…) function, which causes the progress bar to auto-update at the configured interval and reset itself after the configured duration:

    pBar.wait({
        interval: 200,
        duration: 5000,
        increment: 15,
        fn: function() {
            btn.dom.disabled = false;
            Ext.fly('pBarText').update('Done');
        }
    });

There's more

The progress bar can also be configured to run indefinitely by not passing the duration config option. Clearing the progress bar in this scenario requires a call to the reset() function.

See also

The next recipe, Using a progress bar to report progress updates, illustrates how a progress bar can be set up to notify the user that progress is being made in the execution of an operation. The Changing the look of a progress bar recipe (covered later in this article) shows how easy it is to change the look of the progress bar using custom styles.
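Following up on the There's more note, here is a minimal sketch of the indefinite mode (assuming the pBar instance from the recipe above):

    // No duration: the progress bar keeps cycling until reset() is called.
    pBar.wait({
        interval: 200,
        increment: 15
    });

    // ... later, when the long-running operation completes:
    pBar.reset(true); // passing true also hides the progress bar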
Summary

This article consisted of recipes that examined the commonly used menu items, as well as the different ways of setting up toolbars and progress bars in your applications.


Building Your First Zend Framework Application

Packt
26 Jul 2013
15 min read
(For more resources related to this topic, see here.)

Prerequisites

Before you get started with setting up your first ZF2 project, make sure that you have the following software installed and configured in your development environment:

- PHP Command Line Interface
- Git: needed to check out source code from various github.com repositories
- Composer: the dependency management tool used for managing PHP dependencies

The following commands will be useful for installing the necessary tools to set up a ZF2 project.

To install the PHP Command Line Interface:

    $ sudo apt-get install php5-cli

To install Git:

    $ sudo apt-get install git

To install Composer:

    $ curl -s https://getcomposer.org/installer | php

ZendSkeletonApplication

ZendSkeletonApplication provides a sample skeleton application that can be used by developers as a starting point to get started with Zend Framework 2.0. The skeleton application makes use of ZF2 MVC, including a new module system. ZendSkeletonApplication can be downloaded from GitHub (https://github.com/zendframework/ZendSkeletonApplication).

Time for action – creating a Zend Framework project

To set up a new Zend Framework project, we will download the latest version of ZendSkeletonApplication and set up a virtual host to point to the newly created project. The steps are given as follows:

1. Navigate to a folder location where you want to set up the new Zend Framework project:

    $ cd /var/www/

2. Clone ZendSkeletonApplication from GitHub:

    $ git clone git://github.com/zendframework/ZendSkeletonApplication.git CommunicationApp

In some Linux configurations, the current user may not have the necessary permissions to write to /var/www. In such cases, you can use any folder that is writable and make the necessary changes to the virtual host configuration.

3. Install dependencies using Composer:

    $ cd CommunicationApp/
    $ php composer.phar self-update
    $ php composer.phar install

The following screenshot shows how Composer downloads and installs the necessary dependencies:

4. Before adding a virtual host entry, we need to set up a hostname entry in our hosts file so that the system points to the local machine whenever the new hostname is used. In Linux, this can be done by adding an entry to the /etc/hosts file:

    $ sudo vim /etc/hosts

In Windows, this file can be accessed at %SystemRoot%\system32\drivers\etc\hosts. Add the following line to the hosts file:

    127.0.0.1 comm-app.local

The final hosts file should look like the following:

5. Our next step is to add a virtual host entry on our web server; this can be done by creating a new virtual host configuration file:

    $ sudo vim /usr/local/zend/etc/sites.d/vhost_comm-app-80.conf

This virtual host filename could be different for you depending on the web server that you use; please check your web server documentation for setting up new virtual hosts. For example, if you have Apache2 running on Linux, you will need to create the new virtual host file in /etc/apache2/sites-available and enable the site using the command a2ensite comm-app.local.
6. Add the following configuration to the virtual host file:

    <VirtualHost *:80>
        ServerName comm-app.local
        DocumentRoot /var/www/CommunicationApp/public
        SetEnv APPLICATION_ENV "development"
        <Directory /var/www/CommunicationApp/public>
            DirectoryIndex index.php
            AllowOverride All
            Order allow,deny
            Allow from all
        </Directory>
    </VirtualHost>

If you are using a different path for checking out the ZendSkeletonApplication project, make sure that you include that path in both the DocumentRoot and Directory directives.

7. After configuring the virtual host file, the web server needs to be restarted:

    $ sudo service zend-server restart

8. Once the installation is completed, you should be able to open http://comm-app.local in your web browser. This should take you to the following test page:

Test rewrite rules

In some cases, mod_rewrite may not be enabled in your web server by default. To check whether the URL redirects are working properly, try to navigate to an invalid URL such as http://comm-app.local/12345. If you get an Apache 404 page, then the .htaccess rewrite rules are not working and will need to be fixed; if you get a page like the following one, you can be sure the URLs work as expected.

What just happened?

We have successfully created a new ZF2 project by checking out ZendSkeletonApplication from GitHub and have used Composer to download the necessary dependencies, including Zend Framework 2.0. We have also created a virtual host configuration that points to the project's public folder and tested the project in a web browser.

Alternate installation options

We have seen just one method of installing ZendSkeletonApplication; there are other ways of doing this. You can use Composer to directly download the skeleton application and create the project using the following command:

    $ php composer.phar create-project --repository-url="http://packages.zendframework.com" zendframework/skeleton-application path/to/install

You can also use a recursive Git clone to create the same project:

    $ git clone git://github.com/zendframework/ZendSkeletonApplication.git --recursive

Refer to: http://framework.zend.com/downloads/skeleton-app

Zend Framework 2.0 – modules

In Zend Framework, a module can be defined as a unit of software that is portable and reusable and can be interconnected with other modules to construct larger, more complex applications. Modules are not new in Zend Framework, but ZF2 completely overhauls the way modules are used. With ZF2, modules can be shared across various systems, and they can be repackaged and distributed with relative ease. Another major change in ZF2 is that even the main application is now converted into a module: the application module.

Some of the key advantages of Zend Framework 2.0 modules are listed as follows:

- Self-contained, portable, reusable
- Dependency management
- Lightweight and fast
- Support for Phar packaging and Pyrus distribution

Zend Framework 2.0 – project folder structure

The folder layout of a ZF2 project is as follows:

- config: Used for managing application configuration.
- data: Used as a temporary storage location for application data, including cache files, session files, logs, and indexes.
- module: Used to manage all application code.
- module/Application: The default application module that is provided with ZendSkeletonApplication.
- public: The application's document root; publicly accessible files, including index.php, are served from here.
- vendor: Used to manage common libraries that the application uses. Zend Framework is also installed in this folder.
- vendor/zendframework: Zend Framework 2.0 is installed here.
Time for action – creating a module

Our next activity is to create a new Users module in Zend Framework 2.0. The Users module will be used for managing users, including user registration, authentication, and so on. We will make use of the ZendSkeletonModule provided by Zend, as follows:

1. Navigate to the application's module folder:

    $ cd /var/www/CommunicationApp/
    $ cd module/

2. Clone ZendSkeletonModule into a desired module name, in this case Users:

    $ git clone git://github.com/zendframework/ZendSkeletonModule.git Users

3. After the checkout is complete, the folder structure should look like the following screenshot:

4. Edit Module.php; this file is located in the Users folder under module (CommunicationApp/module/Users/Module.php). Change the namespace to Users by replacing namespace ZendSkeletonModule; with namespace Users;.

5. The following folders can be removed because we will not be using them in our project:

    Users/src/ZendSkeletonModule
    Users/view/zend-skeleton-module

What just happened?

We have installed a skeleton module for Zend Framework; this is just an empty module, and we will need to extend it by creating custom controllers and views. In our next activity, we will focus on creating new controllers and views for this module.

Creating a module using ZFTool

ZFTool is a utility for managing Zend Framework applications/projects, and it can also be used for creating new modules. In order to do that, you will need to install ZFTool and use the create module command:

    $ php composer.phar require zendframework/zftool:dev-master
    $ cd vendor/zendframework/zftool/
    $ php zf.php create module Users2 /var/www/CommunicationApp

Read more about ZFTool at the following link: http://framework.zend.com/manual/2.0/en/modules/zendtool.introduction.html

MVC layer

The fundamental goal of any MVC framework is to enable easier segregation of the three MVC layers: model, view, and controller. Before we get into the details of creating modules, let's quickly understand how these three layers work in an MVC framework:

- Model: The model is a representation of data; the model also holds the business logic for various application transactions.
- View: The view contains the display logic that is used to display the various user interface elements in the web browser.
- Controller: The controller controls the application logic in any MVC application; all actions and events are handled at the controller layer. The controller layer serves as a communication interface between the model and the view by controlling the model state and also by representing the changes to the view. The controller also provides an entry point for accessing the application.

In the new ZF2 MVC structure, all models, views, and controllers are grouped by module. Each module has its own set of models, views, and controllers, and shares some components with other modules.

Zend Framework module – folder structure

The folder structure of a Zend Framework 2.0 module has three vital components: the configurations, the module logic, and the views.
The following list describes how the contents of a module are organized:

- config: Used for managing module configuration
- src: Contains all module source code, including all controllers and models
- view: Used to store all the views used in the module

Time for action – creating controllers and views

Now that we have created the module, our next step is to define our own controllers and views. In this section, we will create two simple views and write a controller to switch between them:

1. Navigate to the module location:

    $ cd /var/www/CommunicationApp/module/Users

2. Create the folder for controllers:

    $ mkdir -p src/Users/Controller/

3. Create a new IndexController file in <ModuleName>/src/<ModuleName>/Controller/:

    $ cd src/Users/Controller/
    $ vim IndexController.php

4. Add the following code to the IndexController file:

    <?php
    namespace Users\Controller;

    use Zend\Mvc\Controller\AbstractActionController;
    use Zend\View\Model\ViewModel;

    class IndexController extends AbstractActionController
    {
        public function indexAction()
        {
            $view = new ViewModel();
            return $view;
        }

        public function registerAction()
        {
            $view = new ViewModel();
            $view->setTemplate('users/index/new-user');
            return $view;
        }

        public function loginAction()
        {
            $view = new ViewModel();
            $view->setTemplate('users/index/login');
            return $view;
        }
    }

The preceding code does the following: if the user visits the home page, the default view is shown; if the user arrives with the action register, the new-user template is shown; and if the user arrives with the action set to login, the login template is rendered.

Now that we have created the controller, we will have to create the necessary views to render for each of the controller actions.

5. Create the folder for views:

    $ cd /var/www/CommunicationApp/module/Users
    $ mkdir -p view/users/index/

6. Navigate to the views folder, <Module>/view/<module-name>/index:

    $ cd view/users/index/

7. Create the following view files: index, login, and new-user.

8. For the view/users/index/index.phtml file, use the following code:

    <h1>Welcome to Users Module</h1>
    <a href="/users/index/login">Login</a> |
    <a href="/users/index/register">New User Registration</a>

9. For the view/users/index/login.phtml file, use the following code:

    <h2> Login </h2>
    <p> This page will hold the content for the login form </p>
    <a href="/users"><< Back to Home</a>

10. For the view/users/index/new-user.phtml file, use the following code:

    <h2> New User Registration </h2>
    <p> This page will hold the content for the registration form </p>
    <a href="/users"><< Back to Home</a>

What just happened?

We have now created a new controller and views for our new Zend Framework module, but the module is still not in a shape to be tested. To make the module fully functional, we will need to make changes to the module's configuration and also enable the module in the application's configuration.
Zend Framework module – configuration

Zend Framework 2.0 module configuration is spread across a series of files, which can be found in the skeleton module. Some of the configuration files are described as follows:

- Module.php: The Zend Framework 2 module manager looks for the Module.php file in the module's root folder. The module manager uses the Module.php file to configure the module and invokes the getAutoloaderConfig() and getConfig() methods.
- autoload_classmap.php: The getAutoloaderConfig() method in the skeleton module loads autoload_classmap.php to include any custom overrides other than the classes loaded using the standard autoloader format. Entries can be added to or removed from the autoload_classmap.php file to manage these custom overrides.
- config/module.config.php: The getConfig() method loads config/module.config.php; this file is used for configuring various module options, including routes, controllers, layouts, and other configurations.

Time for action – modifying module configuration

In this section, we will make configuration changes to the Users module to enable it to work with the newly created controller and views, using the following steps:

1. Autoloader configuration – The default autoloader configuration provided by ZendSkeletonModule needs to be disabled; this can be done by editing autoload_classmap.php and replacing its contents with the following:

    <?php
    return array();

2. Module configuration – The module configuration file can be found in config/module.config.php; this file needs to be updated to reflect the new controllers and views that have been created, as follows:

Controllers – The default controller mapping points to ZendSkeletonModule; this needs to be replaced with the mapping shown in the following snippet:

    'controllers' => array(
        'invokables' => array(
            'Users\Controller\Index' => 'Users\Controller\IndexController',
        ),
    ),

Views – The views for the module have to be mapped to the appropriate view location. Make sure that the view uses lowercase names separated by a hyphen (for example, ZendSkeleton will be referred to as zend-skeleton):

    'view_manager' => array(
        'template_path_stack' => array(
            'users' => __DIR__ . '/../view',
        ),
    ),

Routes – The last module configuration is to define a route for accessing this module from the browser; in this case we define the route as /users, which will point to the index action in the Index controller of the Users module:

    'router' => array(
        'routes' => array(
            'users' => array(
                'type' => 'Literal',
                'options' => array(
                    'route' => '/users',
                    'defaults' => array(
                        '__NAMESPACE__' => 'Users\Controller',
                        'controller' => 'Index',
                        'action' => 'index',
                    ),
                ),

3. After making all the configuration changes detailed in the previous sections, the final configuration file, config/module.config.php, should look like the following:

    <?php
    return array(
        'controllers' => array(
            'invokables' => array(
                'Users\Controller\Index' => 'Users\Controller\IndexController',
            ),
        ),
        'router' => array(
            'routes' => array(
                'users' => array(
                    'type' => 'Literal',
                    'options' => array(
                        // Change this to something specific to your module
                        'route' => '/users',
                        'defaults' => array(
                            // Change this value to reflect the namespace in which
                            // the controllers for your module are found
                            '__NAMESPACE__' => 'Users\Controller',
                            'controller' => 'Index',
                            'action' => 'index',
                        ),
                    ),
                    'may_terminate' => true,
                    'child_routes' => array(
                        // This route is a sane default when developing a module;
                        // as you solidify the routes for your module, however,
                        // you may want to remove it and replace it with more
                        // specific routes.
                        'default' => array(
                            'type' => 'Segment',
                            'options' => array(
                                'route' => '/[:controller[/:action]]',
                                'constraints' => array(
                                    'controller' => '[a-zA-Z][a-zA-Z0-9_-]*',
                                    'action' => '[a-zA-Z][a-zA-Z0-9_-]*',
                                ),
                                'defaults' => array(
                                ),
                            ),
                        ),
                    ),
                ),
            ),
        ),
        'view_manager' => array(
            'template_path_stack' => array(
                'users' => __DIR__ . '/../view',
            ),
        ),
    );
4. Application configuration – Enable the module in the application's configuration. This can be done by modifying the application's config/application.config.php file and adding Users to the list of enabled modules:

    'modules' => array(
        'Application',
        'Users',
    ),

5. To test the module, open http://comm-app.local/users/ in your web browser; you should be able to navigate within the module. The module home page is shown as follows:

The registration page is shown as follows:

What just happened?

We have modified the configuration of ZendSkeletonModule to work with the new controller and views created for the Users module. Now we have a fully functional module up and running using the new ZF module system.

Have a go hero

Now that we have the knowledge to create and configure our own modules, your next task is to set up a new CurrentTime module. The requirement for this module is to render the current time and date in the following format:

    Time: 14:00:00 GMT
    Date: 12-Oct-2012
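As a possible starting point for this exercise — a sketch, not the book's solution — the module's IndexController could format the values like this, with the Module.php, route configuration, and view script following the same steps used for the Users module:

    <?php
    // module/CurrentTime/src/CurrentTime/Controller/IndexController.php
    namespace CurrentTime\Controller;

    use Zend\Mvc\Controller\AbstractActionController;
    use Zend\View\Model\ViewModel;

    class IndexController extends AbstractActionController
    {
        public function indexAction()
        {
            // Build the time and date strings in the requested format
            $now = new \DateTime('now', new \DateTimeZone('GMT'));
            return new ViewModel(array(
                'time' => $now->format('H:i:s') . ' GMT',
                'date' => $now->format('d-M-Y'),
            ));
        }
    }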
Summary

We have now learned how to set up a new Zend Framework project using Zend's skeleton application and module. In the next chapters, we will focus on further developing this module and extending it into a fully fledged application.

Working with Charts

Packt
07 Sep 2015
14 min read
In this article by Anand Dayalan, the author of the book Ext JS 6 By Example, we explore the different types of chart components in Ext JS, ending with a sample project called the expense analyzer. The following topics will be covered:

- Chart types
- Bar and column charts
- Area and line charts
- Pie charts
- 3D charts
- The expense analyzer – a sample project

(For more resources related to this topic, see here.)

Charts

Ext JS is almost a one-stop shop for all your JavaScript framework needs. Yes, Ext JS also includes charts along with all the other rich components you have learned about so far.

Chart types

There are three types of charts: cartesian, polar, and spacefilling.

The cartesian chart

Ext.chart.CartesianChart (xtype: cartesian or chart)

A cartesian chart has two directions: X and Y. By default, X is horizontal and Y is vertical. Charts that use cartesian coordinates are column, bar, area, line, and scatter.

The polar chart

Ext.chart.PolarChart (xtype: polar)

These charts have two axes: angular and radial. Charts that plot values using polar coordinates are pie and radar.

The spacefilling chart

Ext.chart.SpaceFillingChart (xtype: spacefilling)

These charts fill the complete area of the chart.

Bar and column charts

For bar and column charts, at a minimum, you need to provide a store, axes, and series.

The basic column chart

Let's start with a simple, basic column chart. First, let's create a simple store with inline hardcoded data as follows:

    Ext.define('MyApp.model.Population', {
        extend: 'Ext.data.Model',
        fields: ['year', 'population']
    });

    Ext.define('MyApp.store.Population', {
        extend: 'Ext.data.Store',
        storeId: 'population',
        model: 'MyApp.model.Population',
        data: [
            { "year": "1610", "population": 350 },
            { "year": "1650", "population": 50368 },
            { "year": "1700", "population": 250888 },
            { "year": "1750", "population": 1170760 },
            { "year": "1800", "population": 5308483 },
            { "year": "1900", "population": 76212168 },
            { "year": "1950", "population": 151325798 },
            { "year": "2000", "population": 281421906 },
            { "year": "2010", "population": 308745538 },
        ]
    });

    var store = Ext.create("MyApp.store.Population");

Now, let's create the chart using Ext.chart.CartesianChart (xtype: cartesian or chart) and use the store created above:

    Ext.create('Ext.Container', {
        renderTo: Ext.getBody(),
        width: 500,
        height: 500,
        layout: 'fit',
        items: [{
            xtype: 'chart',
            insetPadding: { top: 60, bottom: 20, left: 20, right: 40 },
            store: store,
            axes: [{
                type: 'numeric',
                position: 'left',
                grid: true,
                title: { text: 'Population in Millions', fontSize: 16 },
            }, {
                type: 'category',
                title: { text: 'Year', fontSize: 16 },
                position: 'bottom',
            }],
            series: [{
                type: 'bar',
                xField: 'year',
                yField: ['population']
            }],
            sprites: {
                type: 'text',
                text: 'United States Population',
                font: '25px Helvetica',
                width: 120,
                height: 35,
                x: 100,
                y: 40
            }
        }]
    });

Important things to note in the preceding code are axes, series, and sprites. Axes can be one of three types: numeric, time, and category. In series, you can see that the type is set to bar. In Ext JS, to render either a column or a bar chart, you specify the type as bar; if you want a bar chart, you additionally have to set flipXY to true in the chart config. The sprites config used here is quite straightforward, and it is optional. The grid property can be specified for both axes, although we have specified it only for one axis here. insetPadding is used to specify the padding for the chart to render other information, such as the title; if we don't specify insetPadding, the title and other information may overlap the chart. The output of the preceding code is shown here:
If we don't specify insetPadding, the title and other information may get overlapped with the chart. The output of the preceding code is shown here: The bar chart As mentioned before, in order to get the bar chart, you can just use the same code, but specify flipXP to true and change the positions of axes accordingly, as shown in the following code: Ext.create('Ext.Container', { renderTo: Ext.getBody(), width: 500, height: 500, layout: 'fit', items: [{ xtype: 'chart', flipXY: true, insetPadding: { top: 60, bottom: 20, left: 20, right: 40 }, store: store, axes: [{ type: 'numeric', position: 'bottom', grid: true, title: { text: 'Population in Millions', fontSize: 16 }, }, { type: 'category', title: { text: 'Year', fontSize: 16 }, position: 'left', } ], series: [{ type: 'bar', xField: 'year', yField: ['population'] } ], sprites: { type: 'text', text: 'United States Population', font: '25px Helvetica', width: 120, height: 35, x: 100, y: 40 } } ] }); The output of the preceding code is shown in the following screenshot: The stacked chart Now, let's say you want to plot two values in each category in the column chart. You can either stack them or have two bar columns for each category. Let's modify our column chart example to render a stacked column chart. For this, we need an additional numeric field in the store, and we need to specify two fields for yField in the series. You can stack more than two fields, but for this example, we will stack only two fields. Take a look at the following code: Ext.define('MyApp.model.Population', { extend: 'Ext.data.Model', fields: ['year', 'total','slaves'] }); Ext.define('MyApp.store.Population', { extend: 'Ext.data.Store', storeId: 'population', model: 'MyApp.model.Population', data: [ { "year": "1790", "total": 3.9, "slaves": 0.7 }, { "year": "1800", "total": 5.3, "slaves": 0.9 }, { "year": "1810", "total": 7.2, "slaves": 1.2 }, { "year": "1820", "total": 9.6, "slaves": 1.5 }, { "year": "1830", "total": 12.9, "slaves": 2 }, { "year": "1840", "total": 17, "slaves": 2.5 }, { "year": "1850", "total": 23.2, "slaves": 3.2 }, { "year": "1860", "total": 31.4, "slaves": 4 }, ] }); var store = Ext.create("MyApp.store.Population"); Ext.create('Ext.Container', { renderTo: Ext.getBody(), width: 500, height: 500, layout: 'fit', items: [{ xtype: 'cartesian', store: store, insetPadding: { top: 60, bottom: 20, left: 20, right: 40 }, axes: [{ type: 'numeric', position: 'left', grid: true, title: { text: 'Population in Millions', fontSize: 16 }, }, { type: 'category', title: { text: 'Year', fontSize: 16 }, position: 'bottom', } ], series: [{ type: 'bar', xField: 'year', yField: ['total','slaves'] } ], sprites: { type: 'text', text: 'United States Slaves Distribution 1790 to 1860', font: '20px Helvetica', width: 120, height: 35, x: 60, y: 40 } } ] }); The output of the stacked column chart is shown here: If you want to render multiple fields without stacking, then you can simply set the stacked property of the series to false to get the following output: There are so many options available in the chart. 
Let's take a look at some of the commonly used options:

tooltip: This can be added easily by setting a tooltip property in the series
legend: This can be rendered to any of the four sides of the chart by specifying the legend config
sprites: This can be an array if you want to specify multiple pieces of information, such as a header, a footer, and so on

Here is the code for the same store configured with some advanced options:

Ext.create('Ext.Container', {
    renderTo: Ext.getBody(),
    width: 500,
    height: 500,
    layout: 'fit',
    items: [{
        xtype: 'chart',
        legend: { docked: 'bottom' },
        insetPadding: { top: 60, bottom: 20, left: 20, right: 40 },
        store: store,
        axes: [{
            type: 'numeric',
            position: 'left',
            grid: true,
            title: { text: 'Population in Millions', fontSize: 16 },
            minimum: 0
        }, {
            type: 'category',
            title: { text: 'Year', fontSize: 16 },
            position: 'bottom'
        }],
        series: [{
            type: 'bar',
            xField: 'year',
            stacked: false,
            title: ['Total', 'Slaves'],
            yField: ['total', 'slaves'],
            tooltip: {
                trackMouse: true,
                style: 'background: #fff',
                renderer: function (storeItem, item) {
                    this.setHtml('In ' + storeItem.get('year') + ' ' + item.field + ' population was ' + storeItem.get(item.field) + ' m');
                }
            }
        }],
        sprites: [{
            type: 'text',
            text: 'United States Slaves Distribution 1790 to 1860',
            font: '20px Helvetica',
            width: 120,
            height: 35,
            x: 60,
            y: 40
        }, {
            type: 'text',
            text: 'Source: http://www.wikipedia.org',
            fontSize: 10,
            x: 12,
            y: 440
        }]
    }]
});

The output with tooltip, legend, and footer is shown here:

The 3D bar chart

If you simply change the type of the series to bar3d instead of bar, you'll get the 3D column chart, as shown in the following screenshot:

Area and line charts

Area and line charts are also cartesian charts.

The area chart

To render an area chart, simply replace the series in the previous example with the following code:

series: [{
    type: 'area',
    xField: 'year',
    stacked: false,
    title: ['Total', 'Slaves'],
    yField: ['total', 'slaves'],
    style: {
        stroke: "#94ae0a",
        fillOpacity: 0.6
    }
}]

The output of the preceding code is shown here:

Similar to the stacked column chart, you can have the stacked area chart as well by setting stacked to true in the series. If you set stacked to true in the preceding example, you'll get the following output:

Figure 7.1

The line chart

To get the line chart shown in Figure 7.1, use the following series config in the preceding example instead:

series: [{
    type: 'line',
    xField: 'year',
    title: ['Total'],
    yField: ['total']
}, {
    type: 'line',
    xField: 'year',
    title: ['Slaves'],
    yField: ['slaves']
}]

The pie chart

This is one of the frequently used charts in many applications and reporting tools. Ext.chart.PolarChart (xtype: polar) should be used to render a pie chart.
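The pie chart is not the only polar series: radar charts use the same polar container. The following is a rough sketch only; it assumes a store holding cat and spent fields like the expense store created next, and the axis positions and the angleField/radiusField configs should be double-checked against your Ext JS 6 version:

// A rough radar sketch; 'store' is assumed to hold records with
// 'cat' and 'spent' fields, like the expense store defined below.
Ext.create('Ext.Container', {
    renderTo: Ext.getBody(),
    width: 500,
    height: 400,
    layout: 'fit',
    items: [{
        xtype: 'polar',
        store: store,
        axes: [{
            type: 'numeric',
            position: 'radial'
        }, {
            type: 'category',
            position: 'angular'
        }],
        series: [{
            type: 'radar',
            angleField: 'cat',
            radiusField: 'spent',
            style: {
                fillOpacity: 0.4
            }
        }]
    }]
});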
The basic pie chart

Specify the type as pie, and specify the angleField and label to render a basic pie chart, as shown in the following code:

Ext.define('MyApp.store.Expense', {
    extend: 'Ext.data.Store',
    alias: 'store.expense',
    fields: ['cat', 'spent'],
    data: [
        { "cat": "Restaurant", "spent": 100 },
        { "cat": "Travel", "spent": 150 },
        { "cat": "Insurance", "spent": 500 },
        { "cat": "Rent", "spent": 1000 },
        { "cat": "Groceries", "spent": 400 },
        { "cat": "Utilities", "spent": 300 }
    ]
});

var store = Ext.create("MyApp.store.Expense");

Ext.create('Ext.Container', {
    renderTo: Ext.getBody(),
    width: 600,
    height: 500,
    layout: 'fit',
    items: [{
        xtype: 'polar',
        legend: { docked: 'bottom' },
        insetPadding: { top: 100, bottom: 20, left: 20, right: 40 },
        store: store,
        series: [{
            type: 'pie',
            angleField: 'spent',
            label: { field: 'cat' },
            tooltip: {
                trackMouse: true,
                renderer: function (storeItem, item) {
                    var value = ((parseFloat(storeItem.get('spent') / storeItem.store.sum('spent')) * 100.0).toFixed(2));
                    this.setHtml(storeItem.get('cat') + ': ' + value + '%');
                }
            }
        }]
    }]
});

The donut chart

Just by setting the donut property of the series in the preceding example to 40, you'll get the following chart. Here, donut is the percentage of the radius of the hole compared to the entire disk:

The 3D pie chart

In Ext JS 6, some improvements were made to the 3D pie chart: it now supports labels and configurable 3D aspects, such as thickness and distortion. Let's use the same model and store that were used in the pie chart example and create a 3D pie chart as follows:

Ext.create('Ext.Container', {
    renderTo: Ext.getBody(),
    width: 600,
    height: 500,
    layout: 'fit',
    items: [{
        xtype: 'polar',
        legend: { docked: 'bottom' },
        insetPadding: { top: 100, bottom: 20, left: 80, right: 80 },
        store: store,
        series: [{
            type: 'pie3d',
            donut: 50,
            thickness: 70,
            distortion: 0.5,
            angleField: 'spent',
            label: { field: 'cat' },
            tooltip: {
                trackMouse: true,
                renderer: function (storeItem, item) {
                    var value = ((parseFloat(storeItem.get('spent') / storeItem.store.sum('spent')) * 100.0).toFixed(2));
                    this.setHtml(storeItem.get('cat') + ': ' + value + '%');
                }
            }
        }]
    }]
});

The following image shows the output of the preceding code:

The expense analyzer – a sample project

Now that you have learned about the different kinds of charts available in Ext JS, let's use them to create a sample project called Expense Analyzer. The following screenshot shows the design of this sample project:

Let's use Sencha Cmd to scaffold our application. Run the following command in the terminal or command window:

sencha -sdk <path to SDK>/ext-6.0.0.415/ generate app EA ./expense-analyzer

Now, let's remove all the unwanted files and code and add some additional files to create this project. The final folder structure and some of the important files are shown in Figure 7.2. The complete source code is not given in this article; only some of the important files are shown, and in between, some less important code has been truncated. The complete source is available at https://github.com/ananddayalan/extjs-by-example-expense-analyzer.

Figure 7.2

Now, let's create the grid shown in the design. The following code is used to create the grid.
This List view extends from Ext.grid.Panel, uses the expense store for the data, and has three columns: Ext.define('EA.view.main.List', { extend: 'Ext.grid.Panel', xtype: 'mainlist', maxHeight: 400, requires: [ 'EA.store.Expense'], title: 'Year to date expense by category', store: { type: 'expense' }, columns: { defaults: { flex:1 }, items: [{ text: 'Category', dataIndex: 'cat' }, { formatter: "date('F')", text: 'Month', dataIndex: 'date' }, { text: 'Spent', dataIndex: 'spent' }] } }); Here, I have not used the pagination. The maxHeight is used to limit the height of the grid, and this enables the scroll bar as well because we have more records that won't fit the given maximum height of the grid. The following code creates the expense store used in the preceding example. This is a simple store with the inline data. Here, we have not created a separate model and added fields directly in the store: Ext.define('EA.store.Expense', { extend: 'Ext.data.Store', alias: 'store.expense', storeId: 'expense', fields: [{ name:'date', type: 'date' }, 'cat', 'spent' ], data: { items: [ { "date": "1/1/2015", "cat": "Restaurant", "spent": 100 }, { "date": "1/1/2015", "cat": "Travel", "spent": 22 }, { "date": "1/1/2015", "cat": "Insurance", "spent": 343 }, // Truncated code ]}, proxy: { type: 'memory', reader: { type: 'json', rootProperty: 'items' } } }); Next, let's create the bar chart shown in the design. In the bar chart, we will use another store called expensebyMonthStore, in which we'll populate data from the expense data store. The following 3D bar chart has two types of axis: numeric and category. We have used the month part of the date field as a category. A renderer is used to render the month part of the date field: Ext.define('EA.view.main.Bar', { extend: 'Ext.chart.CartesianChart', requires: ['Ext.chart.axis.Category', 'Ext.chart.series.Bar3D', 'Ext.chart.axis.Numeric', 'Ext.chart.interactions.ItemHighlight'], xtype: 'mainbar', height: 500, padding: { top: 50, bottom: 20, left: 100, right: 100 }, legend: { docked: 'bottom' }, insetPadding: { top: 100, bottom: 20, left: 20, right: 40 }, store: { type: 'expensebyMonthStore' }, axes: [{ type: 'numeric', position: 'left', grid: true, minimum: 0, title: { text: 'Spendings in $', fontSize: 16 }, }, { type: 'category', position: 'bottom', title: { text: 'Month', fontSize: 16 }, label: { font: 'bold Arial', rotate: { degrees: 300 } }, renderer: function (date) { return ["Jan", "Feb", "Mar", "Apr", "May"][date.getMonth()]; } } ], series: [{ type: 'bar3d', xField: 'date', stacked: false, title: ['Total'], yField: ['total'] }], sprites: [{ type: 'text', text: 'Expense by Month', font: '20px Helvetica', width: 120, height: 35, x: 60, y: 40 }] }); Now, let's create the MyApp.model.ExpensebyMonth store used in the preceding bar chart view. This store will display the total amount spent in each month. This data is populated by grouping the expense store with the date field. 
Take a look at how the data property is configured to populate the data: Ext.define('MyApp.model.ExpensebyMonth', { extend: 'Ext.data.Model', fields: [{name:'date', type: 'date'}, 'total'] }); Ext.define('MyApp.store.ExpensebyMonth', { extend: 'Ext.data.Store', alias: 'store.expensebyMonthStore', model: 'MyApp.model.ExpensebyMonth', data: (function () { var data = []; var expense = Ext.createByAlias('store.expense'); expense.group('date'); var groups = expense.getGroups(); groups.each(function (group) { data.push({ date: group.config.groupKey, total: group.sum('spent') }); }); return data; })() }); Then, the following code is used to generate the pie chart. This chart uses the expense store, but only shows one selected month of data at a time. A drop-down box is added to the main view to select the month. The beforerender is used to filter the expense store to show the data only for the month of January on the load: Ext.define('EA.view.main.Pie', { extend: 'Ext.chart.PolarChart', requires: ['Ext.chart.series.Pie3D'], xtype: 'mainpie', height: 800, legend: { docked: 'bottom' }, insetPadding: { top: 100, bottom: 20, left: 80, right: 80 }, listeners: { beforerender: function () { var dateFiter = new Ext.util.Filter({ filterFn: function(item) { return item.data.date.getMonth() ==0; } }); Ext.getStore('expense').addFilter(dateFiter); } }, store: { type: 'expense' }, series: [{ type: 'pie3d', donut: 50, thickness: 70, distortion: 0.5, angleField: 'spent', label: { field: 'cat', } }] }); So far, we have created the grid, the bar chart, the pie chart, and the stores required for this sample application. Now, we need to link them together in the main view. The following code shows the main view from the classic toolkit. The main view is simply a tab control and specifies what view to render for each tab: Ext.define('EA.view.main.Main', { extend: 'Ext.tab.Panel', xtype: 'app-main', requires: [ 'Ext.plugin.Viewport', 'Ext.window.MessageBox', 'EA.view.main.MainController', 'EA.view.main.List', 'EA.view.main.Bar', 'EA.view.main.Pie' ], controller: 'main', autoScroll: true, ui: 'navigation', // Truncated code items: [{ title: 'Year to Date', iconCls: 'fa-bar-chart', items: [ { html: '<h3>Your average expense per month is: ' + Ext.createByAlias('store.expensebyMonthStore').average('total') + '</h3>', height: 70, }, { xtype: 'mainlist'}, { xtype: 'mainbar' } ] }, { title: 'By Month', iconCls: 'fa-pie-chart', items: [{ xtype: 'combo', value: 'Jan', fieldLabel: 'Select Month', store: ['Jan', 'Feb', 'Mar', 'Apr', 'May'], listeners: { select: 'onMonthSelect' } }, { xtype: 'mainpie' }] }] }); Summary In this article, we looked at the different kinds of charts available in Ext JS. We also created a simple sample project called Expense Analyzer and used some of the concepts you learned in this article. Resources for Article: Further resources on this subject: Ext JS 5 – an Introduction[article] Constructing Common UI Widgets[article] Static Data Management [article]
Creating your first FreeMarker Template

Packt
26 Jul 2013
10 min read
(For more resources related to this topic, see here.) Step 1 – setting up your development directory If you haven't done so, create a directory to work in. I'm going to keep this as simple as possible, so we won't need a complicated directory structure. Everything can be done in one directory.Put the freemarker.jar in the directory. All future talk about files and running from the command-line will refer to your working directory. If you want to, you can set up a more advanced project-like set of directories. Step 2 – writing your first template This is a quick start, so let's just dive in and write the template. Open a file for editing called hello.ftl. The ftl extension is customary for FreeMarker Template Language files, but you are free to name your template files anything you want. Put this line in your file: Hello, ${name}! FreeMarker will replace the ${name} expression with the value of an element called name in the model. FreeMarker calls this an interpolation. I prefer to refer to this as "evaluating an expression", but you will encounter the term interpolation in the documentation. Everything else you have put in this initial template is static text. If name contained the value World, then this template would evaluate to: Hello, World! Step 3 – writing the Java code Templates are not scripts that can be run, so we need to write some Java code to invoke the FreeMarker engine and combine the template with a populated model. Here is that code: import java.io.*;import java.util.*;import freemarker.template.*;public class HelloFreemarker { public static void main(String[] args) throws IOException, TemplateException { Configuration cfg = new Configuration(); cfg.setObjectWrapper(new DefaultObjectWrapper()); cfg.setDirectoryForTemplateLoading(new File(".")); Map<String, Object> model = new HashMap<String, Object>(); model.put("name", "World"); Template template = cfg.getTemplate("hello.ftl"); template.process(model, new OutputStreamWriter(System.out)); }} The highlighted line says that FreeMarker should look for FTL files in the "working directory" where the program is run as a simple Java application. If you set your project up differently, or run in an IDE, you may need to change this to an absolute path. The first thing we do is create a FreeMarker freemarker.template.Configuration object. This acts as a factory for freemarker.template.Template objects. FreeMarker has its own internal object types that it uses to extract values from the model.In order to use the objects that you supply, it must wrap these in its own native types. The job of doing this is done by an object wrapper. You must provide an object wrapper. It will always be FreeMarker's own freemarker.template.DefaultObjectWrapper unless you havespecial object wrapping requirements. Finally, we set the root directory for loading templates. For the purposes of our sample code, everything is in the same directory so we just set it to ".". Setting the template directory can throw an java.lang.IOException exception in this code. We simply allow that to be thrown out of the method. Next, we create our model, which is a simple map of java.lang.String keys to java.lang.Object values. The values can be simple object types such as String or java.lang.Number, or they can be complex object types, including arrays and collections. Our needs are simple here, so we're going to map "name" to the string "World". The next step is to get a Template object. We ask the Configuration instance to load the template into a Template object. 
This can also throw an IOException. The magic finally happens when we ask the Template instance to process the model and create an output. We already have the model, but where does the output go? For this, we need an implementation of java.io.Writer. For convenience, we are going to wrap the java.io.PrintStream in java.lang.System.out with a java.io.OutputStreamWriter and give that to the template. After compiling this program, we can run it from the command line:

java -cp .;freemarker.jar HelloFreemarker

For Linux or OS X, you would use a ":" instead of a ";" in the command:

java -cp .:freemarker.jar HelloFreemarker

The result should be that the program prints out:

Hello, World!

Step 4 – moving beyond strings

If you plan to create simple templates populated with preformatted text, then you now know all you need to know about FreeMarker. Chances are that you will need more than that, so let's take a look at how FreeMarker handles formatting other types and complex objects. Let's try binding the "name" object in our model to some other types of objects. We can replace:

model.put("name", "World");

with:

model.put("name", 123456789);

The output format of the program will depend on the default locale, so if you are in the United States, you will see this:

Hello, 123,456,789!

If your default locale was set to Germany, you would see this:

Hello, 123.456.789!

FreeMarker does not call the toString() method on instances of Number types; it employs java.text.DecimalFormat. Unless you want to pass all of your values to FreeMarker as preformatted strings, you are going to need to understand how to control the way FreeMarker converts values to text. If preformatting all of the items in your model sounds like a good idea, it isn't. Moving "view" logic into your "controller" code is a sure-fire way to make updating the appearance of your site into a painful experience.

Step 5 – formatting different types

In the previous section, we saw how FreeMarker will choose a default method of formatting numbers. One of the features of this method is that it employs grouping separators: a comma or a period every three digits. It may also use a comma rather than a period to denote the decimal portion of the number. This is great for humans who may expect these formatting details, but if your number is destined to be parsed by a computer, it needs to be free of grouping separators and it must use a period as a decimal point. In this case, you need a way to control how FreeMarker decides to format a number. In order to control exactly how model objects are converted to text, FreeMarker provides operators called built-ins. Let's create a new template called types.ftl and put in some expressions that use built-ins to control formatting:

String: ${string?html}
Number: ${number?c}
Boolean: ${boolean?string("+++++", "-----")}
Date: ${.now?time}
Complex: ${object}

The .now value is a special variable that is automatically provided by FreeMarker. It contains the date and time when the Template began processing. There are other special variables, but this is the only one you're likely to use. This template is a little more complicated than the last template. The "?" after a variable name denotes the use of a built-in. Before we explore these particular built-ins, let's see them in action.
Create a java program, FreemarkerTypes, which populates a model with values for our new template: import java.io.*;import java.math.BigDecimal;import java.util.*;import freemarker.template.*;public class FreemarkerTypes { public static void main(String[] args) throws IOException, TemplateException { Configuration cfg = new Configuration(); cfg.setObjectWrapper(new DefaultObjectWrapper()); cfg.setDirectoryForTemplateLoading(new File(".")); Map<String, Object> model = new HashMap<String, Object>(); model.put("string", "easy & fast "); model.put("number", new BigDecimal("1234.5678")); model.put("boolean", true); model.put("object", Locale.US); Template template = cfg.getTemplate("types.ftl"); template.process(model, new OutputStreamWriter(System.out)); }} Run the FreemarkerType program the same way you ran HelloFreemarker. You will see this output: String: easy &amp; fastNumber: 1234.5678Boolean: +++++Date: 9:12:33 AMComplex: en_US Let's walk through the template and see how the built-ins affected the output. Our purpose is to get a solid foundation in the basics. We'll look at more details about how to use FreeMarker features in later articles. First we output a String modified with the html built-in. This encoded the string for HTML, turning the & into the &amp; HTML entity. You will want this applied to a lot of your expressions on HTML pages in order to ensure proper display of your text and to prevent cross-site scripting ( XSS ) attacks. The second line outputs a number with the c built-in. This tells FreeMarker that the number should be written for parsing by computers. As we saw in the previous section, FreeMarker will by default format numbers with grouping separators. It will also localize the decimal point, using a comma instead of a period. This is great when you are displaying numbers to humans, but not computers. If you want to put an ID number in a URL or a price in an XML document, you will want to use this built-in to format it. Next, we format a Boolean. It may surprise you to learn that unless you use the string built-in, FreeMarker will not format a Boolean value at all. In fact, it throws an exception. Conceptually, "true" and "false" have no universal text representation. If you use string with no arguments, the interpolation will evaluate to either "true" or "false", but this is a default you can change. Here, we have told the built-in to use a series of + characters for "true" and a series of – characters for "false". Another type which FreeMarker will not process without a built-in is java.util.Date. The main issue here is that FreeMarker doesn't know whether you want to display a date, a time, or both. By specifying the time built-in we are letting FreeMarker know that we want to display a time. The output shown previously was generated shortly past nine o'clock in the morning. Finally, we see a complex object converted to text with no built-ins. Complex objects are turned into text by calling their toString() method, so you can use string built-ins on them. Step 6 – where do we go from here? We've reached the end of the Quick start section. You've created two simple templates and worked with some of the basic features of FreeMarker. You might be wondering what are the other built-ins, or what options they offer. In the upcoming sections we'll look at these options and also ways to change the default behavior. Another issue we've glossed over is errors. 
Once you have applied some of these built-ins, you must make sure that you supply the correct types for the named model elements. We also haven't looked at what happens when a referenced model element is missing. The FreeMarker manual provides excellent reference for all of this. Rather than trying to find your way around on your own, we'll take a guided tour through the important features in the Top Features section of the article. Quick start versus slow start A key difference between the Quick start and Top Features sections is that we'll be starting with the sample output. In this article, we created templates and evaluated them to see what we would get. In a real-world project, you will get better results if you worked backwards from the desired result. In many cases, you won't have a choice. The sample output will be generated by web designers and you will be expected to produce the same HTML with dynamic content. In other cases, you will need to work from mock-ups and decide the HTML for yourself. In these cases, it is still worth creating a static sample document. These static samples will show you where you need to apply some of the techniques. Summary In this article, we discussed how to create a freemarker template. Resources for Article: Further resources on this subject: Getting Started with the Alfresco Records Management Module [Article] Installing Alfresco Software Development Kit (SDK) [Article] Apache Felix Gogo [Article]
Creating subtle UI details using Midnight.js, Wow.js, and Animate.css

Roberto González
10 Jul 2015
9 min read
Creating animations in CSS or JavaScript is often annoying and/or time-consuming, so most people tend to pay a lot of attention to the content that’s below "the fold" ("the fold" is quickly becoming an outdated concept, but you know what I mean). I’ll be covering a few techniques to help you add some nice touches to your landing pages that only take a few minutes to implement and require pretty much no development work at all. To create a base for this project, I put together a bunch of photographs from https://unsplash.com/ with some text on top so we have something to work with. Download the files from http://aerolab.github.io/subtle-animations/assets/basics.zip and put them in a new folder. You can also check out the final result at http://aerolab.github.io/subtle-animations. Dynamically change your fixed headers using Midnight.js If you took a look at the demo site, you probably noticed that the minimalistic header we are using for "A How To Guide" becomes illegible in very light backgrounds. When this happens in most sites, we typically end up putting a background on the header, which usually improves legibility at the cost of making the design worse. Midnight.js is a jQuery plugin that changes your headers as you scroll, so the header always has a design that matches the content below it. This is particularly useful for minimalistic websites as they often use transparent headers. Implementation is quite simple as the setup is pretty much automatic. Start by adding a fixed header into the site. The example has one ready to go: <nav class="fixed"> <div class="container"> <span class="logo">A How To Guide</span> </div> </nav> Most of the setting up comes in specifying which header corresponds to which section. This is done by adding data-midnight="your-class" to any section or piece of content that requires a different design for the header. For the first section, we’ll be using a white header, so we’ll add data-midnight="white" to this section (it doesn’t have to be only a section, any large element works well). <section class="fjords" data-midnight="white"> <article> <h1>Adding Subtle UI Details</h1> <p>Using Midnight.js, Wow.js and Animate.css</p> </article> </section> In the next section, which is a photo of ships in very thick white fog, we’ll be using a darker header to help improve contrast. Let’s use data-midnight="gray" for the second one and data-midgnight="pink" for the last one, so it feels more in line with the content: <section class="ships" data-midnight="gray"> <article> <h1>Be quiet</h1> <p>I'm hunting wabbits</p> </article> </section> <section class="puppy" data-midnight="pink"> <article> <h1>OMG A PUPPY &lt;3</h1> </article> </section> Now we just need to add some css rules to change the look of the header in those cases. We’ll just be changing the color of the text for the moment, so open up css/styles.css and add the following rules: /* Styles for White, Gray and Pink headers */.midnightHeader.white { color: #fff; } .midnightHeader.gray { color: #999; } .midnightHeader.pink { color: #ffc0cb; } Last but not least, we need to include the necessary libraries. 
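Incidentally, these header rules don't have to stop at the text color; any CSS that keeps the header legible is fair game. The background and shadow values below are purely illustrative choices and are not part of the demo files:

/* Optional extras built on the header classes above; the values here
   are just illustrative. */
.midnightHeader.gray {
    color: #999;
    background: rgba(255, 255, 255, 0.85); /* a light strip for legibility */
}
.midnightHeader.pink {
    color: #ffc0cb;
    text-shadow: 0 1px 2px rgba(0, 0, 0, 0.4); /* lifts pale text off bright photos */
}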
As for the libraries, we'll add two right before the end of the body: jQuery and Midnight.js (they are included in the project files inside the js folder):

<script src="js/jquery-1.11.1.min.js"></script>
<script src="js/midnight.jquery.min.js"></script>

Right after that, we start Midnight.js on document.ready, using $('nav.fixed').midnight() (you can change the selector to whatever you are using on your site):

<script>
$(document).ready(function(){
    $('nav.fixed').midnight();
});
</script>

If you check the site now, you'll notice that the fixed header gracefully changes color when you start scrolling into the ships section. It's a very subtle effect, but it helps keep your designs clean.

Bonus Feature!

It's possible to completely change the markup of your header just for a specific section. It's mostly used to add some visual details that require extra markup, but it can be used to completely alter your headers as necessary. In this case, we'll be changing the "logo" from "A How To Guide" to "Shhhhhhhhh" on the ships section, and to a bunch of hearts for the puppy section, for some additional bad comedy. To do this, we need to alter our fixed header a bit. First we need to identify the "default" header (all headers that don't have custom markup will be based on this one), and then add the markup we need for any custom headers, like the gray one. This is done by creating multiple copies of the header and wrapping them in .midnightHeader.default, .midnightHeader.gray and .midnightHeader.pink respectively:

<nav class="fixed">
    <div class="midnightHeader default">
        <div class="container">
            <span class="logo">A How To Guide</span>
        </div>
    </div>
    <div class="midnightHeader gray">
        <div class="container">
            <span class="logo">Shhhhhhhhh</span>
        </div>
    </div>
    <div class="midnightHeader pink">
        <div class="container">
            <span class="logo">❤❤❤ OMG PUPPIES ❤❤❤</span>
        </div>
    </div>
</nav>

If you test the site now, you'll notice that the header not only changes color, but it also changes the "name" of the site to match the section, which gives you more freedom in terms of navigation and design.

Simple animations with Wow.js and Animate.css

Wow.js looks more like a toy than a serious plugin, but it's actually a very powerful library that's extremely easy to implement. Wow.js lets you animate things as they come into view. For instance, you can fade something in when you scroll to that section, letting users enjoy some extra UI candy. You can choose from a large set of animations from Animate.css so you don't even have to touch the CSS (but you can still do that if you want). To get Wow.js to work, we have to include just two things:

Animate.css, which contains all the animations we need. Of course, you can create your own, or even tweak those to match your tastes. Just add a link to animate.css in the head of the document:

<link rel="stylesheet" href="css/animate.css" />

Wow.js itself. This simply means including the script and initializing it, which is done by adding the following just before the end of the document:

<script src="js/wow.min.js"></script>
<script>new WOW().init()</script>

That's it! To animate an element as soon as it gets into view, you just need to add the .wow class to that element, and then any animation from Animate.css (like .fadeInUp, .slideInLeft, or one of the many options available at http://daneden.github.io/animate.css/). For example, to make something fade in from the bottom of the screen, you just have to add wow fadeInUp.
Let's try this on the h1 of our first section:

<section class="fjords" data-midnight="white">
    <article>
        <h1 class="wow fadeInUp">Adding Subtle UI Details</h1>
        <p>Using Midnight.js, Wow.js and Animate.css</p>
    </article>
</section>

If you feel like altering the animation slightly, you have quite a bit of control over how it behaves. For instance, let's fade in the subtitle but do it a moment after the title, so it follows a sequence. We can use data-wow-delay="0.5s" to make the subtitle wait for half a second before making its appearance:

<section class="fjords" data-midnight="white">
    <article>
        <h1 class="wow fadeInUp">Adding Subtle UI Details</h1>
        <p class="wow fadeInUp" data-wow-delay="0.5s">Using Midnight.js, Wow.js and Animate.css</p>
    </article>
</section>

We can even tweak how long the animation takes by using data-wow-duration="1.5s" so it lasts a second and a half. This is particularly useful in the second section, combined with another delay:

<section class="ships" data-midnight="gray">
    <article>
        <h1 class="wow fadeIn" data-wow-duration="1.5s">Be quiet</h1>
        <p class="wow fadeIn" data-wow-delay="0.5s" data-wow-duration="1.5s">I'm hunting wabbits</p>
    </article>
</section>

We can even repeat an animation a few times. Let's make the last title shake a few times as soon as it gets into view with data-wow-iteration="5". We'll take this opportunity to use all the properties, like data-wow-duration="0.5s" to make each shake last half a second, and we'll also add a large delay for the last piece so it appears after the main animation has finished:

<section class="puppy">
    <article>
        <h1 class="wow shake" data-wow-iteration="5" data-wow-duration="0.5s">OMG A PUPPY &lt;3</h1>
        <p class="wow fadeIn" data-wow-delay="2.5s">Ok, this one wasn't subtle at all</p>
    </article>
</section>

Summary

That's pretty much all there is to know about using Midnight.js, Wow.js and Animate.css! All you need to do now is find a project and experiment a bit with different animations. It's a great way to add some last-minute eye candy and, as long as you don't overdo it, it looks fantastic on most sites. I hope you enjoyed the article!

About the author

Roberto González is the co-founder of Aerolab, "an awesome place where we really push the barriers to create amazing, well coded design for the best digital products." He can be reached at @robertcode.
Introducing SproutCore

Packt
10 Oct 2013
6 min read
(For more resources related to this topic, see here.) Understanding the SproutCore approach In the strictly technical sense, I would describe SproutCore as an open source web application development framework. As you are likely a technical person interested in web application development, this should be reassuring. And if you are interested in developing web applications, you must also already know how difficult it is to keep track of the vast number of libraries and frameworks to choose from. While it would be nice if we could say that there was one true way, and even nicer if I could say that the one true way was SproutCore; this is not the case and never will be the case. Competing ideas will always exist, especially in this area because the future of software is largely JavaScript and the web. So where does SproutCore fit ideologically within this large and growing group? To best describe it, I would ask you to picture a spectrum of all the libraries and frameworks one can use to build a web application. Towards one end are the small single-feature libraries that provide useful helper functions for use in dynamic websites. As we move across, you'll see that the libraries grow and become combined into frameworks of libraries that provide larger functions, some of which start to bridge the gap between what we may call a website and what we may call a web app. Finally, at the other end of the spectrum you'll find the full application development frameworks. These are the frameworks dedicated to writing software for the web and as you may have guessed, this is where you would find SproutCore along with very few others. First, let me take a moment to argue the position of full application development frameworks such as SproutCore. In my experience, in order to develop web software that truly rivals the native software, you need more than just a collection of parts, and you need a cohesive set of tools with strong fundamentals. I've actually toyed with calling SproutCore something more akin to a platform, rather than a framework, because it is really more than just the framework code, it's also the tools, the ideas, and the experience that come with it. On the other side of the argument, there is the idea of picking small pieces and cobbling them together to form an application. While this is a seductive idea and makes great demos, this approach quickly runs out of steam when attempting to go beyond a simple project. The problem isn't the technology, it's the realities of software development: customization is the enemy of maintainability and growth. Without a native software like structure to build on, the developers must provide more and more glue code to keep it all together and writing architecturally sound code is extremely hard. Unfortunately, under deadlines this results in difficult to maintain codebases that don't scale. In the end, the ability to execute and the ability to iterate are more important than the ability to start. Fortunately, almost all of what you need in an application is common to all applications and so there is no need to reinvent the foundations in each project. It just needs to work and work exceptionally well so that we can free up time and resources to focus on attaining the next level in the user experience. This is the SproutCore approach. SproutCore does not just include all the components you need to create a real application. 
It also includes thousands of hours of real world tested professional engineering experience on how to develop and deploy genre-changing web applications that are used by millions of people. This experience is baked into the heart of SproutCore and it's completely free to use, which I hope you find as exciting a prospect as I do! Knowing when SproutCore is the right choice As you may have noticed, I use the word "software" occasionally and I will continue to do so, because I don't want to make any false pretenses about what it is we are doing. SproutCore is about writing software for the web. If the term software feels too heavy or too involved to describe your project, then SproutCore may not be the best platform for you. A good measure of whether SproutCore is a good candidate for your project or not, is to describe the goals of your project in normal language. For example, if we were to describe a typical SproutCore application, we would use terms such as: "rich user experience" "large scale" "extremely fast" "immediate feedback" "huge amounts of data" "fluid scrolling through gigantic lists" "works on multiple browsers, even IE7" "full screen" "pixel perfect design" "offline capable" "localized in multiple languages" and perhaps the most telling descriptor of them all, "like a native app" If these terms match several of the goals for your own project, then we are definitely on the right path. Let me talk about the other important factor to consider, possibly the most important factor to consider when deciding as a business on which technology to use: developer performance. It does not matter at all what features a framework has if the time it takes or the skill required to build real applications with it becomes unmanageable. I can tell you first hand that custom code written by a star developer quickly becomes useless in the hands of the next person and all software eventually ends up in someone else's hands. However, SproutCore is built using the same web technology (HTML, JavaScript and CSS) that millions are already familiar with. This provides a simple entry point for a lot of current web developers to start from. But more importantly, SproutCore was built around the software concepts that native desktop and mobile developers have used for years, but that have barely existed in the web. These concepts include: Class-like inheritance, encapsulation, and polymorphism Model-View-Controller (MVC) structure Statecharts Key-value coding, binding, and observing Computed properties Query-able data stores Centralized event handling Responder chains Run loops While there is also a full UI library and many conveniences, the application of software development principles onto web technology is what makes SproutCore so great. When your web app becomes successful and grows exponentially, and I hope it does, then you will be thankful to have SproutCore at its root. As I often heard Charles Jolley , the creator of SproutCore, say: "SproutCore is the technology you bet the company on."
Regex in Practice

Packt
04 Jun 2015
24 min read
Knowing Regex's syntax allows you to model text patterns, but sometimes coming up with a good reliable pattern can be more difficult, so taking a look at some actual use cases can really help you learn some common design patterns. So, in this article by Loiane Groner and Gabriel Manricks, coauthors of the book JavaScript Regular Expressions, we will develop a form, and we will explore the following topics: Validating a name Validating e-mails Validating a Twitter username Validating passwords Validating URLs Manipulating text (For more resources related to this topic, see here.) Regular expressions and form validation By far, one of the most common uses for regular expressions on the frontend is for use with user submitted forms, so this is what we will be building. The form we will be building will have all the common fields, such as name, e-mail, website, and so on, but we will also experiment with some text processing besides all the validations. In real-world applications, you usually are not going to implement the parsing and validation code manually. You can create a regular expression and rely on some JavaScript libraries, such as: jQuery validation: Refer to http://jqueryvalidation.org/ Parsely.js: Refer to http://parsleyjs.org/ Even the most popular frameworks support the usage of regular expressions with its native validation engine, such as AngularJS (refer to http://www.ng-newsletter.com/posts/validations.html). Setting up the form This demo will be for a site that allows users to create an online bio, and as such, consists of different types of fields. However, before we get into this (since we won't be building a backend to handle the form), we are going to setup some HTML and JavaScript code to catch the form submission and extract/validate the data entered in it. To keep the code neat, we will create an array with all the validation functions, and a data object where all the final data will be kept. Here is a basic outline of the HTML code for which we begin by adding fields: <!DOCTYPE HTML> <html>    <head>        <title>Personal Bio Demo</title>    </head>    <body>        <form id="main_form">            <input type="submit" value="Process" />        </form>          <script>            // js goes here        </script>    </body> </html> Next, we need to write some JavaScript to catch the form and run through the list of functions that we will be writing. If a function returns false, it means that the verification did not pass and we will stop processing the form. In the event where we get through the entire list of functions and no problems arise, we will log out of the console and data object, which contain all the fields we extracted: <script>    var fns = [];    var data = {};      var form = document.getElementById("main_form");      form.onsubmit = function(e) {      e.preventDefault();          data = {};          for (var i = 0; i < fns.length; i++) {            if (fns[i]() == false) {                return;            }        }          console.log("Verified Data: ", data);    } </script> The JavaScript starts by creating the two variables I mentioned previously, we then pull the form's object from the DOM and set the submit handler. The submit handler begins by preventing a page from actually submitting, (as we don't have any backend code in this example) and then we go through the list of functions running them one by one. 
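Each entry in fns follows the same contract: read a field's value, test it against a pattern, and either alert the user and return false, or store the cleaned value in data and return true. To make that contract concrete before we build the real fields, here is a minimal sketch using a hypothetical ZIP code input; neither the zip_field element nor its pattern is part of the form we develop below:

// A minimal sketch of the validator contract used by the form handler
// above; the "zip_field" input and its pattern are hypothetical.
function process_zip() {
    var field = document.getElementById("zip_field");
    var zip = field.value;

    // Five digits, optionally followed by a dash and four more digits.
    var zip_pattern = /^\d{5}(?:-\d{4})?$/;

    if (zip_pattern.test(zip) === false) {
        alert("ZIP code is invalid");
        return false; // stops the handler's loop
    }

    data.zip = zip;
    return true;
}

fns.push(process_zip);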
Validating fields In this section, we will explore how to validate different types of fields manually, such as name, e-mail, website URL, and so on. Matching a complete name To get our feet wet, let's begin with a simple name field. It's something we have gone through briefly in the past, so it should give you an idea of how our system will work. The following code goes inside the script tags, but only after everything we have written so far: function process_name() {    var field = document.getElementById("name_field");    var name = field.value;      var name_pattern = /^(S+) (S*) ?b(S+)$/;      if (name_pattern.test(name) === false) {        alert("Name field is invalid");         return false;    }      var res = name_pattern.exec(name);    data.first_name = res[1];    data.last_name = res[3];      if (res[2].length > 0) {        data.middle_name = res[2];    }      return true; }   fns.push(process_name); We get the name field in a similar way to how we got the form, then, we extract the value and test it against a pattern to match a full name. If the name doesn't match the pattern, we simply alert the user and return false to let the form handler know that the validations have failed. If the name field is in the correct format, we set the corresponding fields on the data object (remember, the middle name is optional here). The last line just adds this function to the array of functions, so it will be called when the form is submitted. The last thing required to get this working is to add HTML for this form field, so inside the form tags (right before the submit button), you can add this text input: Name: <input type="text" id="name_field" /><br /> Opening this page in your browser, you should be able to test it out by entering different values into the Name box. If you enter a valid name, you should get the data object printed out with the correct parameters, otherwise you should be able to see this alert message: Understanding the complete name Regex Let's go back to the regular expression used to match the name entered by a user: /^(S+) (S*) ?b(S+)$/ The following is a brief explanation of the Regex: The ^ character asserts its position at the beginning of a string The first capturing group (S+) S+ matches a non-white space character [^rntf] The + quantifier between one and unlimited times The second capturing group (S*) S* matches any non-whitespace character [^rntf] The * quantifier between zero and unlimited times " ?" matches the whitespace character The ? quantifier between zero and one time b asserts its position at a (^w|w$|Ww|wW) word boundary The third capturing group (S+) S+ matches a non-whitespace character [^rntf] The + quantifier between one and unlimited times $ asserts its position at the end of a string Matching an e-mail with Regex The next type of field we may want to add is an e-mail field. E-mails may look pretty simple at first glance, but there are a large variety of e-mails out there. You may just think of creating a word@word.word pattern, but the first section can contain many additional characters besides just letters, the domain can be a subdomain, or the suffix could have multiple parts (such as .co.uk for the UK). Our pattern will simply look for a group of characters that are not spaces or instances where the @ symbol has been used in the first section. We will then want an @ symbol, followed by another set of characters that have at least one period, followed by the suffix, which in itself could contain another suffix. 
So, this can be accomplished in the following manner: /[^s@]+@[^s@.]+.[^s@]+/ The pattern of our example is very simple and will not match every valid e-mail address. There is an official standard for an e-mail address's regular expressions called RFC 5322. For more information, please read http://www.regular-expressions.info/email.html. So, let's add the field to our page: Email: <input type="text" id="email_field" /><br /> We can then add this function to verify it: function process_email() {    var field = document.getElementById("email_field");    var email = field.value;      var email_pattern = /^[^s@]+@[^s@.]+.[^s@]+$/;      if (email_pattern.test(email) === false) {        alert("Email is invalid");        return false;    }      data.email = email;    return true; }   fns.push(process_email); There is an HTML5 field type specifically designed for e-mails, but here we are verifying manually, as this is a Regex book. For more information, please refer to http://www.w3.org/TR/html-markup/input.email.html. Understanding the e-mail Regex Let's go back to the regular expression used to match the name entered by the user: /^[^s@]+@[^s@.]+.[^s@]+$/ Following is a brief explanation of the Regex: ^ asserts a position at the beginning of the string [^s@]+ matches a single character that is not present in the following list: The + quantifier between one and unlimited times s matches any white space character [rntf ] @ matches the @ literal character [^s@.]+ matches a single character that is not present in the following list: The + quantifier between one and unlimited times s matches a [rntf] whitespace character @. is a single character in the @. list, literally . matches the . character literally [^s@]+ match a single character that is not present in the following list: The + quantifier between one and unlimited times s matches [rntf] a whitespace character @ is the @ literal character $ asserts its position at end of a string Matching a Twitter name The next field we are going to add is a field for a Twitter username. For the unfamiliar, a Twitter username is in the @username format, but when people enter this in, they sometimes include the preceding @ symbol and on other occasions, they only write the username by itself. Obviously, internally we would like everything to be stored uniformly, so we will need to extract the username, regardless of the @ symbol, and then manually prepend it with one, so regardless of whether it was there or not, the end result will look the same. So again, let's add a field for this: Twitter: <input type="text" id="twitter_field" /><br /> Now, let's write the function to handle it: function process_twitter() {    var field = document.getElementById("twitter_field");    var username = field.value;      var twitter_pattern = /^@?(w+)$/;      if (twitter_pattern.test(username) === false) {        alert("Twitter username is invalid");        return false;    }      var res = twitter_pattern.exec(username);    data.twitter = "@" + res[1];    return true; }   fns.push(process_twitter); If a user inputs the @ symbol, it will be ignored, as we will add it manually after checking the username. Understanding the twitter username Regex Let's go back to the regular expression used to match the name entered by the user: /^@?(w+)$/ This is a brief explanation of the Regex: ^ asserts its position at start of the string @? matches the @ character, literally The ? 
quantifier between zero and one time First capturing group (w+) w+ matches a [a-zA-Z0-9_] word character The + quantifier between one and unlimited times $ asserts its position at end of a string Matching passwords Another popular field, which can have some unique constraints, is a password field. Now, not every password field is interesting; you may just allow just about anything as a password, as long as the field isn't left blank. However, there are sites where you need to have at least one letter from each case, a number, and at least one other character. Considering all the ways these can be combined, creating a pattern that can validate this could be quite complex. A much better solution for this, and one that allows us to be a bit more verbose with our error messages, is to create four separate patterns and make sure the password matches each of them. For the input, it's almost identical: Password: <input type="password" id="password_field" /><br /> The process_password function is not very different from the previous example as we can see its code as follows: function process_password() {    var field = document.getElementById("password_field");    var password = field.value;      var contains_lowercase = /[a-z]/;    var contains_uppercase = /[A-Z]/;    var contains_number = /[0-9]/;    var contains_other = /[^a-zA-Z0-9]/;      if (contains_lowercase.test(password) === false) {        alert("Password must include a lowercase letter");        return false;    }      if (contains_uppercase.test(password) === false) {        alert("Password must include an uppercase letter");        return false;    }      if (contains_number.test(password) === false) {        alert("Password must include a number");        return false;    }      if (contains_other.test(password) === false) {        alert("Password must include a non-alphanumeric character");        return false;    }      data.password = password;    return true; }   fns.push(process_password); All in all, you may say that this is a pretty basic validation and something we have already covered, but I think it's a great example of working smart as opposed to working hard. Sure, we probably could have created one long pattern that would check everything together, but it would be less clear and less flexible. So, by breaking it into smaller and more manageable validations, we were able to make clear patterns, and at the same time, improve their usability with more helpful alert messages. Matching URLs Next, let's create a field for the user's website; the HTML for this field is: Website: <input type="text" id="website_field" /><br /> A URL can have many different protocols, but for this example, let's restrict it to only http or https links. Next, we have the domain name with an optional subdomain, and we need to end it with a suffix. The suffix itself can be a single word, such as .com or it can have multiple segments, such as.co.uk. All in all, our pattern looks similar to this: /^(?:https?://)?w+(?:.w+)?(?:.[A-Z]{2,3})+$/i Here, we are using multiple noncapture groups, both for when sections are optional and for when we want to repeat a segment. You may have also noticed that we are using the case insensitive flag (/i) at the end of the regular expression, as links can be written in lowercase or uppercase. 
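A pattern with this many optional groups is easy to get subtly wrong, so it is worth a quick sanity check in the browser console before wiring it into the form. The sample URLs below are illustrative only, and note that the pattern is written here with the escaping backslashes (\w, \., \/) that its shorthand classes require:

// A quick console check of the URL pattern against sample inputs.
var url_pattern = /^(?:https?:\/\/)?\w+(?:\.\w+)?(?:\.[A-Z]{2,3})+$/i;

[
    'http://example.com',         // true
    'https://blog.example.co.uk', // true - subdomain plus two-part suffix
    'example.com',                // true - the protocol is optional
    'ftp://example.com',          // false - only http/https are allowed
    'http://example'              // false - a suffix is required
].forEach(function (url) {
    console.log(url, url_pattern.test(url));
});

// One known limitation: suffixes longer than three characters, such
// as .info, will fail because of the {2,3} quantifier.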
Now, we'll implement the actual function: function process_website() {    var field = document.getElementById("website_field");    var website = field.value;      var pattern = /^(?:https?://)?w+(?:.w+)?(?:.[A-Z]{2,3})+$/i      if (pattern.test(website) === false) {       alert("Website is invalid");        return false;    }      data.website = website;    return true; }   fns.push(process_website); At this point, you should be pretty familiar with the process of adding fields to our form and adding a function to validate them. So, for our remaining examples let's shift our focus a bit from validating inputs to manipulating data. Understanding the URL Regex Let's go back to the regular expression used to match the name entered by the user: /^(?:https?://)?w+(?:.w+)?(?:.[A-Z]{2,3})+$/i This is a brief explanation of the Regex: ^ asserts its position at start of a string (?:https?://)? is anon-capturing group The ? quantifier between zero and one time http matches the http characters literally (case-insensitive) s? matches the s character literally (case-insensitive) The ? quantifier between zero and one time : matches the : character literally / matches the / character literally / matches the / character literally w+ matches a [a-zA-Z0-9_] word character The + quantifier between one and unlimited times (?:.w+)? is a non-capturing group The ? quantifier between zero and one time . matches the . character literally w+ matches a [a-zA-Z0-9_] word character The + quantifier between one and unlimited times (?:.[A-Z]{2,3})+ is a non-capturing group The + quantifier between one and unlimited times . matches the . character literally [A-Z]{2,3} matches a single character present in this list The {2,3} quantifier between2 and 3 times A-Z is a single character in the range between A and Z (case insensitive) $ asserts its position at end of a string i modifier: insensitive. Case insensitive letters, meaning it will match a-z and A-Z. Manipulating data We are going to add one more input to our form, which will be for the user's description. In the description, we will parse for things, such as e-mails, and then create both a plain text and HTML version of the user's description. The HTML for this form is pretty straightforward; we will be using a standard textbox and give it an appropriate field: Description: <br /> <textarea id="description_field"></textarea><br /> Next, let's start with the bare scaffold needed to begin processing the form data: function process_description() {    var field = document.getElementById("description_field");    var description = field.value;      data.text_description = description;      // More Processing Here      data.html_description = "<p>" + description + "</p>";      return true; }   fns.push(process_description); This code gets the text from the textbox on the page and then saves both a plain text version and an HTML version of it. At this stage, the HTML version is simply the plain text version wrapped between a pair of paragraph tags, but this is what we will be working on now. The first thing I want to do is split between paragraphs, in a text area the user may have different split-ups—lines and paragraphs. For our example, let's say the user just entered a single new line character, then we will add a <br /> tag and if there is more than one character, we will create a new paragraph using the <p> tag. 
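Before implementing this, it helps to pin down exactly what we expect to happen. The pairs below are illustrative inputs and outputs (new lines written as \n):

// Intended line-break handling, as illustrative before/after pairs:
//
//   "first line\nsecond line"         -> "first line<br />second line"
//   "paragraph one\n\nparagraph two"  -> "paragraph one</p><p>paragraph two"
//
// The outer <p>...</p> wrapper is added at the very end, once all the
// replacements are done.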
Using the String.replace method
We are going to use JavaScript's replace method on the string object. This method can accept a Regex pattern as its first parameter and a function as its second; each time it finds the pattern, it will call the function, and anything returned by the function will be inserted in place of the matched text. So, for our example, we will be looking for newline characters, and in the function, we will decide whether we want to replace the newline with a break tag or an actual new paragraph, based on how many newline characters it was able to pick up:
var line_pattern = /\n+/g;
description = description.replace(line_pattern, function(match) {
    if (match == "\n") {
        return "<br />";
    } else {
        return "</p><p>";
    }
});
The first thing you may notice is that we need to use the g flag in the pattern, so that it will look for all possible matches as opposed to only the first. Besides this, the rest is pretty straightforward; if you run this code against a sample form and look at the console output, you should see single newlines converted to <br /> tags and longer runs of newlines converted to paragraph breaks.
Matching a description field
The next thing we need to do is try and extract e-mails from the text and automatically wrap them in a link tag. We have already covered a Regexp pattern to capture e-mails, but we will need to modify it slightly, as our previous pattern expects that an e-mail is the only thing present in the text. In this situation, we are interested in all the e-mails included in a large body of text.
If you were simply looking for a word, you would be able to use the \b matcher, which matches any boundary (that can be the end of a word/the end of a sentence), so instead of the dollar sign, which we used before to denote the end of a string, we would place the boundary character to denote the end of a word. However, in our case it isn't quite good enough, as there are boundary characters that are valid e-mail characters; for example, the period character is valid. To get around this, we can use the boundary character in conjunction with a lookahead group and say we want it to end with a word boundary, but only if it is followed by a space or the end of a sentence/string. This will ensure we aren't cutting off a subdomain or a part of a domain, if there is some invalid information mid-way through the address.
Now, we aren't creating something that will try and parse e-mails no matter how they are entered; the point of creating validators and patterns is to force the user to enter something logical. That said, we assume that if the user wrote an e-mail address and then a period, he/she didn't enter an invalid address; rather, he/she entered an address and then ended a sentence (the period is not part of the address). In our code, we assume that to end an address, the user is either going to follow it with a space, possibly preceded by some punctuation, or end the string/line. We no longer have to deal with lines because we converted them to HTML, but we do have to worry that our pattern doesn't pick up an HTML tag in the process. At the end of this, our pattern will look similar to this:
/\b[^\s<>@]+@[^\s<>@.]+\.[^\s<>@]+\b(?=.?(?:\s|<|$))/g
We start off with a word boundary, then, we look for the pattern we had before. I added both the (>) greater-than and the (<) less-than characters to the group of disallowed characters, so that it will not pick up any HTML tags. At the end of the pattern, you can see that we want to end on a word boundary, but only if it is followed by a space, an HTML tag, or the end of the string.
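To see the pattern in action before building the full function, we can try it on a small sample string (the text is purely illustrative):

var email_pattern = /\b[^\s<>@]+@[^\s<>@.]+\.[^\s<>@]+\b(?=.?(?:\s|<|$))/g;

// The trailing period and the surrounding HTML tag are correctly left out.
console.log("Contact jane@example.com. Or <b>bob@test.org</b>".match(email_pattern));
// => ["jane@example.com", "bob@test.org"]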
The complete function, which does all the matching, is as follows:
function process_description() {
    var field = document.getElementById("description_field");
    var description = field.value;

    data.text_description = description;

    var line_pattern = /\n+/g;
    description = description.replace(line_pattern, function(match) {
        if (match == "\n") {
            return "<br />";
        } else {
            return "</p><p>";
        }
    });

    var email_pattern = /\b[^\s<>@]+@[^\s<>@.]+\.[^\s<>@]+\b(?=.?(?:\s|<|$))/g;
    description = description.replace(email_pattern, function(match){
        return "<a href='mailto:" + match + "'>" + match + "</a>";
    });

    data.html_description = "<p>" + description + "</p>";

    return true;
}
We can continue to add fields, but I think the point has been made. You have a pattern that matches what you want, and with the extracted data, you are able to manipulate it into any format you may need.
Understanding the description Regex
Let's go back to the regular expression used to match e-mail addresses inside the description:
/\b[^\s<>@]+@[^\s<>@.]+\.[^\s<>@]+\b(?=.?(?:\s|<|$))/g
This is a brief explanation of the Regex:
\b asserts its position at a (^\w|\w$|\W\w|\w\W) word boundary
[^\s<>@]+ matches a single character not present in this list; the + quantifier: between one and unlimited times
\s matches a [\r\n\t\f ] whitespace character
<>@ is a single character in the <>@ list (case-sensitive)
@ matches the @ character literally
[^\s<>@.]+ matches a single character not present in this list; the + quantifier: between one and unlimited times
\s matches a [\r\n\t\f ] whitespace character
<>@. is a single character in the <>@. list (case-sensitive)
\. matches the . character literally
[^\s<>@]+ matches a single character not present in this list; the + quantifier: between one and unlimited times
\s matches a [\r\n\t\f ] whitespace character
<>@ is a single character in the <>@ list (case-sensitive)
\b asserts its position at a (^\w|\w$|\W\w|\w\W) word boundary
(?=.?(?:\s|<|$)) is a positive lookahead, asserting that what follows can be matched
.? matches any character (except newline); the ? quantifier: between zero and one time
(?:\s|<|$) is a non-capturing group:
First alternative: \s matches any [\r\n\t\f ] whitespace character
Second alternative: < matches the character < literally
Third alternative: $ asserts its position at the end of the string
The g modifier: global match. Returns all matches of the regular expression, not only the first one.
Explaining a Markdown example
More examples of regular expressions can be seen with the popular Markdown syntax (refer to http://en.wikipedia.org/wiki/Markdown). This is a situation where a user is forced to write things in a custom format, although it's still a format that saves typing and is easier to understand. For example, to create a link in Markdown, you would type something similar to this:
[Click Me](http://gabrielmanricks.com)
This would then be converted to:
<a href="http://gabrielmanricks.com">Click Me</a>
Disregarding any validation on the URL itself, this can easily be achieved using this pattern:
/\[([^\]]*)\]\(([^(]*)\)/g
It looks a little complex, because the square brackets and parentheses are special characters that need to be escaped. Basically, what we are saying is that we want an open square bracket, anything up to the closing square bracket, then an open parenthesis, and again, anything up to the closing parenthesis.
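As a quick sanity check with an illustrative string, the two capture groups pick out the link text and the URL separately:

var md_pattern = /\[([^\]]*)\]\(([^(]*)\)/g;
var result = md_pattern.exec("See [Click Me](http://gabrielmanricks.com) for details.");

console.log(result[0]); // "[Click Me](http://gabrielmanricks.com)"
console.log(result[1]); // "Click Me"
console.log(result[2]); // "http://gabrielmanricks.com"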
A good website to write markdown documents is http://dillinger.io/.
Since we wrapped each section into its own capture group, we can write this function:
text.replace(/\[([^\]]*)\]\(([^(]*)\)/g, function(match, text, link){
    return "<a href='" + link + "'>" + text + "</a>";
});
We haven't been using capture groups in our manipulation examples, but if you use them, then the first parameter to the callback is the entire match (similar to the ones we have been working with) and then all the individual groups are passed as subsequent parameters, in the order in which they appear in the pattern.
Summary
In this article, we covered a couple of examples that showed us how to both validate user inputs as well as manipulate them. We also took a look at some common design patterns and saw how it's sometimes better to simplify the problem instead of using brute force in one pattern for the purpose of creating validations.
Resources for Article:
Further resources on this subject:
Getting Started with JSON [article]
Function passing [article]
YUI Test [article]

REST APIs for social network data using py2neo

Packt
14 Jul 2015
20 min read
In this article, written by Sumit Gupta, author of the book Building Web Applications with Python and Neo4j, we will discuss and develop RESTful APIs for performing CRUD and search operations over our social network data, using the Flask-RESTful extension and the py2neo Object-Graph Mapping (OGM) extension. Let's move forward to first quickly talk about the OGM and then develop full-fledged REST APIs over our social network data. (For more resources related to this topic, see here.)
ORM for graph databases
py2neo - OGM
We discussed py2neo in Chapter 4, Getting Python and Neo4j to Talk - Py2neo. In this section, we will talk about one of the py2neo extensions that provides high-level APIs for dealing with the underlying graph database as objects and their relationships.
Object-Graph Mapping (http://py2neo.org/2.0/ext/ogm.html) is one of the popular extensions of py2neo and provides the mapping of Neo4j graphs in the form of objects and relationships. It provides similar functionality and features as the Object-Relational Mapping (ORM) available for relational databases.
py2neo.ext.ogm.Store(graph) is the base class which exposes all operations with respect to graph data models. Following are the important methods of Store which we will be using in the upcoming section for mutating our social network data:
Store.delete(subj): It deletes a node from the underlying graph along with its associated relationships. subj is the entity that needs to be deleted. It raises an exception in case the provided entity is not linked to the server.
Store.load(cls, node): It loads the data from the database node into cls, which is the entity defined by the data model.
Store.load_related(subj, rel_type, cls): It loads all the nodes related to subj by the relationship defined by rel_type into cls and then returns the cls object.
Store.load_indexed(index_name, key, value, cls): It queries the legacy index, loads all the nodes that are mapped by the key-value pair, and returns the associated objects.
Store.relate(subj, rel_type, obj, properties=None): It defines the relationship between two nodes, where subj and obj are the two nodes connected by rel_type. By default, all relationships point towards the right node.
Store.save(subj, node=None): It saves and creates a given entity/node, subj, into the graph database. The second argument is of type Node; if it is given, a new node will not be created and the already existing node will be changed instead.
Store.save_indexed(index_name, key, value, subj): It saves the given entity into the graph and also creates an entry in the given index for future reference.
Refer to http://py2neo.org/2.0/ext/ogm.html#py2neo.ext.ogm.Store for the complete list of methods exposed by the Store class.
Let's move on to the next section where we will use the OGM for mutating our social network data model.
OGM supports Neo4j version 1.9, so features introduced in Neo4j 2.0 and above, such as labels, are not supported.
Social network application with Flask-RESTful and OGM
In this section, we will develop a full-fledged application for mutating our social network data and will also talk about the basics of Flask-RESTful and the OGM.
Creating the object model
Perform the following steps to create the object model and the CRUD/search functions for our social network data:
Our social network data contains two kinds of entities: Person and Movie.
So, as a first step, let's create a package, model, and within the model package let's define a module SocialDataModel.py with two classes, Person and Movie:

class Person(object):
    def __init__(self, name=None, surname=None, age=None, country=None):
        self.name = name
        self.surname = surname
        self.age = age
        self.country = country

class Movie(object):
    def __init__(self, movieName=None):
        self.movieName = movieName

Next, let's define another package, operations, and two Python modules, ExecuteCRUDOperations.py and ExecuteSearchOperations.py. The ExecuteCRUDOperations module will contain the following three classes:
DeleteNodesRelationships: It will contain one method each for deleting People nodes and Movie nodes, and in the __init__ method, we will establish the connection to the graph database.

#Imports needed by this module
import py2neo
from py2neo import Graph
from py2neo.ext.ogm import Store
from model.SocialDataModel import Person, Movie

class DeleteNodesRelationships(object):
    '''
    Define the Delete Operation on Nodes
    '''
    def __init__(self, host, port, username, password):
        #Authenticate and connect to the Neo4j graph database
        py2neo.authenticate(host + ':' + port, username, password)
        graph = Graph('http://' + host + ':' + port + '/db/data/')
        store = Store(graph)
        #Store the references to Graph and Store
        self.graph = graph
        self.store = store

    def deletePersonNode(self, node):
        #Load the node from the Neo4j legacy index
        cls = self.store.load_indexed('personIndex', 'name', node.name, Person)
        #Invoke the delete method of the Store class
        self.store.delete(cls[0])

    def deleteMovieNode(self, node):
        #Load the node from the Neo4j legacy index
        cls = self.store.load_indexed('movieIndex', 'name', node.movieName, Movie)
        #Invoke the delete method of the Store class
        self.store.delete(cls[0])

Deleting nodes will also delete the associated relationships, so there is no need to have functions for deleting relationships. Nodes without any relationships do not make much sense for many business use cases, especially in a social network, unless there is a specific need or an exceptional scenario.
UpdateNodesRelationships: It will contain one method each for updating People nodes and Movie nodes and, in the __init__ method, we will establish the connection to the graph database.
class UpdateNodesRelationships(object):
    '''
    Define the Update Operation on Nodes
    '''
    def __init__(self, host, port, username, password):
        #Write code for connecting to server

    def updatePersonNode(self, oldNode, newNode):
        #Get the old node from the index
        cls = self.store.load_indexed('personIndex', 'name', oldNode.name, Person)
        #Copy the new values to the old node
        cls[0].name = newNode.name
        cls[0].surname = newNode.surname
        cls[0].age = newNode.age
        cls[0].country = newNode.country
        #Delete the old node from the index
        self.store.delete(cls[0])
        #Persist the updated values again in the index
        self.store.save_unique('personIndex', 'name', newNode.name, cls[0])

    def updateMovieNode(self, oldNode, newNode):
        #Get the old node from the index
        cls = self.store.load_indexed('movieIndex', 'name', oldNode.movieName, Movie)
        #Copy the new values to the old node
        cls[0].movieName = newNode.movieName
        #Delete the old node from the index
        self.store.delete(cls[0])
        #Persist the updated values again in the index
        self.store.save_unique('movieIndex', 'name', newNode.movieName, cls[0])

CreateNodesRelationships: This class will contain methods for creating People and Movie nodes and relationships and will then persist them to the database. As with the other classes/modules, it will establish the connection to the graph database in the __init__ method:

class CreateNodesRelationships(object):
    '''
    Define the Create Operation on Nodes
    '''
    def __init__(self, host, port, username, password):
        #Write code for connecting to server

    '''
    Create a person and store it in the Person dictionary.
    Node is not saved unless the save() method is invoked. Helpful in bulk creation.
    '''
    def createPerson(self, name, surName=None, age=None, country=None):
        person = Person(name, surName, age, country)
        return person

    '''
    Create a movie and store it in the Movie dictionary.
    Node is not saved unless the save() method is invoked. Helpful in bulk creation.
    '''
    def createMovie(self, movieName):
        movie = Movie(movieName)
        return movie

    '''
    Create a FRIEND relationship between 2 nodes and invoke a local method of the Store class.
    Relationship is not saved unless the node is saved or the save() method is invoked.
    '''
    def createFriendRelationship(self, startPerson, endPerson):
        self.store.relate(startPerson, 'FRIEND', endPerson)

    '''
    Create a TEACHES relationship between 2 nodes and invoke a local method of the Store class.
    Relationship is not saved unless the node is saved or the save() method is invoked.
    '''
    def createTeachesRelationship(self, startPerson, endPerson):
        self.store.relate(startPerson, 'TEACHES', endPerson)

    '''
    Create a HAS_RATED relationship between 2 nodes and invoke a local method of the Store class.
    Relationship is not saved unless the node is saved or the save() method is invoked.
    '''
    def createHasRatedRelationship(self, startPerson, movie, ratings):
        self.store.relate(startPerson, 'HAS_RATED', movie, {'ratings': ratings})

    '''
    Based on the type of entity, save it into the server/database.
    '''
    def save(self, entity, node):
        if entity == 'person':
            self.store.save_unique('personIndex', 'name', node.name, node)
        else:
            self.store.save_unique('movieIndex', 'name', node.movieName, node)

Next, we will define the other Python module in the operations package, ExecuteSearchOperations.py.
This module will define two classes, each containing one method for searching the Person and Movie nodes, and, of course, the __init__ method for establishing a connection with the server:

class SearchPerson(object):
    '''
    Class for searching and retrieving the People node from the server
    '''
    def __init__(self, host, port, username, password):
        #Write code for connecting to server

    def searchPerson(self, personName):
        cls = self.store.load_indexed('personIndex', 'name', personName, Person)
        return cls

class SearchMovie(object):
    '''
    Class for searching and retrieving the Movie node from the server
    '''
    def __init__(self, host, port, username, password):
        #Write code for connecting to server

    def searchMovie(self, movieName):
        cls = self.store.load_indexed('movieIndex', 'name', movieName, Movie)
        return cls

We are done with our data model and the utility classes that will perform the CRUD and search operations over our social network data using the py2neo OGM. Now let's move on to the next section and develop some REST services over our data model.
Creating REST APIs over data models
In this section, we will create and expose REST services for mutating and searching our social network data using the data model created in the previous section.
In our social network data model, operations are performed on either the Person or Movie nodes, and there is one more operation which defines the relationship between Person and Person or Person and Movie. So let's create another package, service, and define another module, MutateSocialNetworkDataService.py. In this module, apart from the regular imports from flask and flask_restful, we will also import classes from our custom packages created in the previous section and create objects of the model classes for performing CRUD and search operations.
Next, we will define the different classes or services which will define the structure of our REST services. The PersonService class will define the GET, POST, PUT, and DELETE operations for searching, creating, updating, and deleting the Person nodes.
class PersonService(Resource):
    '''
    Defines operations with respect to the entity - Person
    '''
    #Example - GET http://localhost:5000/person/Bradley
    def get(self, name):
        node = searchPerson.searchPerson(name)
        #Convert into JSON and return it back
        return jsonify(name=node[0].name, surName=node[0].surname, age=node[0].age, country=node[0].country)

    #POST http://localhost:5000/person
    #{"name": "Bradley", "surname": "Green", "age": "24", "country": "US"}
    def post(self):
        jsonData = request.get_json(cache=False)
        attr = {}
        for key in jsonData:
            attr[key] = jsonData[key]
            print(key, ' = ', jsonData[key])
        person = createOperation.createPerson(attr['name'], attr['surname'], attr['age'], attr['country'])
        createOperation.save('person', person)
        return jsonify(result='success')

    #PUT http://localhost:5000/person/Bradley
    #{"name": "Bradley1", "surname": "Green", "age": "24", "country": "US"}
    def put(self, name):
        oldNode = searchPerson.searchPerson(name)
        jsonData = request.get_json(cache=False)
        attr = {}
        for key in jsonData:
            attr[key] = jsonData[key]
            print(key, ' = ', jsonData[key])
        newNode = Person(attr['name'], attr['surname'], attr['age'], attr['country'])
        updateOperation.updatePersonNode(oldNode[0], newNode)
        return jsonify(result='success')

    #DELETE http://localhost:5000/person/Bradley1
    def delete(self, name):
        node = searchPerson.searchPerson(name)
        deleteOperation.deletePersonNode(node[0])
        return jsonify(result='success')

The MovieService class will define the GET, POST, and DELETE operations for searching, creating, and deleting the Movie nodes. This service will not support the modification of Movie nodes because, once a Movie node is defined, it does not change in our data model. The Movie service is similar to our Person service and leverages our data model for performing the various operations. The RelationshipService class only defines POST, which will create a relationship between a person and another given entity, which can be either another Person or a Movie. Following is the structure of the POST method:
    '''
    Assuming that the given nodes are already created, this operation
    will associate a Person node either with another Person or with a Movie node.

    Request for defining a relationship between 2 persons:
        POST http://localhost:5000/relationship/person/Bradley
        {"entity_type":"person", "person.name":"Matthew", "relationship": "FRIEND"}
    Request for defining a relationship between a person and a movie:
        POST http://localhost:5000/relationship/person/Bradley
        {"entity_type":"movie", "movie.movieName":"Avengers", "relationship": "HAS_RATED",
         "relationship.ratings":"4"}
    '''
    def post(self, entity, name):
        jsonData = request.get_json(cache=False)
        attr = {}
        for key in jsonData:
            attr[key] = jsonData[key]
            print(key, ' = ', jsonData[key])

        if entity == 'person':
            startNode = searchPerson.searchPerson(name)
            if attr['entity_type'] == 'movie':
                endNode = searchMovie.searchMovie(attr['movie.movieName'])
                createOperation.createHasRatedRelationship(startNode[0], endNode[0], attr['relationship.ratings'])
                createOperation.save('person', startNode[0])
            elif attr['entity_type'] == 'person' and attr['relationship'] == 'FRIEND':
                endNode = searchPerson.searchPerson(attr['person.name'])
                createOperation.createFriendRelationship(startNode[0], endNode[0])
                createOperation.save('person', startNode[0])
            elif attr['entity_type'] == 'person' and attr['relationship'] == 'TEACHES':
                endNode = searchPerson.searchPerson(attr['person.name'])
                createOperation.createTeachesRelationship(startNode[0], endNode[0])
                createOperation.save('person', startNode[0])
        else:
            raise HTTPException("Value is not Valid")

        return jsonify(result='success')

At the end, we will define our __main__ method, which will bind our services to their specific URLs and bring up our application:

if __name__ == '__main__':
    api.add_resource(PersonService, '/person', '/person/<string:name>')
    api.add_resource(MovieService, '/movie', '/movie/<string:movieName>')
    api.add_resource(RelationshipService, '/relationship', '/relationship/<string:entity>/<string:name>')
    webapp.run(debug=True)

And we are done! Execute MutateSocialNetworkDataService.py as a regular Python module and your REST-based services are up and running. Users of this app can use any REST-based client, such as SoapUI, and can execute the various REST services for performing CRUD and search operations. Follow the comments provided in the code samples for the format of the request/response. In this section, we created and exposed REST-based services using Flask, Flask-RESTful, and the OGM, and performed CRUD and search operations over our social network data model.
Using Neomodel in a Django app
In this section, we will talk about the integration of Django and Neomodel. Django is a Python-based, powerful, robust, and scalable web application development framework. It is developed upon the Model-View-Controller (MVC) design pattern, with which developers can design and develop a scalable enterprise-grade application in no time. We will not go into the details of Django as a web framework but will assume that the readers have a basic understanding of Django and some hands-on experience in developing web-based and database-driven applications. Visit https://docs.djangoproject.com/en/1.7/ if you do not have any prior knowledge of Django.
Django provides various signals or triggers that are activated and used to invoke or execute some user-defined functions on a particular event. The framework invokes various signals or triggers if there are any modifications requested to the underlying application data model, such as pre_save(), post_save(), pre_delete(), post_delete(), and a few more. All the functions starting with pre_ are executed before the requested modifications are applied to the data model, and functions starting with post_ are triggered after the modifications are applied to the data model. And that's where we will hook in our Neomodel framework: we will capture these events and invoke our custom methods to make similar changes to our Neo4j database.
We can reuse our social data model and the functions defined in ExploreSocialDataModel.CreateDataModel. We only need to register our event and things will be automatically handled by the Django framework. For example, you can register for the event in your Django model (models.py) by defining the following statement:

signals.pre_save.connect(preSave, sender=Male)

In the previous statement, preSave is the custom or user-defined method, declared in models.py. It will be invoked before any changes are committed to the entity Male, which is controlled by the Django framework and is different from our Neomodel entity. Next, in preSave, you need to define the invocations to the Neomodel entities and save them. Refer to the documentation at https://docs.djangoproject.com/en/1.7/topics/signals/ for more information on implementing signals in Django.
Signals in Neomodel
Neomodel also provides signals that are similar to Django signals and have the same behavior. Neomodel provides the following signals: pre_save, post_save, pre_delete, post_delete, and post_create.
Neomodel exposes the following two different approaches for implementing signals:
Define the pre..() and post..() methods in your model itself and Neomodel will automatically invoke them. For example, in our social data model, we can define def pre_save(self) in our Model.Male class to receive all events before entities are persisted in the database or server.
Another approach is using Django-style signals, where we can define the connect() method in our Neomodel Model.py and it will produce the same results as in Django-based models:

signals.pre_save.connect(preSave, sender=Male)

Refer to http://neomodel.readthedocs.org/en/latest/hooks.html for more information on signals in Neomodel.
In this section, we discussed the integration of Django with Neomodel using Django signals. We also talked about the signals provided by Neomodel and their implementation approach.
Summary
Here we learned about creating web-based applications using Flask. We also used Flask extensions such as Flask-RESTful for creating/exposing REST APIs for data manipulation. Finally, we created a full-blown REST-based application over our social network data using Flask, Flask-RESTful, and the py2neo OGM. We also learned about Neomodel and its various features and the APIs it provides to work with Neo4j, and we discussed the integration of Neomodel with the Django framework.
Resources for Article:
Further resources on this subject:
Firebase [article]
Developing Location-based Services with Neo4j [article]
Learning BeagleBone Python Programming [article]

A Professional Environment for React Native, Part 1

Pierre Monge
09 Jan 2017
5 min read
React Native, a new framework, allows you to build mobile apps using JavaScript. It uses the same design as React.js, letting you compose a rich mobile UI from declarative components. Although many developers are talking about this technology, React Native is not yet approved by most professionals, for several reasons:
React Native isn't fully stable yet. At the time of writing this, we are at version 0.40.
It can be scary to use a web technology in a mobile application.
It's hard to find good React Native developers, because knowing the React.js stack is not enough to maintain a mobile React Native app from A to Z!
To confront all these prejudices, this series will act as a guide, detailing how we see things in my development team. We will cover the entire React Native environment as well as discuss how to maintain a React Native application. This series may be of interest to companies who want to implement a React Native solution and also of interest to anyone who is looking for the perfect tools to maintain a mobile application in React Native. Let's start here in part 1 by exploring the React Native environment.
The environment
The React Native environment is pretty consistent. To manage all of the parts of such an application, you will need a native stack, a JavaScript stack, and specific components from React Native. Let's examine all the aspects of the React Native environment:
The native part consists of two important pieces of software: Android Studio (Android) and Xcode (iOS). Both come with their own emulators, so there is no need for a physical device! The downside of Android Studio, however, is that you need to download the SDK, and you will have to find the right versions and download them all. In addition, these two programs take up a lot of room on your hard disk!
The JavaScript part naturally consists of Node.js, but we must add Watchman to that to detect file changes in real time.
The React Native CLI will automate the linking of all this software. You only have to run react-native init helloworld to create a project and react-native run-ios --scheme 'Dev' to launch the project on an iOS simulator in debug mode. The supplied react-native commands will handle almost everything!
You have, no doubt, come to our first conclusion: React Native has a lot of prerequisites, and although each dependency makes sense, you will have to master them all, which can take some time. And also a lot of space on your hard drive! Try this as your starting point if you want more information on getting started with React Native.
Atom, linter, and flow
A developer never starts coding without his text editor and his little tricks, just as a woodcutter never goes out into the forest without his ax! More than 80% of the people around me use Atom as a text editor. And they are not wrong! React Native is 100% open source, and Atom is also open source. And it is full of plug-ins and themes of all kinds. I personally use a lot of plug-ins, such as color-picker, file-icons, indent-guide-improved, minimap, etc., but there are some plug-ins that should be essential for every JavaScript coder, especially for your React Native application.
linter-eslint
To work alone or in a group, you must have a common syntax for all your files. To do this, we use linter-eslint with the fbjs configurations. This plugin provides the following:
Good indentation
Good syntax on how to define variables, objects, classes, etc.
Indications of non-existent and unused variables and functions
And many other great benefits.
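As a quick illustration of the kind of problems it reports (the snippet and rule names here are illustrative; the exact messages depend on the preset you configure):

// linter-eslint highlights issues like these directly in the editor:
var unused = 42;           // flagged by a rule such as no-unused-vars
function greet(name) {
  return 'Hello, ' + name  // a missing semicolon is flagged when the semi rule is enabled
}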
Flow
One question you may be asking is: what is the biggest problem with using JavaScript? One issue has always been that it is a dynamically typed language. In fact, there are types, such as String, Number, Boolean, Function, etc., but that's just not enough; there is no static typing. To deal with this, we use Flow, which allows you to perform type checks before runtime. This is, of course, useful for predicting bugs! There is even a plug-in version for Atom: linter-flow.
Conclusion
At this point, you should have everything you need to create your first React Native mobile applications. Here are some great examples of apps that are out there already. Check out part 2 in this series, where I cover the tools that can help you maintain your React Native apps.
About the author
Pierre Monge (liroo.pierre@gmail.com) is a 21-year-old student. He is a developer in C, JavaScript, and all things related to web development, and he has recently been creating mobile applications. He is currently working as an intern at a company named Azendoo, where he is developing a 100% React Native application.

Common API in Liferay Portal Systems Development

Packt
01 Feb 2012
11 min read
(For more resources on Liferay, see here.)
User management
The portal defines user management with a set of entities, such as User, Contact, Address, EmailAddress, Phone, Website, and Ticket, at /portal/service.xml. In the following section, we're going to address the User entity and its associations and relationships.
Models and services
The following figure depicts these entities and their relationships. The entity User has a one-to-one association with the entity Contact, which may have many contacts as children. And the entity Contact has a one-to-one association with the entity Account, which may have many accounts as children. The entity Contact can have a many-to-many association with the entities Address, EmailAddress, Phone, Website, and Ticket. Logically, the entities Address, EmailAddress, Phone, Website, and Ticket may have a many-to-many association with the other entities, such as Group, Organization, and UserGroup, as shown in the following image:
Services
The following table shows the user-related service interfaces, extensions, utilities, wrappers, and their main methods:

Interface | Extension | Utility/Wrapper | Main methods
UserService, UserLocalService | PersistedModelLocalService | User(Local)ServiceUtil, User(Local)ServiceWrapper | add*, authenticate*, check*, decrypt*, delete*, get*, has*, search, unset*, update*, and so on
ContactService, ContactLocalService | PersistedModelLocalService | Contact(Local)ServiceUtil, Contact(Local)ServiceWrapper | add*, create*, delete*, get*, update*, dynamicQuery, and so on
AccountService, AccountLocalService | - | Account(Local)ServiceUtil, Account(Local)ServiceWrapper | add*, create*, delete*, get*, update*, dynamicQuery, and so on
AddressService, AddressLocalService | - | Address(Local)ServiceUtil, Address(Local)ServiceWrapper | add*, create*, delete*, get*, update*, dynamicQuery, and so on
EmailAddressService, EmailAddressLocalService | PersistedModelLocalService | EmailAddress(Local)ServiceUtil, EmailAddress(Local)ServiceWrapper | add*, create*, delete*, get*, update*, dynamicQuery, and so on
PhoneService, PhoneLocalService | PersistedModelLocalService | Phone(Local)ServiceUtil, Phone(Local)ServiceWrapper | add*, create*, delete*, get*, update*, dynamicQuery, and so on
WebsiteService, WebsiteLocalService | PersistedModelLocalService | Website(Local)ServiceUtil, Website(Local)ServiceWrapper | add*, create*, delete*, get*, update*, dynamicQuery, and so on
TicketLocalService | PersistedModelLocalService | TicketLocalServiceUtil, TicketLocalServiceWrapper | add*, create*, delete*, get*, update*, dynamicQuery, and so on

Relationships
The portal also defines many-to-many relationships between User and Group, User and Organization, User and Team, and User and UserGroup, as shown in the following code:

<column name="groups" type="Collection" entity="Group" mapping-table="Users_Groups" />
<column name="userGroups" type="Collection" entity="UserGroup" mapping-table="Users_UserGroups" />

You will be able to find similar definitions at /portal/service.xml.
Sample portal service portlet
The portal provides a sample portal service plugin called sample-portal-service-portlet (refer to the plugin details at /portlets/sample-portal-service-portlet). The following is the code snippet:

List organizations = OrganizationServiceUtil.getUserOrganizations(
    themeDisplay.getUserId());

// add your logic

The previous code shows how to consume Liferay services through regular Java calls.
These services include com.liferay.portal.service.OrganizationServiceUtil, and the model involved is com.liferay.portal.model.Organization. Similarly, you can use other services, for example, com.liferay.portal.service.UserServiceUtil and com.liferay.portal.service.GroupServiceUtil, and models, for example, com.liferay.portal.model.User and com.liferay.portal.model.Group. Of course, you can find other services and models as well; you will find the services located in the com.liferay.portal.service package in the /portal-service/src folder. In the same way, you will find the models located in the com.liferay.portal.model package in the /portal-service/src folder.
What's the difference between *LocalServiceUtil and *ServiceUtil? The sign * represents models, for example, Organization, User, Group, and so on. Generally speaking, *Service is the remote service interface that defines the service methods available to remote code. *ServiceUtil has an additional permission check, since this method might be called as a remote service. *ServiceUtil is a facade class that combines the service locator with the actual call to the service *Service. While *LocalService is the internal service interface, *LocalServiceUtil is a facade class that combines the service locator with the actual call to the service *LocalService. *Service has a PermissionChecker in each method, and *LocalService usually doesn't have one.
Authorization
Authorization is a process of finding out if the user, once identified, is permitted to have access to a resource. The portal implements authorization by assigning permissions via roles and checking permissions, and this is called Role-Based Access Control (RBAC).
The following figure depicts an overview of authorization. A user can be a member of Group, UserGroup, Organization, or Team. And a user or a group of users, such as Group, UserGroup, or Organization, can be a member of Role. And the entity Role can have many ResourcePermission entities associated with it, while the entity ResourcePermission may contain many ResourceAction entities, as shown in the following diagram:
The following table shows the entities Role, ResourcePermission, and ResourceAction:

Interface | Extension | Wrapper/SOAP | Main methods
Role | RoleModel, PersistedModel | RoleWrapper, RoleSoap | clone, compareTo, get*, set*, toCacheModel, toEscapedModel, and so on
ResourceAction | ResourceActionModel, PersistedModel | ResourceActionWrapper, ResourceActionSoap | clone, compareTo, get*, set*, toCacheModel, toEscapedModel, and so on
ResourcePermission | ResourcePermissionModel, PersistedModel | ResourcePermissionWrapper, ResourcePermissionSoap | clone, compareTo, get*, set*, toCacheModel, toEscapedModel, and so on

In addition, the portal specifies role constants in the class RoleConstants.
The entity ResourceAction gets specified with the columns name, actionId, and bitwiseValue as follows:

<column name="name" type="String" />
<column name="actionId" type="String" />
<column name="bitwiseValue" type="long" />

The entity ResourcePermission gets specified with the columns name, scope, primKey, roleId, and actionIds as follows:

<column name="name" type="String" />
<column name="scope" type="int" />
<column name="primKey" type="String" />
<column name="roleId" type="long" />
<column name="ownerId" type="long" />
<column name="actionIds" type="long" />

In addition, the portal specifies resource permission constants in the class ResourcePermissionConstants.
Password policy
The portal implements enterprise password policies and user account lockout using the entities PasswordPolicy and PasswordPolicyRel, as shown in the following table:

Interface | Extension | Wrapper/SOAP | Description
PasswordPolicy | PasswordPolicyModel, PersistedModel | PasswordPolicyWrapper, PasswordPolicySoap | Columns: name, description, minAge, minAlphanumeric, minLength, minLowerCase, minNumbers, minSymbols, minUpperCase, lockout, maxFailure, lockoutDuration, and so on
PasswordPolicyRel | PasswordPolicyRelModel, PersistedModel | PasswordPolicyRelWrapper, PasswordPolicyRelSoap | Columns: passwordPolicyId, classNameId, and classPK. Ability to associate the entity PasswordPolicy with other entities

Passwords toolkit
The portal has defined the following properties related to the passwords toolkit in portal.properties:

passwords.toolkit=com.liferay.portal.security.pwd.PasswordPolicyToolkit
passwords.passwordpolicytoolkit.generator=dynamic
passwords.passwordpolicytoolkit.static=iheartliferay

The property passwords.toolkit defines a class name that extends com.liferay.portal.security.pwd.BasicToolkit, which is called to generate and validate passwords. If you choose to use com.liferay.portal.security.pwd.PasswordPolicyToolkit as your password toolkit, you can choose either static or dynamic password generation. Static is set through the property passwords.passwordpolicytoolkit.static, and dynamic uses the class com.liferay.util.PwdGenerator to generate the password. If you are using LDAP password syntax checking, you will also have to use the static generator, so that you can guarantee that passwords obey its rules.
The password digest and encryption utilities are addressed in detail in the following table:

Class | Interface | Utility | Property | Main methods
DigesterImpl | Digester | DigesterUtil | passwords.digest.encoding | digest, digestBase64, digestHex, digestRaw, and so on
Base64 | None | None | None | decode, encode, fromURLSafe, objectToString, stringToObject, toURLSafe, and so on
PwdEncryptor | None | None | passwords.encryption.algorithm | encrypt (default types: MD2, MD5, NONE, SHA, SHA-256, SHA-384, SSHA, UFC-CRYPT, and so on)

Authentication
Authentication is the process of determining whether someone or something is, in fact, who or what it is declared to be. The portal defines the class PwdAuthenticator for authentication, as shown in the following code:

public static boolean authenticate(
    String login, String clearTextPassword,
    String currentEncryptedPassword) {

    String encryptedPassword = PwdEncryptor.encrypt(
        clearTextPassword, currentEncryptedPassword);

    if (currentEncryptedPassword.equals(encryptedPassword)) {
        return true;
    }
    // ... (remainder of the method omitted in this excerpt)
}

As you can see, it first encrypts the clear text password into the variable encryptedPassword. It then tests whether the variable currentEncryptedPassword has the same value as the variable encryptedPassword.
The classes UserLocalServiceImpl (the method authenticate) and EditUserAction (the method updateUser) call the class PwdAuthenticator for authentication.
A Message Authentication Code (MAC) is a short piece of information used to authenticate a message. The portal supports MAC through the following properties:

auth.mac.allow=false
auth.mac.algorithm=MD5
auth.mac.shared.key=

To use authentication with MAC, simply post to a URL that passes the MAC in the password field. Make sure that the MAC gets URL encoded, since it might contain characters not allowed in a URL. Authentication with MAC also requires that you set the following property in system-ext.properties:

com.liferay.util.servlet.SessionParameters=false

As shown in the previous code, it encrypts session parameters, so that browsers can't remember them.
Authentication pipeline
The portal provides the authentication pipeline framework for authentication, as shown in the following code:

auth.pipeline.pre=com.liferay.portal.security.auth.LDAPAuth
auth.pipeline.post=
auth.pipeline.enable.liferay.check=true

As you can see, the property auth.pipeline.enable.liferay.check is set to true to enable password checking by the internal portal authentication. If it is set to false, password checking is essentially delegated to the authenticators configured in the auth.pipeline.pre and auth.pipeline.post settings. The interface com.liferay.portal.security.auth.Authenticator defines the constant values that should be used as return codes from the classes implementing the interface. If authentication is successful, it returns SUCCESS; if the user exists but the password doesn't match, it returns FAILURE. If the user doesn't exist in the system, it returns DNE. These constants get defined in the interface Authenticator. The available authenticator is com.liferay.portal.security.auth.LDAPAuth, summarized in a table in the next section. The password toolkit classes mentioned earlier are detailed in the following table:

Class | Extension | Involved properties | Main methods
PasswordPolicyToolkit | BasicToolkit | passwords.passwordpolicytoolkit.charset.lowercase, passwords.passwordpolicytoolkit.charset.numbers, passwords.passwordpolicytoolkit.charset.symbols, passwords.passwordpolicytoolkit.charset.uppercase, passwords.passwordpolicytoolkit.generator, passwords.passwordpolicytoolkit.static | generate, validate
RegExpToolkit | BasicToolkit | passwords.regexptoolkit.pattern, passwords.regexptoolkit.charset, passwords.regexptoolkit.length | generate, validate
PwdToolkitUtil | None | passwords.toolkit | generate, validate
PwdGenerator | None | None | getPassword, getPinNumber

Authentication token
The portal provides the interface com.liferay.portal.security.auth.AuthToken for the authentication token as follows:

auth.token.check.enabled=true
auth.token.impl=com.liferay.portal.security.auth.SessionAuthToken

As shown in the previous code, the property auth.token.check.enabled is set to true to enable authentication token security checks. The checks can be disabled for specific actions via the property auth.token.ignore.actions, or for specific portlets via the init parameter check-auth-token in portlet.xml. The property auth.token.impl is set to the authentication token class. This class must implement the interface AuthToken. The class SessionAuthToken is used to prevent CSRF (Cross-Site Request Forgery) attacks.
The LDAPAuth authenticator is summarized in the following table:

Class | Interface | Involved properties | Main methods
LDAPAuth | Authenticator | ldap.auth.method, ldap.referral, ldap.auth.password.encryption.algorithm, ldap.base.dn, ldap.error.user.lockout, ldap.error.password.expired, ldap.import.user.password.enabled, ldap.base.provider.url, auth.pipeline.enable.liferay.check, ldap.auth.required | authenticateByEmailAddress, authenticateByScreenName, authenticateByUserId

JAAS
Java Authentication and Authorization Service (JAAS) is a Java security framework for user-centric security that augments Java's code-based security. The portal has specified a set of properties for JAAS as follows:

portal.jaas.enable=false
portal.jaas.auth.type=userId
portal.impersonation.enable=true

The property portal.jaas.enable is set to false to disable JAAS security checks. Disabling JAAS speeds up login. Note that JAAS must be disabled if administrators are able to impersonate other users. JAAS can authenticate users based on their e-mail address, screen name, user ID, or login, as determined by the property company.security.auth.type. By default, the class com.liferay.portal.security.jaas.PortalLoginModule loads the correct JAAS login module, based on what application server or servlet container the portal is deployed on. You can set a JAAS implementation class to override this behavior. The AuthToken interface and its implementations are shown in the following table:

Class | Interface | Involved properties | Main methods
AuthTokenImpl | AuthToken | auth.token.impl | check, getToken
AuthTokenWrapper | AuthToken | None | check, getToken
AuthTokenUtil | None | None | check, getToken
SessionAuthToken | AuthToken | auth.token.shared.secret | check, getToken

As you may have noticed, the classes com.liferay.portal.kernel.security.jaas.PortalLoginModule and com.liferay.portal.security.jaas.PortalLoginModule implement the interface LoginModule, configured by the property portal.jaas.impl. As shown in the following table, the portal provides different login module implementations for different application servers or servlet containers:

Class | Interface/Extension | Package | Main methods
ProtectedPrincipal | Principal | com.liferay.portal.kernel.servlet | getName, equals, hashCode, toString
PortalPrincipal | ProtectedPrincipal | com.liferay.portal.kernel.security.jaas | PortalPrincipal
PortalRole | PortalPrincipal | com.liferay.portal.kernel.security.jaas | PortalRole
PortalGroup | PortalPrincipal, java.security.acl.Group | com.liferay.portal.kernel.security.jaas | addMember, isMember, members, removeMember
PortalLoginModule | javax.security.auth.spi.LoginModule | com.liferay.portal.kernel.security.jaas, com.liferay.portal.security.jaas | abort, commit, initialize, login, logout

Deploying a Vert.x application

Packt
24 Sep 2013
12 min read
(For more resources related to this topic, see here.)
Setting up an Ubuntu box
We are going to set up an Ubuntu virtual machine using the Vagrant tool. This virtual machine will simulate a real production server. If you already have an Ubuntu box (or a similar Linux box) handy, you can skip this step and move on to setting up a user.
Vagrant (http://www.vagrantup.com/) is a tool for managing virtual machines. Many people use it to manage their development environments so that they can easily share them and test their software on different operating systems. For us, it is a perfect tool to practice Vert.x deployment into a Linux environment.
Install Vagrant by heading to the Downloads area at http://vagrantup.com and selecting the latest version. Select a package for your operating system and run the installer. Once it is done you should have a vagrant command available on the command line:

vagrant -v

Navigate to the root directory of our project and run the following command:

vagrant init precise64 http://files.vagrantup.com/precise64.box

This will generate a file called Vagrantfile in the project folder. It contains the configuration for the virtual machine we're about to create. We initialized a precise64 box, which is shorthand for the 64-bit version of Ubuntu 12.04 Precise Pangolin. Open the file in an editor and find the following line:

# config.vm.network :private_network, ip: "192.168.33.10"

Uncomment the line by removing the # character. This will enable private networking for the box. We will be able to conveniently access it with the IP address 192.168.33.10 locally.
Run the following command to download, install, and launch the virtual machine:

vagrant up

This command launches the virtual machine configured in the Vagrantfile. On first launch it will also download it. Because of this, running the command may take a while. Once the command is finished you can check the status of the virtual machine by running vagrant status, suspend it by running vagrant suspend, bring it back up by running vagrant up, and remove it by running vagrant destroy.
Setting up a user
For any application deployment, it's a good idea to have an application-specific user configured. The sole purpose of the user is to run the application. This gives you a nice way to control permissions and make sure the application can only do what it's supposed to.
Open a shell connection to our Linux box. If you followed the steps to set up a Vagrant box, you can do this by running the following command in the project root directory:

vagrant ssh

Add a new user called mindmaps using the following command:

sudo useradd -d /home/mindmaps -m mindmaps

Also specify a password for the new user using the following command (and make a note of the password you choose; you'll need it):

sudo passwd mindmaps

Install Java on the server
Install Java for the Linux box, as described in Getting Started with Vert.x. As a quick reminder, Java can be installed on Ubuntu with the following command:

sudo apt-get install openjdk-7-jdk

On fresh Ubuntu installations, it is a good idea to always make sure the package manager index is up-to-date before installing any packages. This is also the case for our Ubuntu virtual machine. Run the following command if the Java installation fails:

sudo apt-get update

Installing MongoDB on the server
We also need MongoDB to be installed on the server, for persisting the mind maps.
Setting up privileged ports
Our application is configured to serve requests on port 8080.
When we deploy to the Internet, we don't want users to have to know anything about ports, which means we should deploy our app to the default HTTP port 80 instead. On Unix systems (such as Linux), port 80 can only be bound to by the root user. Because it is not a good idea to run applications as the root user, we should set up a special privilege for the mindmaps user to bind to port 80. We can do this with the authbind utility.
authbind is a Linux utility that can be used to bind processes to privileged ports without requiring root access. Install authbind using the package manager with the following command:

sudo apt-get install authbind

Set up a privilege for the mindmaps user to bind to port 80, by creating a file in the authbind configuration directory with the following commands:

cd /etc/authbind/byport/
sudo touch 80
sudo chown mindmaps:mindmaps 80
sudo chmod 700 80

When authbind is run, it checks this directory for a file corresponding to the port being used and whether the current user has access to it. Here we have created such a file.
Many people prefer to have a web server such as Nginx or Apache as a frontend and not expose backend services to the Internet directly. This can also be done with Vert.x. In that case, you could just deploy Vert.x to port 8080 and skip the authbind configuration. Then, you would need to configure reverse proxying for the Vert.x application in your web server. Note that we are using the event bus bridge in our application, and that uses HTTP WebSockets as the transport mechanism. This means the front-end web server must also be able to proxy WebSocket traffic. Nginx is able to do this starting from version 1.3 and Apache from version 2.4.5.
Installing Vert.x on the server
Switch to the mindmaps user in the shell on the virtual machine using the following command:

sudo su - mindmaps

Install Vert.x for this user, as described in Getting Started with Vert.x. As a quick reminder, it can be done by downloading and unpacking the latest distribution from http://vertx.io.
Making the application port configurable
Let's move back to our application code for a moment. During development we have been running the application on port 8080, but on the server we will want to run it on port 80. To support both of these scenarios, we can make the port configurable through an environment variable. Vert.x makes environment variables available to verticles through the container API. In JavaScript, the variables can be found in the container.env object. Let's use it to give our application a port at runtime.
Find the following line in the deployment verticle app.js:

port: 8080,

Change it to the following line:

port: parseInt(container.env.get('MINDMAPS_PORT')) || 8080,

This gets the MINDMAPS_PORT environment variable and parses it from a string to an integer using the standard JavaScript parseInt function. If no port has been given, the default value 8080 is used.
We also need to change the host configuration of the web server. So far, we have been binding to localhost, but now we also want the application to be accessible from outside the server. Find the following line in app.js:

host: "localhost",

Change it to the following line:

host: "0.0.0.0",

Using the host 0.0.0.0 will make the server bind to all IPv4 network interfaces the server has.
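Putting the two changes together, the relevant part of the web server configuration in app.js now looks roughly like this (the surrounding configuration object is abbreviated, and webServerConf is just an assumed name for illustration):

// Sketch of the web server configuration; other settings omitted.
var webServerConf = {
    // Use the MINDMAPS_PORT environment variable when set, otherwise 8080.
    port: parseInt(container.env.get('MINDMAPS_PORT')) || 8080,
    // Bind to all IPv4 interfaces instead of just localhost.
    host: "0.0.0.0"
};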
Setting up the application on the server
We are going to need some way of transferring the application code itself to the server, as well as delivering incremental updates as new versions of the application are developed. One of the simplest ways to accomplish this is to just transfer the application files over using the rsync tool, which is what we will do.
rsync is a widely used Unix tool for transferring files between machines. It has some useful features over plain file copying, such as only copying the deltas of what has changed, and two-way synchronization of files.
Create a directory for the application, in the home directory of the mindmaps user, using the following command:

mkdir ~/app

Go back to the application root directory and transfer the files from it to the new remote directory:

rsync -rtzv . mindmaps@192.168.33.10:~/app

Testing the setup
At this point, the project working tree should already be in the application directory on the remote server, because we have transferred it over using rsync. You should also be able to run it on the virtual machine, provided that you have the JDK, Vert.x, and MongoDB installed, and that you have authbind installed and configured. You can run the app with the following commands:

cd ~/app
JAVA_OPTS="-Djava.net.preferIPv4Stack=true" MINDMAPS_PORT=80 authbind ~/vert.x-2.0.1-final/bin/vertx run app.js

Let's go through the run command bit by bit:
We pass a Java system parameter called java.net.preferIPv4Stack to Java via the JAVA_OPTS environment variable. This will have Java use IPv4 networking only. We need it because the authbind utility only supports IPv4.
We also explicitly set the application to use port 80 using the MINDMAPS_PORT environment variable.
We wrap the Vert.x command with the authbind command.
Finally, there's the call to Vert.x. Substitute the path to the Vert.x executable with the path you installed Vert.x to.
After starting the application, you should be able to see it by navigating to http://192.168.33.10 in a browser.
Setting up an upstart service
We have our application fully operational, but it isn't very convenient or reliable to have to start it manually. What we'll do next is set up an Ubuntu upstart job that will make sure the application is always running and survives things like server restarts.
Upstart is an Ubuntu utility that handles task supervision and automated starting and stopping of tasks when the machine starts up, shuts down, or when some other events occur. It is similar to the /sbin/init daemon, but is arguably easier to configure, which is the reason we'll be using it.
The first thing we need to do is set up an upstart configuration file. Open an editor with root access (using sudo) for a new file /etc/init/mindmaps.conf, and set its contents as follows:

start on runlevel [2345]
stop on runlevel [016]
setuid mindmaps
setgid mindmaps
env JAVA_OPTS="-Djava.net.preferIPv4Stack=true"
env MINDMAPS_PORT=80
chdir /home/mindmaps/app
exec authbind /home/mindmaps/vert.x-2.0.1-final/bin/vertx run app.js

Let's go through the file bit by bit:
On the first two lines, we configure when this service will start and stop. This is defined using runlevels, which are numeric identifiers of different states of the operating system (http://en.wikipedia.org/wiki/Runlevel). 2, 3, 4, and 5 designate runlevels where the system is operational; 0, 1, and 6 designate runlevels where the system is stopping or restarting.
We set the user and group the service will run as to the mindmaps user and its group.
We set the two environment variables we also used earlier when testing the service: JAVA_OPTS for letting Java know it should only use the IPv4 stack, and MINDMAPS_PORT to let our application know that it should use port 80.
We change the working directory of the service to where our application resides, using the chdir directive.
Finally, we define the command that starts the service. It is the vertx command wrapped in the authbind command. Be sure to change the directory for the vertx binary to match the directory you installed Vert.x to.
Let's give the mindmaps user the permission to manage this job so that we won't have to always run it as root. Open up the /etc/sudoers file in an editor with the following command:

sudo /usr/sbin/visudo

At the end of the file, add the following line:

mindmaps ALL = (root) NOPASSWD: /sbin/start mindmaps, /sbin/stop mindmaps, /sbin/restart mindmaps, /sbin/status mindmaps

The visudo command is used to configure the privileges of different users to use the sudo command. With the line we added, we enabled the mindmaps user to run a few specific commands without having to supply a password.
At this point you should be able to start and stop the application as the mindmaps user:

sudo start mindmaps

You also have the following additional commands available for managing the service:

sudo status mindmaps
sudo restart mindmaps
sudo stop mindmaps

If there is a problem with the commands, there might be a configuration error. The upstart service will log errors to the file /var/log/upstart/mindmaps.log. You will need to open it using the sudo command.
Deploying new versions
Deploying a new version of the application consists of the following two steps:
Transferring the new files over using rsync
Restarting the mindmaps service
We can make this even easier by creating a shell script that executes both steps. Create a file called deploy.sh in the root directory of the project and set its contents as:

#!/bin/sh
rsync -rtzv . mindmaps@192.168.33.10:~/app/
ssh mindmaps@192.168.33.10 sudo restart mindmaps

Make the script executable using the following command:

chmod +x deploy.sh

After this, just run the following command whenever you want a new version on the server:

./deploy.sh

To make deployment even more streamlined, you can set up SSH public key authentication so that you won't need to supply the password of the mindmaps user as you deploy. See https://help.ubuntu.com/community/SSH/OpenSSH/Keys for more information.
Summary
In this article, we have learned the following things:
How to set up a Linux server for Vert.x production deployment
How to set up deployment for a Vert.x application using rsync
How to start and supervise a Vert.x process using upstart
Resources for Article:
Further resources on this subject:
IRC-style chat with TCP server and event bus [Article]
Coding for the Real-time Web [Article]
Integrating Storm and Hadoop [Article]