How-To Tutorials - Front-End Web Development

341 Articles

Network Based Ubuntu Installations

Packt
08 Mar 2010
4 min read
I will first outline the requirements and how to get started with a network installation. Next I will walk through a network installation including screenshots for every step. I will also include text descriptions of each step and screenshot.

Requirements

In order to install a machine over the network you'll need the network installer image. Unfortunately these images are not well publicized, and rarely listed alongside the other .ISO images. For this reason I have included direct download links to the 32-bit and 64-bit images. It is important that you download the network installer images from the same mirror that you will be installing from. These images are often bound to the kernel and library versions contained within the repository, and a mismatch will cause a failed installation.

32-bit: http://archive.ubuntu.com/ubuntu/dists/karmic/main/installer-i386/current/images/netboot/mini.iso
64-bit: http://archive.ubuntu.com/ubuntu/dists/karmic/main/installer-amd64/current/images/netboot/mini.iso

If you'd prefer to use a different repository, simply look for the "installer-$arch" folder within the main folder of the version you'd like to install.

Once you've downloaded your preferred image you'll need to write the image to a CD. This can be done from an existing Linux installation (Ubuntu or otherwise), by following the steps below:

1. Navigate to your download location (likely ~/Downloads/).
2. Right-click on the mini.iso file.
3. Select Write to Disk...

This will present you with an ISO image burning utility. Simply verify that it recognizes a disk in your CD writer, and that it has selected the mini.iso for writing. An image of this size (~12M) should only take a minute to write. If possible, you may want to burn this image to a business card CD. Due to the size of the installation image (~12M), you'll have plenty of room on even the smallest media.

Installation

Congratulations. You're now the proud owner of an Ubuntu network installation CD. You can use this small CD to install an Ubuntu machine anywhere you have access to an Ubuntu repository. This can be a public repository or a local repository. If you'd like to create a local repository you may want to read the article on Creating a Local Ubuntu Repository using Apt-Mirror and Apt-Cacher, for additional information on creating a mirror or caching server.

To kick off your installation simply reboot your machine and instruct it to boot off a CD. This is often done by pressing a function key during the initial boot process. On many Dell machines this is the F12 button. Some machines are already configured to boot from a CD if present during the boot process. If this is not the case for you, please consult your BIOS settings for more information.

When the CD boots you'll be presented with a very basic prompt. Simply press ENTER to continue. This will then load the Ubuntu specific boot menu. For this article I selected Install from the main menu. The other options are beyond the scope of this tutorial. This will load some modules and then start the installation program.

The network installer is purely text based. This may seem like a step backward for those used to the LiveCD graphical installers, but the text based nature allows for greater flexibility and advanced features. During the following screens I will outline what the prompts are asking for, and what additional options (if any) are available at each stage.

The first selection menu that you will be prompted with is the language menu. This should default to "English". Of course you can select your preferred language as needed. Second, to verify the language variant, you'll need to select your country. Based on your first selection your menu may not appear with the same options as in this screenshot. Third you'll be asked to select or verify your keyboard layout. The installer will ask you if you'd like to automatically detect the proper keyboard layout. If you select Yes you will be prompted to press specific keys from a displayed list until it has verified your layout. If you select No you'll be prompted to select your layout from a list.
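If you prefer the command line to the GUI burning utility described above, the same write-to-CD step can be scripted. The sketch below is not part of the original article; it shells out to wodim, the CD-burning tool shipped with Ubuntu releases of that era, and the device name /dev/cdrw and the download path are assumptions you would adjust for your own machine:

```python
import subprocess
from pathlib import Path

# Assumptions: mini.iso was downloaded to ~/Downloads and the CD writer is /dev/cdrw.
iso_path = Path.home() / "Downloads" / "mini.iso"
device = "/dev/cdrw"

if not iso_path.exists():
    raise SystemExit(f"{iso_path} not found; download mini.iso first")

# wodim -v prints progress while it burns the image to the disc.
subprocess.run(["wodim", "-v", f"dev={device}", str(iso_path)], check=True)
print("Image written; reboot the target machine and boot from the CD.")
```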


Preparing Optimizations

Packt
04 Jun 2015
11 min read
In this article by Mayur Pandey and Suyog Sarda, authors of LLVM Cookbook, we will look into the following recipes:

- Various levels of optimization
- Writing your own LLVM pass
- Running your own pass with the opt tool
- Using another pass in a new pass

Once the source code transformation completes, the output is in the LLVM IR form. This IR serves as a common platform for converting into assembly code, depending on the backend. However, before converting into assembly code, the IR can be optimized to produce more effective code. The IR is in the SSA form, where every new assignment to a variable is a new variable itself—a classic case of an SSA representation.

In the LLVM infrastructure, a pass serves the purpose of optimizing LLVM IR. A pass runs over the LLVM IR, processes the IR, analyzes it, identifies the optimization opportunities, and modifies the IR to produce optimized code. The command-line interface opt is used to run optimization passes on LLVM IR.

Various levels of optimization

There are various levels of optimization, starting at 0 and going up to 3 (there is also s for space optimization). The code gets more and more optimized as the optimization level increases. Let's try to explore the various optimization levels.

Getting ready...

Various optimization levels can be understood by running the opt command-line interface on LLVM IR. For this, an example C program can first be converted to IR using the Clang frontend. Open an example.c file and write the following code in it:

    $ vi example.c

    int main(int argc, char **argv) {
      int i, j, k, t = 0;
      for(i = 0; i < 10; i++) {
        for(j = 0; j < 10; j++) {
          for(k = 0; k < 10; k++) {
            t++;
          }
        }
        for(j = 0; j < 10; j++) {
          t++;
        }
      }
      for(i = 0; i < 20; i++) {
        for(j = 0; j < 20; j++) {
          t++;
        }
        for(j = 0; j < 20; j++) {
          t++;
        }
      }
      return t;
    }

Now convert this into LLVM IR using the clang command, as shown here:

    $ clang -S -O0 -emit-llvm example.c

A new file, example.ll, will be generated, containing LLVM IR. This file will be used to demonstrate the various optimization levels available.

How to do it…

Do the following steps:

1. The opt command-line tool can be run on the generated example.ll file:

    $ opt -O0 -S example.ll

   The -O0 syntax specifies the least optimization level.

2. Similarly, you can run other optimization levels:

    $ opt -O1 -S example.ll
    $ opt -O2 -S example.ll
    $ opt -O3 -S example.ll

How it works…

The opt command-line interface takes the example.ll file as the input and runs the series of passes specified in each optimization level. It can repeat some passes in the same optimization level. To see which passes are being used in each optimization level, you have to add the --debug-pass=Structure command-line option to the previous opt commands.

See also

To know more about the various other options that can be used with the opt tool, refer to http://llvm.org/docs/CommandGuide/opt.html

Writing your own LLVM pass

All LLVM passes are subclasses of the Pass class, and they implement functionality by overriding the virtual methods inherited from Pass. LLVM applies a chain of analyses and transformations on the target program. A pass is an instance of the Pass LLVM class.

Getting ready

Let's see how to write a pass. Let's name the pass function block counter; once done, it will simply display the name of the function and count the basic blocks in that function when run. First, a Makefile needs to be written for the pass. Follow the given steps to write a Makefile:

1. Open a Makefile in the llvm lib/Transform folder:

    $ vi Makefile

2. Specify the path to the LLVM root folder and the library name, and make this pass a loadable module by specifying it in the Makefile, as follows:

    LEVEL = ../../..
    LIBRARYNAME = FuncBlockCount
    LOADABLE_MODULE = 1
    include $(LEVEL)/Makefile.common

This Makefile specifies that all the .cpp files in the current directory are to be compiled and linked together in a shared object.

How to do it…

Do the following steps:

1. Create a new .cpp file called FuncBlockCount.cpp:

    $ vi FuncBlockCount.cpp

2. In this file, include some header files from LLVM:

    #include "llvm/Pass.h"
    #include "llvm/IR/Function.h"
    #include "llvm/Support/raw_ostream.h"

3. Include the llvm namespace to enable access to LLVM functions:

    using namespace llvm;

4. Then start with an anonymous namespace:

    namespace {

5. Next declare the pass:

    struct FuncBlockCount : public FunctionPass {

6. Then declare the pass identifier, which will be used by LLVM to identify the pass:

    static char ID;
    FuncBlockCount() : FunctionPass(ID) {}

7. This step is one of the most important steps in writing a pass—writing a run function. Since this pass inherits FunctionPass and runs on a function, a runOnFunction is defined to be run on a function:

    bool runOnFunction(Function &F) override {
      errs() << "Function " << F.getName() << '\n';
      return false;
    }
    };
    }

   This function prints the name of the function that is being processed.

8. The next step is to initialize the pass ID:

    char FuncBlockCount::ID = 0;

9. Finally, the pass needs to be registered, with a command-line argument and a name:

    static RegisterPass<FuncBlockCount> X("funcblockcount", "Function Block Count", false, false);

Putting everything together, the entire code looks like this:

    #include "llvm/Pass.h"
    #include "llvm/IR/Function.h"
    #include "llvm/Support/raw_ostream.h"

    using namespace llvm;

    namespace {
    struct FuncBlockCount : public FunctionPass {
      static char ID;
      FuncBlockCount() : FunctionPass(ID) {}
      bool runOnFunction(Function &F) override {
        errs() << "Function " << F.getName() << '\n';
        return false;
      }
    };
    }

    char FuncBlockCount::ID = 0;
    static RegisterPass<FuncBlockCount> X("funcblockcount", "Function Block Count", false, false);

How it works

A simple gmake command compiles the file, so a new file FuncBlockCount.so is generated at the LLVM root directory. This shared object file can be dynamically loaded into the opt tool to run it on a piece of LLVM IR code. How to load and run it will be demonstrated in the next section.

See also

To know more about how a pass can be built from scratch, visit http://llvm.org/docs/WritingAnLLVMPass.html

Running your own pass with the opt tool

The pass written in the previous recipe, Writing your own LLVM pass, is ready to be run on the LLVM IR. This pass needs to be loaded dynamically for the opt tool to recognize and execute it.

How to do it…

Do the following steps:

1. Write the C test code in the sample.c file, which we will convert into an .ll file in the next step:

    $ vi sample.c

    int foo(int n, int m) {
      int sum = 0;
      int c0;
      for (c0 = n; c0 > 0; c0--) {
        int c1 = m;
        for (; c1 > 0; c1--) {
          sum += c0 > c1 ? 1 : 0;
        }
      }
      return sum;
    }

2. Convert the C test code into LLVM IR using the following command:

    $ clang -O0 -S -emit-llvm sample.c -o sample.ll

   This will generate a sample.ll file.

3. Run the new pass with the opt tool, as follows:

    $ opt -load (path_to_.so_file)/FuncBlockCount.so -funcblockcount sample.ll

   The output will look something like this:

    Function foo

How it works…

As seen in the preceding code, the shared object loads dynamically into the opt command-line tool and runs the pass. It goes over the function and displays its name. It does not modify the IR. Further enhancement of the new pass is demonstrated in the next recipe.

See also

To know more about the various types of the Pass class, visit http://llvm.org/docs/WritingAnLLVMPass.html#pass-classes-and-requirements

Using another pass in a new pass

A pass may require another pass to get some analysis data, heuristics, or any such information to decide on a further course of action. The pass may just require some analysis such as memory dependencies, or it may require the altered IR as well. The new pass that you just saw simply prints the name of the function. Let's see how to enhance it to count the basic blocks in a loop, which also demonstrates how to use other pass results.

Getting ready

The code used in the previous recipe remains the same. Some modifications are required, however, to enhance it—as demonstrated in the next section—so that it counts the number of basic blocks in the IR.

How to do it…

The getAnalysis function is used to specify which other pass will be used:

1. Since the new pass will be counting the number of basic blocks, it requires loop information. This is specified using the getAnalysis loop function:

    LoopInfo *LI = &getAnalysis<LoopInfoWrapperPass>().getLoopInfo();

2. This will call the LoopInfo pass to get information on the loop. Iterating through this object gives the basic block information:

    unsigned num_Blocks = 0;
    Loop::block_iterator bb;
    for(bb = L->block_begin(); bb != L->block_end(); ++bb)
      num_Blocks++;
    errs() << "Loop level " << nest << " has " << num_Blocks << " blocks\n";

   This will go over the loop to count the basic blocks inside it. However, it counts only the basic blocks in the outermost loop. To get information on the innermost loop, recursive calling of the getSubLoops function will help. Putting the logic in a separate function and calling it recursively makes more sense:

    void countBlocksInLoop(Loop *L, unsigned nest) {
      unsigned num_Blocks = 0;
      Loop::block_iterator bb;
      for(bb = L->block_begin(); bb != L->block_end(); ++bb)
        num_Blocks++;
      errs() << "Loop level " << nest << " has " << num_Blocks << " blocks\n";
      std::vector<Loop*> subLoops = L->getSubLoops();
      Loop::iterator j, f;
      for (j = subLoops.begin(), f = subLoops.end(); j != f; ++j)
        countBlocksInLoop(*j, nest + 1);
    }

    virtual bool runOnFunction(Function &F) {
      LoopInfo *LI = &getAnalysis<LoopInfoWrapperPass>().getLoopInfo();
      errs() << "Function " << F.getName() << "\n";
      for (Loop *L : *LI)
        countBlocksInLoop(L, 0);
      return false;
    }

How it works…

The newly modified pass now needs to run on a sample program. Follow the given steps to modify and run the sample program:

1. Open the sample.c file and replace its content with the following program:

    int main(int argc, char **argv) {
      int i, j, k, t = 0;
      for(i = 0; i < 10; i++) {
        for(j = 0; j < 10; j++) {
          for(k = 0; k < 10; k++) {
            t++;
          }
        }
        for(j = 0; j < 10; j++) {
          t++;
        }
      }
      for(i = 0; i < 20; i++) {
        for(j = 0; j < 20; j++) {
          t++;
        }
        for(j = 0; j < 20; j++) {
          t++;
        }
      }
      return t;
    }

2. Convert it into a .ll file using Clang:

    $ clang -O0 -S -emit-llvm sample.c -o sample.ll

3. Run the new pass on the previous sample program:

    $ opt -load (path_to_.so_file)/FuncBlockCount.so -funcblockcount sample.ll

   The output will look something like this:

    Function main
    Loop level 0 has 11 blocks
    Loop level 1 has 3 blocks
    Loop level 1 has 3 blocks
    Loop level 0 has 15 blocks
    Loop level 1 has 7 blocks
    Loop level 2 has 3 blocks
    Loop level 1 has 3 blocks

There's more…

The LLVM pass manager provides a debug pass option that gives us the chance to see which passes interact with our analyses and optimizations, as follows:

    $ opt -load (path_to_.so_file)/FuncBlockCount.so -funcblockcount sample.ll -disable-output -debug-pass=Structure

Summary

In this article you have explored the various optimization levels, and the optimization techniques that kick in at each level. We also saw the step-by-step approach to writing our own LLVM pass.

Further resources on this subject: Integrating a D3.js visualization into a simple AngularJS application [article], Getting Up and Running with Cassandra [article], Cassandra Architecture [article]
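Rerunning the clang and opt commands from this recipe by hand gets tedious. Purely as an illustration (this is not from the book), the small Python wrapper below chains the two steps together; it assumes clang and opt are on your PATH and that FuncBlockCount.so has already been built, and the shared-object path is a placeholder you would substitute:

```python
import subprocess
from pathlib import Path

# Assumption: point this at wherever your build placed the shared object.
PASS_LIB = Path("/path/to/FuncBlockCount.so")
SOURCE = Path("sample.c")
IR_FILE = SOURCE.with_suffix(".ll")

def run(cmd):
    # Echo each command so the output mirrors what you would type by hand.
    print("$", " ".join(cmd))
    subprocess.run(cmd, check=True)

# Step 1: lower the C file to unoptimized LLVM IR, exactly as in the recipe.
run(["clang", "-O0", "-S", "-emit-llvm", str(SOURCE), "-o", str(IR_FILE)])

# Step 2: load the custom pass into opt and run it, with the debug-pass report.
run(["opt", "-load", str(PASS_LIB), "-funcblockcount", str(IR_FILE),
     "-disable-output", "-debug-pass=Structure"])
```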


Creating an AutoCAD command

Packt
10 Oct 2013
5 min read
Some custom AutoCAD applications are designed to run unattended, such as when a drawing loads or in reaction to some other event that occurs in your AutoCAD drawing session. But the majority of your AutoCAD programming work will likely involve custom AutoCAD commands, whether automating a sequence of built-in AutoCAD commands or implementing new functionality to address a business need. Commands can be simple (printing to the command window or a dialog box) or more involved (generating a new design on the fly, based on data stored in an existing design).

Our first custom command will be somewhat simple. We will define a command which will count the number of AutoCAD entities found in ModelSpace (the space in AutoCAD where you model your designs). Then, we will display that data in the command window. Frequently, custom commands acquire information about an object in AutoCAD (or summarize a collection of user input), and then present that information to the user, either for the purpose of reporting data or so the user can make an informed choice or selection based upon the data being presented.

Using Netload to load our command class

You may be wondering at this point, "How do we load and run our plugin?" I'm glad you asked! To load the plugin, enter the native AutoCAD command NETLOAD. When the dialog box appears, navigate to the DLL file, MyAcadCSharpPlugin1.dll, select it, and click on OK. Our custom command will now be available in the AutoCAD session. At the command prompt, enter COUNTENTS to execute the command.

Getting ready

In our initial project, we have a class MyCommands, which was generated by the AutoCAD 2014 .NET Wizard. This class contains stubs for four types of AutoCAD command structures: a basic command, a command with pickfirst selection, a session command, and a LISP function. For this plugin, we will create a basic command, CountEnts, using the stub for the Modal command.

How to do it...

Let's take a look at the code we will need in order to read the AutoCAD database, count the entities in ModelSpace, identify (and count) block references, and display our findings to users:

1. First, let's get the active AutoCAD document and the drawing database.
2. Next, begin a new transaction. Use the using keyword, which will also take care of disposing of the transaction.
3. Open the block table in AutoCAD. In this case, open it for a read operation using the ForRead keyword.
4. Similarly, open the block table record for ModelSpace, also for read (ForRead); we aren't writing new entities to the drawing database at this time.
5. We'll initialize two counters: one to count all AutoCAD entities, and one to specifically count block references (also known as Inserts).
6. Then, as we iterate through all of the entities in AutoCAD's ModelSpace, we'll tally AutoCAD entities in general, as well as block references.
7. Having counted the total number of entities overall, as well as the total number of block references, we'll display that information to the user in a dialog box.

How it works...

AutoCAD is a multi-document application. We must identify the active document (the drawing that is activated) in order to read the correct database. Before reading the database we must start a transaction. In fact, we use transactions whenever we read from or write to the database. In the drawing database, we open AutoCAD's block table to read it. The block table contains the block table records ModelSpace, PaperSpace, and PaperSpace0. We are going to read the entities in ModelSpace, so we will open that block table record for reading.

We create two variables to store the tallies as we iterate through ModelSpace, keeping track of both block references and AutoCAD entities in general. A block reference is just a reference to a block. A block is a group of entities that is selectable as if it were a single entity. Blocks can be saved as drawing files (.dwg) and then inserted into other drawings.

Once we have examined every entity in ModelSpace, we display the tallies (which are stored in the two count variables we created) to the user in a dialog box. Because we used the using keyword when creating the transaction, it is automatically disposed of when our command function ends.

Summary

The session command, one of the four types of command stubs added to our project by the AutoCAD 2014 .NET Wizard, has application (rather than document) context. This means it is executed in the context of the entire AutoCAD session, not just within the context of the current document. This allows for some operations that are not permitted in document context, such as creating a new drawing. The other command stub, described as having pickfirst selection, is executed with pre-selected AutoCAD entities. In other words, users can select (or pick) AutoCAD entities just prior to executing the command, and those entities will be known to the command upon execution.

Further resources on this subject: Dynamically enable a control (Become an expert) [Article], Introduction to 3D Design using AutoCAD [Article], Getting Started with DraftSight [Article]
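The article walks through its C#/.NET listing without reproducing it here. Purely to illustrate the same counting logic from outside the .NET plugin model (and not as the book's code), the sketch below drives a running AutoCAD session through its COM automation interface from Python using pywin32; the ProgID, the ModelSpace iteration, and the ObjectName check come from AutoCAD's ActiveX API and should be treated as assumptions to verify against your AutoCAD version:

```python
import win32com.client  # pip install pywin32; requires AutoCAD running on Windows

# Assumption: attach to an already-running AutoCAD session via its COM ProgID.
acad = win32com.client.Dispatch("AutoCAD.Application")
doc = acad.ActiveDocument          # the active drawing, as in the .NET example
model_space = doc.ModelSpace       # entities live in the ModelSpace collection

total_entities = 0
block_references = 0
for entity in model_space:
    total_entities += 1
    # Block references (Inserts) report the class name "AcDbBlockReference".
    if entity.ObjectName == "AcDbBlockReference":
        block_references += 1

print(f"Entities in ModelSpace: {total_entities}")
print(f"Block references (Inserts): {block_references}")
```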


Easily Writing SQL Queries with Spring Python

Packt
24 May 2010
9 min read
Many of our applications contain dynamic data that needs to be pulled from and stored within a relational database. Even though key/value based data stores exist, a huge majority of data stores in production are housed in a SQL-based relational database. Given this de facto requirement, it improves developer efficiency if we can focus on the SQL queries themselves, and not spend lots of time writing plumbing code and making every query fault tolerant.

The classic SQL issue

SQL is a long existing standard that shares a common paradigm for writing queries with many modern programming languages (including Python). The resulting effect is that coding queries by hand is laborious. Let's explore this dilemma by writing a simple SQL query using Python's database API.

    DROP TABLE IF EXISTS article;
    CREATE TABLE article (
        id serial PRIMARY KEY,
        title VARCHAR(11),
        wiki_text VARCHAR(10000)
    );
    INSERT INTO article(id, title, wiki_text)
    VALUES(1, 'Spring Python Book', 'Welcome to the [http://springpythonbook.com Spring Python] book, where you can learn more about [[Spring Python]].');
    INSERT INTO article(id, title, wiki_text)
    VALUES(2, 'Spring Python', ''''Spring Python''' takes the concepts of Spring and applies them to world of [http://python.org Python].');

Now, let's write a SQL statement that counts the number of wiki articles in the system using the database's shell.

    SELECT COUNT(*) FROM ARTICLE

Now let's write some Python code that will run the same query on an sqlite3 database using Python's official database API (http://www.python.org/dev/peps/pep-0249).

    import sqlite3

    db = sqlite3.connect("/path/to/sqlite3db")
    cursor = db.cursor()
    results = None
    try:
        try:
            cursor.execute("SELECT COUNT(*) FROM ARTICLE")
            results = cursor.fetchall()
        except Exception, e:
            print "execute: Trapped %s" % e
    finally:
        try:
            cursor.close()
        except Exception, e:
            print "close: Trapped %s, and throwing away" % e
    return results[0][0]

That is a considerable block of code to execute such a simple query. Let's examine it in closer detail. First, we connect to the database. For sqlite3, all we needed was a path. Other database engines usually require a username and a password. Next, we create a cursor in which to hold our result set. Then we execute the query. To protect ourselves from any exceptions, we need to wrap this with some exception handlers. After completing the query, we fetch the results. After pulling the results from the result set into a variable, we close the cursor. Finally, we can return our response. Python bundles up the results into an array of tuples. Since we only need one row, and the first column, we do a double index lookup.

What is all this code trying to find in the database? The key statement is in a single line:

    cursor.execute("SELECT COUNT(*) FROM ARTICLE")

What if we were writing a script? This would be a lot of work to find one piece of information. Granted, a script that exits quickly could probably skip some of the error handling as well as closing the cursor. But it is still quite a bit of boilerplate just to get a cursor for running a query. And what if this is part of a long running application? We need to close the cursors after every query to avoid leaking database resources. Large applications also have a lot of different queries we need to maintain. Coding this pattern over and over can sap a development team of its energy.

Parameterizing the code

This boilerplate block of code is a recurring pattern. Do you think we could parameterize it and make it reusable? We've already identified the key piece of the SQL statement. Let's try to rewrite it as a function doing just that.

    import sqlite3

    def query(sql_statement):
        db = sqlite3.connect("/path/to/sqlite3db")
        cursor = db.cursor()
        results = None
        try:
            try:
                cursor.execute(sql_statement)
                results = cursor.fetchall()
            except Exception, e:
                print "execute: Trapped %s" % e
        finally:
            try:
                cursor.close()
            except Exception, e:
                print "close: Trapped %s, and throwing away" % e
        return results[0][0]

Our first step nicely parameterizes the SQL statement, but that is not enough. The return statement is hard coded to return the first entry of the first row. For counting articles, what we have written is fine. But this isn't flexible enough for other queries. We need the ability to plug in our own results handler.

    import sqlite3

    def query(sql_statement, row_handler):
        db = sqlite3.connect("/path/to/sqlite3db")
        cursor = db.cursor()
        results = None
        try:
            try:
                cursor.execute(sql_statement)
                results = cursor.fetchall()
            except Exception, e:
                print "execute: Trapped %s" % e
        finally:
            try:
                cursor.close()
            except Exception, e:
                print "close: Trapped %s, and throwing away" % e
        return row_handler(results)

We can now code a custom handler.

    def count_handler(results):
        return results[0][0]

    query("select COUNT(*) from ARTICLES", count_handler)

With this custom results handler, we can now invoke our query function, and feed it both the query and the handler. The only thing left is to handle creating a connection to the database. It is left as an exercise for the reader to wrap the sqlite3 connection code with a factory solution.

What we have coded here is essentially the core functionality of DatabaseTemplate. This method of taking an algorithm and parameterizing it for reuse is known as the template pattern. There are some extra checks done to protect the query from SQL injection attacks.

Replacing multiple lines of query code with one line of Spring Python

Spring Python has a convenient utility class called DatabaseTemplate that greatly simplifies this problem. Let's replace the two lines of import and connect code from the earlier example with some Spring Python setup code.

    from springpython.database.factory import Sqlite3ConnectionFactory
    from springpython.database.core import DatabaseTemplate

    conn_factory = Sqlite3ConnectionFactory("/path/to/sqlite3db")
    dt = DatabaseTemplate(conn_factory)

At first glance, we appear to be taking a step back. We just replaced two lines of earlier code with four lines. However, the next block should improve things significantly. Let's replace the earlier coded query with a call using our DatabaseTemplate instance:

    return dt.query_for_object("SELECT COUNT(*) FROM ARTICLE")

Now we have managed to reduce a complex 14-line block of code into one line of Spring Python code. This makes our Python code appear as simple as the original SQL statement we typed in the database's shell. And it also reduces the noise.

The Spring triangle—Portable Service Abstractions

The Spring triangle diagram shown earlier illustrates one of the key principles behind Spring Python. The DatabaseTemplate represents a Portable Service Abstraction because:

- It is portable because it uses Python's standardized API, not tying us to any database vendor. Instead, in our example, we injected in an instance of Sqlite3ConnectionFactory.
- It provides the useful service of easily accessing information stored in a relational database, while letting us focus on the query, not the plumbing code.
- It offers a nice abstraction over Python's low-level database API with reduced code noise. This allows us to avoid the cost and risk of writing code to manage cursors and exception handling.

DatabaseTemplate handles exceptions by catching and holding them, then properly closing the cursor. It then raises the exception wrapped inside a Spring Python DataAccessException. This way, database resources are properly disposed of without losing the exception stack trace.

Using DatabaseTemplate to retrieve objects

Our first example showed how we can easily reduce our code volume. But it was really only for a simple case. A really useful operation would be to execute a query, and transform the results into a list of objects. First, let's define a simple object we want to populate with the information retrieved from the database. As shown on the Spring triangle diagram, using simple objects is a core facet of the 'Spring way'.

    class Article(object):
        def __init__(self, id=None, title=None, wiki_text=None):
            self.id = id
            self.title = title
            self.wiki_text = wiki_text

If we wanted to code this using Python's standard API, our code would be relatively verbose, like this:

    cursor = db.cursor()
    results = []
    try:
        try:
            cursor.execute("SELECT id, title, wiki_text FROM ARTICLE")
            temp = cursor.fetchall()
            for row in temp:
                results.append(
                    Article(id=row[0], title=row[1], wiki_text=row[2]))
        except Exception, e:
            print "execute: Trapped %s" % e
    finally:
        try:
            cursor.close()
        except Exception, e:
            print "close: Trapped %s, and throwing away" % e
    return results

This isn't that different from the earlier example. The key difference is that instead of assigning fetchall directly to results, we instead iterate over it, generating a list of Article objects. Instead, let's use DatabaseTemplate to cut down on the volume of code:

    return dt.query("SELECT id, title, wiki_text FROM ARTICLE", ArticleMapper())

We aren't done yet. We have to code ArticleMapper, the class used to iterate over our result set.

    from springpython.database.core import RowMapper

    class ArticleMapper(RowMapper):
        def map_row(self, row, metadata=None):
            return Article(id=row[0], title=row[1], wiki_text=row[2])

RowMapper defines a single method: map_row. This method is called for each row of data, and includes not only the information, but also the metadata provided by the database. ArticleMapper can be re-used for every query that performs the same mapping. This is slightly different from the parameterized example shown earlier where we defined a row-handling function. Here we define a class that contains the map_row function. But the concept is the same: inject a row-handler to convert the data.
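To see the template pattern from this article end to end without installing anything beyond the standard library, here is a small, self-contained sketch (not from the book) that builds the ARTICLE table in an in-memory SQLite database and reuses one query helper with two different row handlers, mirroring the count_handler and ArticleMapper ideas above; it is written for Python 3, whereas the book's listings use Python 2 syntax:

```python
import sqlite3

db = sqlite3.connect(":memory:")  # throwaway database just for the demonstration
db.execute("CREATE TABLE article (id INTEGER PRIMARY KEY, title TEXT, wiki_text TEXT)")
db.execute("INSERT INTO article VALUES (1, 'Spring Python Book', 'Welcome to the book.')")
db.execute("INSERT INTO article VALUES (2, 'Spring Python', 'Spring concepts applied to Python.')")

def query(sql_statement, row_handler):
    # Same shape as the article's helper: run the query, hand the rows to a handler,
    # and always close the cursor so resources are not leaked.
    cursor = db.cursor()
    try:
        cursor.execute(sql_statement)
        return row_handler(cursor.fetchall())
    finally:
        cursor.close()

class Article:
    def __init__(self, id=None, title=None, wiki_text=None):
        self.id, self.title, self.wiki_text = id, title, wiki_text

def count_handler(rows):
    return rows[0][0]

def article_mapper(rows):
    return [Article(id=r[0], title=r[1], wiki_text=r[2]) for r in rows]

print(query("SELECT COUNT(*) FROM article", count_handler))          # -> 2
for a in query("SELECT id, title, wiki_text FROM article", article_mapper):
    print(a.id, a.title)
```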


A ride through world's best ETL tool – Informatica PowerCenter

Packt
30 Dec 2014
25 min read
In this article, by Rahul Malewar, author of the book, Learning Informatica PowerCenter 9.x, we will go through the basics of Informatica PowerCenter. Informatica Corporation (Informatica), a multi-million dollar company incorporated in February 1993, is an independent provider of enterprise data integration and data quality software and services. The company enables a variety of complex enterprise data integration products, which include PowerCenter, Power Exchange, enterprise data integration, data quality, master data management, business to business (B2B) data exchange, application information lifecycle management, complex event processing, ultra messaging, and cloud data integration. Informatica PowerCenter is the most widely used tool of Informatica across the globe for various data integration processes. Informatica PowerCenter tool helps integration of data from almost any business system in almost any format. This flexibility of PowerCenter to handle almost any data makes it most widely used tool in the data integration world. (For more resources related to this topic, see here.) Informatica PowerCenter architecture PowerCenter has a service-oriented architecture that provides the ability to scale services and share resources across multiple machines. This lets you access the single licensed software installed on a remote machine via multiple machines. High availability functionality helps minimize service downtime due to unexpected failures or scheduled maintenance in the PowerCenter environment. Informatica architecture is divided into two sections: server and client. Server is the basic administrative unit of Informatica where we configure all services, create users, and assign authentication. Repository, nodes, Integration Service, and code page are some of the important services we configure while we work on the server side of Informatica PowerCenter. Client is the graphical interface provided to the users. Client includes PowerCenter Designer, PowerCenter Workflow Manager, PowerCenter Workflow Monitor, and PowerCenter Repository Manager. The best place to download the Informatica software for training purpose is from EDelivery (www.edelivery.com) website of Oracle. Once you download the files, start the extraction of the zipped files. After you finish extraction, install the server first and later client part of PowerCenter. For installation of Informatica PowerCenter, the minimum requirement is to have a database installed in your machine. Because Informatica uses the space from the Oracle database to store the system-related information and the metadata of the code, which you develop in client tool. Informatica PowerCenter client tools Informatica PowerCenter Designer client tool talks about working of the source files and source tables and similarly talks about working on targets. Designer tool allows import/create flat files and relational databases tables. Informatica PowerCenter allows you to work on both types of flat files, that is, delimited and fixed width files. In delimited files, the values are separated from each other by a delimiter. Any character or number can be used as delimiter but usually for better interpretation we use special characters as delimiter. In delimited files, the width of each field is not a mandatory option as each value gets separated by other using a delimiter. In fixed width files, the width of each field is fixed. The values are separated by each other by the fixed size of the column defined. 
There can be issues in extracting the data if the size of each column is not maintained properly. PowerCenter Designer tool allows you to create mappings using sources, targets, and transformations. Mappings contain source, target, and transformations linked to each other through links. The group of transformations which can be reused is called as mapplets. Mapplets are another important aspect of Informatica tool. The transformations are most important aspect of Informatica, which allows you to manipulate the data based on your requirements. There are various types of transformations available in Informatica PowerCenter. Every transformation performs specific functionality. Various transformations in Informatica PowerCenter The following are the various transformations in Informatica PowerCenter: Expression transformation is used for row-wise manipulation. For any type of manipulation you wish to do on an individual record, use Expression transformation. Expression transformation accepts the row-wise data, manipulates it, and passes to the target. The transformation receives the data from input port and it sends the data out from output ports. Use the Expression transformation for any row-wise calculation, like if you want to concatenate the names, get total salary, and convert in upper case. Aggregator transformation is used for calculations using aggregate functions on a column as against in the Expression transformation, which is used for row-wise manipulation. You can use aggregate functions, such as SUM, AVG, MAX, MIN, and so on in Aggregator transformation. When you use Aggregator transformation, Integration Services stores the data temporarily in cache memory. Cache memory is created because the data flows in row-wise manner in Informatica and the calculations required in Aggregator transformation is column wise. Unless we store the data temporarily in cache, we cannot perform the aggregate calculations to get the result. Using Group By option in Aggregator transformation, you can get the result of the Aggregate function based on group. Also it is always recommended that we pass sorted input to Aggregator transformation as this will enhance the performance. When you pass the sorted input to Aggregator transformation, Integration Services enhances the performance by storing less data into cache. When you pass unsorted data, Aggregator transformation stores all the data into cache which takes more time. When you pass the sorted data to Aggregator transformation, Aggregator transformation stores comparatively lesser data in the cache. Aggregator passes the result of each group as soon the data for particular group is received. Note that Aggregator transformation does not sort the data. If you have unsorted data, use Sorter transformation to sort the data and then pass the sorted data to Aggregator transformation. Sorter transformation is used to sort the data in ascending or descending order based on single or multiple key. Apart from ordering the data in ascending or descending order, you can also use Sorter transformation to remove duplicates from the data using the distinct option in the properties. Sorter can remove duplicates only if complete record is duplicate and not only particular column. Filter transformation is used to remove unwanted records from the mapping. You define the Filter condition in the Filter transformation. Based on filter condition, the records will be rejected or passed further in mapping. The default condition in Filter transformation is TRUE. 
Based on the condition defined, if the record returns True, the Filter transformation allows the record to pass. For each record which returns False, the Filter transformation drops those records. It is always recommended to use Filter transformation as early as possible in the mapping for better performance. Router transformation is single input group multiple output group transformation. Router can be used in place of multiple Filter transformations. Router transformation accepts the data once through input group and based on the output groups you define, it sends the data to multiple output ports. You need to define the filter condition in each output group. It is always recommended to use Router in place of multiple filters in the mapping to enhance the performance. Rank transformation is used to get top or bottom specific number of records based on the key. When you create a Rank transformation, a default output port RANKINDEX comes with the transformation. It is not mandatory to use the RANKINDEX port. Sequence Generator transformation is used to generate sequence of unique numbers. Based on the property defined in the Sequence Generator transformation, the unique values are generated. You need to define the start value, the increment by value, and the end value in the properties. Sequence Generator transformation has only two ports: NEXTVAL and CURRVAL. Both the ports are output port. Sequence Generator does not have any input port. You cannot add or delete any port in Sequence Generator. It is recommended that you should always use the NEXTVAL port first. If the NEXTVAL port is utilized, use the CURRVAL port. You can define the value of CURRVAL in the properties of Sequence Generator transformation. Joiner transformation is used to join two heterogeneous sources. You can join data from same source type also. The basic criteria for joining the data are a matching column in both the source. Joiner transformation has two pipelines, one is called mater and other is called as detail. We do not have left or right join as we have in SQL database. It is always recommended to make table with lesser number of record as master and other one as details. This is because Integration Service picks the data from master source and scans the corresponding record in details table. So if we have lesser number of records in master table, lesser number of times the scanning will happen. This enhances the performance. Joiner transformation has four types of joins: normal join, full outer join, master outer join, details outer join. Union transformation is used the merge the data from multiple sources. Union is multiple input single output transformation. This is opposite of Router transformation, which we discussed earlier. The basic criterion for using Union transformation is that you should have data with matching data type. If you do not have data with matching data type coming from multiple sources, Union transformation will not work. Union transformation merges the data coming from multiple sources and do not remove duplicates, that is, it acts as UNION ALL of SQL statements. As mentioned earlier, Union requires data coming from multiple sources. Union reads the data concurrently from multiple sources and processes the data. You can use heterogeneous sources to merge the data using Union transformation. Source Qualifier transformation acts as virtual source in Informatica. When you drag relational table or flat file in Mapping Designer, Source Qualifier transformation comes along. 
Source Qualifier is the point where actually Informatica processing starts. The extraction process starts from the Source Qualifier. Lookup transformation is used to lookup of source, Source Qualifier, or target to get the relevant data. You can look up on flat file or relational tables. Lookup transformation works on the similar lines as Joiner with few differences like Lookup does not require two source. Lookup transformations can be connected and unconnected. Lookup transformation extracts the data from the lookup table or file based on the lookup condition. When you create the Lookup transformation you can configure the Lookup transformation to cache the data. Caching the data makes the processing faster since the data is stored internally after cache is created. Once you select to cache the data, Lookup transformation caches the data from the file or table once and then based on the condition defined, lookup sends the output value. Since the data gets stored internally, the processing becomes faster as it does not require checking the lookup condition in file or database. Integration Services queries the cache memory as against checking the file or table for fetching the required data. The cache is created automatically and also it is deleted automatically once the processing is complete. Lookup transformation has four different types of ports. Input ports (I) receive the data from other transformation. This port will be used in Lookup condition. You need to have at least one input port. Output port (O) passes the data out of the Lookup transformation to other transformations. Lookup port (L) is the port for which you wish to bring the data in mapping. Each column is assigned as lookup and output port when you create the Lookup transformation. If you delete the lookup port from the flat file lookup source, the session will fail. If you delete the lookup port from relational lookup table, Integration Services extracts the data only with Lookup port. This helps in reducing the data extracted from the lookup source. Return port (R) is only used in case of unconnected Lookup transformation. This port indicates which data you wish to return in the Lookup transformation. You can define only one port as return port. Return port is not used in case on connected Lookup transformation. Cache is the temporary memory, which is created when you execute the process. Cache is created automatically when the process starts and is deleted automatically once the process is complete. The amount of cache memory is decided based on the property you define in the transformation level or session level. You usually set the property as default, so as required it can increase the size of the cache. If the size required for caching the data is more than the cache size defined, the process fails with the overflow error. There are different types of caches available for lookup transformation. You can define the session property to create the cache either sequentially or concurrently. When you select to create the cache sequentially, Integration Service caches the data in row-wise manner as the records enters the Lookup transformation. When the first record enters the Lookup transformation, lookup cache gets created and stores the matching record from the lookup table or file in the cache. This way the cache stores only matching data. It helps in saving the cache space by not storing the unnecessary data. 
When you select to create cache concurrently, Integration Service does not wait for the data to flow from the source, but it first caches complete data. Once the caching is complete, it allows the data to flow from the source. When you select concurrent cache, the performance enhances as compared to sequential cache, since the scanning happens internally using the data stored in cache. You can configure the cache to permanently save the data. By default, the cache is created as non-persistent, that is, the cache will be deleted once the session run is complete. If the lookup table or file does not change across the session runs, you can use the existing persistent cache. A cache is said to be static if it does not change with the changes happening in the lookup table. The static cache is not synchronized with the lookup table. By default Integration Service creates a static cache. Lookup cache is created as soon as the first record enters the Lookup transformation. Integration Service does not update the cache while it is processing the data. A cache is said to be dynamic if it changes with the changes happening in the lookup table. The static cache is synchronized with the lookup table. You can choose from the Lookup transformation properties to make the cache as dynamic. Lookup cache is created as soon as the first record enters the lookup transformation. Integration Service keeps on updating the cache while it is processing the data. The Integration Service marks the record as insert for new row inserted in dynamic cache. For the record which is updated, it marks the record as update in the cache. For every record which no change, the Integration Service marks it as unchanged. Update Strategy transformation is used to INSERT, UPDATE, DELETE, or REJECT record based on defined condition in the mapping. Update Strategy transformation is mostly used when you design mappings for SCD. When you implement SCD, you actually decide how you wish to maintain historical data with the current data. When you wish to maintain no history, complete history, or partial history, you can either use property defined in the session task or you use Update Strategy transformation. When you use Session task, you instruct the Integration Service to treat all records in the same way, that is, either insert, update or delete. When you use Update Strategy transformation in the mapping, the control is no more with the session task. Update Strategy transformation allows you to insert, update, delete or reject record based on the requirement. When you use Update Strategy transformation, the control is no more with session task. You need to define the following functions to perform the corresponding operation: DD_INSERT: This can be used when you wish to insert the records. It is also represented by numeric 0. DD_UPDATE: This can be used when you wish to update the records. It is also represented by numeric 1. DD_DELETE: This can be used when you wish to delete the records. It is also represented by numeric 2. DD_REJECT: This can be used when you wish to reject the records. It is also represented by numeric 3. Normalizer transformation is used in place of Source Qualifier transformation when you wish to read the data from Cobol Copybook source. Also, the Normalizer transformation is used to convert column-wise data to row-wise data. This is similar to transpose feature of MS Excel. You can use this feature if your source is Cobol Copybook file or relational database tables. 
Normalizer transformation converts column to row and also generate index for each converted row. Stored procedure is a database component. Informatica uses the stored procedure similar to database tables. Stored procedures are set of SQL instructions, which require certain set of input values and in return stored procedure returns output value. The way you either import or create database tables, you can import or create the stored procedure in mapping. To use the Stored Procedure in mapping the stored procedure should exist in the database. Similar to Lookup transformation, stored procedure can also be connected or unconnected transformation in Informatica. When you use connected stored procedure, you pass the value to stored procedure through links. When you use unconnected stored procedure, you pass the value using :SP function. Transaction Control transformation allows you to commit or rollback individual records, based on certain condition. By default, Integration Service commits the data based on the properties you define at the session task level. Using the commit interval property Integration Service commits or rollback the data into target. Suppose you define commit interval as 10,000, Integration Service will commit the data after every 10,000 records. When you use Transaction Control transformation, you get the control at each record to commit or rollback. When you use Transaction Control transformation, you need to define the condition in expression editor of the Transaction Control transformation. When you run the process, the data enters the Transaction Control transformation in row-wise manner. The Transaction Control transformation evaluates each row, based on which it commits or rollback the data. Classification of Transformations The transformations, which we discussed are classified into two categories—active/passive and connected/unconnected. Active/Passive classification of transformations is based on the number of records at the input and output port of the transformation. If the transformation does not change the number of records at its input and output port, it is said to be passive transformation. If the transformation changes the number of records at the input and output port of the transformation, it is said to be active transformation. Also if the transformation changes the sequence of records passing through it, it will be an active transformation as in case of Union transformation. A transformation is said to be connected if it is connected to any source or any target or any other transformation by at least a link. If the transformation is not connected by any link is it classed as unconnected. Only Lookup and stored procedure transformations can be connected and unconnected, rest all transformations are connected. Advanced Features of designer screen Talking about the advanced features of PowerCenter Designer tool, debugger helps you to debug the mappings to find the error in your code. Informatica PowerCenter provides a utility called as debugger to debug the mapping so that you can easily find the issue in the mapping which you created. Using the debugger, you can see the flow of every record across the transformations. Another feature is target load plan, a functionality which allows you to load data in multiple targets in a same mapping maintaining their constraints. The reusable transformations are transformations which allow you to reuse the transformations across multiple mapping. 
As source and target are reusable components, transformations can also be reused. When you work on any technology, it is always advised that your code should be dynamic. This means you should use the hard coded values as less as possible in your code. It is always recommended that you use the parameters or the variable in your code so you can easily pass these values and need not frequently change the code. This functionality is achieved by using parameter file in Informatica. The value of a variable can change between the session run. The value of parameter will remain constant across the session runs. The difference is very minute so you should define parameter or variable properly as per your requirements. Informatica PowerCenter allows you to compare objects present within repository. You can compare sources, targets, transformations, mapplets, and mappings in PowerCenter Designer under Source Analyzer, Target Designer, Transformation Developer, Mapplet Designer, Mapping Designer respectively. You can compare the objects in the same repository or in multiple repositories. Tracing level in Informatica defines the amount of data you wish to write in the session log when you execute the workflow. Tracing level is a very important aspect in Informatica as it helps in analyzing the error. Tracing level is very helpful in finding the bugs in the process. You can define tracing level in every transformation. Tracing level option is present in every transformation properties window. There are four types of tracing level available: Normal: When you set the tracing level as normal, Informatica stores status information, information about errors, and information about skipped rows. You get detailed information but not at individual row level. Terse: When you set the tracing level as terse, Informatica stores error information and information of rejected records. Terse tracing level occupies lesser space as compared to normal. Verbose initialization: When you set the tracing level as verbose initialize, it stores process details related to startup, details about index and data files created and more details of transformation process in addition to details stored in normal tracing. This tracing level takes more space as compared to normal and terse. Verbose data: This is the most detailed level of tracing level. It occupies more space and takes longer time as compared to other three. It stores row level data in the session log. It writes the truncation information whenever it truncates the data. It also writes the data to error log if you enable row error logging. Default tracing level is normal. You can change the tracing level to terse to enhance the performance. Tracing level can be defined at individual transformation level or you can override the tracing level by defining it at session level. Informatica PowerCenter Workflow Manager Workflow Manager screen is the second and last phase of our development work. In the Workflow Manager session task and workflows are created, which is used to execute mapping. Workflow Manager screen allows you to work on various connections like relations, FTP, and so on. Basically, Workflow Manager contains set of instructions which we define as workflow. The basic building block of workflow is tasks. As we have multiple transformations in designer screen, we have multiple tasks in Workflow Manager Screen. When you create a workflow, you add tasks into it as per your requirement and execute the workflow to see the status in the monitor. 
Workflow is a combination of multiple tasks connected with links that trigger in proper sequence to execute a process. Every workflow contains start task along with other tasks. When you execute the workflow, you actually trigger start task, which in turn triggers other tasks connected in the flow. Every task performs a specific functionality. You need to use the task based on the functionality you need to achieve. Various tasks in Workflow Manager The following are the tasks in Workflow Manager: Session task is used to execute the mapping. Every session task can execute a single mapping. You need to define the path/connection of the source and target used in the mapping, so the session can extract the data from the defined path and send the data to the mapping for processing. Email task is used to send success or failure email notifications. You can configure your outlook or mailbox with the email task to directly send the notification. Command task is used to execute Unix scripts/commands or Windows commands. Timer task is used to add some time gap or to add delay between two tasks. Timer task have properties related to absolute time and relative time. Assignment task is used to assign a value to workflow variable. Control task is used to control the flow of workflow by stopping or aborting the workflow in case on some error. You can control the flow of complete workflow using control task. Decision task is used to check the status of multiple tasks and hence control the execution of workflow. Link task as against decision task can only check the status of the previous task. Event task is used to wait for a particular event to occur. Usually it is used as file watcher task. Using event wait task we can keep looking for a particular file and then trigger the next task. Evert raise task is used to trigger a particular event defined in workflow. Advanced Workflow Manager Workflow Manager screen has some very important features called as scheduling and incremental aggregation, which allows in easier and convenient processing of data. Scheduling allows you to schedule the workflow as specified timing so the workflow runs at the desired time. You need not manually run the workflow every time, schedule can do the needful. Incremental aggregation and partitioning are advanced features, which allows you to process the data faster. When you run the workflow, Integration Service extracts the data in row wise manner from the source path/connection you defined in session task and makes it flow from the mapping. The data reaches the target through the transformations you defined in the mapping. The data always flow in a row wise manner in Informatica, no matter what so ever is your calculation or manipulation. So if you have 10 records in source, there will be 10 Source to target flows while the process is executed. Informatica PowerCenter Workflow Monitor The Workflow Monitor screen allows the monitoring of the workflows execute in Workflow Manager. Workflow Monitor screen allows you check the status and log files for the Workflow. Using the logs generated, you can easily find the error and rectify the error. Workflow Manager also shows the statistics for number of records extracted from source and number of records loaded into target. Also it gives statistics of error records and bad records. Informatica PowerCenter Repository Manager Repository Manager screen is the fourth client screen, which is basically used for migration (deployment) purpose. 
This screen is also used for some administration-related activities, such as configuring the server with the client and creating users.
Performance Tuning in Informatica PowerCenter
Performance tuning covers the optimization of the various components of the Informatica PowerCenter tool, such as sources, targets, mappings, sessions, and systems. At a high level, performance tuning involves two stages: finding the issues, called bottlenecks, and resolving them. Informatica PowerCenter has features such as pushdown optimization and partitioning for better performance. By following the defined steps and using coding best practices, performance can be enhanced drastically.
Slowly Changing Dimensions
Using your understanding of the different client tools, you can implement the data warehousing concept called SCD, slowly changing dimensions. Informatica PowerCenter provides wizards that allow you to easily create the different types of SCDs, that is, SCD1, SCD2, and SCD3.
Type 1 Dimension mapping (SCD1): Keeps only current data and does not maintain historical data.
Type 2 Dimension/Version Number mapping (SCD2): Keeps current as well as historical data in the table. SCD2 allows you to insert new and changed records using a new column (PM_VERSION_NUMBER), maintaining a version number in the table to track the changes. A new column, PM_PRIMARYKEY, is used to maintain the history.
Type 2 Dimension/Flag mapping: Keeps current as well as historical data in the table. SCD2 allows you to insert new and changed records using a new column (PM_CURRENT_FLAG), maintaining a flag in the table to track the changes. A new column, PRIMARY_KEY, is used to maintain the history.
Type 2 Dimension/Effective Date Range mapping: Keeps current as well as historical data in the table. SCD2 allows you to insert new and changed records using two new columns (PM_BEGIN_DATE and PM_END_DATE), maintaining a date range in the table to track the changes. A new column, PRIMARY_KEY, is used to maintain the history.
Type 3 Dimension mapping: Keeps current data along with partial historical data. Only partial history is maintained, by adding a new column.
Summary
With this, we have discussed the complete PowerCenter tool in brief. PowerCenter is a good fit for data of any size and type, and it provides compatibility with a wide range of files and databases for processing. The available transformations allow you to manipulate any type of data in any form you wish, and the advanced features simplify your work by providing convenient options. The PowerCenter tool can make your life easier and can offer you a great career path if you learn it properly, as the Informatica PowerCenter tool is in huge demand in the job market and is one of the highly paid technologies in IT. Just grab a book and start walking the path; the end will be a great career. We are always available for help. For any help with installation or any issues related to PowerCenter, you can reach me at info@dw-learnwell.com.
Resources for Article: Further resources on this subject: Building Mobile Apps [article] Adding a Geolocation Trigger to the Salesforce Account Object [article] Introducing SproutCore [article]

API with MongoDB and Node.js

Packt
22 Dec 2014
26 min read
In this article by Fernando Monteiro, author of the book Learning Single-page Web Application Development, we will see how to build a solid foundation for our API. Our main aim is to discuss the techniques used to build rich web applications with the SPA approach. We will be covering the following topics in this article:
The working of an API
Boilerplates and generators
The speakers API concept
Creating the package.json file
The Node server with server.js
The model with the Mongoose schema
Defining the API routes
Using MongoDB in the cloud
Inserting data with the Postman Chrome extension
(For more resources related to this topic, see here.)
The working of an API
An API works through communication between different pieces of code, thus defining specific behavior of certain objects on an interface. That is, the API connects several functions on one website (such as search, images, news, authentication, and so on) so that they can be used in other applications. Operating systems also have APIs, and they serve the same purpose. Windows, for example, has APIs such as the Win16 API, Win32 API, or Telephony API, in all its versions. When you run a program that involves some process of the operating system, it is likely that it makes a connection with one or more Windows APIs. To clarify the concept of an API, we will go through some examples of how it works. On Windows, an application can use the system clock to provide the same functionality within the program, associating a behavior with a given clock time; for example, it can use the Time/Clock API from Windows to offer clock functionality in your own application. Another example is when you use the Android SDK to build mobile applications. When you use the device GPS, you are interacting with the API (android.location) to display the user's location on the map through another API, in this case, the Google Maps API. The following is the API example: When it comes to web APIs, the functionality can be even greater. There are many services that expose their code so that it can be used on other websites. Perhaps the best example is the Facebook API. Several other websites use this service within their pages, for instance a like button, sharing, or even authentication. An API is a set of programming patterns and instructions to access a software application based on the Web. So, when you access the page of a beer store in your town, you can log in with your Facebook account; this is accomplished through the API. Using it, software developers and web programmers can create beautiful programs and pages filled with content for their users.
Boilerplates and generators
In a MEAN stack environment, our ecosystem is infinitely diverse, and we can find excellent alternatives to start the construction of our API. At hand, we have everything from simple boilerplates to complex code generators that can be used with other tools in an integrated way, or even alone. Boilerplates are usually a group of tested code that provides the basic structure for the main goal, that is, to create the foundation of a web project. Besides saving us from common tasks such as assembling the basic structure of the code and organizing the files, boilerplates already include a number of scripts to make life easier on the frontend. Let's describe some alternatives that we consider good starting points for the development of APIs with the Express framework, the MongoDB database, the Node server, and AngularJS for the frontend.
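Before we look at those alternatives, here is a minimal, self-contained sketch of a JSON endpoint written with Express, just to make the idea of a web API concrete. This snippet is illustrative only and is not part of any of the boilerplates discussed next; the route name and the hard-coded data are invented for the example:
// minimal-api.js - a bare-bones JSON endpoint (illustrative only)
var express = require('express');
var app = express();
// Hard-coded data stands in for a database in this sketch
var speakers = [
  { id: 1, name: 'Jane Doe' },
  { id: 2, name: 'John Smith' }
];
// Any client (a browser, a mobile app, another server) can consume this route
app.get('/api/speakers', function(req, res) {
  res.json(speakers);
});
app.listen(8080, function() {
  console.log('Sample API listening on port 8080');
});
Running it with node minimal-api.js and visiting http://localhost:8080/api/speakers returns the JSON array; everything we build in the rest of the article is an elaboration of this request/response cycle.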
Some more accentuated knowledge of JavaScript might be necessary for the complete understanding of the concepts covered here; so we will try to make them as clearly as possible. It is important to note that everything is still very new when we talk about Node and all its ecosystems, and factors such as scalability, performance, and maintenance are still major risk factors. Bearing in mind also that languages such as Ruby on Rails, Scala, and the Play framework have a higher reputation in building large and maintainable web applications, but without a doubt, Node and JavaScript will conquer your space very soon. That being said, we present some alternatives for the initial kickoff with MEAN, but remember that our main focus is on SPA and not directly on MEAN stack. Hackathon starter Hackathon is highly recommended for a quick start to develop with Node. This is because the boilerplate has the main necessary characteristics to develop applications with the Express framework to build RESTful APIs, as it has no MVC/MVVM frontend framework as a standard but just the Bootstrap UI framework. Thus, you are free to choose the framework of your choice, as you will not need to refactor it to meet your needs. Other important characteristics are the use of the latest version of the Express framework, heavy use of Jade templates and some middleware such as Passport - a Node module to manage authentication with various social network sites such as Twitter, Facebook, APIs for LinkedIn, Github, Last.fm, Foursquare, and many more. They provide the necessary boilerplate code to start your projects very fast, and as we said before, it is very simple to install; just clone the Git open source repository: git clone --depth=1 https://github.com/sahat/hackathon-starter.git myproject Run the NPM install command inside the project folder: npm install Then, start the Node server: node app.js Remember, it is very important to have your local database up and running, in this case MongoDB, otherwise the command node app.js will return the error: Error connecting to database: failed to connect to [localhost: 27017] MEAN.io or MEAN.JS This is perhaps the most popular and currently available boilerplate. MEAN.JS is a fork of the original project MEAN.io; both are open source, with a very peculiar similarity, both have the same author. You can check for more details at http://meanjs.org/. However, there are some differences. We consider MEAN.JS to be a more complete and robust environment. It has a structure of directories, better organized, subdivided modules, and better scalability by adopting a vertical modules development. To install it, follow the same steps as previously: Clone the repository to your machine: git clone https://github.com/meanjs/mean.git Go to the installation directory and type on your terminal: npm install Finally, execute the application; this time with the Grunt.js command: grunt If you are on Windows, type the following command: grunt.cmd Now, you have your app up and running on your localhost. The most common problem when we need to scale a SPA is undoubtedly the structure of directories and how we manage all of the frontend JavaScript files and HTML templates using MVC/MVVM. Later, we will see an alternative to deal with this on a large-scale application; for now, let's see the module structure adopted by MEAN.JS: Note that MEAN.JS leaves more flexibility to the AngularJS framework to deal with the MVC approach for the frontend application, as we can see inside the public folder. 
Also, note the modules approach; each module has its own structure, keeping some conventions for controllers, services, views, config, and tests. This is very useful for team development, so keep all the structure well organized. It is a complete solution that makes use of additional modules such as passport, swig, mongoose, karma, among others. The Passport module Some things about the Passport module must be said; it can be defined as a simple, unobtrusive authentication module. It is a powerful middleware to use with Node; it is very flexible and also modular. It can also adapt easily within applications that use the Express. It has more than 140 alternative authentications and support session persistence; it is very lightweight and extremely simple to be implemented. It provides us with all the necessary structure for authentication, redirects, and validations, and hence it is possible to use the username and password of social networks such as Facebook, Twitter, and others. The following is a simple example of how to use local authentication: var passport = require('passport'), LocalStrategy = require('passport-local').Strategy, User = require('mongoose').model('User');   module.exports = function() { // Use local strategy passport.use(new LocalStrategy({ usernameField: 'username', passwordField: 'password' }, function(username, password, done) { User.findOne({    username: username }, function(err, user) { if (err) { return done(err); } if (!user) {    return done(null, false, {    message: 'Unknown user'    }); } if (!user.authenticate(password)) {    return done(null, false, {    message: 'Invalid password'    }); } return done(null, user); }); } )); }; Here's a sample screenshot of the login page using the MEAN.JS boilerplate with the Passport module: Back to the boilerplates topic; most boilerplates and generators already have the Passport module installed and ready to be configured. Moreover, it has a code generator so that it can be used with Yeoman, which is another essential frontend tool to be added to your tool belt. Yeoman is the most popular code generator for scaffold for modern web applications; it's easy to use and it has a lot of generators such as Backbone, Angular, Karma, and Ember to mention a few. More information can be found at http://yeoman.io/. Generators Generators are for the frontend as gem is for Ruby on Rails. We can create the foundation for any type of application, using available generators. Here's a console output from a Yeoman generator: It is important to bear in mind that we can solve almost all our problems using existing generators in our community. However, if you cannot find the generator you need, you can create your own and make it available to the entire community, such as what has been done with RubyGems by the Rails community. RubyGem, or simply gem, is a library of reusable Ruby files, labeled with a name and a version (a file called gemspec). Keep in mind the Don't Repeat Yourself (DRY) concept; always try to reuse an existing block of code. Don't reinvent the wheel. One of the great advantages of using a code generator structure is that many of the generators that we have currently, have plenty of options for the installation process. With them, you can choose whether or not to use many alternatives/frameworks that usually accompany the generator. The Express generator Another good option is the Express generator, which can be found at https://github.com/expressjs/generator. 
In all versions up to Express Version 4, the generator was already pre-installed and served as a scaffold to begin development. However, in the current version, it was removed and now must be installed as a supplement. They provide us with the express command directly in terminal and are quite useful to start the basic settings for utilization of the framework, as we can see in the following commands: create : . create : ./package.json create : ./app.js create : ./public create : ./public/javascripts create : ./public/images create : ./public/stylesheets create : ./public/stylesheets/style.css create : ./routes create : ./routes/index.js create : ./routes/users.js create : ./views create : ./views/index.jade create : ./views/layout.jade create : ./views/error.jade create : ./bin create : ./bin/www   install dependencies:    $ cd . && npm install   run the app:    $ DEBUG=express-generator ./bin/www Very similar to the Rails scaffold, we can observe the creation of the directory and files, including the public, routes, and views folders that are the basis of any application using Express. Note the npm install command; it installs all dependencies provided with the package.json file, created as follows: { "name": "express-generator", "version": "0.0.1", "private": true, "scripts": {    "start": "node ./bin/www" }, "dependencies": {    "express": "~4.2.0",    "static-favicon": "~1.0.0",    "morgan": "~1.0.0",    "cookie-parser": "~1.0.1",    "body-parser": "~1.0.0",    "debug": "~0.7.4",    "jade": "~1.3.0" } } This has a simple and effective package.json file to build web applications with the Express framework. The speakers API concept Let's go directly to build the example API. To be more realistic, let's write a user story similar to a backlog list in agile methodologies. Let's understand what problem we need to solve by the API. The user history We need a web application to manage speakers on a conference event. The main task is to store the following speaker information on an API: Name Company Track title Description A speaker picture Schedule presentation For now, we need to add, edit, and delete speakers. It is a simple CRUD function using exclusively the API with JSON format files. Creating the package.json file Although not necessarily required at this time, we recommend that you install the Webstorm IDE, as we'll use it throughout the article. Note that we are using the Webstorm IDE with an integrated environment with terminal, Github version control, and Grunt to ease our development. However, you are absolutely free to choose your own environment. From now on, when we mention terminal, we are referring to terminal Integrated WebStorm, but you can access it directly by the chosen independent editor, terminal for Mac and Linux and Command Prompt for Windows. Webstorm is very useful when you are using a Windows environment, because Windows Command Prompt does not have the facility to copy and paste like Mac OS X on the terminal window. Initiating the JSON file Follow the steps to initiate the JSON file: Create a blank folder and name it as conference-api, open your terminal, and place the command: npm init This command will walk you through creating a package.json file with the baseline configuration for our application. Also, this file is the heart of our application; we can control all the dependencies' versions and other important things like author, Github repositories, development dependencies, type of license, testing commands, and much more. 
Almost all commands are questions that guide you to the final process, so when we are done, we'll have a package.json file very similar to this: { "name": "conference-api", "version": "0.0.1", "description": "Sample Conference Web Application", "main": "server.js", "scripts": {    "test": "test" }, "keywords": [    "api" ], "author": "your name here", "license": "MIT" } Now, we need to add the necessary dependencies, such as Node modules, which we will use in our process. You can do this in two ways, either directly via terminal as we did here, or by editing the package.json file. Let's see how it works on the terminal first; let's start with the Express framework. Open your terminal in the api folder and type the following command: npm install express@4.0.0 –-save This command installs the Express module, in this case, Express Version 4, and updates the package.json file and also creates dependencies automatically, as we can see: { "name": "conference-api", "version": "0.0.1", "description": "Sample Conference Web Application", "main": "server.js", "scripts": {    "test": "test" }, "keywords": [    "api" ], "author": "your name here", "license": "MIT", "dependencies": {    "express": "^4.0.0" } } Now, let's add more dependencies directly in the package.json file. Open the file in your editor and add the following lines: { "name": "conference-api", "version": "0.0.1", "description": "Sample Conference Web Application", "main": "server.js", "scripts": {    "test": "test" }, "keywords": [    "api" ], "author": "your name here", "license": "MIT", "engines": {        "node": "0.8.4",        "npm": "1.1.49" }, "dependencies": {    "body-parser": "^1.0.1",    "express": "^4.0.0",    "method-override": "^1.0.0",    "mongoose": "^3.6.13",    "morgan": "^1.0.0",    "nodemon": "^1.2.0" }, } It's very important when you deploy your application using some services such as Travis Cl or Heroku hosting company. It's always good to set up the Node environment. Open the terminal again and type the command: npm install You can actually install the dependencies in two different ways, either directly into the directory of your application or globally with the -g command. This way, you will have the modules installed to use them in any application. When using this option, make sure that you are the administrator of the user machine, as this command requires special permissions to write to the root directory of the user. At the end of the process, we'll have all Node modules that we need for this project; we just need one more action. Let's place our code over a version control, in our case Git. More information about the Git can be found at http://git-scm.com however, you can use any version control as subversion or another. We recommend using Git, as we will need it later to deploy our application in the cloud, more specificly, on Heroku cloud hosting. At this time, our project folder must have the same structure as that of the example shown here: We must point out the utilization of an important module called the Nodemon module. Whenever a file changes it restarts the server automatically; otherwise, you will have to restart the server manually every time you make a change to a file, especially in a development environment that is extremely useful, as it constantly updates our files. Node server with server.js With this structure formed, we will start the creation of the server itself, which is the creation of a main JavaScript file. 
The most common name used is server.js, but it is also very common to use the app.js name, especially in older versions. Let's add this file to the root folder of the project and we will start with the basic server settings. There are many ways to configure our server, and probably you'll find the best one for yourself. As we are still in the initial process, we keep only the basics. Open your editor and type in the following code: // Import the Modules installed to our server var express   = require('express'); var bodyParser = require('body-parser');   // Start the Express web framework var app       = express();   // configure app app.use(bodyParser());   // where the application will run var port     = process.env.PORT || 8080;   // Import Mongoose var mongoose   = require('mongoose');   // connect to our database // you can use your own MongoDB installation at: mongodb://127.0.0.1/databasename mongoose.connect('mongodb://username:password@kahana.mongohq.com:10073/node-api');   // Start the Node Server app.listen(port); console.log('Magic happens on port ' + port); Realize that the line-making connection with MongoDB on our localhost is commented, because we are using an instance of MongoDB in the cloud. In our case, we use MongoHQ, a MongoDB-hosting service. Later on, will see how to connect with MongoHQ. Model with the Mongoose schema Now, let's create our model, using the Mongoose schema to map our speakers on MongoDB. // Import the Mongoose module. var mongoose     = require('mongoose'); var Schema       = mongoose.Schema;   // Set the data types, properties and default values to our Schema. var SpeakerSchema   = new Schema({    name:           { type: String, default: '' },    company:       { type: String, default: '' },    title:         { type: String, default: '' },    description:   { type: String, default: '' },    picture:       { type: String, default: '' },    schedule:       { type: String, default: '' },    createdOn:     { type: Date,   default: Date.now} }); module.exports = mongoose.model('Speaker', SpeakerSchema); Note that on the first line, we added the Mongoose module using the require() function. Our schema is pretty simple; on the left-hand side, we have the property name and on the right-hand side, the data type. We also we set the default value to nothing, but if you want, you can set a different value. The next step is to save this file to our project folder. For this, let's create a new directory named server; then inside this, create another folder called models and save the file as speaker.js. At this point, our folder looks like this: The README.md file is used for Github; as we are using the Git version control, we host our files on Github. Defining the API routes One of the most important aspects of our API are routes that we take to create, read, update, and delete our speakers. 
Our routes are based on the HTTP verb used to access our API, as shown in the following examples: To create record, use the POST verb To read record, use the GET verb To update record, use the PUT verb To delete records, use the DELETE verb So, our routes will be as follows: Routes Verb and Action /api/speakers GET retrieves speaker's records /api/speakers/ POST inserts speakers' record /api/speakers/:speaker_id GET retrieves a single record /api/speakers/:speaker_id PUT updates a single record /api/speakers/:speaker_id DELETE deletes a single record Configuring the API routes: Let's start defining the route and a common message for all requests: var Speaker     = require('./server/models/speaker');   // Defining the Routes for our API   // Start the Router var router = express.Router();   // A simple middleware to use for all Routes and Requests router.use(function(req, res, next) { // Give some message on the console console.log('An action was performed by the server.'); // Is very important using the next() function, without this the Route stops here. next(); });   // Default message when access the API folder through the browser router.get('/', function(req, res) { // Give some Hello there message res.json({ message: 'Hello SPA, the API is working!' }); }); Now, let's add the route to insert the speakers when the HTTP verb is POST: // When accessing the speakers Routes router.route('/speakers')   // create a speaker when the method passed is POST .post(function(req, res) {   // create a new instance of the Speaker model var speaker = new Speaker();   // set the speakers properties (comes from the request) speaker.name = req.body.name; speaker.company = req.body.company; speaker.title = req.body.title; speaker.description = req.body.description; speaker.picture = req.body.picture; speaker.schedule = req.body.schedule;   // save the data received speaker.save(function(err) {    if (err)      res.send(err);   // give some success message res.json({ message: 'speaker successfully created!' }); }); }) For the HTTP GET method, we need this: // get all the speakers when a method passed is GET .get(function(req, res) { Speaker.find(function(err, speakers) {    if (err)      res.send(err);      res.json(speakers); }); }); Note that in the res.json() function, we send all the object speakers as an answer. 
Now, we will see the use of different routes in the following steps: To retrieve a single record, we need to pass speaker_id, as shown in our previous table, so let's build this function: // on accessing speaker Route by id router.route('/speakers/:speaker_id')   // get the speaker by id .get(function(req, res) { Speaker.findById(req.params.speaker_id, function(err,    speaker) {    if (err)      res.send(err);      res.json(speaker);    }); }) To update a specific record, we use the PUT HTTP verb and then insert the function: // update the speaker by id .put(function(req, res) { Speaker.findById(req.params.speaker_id, function(err,     speaker) {      if (err)      res.send(err);   // set the speakers properties (comes from the request) speaker.name = req.body.name; speaker.company = req.body.company; speaker.title = req.body.title; speaker.description = req.body.description; speaker.picture = req.body.picture; speaker.schedule = req.body.schedule;   // save the data received speaker.save(function(err) {    if (err)      res.send(err);      // give some success message      res.json({ message: 'speaker successfully       updated!'}); });   }); }) To delete a specific record by its id: // delete the speaker by id .delete(function(req, res) { Speaker.remove({    _id: req.params.speaker_id }, function(err, speaker) {    if (err)      res.send(err);   // give some success message res.json({ message: 'speaker successfully deleted!' }); }); }); Finally, register the Routes on our server.js file: // register the route app.use('/api', router); All necessary work to configure the basic CRUD routes has been done, and we are ready to run our server and begin creating and updating our database. Open a small parenthesis here, for a quick step-by-step process to introduce another tool to create a database using MongoDB in the cloud. There are many companies that provide this type of service but we will not go into individual merits here; you can choose your preference. We chose Compose (formerly MongoHQ) that has a free sandbox for development, which is sufficient for our examples. Using MongoDB in the cloud Today, we have many options to work with MongoDB, from in-house services to hosting companies that provide Platform as a Service (PaaS) and Software as a Service (SaaS). We will present a solution called Database as a Service (DbaaS) that provides database services for highly scalable web applications. Here's a simple step-by-step process to start using a MongoDB instance with a cloud service: Go to https://www.compose.io/. Create your free account. On your dashboard panel, click on add Database. On the right-hand side, choose Sandbox Database. Name your database as node-api. Add a user to your database. Go back to your database title, click on admin. Copy the connection string. The string connection looks like this: mongodb://<user>:<password>@kahana.mongohq.com:10073/node-api. Let's edit the server.js file using the following steps: Place your own connection string to the Mongoose.connect() function. Open your terminal and input the command: nodemon server.js Open your browser and place http://localhost:8080/api. You will see a message like this in the browser: { Hello SPA, the API is working! } Remember the api folder was defined on the server.js file when we registered the routes: app.use('/api', router); But, if you try to access http://localhost:8080/api/speakers, you must have something like this: [] This is an empty array, because we haven't input any data into MongoDB. 
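As an alternative to the Postman walkthrough that follows, you could also seed a first record from a small standalone Node.js script that reuses the same Mongoose model. This is only a sketch and is not part of the original article: the connection string is the same placeholder used earlier, and the field values are made up.
// seed.js - insert one sample speaker (run with: node seed.js)
var mongoose = require('mongoose');
var Speaker  = require('./server/models/speaker');
// Replace with your own Compose/MongoHQ connection string
mongoose.connect('mongodb://username:password@kahana.mongohq.com:10073/node-api');
var speaker = new Speaker({
  name: 'Jane Doe',
  company: 'Acme Conferences',
  title: 'A sample talk',
  description: 'Placeholder record used only to seed the database.',
  picture: 'jane.jpg',
  schedule: '09:00'
});
speaker.save(function(err) {
  if (err) {
    console.error(err);
  } else {
    console.log('Sample speaker inserted');
  }
  mongoose.connection.close();
});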
We use an extension for the Chrome browser called JSONView. This way, we can view the formatted and readable JSON files. You can install this for free from the Chrome Web Store. Inserting data with Postman To solve our empty database and before we create our frontend interface, let's add some data with the Chrome extension Postman. By the way, it's a very useful browser interface to work with RESTful APIs. As we already know that our database is empty, our first task is to insert a record. To do so, perform the following steps: Open Postman and enter http://localhost:8080/api/speakers. Select the x-www-form-urlencoded option and add the properties of our model: var SpeakerSchema   = new Schema({ name:           { type: String, default: '' }, company:       { type: String, default: '' }, title:         { type: String, default: '' }, description:   { type: String, default: '' }, picture:       { type: String, default: '' }, schedule:       { type: String, default: '' }, createdOn:     { type: Date,   default: Date.now} }); Now, click on the blue button at the end to send the request. With everything going as expected, you should see message: speaker successfully created! at the bottom of the screen, as shown in the following screenshot: Now, let's try http://localhost:8080/api/speakers in the browser again. Now, we have a JSON file like this, instead of an empty array: { "_id": "53a38ffd2cd34a7904000007", "__v": 0, "createdOn": "2014-06-20T02:20:31.384Z", "schedule": "10:20", "picture": "fernando.jpg", "description": "Lorem ipsum dolor sit amet, consectetur     adipisicing elit, sed do eiusmod...", "title": "MongoDB", "company": "Newaeonweb", "name": "Fernando Monteiro" } When performing the same action on Postman, we see the same result, as shown in the following screenshot: Go back to Postman, copy _id from the preceding JSON file and add to the end of the http://localhost:8080/api/speakers/53a38ffd2cd34a7904000005 URL and click on Send. You will see the same object on the screen. Now, let's test the method to update the object. In this case, change the method to PUT on Postman and click on Send. The output is shown in the following screenshot: Note that on the left-hand side, we have three methods under History; now, let's perform the last operation and delete the record. This is very simple to perform; just keep the same URL, change the method on Postman to DELETE, and click on Send. Finally, we have the last method executed successfully, as shown in the following screenshot: Take a look at your terminal, you can see four messages that are the same: An action was performed by the server. We configured this message in the server.js file when we were dealing with all routes of our API. router.use(function(req, res, next) { // Give some message on the console console.log('An action was performed by the server.'); // Is very important using the next() function, without this the Route stops here. next(); }); This way, we can monitor all interactions that take place at our API. Now that we have our API properly tested and working, we can start the development of the interface that will handle all this data. Summary In this article, we have covered almost all modules of the Node ecosystem to develop the RESTful API. Resources for Article: Further resources on this subject: Web Application Testing [article] A look into responsive design frameworks [article] Top Features You Need to Know About – Responsive Web Design [article]

Integrating typeahead.js into WordPress and Ruby on Rails

Packt
17 Oct 2013
6 min read
(For more resources related to this topic, see here.) Integrating typeahead.js into WordPress (Become an expert) WordPress is an incredibly well known and well used open source blogging platform, and it is almost fully featured, except of course for the ability to have a typeahead style lookup on your site! In this article we are going to fix that. Getting ready In order to create this we are going to first need to have a working WordPress installed. WordPress runs off a LAMP stack so if you haven't got one of those running locally you will need to set this up. Once set up you can download WordPress from http://wordpress.org/, extract the files, place them in your localhost, and visit http://localhost/install/. This will then guide you through the rest of the install process. Now we should be ready to get typeahead.js working with WordPress. How to do it... Like so many things in WordPress, when it comes to adding new functionality, there is probably already a plugin, and in our case there is one made by Kyle Reicks that can be found at https://github.com/kylereicks/typeahead.js.wp. Download the code and add the folder it downloads to /wp-content/plugins/ Log into our administration panel at http://localhost/wp-admin/ and go to the Plugins section. You will see an option to activate our new plugin, so activate it now. Once activated, under plugins you will now have access to typeahead Settings. In here you can set up what type of things you want typeahead to be used for; pick posts, tags, pages, and categories. How it works... This plugin hijacks the default search form that WordPress uses out of the box and adds the typeahead functionality to it. For each of the post types that you have associated with typeahead plugin, it will create a JSON file, with each JSON file representing a different dataSet and getting loaded in with prefetch. There's more... The plugin is a great first start, but there is plenty that could be done to improve it. For example, by editing /js/typeahead-activation.js we could edit the amount of values that get returned by our typeahead search: if(typeahead.datasets.length){ typeahead.data = []; for(i = 0, arrayLength = typeahead.datasets.length; i < arrayLength; i++){ typeahead.data[i] = { name: typeahead.datasets[i], prefetch: typeahead.dataUrl + '?data=' + typeahead.datasets[i], limit: 10 }; } jQuery(document).ready(function($){ $('#searchform input[type=text], #searchform input[type=search]').typeahead(typeahead.data); }); } Integrating typeahead.js into Ruby on Rails (Become an expert) Ruby on Rails has become one of the most popular frameworks for developing web applications in, and it comes as little surprise that Rails developers would like to be able to harness the power of typeahead.js. In this recipe we will look at how you can quickly get up and running with typeahead.js in your Rails project. Getting ready Ruby on Rails is an open source web application framework for the Ruby language. It famously champions the idea of convention over configuration, which is one of the reasons it has been so widely adopted. Obviously in order to do this we will need a rails application. Setting up Ruby on Rails is an entire article to itself, but if you follow the guides on http://rubyonrails.org/, you should be able to get up and start running quickly with your chosen setup. We will start from the point that both Ruby and Ruby on Rails have been installed and set up correctly. 
We will also be using a Gem made by Yousef Ourabi, which has the typeahead.js functionality we need. We can find it at https://github.com/yourabi/twitter-typeahead-rails. How to do it... The first thing we will need is a Rails project, and we can create one of these by typing; rails new typeahead_rails This will generate the basic rails application for us, and one of the files it will generate is the Gemfile which we need to edit to include our new Gem; source 'https://rubygems.org' gem 'rails', '3.2.13' gem 'sqlite3' gem 'json' group :assets do gem 'sass-rails', '~> 3.2.3' gem 'coffee-rails', '~> 3.2.1' gem 'uglifier', '>= 1.0.3' end gem 'jquery-rails' gem 'twitter-typeahead-rails' With this change made, we need to reinstall our Gems: bundle install We will now have the required file, but before we can access them we need to add a reference to them in our manifest file. We do this by editing app/assets/javascripts and adding a reference to typeahead.js: //= require jquery //= require jquery_ujs //= require_tree //= require twitter/typeahead Of course we need a page to try this out on, so let's have Rails make us one; rails generate controller Pages home One of the files generated by the above command will be found in app/views/pages/home.html.erb. Let's edit this now: <label for="friends">Pick Your Friend</label> <input type="text" name="friends" /> <script> $('input').typeahead({ name: 'people', local: ['Elaine', 'Column', 'Kirsty', 'Chris Elder'] }); </script> Finally we will start up a web server to be able to view what we have accomplished; rails s And now if we go to localhost:3000/pages/home we should see something very much. How it works... The Gem we installed brings together the required JavaScript files that we normally need to include manually, allowing them to be accessed from our manifest file, which will load all mentioned JavaScript on every page. There's more... Of course we don't need to use a Gem to install typeahead functionality, we could have manually copied the code into a file called typeahead.js that sat inside of app/assets/javascripts/twitter/ and this would have been accessible to the manifest file too and produced the same functionality. This would mean one less dependency on a Gem, which in my opinion is always a good thing, although this isn't necessarily the Rails way, which is why I didn't lead with it. Summary In this article, we explained the functionality of WordPress, which is probably the biggest open source blogging platform in the world right now and it is pretty feature complete. One thing the search doesn't have, though, is good typeahead functionality. In this article we learned how to change that by incorporating a WordPress plugin that gives us this functionality out of the box. It also discussed how Ruby on Rails is fast becoming the framework of choice among developers wanting to build web applications fast, along with out of the box benefits of using Ruby on Rails. Using Ruby gives you access to a host of excellent resources in the form of Gems. In this article we had a look at one Gem that gives us typeahead.js functionality in our Ruby on Rails project. Resources for Article: Further resources on this subject: Customizing WordPress Settings for SEO [Article] Getting Started with WordPress 3 [Article] Building tiny Web-applications in Ruby using Sinatra [Article]

A Professional Environment for React Native, Part 2

Pierre Monge
12 Jan 2017
4 min read
In Part 1 of this series, I covered the full environment and everything you need to start creating your own React Native applications. Now, here in Part 2, we are going to dig in and go over the tools that you can take advantage of for maintaining those React Native apps.
Maintaining the application
Maintaining a React Native application, just like any software, is complex and requires a lot of organization. In addition to having strict code (good syntax enforced with eslint, and a good understanding of the code with Flow), you must write your code intelligently, and you must organize your files, filenames, and variables. You need solutions for maintaining the application in the long term, as well as tools that provide feedback. Here are some tools that we use, which should be in place early in the cycle of your React Native development.
GitHub
GitHub is a fantastic tool, but you need to know how to control it. In my company, we have our own Git flow with a Dev branch, a master branch, release branches, bug branches, and other useful branches. It's up to you to define your own Git flow! One of the most important things is the Pull Request, or PR! And if there are many people on your project, it is important for your group to agree on how the code is organized.
BugTracker & Tooling
We use many tools in my team, but here is our must-have list for maintaining the application:
circleCI: This is a continuous integration tool that we integrate with GitHub. It allows us to run recurring tests on each new commit.
BugSnag: This is a bug tracking tool that can be used in a React Native integration, making it possible to report the bugs users run into, over the web, without the user noticing anything.
codePush: This is useful for deploying code to versions that are already in production. And yes, you can change business code while the application is already in production. I will not dwell on it here, but the different application states (Debug, Beta, and Production) are an important part to manage properly, because doing so is essential for quality work and a long application life.
We also have quality assurance in our company, which allows us to validate a product before it goes live and gives us a regular process for putting a React Native app into production.
As you can see, there are many tools that will help you maintain a React Native mobile application. Despite the youth of the product, the community is growing quickly and developers are excited about creating apps. More and more large companies are using React Native, such as Airbnb, Wix, Microsoft, and many others. And with the technology growing and improving, there are more and more new tools and integrations coming to React Native. I hope this series has helped you create and maintain your own React Native applications. Here is a summary of the tools covered:
Atom is a text editor that's modern, approachable, yet hackable to the core: a tool that you can customize to do anything, but also use productively without ever touching a config file.
GitHub is a web-based Git repository hosting service.
CircleCI is a modern continuous integration and delivery platform that software teams love to use.
BugSnag monitors application errors to improve customer experiences and code quality.
react-native-code-push is a plugin that provides client-side integration, allowing you to easily add a dynamic update experience to your React Native app.
About the author
Pierre Monge (liroo.pierre@gmail.com) is a 21-year-old student.
He is a developer in C, JavaScript, and all things web development, and he has recently been creating mobile applications. He is currently working as an intern at a company named Azendoo, where he is developing a 100% React Native application.

Load, Validate, and Submit Forms using Ext JS 3.0: Part 3

Packt
19 Nov 2009
4 min read
Loading form data from the server An important part of working with forms is loading the data that a form will display. Here's how to create a sample contact form and populate it with data sent from the server. How to do it... Declare the name and company panel: var nameAndCompany = { columnWidth: .5, layout: 'form', items: [ { xtype: 'textfield', fieldLabel: 'First Name', name: 'firstName', anchor: '95%' }, { xtype: 'textfield', fieldLabel: 'Last Name', name: 'lastName', anchor: '95%' }, { xtype: 'textfield', fieldLabel: 'Company', name: 'company', anchor: '95%' }, { xtype: 'textfield', fieldLabel: 'Title', name: 'title', anchor: '95%' } ]} Declare the picture box panel: var picBox = { columnWidth: .5, bodyStyle: 'padding:0px 0px 0px 40px', items: [ { xtype: 'box', autoEl: { tag: 'div', style: 'padding-bottom:20px', html: '<img id="pic" src="' + Ext.BLANK_IMAGE_URL + '" class="img-contact" />' } }, { xtype: 'button', text: 'Change Picture' } ]} Define the Internet panel: var internet = { columnWidth: .5, layout: 'form', items: [ { xtype: 'fieldset', title: 'Internet', autoHeight: true, defaultType: 'textfield', items: [{ fieldLabel: 'Email', name: 'email', vtype: 'email', anchor: '95%' }, { fieldLabel: 'Web page', name: 'webPage', vtype: 'url', anchor: '95%' }, { fieldLabel: 'IM', name: 'imAddress', anchor: '95%' }] }]} Declare the phone panel: var phones = { columnWidth: .5, layout: 'form', items: [{ xtype: 'fieldset', title: 'Phone Numbers', autoHeight: true, defaultType: 'textfield', items: [{ fieldLabel: 'Home', name: 'homePhone', anchor: '95%' }, { fieldLabel: 'Business', name: 'busPhone', anchor: '95%' }, { fieldLabel: 'Mobile', name: 'mobPhone', anchor: '95%' }, { fieldLabel: 'Fax', name: 'fax', anchor: '95%' }] }]} Define the business address panel: var busAddress = { columnWidth: .5, layout: 'form', labelAlign: 'top', defaultType: 'textarea', items: [{ fieldLabel: 'Business', labelSeparator:'', name: 'bAddress', anchor: '95%' }, { xtype: 'radio', boxLabel: 'Mailing Address', hideLabel: true, name: 'mailingAddress', value:'bAddress', id:'mailToBAddress' }]} Define the home address panel: var homeAddress = { columnWidth: .5, layout: 'form', labelAlign: 'top', defaultType: 'textarea', items: [{ fieldLabel: 'Home', labelSeparator:'', name: 'hAddress', anchor: '95%' }, { xtype: 'radio', boxLabel: 'Mailing Address', hideLabel: true, name: 'mailingAddress', value:'hAddress', id:'mailToHAddress' }]} Create the contact form: var contactForm = new Ext.FormPanel({ frame: true, title: 'TODO: Load title dynamically', bodyStyle: 'padding:5px', width: 650, items: [{ bodyStyle: { margin: '0px 0px 15px 0px' }, items: [{ layout: 'column', items: [nameAndCompany, picBox] }] }, { items: [{ layout: 'column', items: [phones, internet] }] }, { xtype: 'fieldset', title: 'Addresses', autoHeight: true, hideBorders: true, layout: 'column', items: [busAddress, homeAddress] }], buttons: [{ text: 'Save' }, { text: 'Cancel' }]}); Handle the form's actioncomplete event: contactForm.on({ actioncomplete: function(form, action){ if(action.type == 'load'){ var contact = action.result.data; Ext.getCmp(contact.mailingAddress).setValue(true); contactForm.setTitle(contact.firstName + ' ' + contact.lastName); Ext.getDom('pic').src = contact.pic; } }}); Render the form: contactForm.render(document.body); Finally, load the form: contactForm.getForm().load({ url: 'contact.php', params:{id:'contact1'}, waitMsg: 'Loading'}); How it works... 
The contact form's building sequence consists of defining each of the contained panels, and then defining a form panel that will serve as a host. The following screenshot shows the resulting form, with the placement of each of the panels pinpointed: Moving on to how the form is populated, the JSON-encoded response to a request to provide form data has a structure similar to this: {success:true,data:{id:'1',firstName:'Jorge',lastName:'Ramon',company:'MiamiCoder',title:'Mr',pic:'img/jorger.jpg',email:'ramonj@miamicoder.net',webPage:'http://www.miamicoder.com',imAddress:'',homePhone:'',busPhone:'555 555-5555',mobPhone:'',fax:'',bAddress:'123 Acme Rd #001nMiami, FL 33133',hAddress:'',mailingAddress:'mailToBAddress'}} The success property indicates whether the request has succeeded or not. If the request succeeds, success is accompanied by a data property, which contains the contact's information. Although some fields are automatically populated after a call to load(), the form's title, the contact's picture, and the mailing address radio button require further processing. This can be done in the handler for the actioncomplete event: contactForm.on({ actioncomplete: function(form, action){ if(action.type == 'load'){} }}); As already mentioned, the contact's information arrives in the data property of the action's result: var contact = action.result.data; The default mailing address comes in the contact's mailingAddress property. Hence, the radio button for the default mailing address is set as shown in the following line of code: Ext.getCmp(contact.mailingAddress).setValue(true); The source for the contact's photo is the value of contact.pic: Ext.getDom('pic').src = contact.pic; And finally, the title of the form: contactForm.setTitle(contact.firstName + ' ' + contact.lastName); There's more... Although this recipe's focus is on loading form data, you should also pay attention to the layout techniques used—multiple rows, multiple columns, fieldsets—that allow you to achieve rich and flexible user interfaces for your forms. See Also... The next recipe, Serving the XML data to a form, explains how to use a form to load the XML data sent from the server.

Programming littleBits circuits with JavaScript Part 1

Anna Gerber
12 Feb 2015
6 min read
littleBits are electronic building blocks that snap together with magnetic connectors. They are great for getting started with electronics and robotics and for prototyping circuits. The littleBits Arduino Coding Kit includes an Arduino-compatible microcontroller, which means that you can use the Johnny-Five JavaScript Robotics programming framework to program your littleBits creations using JavaScript, the programming language of the web. Setup Plug the Arduino bit into your computer from the port at the top of the Arduino module. You'll need to supply power to the Arduino by connecting a blue power module to any of the input connectors. The Arduino will appear as a device with a name like /dev/cu.usbmodemfa131 on Mac, or COM3 on Windows. Johnny-Five uses a communication protocol called Firmata to communicate with the Arduino microcontroller. We'll load the Standard Firmata sketch onto the Arduino the first time we go to use it, to make this communication possible. Installing Firmata via the Chrome App One of the easiest ways to get started programming with Johnny-Five is by using this app for Google Chrome. After you have installed it, open the 'Johnny-Five Chrome' app from the Chrome apps page. To send the Firmata sketch to your board using the extension, select the port corresponding to your Arduino bit from the drop-down menu and then hit the Install Firmata button. If the device does not appear in the list at first, try the app's refresh button. Installing Firmata via the command line If you would prefer not to use the Chrome app, you can skip straight to using Node.js via the command line. You'll need a recent version of Node.js installed. Create a folder for your project's code. On a Mac run the Terminal app, and on Windows run Command Prompt. From the command line change directory so you are inside your project folder, and then use npm to install the Johnny-Five library and nodebots-interchange: npm install johnny-five npm install -g nodebots-interchange Use the interchange program from nodebots-interchange to send the StandardFirmata sketch to your Arduino: interchange install StandardFirmata -a leonardo -p /dev/cu.usbmodemfa131 Note: If you are familiar with Arduino IDE, you could alternatively use it to write Firmata to your Arduino. Open File > Examples > Firmata > StandardFirmata and select your port and Arduino Leonardo from Tools > Board then hit Upload. Inputs and Outputs Programming with hardware is all about I/O: inputs and outputs. These can be either analog (continuous values) or digital (discrete 0 or 1 values). littleBits input modules are color coded pink, while outputs are green. The Arduino Coding Kit includes analog inputs (dimmers) as well as a digital input module (button). The output modules included in the kit are a servo motor and an LED bargraph, which can be used as a digital output (i.e. on or off) or as an analog output to control the number of LEDs displayed, or with Pulse-Width-Modulation (PWM) - using a pattern of pulses on a digital output - to control LED brightness. Building a circuit Let's start with our output modules: the LED bargraph and servo. Connect a blue power module to any connector on the left-hand side of the Arduino. Connect the LED bargraph to the connector labelled d5 and the servo module to the connector labelled d9. Flick the switch next to both outputs to PWM. The mounting boards that come with the Arduino Coding Kit come in handy for holding your circuit together. 
Blinking an LED bargraph You can write the JavaScript program using the editor inside the Chrome app, or any text editor. We require the johnny-five library tocreate a board object with a "ready" handler. Our code for working with inputs and outputs will go inside the ready handler so that it will run after the Arduino has started up and communication has been established: var five = require("johnny-five"); var board = new five.Board(); board.on("ready", function() { // code for button, dimmers, servo etc goes here }); We'll treat the bargraph like a single output. It's connected to digital "pin" 5 (d5), so we'll need to provide this with a parameter when we create the Led object. The strobe function causes the LED to blink on and off The parameter to the function indicates the number of milliseconds between toggling the LED on or off (one second in this case): var led = new five.Led(5); led.strobe( 1000 ); Running the code Note: Make sure the power switch on your power module is switched on. If you are using the Chrome app, hit the Run button to start the program. You should see the LED bargraph start blinking. Any errors will be printed to the console below the code. If you have unplugged your Arduino since the last time you ran code via the app, you'll probably need to hit refresh and select the port for your device again from the drop-down above the code editor. The Chrome app is great for getting started, but eventually you'll want to switch to running programs using Node.js, because the Chrome app only supports a limited number of built-in libraries. Use a text editor to save your code to a file (e.g. blink.js) within your project directory, and run it from the command line using Node.js: node blink.js You can hit control-D on Windows or command-D on Mac to end the program. Controlling a Servo Johnny-Five includes a Servo class, but this is for controlling servo motors directly using PWM. The littleBits servo module already takes care of that for us, so we can treat it like a simple motor. Create a Motor object on pin 9 to correspond to the servo. We can start moving it using the start function. The parameter is a number between 0 and 255, which controls the speed. The stop function stops the servo. We'll use the board's wait function to stop the servo after 5 seconds (i.e. 5000 milliseconds). var servo = new five.Motor(9); servo.start(255); this.wait(5000, function(){ servo.stop(); }); In Part 2, we'll read data from our littleBits input modules and use these values to trigger changes to the servo and bargraph. About the author Anna Gerber is a full-stack developer with 15 years of experience in the university sector. Specializing in Digital Humanities, she was a Technical Project Manager at the University of Queensland’s eResearch centre, and she has worked at Brisbane’s Distributed System Technology Centre as a Research Scientist. Anna is a JavaScript robotics enthusiast who enjoys tinkering with soft circuits and 3D printers.

Architecture of Backbone

Packt
04 Nov 2015
18 min read
In this article by Abiee Echamea, author of the book Mastering Backbone.js, you will see that one of the best things about Backbone is the freedom of building applications with the libraries of your choice, no batteries included. Backbone is not a framework but a library. Building applications with it can be challenging as no structure is provided. The developer is responsible for code organization and how to wire the pieces of code across the application; it's a big responsibility. Bad decisions about code organization can lead to buggy and unmaintainable applications that nobody wants to see. In this article, you will learn the following topics: Delegating the right responsibilities to Backbone objects Splitting the application into small and maintainable scripts (For more resources related to this topic, see here.) The big picture We can split application into two big logical parts. The first is an infrastructure part or root application, which is responsible for providing common components and utilities to the whole system. It has handlers to show error messages, activate menu items, manage breadcrumbs, and so on. It also owns common views such as dialog layouts or loading the progress bar. A root application is responsible for providing common components and utilities to the whole system. A root application is the main entry point to the system. It bootstraps the common objects, sets the global configuration, instantiates routers, attaches general services to a global application, renders the main application layout at the body element, sets up third-party plugins, starts a Backbone history, and instantiates, renders, and initializes components such as a header or breadcrumb. However, the root application itself does nothing; it is just the infrastructure to provide services to the other parts that we can call subapplications or modules. Subapplications are small applications that run business value code. It's where the real work happens. Subapplications are focused on a specific domain area, for example, invoices, mailboxes, or chats, and should be decoupled from the other applications. Each subapplication has its own router, entities, and views. To decouple subapplications from the root application, communication is made through a message bus implemented with the Backbone.Events or Backbone.Radio plugin such that services are requested to the application by triggering events instead of call methods on an object. Subapplications are focused on a specific domain area and should be decoupled from the root application and other subapplications. Figure 1.1 shows a component diagram of the application. As you can see, the root application depends on the routers of the subapplications due to the Backbone.history requirement to instantiate all the routers before calling the start method and the root application does this. Once Backbone.history is started, the browser's URL is processed and a route handler in a subapplication is triggered; this is the entry point for subapplications. Additionally, a default route can be defined in the root application for any route that is not handled on the subapplications. Figure 1.1: Logical organization of a Backbone application When you build Backbone applications in this way, you know exactly which object has the responsibility, so debugging and improving the application is easier. Remember, divide and conquer. Also by doing this, you make your code more testable, improving its robustness. 
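To make the message bus idea concrete, here is a minimal sketch (not from the book) of how a subapplication can request a service from the root application using Backbone.Events; the event name is illustrative, chosen to match the loading:start handler used later in this article:

// A minimal event bus built from Backbone.Events (requires Backbone and Underscore).
var App = _.extend({}, Backbone.Events);

// The root application owns the handlers for common services:
App.on("loading:start", function() {
  // show a global loading widget, activate menu items, and so on
});

// A subapplication asks for the service by triggering an event,
// instead of calling a method on the root application directly:
App.trigger("loading:start");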
Responsibilities of the Backbone objects One of the biggest issues with the Backbone documentation is that it gives no clues about how to use its objects. Developers have to figure out the responsibilities of each object across the application, and unless you already have some experience working with Backbone, this is not an easy task. The next sections will describe the best uses for each Backbone object. In this way, you will have a clearer idea about the scope of responsibilities of each Backbone object, and this will be the starting point for designing our application architecture. Keep in mind that Backbone is a library of foundation objects, so you will need to bring your own objects and structure to make an awesome Backbone application. Models This is the place where the general business logic lives. Specific business logic should be placed elsewhere. General business logic consists of rules that are general enough to be used across multiple use cases, while specific business logic is a use case itself. Let's imagine a shopping cart. A model can be an item in the cart. The logic behind this model can include calculating the total by multiplying the unit price by the quantity, or setting a new quantity. In this scenario, assume that the shop has a business rule that a customer can buy the same product only three times. This is a specific business rule because it is specific to this business; how many stores do you know with this rule? Such business rules belong elsewhere and should be kept out of models. Also, it's a good idea to validate the model data before sending requests to the server. Backbone helps us with the validate method for this, so it's reasonable to put validation logic here too. Models often synchronize the data with the server, so direct calls to the server, such as AJAX calls, should be encapsulated at the model level. Models are the most basic pieces of information and logic; keep this in mind. Collections Consider collections as data repositories, similar to a database. Collections are often used to fetch the data from the server and render their contents as lists or tables. It's not usual to see business logic here. Resource servers have different ways to deal with lists of resources. For instance, while some servers accept a skip parameter for pagination, others have a page parameter for the same purpose. Another case is responses: a server can respond with a plain array, while others prefer sending an object with a data, list, or some other key, under which an array of objects is placed. There is no standard way. Collections can deal with these issues, making server requests transparent to the rest of the application. Views Views have the responsibility of handling the Document Object Model (DOM). Views work closely with the template engines, rendering the templates and putting the results in the DOM. They listen for low-level events using the jQuery API and transform them into domain events. Views abstract the user's interactions, transforming his/her actions into data structures for the application; for example, clicking on a save button in a form view will create a plain object with the information in the input and trigger a domain event such as save:contact with this object attached. Then a domain-specific object can apply domain logic to the data and show a result. Business logic on views should be avoided; basic form validations are allowed, such as accepting only numbers, but complex validations should be done on the model.
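As a rough illustration of the model responsibilities described above (a sketch, not code from the book), the shopping cart item could look like this; the attribute names are assumptions:

// General business logic and validation live on the model.
var CartItem = Backbone.Model.extend({
  defaults: {
    unitPrice: 0,
    quantity: 1
  },

  // A general rule: the line total is the unit price multiplied by the quantity.
  getTotal: function() {
    return this.get("unitPrice") * this.get("quantity");
  },

  // Validate the data before it is sent to the server.
  validate: function(attrs) {
    if (attrs.quantity <= 0) {
      return "quantity must be greater than zero";
    }
  }
});

The rule that a customer can buy the same product only three times would not live here; it belongs to a domain object that coordinates the use case.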
Routers Routers have a simple responsibility: listening for URL changes on the browser and transforming them into a call to a handler. A router knows which handler to call for a given URL and also decodes the URL parameters and passes them to the handlers. The root application bootstraps the infrastructure, but routers decide which subapplication will be executed. In this way, routers are a kind of entry point. Domain objects It is possible to develop Backbone applications using only the Backbone objects described in the previous section, but for a medium-to-large application, it's not sufficient. We need to introduce a new kind of object with well-delimited responsibilities that use and coordinate the Backbone foundation objects. Subapplication facade This object is the public interface of a subapplication. Any interaction with the subapplication should be done through its methods. Direct calls to internal objects of the subapplication are discouraged. Typically, methods on this controller are called from the router but can be called from anywhere. The main responsibility of this object is simplifying the subapplication internals, so its work is to fetch the data from the server through models or collections and in case an error occurs during the process, it has to show an error message to the user. Once the data is loaded in a model or collection, it creates a subapplication controller that knows the views that should be rendered and has the handlers to deal with its events. The subapplication facade will transform the URL request into a Backbone data object. It shows the right error message; creates a subapplication controller, and delegates the control to it. The subapplication controller or mediator This object acts as an air traffic controller for the views, models, and collections. With a Backbone data object, it will instantiate and render the appropriate views and then coordinate them. However, the coordination task is not easy in complex layouts. Due to loose coupling reasons, a view cannot call the methods or events of the other views directly. Instead of this, a view triggers an event and the controller handles the event and orchestrates the view's behavior, if necessary. Note how the views are isolated, handling just their owned portion of DOM and triggering events when they need to communicate something. Business logic for simple use cases can be implemented here, but for more complex interactions, another strategy is needed. This object implements the mediator pattern allowing other basic objects such as views and models to keep it simple and allow loose coupling. The logic workflow The application starts bootstrapping common components and then initializes all the routers available for the subapplications and starts Backbone.history. See Figure 1.2, After initialization, the URL on the browser will trigger a route for a subapplication, then a route handler instantiates a subapplication facade object and calls the method that knows how to handle the request. The facade will create a Backbone data object, such as a collection, and fetch the data from the server calling its fetch method. If an error is issued while fetching the data, the subapplication facade will ask the root application to show the error, for example, a 500 Internal Server Error. 
Figure 1.2: Abstract architecture for subapplications Once the data is in a model or collection, the subapplication facade will instantiate the subapplication object that knows the business rules for the use case and pass the model or collection to it. Then, it renders one or more views with the information of the model or collection and places the results in the DOM. The views will listen for DOM events, for example, click, and transform them into a higher-level event to be consumed by the application object. The subapplication object listens for events on models and views and coordinates them when an event is triggered. When the business rules are not too complex, they can be implemented on this application object, such as deleting a model. Models and views can be kept in sync with Backbone events, or by using a binding library such as Backbone.Stickit. In the next section, we will describe this process step by step with code examples for a better understanding of the concepts explained. Route handling The entry point for a subapplication is given by its routes, which ideally share the same namespace. For instance, a contacts subapplication can have these routes: contacts: Lists all the available contacts contacts/page/:page: Paginates the contacts collection contacts/new: Shows a form to create a new contact contacts/view/:id: Shows a contact given its ID contacts/edit/:id: Shows a form to edit a contact Note how all the routes start with the /contacts prefix. It's a good practice to use the same prefix for all the subapplication routes. In this way, the user will know where he/she is in the application, and you will have a clean separation of responsibilities. Use the same prefix for all URLs in one subapplication; avoid mixing routes with the other subapplications. When the user points the browser to one of these routes, a route handler is triggered. The handler function parses the URL request and delegates it to the subapplication object, as follows: var ContactsRouter = Backbone.Router.extend({ routes: { "contacts": "showContactList", "contacts/page/:page": "showContactList", "contacts/new": "createContact", "contacts/view/:id": "showContact", "contacts/edit/:id": "editContact" }, showContactList: function(page) { page = page || 1; page = page > 0 ? page : 1; var region = new Region({el: '#main'}); var app = new ContactsApp({region: region}); app.showContactList(page); }, createContact: function() { var region = new Region({el: '#main'}); var app = new ContactsApp({region: region}); app.showNewContactForm(); }, showContact: function(contactId) { var region = new Region({el: '#main'}); var app = new ContactsApp({region: region}); app.showContactById(contactId); }, editContact: function(contactId) { var region = new Region({el: '#main'}); var app = new ContactsApp({region: region}); app.showContactEditorById(contactId); } }); The validation of the URL parameters should be done on the router, as shown in the showContactList method. Once the validation is done, ContactsRouter instantiates an application object, ContactsApp, which is a facade for the Contacts subapplication; finally, ContactsRouter calls an API method to handle the user request. The router doesn't know anything about business logic; it just knows how to decode the URL requests and which object to call in order to handle the request. Here, the region object points to an existing DOM node; it is passed to the application to tell it where its views should be rendered.
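The Region class used by these route handlers belongs to the book's infrastructure code and is not shown in this excerpt. A minimal sketch of what such a helper might look like is given below; this is an assumption made for illustration, not the book's implementation:

// A Region wraps a DOM node and knows how to show a view inside it.
var Region = function(options) {
  this.el = options.el;
};

Region.prototype.show = function(view) {
  // Remove the previous view, if any, to avoid leaking event handlers.
  if (this.currentView) {
    this.currentView.remove();
  }
  this.currentView = view;
  view.render();
  Backbone.$(this.el).html(view.el);
};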
The subapplication facade A subapplication is composed of smaller pieces that handle specific use cases. In the case of the contacts app, a use case can be see a contact, create a new contact, or edit a contact. The implementation of these use cases is separated on different objects that handle views, events, and business logic for a specific use case. The facade basically fetches the data from the server, handles the connection errors, and creates the objects needed for the use case, as shown here: function ContactsApp(options) { this.region = options.region; this.showContactList = function(page) { App.trigger("loading:start"); new ContactCollection().fetch({ success: _.bind(function(collection, response, options) { this._showList(collection); App.trigger("loading:stop"); }, this), fail: function(collection, response, options) { App.trigger("loading:stop"); App.trigger("server:error", response); } }); }; this._showList = function(contacts) { var contactList = new ContactList({region: this.region}); contactList.showList(contacts); } this.showNewContactForm = function() { this._showEditor(new Contact()); }; this.showContactEditorById = function(contactId) { new Contact({id: contactId}).fetch({ success: _.bind(function(model, response, options) { this._showEditor(model); App.trigger("loading:stop"); }, this), fail: function(collection, response, options) { App.trigger("loading:stop"); App.trigger("server:error", response); } }); }; this._showEditor = function(contact) { var contactEditor = new ContactEditor({region: this.region}); contactEditor.showEditor(contact); } this.showContactById = function(contactId) { new Contact({id: contactId}).fetch({ success: _.bind(function(model, response, options) { this._showViewer(model); App.trigger("loading:stop"); }, this), fail: function(collection, response, options) { App.trigger("loading:stop"); App.trigger("server:error", response); } }); }; this._showViewer = function(contact) { var contactViewer = new ContactViewer({region: this.region}); contactViewer.showContact(contact); } } The simplest handler is showNewContactForm, which is called when the user wants to create a new contact. This creates a new Contact object and passes to the _showEditor method, which will render an editor for a blank Contact. The handler doesn't need to know how to do this because the ContactEditor application will do the job. Other handlers follow the same pattern, triggering an event for the root application to show a loading widget to the user while fetching the data from the server. Once the server responds successfully, it calls another method to handle the result. If an error occurs during the operation, it triggers an event to the root application to show a friendly error to the user. Handlers receive an object and create an application object that renders a set of views and handles the user interactions. The object created will respond to the action of the users, that is, let's imagine the object handling a form to save a contact. When users click on the save button, it will handle the save process and maybe show a message such as Are you sure want to save the changes and take the right action? The subapplication mediator The responsibility of the subapplication mediator object is to render the required layout and views to be showed to the user. It knows which views need to be rendered and in which order, so instantiate the views with the models if needed and put the results on the DOM. 
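The Contact model and ContactCollection used by the facade are defined elsewhere in the book. A minimal sketch of what they could look like follows; the URLs are assumptions made for illustration:

// The facade only needs standard Backbone data objects to work with.
var Contact = Backbone.Model.extend({
  urlRoot: "/api/contacts"
});

var ContactCollection = Backbone.Collection.extend({
  model: Contact,
  url: "/api/contacts"
});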
After rendering the necessary views, it will listen for user interactions as Backbone events triggered from the views; methods on the object will handle the interaction as described in the use cases. The mediator pattern is applied to this object to coordinate efforts between the views. For example, imagine that we have a form with contact data. As the user made some input in the edition form, other views will render a preview business card for the contact; in this case, the form view will trigger changes to the application object and the application object will tell the business card view to use a new set of data each time. As you can see, the views are decoupled and this is the objective of the application object. The following snippet shows the application that shows a list of contacts. It creates a ContactListView view, which knows how to render a collection of contacts and pass the contacts collection to be rendered: var ContactList = function(options) { _.extend(this, Backbone.Events); this.region = options.region; this.showList = function(contacts) { var contactList = new ContactListView({ collection: contacts }); this.region.show(contactList); this.listenTo(contactList, "item:contact:delete", this._deleteContact); } this._deleteContact = function(contact) { if (confirm('Are you sure?')) { contact.collection.remove(contact); } } this.close = function() { this.stopListening(); } } The ContactListView view will be responsible for transforming this into the DOM nodes and responding to collection events such as adding a new contact or removing one. Once the view is initialized, it is rendered on a specific region previously specified. When the view is finally on DOM, the application listens for the "item:contact:delete" event, which will be triggered if the user clicks on a delete button rendered for each contact. To see a contact, a ContactViewer application is responsible for managing the use case, which is as follows: var ContactViewer = function(options) { _.extend(this, Backbone.Events); this.region = options.region; this.showContact = function(contact) { var contactView = new ContactView({model: contact}); this.region.show(contactView); this.listenTo(contactView, "contact:delete", this._deleteContact); }, this._deleteContact = function(contact) { if (confirm("Are you sure?")) { contact.destroy({ success: function() { App.router.navigate("/contacts", true); }, error: function() { alert("Something goes wrong"); } }); } } } It's the same situation, that is, the contact list creates a view that manages the DOM interactions, renders on the specified region, and listens for events. From the details view of a contact, users can delete them. Similar to a list, a _deleteContact method handles the event, but the difference is when a contact is deleted, the application is redirected to the list of contacts, which is the expected behavior. You can see how the handler uses the root application infrastructure by calling the navigate method of the global App.router. The handler forms to create or edit contacts are very similar, so the same ContactEditor can be used for both the cases. 
This object will show a form to the user and will wait for the save action, as shown in the following code: var ContactEditor = function(options) { _.extend(this, Backbone.Events) this.region = options.region; this.showEditor = function(contact) { var contactForm = new ContactForm({model: contact}); this.region.show(contactForm); this.listenTo(contactForm, "contact:save", this._saveContact); }, this._saveContact = function(contact) { contact.save({ success: function() { alert("Successfully saved"); App.router.navigate("/contacts"); }, error: function() { alert("Something goes wrong"); } }); } } In this case, the model can have modifications in its data. In simple layouts, the views and model can work nicely with the model-view data bindings, so no extra code is needed. In this case, we will assume that the model is updated as the user puts in information in the form, for example, Backbone.Stickit. When the save button is clicked, a "contact:save" event is triggered and the application responds with the _saveContact method. See how the method issues a save call to the standard Backbone model and waits for the result. In successful requests, a message will be displayed and the user is redirected to the contact list. In errors, a message will tell the user that the application found a problem while saving the contact. The implementation details about the views are outside of the scope of this article, but you can abstract the work made by this object by seeing the snippets in this section. Summary In this article, we started by describing in a general way how a Backbone application works. It describes two main parts, a root application and subapplications. A root application provides common infrastructure to the other smaller and focused applications that we call subapplications. Subapplications are loose-coupled with the other subapplications and should own resources such as views, controllers, routers, and so on. A subapplication manages a small part of the system and no more. Communication between the subapplications and root application is made through an event-driven bus, such as Backbone.Events or Backbone.Radio. The user interacts with the application using views that a subapplication renders. A subapplication mediator orchestrates interaction between the views, models, and collections. It also handles the business logic such as saving or deleting a resource. Resources for Article: Further resources on this subject: Object-Oriented JavaScript with Backbone Classes [article] Building a Simple Blog [article] Marionette View Types and Their Use [article]

Constructing Common UI Widgets

Packt
22 Apr 2015
21 min read
One of the biggest features that draws developers to Ext JS is the vast array of UI widgets available out of the box. The ease with which they can be integrated with each other and the attractive and consistent visuals each of them offers is also a big attraction. No other framework can compete on this front, and this is a huge reason Ext JS leads the field of large-scale web applications. In this article by Stuart Ashworth and Andrew Duncan by authors of the book, Ext JS Essentials, we will look at how UI widgets fit into the framework's structure, how they interact with each other, and how we can retrieve and reference them. We will then delve under the surface and investigate the lifecycle of a component and the stages it will go through during the lifetime of an application. (For more resources related to this topic, see here.) Anatomy of a UI widget Every UI element in Ext JS extends from the base component class Ext.Component. This class is responsible for rendering UI elements to the HTML document. They are generally sized and positioned by layouts used by their parent components and participate in the automatic component lifecycle process. You can imagine an instance of Ext.Component as a single section of the user interface in a similar way that you might think of a DOM element when building traditional web interfaces. Each subclass of Ext.Component builds upon this simple fact and is responsible for generating more complex HTML structures or combining multiple Ext.Components to create a more complex interface. Ext.Component classes, however, can't contain other Ext.Components. To combine components, one must use the Ext.container.Container class, which itself extends from Ext.Component. This class allows multiple components to be rendered inside it and have their size and positioning managed by the framework's layout classes. Components and HTML Creating and manipulating UIs using components requires a slightly different way of thinking than you may be used to when creating interactive websites with libraries such as jQuery. The Ext.Component class provides a layer of abstraction from the underlying HTML and allows us to encapsulate additional logic to build and manipulate this HTML. This concept is different from the way other libraries allow you to manipulate UI elements and provides a hurdle for new developers to get over. The Ext.Component class generates HTML for us, which we rarely need to interact with directly; instead, we manipulate the configuration and properties of the component. The following code and screenshot show the HTML generated by a simple Ext.Component instance: var simpleComponent = Ext.create('Ext.Component', { html   : 'Ext JS Essentials!', renderTo: Ext.getBody() }); As you can see, a simple <DIV> tag is created, which is given some CSS classes and an autogenerated ID, and has the HTML config displayed inside it. This generated HTML is created and managed by the Ext.dom.Element class, which wraps a DOM element and its children, offering us numerous helper methods to interrogate and manipulate it. After it is rendered, each Ext.Component instance has the element instance stored in its el property. You can then use this property to manipulate the underlying HTML that represents the component. As mentioned earlier, the el property won't be populated until the component has been rendered to the DOM. You should put logic dependent on altering the raw HTML of the component in an afterrender event listener or override the afterRender method. 
The following example shows how you can manipulate the underlying HTML once the component has been rendered. It will set the background color of the element to red: Ext.create('Ext.Component', { html     : 'Ext JS Essentials!', renderTo : Ext.getBody(), listeners: {    afterrender: function(comp) {      comp.el.setStyle('background-color', 'red');    } } }); It is important to understand that digging into and updating the HTML and CSS that Ext JS creates for you is a dangerous game to play and can result in unexpected results when the framework tries to update things itself. There is usually a framework way to achieve the manipulations you want to include, which we recommend you use first. We always advise new developers to try not to fight the framework too much when starting out. Instead, we encourage them to follow its conventions and patterns, rather than having to wrestle it to do things in the way they may have previously done when developing traditional websites and web apps. The component lifecycle When a component is created, it follows a lifecycle process that is important to understand, so as to have an awareness of the order in which things happen. By understanding this sequence of events, you will have a much better idea of where your logic will fit and ensure you have control over your components at the right points. The creation lifecycle The following process is followed when a new component is instantiated and rendered to the document by adding it to an existing container. When a component is shown explicitly (for example, without adding to a parent, such as a floating component) some additional steps are included. These have been denoted with a * in the following process. constructor First, the class' constructor function is executed, which triggers all of the other steps in turn. By overriding this function, we can add any setup code required for the component. Config options processed The next thing to be handled is the config options that are present in the class. This involves each option's apply and update methods being called, if they exist, meaning the values are available via the getter from now onwards. initComponent The initComponent method is now called and is generally used to apply configurations to the class and perform any initialization logic. render Once added to a container, or when the show method is called, the component is rendered to the document. boxready At this stage, the component is rendered and has been laid out by its parent's layout class, and is ready at its initial size. This event will only happen once on the component's first layout. activate (*) If the component is a floating item, then the activate event will fire, showing that the component is the active one on the screen. This will also fire when the component is brought back to focus, for example, in a Tab panel when a tab is selected. show (*) Similar to the previous step, the show event will fire when the component is finally visible on screen. The destruction process When we are removing a component from the Viewport and want to destroy it, it will follow a destruction sequence that we can use to ensure things are cleaned up sufficiently, so as to avoid memory leaks and so on. The framework takes care of the majority of this cleanup for us, but it is important that we tidy up any additional things we instantiate. hide (*) When a component is manually hidden (using the hide method), this event will fire and any additional hide logic can be included here. 
deactivate (*) Similar to the activate step, this is fired when the component becomes inactive. As with the activate step, this will happen when floating and nested components are hidden and are no longer the items under focus. destroy This is the final step in the teardown process and is implemented when the component and its internal properties and objects are cleaned up. At this stage, it is best to remove event handlers, destroy subclasses, and ensure any other references are released. Component Queries Ext JS boasts a powerful system to retrieve references to components called Component Queries. This is a CSS/XPath style query syntax that lets us target broad sets or specific components within our application. For example, within our controller, we may want to find a button with the text "Save" within a component of type MyForm. In this section, we will demonstrate the Component Query syntax and how it can be used to select components. We will also go into details about how it can be used within Ext.container.Container classes to scope selections. xtypes Before we dive in, it is important to understand the concept of xtypes in Ext JS. An xtype is a shorthand name for an Ext.Component that allows us to identify its declarative component configuration objects. For example, we can create a new Ext.Component as a child of an Ext.container.Container using an xtype with the following code: Ext.create('Ext.Container', { items: [    {      xtype: 'component',      html : 'My Component!'    } ] }); Using xtypes allows you to lazily instantiate components when required, rather than having them all created upfront. Common component xtypes include: Classes xtypes Ext.tab.Panel tabpanel Ext.container.Container container Ext.grid.Panel gridpanel Ext.Button button xtypes form the basis of our Component Query syntax in the same way that element types (for example, div, p, span, and so on) do for CSS selectors. We will use these heavily in the following examples. 
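Before looking at the sample structure, here is a short sketch (not from the book) showing how a custom class registered with an xtype can hook into the lifecycle stages described earlier; the class name and logging are illustrative:

// Hooking into the component lifecycle from a custom class.
Ext.define('MyApp.view.StatusPanel', {
    extend: 'Ext.Component',
    alias : 'widget.statuspanel',

    initComponent: function() {
        // apply configuration before the component is rendered
        this.html = 'Ready';
        this.callParent(arguments);
    },

    listeners: {
        boxready: function(cmp) {
            // fired once, after the first layout at the initial size
            console.log('boxready, width:', cmp.getWidth());
        },
        beforedestroy: function() {
            // release custom references and listeners here
        }
    }
});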
Sample component structure We will use the following sample component structure—a panel with a child tab panel, form, and buttons—to perform our example queries on: var panel = Ext.create('Ext.panel.Panel', { height : 500, width : 500, renderTo: Ext.getBody(), layout: {    type : 'vbox',    align: 'stretch' }, items : [    {      xtype : 'tabpanel',      itemId: 'mainTabPanel',      flex : 1,      items : [        {          xtype : 'panel',          title : 'Users',          itemId: 'usersPanel',          layout: {            type : 'vbox',            align: 'stretch'            },            tbar : [              {                xtype : 'button',                text : 'Edit',                itemId: 'editButton'                }              ],              items : [                {                  xtype : 'form',                  border : 0,                  items : [                  {                      xtype : 'textfield',                      fieldLabel: 'Name',                      allowBlank: false                    },                    {                      xtype : 'textfield',                      fieldLabel: 'Email',                      allowBlank: false                    }                  ],                  buttons: [                    {                      xtype : 'button',                      text : 'Save',                      action: 'saveUser'                    }                  ]                },                {                  xtype : 'grid',                  flex : 1,                  border : 0,                  columns: [                    {                     header : 'Name',                      dataIndex: 'Name',                      flex : 1                    },                    {                      header : 'Email',                      dataIndex: 'Email'                    }                   ],                  store : Ext.create('Ext.data.Store', {                    fields: [                      'Name',                      'Email'                    ],                    data : [                      {                        Name : 'Joe Bloggs',                        Email: 'joe@example.com'                      },                      {                        Name : 'Jane Doe',                        Email: 'jane@example.com'                      }                    ]                  })                }              ]            }          ]        },        {          xtype : 'component',          itemId : 'footerComponent',          html : 'Footer Information',          extraOptions: {            option1: 'test',            option2: 'test'          },          height : 40        }      ]    }); Queries with Ext.ComponentQuery The Ext.ComponentQuery class is used to perform Component Queries, with the query method primarily used. This method accepts two parameters: a query string and an optional Ext.container.Container instance to use as the root of the selection (that is, only components below this one in the hierarchy will be returned). The method will return an array of components or an empty array if none are found. We will work through a number of scenarios and use Component Queries to find a specific set of components. Finding components based on xtype As we have seen, we use xtypes like element types in CSS selectors. 
We can select all the Ext.panel.Panel instances using its xtype—panel: var panels = Ext.ComponentQuery.query('panel'); We can also add the concept of hierarchy by including a second xtype separated by a space. The following code will select all Ext.Button instances that are descendants (at any level) of an Ext.panel.Panel class: var buttons = Ext.ComponentQuery.query('panel buttons'); We could also use the > character to limit it to buttons that are direct descendants of a panel. var directDescendantButtons = Ext.ComponentQuery.query('panel > button'); Finding components based on attributes It is simple to select a component based on the value of a property. We use the XPath syntax to specify the attribute and the value. The following code will select buttons with an action attribute of saveUser: var saveButtons = Ext.ComponentQuery.query('button[action="saveUser"]); Finding components based on itemIds ItemIds are commonly used to retrieve components, and they are specially optimized for performance within the ComponentQuery class. They should be unique only within their parent container and not globally unique like the id config. To select a component based on itemId, we prefix the itemId with a # symbol: var usersPanel = Ext.ComponentQuery.query('#usersPanel'); Finding components based on member functions It is also possible to identify matching components based on the result of a function of that component. For example, we can select all text fields whose values are valid (that is, when a call to the isValid method returns true): var validFields = Ext.ComponentQuery.query('form > textfield{isValid()}'); Scoped Component Queries All of our previous examples will search the entire component tree to find matches, but often we may want to keep our searches local to a specific container and its descendants. This can help reduce the complexity of the query and improve the performance, as fewer components have to be processed. Ext.Containers have three handy methods to do this: up, down, and query. We will take each of these in turn and explain their features. up This method accepts a selector and will traverse up the hierarchy to find a single matching parent component. This can be useful to find the grid panel that a button belongs to, so an action can be taken on it: var grid = button.up('gridpanel'); down This returns the first descendant component that matches the given selector: var firstButton = grid.down('button'); query The query method performs much like Ext.ComponentQuery.query but is automatically scoped to the current container. This means that it will search all descendant components of the current container and return all matching ones as an array. var allButtons = grid.query('button'); Hierarchical data with trees Now that we know and understand components, their lifecycle, and how to retrieve references to them, we will move on to more specific UI widgets. The tree panel component allows us to display hierarchical data in a way that reflects the data's structure and relationships. In our application, we are going to use a tree panel to represent our navigation structure to allow users to see how the different areas of the app are linked and structured. Binding to a data source Like all other data-bound components, tree panels must be bound to a data store—in this particular case it must be an Ext.data.TreeStore instance or subclass, as it takes advantage of the extra features added to this specialist store class. 
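The Navigation TreeStore referenced in the next paragraph is part of the book's BizDash application and is not listed in this excerpt. A minimal sketch of such a store might look like the following; the fields and sample nodes are assumptions:

// A TreeStore with a storeId that the tree panel can bind to.
Ext.define('BizDash.store.Navigation', {
    extend: 'Ext.data.TreeStore',

    storeId: 'Navigation',
    fields : ['Label'],

    root: {
        expanded: true,
        children: [
            { Label: 'Dashboard', leaf: true },
            {
                Label: 'Reports',
                children: [
                    { Label: 'Sales', leaf: true }
                ]
            }
        ]
    }
});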
We will make use of the BizDash.store.Navigation TreeStore to bind to our tree panel. Defining a tree panel The tree panel is defined in the Ext.tree.Panel class (which has an xtype of treepanel), which we will extend to create a custom class called BizDash.view.navigation.NavigationTree: Ext.define('BizDash.view.navigation.NavigationTree', { extend: 'Ext.tree.Panel', alias: 'widget.navigation-NavigationTree', store : 'Navigation', columns: [    {      xtype : 'treecolumn',      text : 'Navigation',      dataIndex: 'Label',      flex : 1    } ], rootVisible: false, useArrows : true }); We configure the tree to be bound to our TreeStore by using its storeId, in this case, Navigation. A tree panel is a subclass of the Ext.panel.Table class (similar to the Ext.grid.Panel class), which means it must have a columns configuration present. This tells the component what values to display as part of the tree. In a simple, traditional tree, we might only have one column showing the item and its children; however, we can define multiple columns and display additional fields in each row. This would be useful if we were displaying, for example, files and folders and wanted to have additional columns to display the file type and file size of each item. In our example, we are only going to have one column, displaying the Label field. We do this by using the treecolumn xtype, which is responsible for rendering the tree's navigation elements. Without defining treecolumn, the component won't display correctly. The treecolumn xtype's configuration allows us to define which of the attached data model's fields to use (dataIndex), the column's header text (text), and the fact that the column should fill the horizontal space. Additionally, we set the rootVisible to false, so the data's root is hidden, as it has no real meaning other than holding the rest of the data together. Finally, we set useArrows to true, so the items with children use an arrow instead of the +/- icon. Summary In this article, we have learnt how Ext JS' components fit together and the lifecycle that they follow when created and destroyed. We covered the component lifecycle and Component Queries. Resources for Article: Further resources on this subject: So, what is Ext JS? [article] Function passing [article] Static Data Management [article]

Creating Functions and Operations

Packt
10 Aug 2015
18 min read
In this article by Alex Libby, author of the book Sass Essentials, we will learn how to use operators or functions to construct a whole site theme from just a handful of colors, or defining font sizes for the entire site from a single value. You will learn how to do all these things in this article. Okay, so let's get started! (For more resources related to this topic, see here.) Creating values using functions and operators Imagine a scenario where you're creating a masterpiece that has taken days to put together, with a stunning choice of colors that has taken almost as long as the building of the project and yet, the client isn't happy with the color choice. What to do? At this point, I'm sure that while you're all smiles to the customer, you'd be quietly cursing the amount of work they've just landed you with, this late on a Friday. Sound familiar? I'll bet you scrap the colors and go back to poring over lots of color combinations, right? It'll work, but it will surely take a lot more time and effort. There's a better way to achieve this; instead of creating or choosing lots of different colors, we only need to choose one and create all of the others automatically. How? Easy! When working with Sass, we can use a little bit of simple math to build our color palette. One of the key tenets of Sass is its ability to work out values dynamically, using nothing more than a little simple math; we could define font sizes from H1 to H6 automatically, create new shades of colors, or even work out the right percentages to use when creating responsive sites! We will take a look at each of these examples throughout the article, but for now, let's focus on the principles of creating our colors using Sass. Creating colors using functions We can use simple math and functions to create just about any type of value, but colors are where these two really come into their own. The great thing about Sass is that we can work out the hex value for just about any color we want to, from a limited range of colors. This can easily be done using techniques such as adding two values together, or subtracting one value from another. To get a feel of how the color operators work, head over to the official documentation at http://sass-lang.com/documentation/file.SASS_REFERENCE.html#color_operations—it is worth reading! Nothing wrong with adding or subtracting values—it's a perfectly valid option, and will result in a valid hex code when compiled. But would you know that both values are actually deep shades of blue? Therein lies the benefit of using functions; instead of using math operators, we can simply say this: p { color: darken(#010203, 10%); } This, I am sure you will agree, is easier to understand as well as being infinitely more readable! The use of functions opens up a world of opportunities for us. We can use any one of the array of functions such as lighten(), darken(), mix(), or adjust-hue() to get a feel of how easy it is to get the values. If we head over to http://jackiebalzer.com/color, we can see that the author has exploded a number of Sass (and Compass—we will use this later) functions, so we can see what colors are displayed, along with their numerical values, as soon as we change the initial two values. Okay, we could play with the site ad infinitum, but I feel a demo coming on—to explore the effects of using the color functions to generate new colors. Let's construct a simple demo. 
For this exercise, we will dig up a copy of the colorvariables demo and modify it so that we're only assigning one color variable, not six. For this exercise, I will assume you are using Koala to compile the code. Okay, let's make a start: We'll start with opening up a copy of colorvariables.scss in your favorite text editor and removing lines 1 to 15 from the start of the file. Next, add the following lines, so that we should be left with this at the start of the file: $darkRed: #a43; $white: #fff; $black: #000;   $colorBox1: $darkRed; $colorBox2: lighten($darkRed, 30%); $colorBox3: adjust-hue($darkRed, 35%); $colorBox4: complement($darkRed); $colorBox5: saturate($darkRed, 30%); $colorBox6: adjust-color($darkRed, $green: 25); Save the file as colorfunctions.scss. We need a copy of the markup file to go with this code, so go ahead and extract a copy of colorvariables.html from the code download, saving it as colorfunctions.html in the root of our project area. Don't forget to change the link for the CSS file within to colorfunctions.css! Fire up Koala, then drag and drop colorfunctions.scss from our project area over the main part of the application window to add it to the list: Right-click on the file name and select Compile, and then wait for it to show Success in a green information box. If we preview the results of our work in a browser, we should see the following boxes appear: At this point, we have a working set of colors—granted, we might have to work a little on making sure that they all work together. But the key point here is that we have only specified one color, and that the others are all calculated automatically through Sass. Now that we are only defining one color by default, how easy is it to change the colors in our code? Well, it is a cinch to do so. Let's try it out using the help of the SassMeister playground. Changing the colors in use We can easily change the values used in the code, and continue to refresh the browser after each change. However, this isn't a quick way to figure out which colors work; to get a quicker response, there is an easier way: use the online Sass playground at http://www.sassmeister.com. This is the perfect way to try out different colors—the site automatically recompiles the code and updates the result as soon as we make a change. Try copying the HTML and SCSS code into the play area to view the result. The following screenshot shows the same code used in our demo, ready for us to try using different calculations: All images work on the principle that we take a base color (in this case, $dark-blue, or #a43), then adjust the color either by a percentage or a numeric value. When compiled, Sass calculates what the new value should be and uses this in the CSS. Take, for example, the color used for #box6, which is a dark orange with a brown tone, as shown in this screenshot: To get a feel of some of the functions that we can use to create new colors (or shades of existing colors), take a look at the main documentation at http://sass-lang.com/documentation/Sass/Script/Functions.html, or https://www.makerscabin.com/web/sass/learn/colors. These sites list a variety of different functions that we can use to create our masterpiece. We can also extend the functions that we have in Sass with the help of custom functions, such as the toolbox available at https://github.com/at-import/color-schemer—this may be worth a look. In our demo, we used a dark red color as our base. 
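If you want to check the generated values without compiling the whole demo, Sass's @debug directive prints them during compilation. A quick sketch, not part of the original exercise:

// Print the computed colors to the compiler output.
$darkRed: #a43;

@debug lighten($darkRed, 30%);
@debug adjust-hue($darkRed, 35%);
@debug complement($darkRed);
@debug saturate($darkRed, 30%);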
If we're ever stuck for ideas on colors, or want to get the right HEX, RGB(A), or even HSL(A) codes, then there are dozens of sites online that will give us these values. Here are a couple of them that you can try: HSLa Explorer, by Chris Coyier—this is available at https://css-tricks.com/examples/HSLaExplorer/. HSL Color Picker by Brandon Mathis—this is available at http://hslpicker.com/. If we know the name, but want to get a Sass value, then we can always try the list of 1,500+ colors at https://github.com/FearMediocrity/sass-color-palettes/blob/master/colors.scss. What's more, the list can easily be imported into our CSS, although it would make better sense to simply copy the chosen values into our Sass file, and compile from there instead. Mixing colors The one thing that we've not discussed, but is equally useful is that we are not limited to using functions on their own; we can mix and match any number of functions to produce our colors. A great way to choose colors, and get the appropriate blend of functions to use, is at http://sassme.arc90.com/. Using the available sliders, we can choose our color, and get the appropriate functions to use in our Sass code. The following image shows how: In most cases, we will likely only need to use two functions (a mix of darken and adjust hue, for example); if we are using more than two–three functions, then we should perhaps rethink our approach! In this case, a better alternative is to use Sass's mix() function, as follows: $white: #fff; $berry: hsl(267, 100%, 35%); p { mix($white, $berry, 0.7) } …which will give the following valid CSS: p { color: #5101b3; } This is a useful alternative to use in place of the command we've just touched on; after all, would you understand what adjust_hue(desaturate(darken(#db4e29, 2), 41), 67) would give as a color? Granted, it is something of an extreme calculation, nonetheless, it is technically valid. If we use mix() instead, it matches more closely to what we might do, for example, when mixing paint. After all, how else would we lighten its color, if not by adding a light-colored paint? Okay, let's move on. What's next? I hear you ask. Well, so far we've used core Sass for all our functions, but it's time to go a little further afield. Let's take a look at how you can use external libraries to add extra functionality. In our next demo, we're going to introduce using Compass, which you will often see being used with Sass. Using an external library So far, we've looked at using core Sass functions to produce our colors—nothing wrong with this; the question is, can we take things a step further? Absolutely, once we've gained some experience with using these functions, we can introduce custom functions (or helpers) that expand what we can do. A great library for this purpose is Compass, available at http://www.compass-style.org; we'll make use of this to change the colors which we created from our earlier boxes demo, in the section, Creating colors using functions. Compass is a CSS authoring framework, which provides extra mixins and reusable patterns to add extra functionality to Sass. In our demo, we're using shade(), which is one of the several color helpers provided by the Compass library. Let's make a start: We're using Compass in this demo, so we'll begin with installing the library. To do this, fire up Command Prompt, then navigate to our project area. 
We need to make sure that our installation RubyGems system software is up to date, so at Command Prompt, enter the following, and then press Enter: gem update --system Next, we're installing Compass itself—at the prompt, enter this command, and then press Enter: gem install compass Compass works best when we get it to create a project shell (or template) for us. To do this, first browse to http://www.compass-style.org/install, and then enter the following in the Tell us about your project… area: Leave anything in grey text as blank. This produces the following commands—enter each at Command Prompt, pressing Enter each time: Navigate back to Command Prompt. We need to compile our SCSS code, so go ahead and enter this command at the prompt (or copy and paste it), then press Enter: compass watch –sourcemap Next, extract a copy of the colorlibrary folder from the code download, and save it to the project area. In colorlibrary.scss, comment out the existing line for $backgrd_box6_color, and add the following immediately below it: $backgrd_box6_color: shade($backgrd_box5_color, 25%); Save the changes to colorlibrary.scss. If all is well, Compass's watch facility should kick in and recompile the code automatically. To verify that this has been done, look in the css subfolder of the colorlibrary folder, and you should see both the compiled CSS and the source map files present. If you find Compass compiles files in unexpected folders, then try using the following command to specify the source and destination folders when compiling: compass watch --sass-dir sass --css-dir css If all is well, we will see the boxes, when previewing the results in a browser window, as in the following image. Notice how Box 6 has gone a nice shade of deep red (if not almost brown)? To really confirm that all the changes have taken place as required, we can fire up a DOM inspector such as Firebug; a quick check confirms that the color has indeed changed: If we explore even further, we can see that the compiled code shows that the original line for Box 6 has been commented out, and that we're using the new function from the Compass helper library: This is a great way to push the boundaries of what we can do when creating colors. To learn more about using the Compass helper functions, it's worth exploring the official documentation at http://compass-style.org/reference/compass/helpers/colors/. We used the shade() function in our code, which darkens the color used. There is a key difference to using something such as darken() to perform the same change. To get a feel of the difference, take a look at the article on the CreativeBloq website at http://www.creativebloq.com/css3/colour-theming-sass-and-compass-6135593, which explains the difference very well. The documentation is a little lacking in terms of how to use the color helpers; the key is not to treat them as if they were normal mixins or functions, but to simply reference them in our code. To explore more on how to use these functions, take a look at the article by Antti Hiljá at http://clubmate.fi/how-to-use-the-compass-helper-functions/. We can, of course, create mixins to create palettes—for a more complex example, take a look at http://www.zingdesign.com/how-to-generate-a-colour-palette-with-compass/ to understand how such a mixin can be created using Compass. Okay, let's move on. So far, we've talked about using functions to manipulate colors; the flip side is that we are likely to use operators to manipulate values such as font sizes. 
For now, let's change tack and take a look at creating new values for changing font sizes. Changing font sizes using operators We already talked about using functions to create practically any value. Well, we've seen how to do it with colors; we can apply similar principles to creating font sizes too. In this case, we set a base font size (in the same way that we set a base color), and then simply increase or decrease font sizes as desired. In this instance, we won't use functions, but instead, use standard math operators, such as add, subtract, or divide. When working with these operators, there are a couple of points to remember: Sass math functions preserve units—this means we can't work on numbers with different units, such as adding a px value to a rem value, but can work with numbers that can be converted to the same format, such as inches to centimeters If we multiply two values with the same units, then this will produce square units (that is, 10px * 10px == 100px * px). At the same time, px * px will throw an error as it is an invalid unit in CSS. There are some quirks when working with / as a division operator —in most instances, it is normally used to separate two values, such as defining a pair of font size values. However, if the value is surrounded in parentheses, used as a part of another arithmetic expression, or is stored in a variable, then this will be treated as a division operator. For full details, it is worth reading the relevant section in the official documentation at http://sass-lang.com/documentation/file.Sass_REFERENCE.html#division-and-slash. With these in mind, let's create a simple demo—a perfect use for Sass is to automatically work out sizes from H1 through to H6. We could just do this in a simple text editor, but this time, let's break with tradition and build our demo directly into a session on http://www.sassmeister.com. We can then play around with the values set, and see the effects of the changes immediately. If we're happy with the results of our work, we can copy the final version into a text editor and save them as standard SCSS (or CSS) files. Let's begin by browsing to http://www.sassmeister.com, and adding the following HTML markup window: <html> <head>    <meta charset="utf-8" />    <title>Demo: Assigning colors using variables</title>    <link rel="stylesheet" type="text/css" href="css/     colorvariables.css"> </head> <body>    <h1>The cat sat on the mat</h1>    <h2>The cat sat on the mat</h2>    <h3>The cat sat on the mat</h3>    <h4>The cat sat on the mat</h4>    <h5>The cat sat on the mat</h5>    <h6>The cat sat on the mat</h6> </body> </html> Next, add the following to the SCSS window—we first set a base value of 3.0, followed by a starting color of #b26d61, or a dark, moderate red: $baseSize: 3.0; $baseColor: #b26d61; We need to add our H1 to H6 styles. The rem mixin was created by Chris Coyier, at https://css-tricks.com/snippets/css/less-mixin-for-rem-font-sizing/. 
We first set the font size, followed by setting the font color, using either the base color set earlier or a function to produce a different shade:

h1 { font-size: $baseSize; color: $baseColor; }

h2 { font-size: ($baseSize - 0.2); color: darken($baseColor, 20%); }

h3 { font-size: ($baseSize - 0.4); color: lighten($baseColor, 10%); }

h4 { font-size: ($baseSize - 0.6); color: saturate($baseColor, 20%); }

h5 { font-size: ($baseSize - 0.8); color: $baseColor - 111; }

h6 { font-size: ($baseSize - 1.0); color: rgb(red($baseColor) + 10, 23, 145); }

SassMeister will automatically compile the code to produce valid CSS, as shown in this screenshot:

Try changing the base size of 3.0 to a different value; using http://www.sassmeister.com, we can instantly see how this affects the overall size of each H value. Note how we can multiply the base variable by 10 to produce a pixel value, or simply use the value passed to render each heading; in each instance, we can concatenate the appropriate unit using a plus (+) symbol. We then subtract an increasing value from $baseSize, before using this value as the font size for the relevant H value.

You can see a similar example of this by Andy Baudoin as a CodePen, at http://codepen.io/baudoin/pen/HdliD/. He makes good use of nesting to display the color and strength of shade. Note that it uses a little JavaScript to add the text of the color that each line represents; this can be ignored, as it does not affect the Sass used in the demo. The great thing about using a site such as SassMeister is that we can play around with values and immediately see the results. For more details on using number operations in Sass, browse to the official documentation at http://sass-lang.com/documentation/file.Sass_REFERENCE.html#number_operations.

Okay, onwards we go. Let's turn our attention to creating something a little more substantial; we're going to create a complete site theme using the power of Sass and a few simple calculations.

Summary

Phew! What a tour! One of the key concepts of Sass is the use of functions and operators to create values, so let's take a moment to recap what we have covered throughout this article. We kicked off with a look at creating color values using functions, before discovering how we can mix and match different functions to create different shades, or use external libraries to add extra functionality to Sass. We then moved on to take a look at another key use of functions, with a look at defining different font sizes using standard math operators.

Resources for Article:

Further resources on this subject:
Nesting, Extend, Placeholders, and Mixins [article]
Implementation of SASS [article]
Constructing Common UI Widgets [article]
Read more
  • 0
  • 0
  • 4138
article-image-frontend-soa-taming-beast-frontend-web-development
Wesley Cho
08 May 2015
6 min read
Save for later

Frontend SOA: Taming the beast of frontend web development

Wesley Cho
08 May 2015
6 min read
Frontend web development is a difficult domain for creating scalable applications. There are many challenges when it comes to architecture, such as how best to organize HTML, CSS, and JavaScript files, or how to create build tooling that allows an optimal development and production environment.

In addition, complexity has increased measurably. Templating and routing have become the concern of frontend web engineers as a result of the push towards single page applications (SPAs). A wealth of frameworks can be found listed on todomvc.com. AngularJS is one that rose to prominence almost two years ago on the back of declarative HTML, strong testability, and two-way data binding, but even now it is seeing some churn, due both to Angular 2.0 breaking backwards compatibility completely and to the rise of React, Facebook's new view layer, which brings the idea of a virtual DOM for performance optimization not previously seen in frontend web architecture. Angular 2.0 itself is also looking like a juggernaut, with decoupled components that lean towards purer JavaScript, and it is already boasting performance gains of roughly 5x compared to Angular 1.x.

With this much churn, frontend web apps have become difficult to architect for the long term. This requires us to take a step back and think about the direction of browsers.

The Future of Browsers

We know that ECMAScript 6 (ES6) is already making headway into browsers - ES6 greatly changes how JavaScript is structured, with a proper module system, and adds a lot of syntactical sugar. Web Components are also going to change how we build our views.

Instead of:

.home-view { ... }

We will be writing:

<template id="home-view">
  <style> … </style>
  <my-navbar></my-navbar>
  <my-content></my-content>
  <script> … </script>
</template>

<home-view></home-view>

<script>
  var proto = Object.create(HTMLElement.prototype);
  proto.createdCallback = function () {
    var root = this.createShadowRoot();
    var template = document.querySelector('#home-view');
    var clone = document.importNode(template.content, true);
    root.appendChild(clone);
  };
  document.registerElement('home-view', { prototype: proto });
</script>

This is drastically different from how we build components now. In addition, libraries and frameworks are already being built with this in mind. Angular 2 is using annotations provided by Traceur, Google's ES6 + ES7 to ES5 transpiler, to provide syntactical sugar for creating one-way bindings to the DOM and to DOM events. React and Ember also have plans to integrate Web Components into their workflows. Aurelia is already structured in a way that takes advantage of them when they land.

What can we do to future-proof ourselves for when these technologies drop?

Solution

For starters, it is important to realize that creating HTML and CSS is relatively cheap compared to managing a complex JavaScript codebase built on top of a framework or library. Frontend web development is seeing architecture pains that have already been solved in other domains, except that it has the additional challenge of integrating a UI into that structure. This seems to suggest that the solution is to create a frontend service-oriented architecture (SOA) where most of the heavy logic is offloaded to pure JavaScript, with only utility library additions (for example, Underscore/Lodash). This would allow us to choose view layers with relative ease, and move fast in case a particular view library or framework turns out not to meet requirements.
It also prevents the endemic problem of having to rewrite whole codebases because a library or framework has to be swapped out.

For example, consider this sample Angular controller (a similarly contrived example can be created using other pieces of tech as well):

angular.module('DemoApp')
  .controller('DemoCtrl', function ($scope, $http) {
    $scope.getItems = function () {
      $http.get('/items/')
        .then(function (response) {
          $scope.items = response.data.items;
          $scope.$emit('items:received', $scope.items);
        });
    };
  });

This sample controller has a method getItems that fetches items, updates the model, and then emits the information so that parent views have access to that change. This is ugly because it hardcodes the application's structural hierarchy and mixes it with server query logic, which is a separate concern. In addition, it mixes the usage of Angular's internals into the application code, tying pure abstract logic heavily in with the framework's internals. It is not all that uncommon to see developers make these simple architecture mistakes.

With the proper module system that ES6 brings, this simplifies to (items.js):

import {fetch} from 'fetch';

export class Items {
  static getAll() {
    return fetch.get('/items')
      .then(function (response) {
        return response.json();
      });
  }
}

And demoCtrl.js:

import {BaseCtrl} from './baseCtrl.js';
import {Items} from './items';

export class DemoCtrl extends BaseCtrl {
  constructor() {
    super();
  }

  getItems() {
    let self = this;
    return Items.getAll()
      .then(function (items) {
        self.items = items;
        return items;
      });
  }
}

And main.js:

import {Items} from './items';
import {DemoCtrl} from './DemoCtrl';

angular.module('DemoApp', [])
  .factory('items', Items)
  .controller('DemoCtrl', DemoCtrl);

If you want to use anything from $scope, you can modify the usage of DemoCtrl straight in the controller definition and just instantiate it inside that function. With promises, which are also available natively in ES6, you can chain onto them in the implementation of DemoCtrl in the Angular codebase.

The kicker about this approach is that it can also be done today in ES5, and it is not limited to Angular - it applies equally well to any other library or framework, such as Backbone, Ember, and React! It also allows you to churn out very testable code.

I recommend this as a best practice for architecting complex frontend web apps - the only caveat is when other engineering constraints, such as the time and people available, make it impractical. This approach allows us to tame the beast of maintaining and scaling frontend web apps while still being able to adapt quickly to the constantly changing landscape.

About this author

Wesley Cho is a senior frontend engineer at Jiff (http://www.jiff.com/). He has contributed features and bug fixes, and reported numerous issues, to numerous libraries in the Angular ecosystem, including AngularJS, Ionic, UI Bootstrap, and UI Router, and has also authored several libraries.
Read more
  • 0
  • 0
  • 4112

article-image-selecting-attributes-should-know
Packt
16 Sep 2013
9 min read
Save for later

Selecting by attributes (Should know)

Packt
16 Sep 2013
9 min read
(For more resources related to this topic, see here.)

Getting ready

These selectors are easily recognizable because they are wrapped in square brackets (for example, [selector]). This type of selector is always used coupled with another selector, like those seen so far, although this can be implicit, as we'll see in a few moments. In my experience, you'll often use them with the Element selector, but this can vary based on your needs. How many selectors of this type are there, and what are they? Glad you asked! Here is an overview of each one, with its syntax and description:

Contains: [attribute*="value"] (for example, input[name*="cod"]). Selects elements that have the given value as a substring of the specified attribute's value.

Contains Prefix: [attribute|="value"] (for example, a[class|="audero-"]). Selects elements whose specified attribute has a value equal to the given value, or equal to it followed by a hyphen.

Contains Word: [attribute~="value"] (for example, span[data-level~="hard"]). Selects elements whose specified attribute has a value equal to, or containing, the given value delimited by spaces.

Ends With: [attribute$="value"] (for example, div[class$="wrapper"]). Selects elements whose specified attribute has a value ending with the given value.

Equals: [attribute="value"] (for example, p[draggable="true"]). Selects elements whose specified attribute has a value exactly equal to the given value. This selector performs an exact match.

Not Equal: [attribute!="value"] (for example, a[target!="_blank"]). Selects elements that don't have the specified attribute, or have it with a value not equal to the given value.

Starts With: [attribute^="value"] (for example, img[alt^="photo"]). Selects elements whose specified attribute has a value starting with the given value.

Has: [attribute] (for example, input[placeholder]). Selects elements that have the specified attribute, regardless of its value.

As you've seen in the examples in this list, we've used all of these selectors together with other ones. Recalling what I said a few moments ago, sometimes you may have used them with an implicit selector. In fact, take the following example:

$('[placeholder]')

What's the "hidden" selector? If you guessed All, you can pat yourself on the back. You're really smart! In fact, it's equivalent to writing:

$('*[placeholder]')

How to do it...

There are quite a lot of Attribute selectors; therefore, we won't build an example for each of them. Instead, I'm going to show you two demos. The first will teach you the use of the Attribute Contains Word selector to print the number of collected elements on the console. The second will explain the use of the Attribute Has selector to print the value of the placeholder attribute of the retrieved nodes. Let's write some code!

To build the first example, follow these steps:

Create a copy of the template.html file and rename it as contain-word-selector.html.
Inside the <body> tag, add the following HTML markup:

<h1>Rank</h1>
<table>
  <thead>
    <th>Name</th>
    <th>Surname</th>
    <th>Points</th>
  </thead>
  <tbody>
    <tr>
      <td class="name">Aurelio</td>
      <td>De Rosa</td>
      <td class="highlight green">100</td>
    </tr>
    <tr>
      <td class="name">Nikhil</td>
      <td>Chinnari</td>
      <td class="highlight">200</td>
    </tr>
    <tr>
      <td class="name">Esha</td>
      <td>Thakker</td>
      <td class="red highlight void">50</td>
    </tr>
  </tbody>
</table>

Edit the <head> section, adding the following lines just after the <title>:

<style>
  .highlight { background-color: #FF0A27; }
</style>

Edit the <head> section of the page, adding this code:

<script>
  $(document).ready(function() {
    var $elements = $('table td[class~="highlight"]');
    console.log($elements.length);
  });
</script>

Save the file and open it with your favorite browser.

To create the second example, perform the following steps instead:

Create a copy of the template.html file and rename it as has-selector.html.

Inside the <body> tag, add the following HTML markup:

<form name="registration-form" id="registration-form" action="registration.php" method="post">
  <input type="text" name="name" placeholder="Name" />
  <input type="text" name="surname" placeholder="Surname" />
  <input type="email" name="email" placeholder="Email" />
  <input type="tel" name="phone-number" placeholder="Phone number" />
  <input type="submit" value="Register" />
  <input type="reset" value="Reset" />
</form>

Edit the <head> section of the page, adding this code:

<script>
  $(document).ready(function() {
    var $elements = $('input[placeholder]');
    for(var i = 0; i < $elements.length; i++) {
      console.log($elements[i].placeholder);
    }
  });
</script>

Save the file and open it with your favorite browser.

How it works...

In the first example, we created a table with four rows, one for the header and three for the data, and three columns. We assigned classes to several cells, and in particular, we used the class highlight. Then, we defined this class so that an element with it assigned will have a red background color. In the next step, we created our usual script (hey, this is still an article on jQuery, isn't it?) where we selected all of the <td> elements having the class highlight assigned that are descendants (in this case, we could use the Child selector as well) of a <table>. Once done, we simply print the number of collected elements. The console will confirm, as you can see for yourself by loading the page, that the matched elements are three. Well done!

In the second example, we created a little registration form. It won't really work, since the backend is totally missing, but it's good enough for our discussion. As you can see, our form takes advantage of some of the new features of HTML5, like the new <input> types email and tel, and the placeholder attribute. In our usual handler, we're picking up all of the <input> instances in the page having the placeholder attribute specified, and assigning them to a variable called $elements. We're prepending a dollar sign to the variable name to highlight that it stores a jQuery object. With the next block of code, we iterate over the collection to access the elements by their index position. We then log the placeholder's value on the console, accessing it using the dot operator. As you can see, we accessed the property directly, without using a method. This happens because the collection's elements are plain DOM elements and not jQuery objects.
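As an aside (this isn't one of the steps above), the same loop could be written by staying inside jQuery: each() hands us each plain DOM element in turn, and wrapping it in $(this) lets us read the attribute through attr() instead of the DOM property. A minimal sketch of that variant:

// Equivalent loop using jQuery's each(); $(this) wraps the plain DOM
// element so the attribute can be read with jQuery's attr() method.
$(document).ready(function() {
  $('input[placeholder]').each(function() {
    console.log($(this).attr('placeholder'));
  });
});

Either version logs the same values; the index-based loop in the recipe simply makes it more obvious that the collection holds plain DOM elements.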
If you replicated the demo correctly, you should see this output in your console:

In this article, we chose all of the page's <input> instances, not just those inside the form, since we haven't specified it. A better approach would be to restrict the selection using the form's id, which is very fast, as we've already discussed. Thus, our selection turns into:

$('#registration-form input[placeholder]')

We can have an even better selection using jQuery's find() method, which retrieves the descendants that match the given selector:

$('#registration-form').find('input[placeholder]')

There's more...

You can also use more than one attribute selector at once.

Multiple attribute selector

In case you need to select nodes that match two or more criteria, you can use the Multiple Attribute selector. You can chain as many selectors as you like; it isn't limited to two. Let's say that you want to select all of the <input> instances that are of type email and have the placeholder attribute specified; you would need to write:

$('input[type="email"][placeholder]')

Not equal selector

This selector isn't part of the CSS specifications, so it can't take advantage of the native querySelectorAll() method. The official documentation has a good hint to avoid the problem and achieve better performance: For better performance in modern browsers, use $("your-pure-css-selector").not('[name="value"]') instead.

Using filter() and attr()

jQuery really has a lot of methods to help you in your work and, thanks to this, you can achieve the same task in a multitude of ways. While the attribute selectors are important, it's worth seeing how you could achieve the same result as before using filter() and attr(). filter() is a function that accepts only one argument, which can be of different types, but you'll usually see code that passes a selector or a function. Its aim is to reduce a collection, iterating over it and keeping only the elements that match the given parameter. The attr() method, instead, accepts up to two arguments, and the first is usually an attribute name. We'll use it simply as a getter to retrieve the value of the elements' placeholder attribute.

To achieve our goal, replace the selection instruction with these lines:

var $elements = $('#registration-form input').filter(function() {
  return ($(this).attr("placeholder") !== undefined);
});

The main difference here is the anonymous function we passed to the filter() method. Inside the function, this refers to the DOM element currently being processed, so to be able to use jQuery's methods, we need to wrap the element in a jQuery object. Some of you may wonder why we haven't used the plain DOM element, accessing the placeholder property directly. The reason is that the result won't be the one expected. In fact, by doing so, you'll get an empty string as the value even if the placeholder attribute wasn't set for that element, making the strict test against undefined useless.

Summary

Thus, in this article, we learned how to select elements using attribute selectors.

Resources for Article:

Further resources on this subject:
jQuery refresher [Article]
Tips and Tricks for Working with jQuery and WordPress [Article]
jQuery User Interface Plugins: Tooltip Plugins [Article]
Read more
  • 0
  • 0
  • 4099