How-To Tutorials - Web Development


Troubleshooting FreeNAS server

Packt
28 Oct 2009
9 min read
Where to Look for Log Information

The first place to head whenever you have a configuration problem with FreeNAS is the related configuration section, to check that it is configured as expected. If, having double-checked the settings, the problem persists, the next port of call is the log and information files in the Diagnostics: section of the web interface.

Keep the Diagnostics Section Expanded
By default, the menu tree in the Diagnostics section of the web interface is collapsed, meaning the menu items aren't visible. To see the menu items, you need to click the word Diagnostics and the tree will expand. During initial setup, and if you are doing lots of troubleshooting, you can save yourself a click by having the Diagnostics section permanently expanded. To set this option, go to System: Advanced and tick the Navigation - Keep diagnostics in navigation expanded checkbox.

The Diagnostics section has five subsections: the first two are logs and information pages about the status of the FreeNAS server, and the other three are networking diagnostic tools and information.

Diagnostics: Logs

This section collates all the different log files generated by the FreeNAS server into one convenient place. There are several tabs, one for each service or log file type. Some of the information can be very technical, especially in the System tab; however, with some key information it becomes more readable. The tabs are as follows:

System: When FreeBSD (the underlying OS of FreeNAS) boots, various log entries are recorded here about the hardware of the server, along with various messages about the boot process.
FTP: This shows the activity on the FTP server, including successful and failed logins.
RSYNC: The log information for the RSYNC server is divided into three sections: Server, Client, and Local. Depending on which type of RSYNC operation you are interested in, click the appropriate tab.
SSHD: Here you will find log entries from the SSH server, including some limited startup information and records of logins and failed login attempts.
SMARTD: This tab logs the output of the S.M.A.R.T. daemon.
Daemon: Any other minor system service, like the built-in HTTP server, the Apple Filing Protocol server, and the Windows networking server (Samba), will log information to this page.
UPnP: The log information from the FreeNAS UPnP server, called "MediaTomb", is displayed here. The logging can be quite verbose, so careful attention is needed when reading it. Don't be distracted by entries such as "INFO: Config: option not found:", as this is just the server logging that it will use a default value for that particular attribute.
Settings: The Settings tab allows you to change how the log information is displayed, including the sort order and the number of entries shown.

What is a Daemon?
In UNIX speak, a daemon is a system service: a program that runs in the background performing certain tasks. The daemons in FreeNAS don't work with users in an interactive mode (via the monitor, mouse, and keyboard), and as such need a place to log the results (or problems) of their activities. The FreeNAS daemons are launched automatically by FreeBSD when it boots, and some depend on being enabled in the web interface.

Understanding Diagnostics Logs: System

The most complicated of all the log pages is the System log page. Here, FreeBSD logs information about the system, its hardware, and the startup process.
At first, this page can seem intimidating, but with a little help it can be very useful, particularly in tracking down hardware or driver related problems.

50 Log Entries Might Not Be Enough
The default number of log entries shown on the Diagnostics: Logs page is 50. For most situations this will be sufficient, but there can be times when it is not enough. For example, in the Diagnostics: Logs: System tab, the total number of log entries made during the boot process is more than 50. If you want to see how much system memory has been recognized by FreeBSD, you won't find it within the standard 50 entries. The solution is to increase the Number of log entries to show parameter on the Diagnostics: Logs: Settings tab.

The best way to learn to read the Diagnostics: Logs: System page is by example; below are several different log entry examples covering the CPU, memory, disks, and disk controllers:

kernel: FreeBSD 6.2-RELEASE-p11 #0: Wed Mar 12 18:17:49 CET 2008

This first entry shows the heritage of the FreeNAS server. It is based on FreeBSD, and in this particular case we see that this version of FreeNAS is using FreeBSD 6.2. There are plans (which may have already become reality) to use FreeBSD 7.0 as the base for FreeNAS.

kernel: CPU: Intel(R) Xeon(TM) CPU 1.70GHz (1680.52-MHz 686-class CPU)

Here, the type of CPU detected by FreeBSD is displayed. In this case, it is an Intel Xeon CPU running at 1.7GHz.

kernel: FreeBSD/SMP: Multiprocessor System Detected: 2 CPUs

If your system has more than one CPU or is a dual-core machine, then you will see an entry in the log file (like the one above) recognizing the second CPU. If your machine has Hyper-Threading technology, then the second logical processor will be reported like this:

Logical CPUs per core: 2

Apr 1 11:06:00 kernel: real memory = 268435456 (256 MB)
Apr 1 11:06:00 kernel: avail memory = 252907520 (241 MB)

These log entries show how much memory the system has detected. The difference in size between real memory and available memory is the difference between the amount of RAM physically installed in the computer and the amount of memory left over after the FreeBSD kernel is loaded.

kernel: atapci0: <Intel PIIX4 UDMA33 controller> port 0x1f0-0x1f7,0x3f6,0x170-0x177,0x376,0x1050-0x105f at device 7.1 on pci0
kernel: ata0: <ATA channel 0> on atapci0
kernel: ata1: <ATA channel 1> on atapci0

For disks to work on your FreeNAS server, a disk controller is needed; it will be either a standard ATA/IDE controller, a SATA controller, or a SCSI controller. Above are the log entries for a standard ATA controller built into the motherboard. You can see that it is an Intel controller and that two channels have been seen (the primary and the secondary).

kernel: atapci1: <SiS 181 SATA150 controller> irq 17 at device 5.0 on pci0
kernel: ata2: <ATA channel 0> on atapci1
kernel: ata3: <ATA channel 1> on atapci1

Like the ATA controller listed a moment ago, SATA controllers are all recognized at boot up. Here is a SiS 181 SATA 150 controller with two channels. They are listed as devices ata2 and ata3, as ata0 and ata1 are used by the standard ATA/IDE controller.

kernel: mpt0: <LSILogic 1030 Ultra4 Adapter> irq 17 at device 16.0 on pci0

Like IDE and SATA controllers, all recognized SCSI controllers are listed in the boot-up system log. Here, the controller is an LSILogic 1030 Ultra4.
kernel: ad0: 476940MB <WDC WD5000AAJB-00YRA0 12.01C02> at ata0-master UDMA100
kernel: ad4: 476940MB <Seagate ST3500320AS SD04> at ata2-master SATA150

Once the disk controllers are recognized by the system, FreeBSD can search to see which disks are attached. Above is an example of a Western Digital 500GB hard drive using the standard ATA interface at UDMA100 (100MB/s). There is also a 500GB Seagate drive connected using the SATA interface.

acd0: CDROM <TOSHIBA CD-ROM XM-7002B/1005> at ata1 as master UDMA33

When the CDROM (which is normally attached to an ATA/IDE controller) is recognized, it will look like the above.

kernel: da0 at ahd0 bus 0 target 0 lun 0
kernel: da0: <MAXTOR ATLAS10K4_73WLS DFL0> Fixed Direct Access SCSI-3 device
kernel: da0: 320.000MB/s transfers (160.000MHz, offset 127, 16bit), Tagged Queueing Enabled
kernel: da0: 70149MB (143666192 512 byte sectors: 255H 63S/T 8942C)

SCSI addressing is a little more complicated than that of ATA/IDE. In SCSI land, you have a controller, a channel (bus), a disk (target), and the Logical Unit Number (LUN). The example above shows that a disk (which has been assigned the device name da0) is found on the controller ahd0, on bus 0, as target 0, with LUN 0. SCSI controllers can have multiple buses and multiple targets. Further down, you can see that the disk is a MAXTOR 73GB SCSI-3 disk.

kernel: da0 at umass-sim0 bus 0 target 0 lun 0
kernel: da0: <Verbatim Store 'n' Go 1.30> Removable Direct Access SCSI-2 device
kernel: da0: 40.000MB/s transfers
kernel: da0: 963MB (1974271 512 byte sectors: 64H 32S/T 963C)

If you are using a USB flash disk for storing the configuration information, it will most likely appear in the log file as a type of SCSI disk. The above example shows a 1GB Verbatim Store 'n' Go disk.

kernel: lnc0: <PCNet/PCI Ethernet adapter> irq 18 at device 17.0 on pci0
kernel: lnc0: Ethernet address: 00:0c:29:a5:9a:28

Another important device that needs to work correctly on your system is the network interface card. Like disk controllers and disks, it is logged when FreeBSD recognizes it. Above is an example of an AMD Lance/PCNet-based Ethernet adapter. Each Ethernet card has a unique address known as the Ethernet address or the MAC address, made up of six hexadecimal numbers separated by colons. Once found, FreeBSD queries the card for its MAC address and logs the result. In the above example, it is "00:0c:29:a5:9a:28".

Converting between Device Names and the Real World

In the SCSI example above, the SCSI controller listed is ahd0. The trick to understanding these log entries better is knowing how to interpret the device name ahd0. First of all, ahd0 means it is a device using the ahd driver, and it is the first one in the system (with numbering starting from 0). So what is ahd? The first place to look is further up in the log file. There should be an entry like:

kernel: ahd0: <Adaptec 39320 Ultra320 SCSI adapter> irq 11 at device 1.0 on pci2

This shows that the particular device is an Adaptec 39320 SCSI-3 controller. You can also find out more about the ahd driver (and all FreeBSD drivers) at http://www.freebsd.org/releases/6.2R/hardware-i386.html. Search for ahd and you will find which controllers this driver supports (in this case, they are all controllers from Adaptec). If you click on the link provided, you will be taken to a specific help page about this driver. When FreeNAS moves to FreeBSD 7, the relevant web page will be http://www.freebsd.org/releases/7.0R/hardware.html.


Core Data: Designing a Data Model and Building Data Objects

Packt
05 May 2011
7 min read
To design a data model with Core Data, we need to create a new project. So, let's start there...

Creating a new project

To create a new project, perform the following steps:

Launch Xcode and create a new project by selecting the File | New Project option. The New Project Assistant window will appear, prompting us to select a template for the new project, as shown in the next screenshot. We will select the Navigation-based Application template. Ensure that the Use Core Data for storage checkbox is checked and click on the Choose... button. On selecting the Choose... button, we will get a dialog box to specify the name and the location of the project. Let us keep the location the same as the default (the Documents folder) and assign the project the name prob (any name will do). Click on Save. Xcode will then generate the project files, and the project opens in the Xcode project window.

The Use Core Data for storage checkbox asks Xcode to provide all the default code that is required for using Core Data. This option is visible with only two project templates: the Navigation-based Application and Window-based Application templates.

Designing the data model

Designing a data model means defining entities, attributes, and relationships for our application using a special tool. Xcode includes a data modeling tool (also known as the Data Model Editor, or simply a modeler) that facilitates creating entities, defining attributes, and defining the relationships among them.

Data Model Editor

The Data Model Editor is a data modeling tool provided by Xcode that makes the job of designing a data model quite easy. It displays a browser as well as a diagram view of the data model. The Browser view displays two panes, the Entity pane and the Properties pane, for defining entities and their respective properties. The diagram view displays rounded rectangles that designate entities, and lines to show relationships among the entities.

Adding an entity

To add an entity to our data model, perform the following steps:

Invoke the data modeling tool by double-clicking the prob.xcdatamodel file in the Resources group found in the Xcode Project window. Xcode's data modeling tool will open, and we will find that an entity has already been created for us by default, named Event (as shown in the next image), with one attribute: timeStamp. We can delete or rename the default Event entity as desired. Let us select the default Event entity and delete it by clicking on the minus (-) button in the Entity pane. Then add a new entity, either by clicking the plus (+) button in the Entity pane or by choosing the Design | Data Model | Add Entity option from the menu bar. This will add a blank entity (named Entity) to our data model, which we can rename as per our requirements. Let us set the name of the new entity to Customer. An instance of NSManagedObject will automatically be created to represent our newly created Customer entity. The next step is to add attributes to this entity.

Adding an attribute property

We want to add three attributes—name, emailid, and contactno—to the Customer entity. Let's follow the steps mentioned next to do so:

Select the entity and choose the Design | Data Model | Add Attribute option from the menu bar, or select the + (plus) button in the Property pane. A menu with several options, such as Add Attribute, Add Fetched Property, Add Relationship, and Add Fetch Request, will pop up. We select the Add Attribute option from this menu.
We see that a new attribute property is created for our Customer entity, with the default name newAttribute shown in the inspector. Let us rename the new attribute to name (as we will be using this attribute to store the names of the customers). Then, we set the type of the name attribute to String, as shown in the next screenshot (as names consist of strings).

Below the Name field are three checkboxes: Optional, Transient, and Indexed. Though we will only be using the Optional checkbox for the name attribute, let us see the usage of all three:

Optional: If this checkbox is checked, the entity can be saved even if the attribute is nil (empty). If this checkbox is unchecked and we try to save the entity with this attribute set to nil, it will result in a validation error. When used with a relationship, a checked checkbox means that the relationship can be empty. Suppose we create one more entity, say Credit Card (where information on the customer's credit card is kept). In that case, the relationship from customer to credit card will be optional (we have to leave this checkbox checked), as a customer may or may not have a credit card. And if we create an entity, say Product, the relationship from Customer to Product cannot be empty, as a customer will definitely buy at least a single product (the checkbox has to be unchecked).

Transient: This checkbox, if checked, means that the attribute or relationship is of a temporary nature, and we don't want it to be stored (persisted) in the persistent store. This checkbox must be unchecked for the attributes or relationships that we want to persist (to be stored on disk).

Indexed: This checkbox has to be checked to apply indexing to the attribute. It is used when we want to perform sorting or searching on an attribute. By checking this checkbox, an index will be created on that attribute and the database will be ordered on it.

Types of attributes

Using the Type drop-down list control, we select the data type (that is, numerical, string, date, and so on) of the attribute to specify the kind of information that can be stored in it. The following is the list of data types:

Integer 16, Integer 32, and Integer 64 data types are for storing signed integers. The ranges of values that these types are able to store are as follows:
Integer 16: -32,768 to 32,767
Integer 32: -2,147,483,648 to 2,147,483,647
Integer 64: -9,223,372,036,854,775,808 to 9,223,372,036,854,775,807

Decimal, Double, and Float data types are for storing fractional numbers. The Double data type uses 64 bits to store a value, while the Float data type uses 32 bits. The only limitation with these two data types is that they round off values. To avoid any rounding, the Decimal data type is preferred. The Decimal type uses fixed-point numbers for storing values, so the numerical value stored in it is not rounded off.

String data type is used for storing text content.

Boolean data type is used for storing YES or NO values.

Date data type is used for storing dates as well as timestamps.

Binary data type is used for storing binary data.

Transformable data type works along with Value Transformers, which help us create attributes based on any Objective-C class; that is, we can create custom data types other than the standard ones. This data type can be used to store an instance of UIColor, UIImage, and so on. It archives objects to instances of NSData.
Below the Type drop-down menu, we will see a few more fields in the detail pane, as shown in the next screenshot:

Fields applying constraints

The Min Length: and Max Length: fields apply constraints on the minimum and maximum number of characters in an attribute. Exceeding the range supplied in these fields results in a validation error; that is, if we enter a string with fewer characters than the value supplied in the Min Length: field, or with more characters than the value supplied in the Max Length: field, a validation error occurs while saving managed objects.

The Reg. Ex: field stands for regular expression and is used for applying validation checks on the data entered in the attribute by making use of regular expressions.

The Default Value: field is for specifying the default value of the attribute. If we create a new managed object, the attribute will automatically be set to the default value specified in this field.

Let us add two more attributes to the Customer entity: emailid and contactno (for storing a customer's e-mail address and contact number, respectively). These two attributes will also be of type String, as shown in the next screenshot. Now, save the .xcdatamodel.


Getting Started with React

Packt
24 Feb 2016
7 min read
In this article by Vipul Amler and Prathamesh Sonpatki, authors of the book ReactJS by Example - Building Modern Web Applications with React, we will learn how web development has seen a huge advent of Single Page Applications (SPAs) in the past couple of years. Early development was simple—reload a complete page to perform a change in the display or perform a user action. The problem with this was a huge round-trip time for the complete request to reach the web server and come back to the client. Then came AJAX, which sent a request to the server and could update parts of the page without reloading the current page. Moving in the same direction, we saw the emergence of SPAs: wrapping up the heavy frontend content and delivering it to the client browser just once, while maintaining a small channel for communication with the server based on any event; this is usually complemented by a thin API on the web server. The growth in such apps has been complemented by JavaScript libraries and frameworks such as Ext JS, KnockoutJS, BackboneJS, AngularJS, EmberJS, and more recently, React and Polymer.

(For more resources related to this topic, see here.)

Let's take a look at how React fits into this ecosystem and get introduced to it in this article.

What is React?

ReactJS tries to solve the problem from the View layer. It can very well be defined and used as the V in any of the MVC frameworks. It's not opinionated about how it should be used. It creates abstract representations of views. It breaks down parts of the view into Components. These components encompass both the logic to handle the display of the view and the view itself. They can contain data that is used to render the state of the app. To avoid the complexity of interactions and the subsequent render processing required, React does a full render of the application. It maintains a simple flow of work.

React is founded on the idea that DOM manipulation is an expensive operation and should be minimized. It also recognizes that optimizing DOM manipulation by hand will result in a lot of boilerplate code, which is error-prone, boring, and repetitive. React solves this by giving the developer a virtual DOM to render to instead of the actual DOM. It finds the difference between the real DOM and the virtual DOM and conducts the minimum number of DOM operations required to achieve the new state. React is also declarative. When the data changes, React conceptually hits the refresh button and knows to update only the changed parts. This simple flow of data, coupled with dead simple display logic, makes development with ReactJS straightforward and simple to understand.

Who uses React?

If you've used any of the services such as Facebook, Instagram, Netflix, Alibaba, Yahoo, eBay, Khan Academy, Airbnb, Sony, and Atlassian, you've already come across and used React on the Web. In just under a year, React has seen adoption from major Internet companies in their core products.

At its first-ever conference, React also announced the development of React Native. React Native allows the development of mobile applications using React. It transpiles React code to native application code, such as Objective-C for iOS applications. At the time of writing this, Facebook already uses React Native in its Groups iOS app.

In this article, we will be following a conversation between two developers, Mike and Shawn. Mike is a senior developer at Adequate Consulting, and Shawn has just joined the company. Mike will be mentoring Shawn and pair programming with him.
When Shawn meets Mike and ReactJS

It's a bright day at Adequate Consulting. It's also Shawn's first day at the company. Shawn had joined Adequate to work on its amazing products, and also because it uses and develops exciting new technologies.

After his onboarding, Shelly, the CTO, introduced Shawn to Mike. Mike, a senior developer at Adequate, is a jolly man who loves exploring new things.

"So Shawn, here's Mike", said Shelly. "He'll be mentoring you as well as pairing with you on development. We follow pair programming, so expect a lot of it with him. He's an excellent help." With that, Shelly took leave.

"Hey Shawn!" Mike began, "are you all set to begin?"

"Yeah, all set! So what are we working on?"

"Well, we are about to start working on an app using https://openlibrary.org/. Open Library is a collection of the world's classic literature. It's an open, editable library catalog for all the books. It's an initiative under https://archive.org/ and lists free book titles. We need to build an app to display the most recent changes in the records of Open Library. You can call this the Activities page. Many people contribute to Open Library. We want to display the changes made by these users to the books—additions of new books, edits, and so on, as shown in the following screenshot:"

"Oh nice! What are we using to build it?"

"Open Library provides us with a neat REST API that we can consume to fetch the data. We are just going to build a simple page that displays the fetched data and formats it for display. I've been experimenting with and using ReactJS for this. Have you used it before?"

"Nope. However, I have heard about it. Isn't it the one from Facebook and Instagram?"

"That's right. It's an amazing way to define our UI. As the app isn't going to have much logic on the server or perform any display there, it is an easy option to use."

"As you've not used it before, let me give you a quick introduction. Have you tried services such as JSBin and JSFiddle before?"

"No, but I have seen them."

"Cool. We'll be using one of these, therefore, we don't need anything set up on our machines to start with. Let's try on your machine", Mike instructed. "Fire up http://jsbin.com/?html,output. You should see something similar to the tabs and panes to code on, with their output in an adjacent pane. Go ahead and make sure that the HTML, JavaScript, and Output tabs are clicked, so you can see three frames for them and we are able to edit the HTML and JS and see the corresponding output."

"That's nice."

"Yeah, the good thing about this is that you don't need to perform any setup. Did you notice the Auto-run JS option? Make sure it's selected. This option causes JSBin to reload our code and show its output, so that we don't need to keep clicking Run with JS to execute and see the output."

"Ok."

Requiring the React library

"Alright then! Let's begin. Go ahead and change the title of the page to, say, React JS Example. Next, we need to set up and require the React library in our file. React's homepage is located at http://facebook.github.io/react/. Here we'll also locate the downloads available for us, so that we can include them in our project. There are different ways to include and use the library. We can make use of bower or install via npm. We can also just include it as an individual download, directly available from the fb.me domain. There is a development version, which is the full version of the library, and a production version, which is its minified counterpart. There is also an add-ons version.
We'll take a look at this later though. Let's start by using the development version, which is the unminified version of the React source. Add the following to the file header:"

<script src="http://fb.me/react-0.13.0.js"></script>

"Done."

"Awesome, let's see how this looks."

<!DOCTYPE html>
<html>
<head>
  <script src="http://fb.me/react-0.13.0.js"></script>
  <meta charset="utf-8">
  <title>React JS Example</title>
</head>
<body>
</body>
</html>
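The excerpt ends before the component itself is written, although the summary below refers to it. As a bridge, here is a minimal sketch of what a first component could look like with the React 0.13 build included above; the component name, prop, and rendered markup are our own illustration, not code from the book:

<script>
  // A component is created from a spec object; render() describes the view.
  var HelloMessage = React.createClass({
    render: function() {
      // React.createElement(type, props, children) builds a virtual DOM node.
      return React.createElement('h1', null, 'Hello, ' + this.props.name + '!');
    }
  });

  // Mount the component into the empty <body> from the listing above;
  // React takes care of the actual DOM updates from here on.
  React.render(
    React.createElement(HelloMessage, { name: 'Shawn' }),
    document.body
  );
</script>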
Summary

In this article, we started with React and built our first component. In the process, we studied the top-level API of React for constructing components and elements.

Resources for Article:

Further resources on this subject: Create Your First React Element [article], An Introduction to ReactJs [article], An Introduction to Reactive Programming [article]


Exploring streams

Packt
22 Nov 2013
15 min read
(For more resources related to this topic, see here.)

According to Bjarne Stroustrup in his book The C++ Programming Language, Third Edition:

Designing and implementing a general input/output facility for a programming language is notoriously difficult... An I/O facility should be easy, convenient, and safe to use; efficient and flexible; and, above all, complete.

It shouldn't surprise anyone that a design team focused on providing efficient and easy I/O has delivered such a facility through Node. Through a symmetrical and simple interface, which handles data buffers and stream events so that the implementer does not have to, Node's Stream module is the preferred way to manage asynchronous data streams for both internal modules and, hopefully, for the modules developers will create.

A stream in Node is simply a sequence of bytes. At any time, a stream contains a buffer of bytes, and this buffer has a zero or greater length. Because each character in a stream is well defined, and because every type of digital data can be expressed in bytes, any part of a stream can be redirected, or "piped", to any other stream, different chunks of the stream can be sent to different handlers, and so on. In this way, stream input and output interfaces are both flexible and predictable, and can be easily coupled. Digital streams are well described using the analogy of fluids, where individual bytes (drops of water) are being pushed through a pipe.

In Node, streams are objects representing data flows that can be written to and read from asynchronously. The Node philosophy is a non-blocking flow, I/O is handled via streams, and so the design of the Stream API naturally duplicates this general philosophy. In fact, there is no other way of interacting with streams except in an asynchronous, evented manner—you are prevented, by design, from blocking I/O.

Five distinct base classes are exposed via the abstract Stream interface: Readable, Writable, Duplex, Transform, and PassThrough. Each base class inherits from EventEmitter, which we know of as an interface to which event listeners and emitters can be bound.

As we will learn, and here emphasize, the Stream interface is an abstract interface. An abstract interface functions as a kind of blueprint or definition, describing the features that must be built into each constructed instance of a Stream object. For example, a readable stream implementation is required to implement a public read method, which delegates to the interface's internal _read method.

In general, all stream implementations should follow these guidelines:

As long as data exists to send, write to a stream until that operation returns false, at which point the implementation should wait for a drain event, indicating that the buffered stream data has emptied.
Continue to call read until a null value is received, at which point wait for a readable event prior to resuming reads.

Several Node I/O modules are implemented as streams: network sockets, file readers and writers, stdin and stdout, zlib, and so on. Similarly, when implementing a readable data source, or data reader, one should implement that interface as a Stream interface.

It is important to note that as of Node 0.10.0 the Stream interface changed in some fundamental ways. The Node team has done its best to implement backwards-compatible interfaces, such that (most) older programs will continue to function without modification.
In this article, we will not spend any time discussing the specific features of this older API, focusing instead on the current (and future) design. The reader is encouraged to consult Node's online documentation for information on migrating older programs.

Implementing readable streams

Streams producing data that another process may have an interest in are normally implemented using a Readable stream. A Readable stream saves the implementer all the work of managing the read queue, handling the emitting of data events, and so on. To create a Readable stream:

var stream = require('stream');
var readable = new stream.Readable({
  encoding: "utf8",
  highWaterMark: 16000,
  objectMode: true
});

As previously mentioned, Readable is exposed as a base class, which can be initialized through three options:

encoding: Decode buffers into the specified encoding, defaulting to UTF-8.
highWaterMark: Number of bytes to keep in the internal buffer before ceasing to read from the data source. The default is 16 KB.
objectMode: Tell the stream to behave as a stream of objects instead of a stream of bytes, such as a stream of JSON objects instead of the bytes in a file. Default false.

In the following example, we create a mock Feed object whose instances will inherit the Readable stream interface. Our implementation need only implement the abstract _read method of Readable, which will push data to a consumer until there is nothing more to push, at which point it triggers the Readable stream to emit an "end" event by pushing a null value:

var Feed = function(channel) {
  var readable = new stream.Readable({
    encoding: "utf8"
  });
  var news = [
    "Big Win!",
    "Stocks Down!",
    "Actor Sad!"
  ];
  readable._read = function() {
    if(news.length) {
      return readable.push(news.shift() + "\n");
    }
    readable.push(null);
  };
  return readable;
}

Now that we have an implementation, a consumer might want to instantiate the stream and listen for stream events. Two key events are readable and end. The readable event is emitted as long as data is being pushed to the stream. It alerts the consumer to check for new data via the read method of Readable. Note again how the Readable implementation must provide a private _read method, which services the public read method exposed to the consumer API. The end event will be emitted whenever a null value is passed to the push method of our Readable implementation. Here we see a consumer using these methods to display new stream data, providing a notification when the stream has stopped sending data:

var feed = new Feed();
feed.on("readable", function() {
  var data = feed.read();
  data && process.stdout.write(data);
});
feed.on("end", function() {
  console.log("No more news");
});

Similarly, we could implement a stream of objects through the use of the objectMode option:

var readable = new stream.Readable({
  objectMode: true
});
var prices = [
  { price: 1 },
  { price: 2 }
];
...
readable.push(prices.shift());
// { price: 1 }
// { price: 2 }

Here we see that each read event is receiving an object, rather than a buffer or string. Finally, the read method of a Readable stream can be passed a single argument indicating the number of bytes to be read from the stream's internal buffer. For example, if it was desired that a file should be read one byte at a time, one might implement a consumer using a routine similar to:

readable.push("Sequence of bytes");
...
feed.on("readable", function() {
  var character;
  while(character = feed.read(1)) {
    console.log(character);
  };
});
// S
// e
// q
// ...
Here it should be clear that the Readable stream's buffer was filled with a number of bytes all at once, but was read from discretely.

Pushing and pulling

We have seen how a Readable implementation will use push to populate the stream buffer for reading. When designing these implementations, it is important to consider how volume is managed at either end of the stream. Pushing more data into a stream than can be read can lead to complications around exceeding available space (memory). At the consumer end, it is important to maintain awareness of termination events and how to deal with pauses in the data stream.

One might compare the behavior of data streams running through a network with that of water running through a hose. As with water through a hose, if a greater volume of data is being pushed into the read stream than can be efficiently drained out of the stream at the consumer end through read, a great deal of back pressure builds, causing a data backlog to begin accumulating in the stream object's buffer. Because we are dealing with strict mathematical limitations, read simply cannot be compelled to release this pressure by reading more quickly—there may be a hard limit on available memory space, or some other limitation. As such, memory usage can grow dangerously high, buffers can overflow, and so forth.

A stream implementation should therefore be aware of, and respond to, the response from a push operation. If the operation returns false, this indicates that the implementation should cease reading from its source (and cease pushing) until the next _read request is made. In conjunction with the above, if there is no more data to push but more is expected in the future, the implementation should push an empty string (""), which adds no data to the queue but does ensure a future readable event.

While the most common treatment of a stream buffer is to push to it (queuing data in a line), there are occasions where one might want to place data on the front of the buffer (jumping the line). Node provides an unshift operation for these cases, whose behavior is identical to push, apart from the aforementioned difference in buffer placement.
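Putting that push guidance into code, the following is a minimal sketch of our own (not from the article) of a Readable that stops pushing as soon as push returns false and resumes only when the next _read request arrives:

var stream = require('stream');

var counter = new stream.Readable({ encoding: "utf8" });
var current = 0;
var last = 1000000;

counter._read = function() {
  // Keep pushing until the internal buffer is full or the source is spent.
  while (current <= last) {
    var bufferHasRoom = this.push(current++ + "\n");
    if (!bufferHasRoom) {
      return; // Back pressure: wait for the next _read call before pushing more.
    }
  }
  this.push(null); // Source exhausted: signal the end event.
};

counter.pipe(process.stdout);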
Writable streams

A Writable stream is responsible for accepting some value (a stream of bytes, a string) and writing that data to a destination. Streaming data into a file container is a common use case. To create a Writable stream:

var stream = require('stream');
var writable = new stream.Writable({
  highWaterMark: 16000,
  decodeStrings: true
});

The Writable stream constructor can be instantiated with two options:

highWaterMark: The maximum number of bytes the stream's buffer will accept prior to returning false on writes. The default is 16 KB.
decodeStrings: Whether to convert strings into buffers before writing. The default is true.

As with Readable streams, custom Writable stream implementations must implement a _write handler, which will be passed the arguments sent to the write method of instances. One should think of a Writable stream as a data target, such as for a file you are uploading. Conceptually, this is not unlike the implementation of push in a Readable stream, where one pushes data until the data source is exhausted, passing null to terminate reading.

For example, here we write 100 bytes to stdout:

var stream = require('stream');
var writable = new stream.Writable({
  decodeStrings: false
});
writable._write = function(chunk, encoding, callback) {
  console.log(chunk);
  callback();
}
var w = writable.write(new Buffer(100));
writable.end();
console.log(w); // Will be `true`

There are two key things to note here. First, our _write implementation fires the callback function immediately after writing, a callback that is always present, regardless of whether the instance write method is passed a callback directly. This call is important for indicating the status of the write attempt, whether a failure (error) or a success. Second, the call to write returned true. This indicates that the internal buffer of the Writable implementation has been emptied after executing the requested write.

What if we sent a very large amount of data, enough to exceed the default size of the internal buffer? Modifying the above example, the following would return false:

var w = writable.write(new Buffer(16384));
console.log(w); // Will be 'false'

The reason this write returns false is that it has reached the highWaterMark option—default value of 16 KB (16 * 1024). If we wrote 16,383 bytes instead, write would again return true (or one could simply increase the highWaterMark value).

What to do when write returns false? One should certainly not continue to send data! Returning to our metaphor of water in a hose: when the stream is full, one should wait for it to drain prior to sending more data. Node's Stream implementation will emit a drain event whenever it is safe to write again. When write returns false, listen for the drain event before sending more data.

Putting together what we have learned, let's create a Writable stream with a highWaterMark value of 10 bytes. We will send a buffer containing more than 10 bytes (composed of A characters) to this stream, triggering a drain event, at which point we write a single Z character. It should be clear from this example that Node's Stream implementation is managing the buffer overflow of our original payload, warning the original write method of this overflow, performing a controlled depletion of the internal buffer, and notifying us when it is safe to write again:

var stream = require('stream');
var writable = new stream.Writable({
  highWaterMark: 10
});
writable._write = function(chunk, encoding, callback) {
  process.stdout.write(chunk);
  callback();
}
writable.on("drain", function() {
  writable.write("Z\n");
});
var buf = new Buffer(20, "utf8");
buf.fill("A");
console.log(writable.write(buf.toString())); // false

The result should be a string of 20 A characters, followed by false, then followed by the character Z.

The fluid data in a Readable stream can be easily redirected to a Writable stream. For example, the following code will take any data sent by a terminal (stdin is a Readable stream) and pass it to the destination Writable stream, stdout:

process.stdin.pipe(process.stdout);

Whenever a Writable stream is passed to a Readable stream's pipe method, a pipe event will fire. Similarly, when a Writable stream is removed as a destination for a Readable stream, the unpipe event fires. To remove a pipe, use the following:

unpipe(destination stream)
Duplex streams

A duplex stream is both readable and writable. For instance, a TCP server created in Node exposes a socket that can be both read from and written to:

var stream = require("stream");
var net = require("net");
net
  .createServer(function(socket) {
    socket.write("Go ahead and type something!");
    socket.on("readable", function() {
      process.stdout.write(this.read());
    });
  })
  .listen(8080);

When executed, this code will create a TCP server that can be connected to via Telnet:

telnet 127.0.0.1 8080

Upon connection, the connecting terminal will print out Go ahead and type something!—writing to the socket. Any text entered in the connecting terminal will be echoed to the stdout of the terminal running the TCP server (reading from the socket). This implementation of a bi-directional (duplex) communication protocol demonstrates clearly how independent processes can form the nodes of a complex and responsive application, whether communicating across a network or within the scope of a single process.

The options sent when constructing a Duplex instance merge those sent to Readable and Writable streams, with no additional parameters. Indeed, this stream type simply assumes both roles, and the rules for interacting with it follow the rules for the interactive mode being used. As a Duplex stream assumes both read and write roles, any implementation is required to implement both _write and _read methods, again following the standard implementation details given for the relevant stream type.
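As a concrete illustration of that requirement, here is a small custom Duplex sketch of our own (not from the article): an in-process echo stream whose writable side feeds its readable side, with the data uppercased along the way:

var stream = require('stream');

var echo = new stream.Duplex({ encoding: 'utf8' });

echo._write = function(chunk, encoding, callback) {
  // Feed the readable side directly from the write side.
  this.push(chunk.toString().toUpperCase());
  callback();
};

echo._read = function() {
  // Nothing to do here: data arrives via _write rather than from a
  // source we poll, so read requests are satisfied by the pushes above.
};

echo.pipe(process.stdout);
echo.write('hello duplex\n'); // prints HELLO DUPLEX
echo.end();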
Transforming streams

On occasion, stream data needs to be processed, often in cases where one is writing some sort of binary protocol or other "on the fly" data transformation. A Transform stream is designed for this purpose, functioning as a Duplex stream that sits between a Readable stream and a Writable stream.

A Transform stream is initialized using the same options used to initialize a typical Duplex stream. Where Transform differs from a normal Duplex stream is in its requirement that the custom implementation merely provide a _transform method, excluding the _write and _read method requirement. The _transform method will receive three arguments: first the sent buffer, then an optional encoding argument, and finally a callback, which _transform is expected to call when the transformation is complete:

_transform = function(buffer, encoding, cb) {
  var transformation = "...";
  this.push(transformation);
  cb();
}

Let's imagine a program that wishes to convert ASCII (American Standard Code for Information Interchange) codes into characters, receiving input from stdin. We would simply pipe our input to a Transform stream, then pipe its output to stdout:

var stream = require('stream');
var converter = new stream.Transform();
converter._transform = function(num, encoding, cb) {
  this.push(String.fromCharCode(new Number(num)) + "\n");
  cb();
}
process.stdin.pipe(converter).pipe(process.stdout);

Interacting with this program might produce an output resembling the following:

65 A
66 B
256 Ā
257 ā

Using PassThrough streams

This sort of stream is a trivial implementation of a Transform stream, which simply passes received input bytes through to an output stream. This is useful if one doesn't require any transformation of the input data and simply wants to easily pipe a Readable stream to a Writable stream. PassThrough streams have benefits similar to JavaScript's anonymous functions, making it easy to assert minimal functionality without too much fuss. For example, it is not necessary to implement an abstract base class, as one does for the _read method of a Readable stream. Consider the following use of a PassThrough stream as an event spy:

var fs = require('fs');
var spy = new (require('stream').PassThrough)();
spy.on('end', function() {
  console.log("All data has been sent");
});
fs.createReadStream("./passthrough.js").pipe(spy).pipe(process.stdout);

Summary

As we have learned, Node's designers have succeeded in creating a simple, predictable, and convenient solution to the very difficult problem of enabling efficient I/O between disparate sources and targets. Its abstract Stream interface facilitates the instantiation of consistent readable and writable interfaces, and the extension of this interface into HTTP requests and responses, the filesystem, child processes, and other data channels makes stream programming with Node a pleasant experience.

Resources for Article:

Further resources on this subject: So, what is Node.js? [Article], Getting Started with Zombie.js [Article], So, what is KineticJS? [Article]


Building Responsive Image Sliders

Packt
20 Nov 2013
7 min read
(For more resources related to this topic, see here.)

Responsive image sliders

Opening a website and seeing an image slider in the header area is common nowadays. Image sliders display highlighted content within a limited space, which is really useful. Although the free space is even more limited when a site is viewed on mobile devices, the slider element still catches the client's attention.

The gap between the area available to display highlighted content and the resources available to render it is really big compared with desktop, where we generally do not have problems with script performance, and where each transition is triggered by clicking arrow signs to switch images. When the responsive era started, the way people normally interacted with image sliders was observed, and changes, such as the way to switch between slides, were identified, based on the progressive enhancement concept. The solution was to provide a similar experience to users of mobile devices: the adoption of gestures and touches on image slider elements for devices that accept them, instead of displaying fallbacks.

With the constant evolution of browsers and technologies, there are many image slider plugins with responsive characteristics. My personal favorite plugins are Elastislide, FlexSlider2, ResponsiveSlides, Slicebox, and Swiper. There are plenty available, and the only way to find one you truly like is to try them! Let's look in detail at how each of them works.

Elastislide plugin

Elastislide is a responsive image slider, based on jQuery, that adapts its size and behavior in order to work on any screen size. This jQuery plugin handles the slider's structure, including the images with percentage-based widths inside it, displaying it horizontally or vertically with a predefined minimum number of shown images. Elastislide is licensed under the MIT license and can be downloaded from https://github.com/codrops/Elastislide.

When we are implementing an image slider, simply decreasing the container size and displaying a horizontal scrollbar will not solve the problem for small devices gracefully. The recommendation is to resize the internal items too. Elastislide fixes this resizing issue very well, and lets us define the minimum number of elements we want to show, instead of simply hiding them using CSS.

Also, Elastislide uses a complementary and customized version of a jQuery library named jQuery++. jQuery++ is another JavaScript library, very useful for dealing with the DOM and special events. In this case, Elastislide uses a custom version of jQuery++, which enables the plugin to work with swipe events on touch devices.
How to do it

As we will see four different applications of this plugin for the same carousel, we will use the same HTML carousel structure and modify only the JavaScript that executes the plugin, specifying its parameters:

<ul id="carousel" class="elastislide-list">
  <li><a href="#"><img src="image-photo.jpg" /></a></li>
  <li><a href="#"><img src="image-sky.jpg" /></a></li>
  <li><a href="#"><img src="image-gardem.jpg" /></a></li>
  <li><a href="#"><img src="image-flower.jpg" /></a></li>
  <li><a href="#"><img src="image-belt.jpg" /></a></li>
  <li><a href="#"><img src="image-wall.jpg" /></a></li>
  <li><a href="#"><img src="image-street.jpg" /></a></li>
</ul>

At the bottom of the DOM (before the </body> closing tag), we will need to include the jQuery and jQuery++ libraries (required for this solution), and then the ElastiSlide script:

<script src="http://code.jquery.com/jquery-1.9.1.min.js"></script>
<script src="js/jquerypp.custom.js"></script>
<script src="js/modernizr.custom.17475.js"></script>
<script src="js/jquery.elastislide.js"></script>

Then, include the CSS stylesheet inside the <head> tag:

<link rel="stylesheet" type="text/css" href="css/elastislide.css" />

Alright, now we have the basis for four different examples. For each example, you must supply different parameters when executing the plugin script, in order to get different rendering depending on the project's needs.

Example 1 – minimum of three visible images (default)

In this first example, we will see the default visuals and behavior, which we get by putting the following code right after the inclusion of the ElastiSlide plugin:

<script type="text/javascript">
  $('#carousel').elastislide();
</script>

The default options that come with this solution are:

A minimum of three items will be shown
The speed of the scroll effect is 0.5 seconds
Horizontal orientation
The easing effect is defined as ease-in-out
The carousel will start by showing the first image on the list

The following screenshot represents what the implementation of this code will look like. Note the difference between its versions shown on tablets and smartphones:

Example 2 – vertical with a minimum of three visible images

There is an option to render the carousel vertically, just by changing one parameter. Furthermore, we may speed up the scrolling effect. Remember to include the same files used in Example 1, and then insert the following code into the DOM:

<script type="text/javascript">
  $('#carousel').elastislide({
    orientation: 'vertical',
    speed: 250
  });
</script>

By default, three images are displayed as a minimum. But this minimum value can be modified, as we will see in our next example.

Example 3 – fixed wrapper with a minimum of two visible images

In this example, we will define the minimum number of visible items in the carousel. The difference may be noticed when the carousel is viewed on small screens: the images will not shrink too much. Also, we may define which image is shown first, starting here from the third one. Remember to include the same files that were used in Example 1, and then execute the script with the following parameters, positioning it after the inclusion of the ElastiSlide plugin:

<script>
  $('#carousel').elastislide({
    minItems: 2,
    start: 2
  });
</script>

Example 4 – minimum of four images visible in an image gallery

In the fourth example, we see more JavaScript implementation. However, the main objective of this example is to show the possibilities this plugin provides to us.
Through the use of the plugin's callback functions and private functions, we may track the click and the current image, and then handle this image change on demand, creating an image gallery:

<script>
  var current = 0;
  var $preview = $('#preview');
  var $carouselEl = $('#carousel');
  var $carouselItems = $carouselEl.children();
  var carousel = $carouselEl.elastislide({
    current: current,
    minItems: 4,
    onClick: function(el, pos, evt) {
      changeImage(el, pos);
      evt.preventDefault();
    },
    onReady: function() {
      changeImage($carouselItems.eq(current), current);
    }
  });
  function changeImage(el, pos) {
    $preview.attr('src', el.data('preview'));
    $carouselItems.removeClass('current-img');
    el.addClass('current-img');
    carousel.setCurrent(pos);
  }
</script>

For this purpose, ElastiSlide may not have big advantages compared with other plugins, because it depends on extra development on our part to finish the gallery. So, let's see what the next plugin offers to solve this problem.

Summary

This article explained the Elastislide plugin through four different example implementations, which demonstrate the range of uses the plugin supports.

Resources for Article:

Further resources on this subject: Top Features You Need to Know About – Responsive Web Design [Article], Different strategies to make responsive websites [Article], So, what is KineticJS? [Article]


How to build a JavaScript Microservices platform

Andrea Falzetti
20 Dec 2016
6 min read
Microservices is one of the most popular topics in software development these days, as are JavaScript and chat bots. In this post, I share my experience in designing and implementing a platform using a microservices architecture. I will also talk about which tools were picked for this platform and how they work together. Let's start by giving you some context about the project.

The challenge

The startup I am working with had the idea of building a platform to help people with depression by tracking and managing the habits that keep them healthy, positive, and productive. The final product is an iOS app with a conversational UI, similar to a chat bot, and probably not very intelligent in version 1.0, but still with some feelings!

Technology stack

For this project, we decided to use Node.js for building the microservices, React Native for the mobile app, ReactJS for the Admin Dashboard, and ElasticSearch and Kibana for logging and monitoring the applications. And yes, we do like JavaScript!

Node.js Microservices Toolbox

There are many definitions of a microservice, but I am assuming we agree on a common statement that describes a microservice as an independent component that performs certain actions within your systems, or in a nutshell, a part of your software that solves a specific problem that you have. I got interested in microservices this year, especially when I found out there was a Node.js toolkit called Seneca that helps you organize and scale your code. Unfortunately, my enthusiasm didn't last long, as I faced the first issue: the learning curve to approach Seneca was too high for this project. However, even though I ended up not using it, I wanted to include it here because many people are successfully using it, and I think you should be aware of it, and at least consider looking at it.

Instead, we decided to go a simpler way. We split our project into small Node applications, and using pm2, we deploy our microservices with a pm2 configuration file called ecosystem.json. As explained in the documentation, this is a good way of keeping your deployment simple, organized, and monitored. If you like control dashboards, graphs, and colored progress bars, you should look at pm2 Keymetrics—it offers a nice overview of all your processes.

It has also been extremely useful to create a GitHub Machine User, which essentially is a normal GitHub account, with its own attached ssh-key, which grants access to the repositories that contain the project's code. Additionally, we created a Node user on our virtual machine with that ssh-key loaded in. All of the microservices run under this Node user, which has access to the code base through the machine user's ssh-key. In this way, we can easily pull the latest code and deploy new versions. We finally attached our own ssh-keys to the Node user, so each developer can log in as the Node user via ssh:

ssh node@<IP>
cd ./project/frontend-api
git pull

without being prompted for a password or authorization token from GitHub, and then, using pm2, restart the microservice:

pm2 restart frontend-api
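The post does not reproduce the ecosystem.json file itself; as an illustration, a minimal file for a setup like the one described might look like the following (the app names, paths, and ports are our assumptions, not taken from the project):

{
  "apps": [
    {
      "name": "frontend-api",
      "script": "./index.js",
      "cwd": "/home/node/project/frontend-api",
      "env": { "NODE_ENV": "production", "PORT": 3000 }
    },
    {
      "name": "chat-bot",
      "script": "./index.js",
      "cwd": "/home/node/project/chat-bot",
      "env": { "NODE_ENV": "production", "PORT": 3001 }
    }
  ]
}

With a file like this in place, pm2 start ecosystem.json launches all the listed services under pm2's supervision.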
Using Kong in combination with nginx, we mapped our microservices under a single domain name, http://api.example.com, and exposed each microservice under a specific path:

api.example.com/chat-bot
api.example.com/admin-backend
api.example.com/frontend-api
…

This allows us to run the microservices on separate servers and different ports, while having one single gate for the clients consuming our APIs. Finally, the API gateway is responsible for allowing only authenticated requests to pass through, so this is a great way of protecting the microservices, because the gateway is the only public component, and all of the microservices run in a private network.

What does a microservice look like, and how do they talk to each other?

We started by creating a microservice-boilerplate package that includes Express to expose some APIs, Passport to allow only authorized clients to use them, winston for logging their activities, and mocha and chai for testing the microservice. We then created an index.js file that initializes the Express app with a default route, /api/ping. This returns a simple JSON containing the message 'pong', which we use to know if the service is down. One alternative is to get the status of the process using pm2:

pm2 list
pm2 status <microservice-pid>

Whenever we want to create a new microservice, we start from this boilerplate code. It saves a lot of time, especially if the microservices have a similar shape.

The main communication channel between microservices is HTTP via API calls. We are also using web sockets to allow faster communication between some parts of the platform. We decided to use socket.io, a very simple and efficient way of implementing web sockets.

I recommend creating a Node package that contains the business logic, including the objects, models, prototypes, and common functions, such as read and write methods for the database. Using this approach allows you to include the package in each microservice, with the benefit of having just one place to update if something needs to change.

Conclusions

In this post, I covered the tools used for building a microservice architecture in Node.js. I hope you have found this useful.

About the author

Andrea Falzetti is an enthusiastic full stack developer based in London. He has been designing and developing web applications for over 5 years. He is currently focused on Node.js, React, microservices architecture, serverless, conversational UI, chat bots, and machine learning. He is currently working at Activate Media, where his role is to estimate, design, and lead the development of web and mobile platforms.

Optimizing jQuery Applications

Packt
13 Jan 2016
19 min read
This article, by Thodoris Greasidis, the author of jQuery Design Patterns, presents some optimization techniques that can be used to improve the performance of jQuery applications, especially when they become large and complex. We will start with simple practices to write performant JavaScript code, and learn how to write efficient CSS selectors in order to improve the page's rendering speed and DOM traversals using jQuery. We will continue with jQuery-specific practices, such as caching of jQuery Composite Collection Objects, how to minimize DOM manipulations, and how to use the Delegate Event Observer pattern as a good example of the Flyweight pattern.

(For more resources related to this topic, see here.)

Optimizing the common JavaScript code

In this section, we will analyze some performance tips that are not jQuery-specific and can be applied to most JavaScript implementations.

Writing better for loops

When iterating over the items of an array or an array-like collection with a for loop, a simple way to improve the performance of the iteration is to avoid accessing the length property on every loop. This can easily be done by storing the iteration length in a separate variable, which is declared just before the loop or even along with it, as shown in the following code:

for (var i = 0, len = myArray.length; i < len; i++) {
    var item = myArray[i];
    /*...*/
}

Moreover, if we need to iterate over the items of an array that does not contain "falsy" values, we can use an even better pattern, which is commonly applied when iterating over arrays that contain objects:

var objects = [{ }, { }, { }];
for (var i = 0, item; item = objects[i]; i++) {
    console.log(item);
}

In this case, instead of relying on the length property of the array, we are exploiting the fact that access to an out-of-bounds position of the array returns undefined, which is "falsy" and stops the iteration. Another sample case in which this trick can be used is when iterating over Node Lists or jQuery Composite Collection Objects, as shown in the following code:

var anchors = $('a'); // or document.getElementsByTagName('a');
for (var i = 0, anchor; anchor = anchors[i]; i++) {
    console.log(anchor.href);
}

For more information on the "truthy" and "falsy" JavaScript values, you can visit https://developer.mozilla.org/en-US/docs/Glossary/Truthy and https://developer.mozilla.org/en-US/docs/Glossary/Falsy.

Using performant CSS selectors

Even though Sizzle (jQuery's selector engine) hides the complexity of DOM traversals that are based on a complex CSS selector, we should have an idea of how our selectors perform. Understanding how CSS selectors are matched against the elements of the DOM can help us write more efficient selectors, which will perform better when used with jQuery.

The key characteristic of efficient CSS selectors is specificity. According to this, ID and class selectors will always be more efficient than selectors with many results, such as div and *. When writing complex CSS selectors, keep in mind that they are evaluated from right to left, and a selector gets rejected after recursively testing it against every parent element until the root of the DOM is reached.
As a result, try to be as specific as possible with the right-most selector, in order to cut down the matched elements as early as possible during the execution of the selector:

// initially matches all the anchors of the page
// and then removes those that are not children of the container
$('.container a');

// performs better, since it matches fewer elements
// in the first step of the selector's evaluation
$('.container .mySpecialLinks');

Another performance tip is to use the Child Selector (parent > child) wherever applicable, in an effort to eliminate the recursion over all the hierarchies of the DOM tree. A great example of this can be applied in cases where the target elements can be found at a specific descendant level of a common ancestor element:

// initially matches all the div's of the page, which is bad
$('.container div');

// a lot better, since it avoids the recursion
// until the root of the DOM tree
$('.container > div');

// best of all, but can't be used always
$('.container > .specialDivs');

The same tips can also be applied to CSS selectors that are used to style pages. Even though browsers have been trying to optimize any given CSS selector, the tips mentioned earlier can greatly reduce the time that is required to render a web page. For more information on jQuery CSS selector performance, you can visit http://learn.jquery.com/performance/optimize-selectors/.

Writing efficient jQuery code

Let's now proceed and analyze the most important jQuery-specific performance tips. For more information on the most up-to-date performance tips about jQuery, you can go to the respective page of jQuery's Learning Center at http://learn.jquery.com/performance.

Minimizing DOM traversals

Since jQuery has made DOM traversals such simple tasks, a big number of web developers have started to overuse the $() function everywhere, even in subsequent lines of code, making their implementations slower by executing unnecessary code. One of the main reasons that the complexity of the operation is so often overlooked is the elegant and minimalistic syntax that jQuery uses. Despite the fact that JavaScript browser engines have already become much faster, with performance comparable to many compiled languages, the DOM API is still one of their slowest parts, and as a result, developers have to minimize their interactions with it.

Caching jQuery objects

Storing the result of the $() function to a local variable, and subsequently using it to operate on the retrieved elements, is the simplest way to eliminate unnecessary executions of the same DOM traversals:

var $element = $('.Header');
if ($element.css('position') === 'static') {
    $element.css({ position: 'relative' });
}
$element.height('40px');
$element.wrapInner('<b>');

It is also highly suggested that you store Composite Collection Objects of important page elements as properties of our modules and reuse them everywhere in our application:

window.myApp = window.myApp || {};
myApp.$container = null;
myApp.init = function() {
    myApp.$container = $('.myAppContainer');
};
$(document).ready(myApp.init);

Caching retrieved elements on modules is a very good practice when the elements are not going to be removed from the page. Keep in mind that when dealing with elements with shorter lifespans, in order to avoid memory leaks, you either need to ensure that you clear their references when they are removed from the page, or have a fresh reference retrieved when required and cache it only inside your functions.
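To make that last point concrete, here is a small sketch of clearing a module-level cached reference once its element leaves the page. The $dialog property and the .myAppDialog selector are invented for illustration and are not part of the book's example application:

// cache the element while it is part of the page
myApp.$dialog = $('.myAppDialog');

// ... later, when the dialog is dismissed for good:
myApp.$dialog.remove(); // take the element out of the DOM
myApp.$dialog = null;   // clear the reference so the element can be garbage collected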
Scoping element traversals

Instead of writing complex CSS selectors for your traversals, as follows:

$('.myAppContainer .myAppSection');

you can get the same result in a more efficient way, using an already retrieved ancestor element to scope the DOM traversal. This way, you are not only using simpler CSS selectors that are faster to match against page elements, but you are also reducing the number of elements that need to be checked. Moreover, the resulting implementations have fewer code repetitions (they are DRYer), and the CSS selectors used are simpler and more readable:

var $container = $('.myAppContainer');
$container.find('.myAppSection');

Additionally, this practice works even better with module-wide cached elements:

var $sections = myApp.$container.find('.myAppSection');

Chaining jQuery methods

One of the characteristics of all jQuery APIs is that they are fluent interface implementations that enable you to chain several method invocations on a single Composite Collection Object:

$('.Content').html('')
    .append('<a href="#">')
    .height('40px')
    .wrapInner('<b>');

Chaining allows us to reduce the number of used variables and leads to more readable implementations with fewer code repetitions.

Don't overdo it

Keep in mind that jQuery also provides the $.fn.end() method (http://api.jquery.com/end/) as a way to move back from a chained traversal:

$('.box')
    .filter(':even')
    .find('.Header')
    .css('background-color', '#0F0')
    .end()
    .end() // undo the filter and find traversals
    .filter(':odd') // applied on the initial .box results
    .find('.Header')
    .css('background-color', '#F00');

Even though this is a handy method for many cases, you should avoid overusing it, since it can affect the readability of your code. In many cases, using cached element collections instead of $.fn.end() can result in faster and more readable implementations.

Improving DOM manipulations

Extensive use of the DOM API is one of the most common things that makes an application slower, especially when it is used to manipulate the state of the DOM tree. In this section, we will showcase some tips to reduce the performance hit when manipulating the DOM tree.

Creating DOM elements

The most efficient way to create DOM elements is to construct an HTML string and add it to the DOM tree using the $.fn.html() method. Additionally, since this can be too restrictive for some use cases, you can also use the $.fn.append() and $.fn.prepend() methods, which are slightly slower but can be better matches for your implementation.
Ideally, if multiple elements need to be created, you should try to minimize the invocation of these methods by creating an HTML string that defines all the elements and then inserting it into the DOM tree, as follows:

var finalHtml = '';
for (var i = 0, len = questions.length; i < len; i++) {
    var question = questions[i];
    finalHtml += '<div><label><span>' + question.title + ':</span>' +
        '<input type="checkbox" name="' + question.name + '" />' +
        '</label></div>';
}
$('form').html(finalHtml);

Another way to achieve the same result is using an array to store the HTML for each intermediate element and then joining them right before inserting them into the DOM tree:

var parts = [];
for (var i = 0, len = questions.length; i < len; i++) {
    var question = questions[i];
    parts.push('<div><label><span>' + question.title + ':</span>' +
        '<input type="checkbox" name="' + question.name + '" />' +
        '</label></div>');
}
$('form').html(parts.join(''));

This is a commonly used pattern, since until recently it was performing better than concatenating the intermediate results with "+=".

Styling and animating

Whenever possible, try using CSS classes for your styling manipulations by utilizing the $.fn.addClass() and $.fn.removeClass() methods, instead of manually manipulating the style of elements with the $.fn.css() method. This is especially beneficial when you need to style a big number of elements, since this is the main use case of CSS classes and the browsers have already spent years optimizing it. As an extra optimization step used to minimize the number of manipulated elements, you can try to apply CSS classes on a single common ancestor element and use a descendant CSS selector to apply your styling (https://developer.mozilla.org/en-US/docs/Web/CSS/Descendant_selectors).

When you still need to use the $.fn.css() method, for example when your implementation needs to be imperative, prefer using the invocation overload that accepts object parameters: http://api.jquery.com/css/#css-properties. This way, when applying multiple styles on elements, the required method invocations are minimized, and your code also gets better organized.

Moreover, we need to avoid mixing methods that manipulate the DOM with methods that read from the DOM, since this will force a reflow of the page, so that the browser can calculate the new positions of the page elements. Instead of doing something like this:

$('h1').css('padding-left', '2%');
$('h1').css('padding-right', '2%');
$('h1').append('<b>!!</b>');
var h1OuterWidth = $('h1').outerWidth();

$('h1').css('margin-top', '5%');
$('body').prepend('<b>--!!--</b>');
var h1Offset = $('h1').offset();

we will prefer grouping the nonconflicting manipulations together, like this:

$('h1').css({
    'padding-left': '2%',
    'padding-right': '2%',
    'margin-top': '5%'
}).append('<b>!!</b>');
$('body').prepend('<b>--!!--</b>');

var h1OuterWidth = $('h1').outerWidth();
var h1Offset = $('h1').offset();

This way, the browser can try to skip some rerenderings of the page, resulting in fewer pauses of the execution of your code. For more information on reflows, you can refer to https://developers.google.com/speed/articles/reflow.

Lastly, note that all jQuery-generated animations in v1.x and v2.x are implemented using the setTimeout() function. This is going to change in v3.x of jQuery, which is designed to use the requestAnimationFrame() function, which is a better match for creating imperative animations.
Until then, you can use the jQuery-requestAnimationFrame plugin (https://github.com/gnarf/jquery-requestAnimationFrame), which monkey-patches jQuery to use the requestAnimationFrame() function for its animations when it is available.

Manipulating detached elements

Another way to avoid unnecessary repaints of the page while manipulating DOM elements is to detach the element from the page and reattach it after completing your manipulations. Working with a detached, in-memory element is much faster and does not cause reflows of the page. In order to achieve this, we will use the $.fn.detach() method, which, in contrast to $.fn.remove(), preserves all event handlers and jQuery data on the detached element:

var $h1 = $('#pageHeader');
var $h1Cont = $h1.parent();
$h1.detach();

$h1.css({
    'padding-left': '2%',
    'padding-right': '2%',
    'margin-top': '5%'
}).append('<b>!!</b>');

$h1Cont.append($h1);

Additionally, in order to be able to place the manipulated element back in its original position, we can create and insert a hidden "placeholder" element into the DOM. This empty and hidden element does not affect the rendering of the page and is removed right after the original item is placed back in its original position:

var $h1PlaceHolder = $('<div style="display: none;"></div>');
var $h1 = $('#pageHeader');
$h1PlaceHolder.insertAfter($h1);

$h1.detach();

$h1.css({
    'padding-left': '2%',
    'padding-right': '2%',
    'margin-top': '5%'
}).append('<b>!!</b>');

$h1.insertAfter($h1PlaceHolder);
$h1PlaceHolder.remove();
$h1PlaceHolder = null;

For more information on the $.fn.detach() method, you can visit its documentation page at http://api.jquery.com/detach/.

Using the Flyweight pattern

According to computer science, a Flyweight is an object that is used to reduce the memory consumption of an implementation by providing functionality and/or data that is shared with other object instances. The prototypes of JavaScript constructor functions can be characterized as Flyweights to some degree, since every object instance can use all the methods and properties that are defined on its prototype, until it overwrites them. On the other hand, classical Flyweights are separate objects from the object family they are used with, and often hold the shared data and functionality in special data structures.

Using Delegated Event Observers

A great example of Flyweights in jQuery applications are Delegated Event Observers, which can greatly reduce the memory requirements of an implementation by working as a centralized event handler for a large group of elements. This way, we can avoid the cost of setting up separate observers and event handlers for every element, and utilize the browser's event bubbling mechanism to observe them on a single common ancestor element and filter their origin. Moreover, this pattern can also simplify our implementation when we need to deal with dynamically constructed elements, since it removes the need to attach extra event handlers for each created element.

For example, the following code attaches a single observer on the common ancestor element of several <button> elements. Whenever a click happens on one of the <button> elements, the event will bubble up to the parent element with the buttonsContainer CSS class, and the attached handler will be fired.
Even if we add extra buttons later to that container, clicking on them will still fire the original handler:

$('.buttonsContainer').on('click', 'button', function() {
    var $button = $(this);
    alert($button.text());
});

The actual Flyweight object is the event handler, along with the callback that is attached to the ancestor element.

Using $.noop()

The jQuery library offers the $.noop() method, which is actually an empty function that can be shared between implementations. Using empty functions as default callback values can simplify and increase the readability of an implementation by reducing the number of required if statements. Such a thing can be greatly beneficial for jQuery plugins that encapsulate complex functionality:

function doLater(callbackFn) {
    setTimeout(function() {
        if (callbackFn) {
            callbackFn();
        }
    }, 500);
}

// with $.noop
function doLater(callbackFn) {
    callbackFn = callbackFn || $.noop; // assign the shared empty function itself, without invoking it
    setTimeout(function() {
        callbackFn();
    }, 500);
}

In such situations, where the implementation requirements or the personal taste of the developer leads to using empty functions, the $.noop() method can be beneficial for lowering the memory consumption, by sharing a single empty function instance among all the different parts of an implementation. As an added benefit of using $.noop() for every part of an implementation, we can also check whether a passed function reference is actually the empty function, by simply checking whether callbackFn is equal to $.noop. For more information, you can visit its documentation page at http://api.jquery.com/jQuery.noop/.

Using the $.single plugin

Another simple example of the Flyweight pattern in a jQuery application is the jQuery.single plugin, as described by James Padolsey in his article 76 bytes for faster jQuery, which tries to eliminate the creation of new jQuery objects in cases where we need to apply jQuery methods on a single page element. The implementation is quite small and creates a single jQuery Composite Collection Object that is returned on every invocation of the jQuery.single() method, containing the page element that was used as an argument:

jQuery.single = (function(){
    var collection = jQuery([1]); // fill with 1 item, to make sure length === 1
    return function(element) {
        collection[0] = element; // give the collection the element
        return collection; // return the collection
    };
}());

The jQuery.single plugin can be quite useful when used in observers, such as $.fn.on(), and in iterations with methods such as $.each():

$buttonsContainer.on('click', '.button', function() {
    // var $button = $(this);
    var $button = $.single(this); // this is not creating any new object
    alert($button);
});

The advantage of using the jQuery.single plugin originates from the fact that we are creating fewer objects, and as a result, the browser's Garbage Collector will also have less work to do when freeing the memory of short-lived objects.
Keep in mind the side effects of having a single jQuery object returned by every invocation of the $.single() method: the last invocation argument will be stored until the next invocation of the method:

var buttons = document.getElementsByTagName('button');
var $btn0 = $.single(buttons[0]);
var $btn1 = $.single(buttons[1]);
$btn0 === $btn1 // true, both variables reference the same collection object

Also, if you use something such as $btn1.remove(), then the element will not be freed until the next invocation of the $.single() method, which will remove it from the plugin's internal collection object. Another similar, but more extensive, plugin is the jQuery.fly plugin, which supports the case of being invoked with arrays and jQuery objects as parameters. For more information on jQuery.single and jQuery.fly, you can visit http://james.padolsey.com/javascript/76-bytes-for-faster-jquery/ and https://github.com/matjaz/jquery.fly.

On the other hand, the jQuery implementation that handles the invocation of the $() method with a single page element is not complex at all and only creates a single simple object:

jQuery = function( selector, context ) {
    return new jQuery.fn.init( selector, context );
};
/*...*/
init = jQuery.fn.init = function( selector, context ) {
    /*... else */
    if ( selector.nodeType ) {
        this.context = this[0] = selector;
        this.length = 1;
        return this;
    } /* ... */
};

Moreover, the JavaScript engines of modern browsers have already become quite efficient when dealing with short-lived objects, since such objects are commonly passed around an application as method invocation parameters.

Summary

In this article, we learned some optimization techniques that can be used to improve the performance of jQuery applications, especially when they become large and complex. We initially started with simple practices to write performant JavaScript code, and learned how to write efficient CSS selectors in order to improve the page's rendering speed and DOM traversals using jQuery. We continued with jQuery-specific practices, such as caching of jQuery Composite Collection Objects and ways to minimize DOM manipulations. Lastly, we saw some representatives of the Flyweight pattern and took a look at an example of the Delegated Event Observer pattern.

Resources for Article:

Further resources on this subject:

Working with Events [article]
Learning jQuery [article]
Preparing Your First jQuery Mobile Project [article]


Creating a Direct2D game window class

Packt
23 Dec 2013
12 min read
(For more resources related to this topic, see here.)

To put some graphics on the screen, the first step for us is creating a new game window class that will use Direct2D. This new game window class will derive from our original game window class, while adding the Direct2D functionality. Open Visual Studio and add a new class to the project called GameWindow2D. We need to change its declaration to:

public class GameWindow2D : GameWindow, IDisposable

As you can see, it inherits from the GameWindow class, meaning that it has all of the public and protected members of the GameWindow class, as though we had implemented them again in this class. It also implements the IDisposable interface, just as the GameWindow class does. Also, don't forget to add a reference to SlimDX to this project if you haven't already.

We need to add some using statements to the top of this class file as well. They are all the same using statements that the GameWindow class has, plus one more. The new one is SlimDX.Direct2D. They are as follows:

using System.Windows.Forms;
using System.Diagnostics;
using System.Drawing;
using System;
using SlimDX;
using SlimDX.Direct2D;
using SlimDX.Windows;

Next, we need to create a handful of member variables:

WindowRenderTarget m_RenderTarget;
Factory m_Factory;
PathGeometry m_Geometry;
SolidColorBrush m_BrushRed;
SolidColorBrush m_BrushGreen;
SolidColorBrush m_BrushBlue;

The first variable is a WindowRenderTarget object. The term render target is used to refer to the surface we are going to draw on. In this case, it is our game window. However, this is not always the case. Games can render to other places as well. For example, rendering into a texture object is used to create various effects. One example would be a simple security camera effect. Say we have a security camera in one room and a monitor in another room. We want the monitor to display what our security camera sees. To do this, we can render the camera's view into a texture, which can then be used to texture the screen of the monitor. Of course, this has to be redone in every frame so that the monitor screen shows what the camera is currently seeing. This idea is useful in 2D too.

Back to our member variables, the second one is a Factory object that we will be using to set up our Direct2D stuff. It is used to create Direct2D resources such as render targets. The third variable is a PathGeometry object that will hold the geometry for the first thing we will draw, which will be a rectangle. The last three variables are all SolidColorBrush objects. We use these to specify the color we want to draw something with. There is a little more to them than that, but that's all we need right now.

The constructor

Let's turn our attention now to the constructor of our Direct2D game window class. It will do two things. Firstly, it will call the base class constructor (remember, the base class is the original GameWindow class), and it will then get our Direct2D stuff initialized.
The following is the initial code for our constructor:

public GameWindow2D(string title, int width, int height, bool fullscreen)
    : base(title, width, height, fullscreen)
{
    m_Factory = new Factory();

    WindowRenderTargetProperties properties = new WindowRenderTargetProperties();
    properties.Handle = FormObject.Handle;
    properties.PixelSize = new Size(width, height);

    m_RenderTarget = new WindowRenderTarget(m_Factory, properties);
}

In the preceding code, the line starting with a colon is calling the constructor of the base class for us. This ensures that everything inherited from the base class is initialized. In the body of the constructor, the first line creates a new Factory object and stores it in our m_Factory member variable. Next, we create a WindowRenderTargetProperties object and store the handle of our RenderForm object in it. Note that FormObject is one of the properties defined in our GameWindow base class. Remember that the RenderForm object is a SlimDX object that represents a window for us to draw on. The next line saves the size of our game window in the PixelSize property. The WindowRenderTargetProperties object is basically how we specify the initial configuration for a WindowRenderTarget object when we create it. The last line in our constructor creates our WindowRenderTarget object, storing it in our m_RenderTarget member variable. The two parameters we pass in are our Factory object and the WindowRenderTargetProperties object we just created. A WindowRenderTarget object is a render target that refers to the client area of a window. We use the WindowRenderTarget object to draw in a window.

Creating our rectangle

Now that our render target is set up, we are ready to draw stuff, but first we need to create something to draw! So, we will add a bit more code at the bottom of our constructor. First, we need to initialize our three SolidColorBrush objects. Add these three lines of code at the bottom of the constructor:

m_BrushRed = new SolidColorBrush(m_RenderTarget, new Color4(1.0f, 1.0f, 0.0f, 0.0f));
m_BrushGreen = new SolidColorBrush(m_RenderTarget, new Color4(1.0f, 0.0f, 1.0f, 0.0f));
m_BrushBlue = new SolidColorBrush(m_RenderTarget, new Color4(1.0f, 0.0f, 0.0f, 1.0f));

This code is fairly simple. For each brush, we pass in two parameters. The first parameter is the render target we will use this brush on. The second parameter is the color of the brush, which is an ARGB (Alpha Red Green Blue) value. The first parameter we give for the color is 1.0f. The f character on the end indicates that this number is of the float data type. We set alpha to 1.0 because we want the brush to be completely opaque. A value of 0.0 will make it completely transparent, and a value of 0.5 will be 50 percent transparent. Next, we have the red, green, and blue parameters. These are all float values in the range 0.0 to 1.0 as well. As you can see for the red brush, we set the red channel to 1.0f and the green and blue channels are both set to 0.0f. This means we have maximum red, but no green or blue in our color. With our SolidColorBrush objects set up, we now have three brushes we can draw with, but we still lack something to draw! So, let's fix that by adding some code to make our rectangle.
Add this code to the end of the constructor:

m_Geometry = new PathGeometry(m_RenderTarget.Factory);
using (GeometrySink sink = m_Geometry.Open())
{
    int top = (int) (0.25f * FormObject.Height);
    int left = (int) (0.25f * FormObject.Width);
    int right = (int) (0.75f * FormObject.Width);
    int bottom = (int) (0.75f * FormObject.Height);

    PointF p0 = new Point(left, top);
    PointF p1 = new Point(right, top);
    PointF p2 = new Point(right, bottom);
    PointF p3 = new Point(left, bottom);

    sink.BeginFigure(p0, FigureBegin.Filled);
    sink.AddLine(p1);
    sink.AddLine(p2);
    sink.AddLine(p3);
    sink.EndFigure(FigureEnd.Closed);
    sink.Close();
}

This code is a bit longer, but it's still fairly simple. The first line creates a new PathGeometry object and stores it in our m_Geometry member variable. The next line starts the using block and creates a new GeometrySink object that we will use to build the geometry of our rectangle. The using block will automatically dispose of the GeometrySink object for us when program execution reaches the end of the block. The using blocks only work with objects that implement the IDisposable interface.

The next four lines calculate where each edge of our rectangle will be. For example, the first line calculates the vertical position of the top edge of the rectangle. In this case, we are making the rectangle's top edge be 25 percent of the way down from the top of the screen. Then, we do the same thing for the other three sides of our rectangle. The second group of four lines of code creates four Point objects and initializes them using the values we just calculated. These four Point objects represent the corners of our rectangle. A point is also often referred to as a vertex. When we have more than one vertex, we call them vertices (pronounced as vert-is-ces).

The final group of code has six lines. They use the GeometrySink and the Point objects we just created to set up the geometry of our rectangle inside the PathGeometry object. The first line uses the BeginFigure() method to begin the creation of a new geometric figure. The next three lines each add one more line segment to the figure by adding another point or vertex to it. With all four vertices added, we then call the EndFigure() method to specify that we are done adding vertices. The last line calls the Close() method to specify that we are finished adding geometric figures, since we can have more than one if we want. In this case, we are only adding one geometric figure, our rectangle.

Drawing our rectangle

Since our rectangle never changes, we don't need to add any code to our UpdateScene() method. We will override the base class's UpdateScene() method anyway, in case we need to add some code in here later, which is given as follows:

public override void UpdateScene(double frameTime)
{
    base.UpdateScene(frameTime);
}

As you can see, we only have one line of code in this override of the base class's UpdateScene() method. It simply calls the base class's version of this method. This is important because the base class's UpdateScene() method contains our code that gets the latest user input data each frame. Now, we are finally ready to write the code that will draw our rectangle on the screen! We will override the RenderScene() method so we can add our custom code.
The following is the code:

public override void RenderScene()
{
    if ((!this.IsInitialized) || this.IsDisposed)
    {
        return;
    }

    m_RenderTarget.BeginDraw();
    m_RenderTarget.Clear(ClearColor);
    m_RenderTarget.FillGeometry(m_Geometry, m_BrushBlue);
    m_RenderTarget.DrawGeometry(m_Geometry, m_BrushRed, 1.0f);
    m_RenderTarget.EndDraw();
}

First, we have an if statement, which happens to be identical to the one we put in the base class's RenderScene() method. This is because we are not calling the base class's RenderScene() method, since the only code in it is this if statement. Not calling the base class version of this method gives us a slight performance boost, since we don't have the overhead of that function call. We could do the same thing with the UpdateScene() method as well. In this case we didn't, because the base class version of that method has a lot more code in it. In your own projects, you may want to copy and paste that code into your override of the UpdateScene() method.

The next line of code calls the render target's BeginDraw() method to tell it that we are ready to begin drawing. Then, we clear the screen on the next line by filling it with the color stored in the ClearColor property that is defined by our GameWindow base class. The last three lines draw our geometry twice. First, we draw it using the FillGeometry() method of our render target. This will draw our rectangle filled in with the specified brush (in this case, solid blue). Then, we draw the rectangle a second time, but this time with the DrawGeometry() method. This draws only the lines of our shape but doesn't fill it in, so this draws a border on our rectangle. The extra parameter on the DrawGeometry() method is optional and specifies the width of the lines we are drawing. We set it to 1.0f, which means the lines will be one pixel wide. The last line calls the EndDraw() method to tell the render target that we are finished drawing.

Cleanup

As usual, we need to clean things up after ourselves when the program closes. So, we need to add an override of the base class's Dispose(bool) method. We've already done this a few times, so it should be somewhat familiar and is not shown here.

Our blue rectangle with a red border

As you might guess, there is a lot more you can do with drawing geometry. You can draw curved line segments and draw shapes with gradient brushes too, for example. You can also draw text on the screen using the render target's DrawText() method. But since we have limited space on these pages, we're going to look at how to draw bitmap images on the screen. These images are something that make up the graphics of most 2D games.

Summary

In this article, we first made a simple demo application that drew a rectangle on the screen. Then, we got a bit more ambitious and built a 2D tile-based game world.

Resources for Article:

Further resources on this subject:

HTML5 Games Development: Using Local Storage to Store Game Data [Article]
Flash Game Development: Creation of a Complete Tetris Game [Article]
Interface Designing for Games in iOS [Article]


Mastering of Fundamentals

Packt
08 Apr 2016
10 min read
In this article by Piotr Sikora, author of the book Professional CSS3, you will master the box model, floating elements and their troubleshooting, positioning, and display types. After this article, readers will be more aware of the foundations of HTML and CSS. In this article, we shall cover the following topics:

The traditional box model
The basics of floating elements
The foundations of positioning elements on a webpage
Display types

(For more resources related to this topic, see here.)

Traditional box model

Understanding the box model is the foundation of CSS theory. You have to know the impact of width, height, margin, and borders on the size of the box, and how you can manage them to match an element on a website. Many interview questions for coders and frontend developers are based on box model theory. Let's begin this important lesson, which will be the foundation for every subject.

Padding/margin/border/width/height

The ingredients of the final width and height of the box are:

Width
Height
Margins
Paddings
Borders

For a better understanding of the box model, here is the image from the Chrome inspector:

For a clear and better understanding of the box model, let's analyze the image. On the image, you can see that, in the box model, we have four edges:

Content edge
Padding edge
Border edge
Margin edge

The width and height of the box are based on:

Width/height of content
Padding
Border
Margin

The width and height of the content in a box with default box-sizing is controlled by these properties:

Min-width
Max-width
Width
Min-height
Max-height
Height

An important thing about the box model is how the background properties behave. The background will be included in the content section and in the padding section (up to the padding edge). Let's get some code and try to point out all the elements of the box model.

HTML:

<div class="element">
  Lorem ipsum dolor sit amet consecteur
</div>

CSS:

.element {
   background: pink;
   padding: 10px;
   margin: 20px;
   width: 100px;
   height: 100px;
   border: solid 10px black;
}

In the browser, we will see the following:

This is the view from the inspector of Google Chrome:

Let's check how the areas of the box model are placed in this specific example. A basic task for an interviewed frontend developer is the following: the box/element is described with these styles:

.box {
    width: 100px;
    height: 200px;
    border: 10px solid #000;
    margin: 20px;
    padding: 30px;
}

Please count the final width and height (the real space that is needed for this element). So, as you can see, the problem is to count the width and height of the box.
Ingredients of width:

Width
Border left
Border right
Padding left
Padding right

Additionally, for the width of the space taken by the box:

Margin left
Margin right

Ingredients of height:

Height
Border top
Border bottom
Padding top
Padding bottom

Additionally, for the height of the space taken by the box:

Margin top
Margin bottom

So, when you sum up the element, you will have these equations:

Width:

Box width = width + borderLeft + borderRight + paddingLeft + paddingRight
Box width = 100px + 10px + 10px + 30px + 30px = 180px

Space width:

Space width = width + borderLeft + borderRight + paddingLeft + paddingRight + marginLeft + marginRight
Space width = 100px + 10px + 10px + 30px + 30px + 20px + 20px = 220px

Height:

Box height = height + borderTop + borderBottom + paddingTop + paddingBottom
Box height = 200px + 10px + 10px + 30px + 30px = 280px

Space height:

Space height = height + borderTop + borderBottom + paddingTop + paddingBottom + marginTop + marginBottom
Space height = 200px + 10px + 10px + 30px + 30px + 20px + 20px = 320px

Here, you can check it in a real browser:

Omitting problems with the traditional box model (box sizing)

The basic theory of the box model is pretty hard to learn. You need to remember all the components of width/height, even when you have set the width and height explicitly. The hardest part for beginners is understanding padding, which they expect not to be counted as a component of width and height: intuitively, it should sit inside the box without impacting these values. To change this behavior, CSS3 brought in box sizing, supported since Internet Explorer 8. You can set the value:

box-sizing: border-box

What does it give you? Finally, counting the box width and height is easier, because the box's padding and border are inside the box. So, if we take our previous class:

.box {
    width: 100px;
    height: 200px;
    border: 10px solid #000;
    margin: 20px;
    padding: 30px;
}

We can count the width and height easily:

Width = 100px
Height = 200px

Additionally, the space taken by the box:

Space width = 140px (because of the 20px margin on both sides: left and right)
Space height = 240px (because of the 20px margin on both sides: top and bottom)

Here is a sample from Chrome:

So, if you don't want to repeat all the problems of the traditional box model, you should use box-sizing globally, for all the elements:

* { box-sizing: border-box; }

Of course, this is not recommended in projects that you inherit, for example, an old project from a new client that needs some small changes. Adding the preceding code there can do more harm than good, because this property will then apply to all the elements, which were built against the traditional box model. But for all new projects, you should use it.

Floating elements

Floating boxes are among the most used tools in modern layouts. The theory of floating boxes is used especially in grid systems and inline lists in CSS frameworks. For example, the class and mixin inline-list (in the Zurb Foundation framework) are based on floats.

Possibilities of floating elements

An element can be floated to the left or to the right, and of course there is a value that resets floats too. The possible values are:

float: left; // will float element to left
float: right; // will float element to right
float: none; // will reset float

Most known floating problems

When you are using floating elements, you can have some issues.
The best-known problems with floated elements are:

Elements that are too big (because of width, margin left/right, padding left/right, and a badly counted width, based on the box model)
Uncleared floats

Each of these problems produces a specific effect that you can easily recognize and then fix. Too-big elements can be recognized when elements that should sit in one line do not. What you should check first is whether box-sizing: border-box is applied. Then, check the width, padding, and margin. Uncleared floats are easy to recognize: elements from the next container get pulled up into the floating structure. It means that you have no clearfix in your floating container.

Defining a clearfix class/mixin

When I started developing HTML and CSS code, there was a method of clearing the floats with the classes .cb or .clear, both defined as:

.clearboth, .cb {
    clear: both
}

This element was added in the container, right after all the floated elements. It is important to remember to clear the floats, because a container which contains floating elements won't inherit the height of the highest floating element (it will have a height equal to 0). For example:

<div class="container">
    <div class="float">
        … content ...
    </div>
    <div class="float">
        … content ...
    </div>
    <div class="clearboth"></div>
</div>

Where the CSS looks like this:

.float {
    width: 100px;
    height: 100px;
    float: left;
}

.clearboth {
    clear: both
}

Nowadays, there is a better and faster way to clear floats. You can do this with clearfix, which can be defined like this:

.clearfix:after {
    content: " ";
    visibility: hidden;
    display: block;
    height: 0;
    clear: both;
}

You can use it in HTML code:

<div class="container clearfix">
    <div class="float">
        … content ...
    </div>
    <div class="float">
        … content ...
    </div>
</div>

The main reason to switch to clearfix is that you save one tag (the one with the clearboth class). The recommended usage is based on the clearfix mixin, which you can define like this in SASS:

=clearfix
  &:after
    content: " "
    visibility: hidden
    display: block
    height: 0
    clear: both

So, every time you need to clear floating in some container, you need to invoke it. Let's take the previous code as an example:

<div class="container">
    <div class="float">
        … content ...
    </div>
    <div class="float">
        … content ...
    </div>
</div>

The container can be described as:

.container
  +clearfix

Example of using floating elements

The best-known usage of floating elements is grids. A grid is mainly used to structure the data displayed on a webpage. In this article, let's check just a short draft of a grid.
Let's create the HTML code:

<div class="row">
    <div class="column_1of2">
        Lorem
    </div>
    <div class="column_1of2">
        Lorem
    </div>
</div>
<div class="row">
    <div class="column_1of3">
        Lorem
    </div>
    <div class="column_1of3">
        Lorem
    </div>
    <div class="column_1of3">
        Lorem
    </div>
</div>
<div class="row">
    <div class="column_1of4">
        Lorem
    </div>
    <div class="column_1of4">
        Lorem
    </div>
    <div class="column_1of4">
        Lorem
    </div>
    <div class="column_1of4">
        Lorem
    </div>
</div>

And the SASS:

*
  box-sizing: border-box

=clearfix
  &:after
    content: " "
    visibility: hidden
    display: block
    height: 0
    clear: both

.row
  +clearfix

.column_1of2
  background: orange
  width: 50%
  float: left
  &:nth-child(2n)
    background: red

.column_1of3
  background: orange
  width: (100% / 3)
  float: left
  &:nth-child(2n)
    background: red

.column_1of4
  background: orange
  width: 25%
  float: left
  &:nth-child(2n)
    background: red

The final effect:

As you can see, we have created the structure of a basic grid. Where the word Lorem appears in the HTML code, the full lorem ipsum text was used to illustrate the grid system.

Summary

In this article, we studied the traditional box model and floating elements in detail.

Resources for Article:

Further resources on this subject:

Flexbox in CSS [article]
CodeIgniter Email and HTML Table [article]
Developing Wiki Seek Widget Using Javascript [article]


Writing a Blog Application with Node.js and AngularJS

Packt
16 Feb 2016
35 min read
In this article, we are going to build a blog application by using Node.js and AngularJS. Our system will support adding, editing, and removing articles, so there will be a control panel. The MongoDB or MySQL database will handle the storing of the information and the Express framework will be used as the site base. It will deliver the JavaScript, CSS, and the HTML to the end user, and will provide an API to access the database. We will use AngularJS to build the user interface and control the client-side logic in the administration page.

(For more resources related to this topic, see here.)

This article will cover the following topics:

AngularJS fundamentals
Choosing and initializing a database
Implementing the client-side part of an application with AngularJS

Exploring AngularJS

AngularJS is an open source, client-side JavaScript framework developed by Google. It's full of features and is really well documented. It has almost become a standard framework in the development of single-page applications. The official site of AngularJS, http://angularjs.org, provides well-structured documentation. As the framework is widely used, there is a lot of material in the form of articles and video tutorials. As a JavaScript library, it collaborates pretty well with Node.js. In this article, we will build a simple blog with a control panel.

Before we start developing our application, let's first take a look at the framework. AngularJS gives us very good control over the data on our page. We don't have to think about selecting elements from the DOM and filling them with values. Thankfully, due to the available data binding, we may update the data in the JavaScript part and see the change in the HTML part. This is also true for the reverse. Once we change something in the HTML part, we get the new values in the JavaScript part. The framework has a powerful dependency injector. There are predefined classes in order to perform AJAX requests and manage routes. You could also read Mastering Web Development with AngularJS by Peter Bacon Darwin and Pawel Kozlowski, published by Packt Publishing.

Bootstrapping AngularJS applications

To bootstrap an AngularJS application, we need to add the ng-app attribute to some of our HTML tags. It is important that we pick the right one. Having ng-app somewhere means that all the child nodes will be processed by the framework. It's common practice to put that attribute on the <html> tag. In the following code, we have a simple HTML page containing ng-app:

<html ng-app>
<head>
<script src="angular.min.js"></script>
</head>
<body>
...
</body>
</html>

Very often, we will apply a value to the attribute. This will be a module name. We will do this while developing the control panel of our blog application. Having the freedom to place ng-app wherever we want means that we can decide which part of our markup will be controlled by AngularJS. That's good, because if we have a giant HTML file, we really don't want to spend resources parsing the whole document. Of course, we may bootstrap our logic manually, and this is needed when we have more than one AngularJS application on the page.

Using directives and controllers

In AngularJS, we can implement the Model-View-Controller pattern. The controller acts as glue between the data (model) and the user interface (view). In the context of the framework, the controller is just a simple function.
For example, the following HTML code illustrates that a controller is just a simple function:

<html ng-app>
<head>
<script src="angular.min.js"></script>
<script src="HeaderController.js"></script>
</head>
<body>
<header ng-controller="HeaderController">
<h1>{{title}}</h1>
</header>
</body>
</html>

In <head> of the page, we are adding the minified version of the library and HeaderController.js, a file that will host the code of our controller. We also set an ng-controller attribute in the HTML markup. The definition of the controller is as follows:

function HeaderController($scope) {
  $scope.title = "Hello world";
}

Every controller has its own area of influence. That area is called the scope. In our case, HeaderController defines the {{title}} variable. AngularJS has a wonderful dependency-injection system. Thankfully, due to this mechanism, the $scope argument is automatically initialized and passed to our function. The ng-controller attribute is called a directive, that is, an attribute which has meaning to AngularJS. There are a lot of directives that we can use. That's maybe one of the strongest points of the framework. We can implement complex logic directly inside our templates, for example, data binding, filtering, or modularity.

Data binding

Data binding is a process of automatically updating the view once the model is changed. As we mentioned earlier, we can change a variable in the JavaScript part of the application and the HTML part will be automatically updated. We don't have to create a reference to a DOM element or attach event listeners. Everything is handled by the framework. Let's continue and elaborate on the previous example, as follows:

<header ng-controller="HeaderController">
<h1>{{title}}</h1>
<a href="#" ng-click="updateTitle()">change title</a>
</header>

A link is added and it contains the ng-click directive. The updateTitle function is a function defined in the controller, as seen in the following code snippet:

function HeaderController($scope) {
  $scope.title = "Hello world";
  $scope.updateTitle = function() {
    $scope.title = "That's a new title.";
  }
}

We don't care about the DOM element and where the {{title}} variable is. We just change a property of $scope and everything works. There are, of course, situations where we will have <input> fields and we want to bind their values. If that's the case, then the ng-model directive can be used. We can see this as follows:

<header ng-controller="HeaderController">
<h1>{{title}}</h1>
<a href="#" ng-click="updateTitle()">change title</a>
<input type="text" ng-model="title" />
</header>

The data in the input field is bound to the same title variable. This time, we don't have to edit the controller. AngularJS automatically changes the content of the h1 tag.

Encapsulating logic with modules

It's great that we have controllers. However, it's not a good practice to place everything into globally defined functions. That's why it is good to use the module system. The following code shows how a module is defined:

angular.module('HeaderModule', []);

The first parameter is the name of the module and the second one is an array with the module's dependencies. By dependencies, we mean other modules, services, or something custom that we can use inside the module. It should also be set as a value of the ng-app directive.
The code so far could be translated to the following code snippet:

angular.module('HeaderModule', [])
.controller('HeaderController', function($scope) {
  $scope.title = "Hello world";
  $scope.updateTitle = function() {
    $scope.title = "That's a new title.";
  }
});

So, the first line defines a module. We can chain the different methods of the module, and one of them is the controller method. Following this approach, that is, putting our code inside a module, we will be encapsulating logic. This is a sign of good architecture. And of course, with a module, we have access to different features such as filters, custom directives, and custom services.

Preparing data with filters

Filters are very handy when we want to prepare our data prior to it being displayed to the user. Let's say, for example, that we need to show our title in uppercase once it reaches a length of more than 20 characters:

angular.module('HeaderModule', [])
.filter('customuppercase', function() {
  return function(input) {
    if(input.length > 20) {
      return input.toUpperCase();
    } else {
      return input;
    }
  };
})
.controller('HeaderController', function($scope) {
  $scope.title = "Hello world";
  $scope.updateTitle = function() {
    $scope.title = "That's a new title.";
  }
});

That's the definition of the custom filter called customuppercase. It receives the input and performs a simple check. What it returns is what the user sees at the end. Here is how this filter could be used in HTML:

<h1>{{title | customuppercase}}</h1>

Of course, we may add more than one filter per variable. There are also some predefined filters, for example, to limit the length, to convert JavaScript to JSON, or to format dates.

Dependency injection

Dependency management can be very tough sometimes. We may split everything into different modules/components that have nicely written APIs and are very well documented. However, very soon we may realize that we need to create a lot of objects. Dependency injection solves this problem by providing what we need, on the fly. We already saw this in action. The $scope parameter passed to our controller is actually created by the injector of AngularJS. To get something as a dependency, we need to define it somewhere and let the framework know about it. We do this as follows:

angular.module('HeaderModule', [])
.factory("Data", function() {
  return {
    getTitle: function() {
      return "A better title.";
    }
  }
})
.controller('HeaderController', function($scope, Data) {
  $scope.title = Data.getTitle();
  $scope.updateTitle = function() {
    $scope.title = "That's a new title.";
  }
});

The Module class has a method called factory. It registers a new service that could later be used as a dependency. The function returns an object with only one method, getTitle. Of course, the name of the service should match the name of the controller's parameter. Otherwise, AngularJS will not be able to find the dependency's source.

The model in the context of AngularJS

In the well-known Model-View-Controller pattern, the model is the part that stores the data in the application. AngularJS doesn't have a specific workflow to define models. The $scope variable could be considered a model. We keep the data in properties attached to the current scope. Later, we can use the ng-model directive and bind a property to a DOM element. We already saw how this works in the previous sections. The framework may not provide the usual form of a model, but it's made like that so that we can write our own implementation.
The fact that AngularJS works with plain JavaScript objects makes this task easily doable.

Final words on AngularJS

AngularJS is one of the leading frameworks, not only because it is made by Google, but also because it's really flexible. We could use just a small piece of it or build a solid architecture using its giant collection of features.

Selecting and initializing the database

To build a blog application, we need a database that will store the published articles. In most cases, the choice of the database depends on the current project. There are factors such as performance and scalability that we should keep in mind. In order to have a better look at the possible solutions, we will examine two of the most popular databases: MongoDB and MySQL. The first one is a NoSQL type of database. According to the Wikipedia entry (http://en.wikipedia.org/wiki/NoSQL) on NoSQL databases:

"A NoSQL or Not Only SQL database provides a mechanism for storage and retrieval of data that is modeled in means other than the tabular relations used in relational databases."

In other words, it's simpler than a SQL database, and very often stores information as key-value pairs. Usually, such solutions are used when handling and storing large amounts of data. It is also a very popular approach when we need a flexible schema or when we want to use JSON. It really depends on what kind of system we are building. In some cases, MySQL could be a better choice, while in some other cases, MongoDB. In our example blog, we're going to use both. In order to do this, we will need a layer that connects to the database server and accepts queries. To make things a bit more interesting, we will create a module that has only one API, but can switch between the two databases.

Using NoSQL with MongoDB

Let's start with MongoDB. Before we start storing information, we need a MongoDB server running. It can be downloaded from the official page of the database, https://www.mongodb.org/downloads. We are not going to handle the communication with the database manually. There is a driver specifically developed for Node.js. It's called mongodb and we should include it in our package.json file. After successful installation via npm install, the driver will be available in our scripts. We can check this as follows:

"dependencies": {
    "mongodb": "1.3.20"
}

We will stick to the Model-View-Controller architecture and keep the database-related operations in a model called Articles. We can see this as follows:

var crypto = require("crypto"),
    type = "mongodb",
    client = require('mongodb').MongoClient,
    mongodb_host = "127.0.0.1",
    mongodb_port = "27017",
    collection;

module.exports = function() {
    if (type == "mongodb") {
        return {
            add: function(data, callback) { ... },
            update: function(data, callback) { ... },
            get: function(callback) { ... },
            remove: function(id, callback) { ... }
        }
    } else {
        return {
            add: function(data, callback) { ... },
            update: function(data, callback) { ... },
            get: function(callback) { ... },
            remove: function(id, callback) { ... }
        }
    }
}

It starts with defining a few dependencies and settings for the MongoDB connection. The first line requires the crypto module. We will use it to generate unique IDs for every article. The type variable defines which database is currently accessed. The third line initializes the MongoDB driver. We will use it to communicate with the database server.
After that, we set the host and port for the connection and, at the end, a global collection variable, which will keep a reference to the collection with the articles. In MongoDB, the collections are similar to the tables in MySQL. The next logical step is to establish a database connection and perform the needed operations, as follows:

connection = 'mongodb://';
connection += mongodb_host + ':' + mongodb_port;
connection += '/blog-application';

client.connect(connection, function(err, database) {
    if (err) {
        throw new Error("Can't connect");
    } else {
        console.log("Connection to MongoDB server successful.");
        collection = database.collection('articles');
    }
});

We pass the host and the port, and the driver does everything else. Of course, it is a good practice to handle the error (if any) and throw an exception. In our case, this is especially needed because, without the information in the database, the frontend has nothing to show. The rest of the module contains methods to add, edit, retrieve, and delete records:

return {
    add: function(data, callback) {
        var date = new Date();
        data.id = crypto.randomBytes(20).toString('hex');
        // getMonth() is zero-based, so we add 1 to produce
        // the calendar month.
        data.date = date.getFullYear() + "-" + (date.getMonth() + 1) + "-" + date.getDate();
        collection.insert(data, {}, callback || function() {});
    },
    update: function(data, callback) {
        collection.update(
            {id: data.id},
            data,
            {},
            callback || function() {}
        );
    },
    get: function(callback) {
        collection.find({}).toArray(callback);
    },
    remove: function(id, callback) {
        collection.findAndModify(
            {id: id},
            [],
            {},
            {remove: true},
            callback
        );
    }
}

Note that the add method stores the identifier in a property named id, so the update and remove methods query against that same property. The add and update methods accept the data parameter. That's a simple JavaScript object. For example, see the following code:

{
    title: "Blog post title",
    text: "Article's text here ..."
}

The records are identified by an automatically generated unique id. The update method needs it in order to find out which record to edit. All the methods also have a callback. That's important, because the module is meant to be used as a black box, that is, we should be able to create an instance of it, operate with the data, and at the end continue with the rest of the application's logic.

Using MySQL

We're going to use an SQL type of database with MySQL. We will add a few more lines of code to the already working Articles.js model. The idea is to have a class that supports the two databases as two different options. At the end, we should be able to switch from one to the other by simply changing the value of a variable. Similar to MongoDB, we need to first install the database to be able to use it. The official download page is http://www.mysql.com/downloads. MySQL requires another Node.js module. It should be added again to the package.json file. We can see the module as follows:

"dependencies": {
    "mongodb": "1.3.20",
    "mysql": "2.0.0"
}

Similar to the MongoDB solution, we first need to connect to the server. To do so, we need to know the values of the host, username, and password fields and, because the data is organized in databases, the name of the database. In MySQL, we put our data into different databases. So, the following code defines the needed variables:

var mysql = require('mysql'),
    mysql_host = "127.0.0.1",
    mysql_user = "root",
    mysql_password = "",
    mysql_database = "blog_application",
    connection;

The previous example leaves the password field empty, but we should set the proper value for our system. The MySQL database requires us to define a table and its fields before we start saving data.
So, the following code is a short dump of the table used in this article:

CREATE TABLE IF NOT EXISTS `articles` (
    `id` int(11) NOT NULL AUTO_INCREMENT,
    `title` longtext NOT NULL,
    `text` longtext NOT NULL,
    `date` varchar(100) NOT NULL,
    PRIMARY KEY (`id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8 AUTO_INCREMENT=1;

Once we have a database and its table set, we can continue with the database connection, as follows:

connection = mysql.createConnection({
    host: mysql_host,
    user: mysql_user,
    password: mysql_password
});
connection.connect(function(err) {
    if (err) {
        throw new Error("Can't connect to MySQL.");
    } else {
        connection.query("USE " + mysql_database, function(err, rows, fields) {
            if (err) {
                throw new Error("Missing database.");
            } else {
                console.log("Successfully selected database.");
            }
        })
    }
});

The driver provides a method to connect to the server and execute queries. The first executed query selects the database. If everything is ok, you should see Successfully selected database as an output in your console. Half of the job is done. What we should do now is replicate the methods returned in the first MongoDB implementation. We need to do this because, when we switch to MySQL, the code using the class would otherwise stop working. By replicating them, we mean that they should have the same names and should accept the same arguments. If we do everything correctly, at the end our application will support two types of databases, and all we have to do is change the value of the type variable:

return {
    add: function(data, callback) {
        var date = new Date();
        var query = "";
        query += "INSERT INTO articles (title, text, date) VALUES (";
        query += connection.escape(data.title) + ", ";
        query += connection.escape(data.text) + ", ";
        // getMonth() is zero-based, so we add 1 to produce
        // the calendar month.
        query += "'" + date.getFullYear() + "-" + (date.getMonth() + 1) + "-" + date.getDate() + "'";
        query += ")";
        connection.query(query, callback);
    },
    update: function(data, callback) {
        var query = "UPDATE articles SET ";
        query += "title=" + connection.escape(data.title) + ", ";
        query += "text=" + connection.escape(data.text) + " ";
        query += "WHERE id='" + data.id + "'";
        connection.query(query, callback);
    },
    get: function(callback) {
        var query = "SELECT * FROM articles ORDER BY id DESC";
        connection.query(query, function(err, rows, fields) {
            if (err) {
                throw new Error("Error getting.");
            } else {
                callback(rows);
            }
        });
    },
    remove: function(id, callback) {
        var query = "DELETE FROM articles WHERE id='" + id + "'";
        connection.query(query, callback);
    }
}

The code is a little longer than the one in the first MongoDB variant. That's because we needed to construct MySQL queries from the passed data. Keep in mind that we have to escape the information that comes into the module. That's why we use connection.escape(). With these lines of code, our model is completed. Now we can add, edit, remove, or get data. Let's continue with the part that shows the articles to our users.

Developing the client side with AngularJS

Let's assume that there is some data in the database and we are ready to present it to the users. So far, we have only developed the model, which is the class that takes care of the access to the information. To simplify the process, we will use Express here. We first need to update the package.json file and include the framework in it, as follows:

"dependencies": {
    "express": "3.4.6",
    "jade": "0.35.0",
    "mongodb": "1.3.20",
    "mysql": "2.0.0"
}

We are also adding Jade, because we are going to use it as a template language.
Writing markup in plain HTML is not very efficient nowadays. By using a template engine, we can separate the data from the HTML markup, which makes our application much better structured. Jade's syntax is kind of similar to HTML. We can write tags without the need to close them:

body
    p(class="paragraph", data-id="12").
        Sample text here
    footer
        a(href="#"). my site

The preceding code snippet is transformed to the following code snippet:

<body>
    <p data-id="12" class="paragraph">Sample text here</p>
    <footer><a href="#">my site</a></footer>
</body>

Jade relies on the indentation in the content to distinguish the tags. Let's start with the project structure. We placed our already written class, Articles.js, inside the models directory. The public directory will contain CSS styles, and all the necessary client-side JavaScript: the AngularJS library, the AngularJS router module, and our custom code. We will skip some of the explanations about the following code. Our index.js file looks as follows:

var express = require('express');
var app = express();
var articles = require("./models/Articles")();

app.set('views', __dirname + '/views');
app.set('view engine', 'jade');
app.use(express.static(__dirname + '/public'));
app.use(function(req, res, next) {
    req.articles = articles;
    next();
});

app.get('/api/get', require("./controllers/api/get"));
app.get('/', require("./controllers/index"));

app.listen(3000);
console.log('Listening on port 3000');

At the beginning, we require the Express framework and our model. Maybe it's better to initialize the model inside the controller, but in our case this is not necessary. Just after that, we set up some basic options for Express and define our own middleware. It has only one job to do, and that is to attach the model to the request object. We are doing this because the request object is passed to all the route handlers. In our case, these handlers are actually the controllers. So, Articles.js becomes accessible everywhere via the req.articles property. At the end of the script, we placed two routes. The second one catches the usual requests that come from the users. The first one, /api/get, is a bit more interesting. We want to build our frontend on top of AngularJS. So, the data that is stored in the database should be handled not in the Node.js part, but on the client side, where we use Google's framework. To make this possible, we will create routes/controllers to get, add, edit, and delete records. Everything will be controlled by HTTP requests performed by AngularJS. In other words, we need an API. Before we start using AngularJS, let's take a look at the /controllers/api/get.js controller:

module.exports = function(req, res, next) {
    req.articles.get(function(rows) {
        res.send(rows);
    });
}

The main job is done by our model and the response is handled by Express. It's nice because if we pass a JavaScript object, as we did (rows is actually an array of objects), the framework sets the response headers automatically. To test the result, we could run the application with node index.js and open http://localhost:3000/api/get. If we don't have any records in the database, we will get an empty array. Otherwise, the stored articles will be returned. So, that's the URL that we should hit from within the AngularJS controller in order to get the information. The code of the /controllers/index.js controller is also just a few lines.
We can see the code as follows:

module.exports = function(req, res, next) {
    res.render("list", { app: "" });
}

It simply renders the list view, which is stored in the list.jade file. That file should be saved in the /views directory. But before we see its code, we will check another file, which acts as a base for all the pages. Jade has a nice feature called blocks. We may define different partials and combine them into one template. The following is our layout.jade file:

doctype html
html(ng-app="#{app}")
    head
        title Blog
        link(rel='stylesheet', href='/style.css')
        script(src='/angular.min.js')
        script(src='/angular-route.min.js')
    body
        block content

There is only one variable passed to this template, which is #{app}. We will need it later to initialize the administration's module. The angular.min.js and angular-route.min.js files should be downloaded from the official AngularJS site, and placed in the /public directory. The body of the page contains a block placeholder called content, which we will later fill with the list of the articles. The following is the list.jade file:

extends layout

block content
    .container(ng-controller="BlogCtrl")
        section.articles
            article(ng-repeat="article in articles")
                h2 {{article.title}}
                br
                small published on {{article.date}}
                p {{article.text}}
    script(src='/blog.js')

The two lines in the beginning combine both the templates into one page. The Express framework transforms the Jade template into HTML and serves it to the browser of the user. From there, the client-side JavaScript takes control. We are using the ng-controller directive, saying that the div element will be controlled by an AngularJS controller called BlogCtrl. The same controller should have a variable, articles, filled with the information from the database. ng-repeat goes through the array and displays the content to the users. The blog.js file holds the code of the controller:

function BlogCtrl($scope, $http) {
    $scope.articles = [
        { title: "", text: "Loading ..." }
    ];
    $http({method: 'GET', url: '/api/get'})
    .success(function(data, status, headers, config) {
        $scope.articles = data;
    })
    .error(function(data, status, headers, config) {
        console.error("Error getting articles.");
    });
}

The controller has two dependencies. The first one, $scope, points to the current view. Whatever we assign as a property there is available as a variable in our HTML markup. Initially, we add only one element, which doesn't have a title, but has text. It is shown to indicate that we are still loading the articles from the database. The second dependency, $http, provides an API in order to make HTTP requests. So, all we have to do is query /api/get, fetch the data, and pass it to the $scope dependency. The rest is done by AngularJS and its magical two-way data binding. To make the application a little more interesting, we will add a search field, as follows:

// views/list.jade
header
    .search
        input(type="text", placeholder="type a filter here", ng-model="filterText")
    h1 Blog
    hr

The ng-model directive binds the value of the input field to a variable inside our $scope dependency. However, this time, we don't have to edit our controller and can simply apply the same variable as a filter to the ng-repeat:

article(ng-repeat="article in articles | filter:filterText")

As a result, the articles shown will be filtered based on the user's input. Two simple additions, but something really valuable is on the page. The filters of AngularJS can be very powerful.
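For instance, filters can be chained. If the list grows too long, the built-in limitTo filter could cap how many articles are rendered at once; the value 10 below is an arbitrary choice:

article(ng-repeat="article in articles | filter:filterText | limitTo:10")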
Implementing a control panel

The control panel is the place where we will manage the articles of the blog. Several things should be done in the backend before continuing with the user interface. They are as follows:

app.set("username", "admin");
app.set("password", "pass");
app.use(express.cookieParser('blog-application'));
app.use(express.session());

The previous lines of code should be added to /index.js. Our administration should be protected, so the first two lines define our credentials. We are using Express's settings as data storage, simply creating key-value pairs. Later, if we need the username, we can get it with app.get("username"). The next two lines enable session support. We need that because of the login process. We added a middleware, which attaches the articles to the request object. We will do the same with the current user's status, as follows:

app.use(function(req, res, next) {
    if ((
        req.session &&
        req.session.admin === true
    ) || (
        req.body &&
        req.body.username === app.get("username") &&
        req.body.password === app.get("password")
    )) {
        req.logged = true;
        req.session.admin = true;
    }
    next();
});

Our if statement is a little long, but it tells us whether the user is logged in or not. The first part checks whether there is a session created and the second one checks whether the user submitted a form with the correct username and password. If these expressions are true, then we attach a variable, logged, to the request object and create a session that will be valid during the following requests. There is only one thing left to add to the main application file: a few routes that will handle the control panel operations. In the following code, we are defining them along with the needed route handler:

var protect = function(req, res, next) {
    if (req.logged) {
        next();
    } else {
        res.send(401, 'No Access.');
    }
}

app.post('/api/add', protect, require("./controllers/api/add"));
app.post('/api/edit', protect, require("./controllers/api/edit"));
app.post('/api/delete', protect, require("./controllers/api/delete"));
app.all('/admin', require("./controllers/admin"));

The three routes, which start with /api, will use the model Articles.js to add, edit, and remove articles from the database. These operations should be protected. We will add a middleware function that takes care of this. If the req.logged variable is not available, it simply responds with a 401 - Unauthorized status code. The last route, /admin, is a little different because it shows a login form instead. The following is the controller to create new articles:

module.exports = function(req, res, next) {
    req.articles.add(req.body, function() {
        res.send({success: true});
    });
}

We transfer most of the logic to the frontend, so again, there are just a few lines. What is interesting here is that we pass req.body directly to the model. It actually contains the data submitted by the user.
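Note that req.body is only populated when body-parsing middleware is registered. With the Express 3 line used in this article, that would be a one-time addition in /index.js, before the routes; a sketch:

// Enable parsing of JSON and form-encoded request bodies,
// so req.body is available in the controllers.
app.use(express.json());
app.use(express.urlencoded());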
The following code shows how the req.articles.add method looks for the MongoDB implementation:

add: function(data, callback) {
    data.id = crypto.randomBytes(20).toString('hex');
    collection.insert(data, {}, callback || function() {});
}

And the MySQL implementation is as follows:

add: function(data, callback) {
    var date = new Date();
    var query = "";
    query += "INSERT INTO articles (title, text, date) VALUES (";
    query += connection.escape(data.title) + ", ";
    query += connection.escape(data.text) + ", ";
    query += "'" + date.getFullYear() + "-" + (date.getMonth() + 1) + "-" + date.getDate() + "'";
    query += ")";
    connection.query(query, callback);
}

In both cases, we need title and text in the passed data object. Thankfully, due to Express' bodyParser middleware, this is what we have in the req.body object. We can directly forward it to the model. The other route handlers are almost the same:

// api/edit.js
module.exports = function(req, res, next) {
    req.articles.update(req.body, function() {
        res.send({success: true});
    });
}

What we changed is the method of the Articles.js class. It is not add but update. The same technique is applied in the route to delete an article. We can see it as follows:

// api/delete.js
module.exports = function(req, res, next) {
    req.articles.remove(req.body.id, function() {
        res.send({success: true});
    });
}

What we need for deletion is not the whole body of the request, but only the unique ID of the record. Every API method sends {success: true} as a response. While we are dealing with API requests, we should always return a response, even if something goes wrong. The last thing in the Node.js part, which we have to cover, is the controller responsible for the user interface of the administration panel, that is, the ./controllers/admin.js file:

module.exports = function(req, res, next) {
    if (req.logged) {
        res.render("admin", { app: "admin" });
    } else {
        res.render("login", { app: "" });
    }
}

There are two templates that are rendered: /views/admin.jade and /views/login.jade. Based on the req.logged variable, which we set in /index.js, the script decides which one to show. If the user is not logged in, then a login form is sent to the browser, as follows:

extends layout

block content
    .container
        header
            h1 Administration
            hr
        section.articles
            article
                form(method="post", action="/admin")
                    span Username:
                    br
                    input(type="text", name="username")
                    br
                    span Password:
                    br
                    input(type="password", name="password")
                    br
                    br
                    input(type="submit", value="login")

There is no AngularJS code here. All we have is the good old HTML form, which submits its data via POST to the same URL—/admin. If the username and password are correct, the req.logged variable is set to true and the controller renders the other template:

extends layout

block content
    .container
        header
            h1 Administration
            hr
            a(href="/") Public
            span |
            a(href="#/") List
            span |
            a(href="#/add") Add
        section(ng-view)
    script(src='/admin.js')

The control panel needs several views to handle all the operations. AngularJS has a great router module, which works with hashtag-type URLs, that is, URLs such as /admin#/add. The same module requires a placeholder for the different partials. In our case, this is a section tag. The ng-view attribute tells the framework that this is the element prepared for that logic. At the end of the template, we are adding an external file, which keeps the whole client-side JavaScript code that is needed by the control panel.
While the client-side part of the application only needs to load the articles, the control panel requires a lot more functionalities. It is good to use the modular system of AngularJS. We need the routes and views to change, so the ngRoute module is needed as a dependency. This module is not added in the main angular.min.js build. It is placed in the angular-route.min.js file. The following code shows how our module starts:

var admin = angular.module('admin', ['ngRoute']);
admin.config(['$routeProvider',
    function($routeProvider) {
        $routeProvider
        .when('/', {})
        .when('/add', {})
        .when('/edit/:id', {})
        .when('/delete/:id', {})
        .otherwise({
            redirectTo: '/'
        });
    }
]);

We configured the router by mapping URLs to specific routes. At the moment, the routes are just empty objects, but we will fix that shortly. Every controller will need to make HTTP requests to the Node.js part of the application. It will be nice to have such a service and use it all over our code. We can see an example as follows:

admin.factory('API', function($http) {
    var request = function(method, url) {
        return function(callback, data) {
            $http({
                method: method,
                url: url,
                data: data
            })
            .success(callback)
            .error(function(data, status, headers, config) {
                console.error("Error requesting '" + url + "'.");
            });
        }
    }
    return {
        get: request('GET', '/api/get'),
        add: request('POST', '/api/add'),
        edit: request('POST', '/api/edit'),
        remove: request('POST', '/api/delete')
    }
});

One of the best things about AngularJS is that it works with plain JavaScript objects. There are no unnecessary abstractions and no extending or inheriting special classes. We are using the .factory method to create a simple JavaScript object. It has four methods that can be called: get, add, edit, and remove. Each one of them is a function produced by the helper function, request. The service has only one dependency, $http. We already know this service; it handles HTTP requests nicely. The URLs that we are going to query are the same ones that we defined in the Node.js part. Now, let's create a controller that will show the articles currently stored in the database. First, we should replace the empty route object .when('/', {}) with the following object:

.when('/', {
    controller: 'ListCtrl',
    template: '\
        <article ng-repeat="article in articles">\
            <hr />\
            <strong>{{article.title}}</strong><br />\
            (<a href="#/edit/{{article.id}}">edit</a>)\
            (<a href="#/delete/{{article.id}}">remove</a>)\
        </article>\
    '
})

The object has to contain a controller and a template. The template is nothing more than a few lines of HTML markup. It looks a bit like the template used to show the articles on the client side. The difference is the links used to edit and delete. JavaScript doesn't allow new lines in the string definitions. The backward slashes at the end of the lines prevent the syntax errors that would otherwise be thrown by the browser. The following is the code for the controller. It is defined, again, in the module:

admin.controller('ListCtrl', function($scope, API) {
    API.get(function(articles) {
        $scope.articles = articles;
    });
});

And here is the beauty of the AngularJS dependency injection. Our custom-defined service, API, is automatically initialized and passed to the controller. The .get method fetches the articles from the database. Later, we send the information to the current $scope dependency and the two-way data binding does the rest. The articles are shown on the page.
Working with AngularJS is so easy that we can combine the controllers for adding and editing into one. Let's store the route object in an external variable, as follows:

var AddEditRoute = {
    controller: 'AddEditCtrl',
    template: '\
        <hr />\
        <article>\
            <form>\
                <span>Title</span><br />\
                <input type="text" ng-model="article.title"/><br />\
                <span>Text</span><br />\
                <textarea rows="7" ng-model="article.text"></textarea>\
                <br /><br />\
                <button ng-click="save()">save</button>\
            </form>\
        </article>\
    '
};

And later, assign it to both routes, as follows:

.when('/add', AddEditRoute)
.when('/edit/:id', AddEditRoute)

The template is just a form with the necessary fields and a button, which calls the save method in the controller. Notice that we bound the input field and the text area to variables inside the $scope dependency. This comes in handy because we don't need to access the DOM to get the values. We can see this as follows:

admin.controller(
    'AddEditCtrl',
    function($scope, API, $location, $routeParams) {
        var editMode = $routeParams.id ? true : false;
        if (editMode) {
            API.get(function(articles) {
                articles.forEach(function(article) {
                    if (article.id == $routeParams.id) {
                        $scope.article = article;
                    }
                });
            });
        }
        $scope.save = function() {
            API[editMode ? 'edit' : 'add'](function() {
                $location.path('/');
            }, $scope.article);
        }
    }
);

The controller receives four dependencies. We already know about $scope and API. The $location dependency is used when we want to change the current route or, in other words, to forward the user to another view. The $routeParams dependency is needed to fetch parameters from the URL. In our case, /edit/:id is a route with a variable inside. Inside the code, the id is available in $routeParams.id. The adding and editing of articles uses the same form. So, with a simple check, we know what the user is currently doing. If the user is in the edit mode, then we fetch the article based on the provided id and fill the form. Otherwise, the fields are empty and a new record will be created. The deletion of an article can be done by using a similar approach, which is adding a route object and defining a new controller. We can see the deletion as follows:

.when('/delete/:id', {
    controller: 'RemoveCtrl',
    template: ' '
})

We don't need a template in this case. Once the article is deleted from the database, we will forward the user to the list page. We have to call the remove method of the API. Here is what the RemoveCtrl controller looks like:

admin.controller(
    'RemoveCtrl',
    function($scope, $location, $routeParams, API) {
        API.remove(function() {
            $location.path('/');
        }, $routeParams);
    }
);

The preceding code uses the same dependencies as the previous controller. This time, we simply forward the $routeParams dependency to the API. And because it is a plain JavaScript object, everything works as expected.

Summary

In this article, we built a simple blog by writing the backend of the application in Node.js. The module for database communication, which we wrote, can work with the MongoDB or MySQL database and store articles. The client-side part and the control panel of the blog were developed with AngularJS. We then defined a custom service using the built-in HTTP and routing mechanisms. Node.js works well with AngularJS, mainly because both are written in JavaScript. We found out that AngularJS is built to support the developer. It removes all those boring tasks such as DOM element referencing, attaching event listeners, and so on. It's a great choice for the modern client-side coding stack.

Compression Formats in Linux Shell Script

Packt
31 Jan 2011
6 min read
Linux Shell Scripting Cookbook

Solve real-world shell scripting problems with over 110 simple but incredibly effective recipes
Master the art of crafting one-liner command sequences to perform tasks such as text processing, digging data from files, and a lot more
Practical problem solving techniques adherent to the latest Linux platform
Packed with easy-to-follow examples to exercise all the features of the Linux shell scripting language
Part of Packt's Cookbook series: Each recipe is a carefully organized sequence of instructions to complete the task as efficiently as possible

Compressing with gunzip (gzip)

gzip is a commonly used compression format on GNU/Linux platforms. Utilities such as gzip, gunzip, and zcat are available to handle gzip-compressed file types. gzip can be applied to a single file only. It cannot archive directories and multiple files. Hence, we use a tar archive and compress it with gzip. When multiple files are given as input, it will produce several individually compressed (.gz) files. Let's see how to operate with gzip.

How to do it...

In order to compress a file with gzip, use the following command:

$ gzip filename
$ ls
filename.gz

Then it will remove the file and produce a compressed file called filename.gz. Extract a gzip compressed file as follows:

$ gunzip filename.gz

It will remove filename.gz and produce an uncompressed version called filename. In order to list out the properties of a compressed file, use:

$ gzip -l test.txt.gz
compressed  uncompressed  ratio   uncompressed_name
        35             6  -33.3%  test.txt

The gzip command can read a file from stdin and also write a compressed file into stdout. Read from stdin and write to stdout as follows:

$ cat file | gzip -c > file.gz

The -c option is used to specify output to stdout. We can specify the compression level for gzip. Use the --fast or the --best option to provide low and high compression ratios, respectively.

There's more...

The gzip command is often used with other commands. It also has advanced options to specify the compression ratio. Let's see how to work with these features.

Gzip with tarball

We usually use gzip with tarballs. A tarball can be compressed by using the -z option passed to the tar command while archiving and extracting. You can create gzipped tarballs using the following methods:

Method - 1

$ tar -czvvf archive.tar.gz [FILES]

Or:

$ tar -cavvf archive.tar.gz [FILES]

The -a option specifies that the compression format should automatically be detected from the extension.

Method - 2

First, create a tarball:

$ tar -cvvf archive.tar [FILES]

Compress it after tarballing as follows:

$ gzip archive.tar

If many files (a few hundred) are to be archived in a tarball and need to be compressed, we use Method - 2 with a few changes. The issue with giving many files as command arguments to tar is that it can accept only a limited number of files from the command line. In order to solve this issue, we can create a tar file by adding files one by one using a loop with an append option (-r) as follows:

FILE_LIST="file1 file2 file3 file4 file5"
for f in $FILE_LIST;
do
    tar -rvf archive.tar $f
done
gzip archive.tar

In order to extract a gzipped tarball, use the following:

$ tar -xzvvf archive.tar.gz -C extract_directory

In this command:
-x is for extraction
-z is for gzip specification

Or:

$ tar -xavvf archive.tar.gz -C extract_directory

In the above command, the -a option is used to detect the compression format automatically.

zcat – reading gzipped files without extracting

zcat is a command that can be used to dump an extracted file from a .gz file to stdout without manually extracting it.
The .gz file remains as before, but it will dump the extracted file into stdout as follows:

$ ls
test.gz
$ zcat test.gz
A test file
# file test contains a line "A test file"
$ ls
test.gz

Compression ratio

We can specify the compression ratio, which is available in the range 1 to 9, where:

1 is the lowest, but fastest
9 is the best, but slowest

You can also specify the ratios in between as follows:

$ gzip -9 test.img

This will compress the file to the maximum.

Compressing with bunzip (bzip)

bzip2 is another compression technique, which is very similar to gzip. bzip2 typically produces smaller (more compressed) files than gzip. It comes with all Linux distributions. Let's see how to use bzip2.

How to do it...

In order to compress with bzip2, use:

$ bzip2 filename
$ ls
filename.bz2

Then it will remove the file and produce a compressed file called filename.bz2. Extract a bzipped file as follows:

$ bunzip2 filename.bz2

It will remove filename.bz2 and produce an uncompressed version called filename. bzip2 can read a file from stdin and also write a compressed file into stdout. In order to read from stdin and write out to stdout, use:

$ cat file | bzip2 -c > file.bz2

-c is used to specify output to stdout. We usually use bzip2 with tarballs. A tarball can be compressed by using the -j option passed to the tar command while archiving and extracting. Creating a bzipped tarball can be done by using the following methods:

Method - 1

$ tar -cjvvf archive.tar.bz2 [FILES]

Or:

$ tar -cavvf archive.tar.bz2 [FILES]

The -a option specifies to automatically detect the compression format from the extension.

Method - 2

First, create the tarball:

$ tar -cvvf archive.tar [FILES]

Compress it after tarballing:

$ bzip2 archive.tar

If we need to add hundreds of files to the archive, the above commands may fail. To fix that issue, use a loop to append files to the archive one by one using the -r option. Extract a bzipped tarball as follows:

$ tar -xjvvf archive.tar.bz2 -C extract_directory

In this command:
-x is used for extraction
-j is for bzip2 specification
-C is for specifying the directory to which the files are to be extracted

Or, you can use the following command:

$ tar -xavvf archive.tar.bz2 -C extract_directory

-a will automatically detect the compression format.

There's more...

bzip2 has several additional options to carry out different functions. Let's go through a few of them.

Keeping input files without removing them

While using bzip2 or bunzip2, the input file is removed once the output file is produced. But we can prevent that by using the -k option. For example:

$ bunzip2 test.bz2 -k
$ ls
test test.bz2

Compression ratio

We can specify the compression ratio, which is available in the range of 1 to 9 (where 1 is the least compression, but fast, and 9 is the highest possible compression, but much slower). For example:

$ bzip2 -9 test.img

This command provides maximum compression.


Component Composition

Packt
22 Feb 2016
38 min read
In this article, we understand how large-scale JavaScript applications amount to a series of communicating components. Composition is a big topic, and one that's relevant to scalable JavaScript code. When we start thinking about the composition of our components, we start to notice certain flaws in our design; limitations that prevent us from scaling in response to influencers. The composition of a component isn't random—there's a handful of prevalent patterns for JavaScript components. We'll begin the article with a look at some of these generic component types that encapsulate common patterns found in every web application. Understanding that components implement patterns is crucial for extending these generic components in a way that scales. It's one thing to get our component composition right from a purely technical standpoint; it's another to easily map these components to features. The same challenge holds true for components we've already implemented. The way we compose our code needs to provide a level of transparency, so that it's feasible to decompose our components and understand what they're doing, both at runtime and at design time. Finally, we'll take a look at the idea of decoupling business logic from our components. This is nothing new; the idea of separation-of-concerns has been around for a long time. The challenge with JavaScript applications is that they touch so many things—it's difficult to clearly separate business logic from other implementation concerns. The way in which we organize our source code (relative to the components that use it) can have a dramatic effect on our ability to scale.

Generic component types

It's exceedingly unlikely that anyone, in this day and age, would set out to build a large-scale JavaScript application without the help of libraries, a framework, or both. Let's refer to these collectively as tools, since we're more interested in using the tools that help us scale, and not necessarily in which tools are better than others. At the end of the day, it's up to the development team to decide which tool is best for the application we're building, personal preferences aside. Guiding factors in choosing the tools we use are the type of components they provide, and what these are capable of. For example, a larger web framework may have all the generic components we need. On the other hand, a functional programming utility library might provide a lot of the low-level functionality we need. How these things are composed into a cohesive feature that scales is for us to figure out. The idea is to find tools that expose generic implementations of the components we need. Often, we'll extend these components, building specific functionality that's unique to our application. This section walks through the most typical components we'd want in a large-scale JavaScript application.

Modules

Modules exist, in one form or another, in almost every programming language. Except in JavaScript. That's almost untrue though—ECMAScript 6, in its final draft status at the time of this writing, introduces the notion of modules. However, there're tools out there today that allow us to modularize our code, without relying on the script tag. Large-scale JavaScript code is still a relatively new thing. Things like the script tag weren't meant to address issues like modular code and dependency management. RequireJS is probably the most popular module loader and dependency resolver.
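To give a feel for the format, here is roughly what the main.js/util.js pair shown later in this section would look like when declared as AMD modules for RequireJS; the module names and paths are illustrative:

// main.js -- declares a dependency on util.js. RequireJS
// loads the dependency first, then invokes the factory
// function with the resolved module as an argument.
define(['util'], function(util) {
    util.log('Initializing...');
});

// util.js -- an AMD module exporting an object with a
// log() helper; the return value is what dependents receive.
define([], function() {
    return {
        log: function(message) {
            if (console) {
                console.log(message);
            }
        }
    };
});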
The fact that we need a library just to load modules into our front-end application speaks of the complexities involved. For example, module dependencies aren't a trivial matter when there are network latency and race conditions to consider. Another option is to use a transpiler like Browserify. This approach is gaining traction because it lets us declare our modules using the CommonJS format. This format is used by NodeJS, and the upcoming ECMAScript module specification is a lot closer to CommonJS than to AMD. The advantage is that the code we write today has better compatibility with back-end JavaScript code, and with the future. Some frameworks, like Angular or Marionette, have their own ideas of what modules are, albeit more abstract ones. These modules are more about organizing our code than they are about tactfully delivering code from the server to the browser. These types of modules might even map better to other features of the framework. For example, if there's a centralized application instance that's used to manage our modules, the framework might provide a means to manage modules from the application. Take a look at the following diagram:

A global application component using modules as its building blocks. Modules can be small, containing only one feature, or large, containing several features.

This lets us perform higher-level tasks at the module level (things like disabling modules or configuring them with arguments). Essentially, modules speak for features. They're a packaging mechanism that allows us to encapsulate things about a given feature that the rest of the application doesn't care about. Modules help us scale our application by adding high-level operations to our features, by treating our features as the building blocks. Without modules, we'd have no meaningful way to do this. The composition of modules looks different depending on the mechanism used to declare the module. A module could be straightforward, providing a namespace from which objects can be exported. Or, if we're using a specific framework module flavor, there could be much more to it, like automatic event life cycles, or methods for performing boilerplate setup tasks. However we slice it, modules in the context of scalable JavaScript are a means to create larger building blocks, and a means to handle complex dependencies:

// main.js
// Imports a log() function from the util.js module.
import log from 'util.js';

log('Initializing...');

// util.js
// Exports a basic console.log() wrapper function.
'use strict';

export default function log(message) {
    if (console) {
        console.log(message);
    }
}

While it's easier to build large-scale applications with module-sized building blocks, it's also easier to tear a module out of an application and work with it in isolation. If our application is monolithic or our modules are too plentiful and fine-grained, it's very difficult for us to excise problem-spots from our code, or to test work in progress. Our component may function perfectly well on its own. It could have negative side-effects somewhere else in the system, however. If we can remove pieces of the puzzle, one at a time and without too much effort, we can scale the trouble-shooting process.

Routers

Any large-scale JavaScript application has a significant number of possible URIs. The URI is the address of the page that the user is looking at. They can navigate to this resource by clicking on links, or they may be taken to a new URI automatically by our code, perhaps in response to some user action.
The web has always relied on URIs, long before the advent of large-scale JavaScript applications. URIs point to resources, and resources can be just about anything. The larger the application, the more resources, and the more potential URIs. Router components are tools we use in the front-end to listen for these URI change events and respond to them accordingly. There's less reliance on the back-end web servers parsing the URI and returning the new content. Most web sites still do this, but there're several disadvantages with this approach when it comes to building applications:

The browser triggers events when the URI changes, and the router component responds to these changes. The URI changes can be triggered from the history API, or from location.hash.

The main problem is that we want the UI to be portable, as in, we want to be able to deploy it against any back-end and things should work. Since we're not assembling markup for the URI in the back-end, it doesn't make sense to parse the URI in the back-end either. We declaratively specify all the URI patterns in our router components. We generally refer to these as routes. Think of a route as a blueprint, and a URI as an instance of that blueprint. This means that when the router receives a URI, it can correlate it to a route. That, in essence, is the responsibility of router components. This is easy with smaller applications, but when we're talking about scale, further deliberation on router design is in order. As a starting point, we have to consider the URI mechanism we want to use. The two choices are basically listening to hash change events, or utilizing the history API. Using hash-bang URIs is probably the simplest approach. The history API, available in every modern browser, on the other hand, lets us format URIs without the hash-bang—they look like real URIs. The router component in the framework we're using may support only one or the other, thus simplifying the decision. Some support both URI approaches, in which case we need to decide which one works best for our application. The next thing to consider about routing in our architecture is how to react to route changes. There're generally two approaches to this. The first is to declaratively bind a route to a callback function. This is ideal when the router doesn't have a lot of routes. The second approach is to trigger events when routes are activated. This means that there's nothing directly bound to the router. Instead, some other component listens for such an event. This approach is beneficial when there are lots of routes, because the router has no knowledge of the components, just the routes. Here's an example that shows a router component listening to route events:

// router.js
import Events from 'events.js';

// A router is a type of event broker, it
// can trigger routes, and listen to route
// changes.
export default class Router extends Events {

    // If a route configuration object is passed,
    // then we iterate over it, calling listen()
    // on each route name. This is translating from
    // route specs to event listeners.
    constructor(routes) {
        super();
        if (routes != null) {
            for (let key of Object.keys(routes)) {
                this.listen(key, routes[key]);
            }
        }
    }

    // This is called when the caller is ready to start
    // responding to route events. We listen to the
    // "onhashchange" window event. We manually call
    // our handler here to process the current route.
    start() {
        window.addEventListener('hashchange',
            this.onHashChange.bind(this));
        this.onHashChange();
    }

    // When there's a route change, we translate this into
    // a triggered event. Remember, this router is also an
    // event broker. The event name is the current URI.
    onHashChange() {
        this.trigger(location.hash, location.hash);
    }
};

// Creates a router instance, and uses two different
// approaches to listening to routes.
//
// The first is by passing configuration to the Router.
// The key is the actual route, and the value is the
// callback function.
//
// The second uses the listen() method of the router,
// where the event name is the actual route, and the
// callback function is called when the route is activated.
//
// Nothing is triggered until the start() method is called,
// which gives us an opportunity to set everything up. For
// example, the callback functions that respond to routes
// might require something to be configured before they can
// run.
import Router from 'router.js';

function logRoute(route) {
    console.log(`${route} activated`);
}

var router = new Router({
    '#route1': logRoute
});

router.listen('#route2', logRoute);
router.start();

Some of the code required to run these examples is omitted from the listings. For example, the events.js module is included in the code bundle that comes with this book; it's just not that relevant to the example. Also, in the interest of space, the code examples avoid using specific frameworks and libraries. In practice, we're not going to write our own router or events API—our frameworks do that already. We're instead using vanilla ES6 JavaScript, to illustrate points pertinent to scaling our applications.

Another architectural consideration we'll want to make when it comes to routing is whether we want a global, monolithic router, a router per module, or some other component. The downside to having a monolithic router is that it becomes difficult to scale when it grows sufficiently large, as we keep adding features and routes. The advantage is that the routes are all declared in one place. Monolithic routers can still trigger events that all our components can listen to. The per-module approach to routing involves multiple router instances. For example, if our application has five components, each would have their own router. The advantage here is that the module is completely self-contained. Anyone working with this module doesn't need to look elsewhere to figure out which routes it responds to. Using this approach, we can also have a tighter coupling between the route definitions and the functions that respond to them, which could mean simpler code. The downside to this approach is that we lose the consolidated aspect of having all our routes declared in a central place. Take a look at the following diagram:

The router to the left is global—all modules use the same instance to respond to URI events. The modules to the right have their own routers. These instances contain configuration specific to the module, not the entire application.

Depending on the capabilities of the framework we're using, the router components may or may not support multiple router instances. It may only be possible to have one callback function per route. There may be subtle nuances to the router events we're not yet aware of.

Models/Collections

The API our application interacts with exposes entities. Once these entities have been transferred to the browser, we will store a model of those entities.
Collections are a bunch of related entities, usually of the same type. The tools we're using may or may not provide generic model and/or collection components, or they may have something similar but named differently. The goal of modeling API data is a rough approximation of the API entity. This could be as simple as storing models as plain JavaScript objects and collections as arrays. The challenge with simply storing our API entities as plain objects in arrays is that some other component is then responsible for talking to the API, triggering events when the data changes, and performing data transformations. We want other components to be able to transform collections and models where needed, in order to fulfill their duties. But we don't want repetitive code, and it's best if we're able to encapsulate the common things like transformations, API calls, and event life cycles. Take a look at the next diagram:

Models encapsulate interaction with APIs, parsing data, and triggering events when data changes. This leads to simpler code outside of the models.

Hiding the details of how the API data is loaded into the browser, or how we issue commands, helps us scale our application as we grow. As we add more entities to the API, the complexity of our code grows too. We can throttle this complexity by constraining the API interactions to our model and collection components. Another scalability issue we'll face with our models and collections is where they fit in the big picture. That is, our application is really just one big component, composed of smaller components. Our models and collections map well to our API, but not necessarily to features. API entities are more generic than specific features, and are often used by several features. Which leaves us with an open question—where do our models and collections fit into components? Here's an example that shows specific views extending generic views. The same model can be passed to both:

// A super simple model class.
class Model {
    constructor(first, last, age) {
        this.first = first;
        this.last = last;
        this.age = age;
    }
}

// The base view, with a name method that
// generates some output.
class BaseView {
    name() {
        return `${this.model.first} ${this.model.last}`;
    }
}

// Extends BaseView with a constructor that accepts
// a model and stores a reference to it.
class GenericModelView extends BaseView {
    constructor(model) {
        super();
        this.model = model;
    }
}

// Extends BaseView with specific constructor
// arguments, building the model internally.
class SpecificModelView extends BaseView {
    constructor(first, last, age) {
        super();
        this.model = new Model(...arguments);
    }
}

var properties = [ 'Terri', 'Hodges', 41 ];

// Make sure the data is the same in both views.
// The name() method should return the same result...
console.log('generic view',
    new GenericModelView(new Model(...properties)).name());
console.log('specific view',
    new SpecificModelView(...properties).name());

On one hand, components can be completely generic with regard to the models and collections they use. On the other hand, some components are specific with their requirements—they can directly instantiate their collections. Configuring generic components with specific models and collections at runtime only benefits us when the component truly is generic, and is used in several places. Otherwise, we might as well encapsulate the models within the components that use them. Choosing the right approach helps us scale, because not all our components will be entirely generic or entirely specific.
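As a sketch of the encapsulation idea, here is a minimal collection that hides the API call and event life cycle behind one component. The events.js module is the same one assumed by the earlier listings, and the /api/articles URL is an assumption:

import Events from 'events.js';

// A collection encapsulates fetching entities from the API,
// storing them, and notifying listeners when the data changes.
class Collection extends Events {
    constructor(url) {
        super();
        this.url = url;
        this.items = [];
    }

    // Loads entities from the API. Components listen for the
    // "sync" event instead of making requests themselves.
    fetch() {
        var request = new XMLHttpRequest();
        request.addEventListener('load', () => {
            this.items = JSON.parse(request.responseText);
            this.trigger('sync', this.items);
        });
        request.open('GET', this.url);
        request.send();
    }
}

// Components only know about events, not about the API.
var articles = new Collection('/api/articles');

articles.listen('sync', (items) => {
    console.log(`loaded ${items.length} articles`);
});

articles.fetch();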
Controllers/Views

Depending on the framework we're using, and the design patterns our team is following, controllers and views can represent different things. There are simply too many MV* pattern and style variations to provide a meaningful distinction in terms of scale. The minute differences have trade-offs relative to similar but different MV* approaches. For our purpose of discussing large-scale JavaScript code, we'll treat them as the same type of component. If we decide to separate the two concepts in our implementation, the ideas in this section will be relevant to both types. Let's stick with the term views for now, knowing that we're covering both views and controllers, conceptually. These components interact with several other component types, including routers, models or collections, and templates, which are discussed in the next section. When something happens, the user needs to be notified about it. The view's job is to update the DOM. This could be as simple as changing an attribute on a DOM element, or as involved as rendering a new template:

A view component updating the DOM in response to router and model events.

A view can update the DOM in response to several types of events. A route could have changed. A model could have been updated. Or something more direct, like a method call on the view component. Updating the DOM is not as straightforward as one might think. There's the performance to think about—what happens when our view is flooded with events? There's the latency to think about—how long will this JavaScript call stack run, before stopping and actually allowing the DOM to render? Another responsibility of our views is responding to DOM events. These are usually triggered by the user interacting with our UI. The interaction may start and end with our view. For example, depending on the state of something like user input or one of our models, we might update the DOM with a message. Or we might do nothing, if the event handler is debounced, for instance.

A debounced function groups multiple calls into one. For example, calling foo() 20 times in 10 milliseconds may only result in the implementation of foo() being called once. For a more detailed explanation, look at: http://drupalmotion.com/article/debounce-and-throttle-visual-explanation.

Most of the time, the DOM events get translated into something else, either a method call or another event. For example, we might call a method on a model, or transform a collection. The end result, most of the time, is that we provide feedback by updating the DOM. This can be done either directly, or indirectly. In the case of direct DOM updates, it's simple to scale. In the case of indirect updates, or updates through side-effects, scaling becomes more of a challenge. This is because, as the application acquires more moving parts, it becomes more difficult to form a mental map of cause and effect. Here's an example that shows a view listening to DOM events and model events:

import Events from 'events.js';

// A basic model. It extends "Events" so it
// can trigger events that other components
// listen to.
class Model extends Events {
    constructor(enabled) {
        super();
        this.enabled = !!enabled;
    }

    // Setters and getters for the "enabled" property.
    // Setting it also triggers an event. So other components
    // can listen to the "enabled" event.
Most of the time, the DOM events get translated into something else, either a method call or another event. For example, we might call a method on a model, or transform a collection. The end result, most of the time, is that we provide feedback by updating the DOM. This can be done either directly, or indirectly. In the case of direct DOM updates, it's simple to scale. In the case of indirect updates, or updates through side effects, scaling becomes more of a challenge. This is because as the application acquires more moving parts, it becomes more difficult to form a mental map of cause and effect.

Here's an example that shows a view listening to DOM events and model events:

import Events from 'events.js';

// A basic model. It extends "Events" so it
// can trigger events for other components to listen to.
class Model extends Events {
  constructor(enabled) {
    super();
    this.enabled = !!enabled;
  }

  // Setters and getters for the "enabled" property.
  // Setting it also triggers an event, so other components
  // can listen to the "enabled" event.
  set enabled(enabled) {
    this._enabled = enabled;
    this.trigger('enabled', enabled);
  }

  get enabled() {
    return this._enabled;
  }
}

// A view component that takes a model and a DOM element
// as arguments.
class View {
  constructor(element, model) {
    // When the model triggers the "enabled" event,
    // we adjust the DOM.
    model.listen('enabled', (enabled) => {
      element.setAttribute('disabled', !enabled);
    });

    // Set the state of the model when the element is
    // clicked. This will trigger the listener above.
    element.addEventListener('click', () => {
      model.enabled = false;
    });
  }
}

new View(document.getElementById('set'), new Model());

On the plus side of all this complexity, we actually get some reusable code. The view is agnostic as to how the model or router it's listening to is updated. All it cares about is specific events on specific components. This is actually helpful because it reduces the amount of special-case handling we need to implement.

The DOM structure that's generated at runtime, as a result of rendering all our views, needs to be taken into consideration as well. For example, if we look at some of the top-level DOM nodes, they have nested structure within them. It's these top-level nodes that form the skeleton of our layout. Perhaps this is rendered by the main application view, and each of our views has a child relationship to it. Or perhaps the hierarchy extends further down than that. The tools we're using most likely have mechanisms for dealing with these parent-child relationships. However, bear in mind that vast view hierarchies are difficult to scale.

Templates

Template engines used to reside mostly in the back-end framework. That's less true today, thanks in large part to the sophisticated template rendering libraries available in the front-end. With large-scale JavaScript applications, we rarely talk to back-end services about UI-specific things. We don't say, "here's a URL, render the HTML for me". The trend is to give our JavaScript components a certain level of autonomy, letting them render their own markup.

Having component markup coupled with the components that render it is a good thing. It means that we can easily discern where the markup in the DOM is being generated. We can then diagnose issues and tweak the design of a large-scale application.

Templates help establish a separation of concerns for each of our components. The markup that's rendered in the browser mostly comes from the template. This keeps markup-specific code out of our JavaScript. Front-end template engines aren't just tools for string replacement; they often have other tools to help reduce the amount of boilerplate JavaScript code we have to write. For example, we can embed things like conditionals and for-each loops in our markup, where they're suited.
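As a small illustration of embedding logic in markup, here's a sketch using plain template literals; real engines offer richer syntax, but the idea is the same. The data shape here is invented for the example:

// A tiny "template" as a function of its data. The conditional
// and the loop live with the markup, not in the view.
const userList = (users) => `
<ul class="users">
${users.map((user) => `
  <li>${user.name}${user.admin ? ' (admin)' : ''}</li>
`).join('')}
</ul>`;

console.log(userList([
  { name: 'Terri', admin: true },
  { name: 'Hodges', admin: false }
]));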
Application-specific components

The component types we've discussed so far are very useful for implementing scalable JavaScript code, but they're also very generic. Inevitably, during implementation we're going to hit a road block: the component composition patterns we've been following will not work for certain features. This is when it's time to step back and think about possibly adding a new type of component to our architecture.

For example, consider the idea of widgets. These are generic components that are mainly focused on presentation and user interactions. Let's say that many of our views are using the exact same DOM elements and the exact same event handlers. There's no point in repeating them in every view throughout our application. Might it be better to factor them into a common component? A view might be overkill; perhaps we need a new type of widget component.

Sometimes we'll create components for the sole purpose of composition. For example, we might have a component that glues router, view, model/collection, and template components together to form a cohesive unit. Modules partially solve this problem, but they aren't always enough. Sometimes we're missing that added bit of orchestration that our components need in order to communicate.

Extending generic components

We often discover, late in the development process, that the components we rely on are lacking something we need. If the base component we're using is designed well, then we can extend it, plugging in the new properties or functionality we need. In this section, we'll walk through some scenarios where we might need to extend the common generic components used throughout our application. If we're going to scale our code, we need to leverage these base components where we can. We'll probably want to start extending our own base components at some point too. Some tools are better than others at facilitating the extension mechanism through which we implement this specialized behavior.

Identifying common data and functionality

Before we look at extending the specific component types, it's worthwhile to consider the properties and functionality shared across all component types. Some of these things will be obvious up-front, while others are less pronounced. Our ability to scale depends, in part, on our ability to identify commonality across our components.

If we have a global application instance, quite common in large JavaScript applications, global values and functionality can live there. This can grow unruly down the line, though, as more common things are discovered. Another approach might be to have several global modules, as shown in the following diagram, instead of just a single application instance. Or both. But this doesn't scale from an understandability perspective:

The ideal component hierarchy doesn't extend beyond three levels. The top level is usually found in a framework our application depends on.

As a rule of thumb, we should, for any given component type, avoid extending it more than three levels down. For example, a generic view component from the tools we're using could be extended by our own generic version of it. This would include properties and functionality that every view instance in our application requires. This is only a two-level hierarchy, and easy to manage. It means that if any given component needs to extend our generic view, it can do so without complicating things. Three levels should be the maximum extension hierarchy depth for any given type. This is just enough to avoid unnecessary global data; going beyond this presents scaling issues because the hierarchy isn't easily grasped.

Extending router components

Our application may only require a single router instance. Even in this case, we may still need to override certain extension points of the generic router. In the case of multiple router instances, there are bound to be common properties and functionality that we don't want to repeat. For example, if every route in our application follows the same pattern, with only subtle differences, we can implement the tools in our base router to avoid repetitious code. In addition to declaring routes, events take place when a given route is activated.
Depending on the architecture of our application, different things need to happen. Maybe certain things always need to happen, no matter which route has been activated. This is where extending the router to provide our own functionality comes in handy. For example, we may have to validate permissions for a given route. It wouldn't make much sense for us to handle this through individual components, as this would not scale well with complex access control rules and a lot of routes. A sketch of this idea follows.
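Here's a minimal sketch of a base router that runs a permission check before activating any route. The router API shown is invented for illustration; a real framework router would expose its own extension hooks:

// A base router with a single extension point: hasPermission().
// Every route activation passes through activate().
class BaseRouter {
  constructor() {
    this.routes = new Map();
  }

  route(path, handler) {
    this.routes.set(path, handler);
  }

  // Subclasses override this to implement access control.
  hasPermission(path) {
    return true;
  }

  activate(path) {
    if (!this.hasPermission(path)) {
      console.log('access denied:', path);
      return;
    }
    const handler = this.routes.get(path);
    if (handler) {
      handler();
    }
  }
}

// An application router that plugs in its own rule, so no
// individual feature component has to check permissions.
class AppRouter extends BaseRouter {
  constructor(user) {
    super();
    this.user = user;
  }

  hasPermission(path) {
    return !path.startsWith('/admin') || this.user.isAdmin;
  }
}

const router = new AppRouter({ isAdmin: false });
router.route('/admin/users', () => console.log('admin users'));
router.activate('/admin/users'); // access denied: /admin/users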
Extending models/collections

Our models and collections, no matter what their specific implementation looks like, will share some common properties with one another, especially if they're targeting the same API, which is the common case. The specifics of a given model or collection revolve around the API endpoint, the data returned, and the possible actions taken. It's likely that we'll target the same base API path for all entities, and that all entities have a handful of shared properties. Rather than repeating ourselves in every model or collection instance, it's better to abstract the common data.

In addition to sharing properties among our models and collections, we can share common behavior. For instance, it's quite likely that a given model isn't going to have sufficient data for a given feature. Perhaps that data can be derived by transforming the model. These types of transformations can be common, and abstracted in a base model or collection. It really depends on the types of features we're implementing and how consistent they are with one another. If we're growing fast and getting lots of requests for "outside-the-box" features, then we're more likely to implement data transformations inside the views that require these one-off changes to the models or collections they're using.

Most frameworks take care of the nuances of performing XHR requests to fetch our data or perform actions. That's not the whole story, unfortunately, because our features will rarely map one-to-one with a single API entity. More likely, we will have a feature that requires several collections that are related to one another somehow, and a transformed collection. This type of operation can grow complex quickly, because we have to work with multiple XHR requests. We'll likely use promises to synchronize the fetching of these requests, and then perform the data transformation once we have all the necessary sources.

Here's an example that shows a specific model extending a generic model to provide new fetching behavior:

// The base fetch() implementation of a model sets
// some property values, and resolves the promise.
class BaseModel {
  fetch() {
    return new Promise((resolve, reject) => {
      this.id = 1;
      this.name = 'foo';
      resolve(this);
    });
  }
}

// Extends BaseModel with a specific implementation
// of fetch().
class SpecificModel extends BaseModel {
  // Overrides the base fetch() method. Returns
  // a promise that combines the original
  // implementation and the result of calling fetchSettings().
  fetch() {
    return Promise.all([
      super.fetch(),
      this.fetchSettings()
    ]);
  }

  // Returns a new Promise instance. Also sets a new
  // model property.
  fetchSettings() {
    return new Promise((resolve, reject) => {
      this.enabled = true;
      resolve(this);
    });
  }
}

// Make sure the properties are all in place, as expected,
// after the fetch() call completes.
new SpecificModel().fetch().then((result) => {
  var [ model ] = result;
  console.assert(model.id === 1, 'id');
  console.assert(model.name === 'foo', 'name');
  console.assert(model.enabled, 'enabled');
  console.log('fetched');
});

Extending controllers/views

When we have a base model or base collection, there are often properties shared between our controllers or views. That's because the job of a controller or a view is to render model or collection data. For example, if the same view is rendering the same model properties over and over, we can probably move that bit to a base view, and extend from that.

Perhaps the repetitive parts are in the templates themselves. This means that we might want to consider having a base template inside a base view, as shown in the following diagram. Views that extend this base view inherit this base template. Depending on the library or framework at our disposal, extending templates like this may not be feasible. Or the nature of our features may make this difficult to achieve. For example, there might not be a common base template, but there might be a lot of smaller views and templates that can plug into larger components:

A view that extends a base view can populate the template of the base view, as well as inherit other base view functionality.

Our views also need to respond to user interactions. They may respond directly, or forward the events up the component hierarchy. In either case, if our features are at all consistent, there will be some common DOM event handling that we'll want to abstract into a common base view. This is a huge help in scaling our application, because as we add more features, the amount of new DOM event handling code is minimized.
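A sketch of that abstraction might look like the following. The data-action convention and the element with the 'user' id are assumptions for the example, not a framework API:

// A base view that wires up DOM event handling common to
// every feature: delegated clicks on elements marked with
// a "data-action" attribute.
class BaseView {
  constructor(element) {
    this.element = element;
    element.addEventListener('click', (event) => {
      const action = event.target.getAttribute('data-action');
      if (action && typeof this[action] === 'function') {
        this[action](event);
      }
    });
  }
}

// A feature view only declares the handlers it needs; the
// lookup and wiring live in the base view.
class UserView extends BaseView {
  remove(event) {
    console.log('removing user...');
  }

  save(event) {
    console.log('saving user...');
  }
}

new UserView(document.getElementById('user'));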
Mapping features to components

Now that we have a handle on the most common JavaScript components, and the ways we'll want to extend them for use in our application, it's time to think about how to glue those components together. A router on its own isn't very useful. Nor is a standalone model, template, or controller. Instead, we want these things to work together, to form a cohesive unit that realizes a feature in our application. To do that, we have to map our features to components. We can't do this haphazardly, either; we need to think about what's generic about our feature, and about what makes it unique. These feature properties will guide our design decisions in producing something that scales.

Generic features

Perhaps the most important aspects of component composition are consistency and reusability. While considering the scaling influences our application faces, we'll come up with a list of traits that all our components must carry: things like user management, access control, and other traits unique to our application. Along with the other architectural perspectives (explored in more depth throughout the remainder of the book), these form the core of our generic features:

A generic component, composed of other generic components from our framework.

The generic aspects of every feature in our application serve as a blueprint. They inform us in composing larger building blocks. These generic features account for the architectural factors that help us scale. And if we can encode these factors as parts of an aggregate component, we'll have an easier time scaling our application. What makes this design task challenging is that we have to look at these generic components not only from a scalable-architecture perspective, but also from a feature-complete perspective.

As much as we'd like to think otherwise, having every feature behave the same way doesn't mean we're all set. If only every feature followed an identical pattern, the sky would be the limit when it comes time to scale. But 100% consistent feature functionality is an illusion, more visible to JavaScript programmers than to users. The pattern breaks out of necessity. It's responding to this breakage in a scalable way that matters. This is why successful JavaScript applications continuously revisit the generic aspects of their features to ensure they reflect reality.

Specific features

When it's time to implement something that doesn't fit the pattern, we're faced with a scaling challenge. We have to pivot, and consider the consequences of introducing such a feature into our architecture. When patterns are broken, our architecture needs to change. This isn't a bad thing; it's a necessity. The limiting factor in our ability to scale in response to these new features lies with the generic aspects of our existing features. This means that we can't be too rigid with our generic feature components. If we're too demanding, we're setting ourselves up for failure. Before making any brash architectural decisions stemming from offbeat features, think about the specific scaling consequences. For example, does it really matter that the new feature uses a different layout and requires a template that's different from all other feature components?

The state of the JavaScript scaling art revolves around finding the handful of essential blueprints to follow for our component composition. Everything else is up for discussion on how to proceed.

Decomposing components

Component composition is an activity that creates order: larger behavior out of smaller parts. We often need to move in the opposite direction during development. Even after development, we can learn how a component works by tearing the code apart and watching it run in different contexts. Component decomposition means that we're able to take the system apart and examine individual parts in a somewhat structured approach.

Maintaining and debugging components

Over the course of application development, our components accumulate abstractions. We do this to better support a feature's requirements, while simultaneously supporting some architectural property that helps us scale. The problem is that as the abstractions accumulate, we lose transparency into the functioning of our components. Transparency is not only essential for diagnosing and fixing issues, but also determines how easy the code is to learn. For example, if there's a lot of indirection, it takes longer for a programmer to trace cause to effect. Time wasted on tracing code reduces our ability to scale from a development point of view.

We're faced with two opposing problems. First, we need abstractions to address real-world feature requirements and architectural constraints. Second, our inability to master our own code due to a lack of transparency hurts us. Following is an example that shows a renderer component and a feature component. Renderers used by the feature are easily substitutable:

// A Renderer instance takes a renderer function
// as an argument. The render() method returns the
// result of calling the function.
class Renderer {
  constructor(renderer) {
    this.renderer = renderer;
  }

  render() {
    return this.renderer ? this.renderer(this) : '';
  }
}

// A feature defines an output pattern. It accepts
// header, content, and footer arguments. These are
// Renderer instances.
class Feature {
  constructor(header, content, footer) {
    this.header = header;
    this.content = content;
    this.footer = footer;
  }

  // Renders the sections of the view. Each section
  // either has a renderer, or it doesn't. Either way,
  // content is returned.
  render() {
    var header = this.header ?
          `${this.header.render()}\n` : '',
        content = this.content ?
          `${this.content.render()}\n` : '',
        footer = this.footer ?
          this.footer.render() : '';

    return `${header}${content}${footer}`;
  }
}

// Constructs a new feature with renderers for three sections.
var feature = new Feature(
  new Renderer(() => { return 'Header'; }),
  new Renderer(() => { return 'Content'; }),
  new Renderer(() => { return 'Footer'; })
);

console.log(feature.render());

// Remove the header section completely, replace the footer
// section with a new renderer, and check the result.
delete feature.header;
feature.footer = new Renderer(() => { return 'Test Footer'; });

console.log(feature.render());

A tactic that can help us cope with these two opposing scaling influencers is substitutability; in particular, the ease with which one of our components, or sub-components, can be replaced with something else. This should be really easy to do. So before we go introducing layers of abstraction, we need to consider how easy it's going to be to replace a complex component with a simple one. This can help programmers learn the code, and also help with debugging. For example, if we're able to take a complex component out of the system and replace it with a dummy component, we can simplify the debugging process. If the error goes away after the component is replaced, we have found the problematic component. Otherwise, we can rule out a component and keep digging elsewhere.

Re-factoring complex components

It is, of course, easier said than done to implement substitutability with our components, especially in the face of deadlines. Once it becomes impractical to easily replace components with others, it's time to consider re-factoring our code, or at least the parts that make substitutability infeasible. It's a balancing act: getting the right level of encapsulation, and the right level of transparency.

Substitution can also be helpful at a more granular level. For example, let's say a view method is long and complex. If there are several stages during the execution of that method where we would like to run something custom, we can't. It's better to re-factor the single method into a handful of methods, each of which can be overridden.

Pluggable business logic

Not all of our business logic needs to live inside our components, encapsulated from the outside world. Instead, it would be ideal if we could write our business logic as a set of functions. In theory, this provides us with a clear separation of concerns. The components are there to deal with the specific architectural concerns that help us scale, and the business logic can be plugged into any component. In practice, excising business logic from components isn't trivial.

Extending versus configuring

There are two approaches we can take when it comes to building our components. As a starting point, we have the tools provided by our libraries and frameworks. From there, we can keep extending these tools, getting more specific as we drill deeper and deeper into our features. Alternatively, we can provide our component instances with configuration values. These instruct the component on how to behave.
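The contrast between the two approaches can be sketched as follows; the ListView component and its options are invented for illustration:

// The "configure" approach: one generic component whose
// behavior is driven by options supplied by the caller.
class ListView {
  constructor(options = {}) {
    this.sort = options.sort || ((a, b) => 0);
    this.limit = options.limit || Infinity;
  }

  render(items) {
    return items.slice().sort(this.sort).slice(0, this.limit);
  }
}

// The "extend" approach: a specific component that bakes the
// same decisions in, so callers don't have to know about them.
class RecentItemsView extends ListView {
  constructor() {
    super({ sort: (a, b) => b.date - a.date, limit: 5 });
  }
}

const items = [
  { name: 'a', date: 3 },
  { name: 'b', date: 1 },
  { name: 'c', date: 2 }
];

// The caller of the generic component carries the complexity...
console.log(new ListView({ sort: (a, b) => b.date - a.date }).render(items));
// ...while the caller of the specific component does not.
console.log(new RecentItemsView().render(items));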
The advantage of extending things that would otherwise need to be configured is that the caller doesn't need to worry about them. And if we can get by using this approach, all the better, because it leads to simpler code, especially the code that's using the component. On the other hand, we could have generic feature components that can be used for several specific purposes, if only they supported this or that configuration option. This approach has the advantage of simpler component hierarchies, and fewer components overall.

Sometimes it's better to keep components as generic as possible, within the realm of understandability. That way, when we need a generic component for a specific feature, we can use it without having to redefine our hierarchy. Of course, there's more complexity involved for the caller of that component, because they need to supply it with the configuration values. It's a trade-off that's up to us, the JavaScript architects of our application. Do we want to encapsulate everything, configure everything, or strike a balance between the two?

Stateless business logic

With functional programming, functions don't have side effects. In some languages, this property is enforced; in JavaScript, it isn't. However, we can still implement side-effect-free functions in JavaScript. If a function takes arguments, and always returns the same output based on those arguments, then the function can be said to be stateless. It doesn't depend on the state of a component, and it doesn't change the state of a component. It just computes a value.

If we can establish a library of business logic that's implemented this way, we can design some super-flexible components. Rather than implementing this logic directly in a component, we pass the behavior into the component. That way, different components can utilize the same stateless business logic functions; a short sketch of this follows this section.

The tricky part is finding the right functions that can be implemented this way. It's not a good idea to implement these up-front. Instead, as the iterations of our application development progress, we can use this strategy to re-factor code into generic stateless functions that are shared by any component capable of using them. This leads to business logic that's implemented in a focused way, and components that are small, generic, and reusable in a variety of contexts.
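Here's a minimal sketch of the idea; the discount rule and the Cart component are hypothetical:

// Stateless business logic: same arguments in, same value out.
// Nothing here reads or writes component state.
const applyDiscount = (total, rate) => total * (1 - rate);

// A component that receives behavior instead of implementing it.
class Cart {
  constructor(pricing) {
    this.pricing = pricing;
    this.items = [];
  }

  add(price) {
    this.items.push(price);
  }

  total(rate) {
    const sum = this.items.reduce((a, b) => a + b, 0);
    return this.pricing(sum, rate);
  }
}

// Any component can share the same pricing function.
const cart = new Cart(applyDiscount);
cart.add(100);
cart.add(50);
console.log(cart.total(0.1)); // 135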
Organizing component code

In addition to composing our components in a way that helps our application scale, we need to consider the structure of our source code modules too. When we first start off with a given project, our source code files tend to map well to what's running in the client's browser. Over time, as we accumulate more features and components, earlier decisions on how to organize our source tree can dilute this strong mapping. When tracing runtime behavior to our source code, the less mental effort involved, the better. We can scale to more stable features this way, because our efforts are focused on the design problems of the day, the things that directly provide customer value:

The diagram shows the mapping of component parts to their implementation artifacts.

There's another dimension to code organization in the context of our architecture, and that's our ability to isolate specific code. We should treat our code just like our runtime components, which are self-sustained units that we can turn on or off. That is, we should be able to find all the source code files required for a given component without having to hunt them down. If a component requires, say, 10 source code files (JavaScript, HTML, and CSS), then ideally these should all be found in the same directory. The exception, of course, is generic base functionality that's shared by all components. This should be as close to the surface as possible. Then it's easy to trace our component dependencies; they will all point to the top of the hierarchy. It's a challenge to scale the dependency graph when our component dependencies are all over the place.

Summary

This article introduced us to the concept of component composition. Components are the building blocks of a scalable JavaScript application. The common components we're likely to encounter include things like modules, models/collections, controllers/views, and templates. While these patterns help us achieve a level of consistency, they're not enough on their own to make our code work well under various scaling influencers. This is why we need to extend these components, providing our own generic implementations that specific features of our application can further extend and use.

Depending on the various scaling factors our application encounters, different approaches may be taken in getting generic functionality into our components. One approach is to keep extending the component hierarchy, and keep everything encapsulated and hidden away from the outside world. Another approach is to plug logic and properties into components when they're created. The cost is more complexity for the code that's using the components.

We ended the article with a look at how we might go about organizing our source code, so that its structure better reflects that of our logical component design. This helps us scale our development effort, and helps isolate one component's code from others'. It's one thing to have well-crafted components that stand by themselves. It's quite another to implement scalable component communication.

For more information, refer to:

https://www.packtpub.com/web-development/javascript-and-json-essentials
https://www.packtpub.com/application-development/learning-javascript-data-structures-and-algorithms

Resources for Article:

Further resources on this subject:

Welcome to JavaScript in the full stack [Article]
Components of PrimeFaces Extensions [Article]
Unlocking the JavaScript Core [Article]
Creating Identity and Resource Pools in Cisco Unified Computing System

Packt
24 Dec 2013
7 min read
Computers and their various peripherals have some unique identities such as Universally Unique Identifiers (UUIDs), Media Access Control (MAC) addresses of Network Interface Cards (NICs), World Wide Node Numbers (WWNNs) for Host Bus Adapters (HBAs), and others. These identities are used to uniquely identify a computer system in a network. For traditional computers and peripherals, these identities were burned into the hardware and, hence, couldn't be altered easily. Operating systems and some applications rely on these identities and may fail if these identities are changed. In case of a full computer system failure, or the failure of a computer peripheral with a unique identity, administrators have to follow cumbersome firmware upgrade procedures to replicate the identities of the failed components on the replacement components.

The Unified Computing System (UCS) platform introduced the idea of creating identity and resource pools in the UCS Manager (UCSM) to abstract the compute node identities, instead of using the hardware burned-in identities.

In this article, we'll discuss the different pools you can create during UCS deployments and server provisioning. We'll start by looking at what pools are, and then discuss the different types of pools and show how to configure each of them.

Understanding identity and resource pools

The salient feature of the Cisco UCS platform is stateless computing. In the Cisco UCS platform, none of the computer peripherals consume the hardware burned-in identities. Rather, all the unique characteristics are extracted from identity and resource pools, which reside on the Fabric Interconnects (FIs) and are managed using UCSM. These resource and identity pools are defined in an XML format, which makes them extremely portable and easily modifiable.

UCS computers and peripherals extract these identities from UCSM in the form of a service profile. A service profile has all the server identities, including UUIDs, MACs, WWNNs, firmware versions, BIOS settings, and other server settings. A service profile is associated with the physical server using a customized Linux OS that assigns all the settings in a service profile to the physical server. In case of server failure, the failed server needs to be removed and the replacement server has to be associated with the existing service profile of the failed server. In this service profile association process, the new server will automatically pick up all the identities of the failed server, and the operating system or applications dependent upon these identities will not observe any change in the hardware. In case of peripheral failure, the replacement peripheral will automatically acquire the identities of the failed component. This greatly improves the time required to recover a system in case of a failure.

Using service profiles with the identity and resource pools also greatly improves the server provisioning effort. A service profile with all the settings can be prepared in advance while an administrator is waiting for the delivery of the physical server. The administrator can create service profile templates that can be used to create hundreds of service profiles; these profiles can be associated with the physical servers with the same hardware specifications. Creating a server template is highly recommended, as this greatly reduces the time for server provisioning. This is because a template can be created once and used for any number of physical servers with the same hardware.
Server identity and resource pools are created using the UCSM. In order to organize them better, it is possible to define as many pools as are needed in each category. Keep in mind that each defined resource will consume space in the UCSM database. It is, therefore, a best practice to create identity and resource pool ranges based on current and near-future assessments.

For larger deployments, it is best practice to define a hierarchy of resources in the UCSM based on geographical, departmental, or other criteria; for example, a hierarchy can be defined based on different departments. This hierarchy is defined as an organization, and the resource pools can be created for each organizational unit. In the UCSM, the main organization unit is root, and further suborganizations can be defined under this organization. The only consideration to be kept in mind is that pools defined under one organizational unit can't be migrated to other organizational units unless they are deleted first and then created again where required. The following diagram shows how identity and resource pools provide unique features to a stateless blade server and components such as the mezzanine card:

Learning to create a UUID pool

UUID is a 128-bit number assigned to every compute node on a network to identify the compute node globally. A UUID is denoted as 32 hexadecimal digits. In the Cisco UCSM, a server UUID can be generated using the UUID suffix pool. The UCSM software generates a unique prefix to ensure that the generated compute node UUID is unique. Operating systems, including hypervisors, and some applications may leverage UUID number binding. The UUIDs generated with a resource pool are portable. In case of a catastrophic failure of the compute node, the pooled UUID assigned through a service profile can be easily transferred to a replacement compute node without going through complex firmware upgrades.

Following are the steps to create UUIDs for the blade servers:

1. Log in to the UCSM screen.
2. Click on the Servers tab in the navigation pane.
3. Click on the Pools tab and expand root.
4. Right-click on UUID Suffix Pools and click on Create UUID Suffix Pool as shown in the following screenshot:
5. In the pop-up window, assign the Name and Description values to the UUID pool. Leave the Prefix value as Derived to make sure that UCSM makes the prefix unique. The selection of Assignment Order as Default is random; select Sequential to assign the UUIDs sequentially. Click on Next as shown in the following screenshot:
6. Click on Add in the next screen. In the pop-up window, change the value for Size to create the desired number of UUIDs. Click on OK and then on Finish in the previous screen as shown in the following screenshot:
7. In order to verify the UUID suffix pool, click on the UUID Suffix Pools tab in the navigation pane and then on the UUID Suffixes tab in the work pane as shown in the following screenshot:

Learning to create a MAC pool

MAC is a 48-bit address assigned to the network interface for communication in the physical network. MAC address pools make server provisioning easier by providing scalable NIC configurations before the actual deployment.

Following are the steps to create MAC pools:

1. Log in to the UCSM screen.
2. Click on the LAN tab in the navigation pane.
3. Click on the Pools tab and expand root.
4. Right-click on MAC Pools and click on Create MAC Pool as shown in the following screenshot:
5. In the pop-up window, assign the Name and Description values to the MAC pool.
The selection of Default as the Assignment Order value is random; select Sequential to assign the MAC addresses sequentially. Click on Next as shown in the following screenshot:
6. Click on Add in the next screen. In the pop-up window, change Size to create the desired number of MAC addresses. Click on OK and then on Finish in the previous screen as shown in the following screenshot:
7. In order to verify the MAC pool, click on the MAC Pools tab in the navigation pane and then on the MAC Addresses tab in the work pane as shown in the following screenshot:
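To illustrate what the Size value produces behind the scenes, here's a small sketch (in JavaScript, purely illustrative; UCSM is not scripted this way) that expands a sequential MAC pool from a starting address. The 00:25:B5 prefix is commonly seen in UCS deployments, but any starting address could be configured:

// Expands a sequential MAC pool: a starting address plus a
// size yields the block of addresses the pool would hand out.
function expandMacPool(start, size) {
  // Parse "00:25:B5:00:00:00" into a 48-bit number.
  const base = parseInt(start.split(':').join(''), 16);
  const pool = [];
  for (let i = 0; i < size; i++) {
    const hex = (base + i).toString(16).padStart(12, '0');
    pool.push(hex.match(/.{2}/g).join(':').toUpperCase());
  }
  return pool;
}

console.log(expandMacPool('00:25:B5:00:00:00', 4));
// [ '00:25:B5:00:00:00', '00:25:B5:00:00:01',
//   '00:25:B5:00:00:02', '00:25:B5:00:00:03' ]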
Your First Page with PHP-Nuke

Packt
24 Feb 2010
12 min read
We're going to look at our new homepage, and from there move on to some of the main concepts of PHP-Nuke: blocks, modules, themes, and site security. Along the way, we're going to create the super user, a user with absolute power over our site; we will edit our first piece of content in PHP-Nuke, and begin the construction of the Dinosaur Portal.

Your New Homepage

Navigate to your site's homepage in your browser. For our newly installed PHP-Nuke site, this will be http://localhost/nuke/. You should be presented with the following screen, which we saw at the end of the last article:

Considering that we've not really done anything, this is impressive. I'm sure you won't be able to resist clicking on some of these links and seeing what PHP-Nuke has in store for us. Currently, the system is 'empty', so it has a rather cold and eerie feeling about it. Rest assured that it will start to warm up over the next few articles as we add content to the site. By the way, if you are impressed with the features you're seeing right now, let me tell you that there are others that haven't yet been activated. Also, there are many other add-ons to be found on various PHP-Nuke resource sites across the Internet.

Let's now talk about some of the PHP-Nuke bits that we see on the front page. First of all, there's the look of the page. There is the banner at the top, a site logo, and a horizontal navigation bar:

The page 'body' begins below the navigation bar. You can see a three-column layout with a big chunk of information in the middle column. The page layout of a PHP-Nuke site need not always look like this; the arrangement of the elements, the choice of color, text styles, and images is controlled by the theme. A different theme can be selected for the site, and immediately the look and feel of your site is changed.

Blocks

The elements that you see in the left- and right-hand columns are known as blocks:

Blocks in PHP-Nuke are little nuggets of information positioned at the sides, or sometimes at the bottom, of a page. They often provide 'navigation', linking to other parts of the site, and provide a report or summary of the content that is available either on your site or, possibly, on another site. Typically, many blocks are displayed on a single page. An important block is the Modules block in the left-hand column:

This block shows a list of the active modules on your site, and is the standard navigational element of a typical PHP-Nuke site. Each entry in the above list is a link to a module on your site, and by clicking on the links the visitor is able to move between the modules.

Modules

PHP-Nuke is a modular system. Each module is like a mini website in itself, performing different tasks and working with different types of content. The PHP-Nuke 'core' provides a central mechanism for handling these modules, so that they work together, sharing data and user information, and ensuring a consistent look and operation throughout your site. In short, the modules define your site. The good thing with PHP-Nuke is that you can add and remove modules as needed, selecting the best range of features to suit your site and its visitors. We will discuss the standard PHP-Nuke modules over the next few articles.

When viewing a page on a PHP-Nuke site, the module currently in play can be identified by looking at the URL of that page. For example, if you are looking at the Downloads module, the URL will be something like this:

http://localhost/nuke/modules.php?name=Downloads
The part of the URL after the ? character is the query string. The query string contains variables that are separated by the & character. In the above URL, the query string contains a single variable, name, which has the value Downloads. PHP-Nuke switches between modules according to the value specified in the name variable. The other query string variables determine what else is to be displayed on that page, such as the required news story, for example. (Handling these query string variables appropriately has traditionally been a security weakness in PHP-Nuke, but that is true for many other web applications.)

The output of the module currently being viewed is displayed in the middle column of the web page.

A Fistful of Default Modules

Let's have a quick overview of what some of the standard modules offer:

Home: Shows the homepage of the site. There isn't actually a Home module, but some particular module is associated with the homepage. The homepage actually has the URL index.php, rather than modules.php?name=XXXX.

Downloads and Web Links: Allow you to create and maintain categorized lists of downloadable resources or links to other sites. Possibly you have already seen the Downloads module in action when you downloaded PHP-Nuke itself from a PHP-Nuke powered site. This is another 'interactive' module; visitors can submit their own downloadable resources or links here.

Recommend Us: Allows the visitor on your site to send a message to their friends suggesting that they come and visit your site.

Search: Allows the visitor to search the contents of your site.

Statistics: Provides site statistics like the number of visits to your site, the different browsers used by visitors, and the most-viewed stories on your site.

Stories Archive: Contains an archive of past stories that have appeared on the site, arranged by month of publication.

Submit News: Allows visitors to submit a news story to the site through a form, after which the story goes straight onto the site, provided it is acceptable. The story is then said to be published.

Surveys: Displays the results of polls that have appeared on the site. Polls can be attached to stories and other pieces of content.

Topics: Provides a different view of the stories, this time arranged by their topic.

Your Account: Allows visitors to your site to register and create their own accounts. All visitors that register at your site can have their own area, which is accessed through this module. They can customize their own area, including their own Journal.

That's not even all of the modules, but it's enough to give you an idea of the breadth of the functionality that PHP-Nuke offers and the kind of experience that your visitors can look forward to.

Coming back to the homepage, have a look at the message in the middle that says:

For security reasons the best idea is to create the Super User right NOW by clicking HERE

It's not every day that we're invited to create a super user, so I think we should get on with that, especially as the word NOW is in upper case; that always suggests a sense of urgency. Clicking on the word HERE in that message will take you to the page http://localhost/nuke/admin.php, and we can begin creating our super user.

Creating the Super User

PHP-Nuke enables visitors to your site to create their own user account, and add and maintain their own personal details. The user account is required to identify them for posting news stories, making comments, or contributing to discussions in the forums, among other activities.
By registering on the site and creating a user account, visitors are given greater freedom on the site. However, their freedom has limits. We are about to create a special type of user, the super user. This is a registered user of the site who has almost total freedom on the site and absolute power over it. The super user can access, add, remove, and modify any part of the site, and can configure and control anything on the site. Given the nature of this power, there comes the obvious responsibility of ensuring that the identity of this user is kept a secret. Anyone obtaining these account details will be able to do almost anything to your site, and that could be worse than it sounds, so you must ensure that these details do not fall into the wrong hands.

The super user is a site administrator, in fact, the site administrator. We will use the terms administrator and super user interchangeably. It is also possible to create other, less powerful, site administrators who can manage various parts of the site, such as approving bits of content submitted by visitors.

We shall now create the super user account. As with any user account in PHP-Nuke, it will consist of a username ('nickname', as it is also known in PHP-Nuke) and a password. On the page http://localhost/nuke/admin.php, you will be presented with a form asking you to choose a super user Nickname, the HomePage of that user, a contact Email address, and a Password. The password should only contain alphanumeric characters (letters and numbers). This is how the form looks:

The super user account is not the only type of user account that can be created with PHP-Nuke. Visitors to your site can register and create their own user accounts, which makes them Registered Users of your site. When creating the super user, there is an option to create a registered user with the same details, although obviously that user doesn't have the extended power of the super user. This does mean that when you log in with this administrator account, you will enjoy all the personalization benefits of the standard user account.

We will create the nickname and password for the super user account now. Do not use nicknames like admin, super user, or root for the super user; these would be the first guess of any miscreant attempting to break into your system. Also, make your password difficult to guess; make it long, with a mixture of digits and letters, both upper and lowercase (definitely do not use the word password as your password!). Making the password secure is another vital step toward the overall security of your site.

In the page, we will enter dinoportmeister for the nickname, and use the password Pa2112cktXog. You can enter your own nickname and password here if you like, but make sure you remember them! Your email address needs to go into the Email field; this is another required field. The HomePage field does not have to correspond to the address of this site; this is for informational purposes only. The option to create a normal user with the same data will do just that: it will create a user with the same username and password as the administrator account. However, the two accounts are distinct, and changing the password for either account will not affect the other. Click Submit and the super user is created.

Becoming the Administrator

After you have created the details for the super user, you still have to log yourself in with these details. On the admin.php page, you will find a form for entering the administrator username and password.
Hopefully you haven't forgotten them already! After entering the details here, click the Login button and you will pass over to the other side: the administration area of the site.

The admin.php page is where you need to log in to access the administration area. Whenever you want to log in as an administrator to perform some site maintenance, you do so from this page. Logging in from any other place on the site will log you in 'normally', as if you were a standard visitor to the site, even if the administrator username and password are accepted.

If you think about it, this suggests that, unless it has been specially customized, any PHP-Nuke site has an administrator login page at admin.php. This means that anyone intent on accessing the administrator area of that site does not have to look far to find the administrator login (of course, getting the right username and password combination is another matter). To counter this, from PHP-Nuke 7.6 onwards, if you want to rename the admin.php file, you can do so by storing the new name of the file in the $admin_file variable in the config.php file. This relocates your administrator login page.

Once you have entered the administrator username and password, you will get your first taste of the administration area:

That might be more than you were expecting. We are presented with two towering graphical menus, the Administration Menu and the Modules Administration menu, the main navigation tools for the site administrator. (In versions of PHP-Nuke earlier than 7.5, these menus were one: the Administration Menu.) We'll dig into more detail about these menus in the next few articles. This is the place where you will spend most of your PHP-Nuke life, so you will need to get comfortable with it. Before we go any further, click the Home link in the Modules block to return to the homepage of your site.

A New Welcome

When you return to the homepage, you will notice that some extra text has appeared at the bottom of the welcome message:

[ View: All Visitors - Unlimited - Edit ]

This text is evidence of the super user's extra powers. If you click on the Edit link, you can begin changing the site. The presence of the Edit link is an example of 'in-position' editing, whereby, as you browse the site, you can quickly edit or delete the content you see. This link is not available to normal users of the site, and is a pretty neat feature of PHP-Nuke. When you click the Edit link, you will be taken back to the administration area.
Importing Structure and Data Using phpMyAdmin

Packt
12 Oct 2009
9 min read
A feature was added in version 2.11.0: an import file may contain the DELIMITER keyword. This enables phpMyAdmin to mimic the mysql command-line interpreter. The DELIMITER separator is used to delineate the part of the file containing a stored procedure, as these procedures can themselves contain semicolons.

The default values for the Import interface are defined in $cfg['Import']. Before examining the actual import dialog, let's discuss some issues around limits.

Limits for the transfer

When we import, the source file is usually on our client machine, so it must travel to the server via HTTP. This transfer takes time and uses resources that may be limited in the web server's PHP configuration. Instead of using HTTP, we can upload our file to the server using a protocol such as FTP, as described in the Web Server Upload Directories section. This method circumvents the web server's PHP upload limits.

Time limits

First, let's consider the time limit. In config.inc.php, the $cfg['ExecTimeLimit'] configuration directive assigns, by default, a maximum execution time of 300 seconds (five minutes) for any phpMyAdmin script, including the scripts that process data after the file has been uploaded. A value of 0 removes the limit and, in theory, gives us infinite time to complete the import operation. If the PHP server is running in safe mode, modifying $cfg['ExecTimeLimit'] will have no effect. This is because the limits set in php.ini or in a user-related web server configuration file (such as .htaccess or virtual host configuration files) take precedence over this parameter.

Of course, the time it effectively takes depends on two key factors: the web server load and the MySQL server load. The time taken by the file as it travels between the client and the server does not count as execution time, because the PHP script starts to execute only once the file has been received on the server. Therefore, the $cfg['ExecTimeLimit'] parameter has an impact only on the time used to process data (like decompression or sending it to the MySQL server).

Other limits

The system administrator can use the php.ini file or the web server's virtual host configuration file to control uploads on the server. The upload_max_filesize parameter specifies the upper limit, or the maximum file size that can be uploaded via HTTP. This one is obvious, but another less obvious parameter is post_max_size. As HTTP uploading is done via the POST method, this parameter may limit our transfers. For more details about the POST method, please refer to http://en.wikipedia.org/wiki/Http#Request_methods.

The memory_limit parameter is provided to prevent web server child processes from grabbing too much of the server memory; phpMyAdmin also runs as a child process. Thus, the handling of normal file uploads, especially compressed dumps, can be compromised by giving this parameter a small value. Here, no preferred value can be recommended; the value depends on the size of the uploaded data. The memory limit can also be tuned via the $cfg['MemoryLimit'] parameter in config.inc.php.

Finally, file uploads must be allowed by setting file_uploads to On. Otherwise, phpMyAdmin won't even show the Location of the text file dialog. It would be useless to display this dialog, as the connection would be refused later by the PHP component of the web server.
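As an illustration, a php.ini fragment raising these limits might look like the following; the values shown are arbitrary examples, and the right ones depend on the size of the dumps being imported:

; Allow larger import files to be uploaded via HTTP.
upload_max_filesize = 64M
; POST requests carry the upload, so this must be at least as large.
post_max_size = 64M
; Give the import script room to decompress and process the dump.
memory_limit = 128M
; File uploads must be enabled for the import dialog to appear.
file_uploads = On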
Partial imports

If the file is too big, there are ways in which we can resolve the situation. If we still have access to the original data, we could use phpMyAdmin to generate smaller CSV export files, choosing the Dump n rows starting at record # n dialog. If this is not possible, we will have to use a text editor to split the file into smaller sections. Another possibility is to use the upload directory mechanism, which accesses the directory defined in $cfg['UploadDir']. This feature is explained later in this article.

In recent phpMyAdmin versions, the Partial import feature can also solve this file size problem. By selecting the Allow interrupt… checkbox, the import process will interrupt itself if it detects that it is close to the time limit. We can also specify a number of queries to skip from the start, in case we have successfully imported a number of rows and wish to continue from that point.

Temporary directory

On some servers, a security feature called open_basedir can be set up in a way that impedes the upload mechanism. In this case, or for any other reason when uploads are problematic, the $cfg['TempDir'] parameter can be set with the value of a temporary directory. This is probably a subdirectory of phpMyAdmin's main directory, into which the web server is allowed to put the uploaded file.

Importing SQL files

Any file containing MySQL statements can be imported via this mechanism. The dialog is available in the Database view or the Table view, via the Import subpage, or in the Query window. There is no relation between the currently selected table (here, author) and the actual contents of the SQL file that will be imported. All the contents of the SQL file will be imported, and it is those contents that determine which tables or databases are affected. However, if the imported file does not contain any SQL statements to select a database, all statements in the imported file will be executed on the currently selected database.

Let's try an import exercise. First, we make sure that we have a current SQL export of the book table. This export file must contain both the structure and the data. Then we drop the book table (yes, really!). We could also simply rename it.

Now it is time to import the file back. We should be on the Import subpage, where we can see the Location of the text file dialog. We just have to hit the Browse button and choose our file. phpMyAdmin is able to detect which compression method (if any) has been applied to the file. Depending on the phpMyAdmin version, and the extensions that are available in the PHP component of the web server, there is variation in the formats that the program can decompress.

However, to import successfully, phpMyAdmin must be informed of the character set of the file to be imported. The default value is utf8. However, if we know that the import file was created with another character set, we should specify it here. An SQL compatibility mode selector is available at import time. This mode should be adjusted to match the actual data that we are about to import, according to the type of the server where the data was previously exported.

To start the import, we click Go. The import procedure continues and we receive a message: Import has been successfully finished, 2 queries executed. We can browse our newly created tables to confirm the success of the import operation. The file could be imported for testing in a different database, or even on another MySQL server.
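For reference, the kind of export file being imported here might look something like the following; the exact column definitions are an assumption, since the book table's structure isn't shown in this excerpt:

-- Structure for table `book`
CREATE TABLE `book` (
  `isbn` varchar(25) NOT NULL,
  `title` varchar(100) NOT NULL,
  `author_id` int(11) NOT NULL,
  PRIMARY KEY (`isbn`)
);

-- Data for table `book`
INSERT INTO `book` VALUES
  ('1-234567-89-0', 'A Hundred Years of Cinema (volume 1)', 1),
  ('1-234567-22-0', 'Future Souls', 2);

A dump like this contains exactly two statements, which is consistent with the "2 queries executed" message above.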
Importing CSV files

In this section, we will examine how to import CSV files. There are two possible methods: CSV, and CSV using LOAD DATA. The first method is implemented internally by phpMyAdmin and is the recommended one for its simplicity. With the second method, phpMyAdmin receives the file to be loaded and passes it to MySQL. In theory, this method should be faster. However, it has more requirements due to MySQL itself (see the Requirements sub-section of the CSV using LOAD DATA section).

Differences between SQL and CSV formats

There are some differences between these two formats. The CSV file format contains data only, so we must already have an existing table in place. This table does not need to have the same structure as the original table (from which the data comes); the Column names dialog enables us to choose which columns are affected in the target table. Because the table must exist prior to the import, the CSV import dialog is available only from the Import subpage in the Table view, and not in the Database view.

Exporting a test file

Before trying an import, let's generate an author.csv export file from the author table. We use the default values in the CSV export options. We can then Empty the author table; we should avoid dropping this table, because we still need the table structure.

CSV

From the author table menu, we select Import and then CSV. We can influence the behavior of the import in a number of ways. By default, importing does not modify existing data (based on primary or unique keys). However, the Replace table data with file option instructs phpMyAdmin to use a REPLACE statement instead of an INSERT statement, so that existing rows are replaced with the imported data. Using Ignore duplicate rows, INSERT IGNORE statements are generated. These cause MySQL to ignore any duplicate key problems during insertion. A duplicate key from the import file does not replace existing data, and the procedure continues for the next line of CSV data.

We can then specify the character that terminates each field, the character that encloses data, and the character that escapes the enclosing character. Usually this is \. For example, with a double quote enclosing character, if the data field contains a double quote, it must be expressed as "some data \" some other data".

For Lines terminated by, recent versions of phpMyAdmin offer the auto choice, which should be tried first, as it detects the end-of-line character automatically. We can also specify manually which characters terminate the lines. The usual choice is \n for UNIX-based systems, \r\n for DOS or Windows systems, and \r for Mac-based systems (up to Mac OS 9). If in doubt, we can use a hexadecimal file editor on our client computer (not part of phpMyAdmin) to examine the exact codes.

By default, phpMyAdmin expects a CSV file with the same number of fields and the same field order as the target table. But this can be changed by entering a comma-separated list of column names in Column names, respecting the source file format. For example, let's say our source file contains only the author ID and the author name information:

"1","John Smith"
"2","Maria Sunshine"

We'd have to put id, name in Column names to match the source file.

When we click Go, the import is executed and we get a confirmation. We might also see the actual INSERT queries generated, if the total size of the file is not too big:

Import has been successfully finished, 2 queries executed.

INSERT INTO `author` VALUES ('1', 'John Smith', '+01 445 789-1234')
# 1 row(s) affected.
INSERT INTO `author` VALUES ('2', 'Maria Sunshine', '333-3333')
# 1 row(s) affected.