
How-To Tutorials - Design Patterns

32 Articles

Exploring the Strategy Behavioral Design Pattern in Node.js

Expert Network
02 Jun 2021
10 min read
A design pattern is a reusable solution to a recurring problem. The term is really broad in its definition and can span multiple domains of an application. However, it is often associated with a well-known set of object-oriented patterns that were popularized in the 90s by the book Design Patterns: Elements of Reusable Object-Oriented Software, Pearson Education, written by the almost legendary Gang of Four (GoF): Erich Gamma, Richard Helm, Ralph Johnson, and John Vlissides.

This article is an excerpt from the book Node.js Design Patterns, Third Edition by Mario Casciaro and Luciano Mammino, a comprehensive guide for learning proven patterns, techniques, and tricks to take full advantage of the Node.js platform.

In this article, we'll look at the behavior of components in software design. We'll learn how to combine objects and how to define the way they communicate so that the behavior of the resulting structure becomes extensible, modular, reusable, and adaptable. After introducing all the behavioral design patterns, we will dive deep into the details of the Strategy pattern. Now, it's time to roll up your sleeves and get your hands dirty with some behavioral design patterns.

Types of Behavioral Design Patterns

- The Strategy pattern allows us to extract the common parts of a family of closely related components into a component called the context, and lets us define strategy objects that the context can use to implement specific behaviors.
- The State pattern is a variation of the Strategy pattern where the strategies are used to model the behavior of a component under different states.
- The Template pattern, instead, can be considered the "static" version of the Strategy pattern, where the different specific behaviors are implemented as subclasses of the template class, which models the common parts of the algorithm.
- The Iterator pattern provides us with a common interface to iterate over a collection. It has now become a core pattern in Node.js, and JavaScript offers native support for it (with the iterator and iterable protocols). Iterators can be used as an alternative to complex async iteration patterns and even to Node.js streams.
- The Middleware pattern allows us to define a modular chain of processing steps. This is a very distinctive pattern born from within the Node.js ecosystem, and it can be used to preprocess and postprocess data and requests.
- The Command pattern materializes the information required to execute a routine, allowing such information to be easily transferred, stored, and processed.

The Strategy Pattern

The Strategy pattern enables an object, called the context, to support variations in its logic by extracting the variable parts into separate, interchangeable objects called strategies. The context implements the common logic of a family of algorithms, while a strategy implements the mutable parts, allowing the context to adapt its behavior depending on different factors, such as an input value, a system configuration, or user preferences. Strategies are usually part of a family of solutions, and all of them implement the same interface expected by the context. The following figure shows the situation we just described:

Figure 1: General structure of the Strategy pattern

Figure 1 shows how the context object can plug different strategies into its structure as if they were replaceable parts of a piece of machinery. Imagine a car; its tires can be considered its strategy for adapting to different road conditions.
We can fit winter tires to go on snowy roads thanks to their studs, while we can decide to fit high-performance tires for traveling mainly on motorways for a long trip. On the one hand, we don't want to change the entire car for this to be possible, and on the other, we don't want a car with eight wheels so that it can go on every possible road.

The Strategy pattern is particularly useful in all those situations where supporting variations in the behavior of a component requires complex conditional logic (lots of if...else or switch statements) or mixing different components of the same family.

Imagine an object called Order that represents an online order on an e-commerce website. The object has a method called pay() that, as the name says, finalizes the order and transfers the funds from the user to the online store. To support different payment systems, we have a couple of options:

- Use an if...else statement in the pay() method to complete the operation based on the chosen payment option
- Delegate the logic of the payment to a strategy object that implements the logic for the specific payment gateway selected by the user (as sketched below)

In the first solution, our Order object cannot support other payment methods unless its code is modified. Also, this can become quite complex when the number of payment options grows. Instead, using the Strategy pattern enables the Order object to support a virtually unlimited number of payment methods and keeps its scope limited to only managing the details of the user, the purchased items, and the relative price, while delegating the job of completing the payment to another object.
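To make the second option concrete, here is a minimal sketch of a strategy-based Order. The class and strategy names are illustrative only and are not taken from the book's code:

```javascript
// A minimal sketch of the strategy-based Order described above.
// Names (Order, creditCardStrategy, paypalStrategy) are illustrative.
class Order {
  constructor (paymentStrategy) {
    this.paymentStrategy = paymentStrategy
    this.items = []
  }

  addItem (item) {
    this.items.push(item)
  }

  pay () {
    // The Order knows nothing about payment gateways:
    // it delegates the whole job to the strategy it was given.
    const total = this.items.reduce((sum, item) => sum + item.price, 0)
    return this.paymentStrategy.pay(total)
  }
}

// Every strategy implements the same pay(amount) interface.
const creditCardStrategy = {
  pay: amount => `Charged ${amount} to the credit card`
}
const paypalStrategy = {
  pay: amount => `Paid ${amount} through PayPal`
}

const order = new Order(paypalStrategy)
order.addItem({ name: 'Node.js Design Patterns', price: 40 })
console.log(order.pay()) // Paid 40 through PayPal
```

Supporting a new payment method now means writing one more strategy object, without touching the Order class at all.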
Let's now demonstrate this pattern with a simple, realistic example.

Multi-format configuration objects

Let's consider an object called Config that holds a set of configuration parameters used by an application, such as the database URL, the listening port of the server, and so on. The Config object should provide a simple interface to access these parameters, but also a way to import and export the configuration using persistent storage, such as a file. We want to be able to support different formats to store the configuration, for example, JSON, INI, or YAML. By applying what we learned about the Strategy pattern, we can immediately identify the variable part of the Config object, which is the functionality that allows us to serialize and deserialize the configuration. This is going to be our strategy.

Creating a new module

Let's create a new module called config.js, and let's define the generic part of our configuration manager:

```javascript
import { promises as fs } from 'fs'
import objectPath from 'object-path'

export class Config {
  constructor (formatStrategy) {                      // (1)
    this.data = {}
    this.formatStrategy = formatStrategy
  }

  get (configPath) {                                  // (2)
    return objectPath.get(this.data, configPath)
  }

  set (configPath, value) {                           // (2)
    return objectPath.set(this.data, configPath, value)
  }

  async load (filePath) {                             // (3)
    console.log(`Deserializing from ${filePath}`)
    this.data = this.formatStrategy.deserialize(
      await fs.readFile(filePath, 'utf-8')
    )
  }

  async save (filePath) {                             // (3)
    console.log(`Serializing to ${filePath}`)
    await fs.writeFile(filePath,
      this.formatStrategy.serialize(this.data))
  }
}
```

This is what's happening in the preceding code:

1. In the constructor, we create an instance variable called data to hold the configuration data. We also store formatStrategy, which represents the component that we will use to parse and serialize the data.
2. We provide two methods, set() and get(), to access the configuration properties using a dotted path notation (for example, property.subProperty) by leveraging a library called object-path (nodejsdp.link/object-path).
3. The load() and save() methods are where we delegate, respectively, the deserialization and serialization of the data to our strategy. This is where the logic of the Config class is altered based on the formatStrategy passed as an input in the constructor.

As we can see, this very simple and neat design allows the Config object to seamlessly support different file formats when loading and saving its data. The best part is that the logic to support those various formats is not hardcoded anywhere, so the Config class can adapt without any modification to virtually any file format, given the right strategy.

Creating format strategies

To demonstrate this characteristic, let's now create a couple of format strategies in a file called strategies.js. Let's start with a strategy for parsing and serializing data using the INI file format, which is a widely used configuration format (more info about it here: nodejsdp.link/ini-format). For the task, we will use an npm package called ini (nodejsdp.link/ini):

```javascript
import ini from 'ini'

export const iniStrategy = {
  deserialize: data => ini.parse(data),
  serialize: data => ini.stringify(data)
}
```

Nothing really complicated! Our strategy simply implements the agreed interface, so that it can be used by the Config object. Similarly, the next strategy that we are going to create allows us to support the JSON file format, widely used in JavaScript and in the web development ecosystem in general:

```javascript
export const jsonStrategy = {
  deserialize: data => JSON.parse(data),
  serialize: data => JSON.stringify(data, null, '  ')
}
```

Now, to show you how everything comes together, let's create a file named index.js, and let's try to load and save a sample configuration using different formats:

```javascript
import { Config } from './config.js'
import { jsonStrategy, iniStrategy } from './strategies.js'

async function main () {
  const iniConfig = new Config(iniStrategy)
  await iniConfig.load('samples/conf.ini')
  iniConfig.set('book.nodejs', 'design patterns')
  await iniConfig.save('samples/conf_mod.ini')

  const jsonConfig = new Config(jsonStrategy)
  await jsonConfig.load('samples/conf.json')
  jsonConfig.set('book.nodejs', 'design patterns')
  await jsonConfig.save('samples/conf_mod.json')
}

main()
```

Our test module reveals the core properties of the Strategy pattern. We defined only one Config class, which implements the common parts of our configuration manager; then, by using different strategies for serializing and deserializing data, we created different Config instances supporting different file formats.

The example we've just seen shows only one of the possible alternatives we had for selecting a strategy. Other valid approaches might have been the following:

- Creating two different strategy families: one for deserialization and the other for serialization. This would have allowed reading from one format and saving to another.
- Dynamically selecting the strategy: depending on the extension of the file provided, the Config object could have maintained a map extension → strategy and used it to select the right algorithm for the given extension (see the sketch after this list).
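A minimal sketch of that second approach could look like the following; the strategies map and the createConfig helper are hypothetical names introduced here for illustration, not part of the book's code:

```javascript
import { extname } from 'path'
import { Config } from './config.js'
import { iniStrategy, jsonStrategy } from './strategies.js'

// Hypothetical extension → strategy map.
const strategies = {
  '.ini': iniStrategy,
  '.json': jsonStrategy
}

// Hypothetical helper that picks the right strategy for a file.
function createConfig (filePath) {
  const strategy = strategies[extname(filePath)]
  if (!strategy) {
    throw new Error(`Unsupported config format: ${extname(filePath)}`)
  }
  return new Config(strategy)
}

const config = createConfig('samples/conf.json') // uses jsonStrategy
```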
As we can see, we have several options for selecting the strategy to use, and the right one depends only on your requirements and the tradeoff between features and simplicity that you want to obtain. Furthermore, the implementation of the pattern itself can vary a lot as well. For example, in its simplest form, the context and the strategy can both be simple functions:

```javascript
function context (strategy) { ... }
```

Even though this may seem insignificant, it should not be underestimated in a programming language such as JavaScript, where functions are first-class citizens and used as much as fully-fledged objects. Between all these variations, though, what does not change is the idea behind the pattern; as always, the implementation can change slightly, but the core concepts that drive the pattern are always the same.

Summary

In this article, we dived deep into the details of the Strategy pattern, one of the behavioral design patterns in Node.js. Learn more in the book Node.js Design Patterns, Third Edition by Mario Casciaro and Luciano Mammino.

About the Authors

Mario Casciaro is a software engineer and entrepreneur. Mario worked at IBM for a number of years, first in Rome, then in the Dublin Software Lab. He currently splits his time between Var7 Technologies (his own software company) and his role as lead engineer at D4H Technologies, where he creates software for emergency response teams.

Luciano Mammino wrote his first line of code at the age of 12 on his father's old i386. Since then he has never stopped coding. He is currently working at FabFitFun as principal software engineer, where he builds microservices to serve millions of users every day.


What is a multi layered software architecture?

Packt Editorial Staff
17 May 2018
7 min read
Multi layered software architecture is one of the most popular architectural patterns today. It moderates the increasing complexity of modern applications and makes it easier to work in a more agile manner. That's important when you consider the dominance of DevOps and other similar methodologies today. Sometimes called tiered architecture, or n-tier architecture, a multi layered software architecture consists of various layers, each of which corresponds to a different service or integration. Because each layer is separate, making changes to one layer is easier than having to tackle the entire architecture. Let's take a look at how a multi layered software architecture works, and what its advantages and disadvantages are.

This has been taken from the book Architectural Patterns. Find it here.

What does a layered software architecture consist of?

Before we get into a multi layered architecture, let's start with the simplest form of layered architecture: three tiered architecture. This is a good place to start because all layered software architecture contains these three elements. These are the foundations:

- Presentation layer: This is the first and topmost layer present in the application. This tier provides presentation services, that is, presentation of content to the end user through a GUI. It can be accessed through any type of client device, like a desktop, laptop, tablet, mobile, or thin client. For the content to be displayed to the user, the relevant web pages should be fetched by the web browser or another presentation component running on the client device. To present the content, it is essential for this tier to interact with the tiers below it.
- Application layer: This is the middle tier of this architecture, and it is where the business logic of the application runs. Business logic is the set of rules required for running the application as per the guidelines laid down by the organization. The components of this tier typically run on one or more application servers.
- Data layer: This is the lowest tier of this architecture and is mainly concerned with the storage and retrieval of application data. The application data is typically stored in a database server, file server, or any other device or media that supports data access logic, ensuring that only the data is exposed, without providing any access to the underlying storage and retrieval mechanisms. The data tier does this by providing an API to the application tier, which ensures complete transparency for the data operations done in this tier without affecting the application tier. For example, updates or upgrades to the systems in this tier do not affect the application tier of this architecture.

The diagram below shows how a simple layered architecture with 3 tiers works.

These three layers are essential. But other layers can be built on top of them. That's when we get into multi layered architecture. It's sometimes called n-tiered architecture because the number of tiers or layers (n) could be anything! It depends on what you need and how much complexity you're able to handle.

Multi layered software architecture

A multi layered software architecture still has the presentation layer and data layer. It simply splits up and expands the application layer. These additional aspects within the application layer are essentially different services.
This means your software should now be more scalable and have extra dimensions of functionality. Of course, the distribution of application code and functions among the various tiers will vary from one architectural design to another, but the concept remains the same. The diagram below illustrates what a multi layered software architecture looks like. As you can see, it's a little more complex than a three-tiered architecture, but it does increase scalability quite significantly.

What are the benefits of a layered software architecture?

A layered software architecture has a number of benefits; that's why it has become such a popular architectural pattern in recent years. Most importantly, tiered segregation allows you to manage and maintain each layer accordingly. In theory, it should greatly simplify the way you manage your software infrastructure. The multi layered approach is particularly good for developing web-scale, production-grade, and cloud-hosted applications very quickly and relatively risk-free. It also makes it easier to update any legacy systems: when your architecture is broken up into multiple layers, the changes that need to be made should be simpler and less extensive than they might otherwise have to be.

When should you use a multi layered software architecture?

Clearly, the argument for a multi layered software architecture is pretty compelling. However, there are some instances when it is particularly appropriate:

- If you are building a system in which it is possible to split the application logic into smaller components that could be spread across several servers. This could lead to the design of multiple tiers in the application tier.
- If the system under consideration requires faster network communications, high reliability, and great performance, then n-tier has the capability to provide that, as this architectural pattern is designed to reduce the overhead caused by network traffic.

An example of a multi layered software architecture

We can illustrate the working of a multi layered architecture with the help of an example: a shopping cart web application, which is present on all e-commerce sites. The shopping cart web application is used by the e-commerce site user to complete the purchase of items through the site. You'd expect the application to have several features that allow the user to:

- Add selected items to the cart
- Change the quantity of items in their cart
- Make payments

The client tier, which is present in the shopping cart application, interacts with the end user through a GUI. The client tier also interacts with the application that runs in the application servers present in multiple tiers. Since the shopping cart is a web application, the client tier contains the web browser.

The presentation tier present in the shopping cart application displays information related to services like browsing merchandise, buying items, adding them to the shopping cart, and so on. The presentation tier communicates with the other tiers by sending results to the client tier and all the other tiers present in the network. The presentation tier also makes calls to database stored procedures and web services. All these activities are done with the objective of providing a quick response time to the end user.
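As a rough illustration (the layer and function names below are invented for this sketch and are not taken from the book), the separation between tiers can be mirrored in code by letting each layer talk only to the layer beneath it:

```javascript
// Data layer: the only layer that knows how cart data is stored.
const dataLayer = {
  getCartItems: userId => [{ sku: 'BOOK-1', qty: 1, price: 40 }]
}

// Application layer: business logic such as shipping-cost calculation.
const applicationLayer = {
  getCartTotal (userId) {
    const items = dataLayer.getCartItems(userId)
    const subtotal = items.reduce((sum, i) => sum + i.qty * i.price, 0)
    const shipping = subtotal > 50 ? 0 : 5 // a business rule lives here
    return subtotal + shipping
  }
}

// Presentation layer: formats results for the end user.
const presentationLayer = {
  renderCartTotal (userId) {
    console.log(`Your total is $${applicationLayer.getCartTotal(userId)}`)
  }
}

presentationLayer.renderCartTotal(42) // Your total is $45
```

Because the presentation layer only ever calls the application layer, the storage mechanism in the data layer can change without either of the upper layers noticing.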
The presentation tier plays a vital role by acting as the glue that binds the entire shopping cart application together, allowing the functions present in different tiers to communicate with each other and displaying the outputs to the end user through the web browser.

In this multi layered architecture, the business logic required for processing activities like the calculation of shipping costs is pulled from the application tier to the presentation tier. The application tier also acts as the integration layer and allows the applications to communicate seamlessly with both the data tier and the presentation tier.

The last tier, the data tier, is used to maintain data. This layer typically contains database servers, and it keeps data independent from the application servers and the business logic. This approach provides enhanced scalability and performance to the data tier.

Read next

- Microservices and Service Oriented Architecture
- What is serverless architecture and why should I be interested?


Implementing 5 Common Design Patterns in JavaScript (ES8)

Richa Tripathi
01 May 2018
14 min read
In this tutorial, we'll see how common design patterns can be used as blueprints for organizing larger structures.

Defining steps with template functions

A template is a design pattern that details the order in which a given set of operations is to be executed; however, a template does not outline the steps themselves. This pattern is useful when behavior is divided into phases that have some conceptual or side-effect dependency that requires them to be executed in a specific order. Here, we'll see how to use the template function design pattern. We assume you already have a workspace that allows you to create and run ES modules in your browser for all the recipes given below.

How to do it...

1. Open your command-line application and navigate to your workspace.
2. Create a new folder named 09-01-defining-steps-with-template-functions.
3. Copy or create an index.html file that loads and runs a main function from main.js.
4. Create a main.js file that defines a new abstract class named Mission:

```javascript
// main.js
class Mission {
  constructor () {
    if (this.constructor === Mission) {
      throw new Error('Mission is an abstract class, must extend');
    }
  }
}
```

5. Add a method named execute that calls three instance methods (determineDestination, determinePayload, and launch):

```javascript
// main.js
class Mission {
  execute () {
    this.determineDestination();
    this.determinePayload();
    this.launch();
  }
}
```

6. Create a LunarRover class that extends the Mission class, with a constructor that assigns name to an instance property:

```javascript
// main.js
class LunarRover extends Mission {
  constructor (name) {
    super();
    this.name = name;
  }
}
```

7. Implement the three methods called by Mission.execute:

```javascript
// main.js
class LunarRover extends Mission {
  // ... constructor as above ...
  determineDestination () {
    this.destination = 'Oceanus Procellarum';
  }
  determinePayload () {
    this.payload = 'Rover with camera and mass spectrometer.';
  }
  launch () {
    console.log(`
      Destination: ${this.destination}
      Payload: ${this.payload}
      Launched! Rover will arrive in a week.
    `);
  }
}
```

8. Create a JovianOrbiter class that also extends the Mission class:

```javascript
// main.js
class JovianOrbiter extends Mission {
  constructor (name) {
    super();
    this.name = name;
  }
  determineDestination () {
    this.destination = 'Jovian Orbit';
  }
  determinePayload () {
    this.payload = 'Orbiter with descent module.';
  }
  launch () {
    console.log(`
      Destination: ${this.destination}
      Payload: ${this.payload}
      Launched! Orbiter will arrive in 7 years.
    `);
  }
}
```

9. Create a main function that creates both concrete mission types and executes them:

```javascript
// main.js
export function main () {
  const jadeRabbit = new LunarRover('Jade Rabbit');
  jadeRabbit.execute();
  const galileo = new JovianOrbiter('Galileo');
  galileo.execute();
}
```

10. Start your Python web server and open the following link in your browser: http://localhost:8000/. Each mission should log its destination, payload, and launch message to the console.

How it works...

The Mission abstract class defines the execute method, which calls the other instance methods in a particular order. You'll notice that the methods called are not defined by the Mission class. This implementation detail is the responsibility of the extending classes. This use of abstract classes allows child classes to be used by code that takes advantage of the interface defined by the abstract class. In the template function pattern, it is the responsibility of the child classes to define the steps. When they are instantiated, and the execute method is called, those steps are then performed in the specified order. Ideally, we'd be able to ensure that Mission.execute was not overridden by any inheriting classes. Overriding this method works against the pattern and breaks the contract associated with it.
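JavaScript has no built-in way to mark a method as final, but one possible guard, shown here as a sketch rather than part of the original recipe, is to make the property non-writable on the prototype:

```javascript
// Sketch (not from the recipe): make execute non-writable so that an
// instance cannot be given its own copy, e.g. mission.execute = () => {}.
// Note: a subclass can still declare execute() on its own prototype, so
// this only guards against accidental per-instance reassignment.
Object.defineProperty(Mission.prototype, 'execute', {
  writable: false,
  configurable: false
});
```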
This pattern is useful for organizing data-processing pipelines. The guarantee that these steps will occur in a given order means that, if side effects are eliminated, the instances can be organized more flexibly. The implementing class can then organize these steps in the best possible way.

Assembling customized instances with builders

The previous recipe shows how to organize the operations of a class. Sometimes, object initialization can also be complicated. In these situations, it can be useful to take advantage of another design pattern: builders. Now, we'll see how to use builders to organize the initialization of more complicated objects.

How to do it...

1. Open your command-line application and navigate to your workspace.
2. Create a new folder named 09-02-assembling-instances-with-builders.
3. Create a main.js file that defines a new class named Mission, which takes a name constructor argument and assigns it to an instance property. Also, create a describe method that prints out some details:

```javascript
// main.js
class Mission {
  constructor (name) {
    this.name = name;
  }
  describe () {
    console.log(`
      The ${this.name} mission will be launched by a ${this.rocket.name}
      rocket, and deliver a ${this.payload.name} to ${this.destination.name}.
    `);
  }
}
```

4. Create classes named Destination, Payload, and Rocket, which receive a name property as a constructor parameter and assign it to an instance property:

```javascript
// main.js
class Destination {
  constructor (name) {
    this.name = name;
  }
}

class Payload {
  constructor (name) {
    this.name = name;
  }
}

class Rocket {
  constructor (name) {
    this.name = name;
  }
}
```

5. Create a MissionBuilder class that defines the setMissionName, setDestination, setPayload, and setRocket methods:

```javascript
// main.js
class MissionBuilder {
  setMissionName (name) {
    this.missionName = name;
    return this;
  }
  setDestination (destination) {
    this.destination = destination;
    return this;
  }
  setPayload (payload) {
    this.payload = payload;
    return this;
  }
  setRocket (rocket) {
    this.rocket = rocket;
    return this;
  }
}
```

6. Create a build method that creates a new Mission instance with the appropriate properties:

```javascript
// main.js
class MissionBuilder {
  build () {
    const mission = new Mission(this.missionName);
    mission.rocket = this.rocket;
    mission.destination = this.destination;
    mission.payload = this.payload;
    return mission;
  }
}
```

7. Create a main function that uses MissionBuilder to create a new mission instance:

```javascript
// main.js
export function main () {
  // build and describe a mission
  new MissionBuilder()
    .setMissionName('Jade Rabbit')
    .setDestination(new Destination('Oceanus Procellarum'))
    .setPayload(new Payload('Lunar Rover'))
    .setRocket(new Rocket('Long March 3B Y-23'))
    .build()
    .describe();
}
```

8. Start your Python web server and open the following link in your browser: http://localhost:8000/. The console should describe the fully assembled mission.

How it works...

The builder defines methods for assigning all the relevant properties and defines a build method that ensures that each is called and assigned appropriately. Builders are like template functions, but instead of ensuring that a set of operations is executed in the correct order, they ensure that an instance is properly configured before returning. Because each instance method of MissionBuilder returns the this reference, the methods can be chained. The last line of the main function calls describe on the new Mission instance that is returned from the build method.
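The build method shown above does not itself check that every setter was called. If you want build to actually guarantee a fully configured Mission, one option (a sketch, not part of the original recipe) is to validate before constructing:

```javascript
// Sketch (not from the recipe): fail fast if any setter was skipped.
class MissionBuilder {
  // ... setter methods as above ...
  build () {
    for (const prop of ['missionName', 'destination', 'payload', 'rocket']) {
      if (this[prop] === undefined) {
        throw new Error(`MissionBuilder: ${prop} was never set`);
      }
    }
    const mission = new Mission(this.missionName);
    mission.rocket = this.rocket;
    mission.destination = this.destination;
    mission.payload = this.payload;
    return mission;
  }
}
```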
Replicating instances with factories

Like builders, factories are a way of organizing object construction. They differ from builders in how they are organized. Often, the interface of a factory is a single function call. This makes factories easier to use, if less customizable, than builders. Now, we'll see how to use factories to easily replicate instances.

How to do it...

1. Open your command-line application and navigate to your workspace.
2. Create a new folder named 09-03-replicating-instances-with-factories.
3. Copy or create an index.html that loads and runs a main function from main.js.
4. Create a main.js file that defines a new class named Mission. Add a constructor that takes a name constructor argument and assigns it to an instance property. Also, define a simple describe method:

```javascript
// main.js
class Mission {
  constructor (name) {
    this.name = name;
  }
  describe () {
    console.log(`
      The ${this.name} mission will be launched by a ${this.rocket.name}
      rocket, and deliver a ${this.payload.name} to ${this.destination.name}.
    `);
  }
}
```

5. Create three classes named Destination, Payload, and Rocket, which take name as a constructor argument and assign it to an instance property:

```javascript
// main.js
class Destination {
  constructor (name) {
    this.name = name;
  }
}

class Payload {
  constructor (name) {
    this.name = name;
  }
}

class Rocket {
  constructor (name) {
    this.name = name;
  }
}
```

6. Create a MarsMissionFactory object with a single create method that takes two arguments: name and rocket. This method should create a new Mission using those arguments:

```javascript
// main.js
const MarsMissionFactory = {
  create (name, rocket) {
    const mission = new Mission(name);
    mission.destination = new Destination('Martian surface');
    mission.payload = new Payload('Mars rover');
    mission.rocket = rocket;
    return mission;
  }
}
```

7. Create a main method that creates and describes two similar missions:

```javascript
// main.js
export function main () {
  // build and describe two missions
  MarsMissionFactory
    .create('Curiosity', new Rocket('Atlas V'))
    .describe();
  MarsMissionFactory
    .create('Spirit', new Rocket('Delta II'))
    .describe();
}
```

8. Start your Python web server and open the following link in your browser: http://localhost:8000/. The console should describe both Mars missions.

How it works...

The create method takes a subset of the properties needed to create a new mission. The remaining values are provided by the method itself. This allows factories to simplify the process of creating similar instances. In the main function, you can see that two Mars missions have been created, differing only in name and Rocket instance. We've halved the number of values needed to create an instance. This pattern can help reduce instantiation logic. In this recipe, we simplified the creation of different kinds of missions by identifying the common attributes, encapsulating those in the body of the factory function, and using arguments to supply the remaining properties. In this way, commonly used instance shapes can be created without additional boilerplate code.

Processing a structure with the visitor pattern

The patterns we've seen thus far organize the construction of objects and the execution of operations. The next pattern we'll look at is specially made to traverse and perform operations on hierarchical structures. Here, we'll be looking at the visitor pattern.

How to do it...

1. Open your command-line application and navigate to your workspace.
2. Copy the 09-02-assembling-instances-with-builders folder to a new 09-04-processing-a-structure-with-the-visitor-pattern directory.
3. Add a class named MissionInspector to main.js. Create a visit method that calls a corresponding method for each of the following types: Mission, Destination, Rocket, and Payload:

```javascript
// main.js
/* visitor that inspects a mission */
class MissionInspector {
  visit (element) {
    if (element instanceof Mission) {
      this.visitMission(element);
    } else if (element instanceof Destination) {
      this.visitDestination(element);
    } else if (element instanceof Rocket) {
      this.visitRocket(element);
    } else if (element instanceof Payload) {
      this.visitPayload(element);
    }
  }
}
```

4. Create a visitMission method that logs out an ok message:

```javascript
// main.js
class MissionInspector {
  visitMission (mission) {
    console.log('Mission ok');
    mission.describe();
  }
}
```

5. Create a visitDestination method that throws an error if the destination is not in an approved list:

```javascript
// main.js
class MissionInspector {
  visitDestination (destination) {
    const name = destination.name.toLowerCase();
    if (
      name === 'mercury' ||
      name === 'venus' ||
      name === 'earth' ||
      name === 'moon' ||
      name === 'mars'
    ) {
      console.log('Destination: ', name, ' approved');
    } else {
      throw new Error('Destination: "' + name + '" not approved at this time');
    }
  }
}
```

6. Create a visitPayload method that throws an error if the payload isn't valid:

```javascript
// main.js
class MissionInspector {
  visitPayload (payload) {
    const name = payload.name.toLowerCase();
    const payloadExpr = /(orbiter)|(rover)/;
    if (payloadExpr.test(name)) {
      console.log('Payload: ', name, ' approved');
    } else {
      throw new Error('Payload: "' + name + '" not approved at this time');
    }
  }
}
```

7. Create a visitRocket method that logs out an ok message:

```javascript
// main.js
class MissionInspector {
  visitRocket (rocket) {
    console.log('Rocket: ', rocket.name, ' approved');
  }
}
```

8. Add an accept method to the Mission class that calls accept on its constituents, then tells the visitor to visit the current instance:

```javascript
// main.js
class Mission {
  // other mission code ...
  accept (visitor) {
    this.rocket.accept(visitor);
    this.payload.accept(visitor);
    this.destination.accept(visitor);
    visitor.visit(this);
  }
}
```

9. Add an accept method to the Destination class that tells the visitor to visit the current instance:

```javascript
// main.js
class Destination {
  // other destination code ...
  accept (visitor) {
    visitor.visit(this);
  }
}
```

10. Add an accept method to the Payload class that tells the visitor to visit the current instance:

```javascript
// main.js
class Payload {
  // other payload code ...
  accept (visitor) {
    visitor.visit(this);
  }
}
```

11. Add an accept method to the Rocket class that tells the visitor to visit the current instance:

```javascript
// main.js
class Rocket {
  // other rocket code ...
  accept (visitor) {
    visitor.visit(this);
  }
}
```
12. Create a main function that creates different instances with the builder, visits them with a MissionInspector instance, and logs out any thrown errors:

```javascript
// main.js
export function main () {
  // build the missions to be inspected
  const jadeRabbit = new MissionBuilder()
    .setMissionName('Jade Rabbit')
    .setDestination(new Destination('Moon'))
    .setPayload(new Payload('Lunar Rover'))
    .setRocket(new Rocket('Long March 3B Y-23'))
    .build();

  const curiosity = new MissionBuilder()
    .setMissionName('Curiosity')
    .setDestination(new Destination('Mars'))
    .setPayload(new Payload('Mars Rover'))
    .setRocket(new Rocket('Delta II'))
    .build();

  // expect error from Destination
  const buzz = new MissionBuilder()
    .setMissionName('Buzz Lightyear')
    .setDestination(new Destination('Too Infinity And Beyond'))
    .setPayload(new Payload('Interstellar Orbiter'))
    .setRocket(new Rocket('Self Propelled'))
    .build();

  // expect error from Payload
  const terraformer = new MissionBuilder()
    .setMissionName('Mars Terraformer')
    .setDestination(new Destination('Mars'))
    .setPayload(new Payload('Terraformer'))
    .setRocket(new Rocket('Light Sail'))
    .build();

  const inspector = new MissionInspector();

  [jadeRabbit, curiosity, buzz, terraformer].forEach((mission) => {
    try {
      mission.accept(inspector);
    } catch (e) {
      console.error(e);
    }
  });
}
```

13. Start your Python web server and open the following link in your browser: http://localhost:8000/. The first two missions should be approved, while the last two should log errors for their destination and payload, respectively.

How it works...

The visitor pattern has two components. The visitor processes the subject objects, and the subjects tell other related subjects about the visitor, and when the current subject should be visited. The accept method is required for each subject to receive a notification that there is a visitor. That method then makes two types of method call. The first is the accept method on its related subjects. The second is the visit method on the visitor. In this way, the visitor traverses a structure by being passed around by the subjects. The visitor methods are used to process different types of node. In some languages, this is handled by language-level polymorphism. In JavaScript, we can use run-time type checks to do this. The visitor pattern is a good option for processing hierarchical structures of objects, where the structure is not known ahead of time, but the types of subjects are known.
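As an alternative to the instanceof chain, the visit method could dispatch on the constructor name at runtime. This is a sketch of one possible variation, not part of the original recipe; note that constructor.name can change under minification, which is one reason to prefer explicit instanceof checks:

```javascript
// Sketch (not from the recipe): look up the handler from the class name,
// so visit() works for any subject with a matching visitXxx method.
class MissionInspector {
  visit (element) {
    const handler = this[`visit${element.constructor.name}`];
    if (typeof handler !== 'function') {
      throw new Error(`No visitor method for ${element.constructor.name}`);
    }
    return handler.call(this, element);
  }
  // visitMission, visitDestination, visitRocket, visitPayload as above ...
}
```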
Using a singleton to manage instances

Sometimes, there are objects that are resource intensive. They may require time, memory, battery power, or network usage that is unavailable or inconvenient. It is often useful to manage the creation and sharing of instances. Here, we'll see how to use singletons to manage instances.

How to do it...

1. Open your command-line application and navigate to your workspace.
2. Create a new folder named 09-05-singleton-to-manage-instances.
3. Copy or create an index.html that loads and runs a main function from main.js.
4. Create a main.js file that defines a new class named Rocket. Add a constructor that takes a name constructor argument and assigns it to an instance property:

```javascript
// main.js
class Rocket {
  constructor (name) {
    this.name = name;
  }
}
```

5. Create a RocketManager object that has a rockets property. Add a findOrCreate method that indexes Rocket instances by the name property:

```javascript
// main.js
const RocketManager = {
  rockets: {},
  findOrCreate (name) {
    const rocket = this.rockets[name] || new Rocket(name);
    this.rockets[name] = rocket;
    return rocket;
  }
}
```

6. Create a main function that creates instances with and without the manager. Compare the instances and see whether they are identical:

```javascript
// main.js
export function main () {
  const atlas = RocketManager.findOrCreate('Atlas V');
  const atlasCopy = RocketManager.findOrCreate('Atlas V');
  const atlasClone = new Rocket('Atlas V');
  console.log('Copy is the same: ', atlas === atlasCopy);
  console.log('Clone is the same: ', atlas === atlasClone);
}
```

7. Start your Python web server and open the following link in your browser: http://localhost:8000/. The copy comparison should log true, and the clone comparison should log false.

How it works...

The object stores references to the instances, indexed by the string value given with name. This map is created when the module loads, so it is persisted through the life of the program. The singleton is then able to look up the object and return instances created by findOrCreate with the same name. Conserving resources and simplifying communication are the primary motivations for using singletons. Creating a single object for multiple uses is more efficient in terms of the space and time needed than creating several. Plus, having single instances for messages to be communicated through makes communication between different parts of a program easier. Singletons may require more sophisticated indexing if they are relying on more complicated data.

You read an excerpt from a book written by Ross Harrison, titled ECMAScript Cookbook. This book contains over 70 recipes to help you improve your coding skills and solve practical JavaScript problems.

- 6 JavaScript micro optimizations you need to know
- Mozilla is building a bridge between Rust and JavaScript
- Behavior Scripting in C# and Javascript for game developers


Introduction to Creational Patterns using Go Programming

Packt
02 Jan 2017
12 min read
This article by Mario Castro Contreras, author of the book Go Design Patterns, introduces you to the creational design patterns that are explained in the book. As the title implies, this article groups common practices for creating objects. Creational patterns try to give ready-to-use objects to users instead of asking for their input, which, in some cases, could be complex and would couple your code to the concrete implementations of functionality that should be defined in an interface.

(For more resources related to this topic, see here.)

Singleton design pattern – Having a unique instance of an object in the entire program

Have you ever conducted interviews for software engineers? It's interesting that when you ask them about design patterns, more than 80% will start with the Singleton design pattern. Why is that? Maybe it's because it is one of the most used design patterns out there, or one of the easiest to grasp. We will start our journey on creational design patterns because of the latter reason.

Description

The Singleton pattern is easy to remember. As the name implies, it will provide you with a single instance of an object and guarantee that there are no duplicates. At the first call to use the instance, it is created and then reused between all the parts of the application that need to use that particular behavior.

Objective of the Singleton pattern

You'll use the Singleton pattern in many different situations. For example:

- When you want to use the same connection to a database to make every query
- When you open a Secure Shell (SSH) connection to a server to do a few tasks, and don't want to reopen the connection for each task
- If you need to limit the access to some variable or space, you use a Singleton as the door to this variable
- If you need to limit the number of calls to some places, you create a Singleton instance to make the calls in the accepted window

The possibilities are endless, and we have just mentioned some of them.

Implementation

Finally, we have to implement the Singleton pattern. You'll usually write a static method and instance to retrieve the Singleton instance. In Go, we don't have the keyword static, but we can achieve the same result by using the scope of the package. First, we create a structure that contains the object which we want to guarantee to be a Singleton during the execution of the program:

```go
package creational

type singleton struct {
    count int
}

var instance *singleton

func GetInstance() *singleton {
    if instance == nil {
        instance = new(singleton)
    }
    return instance
}

func (s *singleton) AddOne() int {
    s.count++
    return s.count
}
```

We must pay close attention to this piece of code. In languages like Java or C++, the variable instance would be initialized to NULL at the beginning of the program. In Go, you can initialize a pointer to a structure as nil, but you cannot initialize a structure to nil (the equivalent of NULL). So the var instance *singleton line defines a pointer to a structure of type singleton, called instance, whose value is nil. We created a GetInstance method that checks whether the instance has been initialized already (instance == nil), and creates an instance in the space already allocated in the line instance = new(singleton). Remember, when we use the keyword new, we are creating a pointer to the type between the parentheses. The AddOne method will take the count of the variable instance, raise it by one, and return the current value of the counter.
Let's now run our unit tests again:

```
$ go test -v -run=GetInstance
=== RUN   TestGetInstance
--- PASS: TestGetInstance (0.00s)
PASS
ok
```

Factory method – Delegating the creation of different types of payments

The Factory method pattern (or simply, Factory) is probably the second-best known and used design pattern in the industry. Its purpose is to abstract the user from the knowledge of the structure it needs to achieve a specific purpose. By delegating this decision to a Factory, the Factory can provide the object that best fits the user's needs or the most updated version. It can also ease the process of downgrading or upgrading the implementation of an object if needed.

Description

When using the Factory method design pattern, we gain an extra layer of encapsulation so that our program can grow in a controlled environment. With the Factory method, we delegate the creation of families of objects to a different package or object to abstract us from the knowledge of the pool of possible objects we could use. Imagine that you have two ways to access some specific resource: by HTTP or FTP. For us, the specific implementation of this access should be invisible. Maybe we just know that the resource is in HTTP or in FTP, and we just want a connection that uses one of these protocols. Instead of implementing the connection ourselves, we can use the Factory method to ask for the specific connection. With this approach, we can grow easily in the future if we need to add an HTTPS object.

Objective of the Factory method

After the previous description, the following objectives of the Factory method design pattern must be clear to you:

- Delegating the creation of new instances of structures to a different part of the program
- Working at the interface level instead of with concrete implementations
- Grouping families of objects to obtain a family object creator

Implementation

We will start with the GetPaymentMethod method. It must receive an integer that matches one of the defined constants of the same file to know which implementation it should return:

```go
package creational

import (
    "errors"
    "fmt"
)

type PaymentMethod interface {
    Pay(amount float32) string
}

const (
    Cash      = 1
    DebitCard = 2
)

func GetPaymentMethod(m int) (PaymentMethod, error) {
    switch m {
    case Cash:
        return new(CashPM), nil
    case DebitCard:
        return new(DebitCardPM), nil
    default:
        return nil, errors.New(fmt.Sprintf("Payment method %d not recognized\n", m))
    }
}
```

We use a plain switch to check the contents of the argument m (method). If it matches any of the known methods, cash or debit card, it returns a new instance of it. Otherwise, it returns nil and an error indicating that the payment method has not been recognized. Now we can run our tests again to check the second part of the unit tests:

```
$ go test -v -run=GetPaymentMethod .
=== RUN   TestGetPaymentMethodCash
--- FAIL: TestGetPaymentMethodCash (0.00s)
    factory_test.go:16: The cash payment method message wasn't correct
    factory_test.go:18: LOG:
=== RUN   TestGetPaymentMethodDebitCard
--- FAIL: TestGetPaymentMethodDebitCard (0.00s)
    factory_test.go:28: The debit card payment method message wasn't correct
    factory_test.go:30: LOG:
=== RUN   TestGetPaymentMethodNonExistent
--- PASS: TestGetPaymentMethodNonExistent (0.00s)
    factory_test.go:38: LOG: Payment method 20 not recognized
FAIL
exit status 1
FAIL
```

Now we do not get the errors saying it couldn't find the type of payment method. Instead, we receive a "message wasn't correct" error for each of the methods it covers.
We also got rid of the "Not implemented" message that was being returned when we asked for an unknown payment method. Let's implement the structures now:

```go
type CashPM struct{}
type DebitCardPM struct{}

func (c *CashPM) Pay(amount float32) string {
    return fmt.Sprintf("%0.2f paid using cash\n", amount)
}

func (c *DebitCardPM) Pay(amount float32) string {
    return fmt.Sprintf("%#0.2f paid using debit card\n", amount)
}
```

We just take the amount and print it in a nicely formatted message. With this implementation, the tests will all pass now:

```
$ go test -v -run=GetPaymentMethod .
=== RUN   TestGetPaymentMethodCash
--- PASS: TestGetPaymentMethodCash (0.00s)
    factory_test.go:18: LOG: 10.30 paid using cash
=== RUN   TestGetPaymentMethodDebitCard
--- PASS: TestGetPaymentMethodDebitCard (0.00s)
    factory_test.go:30: LOG: 22.30 paid using debit card
=== RUN   TestGetPaymentMethodNonExistent
--- PASS: TestGetPaymentMethodNonExistent (0.00s)
    factory_test.go:38: LOG: Payment method 20 not recognized
PASS
ok
```

Do you see the LOG: messages? They aren't errors; we just print some information that we receive when using the package under test. These messages can be omitted unless you pass the -v flag to the test command:

```
$ go test -run=GetPaymentMethod .
ok
```

Abstract Factory – A factory of factories

After learning about the Factory design pattern, where we grouped a family of related objects (in our case, payment methods), one can be quick to think: what if I group families of objects in a more structured hierarchy of families?

Description

The Abstract Factory design pattern is a new layer of grouping to achieve a bigger (and more complex) composite object, which is used through its interfaces. The idea behind grouping objects in families and grouping families is to have big factories that can be interchangeable and can grow more easily. In the early stages of development, it is also easier to work with factories and abstract factories than to wait until all concrete implementations are done to start your code. Also, you won't write an Abstract Factory from the beginning unless you know that your object's inventory for a particular field is going to be very large and could easily be grouped into families.

The objective

Grouping related families of objects is very convenient when your object number is growing so much that creating a unique point to get them all seems the only way to gain flexibility in runtime object creation. The following objectives of the Abstract Factory method must be clear to you:

- Providing a new layer of encapsulation for Factory methods that return a common interface for all factories
- Grouping common factories into a super Factory (also called a factory of factories)

Implementation

The implementation of every factory is already done for the sake of brevity. They are very similar to the Factory method, the only difference being that in the Factory method, we don't use an instance of the factory, because we use the package functions directly. The implementation of the vehicle factory is as follows:

```go
func GetVehicleFactory(f int) (VehicleFactory, error) {
    switch f {
    case CarFactoryType:
        return new(CarFactory), nil
    case MotorbikeFactoryType:
        return new(MotorbikeFactory), nil
    default:
        return nil, errors.New(fmt.Sprintf("Factory with id %d not recognized\n", f))
    }
}
```

Like in any factory, we switch between the factory possibilities to return the one that was demanded. As we have already implemented all the concrete vehicles, the tests must be run too:
```
$ go test -v -run=Factory -cover .
=== RUN   TestMotorbikeFactory
--- PASS: TestMotorbikeFactory (0.00s)
    vehicle_factory_test.go:16: Motorbike vehicle has 2 wheels
    vehicle_factory_test.go:22: Sport motorbike has type 1
=== RUN   TestCarFactory
--- PASS: TestCarFactory (0.00s)
    vehicle_factory_test.go:36: Car vehicle has 4 seats
    vehicle_factory_test.go:42: Luxury car has 4 doors.
PASS
coverage: 45.8% of statements
ok
```

All of them passed. Take a close look and note that we have used the -cover flag when running the tests to return a coverage percentage of 45.8% for the package. What this tells us is that 45.8% of the lines are covered by the tests we have written, but 54.2% is still not under test. This is because we haven't covered the cruise motorbike and the family car with tests. If you write those tests, the result should rise to around 70.8%.

Prototype design pattern

The last pattern we will see in this article is the Prototype pattern. Like all creational patterns, this too comes in handy when creating objects, and it is very common to see the Prototype pattern surrounded by more patterns.

Description

The aim of the Prototype pattern is to have an object or a set of objects that are already created at compilation time, but which you can clone as many times as you want at runtime. This is useful, for example, as a default template for a user who has just registered with your webpage, or as a default pricing plan in some service. The key difference between this and a Builder pattern is that objects are cloned for the user instead of being built at runtime. You can also build a cache-like solution, storing information using a prototype.

Objective

- Maintain a set of objects that will be cloned to create new instances
- Free the CPU from complex object initialization, taking up more memory resources instead

We will start with the GetClone method. This method should return an item of the specified type:

```go
type ShirtsCache struct{}

func (s *ShirtsCache) GetClone(m int) (ItemInfoGetter, error) {
    switch m {
    case White:
        newItem := *whitePrototype
        return &newItem, nil
    case Black:
        newItem := *blackPrototype
        return &newItem, nil
    case Blue:
        newItem := *bluePrototype
        return &newItem, nil
    default:
        return nil, errors.New("Shirt model not recognized")
    }
}
```

The Shirt structure also needs a GetInfo implementation to print the contents of the instances:

```go
type ShirtColor byte

type Shirt struct {
    Price float32
    SKU   string
    Color ShirtColor
}

func (s *Shirt) GetInfo() string {
    return fmt.Sprintf("Shirt with SKU '%s' and Color id %d that costs %f\n",
        s.SKU, s.Color, s.Price)
}
```

Finally, let's run the tests to see that everything is now working:

```
$ go test -run=TestClone -v .
=== RUN   TestClone
--- PASS: TestClone (0.00s)
    prototype_test.go:41: LOG: Shirt with SKU 'abbcc' and Color id 1 that costs 15.000000
    prototype_test.go:42: LOG: Shirt with SKU 'empty' and Color id 1 that costs 15.000000
    prototype_test.go:44: LOG: The memory positions of the shirts are different 0xc42002c038 != 0xc42002c040
PASS
ok
```

In the log (remember to set the -v flag when running the tests), you can check that shirt1 and shirt2 have different SKUs. We can also see the memory positions of both objects. Take into account that the positions shown on your computer will probably be different.

Summary

We have seen the creational design patterns commonly used in the software industry. Their purpose is to abstract the user from the creation of objects, for handling complexity or for maintainability purposes.
Design patterns have been the foundation of thousands of applications and libraries since the nineties, and most of the software we use today has many of these creational patterns under the hood.


Clean Up Your Code

Packt
19 Dec 2016
23 min read
In this article by Michele Bertoli, the author of the book React Design Patterns and Best Practices, we will learn how to use JSX. To use JSX without any problems or unexpected behaviors, it is important to understand how it works under the hood and the reasons why it is a useful tool for building UIs. Our goal is to write clean and maintainable JSX code, and to achieve that we have to know where it comes from, how it gets translated to JavaScript, and which features it provides. In the first section, we will take a small step back, but please bear with me, because it is crucial to master the basics before applying the best practices.

In this article, we will see:

- What JSX is and why we should use it
- What Babel is and how we can use it to write modern JavaScript code
- The main features of JSX and the differences between HTML and JSX
- The best practices to write JSX in an elegant and maintainable way

(For more resources related to this topic, see here.)

JSX

Let's see how we can declare our elements inside our components. React gives us two ways to define our elements: the first one is by using JavaScript functions, and the second one is by using JSX, an optional XML-like syntax. In the beginning, JSX is one of the main reasons why people fail to approach React, because looking at the examples on the homepage and seeing JavaScript mixed with HTML for the first time does not seem right to most of us. As soon as we get used to it, we realize that it is very convenient, exactly because it is similar to HTML and looks very familiar to anyone who has already created user interfaces on the web. The opening and closing tags make it easier to represent nested trees of elements, something that would have been unreadable and hard to maintain using plain JavaScript.

Babel

In order to use JSX (and es2015) in our code, we have to install Babel. First of all, it is important to understand clearly the problems it can solve for us and why we need to add a step to our process. The reason is that we want to use features of the language that have not been implemented yet in the browser, our target environment. Those advanced features make our code cleaner for developers, but the browser cannot understand and execute it. So the solution is to write our scripts in JSX and es2015, and when we are ready to ship, we compile the sources into es5, the standard specification implemented in the major browsers today. Babel is a popular JavaScript compiler widely adopted within the React community: it can compile es2015 code into es5 JavaScript, as well as compile JSX into JavaScript functions. The process is called transpilation, because it compiles the source into a new source rather than into an executable. Using it is pretty straightforward; we just install it:

```
npm install --global babel-cli
```

If you do not like to install it globally (developers usually tend to avoid that), you can install Babel locally to a project and run it through an npm script, but for the purpose of this article a global instance is fine. When the installation is complete, we can run the following command to compile our JavaScript files:

```
babel source.js -o output.js
```

One of the reasons why Babel is so powerful is that it is highly configurable. Babel is just a tool to transpile a source file into an output file, but to apply some transformations, we need to configure it.
Luckily, there are some very useful presets of configurations which we can easily install and use:

    npm install --global babel-preset-es2015 babel-preset-react

Once the installation is done, we create a configuration file called .babelrc and put the following lines into it to tell Babel to use those presets:

    {
      "presets": [
        "es2015",
        "react"
      ]
    }

From this point on we can write es2015 and JSX in our source files and execute the output files in the browser.

Hello, World!

Now that our environment has been set up to support JSX, we can dive into the most basic example: generating a div element. This is how you would create a div with React's createElement function:

    React.createElement('div')

React has some shortcut methods for DOM elements, and the following line is equivalent to the one above:

    React.DOM.div()

This is the JSX for creating a div element:

    <div />

It looks identical to the way we have always created the markup of our HTML pages. The big difference is that we are writing the markup inside a .js file, but it is important to notice that JSX is only syntactic sugar, and it gets transpiled into JavaScript before being executed in the browser. In fact, our <div /> is translated into React.createElement('div') when we run Babel, and that is something we should always keep in mind when we write our templates.

DOM elements and React components

With JSX we can obviously create both HTML elements and React components; the only difference is whether they start with a capital letter or not. For example, to render an HTML button we use <button />, while to render our Button component we use <Button />. The first button gets transpiled into:

    React.createElement('button')

While the second one into:

    React.createElement(Button)

The difference here is that in the first call we are passing the type of the DOM element as a string, while in the second one we are passing the component itself, which means that it should exist in the scope to work. As you may have noticed, JSX supports self-closing tags, which are pretty good for keeping the code terse and do not require us to repeat unnecessary tags.

Props

JSX is very convenient when your DOM elements or React components have props; following XML, it is pretty easy to set attributes on elements:

    <img src="https://facebook.github.io/react/img/logo.svg" alt="React.js" />

The equivalent in JavaScript would be:

    React.createElement("img", {
      src: "https://facebook.github.io/react/img/logo.svg",
      alt: "React.js"
    });

which is way less readable, and even with only a couple of attributes it starts getting hard to read without a bit of reasoning.

Children

JSX allows you to define children to describe the tree of elements and compose complex UIs. A basic example is a link with text inside it:

    <a href="https://facebook.github.io/react/">Click me!</a>

which would be transpiled into:

    React.createElement(
      "a",
      { href: "https://facebook.github.io/react/" },
      "Click me!"
    );

Our link can be enclosed inside a div for some layout requirements, and the JSX snippet to achieve that is the following:

    <div>
      <a href="https://facebook.github.io/react/">Click me!</a>
    </div>

With the JSX equivalent being:

    React.createElement(
      "div",
      null,
      React.createElement(
        "a",
        { href: "https://facebook.github.io/react/" },
        "Click me!"
      )
    );

It now becomes clear how the XML-like syntax of JSX makes everything more readable and maintainable, but it is always important to know the JavaScript parallel of our JSX to keep control over the creation of elements.
The good part is that we are not limited to having elements as children of elements; we can use JavaScript expressions, such as functions or variables. To do that, we just have to put the expression inside curly braces:

    <div>
      Hello, {variable}.
      I'm a {function()}.
    </div>

The same applies to non-string attributes:

    <a href={this.makeHref()}>Click me!</a>

Differences with HTML

So far we have seen how JSX is similar to HTML; let's now see the little differences between them and the reasons why they exist.

Attributes

We always have to keep in mind that JSX is not a standard language and it gets transpiled into JavaScript, and because of that some attributes cannot be used. For example, instead of class we have to use className, and instead of for we have to use htmlFor:

    <label className="awesome-label" htmlFor="name" />

The reason is that class and for are reserved words in JavaScript.

Style

A pretty significant difference is the way the style attribute works. The style attribute does not accept a CSS string as its HTML parallel does; it expects a JavaScript object where the style names are camelCased:

    <div style={{ backgroundColor: 'red' }} />

Root

One important difference with HTML worth mentioning is that, since JSX elements get translated into JavaScript functions and you cannot return two functions in JavaScript, whenever you have multiple elements at the same level you are forced to wrap them into a parent. Let's see a simple example:

    <div />
    <div />

gives us the following error:

    Adjacent JSX elements must be wrapped in an enclosing tag

while this works:

    <div>
      <div />
      <div />
    </div>

It is pretty annoying having to add unnecessary div tags just to make JSX work, but the React developers are trying to find a solution: https://github.com/reactjs/core-notes/blob/master/2016-07/july-07.md

Spaces

There's one thing that could be a little bit tricky at the beginning, and again it regards the fact that we should always keep in mind that JSX is not HTML, even if it has an XML-like syntax. JSX, in fact, handles the spaces between text and elements differently from HTML, in a way that's counter-intuitive. Consider the following snippet:

    <div>
      <span>foo</span>
      bar
      <span>baz</span>
    </div>

In the browser, which interprets HTML, this code would give you foo bar baz, which is exactly what we expect it to be. In JSX, instead, the same code would be rendered as foobarbaz, because the three nested lines get transpiled as individual children of the div element, without taking the spaces into account. A common solution is to put a space explicitly between the elements:

    <div>
      <span>foo</span>
      {' '}
      bar
      {' '}
      <span>baz</span>
    </div>

As you may have noticed, we are using a string containing a space, wrapped inside a JavaScript expression, to force the compiler to apply the space between the elements.

Boolean attributes

A couple more things worth mentioning before starting for real regard the way you define Boolean attributes in JSX. If you set an attribute without a value, JSX assumes that its value is true, following the behavior of the HTML disabled attribute, for example. That means that if we want to set an attribute to false, we have to declare it explicitly as false:

    <button disabled />
    React.createElement("button", { disabled: true });

And:

    <button disabled={false} />
    React.createElement("button", { disabled: false });

This can be confusing in the beginning, because we may think that omitting an attribute means false, but it is not like that: with React we should always be explicit to avoid confusion.
Spread attributes

An important feature is the spread attributes operator, which comes from the rest/spread properties proposal for ECMAScript, and it is very convenient whenever we want to pass all the attributes of a JavaScript object to an element. A common practice that leads to fewer bugs is not to pass entire JavaScript objects down to children by reference, but to use their primitive values, which can be easily validated, making components more robust and error-proof. Let's see how it works:

    const foo = { bar: 'baz' }
    return <div {...foo} />

That gets transpiled into this:

    var foo = { bar: 'baz' };
    return React.createElement('div', foo);

JavaScript templating

Last but not least, we started from the point that one of the advantages of moving the templates inside our components, instead of using an external template library, is that we can use the full power of JavaScript, so let's start looking at what that means. The spread attributes feature is obviously an example of that, and another common one is that JavaScript expressions can be used as attribute values by wrapping them into curly braces:

    <button disabled={errors.length} />

Now that we know how JSX works and we have mastered it, we are ready to see how to use it in the right way, following some useful conventions and techniques.

Common patterns

Multi-line

Let's start with a very simple one: as we said, one of the main reasons why we should prefer JSX over React's createElement is its XML-like syntax and the way balanced opening/closing tags are perfect for representing a tree of nodes. Therefore, we should try to use it in the right way and get the most out of it. One example is that, whenever we have nested elements, we should always go multi-line:

    <div>
      <Header />
      <div>
        <Main content={...} />
      </div>
    </div>

instead of:

    <div><Header /><div><Main content={...} /></div></div>

That is, unless the children are not elements but text or variables. In that case, it can make sense to remain on the same line and avoid adding noise to the markup, as in:

    <div>
      <Alert>{message}</Alert>
      <Button>Close</Button>
    </div>

Always remember to wrap your elements inside parentheses when you write them on multiple lines. JSX always gets replaced by function calls, and because of JavaScript's automatic semicolon insertion, a return value written on a new line can give you an unexpected result. Suppose, for example, that you are returning JSX from your render method, which is how you create UIs in React. The following example works fine because the div is on the same line as the return:

    return <div />

While this is not right:

    return
      <div />

Because you would have:

    return;
    React.createElement("div", null);

That is why you have to wrap the statement in parentheses:

    return (
      <div />
    )

Multi-properties

A common problem in writing JSX arises when an element has multiple attributes. One solution would be to write all the attributes on the same line, but this would lead to very long lines, which we do not want in our code (see the next section on how to enforce coding style guides). A common solution is to write each attribute on a new line with one level of indentation, and then put the closing bracket aligned with the opening tag:

    <button
      foo="bar"
      veryLongPropertyName="baz"
      onSomething={this.handleSomething}
    />

Conditionals

Things get more interesting when we start working with conditionals, for example if we want to render some components only when certain conditions are matched.
The fact that we can use JavaScript is obviously a plus, but there are many different ways to express conditions in JSX, and it is important to understand the benefits and the problems of each one of them, so as to write code that is readable and maintainable at the same time. Suppose we want to show a logout button only if the user is currently logged in to our application. A simple snippet to start with is the following:

    let button
    if (isLoggedIn) {
      button = <LogoutButton />
    }
    return <div>{button}</div>

It works, but it is not very readable, especially if there are multiple components and multiple conditions. What we can do in JSX is use an inline condition:

    <div>
      {isLoggedIn && <LogoutButton />}
    </div>

This works because if the condition is false, nothing gets rendered, but if the condition is true, the createElement function of the LogoutButton gets called and the element is returned to compose the resulting tree. If the condition has an alternative (the classic if...else statement) and we want, for example, to show a logout button if the user is logged in and a login button otherwise, we can use JavaScript's if...else:

    let button
    if (isLoggedIn) {
      button = <LogoutButton />
    } else {
      button = <LoginButton />
    }
    return <div>{button}</div>

Alternatively, and better, we can use a ternary condition, which makes our code more compact:

    <div>
      {isLoggedIn ? <LogoutButton /> : <LoginButton />}
    </div>

You can find the ternary condition used in popular repositories, such as the Redux real-world example (https://github.com/reactjs/redux/blob/master/examples/real-world/src/components/List.js), where the ternary is used to show a "Loading..." label if the component is fetching the data, or "Load More" inside a button, according to the value of the isFetching variable:

    <button [...]>
      {isFetching ? 'Loading...' : 'Load More'}
    </button>

Let's now see the best solution for when things get more complicated and, for example, we have to check more than one variable to determine whether to render a component or not:

    <div>
      {dataIsReady && (isAdmin || userHasPermissions) && <SecretData />}
    </div>

In this case it is clear that using the inline condition is a good solution, but readability is strongly impacted. What we can do instead is create a helper function inside our component and use it in JSX to verify the condition:

    canShowSecretData() {
      const { dataIsReady, isAdmin, userHasPermissions } = this.props
      return dataIsReady && (isAdmin || userHasPermissions)
    }

    <div>
      {this.canShowSecretData() && <SecretData />}
    </div>

As you can see, this change makes the code more readable and the condition more explicit. Looking at this code in six months' time, you will still find it clear just by reading the name of the function. If we do not like using functions, we can use object getters, which make the code more elegant. For example, instead of declaring a function we define a getter:

    get canShowSecretData() {
      const { dataIsReady, isAdmin, userHasPermissions } = this.props
      return dataIsReady && (isAdmin || userHasPermissions)
    }

    <div>
      {this.canShowSecretData && <SecretData />}
    </div>

The same applies to computed properties: suppose you have two single properties for currency and value. Instead of creating the price string inside your render method, you can create a class function for that:

    getPrice() {
      return `${this.props.currency}${this.props.value}`
    }

    <div>{this.getPrice()}</div>

Which is better because it is isolated, and you can easily test it in case it contains logic.
Alternatively, going a step further and, as we have just seen, using getters:

    get price() {
      return `${this.props.currency}${this.props.value}`
    }

    <div>{this.price}</div>

Going back to conditional statements, there are other solutions that require using external dependencies. A good practice is to avoid external dependencies as much as we can, to keep our bundle smaller, but it may be worth it in this particular case, because improving the readability of our templates is a big win. The first solution is renderIf, which we can install with:

    npm install --save render-if

And easily use in our projects like this:

    const { dataIsReady, isAdmin, userHasPermissions } = this.props
    const canShowSecretData = renderIf(
      dataIsReady && (isAdmin || userHasPermissions)
    )

    <div>
      {canShowSecretData(<SecretData />)}
    </div>

We wrap our condition inside the renderIf function. The utility function that gets returned can then be called with the JSX markup to be shown when the condition is true. One goal that we should always keep in mind is never to add too much logic inside our components. Some of them will obviously require a bit of it, but we should try to keep them as simple and dumb as possible, so that we can spot and fix errors easily. At the very least, we should try to keep the renderIf method as clean as possible, and to do that we could use another utility library called react-only-if, which lets us write our components as if the condition were always true, by setting the conditional function using a higher-order component. To use the library, we just need to install it:

    npm install --save react-only-if

Once it is installed, we can use it in our apps in the following way:

    const SecretDataOnlyIf = onlyIf(
      SecretData,
      ({ dataIsReady, isAdmin, userHasPermissions }) => {
        return dataIsReady && (isAdmin || userHasPermissions)
      }
    )

    <div>
      <SecretDataOnlyIf
        dataIsReady={...}
        isAdmin={...}
        userHasPermissions={...}
      />
    </div>

As you can see, there is no logic at all inside the component itself. We pass the condition as the second parameter of the onlyIf function; when the condition is matched, the component gets rendered. The function used to validate the condition receives the props, the state, and the context of the component. In this way we avoid polluting our component with conditionals, so that it is easier to understand and reason about.

Loops

A very common operation in UI development is displaying lists of items. When it comes to showing lists, we realize that using JavaScript as a template language is a very good idea. If we write a function that returns an array inside our JSX template, each element of the array gets compiled into an element. As we have seen before, we can use any JavaScript expression inside curly braces, and the most obvious way to generate an array of elements, given an array of objects, is using map. Let's dive into a real-world example: suppose you have a list of users, each one with a name property attached to it. To create an unordered list showing the users, you can do:

    <ul>
      {users.map(user => <li>{user.name}</li>)}
    </ul>

This snippet is incredibly simple and incredibly powerful at the same time; it is where the power of HTML and JavaScript converge.
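One caveat worth adding here, beyond the excerpt above: React expects each item rendered from an array to carry a unique key prop, and it warns at runtime when the key is missing. A minimal variant of the snippet, assuming each user object also exposes a stable id field (an assumption for this example), would be:

    <ul>
      {users.map(user => <li key={user.id}>{user.name}</li>)}
    </ul>

The key helps React match items between renders, so it should be a stable identifier rather than the array index whenever the list can be reordered.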
Control statements

Conditionals and loops are very common operations in UI templates, and you may feel it is wrong to use the JavaScript ternary or the map function to perform them. JSX has been built in such a way that it only abstracts the creation of the elements, leaving the logic parts to real JavaScript, which is great, but sometimes the code can become less clear. In general, we aim to remove all the logic from our components, and especially from our render methods, but sometimes we have to show and hide elements according to the state of the application, and very often we have to loop through collections and arrays. If you feel that using JSX for that kind of operation would make your code more readable, there is a Babel plugin for that: jsx-control-statements. It follows the same philosophy as JSX, and it does not add any real functionality to the language; it is just syntactic sugar that gets compiled into JavaScript. Let's see how it works. First of all, we have to install it:

    npm install --save jsx-control-statements

Once it is installed, we have to add it to the list of Babel plugins in our .babelrc file:

    "plugins": ["jsx-control-statements"]

From now on we can use the syntax provided by the plugin, and Babel will transpile it together with the common JSX syntax. A conditional statement written using the plugin looks like the following snippet:

    <If condition={this.canShowSecretData}>
      <SecretData />
    </If>

which gets transpiled into a ternary expression:

    {canShowSecretData ? <SecretData /> : null}

The If component is great, but if for some reason you have nested conditions in your render method, it can easily become messy and hard to follow. This is where the Choose component comes to help:

    <Choose>
      <When condition={...}>
        <span>if</span>
      </When>
      <When condition={...}>
        <span>else if</span>
      </When>
      <Otherwise>
        <span>else</span>
      </Otherwise>
    </Choose>

Please note that the code above gets transpiled into multiple ternaries. Last but not least, there is a "component" (always remember that we are not talking about real components, just syntactic sugar) to manage loops, which is very convenient as well:

    <ul>
      <For each="user" of={this.props.users}>
        <li>{user.name}</li>
      </For>
    </ul>

The code above gets transpiled into a map function; no magic there. If you are used to using linters, you might wonder why the linter is not complaining about that code: the variable user doesn't exist before the transpilation, nor is it wrapped into a function. To avoid those linting errors, there's another plugin to install: eslint-plugin-jsx-control-statements. If you did not understand the previous sentence, don't worry: in the next section we will talk about linting.

Sub-render

It is worth stressing that we always want to keep our components very small and our render methods very clean and simple. However, that is not an easy goal, especially when you are creating an application iteratively, and in the first iteration you are not sure exactly how to split the components into smaller ones. So, what should we do when the render method becomes too big to keep maintainable? One solution is to split it into smaller functions in a way that lets us keep all the logic in the same component. Let's see an example:

    renderUserMenu() {
      // JSX for user menu
    }

    renderAdminMenu() {
      // JSX for admin menu
    }

    render() {
      return (
        <div>
          <h1>Welcome back!</h1>
          {this.userExists && this.renderUserMenu()}
          {this.userIsAdmin && this.renderAdminMenu()}
        </div>
      )
    }

This is not always considered a best practice, because it seems more obvious to split the component into smaller ones, but sometimes it helps just to keep the render method cleaner.
For example, in the Redux real-world examples, a sub-render method is used to render the load more button. Now that we are JSX power users, it is time to move on and see how to follow a style guide within our code to make it consistent.

Summary

In this article we came to understand in depth how JSX works and how to use it in the right way in our components. We started from the basics of the syntax to build solid knowledge that will let us master JSX and its features.

Why we need Design Patterns?

Packt
10 Nov 2016
16 min read
In this article by Praseed Pai and Shine Xavier, authors of the book .NET Design Patterns, we will try to understand the necessity of choosing a pattern-based approach to software development. We start with some principles of software development which one might find useful while undertaking large projects. The working example in the article starts with a requirements specification and progresses towards a preliminary implementation. We will then try to iteratively improve the solution using patterns and idioms, and come up with a good design that supports a well-defined programming interface. In this process, we will learn about some software development principles one can adhere to, including the following:

SOLID principles for OOP
Three key uses of design patterns
Arlow/Neustadt archetype patterns
Entity, value, and data transfer objects
Leveraging the .NET Reflection API for plug-in architecture

Some principles of software development

Writing quality production code consistently is not easy without some foundational principles under your belt. The purpose of this section is to whet the developer's appetite; towards the end, some references are given for detailed study. Detailed coverage of these principles would warrant a separate book of its own. The authors have tried to assimilate the following key principles of software development, which will help one write quality code:

KISS: Keep it simple, Stupid
DRY: Don't repeat yourself
YAGNI: You aren't gonna need it
Low coupling: Minimize coupling between classes
SOLID principles: Principles for better OOP

The maxim Keep it simple, Stupid (KISS) is essentially a restatement of William of Ockham's law of parsimony. In programming terms, it can be translated as "writing code in a straightforward manner, focusing on a particular solution that solves the problem at hand". This maxim is important because, most often, developers fall into the trap of writing code in a generic manner for unwarranted extensibility. Even though it initially looks attractive, things slowly go out of bounds. The accidental complexity introduced into the code base for catering to improbable scenarios often reduces readability and maintainability. The KISS principle can be applied to every human endeavor; learn more about it by consulting the web.

Don't repeat yourself (DRY) is a maxim which most programmers often forget while implementing their domain logic. Most often, in a collaborative development scenario, code gets duplicated inadvertently due to lack of communication and proper design specifications. This bloats the code base, induces subtle bugs, and makes things really difficult to change. By following the DRY maxim at all stages of development, we can avoid additional effort and keep the code consistent. The opposite of DRY is write everything twice (WET).

You aren't gonna need it (YAGNI) is a principle that complements the KISS axiom. It serves as a warning for people who try to write code in the most general manner, anticipating changes right from the word go. Too often, in practice, most of this code is never used and only creates potential code smells. While writing code, one should try to make sure that there are no hard-coded references to concrete classes. It is advisable to program to an interface as opposed to an implementation. This is a key principle which many patterns use to provide behavior acquisition at runtime.
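To make "program to an interface" concrete, here is a small illustrative C# sketch; the ILogger interface, its implementations, and ReportGenerator are invented for this example and are not taken from the book's case study:

    using System;
    using System.IO;

    // Callers depend on the abstraction, never on a concrete logger.
    public interface ILogger
    {
        void Log(string message);
    }

    public class ConsoleLogger : ILogger
    {
        public void Log(string message) => Console.WriteLine(message);
    }

    public class FileLogger : ILogger
    {
        public void Log(string message) =>
            File.AppendAllText("app.log", message + Environment.NewLine);
    }

    public class ReportGenerator
    {
        private readonly ILogger logger;

        // The concrete logger is injected; swapping it requires no change here.
        public ReportGenerator(ILogger logger)
        {
            this.logger = logger;
        }

        public void Generate() => logger.Log("Report generated.");
    }

Because ReportGenerator holds only an ILogger reference, new logging behaviors can be acquired at runtime simply by injecting a different implementation.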
A dependency injection framework could be used to reduce coupling between classes. SOLID is a set of guidelines for writing better object-oriented software; it is a mnemonic acronym that embodies the following five principles:

1. Single Responsibility Principle (SRP): A class should have only one responsibility. If it is doing more than one unrelated thing, we need to split the class.
2. Open/Closed Principle (OCP): A class should be open for extension, closed for modification.
3. Liskov Substitution Principle (LSP): Named after Barbara Liskov, a Turing Award laureate, who postulated that a subclass (derived class) can substitute any superclass (base class) reference without affecting the functionality. Even though it looks like stating the obvious, most implementations have quirks which violate this principle.
4. Interface Segregation Principle (ISP): It is more desirable to have multiple interfaces for a class (such classes can also be called components) than to have one uber interface that forces implementation of all methods, both relevant and non-relevant to the solution context.
5. Dependency Inversion (DI): This is a principle which is very useful for framework design. In the case of frameworks, the client code is invoked by server code, as opposed to the usual process of the client invoking the server. The main principle here is that abstraction should not depend upon details; rather, details should depend upon abstraction. This is also called the Hollywood Principle ("Don't call us, we'll call you").

The authors consider the preceding five principles primarily as a verification mechanism. This will be demonstrated by verifying the ensuing case study implementations for violation of these principles. Karl Seguin has written an e-book titled Foundations of Programming – Building Better Software, which covers most of what has been outlined here; read his book to gain an in-depth understanding of most of these topics. The SOLID principles are well covered in the Wikipedia page on the subject, which can be retrieved from https://en.wikipedia.org/wiki/SOLID_(object-oriented_design). Robert Martin's Agile Principles, Patterns, and Practices in C# is a definitive book for learning about SOLID, as Robert Martin himself is the creator of these principles, even though Michael Feathers coined the acronym.
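As a quick, hedged illustration of one of these principles, here is a small C# sketch of the Open/Closed Principle; the shapes and the area calculator are invented for this example:

    using System;
    using System.Collections.Generic;

    // Open for extension (new shapes can be added), closed for modification
    // (AreaCalculator never changes when a shape is added).
    public abstract class Shape
    {
        public abstract double Area();
    }

    public class Rectangle : Shape
    {
        public double Width { get; set; }
        public double Height { get; set; }
        public override double Area() => Width * Height;
    }

    public class Circle : Shape
    {
        public double Radius { get; set; }
        public override double Area() => Math.PI * Radius * Radius;
    }

    public static class AreaCalculator
    {
        public static double TotalArea(IEnumerable<Shape> shapes)
        {
            double total = 0;
            foreach (var shape in shapes)
            {
                total += shape.Area(); // no type checks, no switch blocks
            }
            return total;
        }
    }

Adding a Triangle later means writing one new subclass; the calculator, and every other client of Shape, stays untouched.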
Why are patterns required?

According to the authors, the three key advantages of pattern-oriented software development that stand out are as follows:

A language/platform-agnostic way to communicate about software artifacts
A tool for refactoring initiatives (targets for refactoring)
Better API design

With the advent of the pattern movement, the software development community got a canonical language to communicate about software design, architecture, and implementation. Software development is a craft which has trade-offs attached to each strategy, and there are multiple ways to develop software. The various pattern catalogs brought some conceptual unification to this cacophony in software development. Most developers around the world today who are worth their salt can understand and speak this language. We believe you will be able to do the same by the end of the article. Fancy yourself stating the following about your recent implementation:

For our tax computation example, we have used the command pattern to handle the computation logic. The commands (handlers) are configured using an XML file, and a factory method takes care of the instantiation of classes on the fly using lazy loading. We cache the commands, and avoid instantiation of more objects by imposing singleton constraints on the invocation. We support the prototype pattern, where command objects can be cloned. The command objects have a base implementation, where concrete command objects use the template method pattern to override methods where necessary. The command objects are implemented using the design by contract idiom. The whole mechanism is encapsulated using a Façade class, which acts as an API layer for the application logic. The application logic uses entity objects (reference) to store the taxable entities; attributes like tax parameters are stored as value objects. We use a data transfer object (DTO) to transfer the data from the application layer to the computational layer. The Arlow/Neustadt-based archetype pattern is the unit of structuring the tax computation logic.

For some developers, the preceding language/platform-independent description of the software being developed is enough to understand the approach taken. This will boost developer productivity (during all phases of the SDLC, including development, maintenance, and support), as developers will be able to form a good mental model of the code base. Without pattern catalogs, such succinct descriptions of the design or implementation would have been impossible.

In an Agile software development scenario, we develop software in an iterative fashion. Once we reach a certain maturity in a module, developers refactor their code. While refactoring a module, patterns do help in organizing the logic. The case study given next will help you to understand the rationale behind "patterns as refactoring targets". APIs based on well-defined patterns are easy to use and impose less cognitive load on programmers. The success of the ASP.NET MVC framework, NHibernate, and the APIs for writing HTTP modules and handlers in the ASP.NET pipeline are a few testimonies to this.

Personal income tax computation - a case study

Rather than explaining the advantages of patterns, the following example will help us to see things in action. Computation of annual income tax is a well-known problem domain across the globe. We have chosen an application domain which is well known, in order to focus on the software development issues. The application should receive inputs regarding the demographic profile (UID, Name, Age, Sex, Location) of a citizen and the income details (Basic, DA, HRA, CESS, Deductions) to compute his tax liability. The system should have discriminants based on the demographic profile, and have separate logic for senior citizens, juveniles, disabled people, old females, and others. By discriminant, we mean that demographic parameters like age, sex, and location should determine the category to which a person belongs, so that category-specific computation can be applied to that individual. As a first iteration, we will implement logic for the senior citizen and ordinary citizen categories. After preliminary discussion, our developer created a prototype screen for capturing these inputs.

Archetypes and the business archetype pattern

The legendary Swiss psychologist Carl Gustav Jung created the concept of archetypes to explain fundamental entities which arise from a common repository of human experiences. The concept of archetypes percolated into the software industry from psychology.
The Arlow/Neustadt patterns describe business archetype patterns like Party, Customer Call, Product, Money, Unit, Inventory, and so on. An example is the Apache Maven archetype, which helps us to generate projects of different natures like J2EE apps, Eclipse plugins, OSGI projects, and so on. Microsoft patterns & practices describes archetypes for target builds like web applications, rich client applications, mobile applications, and services applications. Various domain-specific archetypes can exist in respective contexts as organizing and structuring mechanisms. In our case, we will define some archetypes which are common in the taxation domain. Some of the key archetypes in this domain are:

1. SeniorCitizenFemale: Tax payers who are female, and above the age of 60 years
2. SeniorCitizen: Tax payers who are male, and above the age of 60 years
3. OrdinaryCitizen: Tax payers who are male/female, and above 18 years of age
4. DisabledCitizen: Tax payers who have any disability
5. MilitaryPersonnel: Tax payers who are military personnel
6. Juveniles: Tax payers whose age is less than 18 years

We will use demographic parameters as discriminants to find the archetype which corresponds to the entity. The whole idea of introducing archetypes is to organize the tax computation logic around them. Once we are able to resolve the archetypes, it is easy to locate and delegate the computations corresponding to them.

Entity, value, and data transfer objects

We are going to create a class which represents a citizen. Since a citizen needs to be uniquely identified, we are going to create an entity object, which is also called a reference object (from the DDD catalog). The universal identifier (UID) of an entity object is the handle by which an application refers to it. Entity objects are not identified by their attributes, as there can be two people with the same name; the ID uniquely identifies an entity object. The definition of the entity object is given as follows:

    public class TaxableEntity
    {
        public int Id { get; set; }
        public string Name { get; set; }
        public int Age { get; set; }
        public char Sex { get; set; }
        public string Location { get; set; }
        public TaxParamVO taxparams { get; set; }
    }

In the preceding class definition, Id uniquely identifies the entity object. TaxParamVO is a value object (from the DDD catalog) associated with the entity object. Value objects do not have a conceptual identity; they describe some attributes of things (entities). The definition of TaxParamVO is given as follows:

    public class TaxParamVO
    {
        public double Basic { get; set; }
        public double DA { get; set; }
        public double HRA { get; set; }
        public double Allowance { get; set; }
        public double Deductions { get; set; }
        public double Cess { get; set; }
        public double TaxLiability { get; set; }
        public bool Computed { get; set; }
    }

Ever since Smalltalk, Model-View-Controller (MVC) has been the most dominant paradigm for structuring applications. The application is split into a model layer (which mostly deals with data), a view layer (which acts as a display layer), and a controller (to mediate between the two). In the web development scenario, they are physically partitioned across machines. To transfer data between layers, the J2EE pattern catalog identified the DTO. The DTO object is defined as follows:

    public class TaxDTO
    {
        public int id { get; set; }
        public TaxParamVO taxparams { get; set; }
    }

If the layering exists within the same process, we can transfer these objects as-is. If layers are partitioned across processes or systems, we can use XML or JSON serialization to transfer objects between the layers.
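As a small illustration of the latter case (not part of the original example), the DTO could be serialized to JSON with the widely used Json.NET library before crossing a process boundary:

    using Newtonsoft.Json; // Json.NET, a common serialization choice

    var td = new TaxDTO { id = 42, taxparams = new TaxParamVO { Basic = 50000 } };

    // Serialize the DTO before sending it across the wire...
    string payload = JsonConvert.SerializeObject(td);

    // ...and rebuild it on the receiving side.
    TaxDTO received = JsonConvert.DeserializeObject<TaxDTO>(payload);

Because the DTO carries only data (no behavior), it serializes cleanly and keeps the layers decoupled.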
A computation engine

We need to separate UI processing, input validation, and computation to create a solution which can be extended to handle additional requirements. The computation engine will execute different logic depending upon the command received. The GoF command pattern is leveraged for executing logic based on the command received. The command pattern has four constituents:

Command object
Parameters
Command dispatcher
Client

The command object's interface has an Execute method. The parameters to the command object are passed through a bag. The client invokes the command object by passing the parameters through a bag, to be consumed by the command dispatcher. The parameters are passed to the command object through the following data structure:

    public class COMPUTATION_CONTEXT
    {
        private Dictionary<String, Object> symbols = new Dictionary<String, Object>();

        public void Put(string k, Object value)
        {
            symbols.Add(k, value);
        }

        public Object Get(string k)
        {
            return symbols[k];
        }
    }

The ComputationCommand interface, which all the command objects implement, has only one Execute method, shown next. The Execute method takes a bag as a parameter; the COMPUTATION_CONTEXT data structure acts as the bag here.

    interface ComputationCommand
    {
        bool Execute(COMPUTATION_CONTEXT ctx);
    }

Since we have already implemented a command interface and a bag to transfer the parameters, it is time to implement a command object. For the sake of simplicity, we will implement two commands, where we hardcode the tax liability:

    public class SeniorCitizenCommand : ComputationCommand
    {
        public bool Execute(COMPUTATION_CONTEXT ctx)
        {
            TaxDTO td = (TaxDTO)ctx.Get("tax_cargo");
            //---- Instead of computation, we are assigning
            //---- a constant tax for each archetype
            td.taxparams.TaxLiability = 1000;
            td.taxparams.Computed = true;
            return true;
        }
    }

    public class OrdinaryCitizenCommand : ComputationCommand
    {
        public bool Execute(COMPUTATION_CONTEXT ctx)
        {
            TaxDTO td = (TaxDTO)ctx.Get("tax_cargo");
            //---- Instead of computation, we are assigning
            //---- a constant tax for each archetype
            td.taxparams.TaxLiability = 1500;
            td.taxparams.Computed = true;
            return true;
        }
    }

The commands will be invoked by a CommandDispatcher object, which takes an archetype string and a COMPUTATION_CONTEXT object. The CommandDispatcher acts as an API layer for the application:

    class CommandDispatcher
    {
        public static bool Dispatch(string archetype, COMPUTATION_CONTEXT ctx)
        {
            if (archetype == "SeniorCitizen")
            {
                SeniorCitizenCommand cmd = new SeniorCitizenCommand();
                return cmd.Execute(ctx);
            }
            else if (archetype == "OrdinaryCitizen")
            {
                OrdinaryCitizenCommand cmd = new OrdinaryCitizenCommand();
                return cmd.Execute(ctx);
            }
            else
            {
                return false;
            }
        }
    }

Application-to-engine communication

The data from the application UI, be it web or desktop, has to flow to the computation engine.
The following ViewHandler routine shows how data retrieved from the application UI is passed to the engine, via the command dispatcher, by a client:

    public static void ViewHandler(TaxCalcForm tf)
    {
        TaxableEntity te = GetEntityFromUI(tf);
        if (te == null)
        {
            ShowError();
            return;
        }
        string archetype = ComputeArchetype(te);
        COMPUTATION_CONTEXT ctx = new COMPUTATION_CONTEXT();
        TaxDTO td = new TaxDTO { id = te.Id, taxparams = te.taxparams };
        ctx.Put("tax_cargo", td);
        bool rs = CommandDispatcher.Dispatch(archetype, ctx);
        if (rs)
        {
            TaxDTO temp = (TaxDTO)ctx.Get("tax_cargo");
            tf.Liabilitytxt.Text = Convert.ToString(temp.taxparams.TaxLiability);
            tf.Refresh();
        }
    }

At this point, imagine that a change in requirements has been received from the stakeholders: we now need to support tax computation for new categories. Initially, we had different computations for senior citizens and ordinary citizens; now we need to add new archetypes. At the same time, to make the software extensible (loosely coupled) and maintainable, it would be ideal to provide the capability to support new archetypes in a configurable manner, as opposed to recompiling the application for every new archetype owing to concrete references. The CommandDispatcher object does not scale well to handle additional archetypes: we need to change the assembly whenever a new archetype is included, as the tax computation logic varies for each archetype. We need to create a pluggable architecture to add or remove archetypes at will.

A plugin system to make the system extensible

Writing system logic without impacting the application warrants a mechanism for loading a class on the fly. Luckily, the .NET Reflection API provides a mechanism to load a class during runtime and invoke methods within it. A developer worth his salt should learn the Reflection API to write systems which change dynamically; in fact, most technologies like ASP.NET, Entity Framework, .NET Remoting, and WCF work because of the availability of the Reflection API in the .NET stack. Henceforth, we will be using an XML configuration file to specify our tax computation logic. A sample XML file is given next:

    <?xml version="1.0"?>
    <plugins>
      <plugin archetype="OrdinaryCitizen" command="TaxEngine.OrdinaryCitizenCommand"/>
      <plugin archetype="SeniorCitizen" command="TaxEngine.SeniorCitizenCommand"/>
    </plugins>

The contents of the XML file can be read very easily using LINQ to XML. We will generate a Dictionary object with the following code snippet:

    private Dictionary<string, string> LoadData(string xmlfile)
    {
        return XDocument.Load(xmlfile)
            .Descendants("plugins")
            .Descendants("plugin")
            .ToDictionary(p => p.Attribute("archetype").Value,
                          p => p.Attribute("command").Value);
    }
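The excerpt stops short of showing how these dictionary entries become live command objects. As a hedged sketch of how that last step might look with the Reflection API (the method below is an assumption for illustration, not code from the book), the fully qualified class name from the XML can be resolved and instantiated at runtime:

    using System;
    using System.Collections.Generic;
    using System.Reflection;

    // Illustrative only: resolve the command class named in the XML and run it.
    public static bool DispatchViaReflection(string archetype,
                                             Dictionary<string, string> plugins,
                                             COMPUTATION_CONTEXT ctx)
    {
        string className;
        if (!plugins.TryGetValue(archetype, out className))
        {
            return false; // unknown archetype
        }

        // Type.GetType searches the calling assembly; for commands living in a
        // separate plugin assembly, Assembly.LoadFrom(...).GetType(...) would
        // be used instead.
        Type type = Type.GetType(className);
        if (type == null)
        {
            return false;
        }
        var cmd = (ComputationCommand)Activator.CreateInstance(type);
        return cmd.Execute(ctx);
    }

Either way, adding a new archetype now means adding a class and an XML entry; the dispatcher itself never needs recompilation.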
Summary

In this article, we covered quite a lot of ground in understanding why pattern-oriented software development is a good way to develop modern software. We started the article by citing some key principles, and progressed further to demonstrate their applicability by iteratively skinning an application which is extensible and resilient to change.
Decoding Why "Good PHP Developer"Isn't an Oxymoron

Packt
14 Sep 2016
20 min read
In this article by Junade Ali, author of the book Mastering PHP Design Patterns, we will be revisiting object-oriented programming. Back in 2010, MailChimp published a post on their blog entitled Ewww, You Use PHP? In this blog post they described the horror they saw when they explained their choice of PHP to developers who consider the phrase good PHP programmer an oxymoron. In their rebuttal they argued that their PHP wasn't your grandfather's PHP, and that they use a sophisticated framework. I tend to judge the quality of PHP on the basis of not only how it functions, but how secure it is and how it is architected. This book focuses on ideas of how you should architect your code. Good software design allows developers to extend code beyond its original purpose in a bug-free and elegant fashion. As Martin Fowler put it:

Any fool can write code that a computer can understand. Good programmers write code that humans can understand.

This isn't just limited to code style, but to how developers architect and structure their code. I've encountered many developers with their noses constantly stuck in documentation, copying and pasting bits of code and hacking snippets together until they work. Moreover, I far too often see the software development process rapidly deteriorate as developers couple their classes ever more tightly, with functions of ever-increasing length. Software engineers mustn't just code software; they must know how to design it. Indeed, a good software engineer, when interviewing other software engineers, will often ask questions surrounding the design of the code itself. It is trivial to get a piece of code that will execute, and it is equally pointless to quiz a developer as to whether strtolower or str2lower is the correct name of a function (for the record, it's strtolower). Knowing the difference between a class and an object doesn't make you a competent developer; a better interview question would, for example, be how one could apply subtype polymorphism to a real software development challenge. Failure to assess software design skills dumbs down an interview and results in there being no way to differentiate between those who are good at it and those who aren't. These advanced topics will be discussed throughout this book; by learning these tactics you will better understand what the right questions to ask are when discussing software architecture. Moxie Marlinspike once tweeted:

As a software developer, I envy writers, musicians, and filmmakers. Unlike software, when they create something it is really done, forever.

When developing software we mustn't forget that we are authors, not just of instructions for a machine, but of something that we later expect others to extend upon. Therefore, our code mustn't just be targeted at machines; it must be targeted at humans too. Code isn't just poetry for a machine; it should be poetry for humans as well. This is, of course, easier said than done. In PHP this may be found especially difficult, given the freedom PHP offers developers in how they may architect and structure their code. By the very nature of freedom, it may be both used and abused, and so it is with the freedom offered by PHP.
Therefore, it is increasingly important that developers understand proper software design practices to ensure their code maintains long-term maintainability. Indeed, another key skill lies in refactoring code: improving the design of existing code to make it easier to extend in the long term.

Technical debt, the eventual consequence of poor system design, is something that I've found comes with the career of a PHP developer. This has been true for me whether dealing with systems that provide advanced functionality or with simple websites. It usually arises because a developer elects to implement a bad design for a variety of reasons, whether when adding functionality to an existing codebase or when taking poor design decisions during the initial construction of the software. Refactoring can help us address these issues.

SensioLabs (the creators of the Symfony framework) have a tool called Insight that allows developers to calculate the technical debt in their own code. In 2011 they did an evaluation of technical debt in various projects using this tool; rather unsurprisingly, they found that WordPress 4.1 topped the chart of all platforms they evaluated, claiming it would take 20.1 years to resolve the technical debt the project contains. Those familiar with the WordPress core may not be surprised by this, but the issue is of course not only associated with WordPress. In my career of working with PHP, from security-critical cryptography systems to mission-critical embedded systems, dealing with technical debt comes with the job. Dealing with technical debt is not something to be ashamed of for a PHP developer; indeed, some may consider it courageous. It is no easy task, especially in the face of an ever more demanding user base, client, or project manager constantly demanding more functionality without being familiar with the technical debt the project carries.

I recently emailed the PHP internals group as to whether they should consider deprecating the error suppression operator @. When any PHP function call is prepended with an @ symbol, the function will suppress any error it raises. This can be brutal, especially where that function raises a fatal error that stops the execution of the script, making debugging a tough task: if the error is suppressed, the script may fail to execute without providing developers a reason why. No one objected to the fact that there are better ways of handling errors (try/catch, proper validation) than abusing the error suppression operator, and that deprecation should be an eventual aim of PHP. However, some functions return needless warnings even though they already have a success/failure return value. This means that, due to technical debt in the PHP core itself, this operator cannot be deprecated until a lot of other prerequisite work is done. In the meantime, it is down to developers to be educated as to the proper methodologies for handling errors, and not to constantly resort to using the @ symbol.
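To make that point concrete, here is a small illustrative sketch (not from the book) contrasting suppression with explicit handling when opening a file; the data.csv filename is invented for this example:

    <?php
    // Suppression: if the file is missing, $handle is simply false,
    // and we have thrown away the warning that would tell us why.
    $handle = @fopen('data.csv', 'r');

    // Explicit handling: check preconditions and fail loudly and clearly.
    if (!file_exists('data.csv') || !is_readable('data.csv')) {
        throw new RuntimeException('data.csv is missing or unreadable');
    }
    $handle = fopen('data.csv', 'r');

The second approach costs two extra lines and, in exchange, every failure carries an explanation that a debugger or log can surface.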
Fundamentally, technical debt slows down the development of a project and often leads to broken code being deployed, as developers try to work on a fragile project. When starting a new project, never be afraid to discuss architecture: architecture meetings are vital to developer collaboration. As one scrum master I've worked with said in the face of the criticism that "meetings are a great alternative to work": "meetings are work… how much work would you be doing without meetings?"

Coding style – the PSR standards

When it comes to coding style, I would like to introduce you to the PSR standards created by the PHP Framework Interop Group. The two standards that apply to coding style are PSR-1 (Basic Coding Standard) and PSR-2 (Coding Style Guide). In addition, there are PSR standards covering other areas; for example, as of today, PSR-4 is the most up-to-date autoloading standard published by the group. You can find out more about the standards at http://www.php-fig.org/.

Using coding style to enforce consistency throughout a codebase is something I strongly believe in; it does make a difference to the readability of code throughout a project. It is especially important when you are starting a project (chances are you may be reading this book to find out how to do that right), as your coding style determines the style that the developers who follow you on the project will adopt. Using a global standard such as PSR-1 or PSR-2 means that developers can easily switch between projects without having to reconfigure their code style in their IDE. Good code style can also make formatting errors easier to spot. Needless to say, coding styles will develop as time progresses; to date, I elect to work with the PSR standards. I am a strong believer in the phrase:

Always code as if the guy who ends up maintaining your code will be a violent psychopath who knows where you live.

It isn't known who wrote this phrase originally, but it's widely thought that it could have been John Woods or potentially Martin Golding. I would strongly recommend familiarizing yourself with these standards before proceeding in this book.

Revising object-oriented programming

Object-oriented programming is more than just classes and objects; it's a whole programming paradigm based around objects (data structures) that contain data fields and methods. It is essential to understand this: using classes to organize a bunch of unrelated methods together is not object orientation. Assuming you're aware of classes (and how to instantiate them), allow me to remind you of a few different bits and pieces.

Polymorphism

Polymorphism is a fairly long word for a fairly simple concept. Essentially, polymorphism means the same interface is used with different underlying code. So multiple classes could have a draw function, each accepting the same arguments, but implemented differently at the underlying level. In this article, I would also like to talk about subtype polymorphism in particular (also known as subtyping or inclusion polymorphism). Let's say we have animals as our supertype; our subtypes may well be cats, dogs, and sheep. In PHP, interfaces allow you to define a set of functionality that a class implementing them must contain; as of PHP 7, you can also use scalar type hints to define the return types we expect.
So, for example, suppose we defined the following interface:

    interface Animal
    {
        public function eat(string $food) : bool;
        public function talk(bool $shout) : string;
    }

We could then implement this interface in our own class, as follows:

    class Cat implements Animal
    {
    }

If we were to run this code without defining the interface's methods, we would get an error message as follows:

    Class Cat contains 2 abstract methods and must therefore be declared abstract or implement the remaining methods (Animal::eat, Animal::talk)

Essentially, we are required to implement the methods we defined in our interface, so now let's go ahead and create a class that implements them:

    class Cat implements Animal
    {
        public function eat(string $food): bool
        {
            if ($food === "tuna") {
                return true;
            } else {
                return false;
            }
        }

        public function talk(bool $shout): string
        {
            if ($shout === true) {
                return "MEOW!";
            } else {
                return "Meow.";
            }
        }
    }

Now that we've implemented these methods, we can just instantiate the class we are after and use the functions contained in it:

    $felix = new Cat();
    echo $felix->talk(false);

So where does polymorphism come into this? Suppose we had another class for a dog:

    class Dog implements Animal
    {
        public function eat(string $food): bool
        {
            if (($food === "dog food") || ($food === "meat")) {
                return true;
            } else {
                return false;
            }
        }

        public function talk(bool $shout): string
        {
            if ($shout === true) {
                return "WOOF!";
            } else {
                return "Woof woof.";
            }
        }
    }

Now let's suppose we have multiple different types of animals in a pets array:

    $pets = array(
        'felix' => new Cat(),
        'oscar' => new Dog(),
        'snowflake' => new Cat()
    );

We can now go ahead and loop through all these pets individually in order to run the talk function. We don't care about the type of pet, because every class we get has a talk method by virtue of implementing the Animal interface. So, to have all our animals run the talk method, we can just use the following code:

    foreach ($pets as $pet) {
        echo $pet->talk(false);
    }

No need for unnecessary switch/case blocks wrapped around our classes; we just use software design to make things easier for us in the long term. Abstract classes work in a similar way, except that abstract classes can contain functionality where interfaces cannot. It is important to note that any class that defines one or more abstract methods must itself be declared abstract. You cannot have a normal class defining abstract methods, but you can have normal methods in abstract classes. Let's start off by refactoring our interface into an abstract class:

    abstract class Animal
    {
        abstract public function eat(string $food) : bool;
        abstract public function talk(bool $shout) : string;

        public function walk(int $speed): bool
        {
            if ($speed > 0) {
                return true;
            } else {
                return false;
            }
        }
    }

You might have noticed that I have also added a walk method as an ordinary, non-abstract method; this is a standard method that can be used or extended by any class that inherits from the parent abstract class. It already has an implementation. Note that it is impossible to instantiate an abstract class (much as it's not possible to instantiate an interface); instead we must extend it. So, in our Cat class, let's substitute:

    class Cat implements Animal

with the following code:

    class Cat extends Animal

That's all we need to refactor in order to get the class to extend the Animal abstract class.
We must implement the abstract functions in the classes, as we outlined for the interfaces, plus we can use the ordinary functions without needing to implement them:

    $whiskers = new Cat();
    $whiskers->walk(1);

As of PHP 5.4, it has also become possible to instantiate a class and access a property of it in one statement. PHP.net advertised it as:

Class member access on instantiation has been added, e.g. (new Foo)->bar().

You can also do it with individual properties, for example, (new Cat)->legs. In our example, we can use it as follows:

    (new \IcyApril\ChapterOne\Cat())->walk(1);

Just to recap a few other points about how PHP implements OOP: the final keyword before a class declaration or indeed a function declaration means that you cannot extend such classes or override such functions after they've been defined. So, if we were to try extending a class we have marked as final:

    final class Animal
    {
        public function walk()
        {
            return "walking...";
        }
    }

    class Cat extends Animal
    {
    }

This results in the following output:

    Fatal error: Class Cat may not inherit from final class (Animal)

Similarly, if we were to do the same at a function level:

    class Animal
    {
        final public function walk()
        {
            return "walking...";
        }
    }

    class Cat extends Animal
    {
        public function walk()
        {
            return "walking with tail wagging...";
        }
    }

This results in the following output:

    Fatal error: Cannot override final method Animal::walk()

Traits (multiple inheritance)

Traits were introduced into PHP as a mechanism for introducing horizontal reuse. PHP conventionally acts as a single-inheritance language, namely because you can't inherit more than one class into a class. Traditional multiple inheritance is a controversial process that is often looked down upon by software engineers. Let me give you an example of using traits first-hand; let's define an Animal class which we want to extend into another class:

    class Animal
    {
        public function walk()
        {
            return "walking...";
        }
    }

    class Cat extends Animal
    {
        public function walk()
        {
            return "walking with tail wagging...";
        }
    }

So now let's suppose we have functions to name our pet, but we don't want them to apply to all the classes that extend the Animal class; we want them to apply to certain classes, irrespective of whether they inherit the properties of the Animal class or not. So we've defined our functions like so:

    function setFirstName(string $name): bool
    {
        $this->firstName = $name;
        return true;
    }

    function setLastName(string $name): bool
    {
        $this->lastName = $name;
        return true;
    }

The problem now is that there is no place we can put them without using horizontal reuse, apart from copying and pasting different bits of code or resorting to conditional inheritance. This is where traits come to the rescue; let's start off by wrapping these methods in a trait called Name:

    trait Name
    {
        function setFirstName(string $name): bool
        {
            $this->firstName = $name;
            return true;
        }

        function setLastName(string $name): bool
        {
            $this->lastName = $name;
            return true;
        }
    }

So now that we've defined our trait, we can just tell PHP to use it in our Cat class:

    class Cat extends Animal
    {
        use Name;

        public function walk()
        {
            return "walking with tail wagging...";
        }
    }

Notice the use Name statement? That's where the magic happens.
Now you can call the functions in that trait without any problems:

$whiskers = new Cat();
$whiskers->setFirstName('Paul');
echo $whiskers->firstName;

All put together, the new code block looks as follows:

trait Name {
    function setFirstName(string $name): bool {
        $this->firstName = $name;
        return true;
    }

    function setLastName(string $name): bool {
        $this->lastName = $name;
        return true;
    }
}

class Animal {
    public function walk() {
        return "walking...";
    }
}

class Cat extends Animal {
    use Name;

    public function walk() {
        return "walking with tail wagging...";
    }
}

$whiskers = new Cat();
$whiskers->setFirstName('Paul');
echo $whiskers->firstName;

Scalar type hints

Let me take this opportunity to introduce you to a PHP 7 concept known as scalar type hinting; it allows you to define the types of arguments and return values (yes, I know this isn't strictly within the scope of OOP; deal with it). Let's define a function, as follows:

function addNumbers (int $a, int $b): int {
    return $a + $b;
}

Let's take a look at this function. Firstly, you will notice that before each of the arguments we define the type of variable we want to receive; in this case, int or integer. Next, you'll notice a bit of code after the function definition, : int, which defines our return type, so our function can only return an integer.

If you don't provide the right type of variable as a function argument or don't return the right type of variable from the function, you will get a TypeError exception. PHP will also throw a TypeError exception in the event that strict mode is enabled and you provide the incorrect number of arguments.

It is also possible in PHP to define strict_types; let me explain why you might want to do this. Without strict_types, PHP will attempt to automatically convert a variable to the defined type in very limited circumstances. For example, if you pass a string containing solely numbers, it will be converted to an integer; a string that's non-numeric, however, will result in a TypeError exception. Once you enable strict_types, this all changes: you can no longer have this automatic casting behavior. Taking our previous example, without strict_types, you could do the following:

echo addNumbers(5, "5.0");

Trying it again after enabling strict_types, you will find that PHP throws a TypeError exception.

This configuration applies on an individual file basis only; putting it before you include other files will not result in this configuration being inherited by those files. There are multiple benefits to why PHP chose to go down this route; they are listed very clearly in version 0.5.3 of the RFC that implemented scalar type hints, called PHP RFC: Scalar Type Declarations. You can read about it by going to http://www.wiki.php.net (the wiki, not the main PHP website) and searching for scalar_type_hints_v5.

In order to enable it, make sure you put this as the very first statement in your PHP script:

declare(strict_types=1);

This will not work unless you define strict_types as the very first statement in a PHP script; no other usages of this directive are permitted. Indeed, if you try to define it later on, PHP will throw a fatal error.

Of course, in the interest of the rage-induced PHP core fanatic reading this book in its coffee-stained form, I should mention that there are other valid types that can be used in type hinting. For example, PHP 5.1.0 introduced this with arrays, and PHP 5.0.0 introduced the ability for a developer to do this with their own classes.
Let me give you a quick example of how this would work in practice. Suppose we had an Address class:

class Address {
    public $firstLine;
    public $postcode;
    public $country;

    public function __construct(string $firstLine, string $postcode, string $country) {
        $this->firstLine = $firstLine;
        $this->postcode = $postcode;
        $this->country = $country;
    }
}

We can then type hint the Address class that we inject into a Customer class:

class Customer {
    public $name;
    public $address;

    public function __construct($name, Address $address) {
        $this->name = $name;
        $this->address = $address;
    }
}

And just to show how it all comes together:

$address = new Address('10 Downing Street', 'SW1A2AA', 'UK');
$customer = new Customer('Davey Cameron', $address);
var_dump($customer);

Limiting debug access to private/protected properties

If you define a class which contains private or protected variables, you will notice an odd behavior if you were to var_dump an object of that class: the var_dump reveals all variables, whether they are protected, private, or public. PHP treats var_dump as an internal debugging function, meaning all data becomes visible.

Fortunately, there is a workaround for this. PHP 5.6 introduced the __debugInfo magic method. Functions in classes preceded by a double underscore represent magic methods and have special functionality associated with them. Every time you try to var_dump an object that has the __debugInfo magic method set, the var_dump output will be overridden with the result of that function call instead.

Let me show you how this works in practice; let's start by defining a class:

class Bear {
    private $hasPaws = true;
}

Let's instantiate this class:

$richard = new Bear();

Now, if we were to try and access the private variable that is hasPaws, we would get a fatal error; so this call:

echo $richard->hasPaws;

would result in the following fatal error being thrown:

Fatal error: Cannot access private property Bear::$hasPaws

That is the expected output; we don't want a private property visible outside its object. That being said, if we wrap the object with a var_dump as follows:

var_dump($richard);

we would then get the following output:

object(Bear)#1 (1) {
    ["hasPaws":"Bear":private]=> bool(true)
}

As you can see, our private property is marked as private, but it is nevertheless visible. So how would we go about preventing this? Let's redefine our class as follows:

class Bear {
    private $hasPaws = true;

    public function __debugInfo () {
        return call_user_func('get_object_vars', $this);
    }
}

Now, after we instantiate our class and var_dump the resulting object, we get the following output:

object(Bear)#1 (0) {
}

The script all put together looks like this now; you will notice I've added an extra public property called growls, which I have set to true:

<?php
class Bear {
    private $hasPaws = true;
    public $growls = true;

    public function __debugInfo () {
        return call_user_func('get_object_vars', $this);
    }
}

$richard = new Bear();
var_dump($richard);

If we were to var_dump this script (with both a public and a private property to play with), we would get the following output:

object(Bear)#1 (1) {
    ["growls"]=> bool(true)
}

As you can see, only the public property is visible. So what is the moral of the story from this little experiment? Firstly, that var_dump exposes private and protected properties inside objects, and secondly, that this behavior can be overridden.

Summary

In this article, we revised some PHP principles, including OOP principles.
We also revised some PHP syntax basics.
Asynchronous Control Flow Patterns with ES2015 and beyond

Packt
07 Jun 2016
6 min read
In this article, by Luciano Mammino, the author of the book Node.js Design Patterns, Second Edition, we will explore async await, an innovative syntax that will be available in JavaScript as part of the release of ECMAScript 2017.

Async await using Babel

Callbacks, promises, and generators turn out to be the weapons at our disposal to deal with asynchronous code in JavaScript and in Node.js. As we have seen, generators are very interesting because they offer a way to actually suspend the execution of a function and resume it at a later stage. We can adopt this feature to write asynchronous code that allows developers to write functions that "appear" to block at each asynchronous operation, waiting for the results before continuing with the following statement. The problem is that generator functions are designed to deal mostly with iterators, and their usage with asynchronous code feels a bit cumbersome. It might be hard to understand, leading to code that is hard to read and maintain.

But there is hope for a cleaner syntax in the near future. In fact, there is an interesting proposal that will be introduced with the ECMAScript 2017 specification that defines the async function syntax. You can read more about the current status of the async await proposal at https://tc39.github.io/ecmascript-asyncawait/.

The async function specification aims to dramatically improve the language-level model for writing asynchronous code by introducing two new keywords into the language: async and await. To clarify how these keywords are meant to be used and why they are useful, let's see a very quick example:

const request = require('request');

function getPageHtml(url) {
    return new Promise(function(resolve, reject) {
        request(url, function(error, response, body) {
            resolve(body);
        });
    });
}

async function main() {
    const html = await getPageHtml('http://google.com');
    console.log(html);
}

main();
console.log('Loading...');

In this code, there are two functions: getPageHtml and main. The first one is a very simple function that fetches the HTML code of a remote web page given its URL. It's worth noticing that this function returns a promise.

The main function is the most interesting one, because it's where the new async and await keywords are used. The first thing to notice is that the function is prefixed with the async keyword. This means that the function executes asynchronous code and is allowed to use the await keyword within its body. The await keyword before the call to getPageHtml tells the JavaScript interpreter to "await" the resolution of the promise returned by getPageHtml before continuing to the next instruction. This way, the main function is internally suspended until the asynchronous code completes, without blocking the normal execution of the rest of the program. In fact, we will see the string Loading… in the console and, after a moment, the HTML code of the Google landing page.

Isn't this approach much more readable and easy to understand? Unfortunately, this proposal is not yet final, and even if it is approved, we will need to wait for the next version of the ECMAScript specification to come out and be integrated into Node.js to be able to use this new syntax natively. So what do we do today? Just wait? No, of course not! We can already leverage async await in our code thanks to transpilers such as Babel.
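One detail the preceding example does not show is error handling, and it is one of the main selling points of this syntax: a plain try...catch statement works across asynchronous boundaries. The following sketch is a variant of the example above, assuming we change getPageHtml to reject the promise when the request fails (the version above always resolves):

const request = require('request');

function getPageHtml(url) {
    return new Promise(function(resolve, reject) {
        request(url, function(error, response, body) {
            if (error) {
                reject(error); // unlike the earlier version, propagate failures
            } else {
                resolve(body);
            }
        });
    });
}

async function main() {
    try {
        const html = await getPageHtml('http://google.com');
        console.log(html);
    } catch (error) {
        // a rejected promise awaited inside the try block surfaces here,
        // just like a synchronous exception would
        console.error('Failed to fetch the page:', error);
    }
}

main();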
Installing and running Babel

Babel is a JavaScript compiler (or transpiler) that is able to convert JavaScript code into other JavaScript code using syntax transformers. Syntax transformers allow the use of new syntax such as ES2015, ES2016, JSX, and others, to produce backward-compatible equivalent code that can be executed in modern JavaScript runtimes, such as browsers or Node.js.

You can install Babel in your project using npm with the following command:

npm install --save-dev babel-cli

We also need to install the extensions to support async await parsing and transformation:

npm install --save-dev babel-plugin-syntax-async-functions babel-plugin-transform-async-to-generator

Now let's assume we want to run our previous example (called index.js). We need to launch the following command:

node_modules/.bin/babel-node --plugins "syntax-async-functions,transform-async-to-generator" index.js

This way, we are transforming the source code in index.js on the fly, applying the transformers to support async await. This new backward-compatible code is stored in memory and then executed on the fly by the Node.js runtime. Babel can also be configured to act as a build processor that stores the generated code into files, so that you can easily deploy and run the generated code. You can read more about how to install and configure Babel on the official website at https://babeljs.io.

Comparison

At this point, we should have a better understanding of the options we have to tame the asynchronous nature of JavaScript. Each of the solutions presented has its own pros and cons. Let's summarize them:

Plain JavaScript:
- Pros: does not require any additional libraries or technology; offers the best performance; provides the best level of compatibility with third-party libraries; allows the creation of ad hoc and more advanced algorithms.
- Cons: might require extra code and relatively complex algorithms.

Async (library):
- Pros: simplifies the most common control flow patterns; is still a callback-based solution; good performance.
- Cons: introduces an external dependency; might still not be enough for advanced flows.

Promises:
- Pros: greatly simplify the most common control flow patterns; robust error handling; part of the ES2015 specification; guarantee deferred invocation of onFulfilled and onRejected.
- Cons: require promisifying callback-based APIs; introduce a small performance hit.

Generators:
- Pros: make a non-blocking API look like a blocking one; simplify error handling; part of the ES2015 specification.
- Cons: require a complementary control flow library; still require callbacks or promises to implement non-sequential flows; require thunkifying or promisifying non-generator-based APIs.

Async await:
- Pros: make a non-blocking API look like a blocking one; clean and intuitive syntax.
- Cons: not yet available in JavaScript and Node.js natively; requires Babel or other transpilers, and some configuration, to be used today.

It is worth mentioning that we chose to present only the most popular solutions to handle asynchronous control flow, or the ones receiving a lot of momentum, but it's good to know that there are a few more options you might want to look at, for example, Fibers (https://npmjs.org/package/fibers) and Streamline (https://npmjs.org/package/streamline).

Summary

In this article, we analyzed how async await can be used today thanks to Babel, and how to install and run Babel.
Understanding Patterns and Architectures in TypeScript

Packt
01 Jun 2016
19 min read
In this article by Vilic Vane, author of the book TypeScript Design Patterns, we'll study architecture and patterns that are closely related to the language or its common applications. Many topics in this article are related to asynchronous programming. We'll start with a web architecture for Node.js that's based on Promise. This is a larger topic that has interesting ideas involved, including abstractions of response and permission, as well as error handling tips. Then, we'll talk about how to organize modules with ES module syntax. Due to the limited length of this article, some of the related code is aggressively simplified, and nothing more than the idea itself can be applied practically.

Promise-based web architecture

The most exciting thing about Promise may be the benefits brought to error handling. In a Promise-based architecture, throwing an error can be safe and pleasant. You don't have to explicitly handle errors when chaining asynchronous operations, and this makes it harder for mistakes to occur.

With the growing usage of ES2015-compatible runtimes, Promise is already there out of the box. We actually have plenty of polyfills for Promise (including my ThenFail, written in TypeScript), as the people who write JavaScript are, roughly speaking, the same group of people who like to reinvent wheels.

Promises work great with other Promises: a Promises/A+ compatible implementation should work with other Promises/A+ compatible implementations. Promises do their best in a Promise-based architecture.

If you are new to Promise, you may complain about trying Promise with a callback-based project. You may intend to use helpers provided by Promise libraries, such as Promise.all, but it turns out that you have better alternatives, such as the async library. So the reason that makes you decide to switch should not be these helpers (as there are a lot of them for callbacks); it should be because there's an easier way to handle errors, or because you want to take advantage of the ES async and await features, which are based on Promise.

Promisifying existing modules or libraries

Though Promises do their best within a Promise-based architecture, it is still possible to begin using Promise in a smaller scope by promisifying existing modules or libraries. Taking Node.js-style callbacks as an example, this is how we use them:

import * as FS from 'fs';

FS.readFile('some-file.txt', 'utf-8', (error, text) => {
    if (error) {
        console.error(error);
        return;
    }

    console.log('Content:', text);
});

You may expect a promisified version of readFile to look like the following:

FS
    .readFile('some-file.txt', 'utf-8')
    .then(text => {
        console.log('Content:', text);
    })
    .catch(reason => {
        console.error(reason);
    });

Implementing the promisified version of readFile can be as easy as the following:

function readFile(path: string, options: any): Promise<string> {
    return new Promise((resolve, reject) => {
        FS.readFile(path, options, (error, result) => {
            if (error) {
                reject(error);
            } else {
                resolve(result);
            }
        });
    });
}

I am using any here for the options parameter to reduce the size of the demo code, but I would suggest that you do not use any whenever possible in practice.

There are libraries that are able to promisify methods automatically. Unfortunately, you may need to write declaration files yourself for the promisified methods if no declaration file of the promisified version is available.
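As a rough illustration of what such libraries do under the hood, here is a minimal, hand-rolled promisify helper. The helper name, the loose typing, and the usage lines are my own illustrative assumptions; real libraries handle edge cases (multiple callback results, this binding, overloads) that this sketch ignores:

// A minimal sketch of automatic promisification, assuming the common
// Node.js callback signature (error, result) as the last argument.
function promisify<T>(
    fn: (...args: any[]) => void
): (...args: any[]) => Promise<T> {
    return (...args: any[]) => {
        return new Promise<T>((resolve, reject) => {
            fn(...args, (error: any, result: T) => {
                if (error) {
                    reject(error);
                } else {
                    resolve(result);
                }
            });
        });
    };
}

// Hypothetical usage, reusing the FS module imported earlier:
// const readFile = promisify<string>(FS.readFile);
// readFile('some-file.txt', 'utf-8').then(text => console.log(text));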
Views and controllers in Express

Many of us may have already been working with frameworks such as Express. This is how we render a view or send back JSON data in Express:

import * as Path from 'path';
import * as express from 'express';

let app = express();

app.set('engine', 'hbs');
app.set('views', Path.join(__dirname, '../views'));

app.get('/page', (req, res) => {
    res.render('page', {
        title: 'Hello, Express!',
        content: '...'
    });
});

app.get('/data', (req, res) => {
    res.json({
        version: '0.0.0',
        items: []
    });
});

app.listen(1337);

We will usually separate controllers from routing, as follows:

import { Request, Response } from 'express';

export function page(req: Request, res: Response): void {
    res.render('page', {
        title: 'Hello, Express!',
        content: '...'
    });
}

Thus, we may have a better idea of the existing routes, and we can manage controllers more easily. Furthermore, automated routing can be introduced so that we don't always need to update the routing manually:

import * as glob from 'glob';

let controllersDir = Path.join(__dirname, 'controllers');

let controllerPaths = glob.sync('**/*.js', {
    cwd: controllersDir
});

for (let path of controllerPaths) {
    let controller = require(Path.join(controllersDir, path));
    let urlPath = path.replace(/\\/g, '/').replace(/\.js$/, '');

    for (let actionName of Object.keys(controller)) {
        app.get(
            `/${urlPath}/${actionName}`,
            controller[actionName]
        );
    }
}

The preceding implementation is certainly too simple to cover daily usage. However, it displays a rough idea of how automated routing could work: via conventions based on file structures. Now, if we are working with asynchronous code written with Promises, an action in the controller could look like the following:

export function foo(req: Request, res: Response): void {
    Promise
        .all([
            Post.getContent(),
            Post.getComments()
        ])
        .then(([post, comments]) => {
            res.render('foo', {
                post,
                comments
            });
        });
}

We use destructuring of an array within a parameter. Promise.all returns a Promise of an array with elements corresponding to the values of the resolvables passed in. (A resolvable means a normal value or a Promise-like object that may resolve to a normal value.)

However, this is not enough; we still need to handle errors properly, or in some cases the preceding code may fail in silence (which is terrible). In Express, when an error occurs, you should call next (the third argument passed into the callback) with the error object, as follows:

import { Request, Response, NextFunction } from 'express';

export function foo(
    req: Request,
    res: Response,
    next: NextFunction
): void {
    Promise
        // ...
        .catch(reason => next(reason));
}

Now, we are fine with the correctness of this approach, but this is simply not how Promises work. Explicit error handling with callbacks can be eliminated in the scope of controllers, and the easiest way to do this is to return the Promise chain and hand it over to the code that was previously performing the routing logic.
So, the controller could be written like the following:

export function foo(req: Request, res: Response) {
    return Promise
        .all([
            Post.getContent(),
            Post.getComments()
        ])
        .then(([post, comments]) => {
            res.render('foo', {
                post,
                comments
            });
        });
}

Or, can we make this even better?

Abstraction of response

We've already been returning a Promise to tell whether an error occurred. So, for a server error, the Promise actually indicates the result, or in other words, the response of the request. However, why are we still calling res.render() to render the view? The returned Promise object could be an abstraction of the response itself. Think about the following controller again:

export class Response { }

export class PageResponse extends Response {
    constructor(view: string, data: any) { }
}

export function foo(req: Request) {
    return Promise
        .all([
            Post.getContent(),
            Post.getComments()
        ])
        .then(([post, comments]) => {
            return new PageResponse('foo', {
                post,
                comments
            });
        });
}

The response object that is returned could vary for different response outputs. For example, it could be a PageResponse as in the preceding example, a JSONResponse, a StreamResponse, or even a simple Redirection. In most cases, PageResponse or JSONResponse is applied, and the view of a PageResponse can usually be implied from the controller path and action name. It is therefore useful to have these two responses automatically generated from a plain data object, with the proper view to render, as follows:

export function foo(req: Request) {
    return Promise
        .all([
            Post.getContent(),
            Post.getComments()
        ])
        .then(([post, comments]) => {
            return {
                post,
                comments
            };
        });
}

This is how a Promise-based controller should respond. With this idea in mind, let's update the routing code with an abstraction of responses. Previously, we were passing controller actions directly as Express request handlers. Now, we need to do some wrapping by resolving the return value and applying operations based on the resolved result, as follows:

- If it fulfills and it's an instance of Response, apply it to the res object passed in by Express.
- If it fulfills and it's a plain object, construct a PageResponse, or a JSONResponse if no view is found, and apply it to the res object.
- If it rejects, call the next function with the rejection reason.

As seen previously, our code was like the following:

app.get(`/${urlPath}/${actionName}`, controller[actionName]);

Now, it gets a few more lines, as follows:

let action = controller[actionName];

app.get(`/${urlPath}/${actionName}`, (req, res, next) => {
    Promise
        .resolve(action(req))
        .then(result => {
            if (result instanceof Response) {
                result.applyTo(res);
            } else if (existsView(actionName)) {
                new PageResponse(actionName, result).applyTo(res);
            } else {
                new JSONResponse(result).applyTo(res);
            }
        })
        .catch(reason => next(reason));
});

However, so far we can only handle GET requests, as we hardcoded app.get() in our router implementation. The poor view-matching logic can hardly be used in practice either.
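As a side note, the routing wrapper above assumes that each Response subclass knows how to apply itself to the Express res object through an applyTo method. The article does not show that implementation; the following is merely one possible minimal sketch of the hierarchy, with the exact shape of applyTo being my own guess:

import { Response as ExpressResponse } from 'express';

// Base class used by the instanceof check in the routing wrapper.
export abstract class Response {
    abstract applyTo(res: ExpressResponse): void;
}

export class PageResponse extends Response {
    constructor(
        public view: string,
        public data: any
    ) {
        super();
    }

    applyTo(res: ExpressResponse): void {
        // Render the view with the attached data.
        res.render(this.view, this.data);
    }
}

export class JSONResponse extends Response {
    constructor(
        public data: any
    ) {
        super();
    }

    applyTo(res: ExpressResponse): void {
        // Serialize the attached data as the JSON body.
        res.json(this.data);
    }
}

Note that express also exports a type named Response, so the import is aliased to ExpressResponse to avoid colliding with the article's own Response class.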
We need to make these actions configurable, and ES decorators could perform a good job here:

export default class Controller {
    @get({
        View: 'custom-view-path'
    })
    foo(req: Request) {
        return {
            title: 'Action foo',
            content: 'Content of action foo'
        };
    }
}

I'll leave the implementation to you, and feel free to make it awesome.

Abstraction of permission

Permission plays an important role in a project, especially in systems that have different user groups, for example, a forum. The abstraction of permission should be extendable to satisfy changing requirements, and it should be easy to use as well. Here, we are going to talk about the abstraction of permission at the level of controller actions.

Consider the eligibility to perform one or more specific actions a privilege. The permission of a user may consist of several privileges, and usually most of the users at the same level would have the same set of privileges. So, we may have a larger concept, namely groups. The abstraction could either work based on both groups and privileges, or based on privileges only (groups then being just aliases for sets of privileges):

- An abstraction that validates based on both privileges and groups is easier to build. You do not need to create a large list of which actions can be performed for a certain group of users, as granular privileges are only required when necessary.
- An abstraction that validates based on privileges only has better control and more flexibility in describing the permission. For example, you can easily remove a small set of privileges from the permission of a user.

However, both approaches have similar upper-level abstractions; they differ mostly in implementation. The general structure of the permission abstraction we've talked about involves the following participants (a minimal code sketch of these participants follows a little further below):

- Privilege: This describes a detailed privilege corresponding to specific actions.
- Group: This defines a set of privileges.
- Permission: This describes what a user is capable of doing, consisting of the groups the user belongs to and the privileges the user has.
- Permission descriptor: This describes how the permission of a user works, and consists of possible groups and privileges.

Expected errors

A great concern that was wiped away after we started using Promises is that, most of the time, we no longer need to worry about whether throwing an error in a callback would crash the application. The error will flow through the Promise chain and, if not caught, will be handled by our router. Errors can be roughly divided into expected errors and unexpected errors. Expected errors are usually caused by incorrect input or foreseeable exceptions, while unexpected errors are usually caused by bugs or by other libraries that the project relies on.

For expected errors, we usually want to give users a friendly response with readable error messages and codes, so that they can help themselves by searching for the error, or report it to us with useful context. For unexpected errors, we would also want a reasonable response (usually a message described as an unknown error), a detailed server-side log (including the real error name, message, stack information, and so on), and even alerts to let the team know as soon as possible.
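Before moving on to defining expected errors, here is the promised minimal sketch of the permission participants described above, using the privilege-based variant. All names and the validation logic are illustrative assumptions rather than the article's actual implementation:

// A minimal sketch of the permission participants, assuming
// string-based privileges for simplicity.
type Privilege = string;

interface Group {
    name: string;
    privileges: Privilege[];
}

class Permission {
    constructor(
        public groups: Group[],
        public privileges: Privilege[]
    ) { }

    // A privilege is granted if it is listed directly
    // or granted through one of the user's groups.
    has(privilege: Privilege): boolean {
        return this.privileges.indexOf(privilege) >= 0 ||
            this.groups.some(group =>
                group.privileges.indexOf(privilege) >= 0);
    }
}

// The descriptor declares what a controller action requires.
class PermissionDescriptor {
    constructor(
        public requiredPrivileges: Privilege[]
    ) { }

    validate(permission: Permission): boolean {
        return this.requiredPrivileges
            .every(privilege => permission.has(privilege));
    }
}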
Defining and throwing expected errors

The router will need to handle different types of errors, and an easy way to achieve this is to subclass a universal ExpectedError class and throw its instances, as follows:

import ExtendableError from 'extendable-error';

class ExpectedError extends ExtendableError {
    constructor(
        message: string,
        public code: number
    ) {
        super(message);
    }
}

The extendable-error package is one of mine that handles the stack trace and the message property. You can directly extend the Error class as well.

Thus, when receiving an expected error, we can safely output the error name and message as part of the response. If it is not an instance of ExpectedError, we can display a predefined unknown-error message instead.

Transforming errors

Some errors, such as those caused by unstable networks or remote services, are expected. We may want to catch these errors and throw them out again as expected errors. However, it can be rather tedious to actually do this everywhere. A centralized error-transforming process can be applied instead, to reduce the effort required to manage these errors. The transforming process includes two parts: filtering (or matching) and transforming. These are the approaches to filtering errors:

- Filter by error class: Many third-party libraries throw errors of a certain class. Taking Sequelize (a popular Node.js ORM) as an example, it has DatabaseError, ConnectionError, ValidationError, and so on. By checking whether errors are instances of a certain error class, we can easily pick up target errors from the pile.
- Filter by string or regular expression: Sometimes a library might throw errors that are instances of the Error class itself instead of its subclasses, which makes these errors hard to distinguish from others. In this situation, we can filter these errors by matching their message against keywords or regular expressions.
- Filter by scope: It's possible that instances of the same error class with the same error message should result in different responses. One reason may be that the operation throwing a certain error is at a lower level, but is being used by upper structures within different scopes. Thus, a scope mark can be added to these errors to make them easier to filter.

There could be more ways to filter errors, and they are usually able to cooperate as well. By properly applying these filters and transforming errors, we can reduce noise, analyze what's going on within a system, and locate problems faster when they occur.

Modularizing the project

Before ES2015, there were actually a lot of module solutions for JavaScript that worked. The most famous two of them might be AMD and CommonJS. AMD is designed for asynchronous module loading, which is mostly applied in browsers, while CommonJS performs module loading synchronously, which is the way the Node.js module system works. To make it work asynchronously, writing an AMD module takes more characters. Due to the popularity of tools such as browserify and webpack, CommonJS became popular even for browser projects.

Proper granularity of internal modules can help a project keep a healthy structure. Consider a project structure like the following:

project
├─ controllers
├─ core
│  │  index.ts
│  │
│  ├─ product
│  │    index.ts
│  │    order.ts
│  │    shipping.ts
│  │
│  └─ user
│       index.ts
│       account.ts
│       statistics.ts
│
├─ helpers
├─ models
├─ utils
└─ views

Let's assume that we are writing a controller file that's going to import a module defined by the core/product/order.ts file.
Previously, using CommonJS-style require, we would write the following:

const Order = require('../core/product/order');

Now, with the new ES import syntax, this would look like the following:

import * as Order from '../core/product/order';

Wait, isn't this essentially the same? Sort of. However, you may have noticed the several index.ts files that I've put into the folders. Now, in the core/product/index.ts file, we could have the following:

import * as Order from './order';
import * as Shipping from './shipping';

export { Order, Shipping }

Or, we could also have the following:

export * from './order';
export * from './shipping';

What's the difference? The idea behind these two approaches to re-exporting modules can vary. The first style works better when we treat Order and Shipping as namespaces, under which the identifier names may not be easy to distinguish from one another. With this style, the files are the natural boundaries for building these namespaces. The second style weakens the namespace property of the two files, and uses them as tools to organize objects and classes under the same larger category.

A good thing about using these files as namespaces is that multiple-level re-exporting is fine, while weakening namespaces makes it harder to understand different identifier names as the number of re-exporting levels grows.

Summary

In this article, we discussed some interesting ideas and an architecture formed by these ideas. Most of these topics focused on limited examples and did their own jobs. However, we also discussed ideas about putting a whole system together.
An Introduction to Node.js Design Patterns

Packt
18 Feb 2016
27 min read
A design pattern is a reusable solution to a recurring problem; the term is really broad in its definition and can span multiple domains of application. However, the term is often associated with a well-known set of object-oriented patterns that were popularized in the 90's by the book, Design Patterns: Elements of Reusable Object-Oriented Software, Pearson Education, by the almost legendary Gang of Four (GoF): Erich Gamma, Richard Helm, Ralph Johnson, and John Vlissides. We will often refer to this specific set of patterns as traditional design patterns, or GoF design patterns.

Applying this set of object-oriented design patterns in JavaScript is not as linear and formal as it would be in a class-based object-oriented language. As we know, JavaScript is multi-paradigm, object-oriented, and prototype-based, and has dynamic typing; it treats functions as first-class citizens and allows functional programming styles. These characteristics make JavaScript a very versatile language, which gives tremendous power to the developer, but at the same time causes a fragmentation of programming styles, conventions, techniques, and, ultimately, the patterns of its ecosystem. There are so many ways to achieve the same result using JavaScript that everybody has their own opinion on the best way to approach a problem. A clear demonstration of this phenomenon is the abundance of frameworks and opinionated libraries in the JavaScript ecosystem; probably no other language has ever seen so many, especially now that Node.js has given astonishing new possibilities to JavaScript and has created so many new scenarios.

In this context, the traditional design patterns are affected by the nature of JavaScript too. There are so many ways in which they can be implemented that their traditional, strongly object-oriented implementation is not a pattern anymore, and in some cases not even possible, because JavaScript, as we know, doesn't have real classes or abstract interfaces. What doesn't change, though, is the original idea at the base of each pattern, the problem it solves, and the concepts at the heart of the solution.

In this article, we will see how some of the most important GoF design patterns apply to Node.js and its philosophy, thus rediscovering their importance from another perspective. The design patterns explored in this article are as follows:

- Factory
- Proxy
- Decorator
- Adapter
- Strategy
- State
- Template
- Middleware
- Command

This article assumes that the reader has some notion of how inheritance works in JavaScript. Please also be advised that throughout this article we will often use generic and more intuitive diagrams to describe a pattern in place of standard UML, since many patterns can have an implementation based not only on classes, but also on objects and even functions.

Factory

We begin our journey starting from what probably is the most simple and common design pattern in Node.js: Factory.

A generic interface for creating objects

We already stressed the fact that, in JavaScript, the functional paradigm is often preferred to a purely object-oriented design, for its simplicity, usability, and small surface area. This is especially true when creating new object instances. In fact, invoking a factory, instead of directly creating a new object from a prototype using the new operator or Object.create(), is so much more convenient and flexible under several aspects.
First and foremost, a factory allows us to separate the object creation from its implementation; essentially, a factory wraps the creation of a new instance, giving us more flexibility and control in the way we do it. Inside the factory, we can create a new instance leveraging closures, using a prototype and the new operator, using Object.create(), or even returning a different instance based on a particular condition. The consumer of the factory is totally agnostic about how the creation of the instance is carried out. The truth is that, by using new, we are binding our code to one specific way of creating an object, while in JavaScript we can have much more flexibility, almost for free. As a quick example, let's consider a simple factory that creates an Image object:

function createImage(name) {
    return new Image(name);
}

var image = createImage('photo.jpeg');

The createImage() factory might look totally unnecessary; why not instantiate the Image class by using the new operator directly? Something like the following line of code:

var image = new Image(name);

As we already mentioned, using new binds our code to one particular type of object; in the preceding case, to objects of type Image. A factory, instead, gives us much more flexibility; imagine that we want to refactor the Image class, splitting it into smaller classes, one for each image format that we support. If we exposed a factory as the only means to create new images, we can simply rewrite it as follows, without breaking any of the existing code:

function createImage(name) {
    if(name.match(/\.jpeg$/)) {
        return new JpegImage(name);
    } else if(name.match(/\.gif$/)) {
        return new GifImage(name);
    } else if(name.match(/\.png$/)) {
        return new PngImage(name);
    } else {
        throw new Error('Unsupported format');
    }
}

Our factory also allows us to not expose the constructors of the objects it creates, and prevents them from being extended or modified (remember the principle of small surface area?). In Node.js, this can be achieved by exporting only the factory, while keeping each constructor private.

A mechanism to enforce encapsulation

A factory can also be used as an encapsulation mechanism, thanks to closures. Encapsulation refers to the technique of controlling the access to some internal details of an object by preventing external code from manipulating them directly. The interaction with the object happens only through its public interface, isolating the external code from changes in the implementation details of the object. This practice is also referred to as information hiding. Encapsulation is also a fundamental principle of object-oriented design, together with inheritance, polymorphism, and abstraction. As we know, in JavaScript we don't have access-level modifiers (for example, we can't declare a private variable), so the only way to enforce encapsulation is through function scopes and closures.
A factory makes it straightforward to enforce private variables; consider the following code, for example:

function createPerson(name) {
    var privateProperties = {};

    var person = {
        setName: function(name) {
            if(!name) throw new Error('A person must have a name');
            privateProperties.name = name;
        },
        getName: function() {
            return privateProperties.name;
        }
    };

    person.setName(name);
    return person;
}

In the preceding code, we leverage closures to create two objects: a person object, which represents the public interface returned by the factory, and a group of privateProperties that are inaccessible from the outside and can be manipulated only through the interface provided by the person object. For example, in the preceding code, we make sure that a person's name is never empty; this would not be possible to enforce if name were just a property of the person object.

Factories are only one of the techniques that we have for creating private members; in fact, other possible approaches are as follows:

- Defining private variables in a constructor (as recommended by Douglas Crockford: http://javascript.crockford.com/private.html)
- Using conventions, for example, prefixing the name of a property with an underscore "_" or the dollar sign "$" (this, however, does not technically prevent a member from being accessed from the outside)
- Using ES6 WeakMaps (http://fitzgeraldnick.com/weblog/53/)

Building a simple code profiler

Now, let's work on a complete example using a factory. Let's build a simple code profiler, an object with the following properties:

- A start() method that triggers the start of a profiling session
- An end() method to terminate the session and log its execution time to the console

Let's start by creating a file named profiler.js, which will have the following content:

function Profiler(label) {
    this.label = label;
    this.lastTime = null;
}

Profiler.prototype.start = function() {
    this.lastTime = process.hrtime();
}

Profiler.prototype.end = function() {
    var diff = process.hrtime(this.lastTime);
    console.log('Timer "' + this.label + '" took ' + diff[0] +
        ' seconds and ' + diff[1] + ' nanoseconds.');
}

There is nothing fancy in the preceding class; we simply use the default high-resolution timer to save the current time when start() is invoked, and then calculate the elapsed time when end() is executed, printing the result to the console.

Now, if we were to use such a profiler in a real-world application to calculate the execution time of different routines, we can easily imagine the huge amount of logging we would generate to the standard output, especially in a production environment. What we might want to do instead is redirect the profiling information to another source, for example, a database, or alternatively, disable the profiler altogether if the application is running in production mode. It's clear that if we were to instantiate a Profiler object directly by using the new operator, we would need some extra logic in the client code or in the Profiler object itself in order to switch between the different logics.

We can instead use a factory to abstract the creation of the Profiler object, so that, depending on whether the application runs in production or development mode, we can return a fully working Profiler object or, alternatively, a mock object with the same interface but with empty methods. Let's do this then; in the profiler.js module, instead of exporting the Profiler constructor, we will export only a function, our factory.
The following is its code:

module.exports = function(label) {
    if(process.env.NODE_ENV === 'development') {
        return new Profiler(label);        //[1]
    } else if(process.env.NODE_ENV === 'production') {
        return {                           //[2]
            start: function() {},
            end: function() {}
        }
    } else {
        throw new Error('Must set NODE_ENV');
    }
}

The factory that we created abstracts the creation of a profiler object from its implementation:

- If the application is running in development mode, we return a new, fully functional Profiler object.
- If instead the application is running in production mode, we return a mock object where the start() and end() methods are empty functions.

The nice feature to highlight is that, thanks to JavaScript's dynamic typing, we were able to return an object instantiated with the new operator in one circumstance and a simple object literal in the other (this is also known as duck typing, http://en.wikipedia.org/wiki/Duck_typing). Our factory is doing its job perfectly; we can really create objects in any way that we like inside the factory function, and we can execute additional initialization steps or return a different type of object based on particular conditions, all while isolating the consumer of the object from these details. We can easily understand the power of this simple pattern.

Now, we can play with our profiler; this is a possible use case for the factory that we just created:

var profiler = require('./profiler');

function getRandomArray(len) {
    var p = profiler('Generating a ' + len + ' items long array');
    p.start();
    var arr = [];
    for(var i = 0; i < len; i++) {
        arr.push(Math.random());
    }
    p.end();
}

getRandomArray(1e6);
console.log('Done');

The p variable contains the instance of our Profiler object, but we don't know how it was created or what its implementation is at this point in the code. If we include the preceding code in a file named profilerTest.js, we can easily test these assumptions. To try the program with profiling enabled, run the following command:

export NODE_ENV=development; node profilerTest

The preceding command enables the real profiler and prints the profiling information to the console. If we want to try the mock profiler instead, we can run the following command:

export NODE_ENV=production; node profilerTest

The example that we just presented is just a simple application of the factory function pattern, but it clearly shows the advantages of separating an object's creation from its implementation.

In the wild

As we said, factories are very popular in Node.js. Many packages offer only a factory for creating new instances; some examples are the following:

- Dnode (https://npmjs.org/package/dnode): This is an RPC system for Node.js. If we look into its source code, we will see that its logic is implemented in a class named D; however, this is never exposed to the outside, as the only exported interface is a factory, which allows us to create new instances of the class. You can take a look at its source code at https://github.com/substack/dnode/blob/34d1c9aa9696f13bdf8fb99d9d039367ad873f90/index.js#L7-9.
- Restify (https://npmjs.org/package/restify): This is a framework for building REST APIs that allows us to create new instances of a server using the restify.createServer() factory, which internally creates a new instance of the Server class (which is not exported). You can take a look at its source code at https://github.com/mcavage/node-restify/blob/5f31e2334b38361ac7ac1a5e5d852b7206ef7d94/lib/index.js#L91-116.
Other modules expose both a class and a factory, but document the factory as the main method (or the most convenient way) to create new instances; some examples are as follows:

- http-proxy (https://npmjs.org/package/http-proxy): This is a programmable proxying library, where new instances are created with httpProxy.createProxyServer(options).
- The core Node.js HTTP server: This is where new instances are mostly created using http.createServer(), even though this is essentially a shortcut for new http.Server().
- bunyan (https://npmjs.org/package/bunyan): This is a popular logging library; in its readme file, the contributors propose a factory, bunyan.createLogger(), as the main method to create new instances, even though this would be equivalent to running new bunyan().

Some other modules provide a factory to wrap the creation of other components. Popular examples are through2 and from2, which allow us to simplify the creation of new streams using a factory approach, freeing the developer from explicitly using inheritance and the new operator.

Proxy

A proxy is an object that controls access to another object, called the subject. The proxy and the subject have an identical interface, and this allows us to transparently swap one for the other; in fact, the alternative name for this pattern is surrogate. A proxy intercepts all or some of the operations that are meant to be executed on the subject, augmenting or complementing their behavior. The Proxy and the Subject share the same interface, and this is totally transparent to the client, who can use one or the other interchangeably; the Proxy forwards each operation to the subject, enhancing its behavior with additional preprocessing or post-processing.

It's important to observe that we are not talking about proxying between classes; the Proxy pattern involves wrapping actual instances of the subject, thus preserving its state. A proxy is useful in several circumstances; for example, consider the following:

- Data validation: The proxy validates the input before forwarding it to the subject.
- Security: The proxy verifies that the client is authorized to perform the operation, and passes the request on to the subject only if the outcome of the check is positive.
- Caching: The proxy keeps an internal cache so that operations are executed on the subject only if the data is not yet present in the cache.
- Lazy initialization: If the creation of the subject is expensive, the proxy can delay it until it's really necessary.
- Logging: The proxy intercepts the method invocations and the relative parameters, recording them as they happen.
- Remote objects: A proxy can take an object that is located remotely and make it appear local.

Of course, there are many more applications for the Proxy pattern, but these should give us an idea of the extent of its purpose.

Techniques for implementing proxies

When proxying an object, we can decide to intercept all of its methods or only some of them, while delegating the rest directly to the subject. There are several ways in which this can be achieved; let's analyze some of them.

Object composition

Composition is a technique whereby an object is combined with another object for the purpose of extending or using its functionality.
In the specific case of the Proxy pattern, a new object with the same interface as the subject is created, and a reference to the subject is stored internally in the proxy in the form of an instance variable or a closure variable. The subject can be injected from the client at creation time or created by the proxy itself. The following is one example of this technique using a pseudo class and a factory:

function createProxy(subject) {
    var proto = Object.getPrototypeOf(subject);

    function Proxy(subject) {
        this.subject = subject;
    }

    Proxy.prototype = Object.create(proto);

    //proxied method
    Proxy.prototype.hello = function() {
        return this.subject.hello() + ' world!';
    }

    //delegated method
    Proxy.prototype.goodbye = function() {
        return this.subject.goodbye
            .apply(this.subject, arguments);
    }

    return new Proxy(subject);
}

To implement a proxy using composition, we have to intercept the methods that we are interested in manipulating (such as hello()), while simply delegating the rest of them to the subject (as we did with goodbye()).

The preceding code also shows the particular case where the subject has a prototype and we want to maintain the correct prototype chain, so that executing proxy instanceof Subject will return true; we used pseudo-classical inheritance to achieve this. This is just an extra step, required only if we are interested in maintaining the prototype chain, which can be useful in order to improve the compatibility of the proxy with code initially meant to work with the subject.

However, as JavaScript has dynamic typing, most of the time we can avoid using inheritance and use more immediate approaches. For example, an alternative implementation of the proxy presented in the preceding code might just use an object literal and a factory:

function createProxy(subject) {
    return {
        //proxied method
        hello: function() {
            return subject.hello() + ' world!';
        },

        //delegated method
        goodbye: function() {
            return subject.goodbye.apply(subject, arguments);
        }
    };
}

If we want to create a proxy that delegates most of its methods, it would be convenient to generate these automatically using a library, such as delegates (https://npmjs.org/package/delegates).

Object augmentation

Object augmentation (or monkey patching) is probably the most pragmatic way of proxying individual methods of an object, and consists of modifying the subject directly by replacing a method with its proxied implementation; consider the following example:

function createProxy(subject) {
    var helloOrig = subject.hello;
    subject.hello = function() {
        return helloOrig.call(this) + ' world!';
    }

    return subject;
}

This technique is definitely the most convenient one when we need to proxy only one or a few methods, but it has the drawback of modifying the subject object directly.

A comparison of the different techniques

Composition can be considered the safest way of creating a proxy, because it leaves the subject untouched, without mutating its original behavior. Its only drawback is that we have to manually delegate all the methods, even if we want to proxy only one of them. If needed, we might also have to delegate access to the properties of the subject. (Object properties can be delegated using Object.defineProperty(); find out more at https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Object/defineProperty.)

Object augmentation, on the other hand, modifies the subject, which might not always be what we want, but it does not present the various inconveniences related to delegation.
For this reason, object augmentation is definitely the most pragmatic way to implement proxies in JavaScript, and it's the preferred technique in all those circumstances where modifying the subject is not a big concern. However, there is at least one situation where composition is almost necessary: when we want to control the initialization of the subject, for example, to create it only when needed (lazy initialization). It is worth pointing out that by using a factory function (createProxy() in our examples), we can shield our code from the technique used to generate the proxy.

Creating a logging Writable stream

To see the Proxy pattern in a real example, we will now build an object that acts as a proxy to a Writable stream, intercepting all the calls to the write() method and logging a message every time this happens. We will use object composition to implement our proxy; this is how the loggingWritable.js file looks:

function createLoggingWritable(writableOrig) {
    var proto = Object.getPrototypeOf(writableOrig);

    function LoggingWritable(subject) {
        this.writableOrig = subject;
    }

    LoggingWritable.prototype = Object.create(proto);

    LoggingWritable.prototype.write = function(chunk, encoding, callback) {
        if(!callback && typeof encoding === 'function') {
            callback = encoding;
            encoding = undefined;
        }
        console.log('Writing ', chunk);
        return this.writableOrig.write(chunk, encoding, function() {
            console.log('Finished writing ', chunk);
            callback && callback();
        });
    };

    LoggingWritable.prototype.on = function() {
        return this.writableOrig.on
            .apply(this.writableOrig, arguments);
    };

    LoggingWritable.prototype.end = function() {
        return this.writableOrig.end
            .apply(this.writableOrig, arguments);
    }

    return new LoggingWritable(writableOrig);
}

In the preceding code, we created a factory that returns a proxied version of the writable object passed as an argument. We provide an override for the write() method that logs a message to the standard output every time it is invoked and every time the asynchronous operation completes. This is also a good example of the particular case of creating proxies of asynchronous functions, which makes it necessary to proxy the callback as well; this is an important detail to consider in a platform such as Node.js. The remaining methods, on() and end(), are simply delegated to the original writable stream (to keep the code leaner, we are not considering the other methods of the Writable interface).

We can now include a few more lines of code into the loggingWritable.js module to test the proxy that we just created:

var fs = require('fs');

var writable = fs.createWriteStream('test.txt');
var writableProxy = createLoggingWritable(writable);

writableProxy.write('First chunk');
writableProxy.write('Second chunk');
writable.write('This is not logged');
writableProxy.end();

The proxy did not change the original interface of the stream or its external behavior, but if we run the preceding code, we will now see that every chunk written into the stream via the proxy is transparently logged to the console.

Proxy in the ecosystem – function hooks and AOP

In its numerous forms, Proxy is quite a popular pattern in Node.js and in its ecosystem. In fact, we can find several libraries that allow us to simplify the creation of proxies, most of the time leveraging object augmentation as the implementation approach.
Proxy in the ecosystem – function hooks and AOP

In its numerous forms, Proxy is quite a popular pattern in Node.js and in the ecosystem. In fact, we can find several libraries that simplify the creation of proxies, most of the time leveraging object augmentation as an implementation approach. In the community, this pattern is also referred to as function hooking, or sometimes as Aspect Oriented Programming (AOP), which is actually a common area of application for proxies. As in AOP, these libraries usually allow the developer to set pre- or post-execution hooks for a specific method (or a set of methods), which execute custom code before and after the advised method, respectively. Sometimes proxies are also called middleware, because, as in the middleware pattern, they allow us to preprocess and post-process the input/output of a function. Sometimes, they also allow us to register multiple hooks for the same method using a middleware-like pipeline. There are several libraries on npm that allow us to implement function hooks with little effort. Among them are hooks (https://npmjs.org/package/hooks), hooker (https://npmjs.org/package/hooker), and meld (https://npmjs.org/package/meld).

In the wild

Mongoose (http://mongoosejs.com) is a popular Object-Document Mapping (ODM) library for MongoDB. Internally, it uses the hooks package (https://npmjs.org/package/hooks) to provide pre- and post-execution hooks for the init, validate, save, and remove methods of its Document objects. Find out more in the official documentation at http://mongoosejs.com/docs/middleware.html.

Decorator

Decorator is a structural pattern that consists of dynamically augmenting the behavior of an existing object. It's different from classical inheritance, because the behavior is not added to all the objects of the same class, but only to the instances that are explicitly decorated. Implementation-wise, it is very similar to the Proxy pattern, but instead of enhancing or modifying the behavior of the existing interface of an object, it augments it with new functionalities, as described in the following figure:

In the previous figure, the Decorator object extends the Component object by adding the methodC() operation. The existing methods are usually delegated to the decorated object without further processing. Of course, if necessary, we can easily combine the Decorator with the Proxy pattern, so that calls to the existing methods can be intercepted and manipulated as well.

Techniques for implementing decorators

Although Proxy and Decorator are conceptually two different patterns with different intents, they practically share the same implementation strategies. Let's revise them.

Composition

Using composition, the decorated component is wrapped by a new object that usually inherits from it. The Decorator in this case simply needs to define the new methods, while delegating the existing ones to the original component:

function decorate(component) {
  var proto = Object.getPrototypeOf(component);

  function Decorator(component) {
    this.component = component;
  }
  Decorator.prototype = Object.create(proto);

  //new method
  Decorator.prototype.greetings = function() {
    //...
  };

  //delegated method
  Decorator.prototype.hello = function() {
    return this.component.hello.apply(this.component, arguments);
  };

  return new Decorator(component);
}

Object augmentation

Object decoration can also be achieved by simply attaching new methods directly to the decorated object, as follows:

function decorate(component) {
  //new method
  component.greetings = function() {
    //...
  };
  return component;
}

The same caveats discussed during the analysis of the Proxy pattern are also valid for Decorator.
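Since Proxy and Decorator share the object augmentation technique, it is easy to see how the pre- and post-execution hooks described in the AOP section above can be built on top of it. The following is a minimal sketch of the idea; the hook() function and its signature are our own illustration, not the API of any of the libraries mentioned earlier:

function hook(subject, methodName, preFn, postFn) {
  var methodOrig = subject[methodName];
  subject[methodName] = function() {
    //run the pre hook, then the original method, then the post hook
    if(preFn) preFn(arguments);
    var result = methodOrig.apply(this, arguments);
    if(postFn) postFn(result);
    return result;
  };
  return subject;
}

//usage: log around every call to subject.hello()
hook(subject, 'hello',
  function(args) { console.log('before hello()'); },
  function(result) { console.log('after hello(), returned:', result); }
);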
Let's now practice the pattern with a working example!

Decorating a LevelUP database

Before we start coding the next example, let's spend a few words introducing LevelUP, the module that we are now going to work with.

Introducing LevelUP and LevelDB

LevelUP (https://npmjs.org/package/levelup) is a Node.js wrapper around Google's LevelDB, a key-value store originally built to implement IndexedDB in the Chrome browser, but it's much more than that. LevelDB has been defined by Dominic Tarr as the "Node.js of databases" because of its minimalism and extensibility. Like Node.js, LevelDB provides blazing fast performance and only the most basic set of features, allowing developers to build any kind of database on top of it. The Node.js community, and in this case Rod Vagg, did not miss the chance to bring the power of this database into Node.js by creating LevelUP. Born as a wrapper for LevelDB, it then evolved to support several kinds of backends, from in-memory stores to other NoSQL databases such as Riak and Redis, to web storage engines such as IndexedDB and localStorage, allowing us to use the same API on both the server and the client, which opens up some really interesting scenarios. Today, there is a full-fledged ecosystem around LevelUP, made of plugins and modules that extend the tiny core to implement features such as replication, secondary indexes, live updates, query engines, and more. Also, complete databases were built on top of LevelUP, including CouchDB clones such as PouchDB (https://npmjs.org/package/pouchdb) and CouchUP (https://npmjs.org/package/couchup), and even a graph database, levelgraph (https://npmjs.org/package/levelgraph), that can work both on Node.js and in the browser! Find out more about the LevelUP ecosystem at https://github.com/rvagg/node-levelup/wiki/Modules.

Implementing a LevelUP plugin

In the next example, we are going to show how we can create a simple plugin for LevelUP using the Decorator pattern, and in particular the object augmentation technique, which is the simplest but nonetheless the most pragmatic and effective way to decorate objects with additional capabilities. For convenience, we are going to use the level package (http://npmjs.org/package/level), which bundles both levelup and the default adapter, called leveldown, which uses LevelDB as the backend. What we want to build is a plugin for LevelUP that allows us to receive notifications every time an object with a certain pattern is saved into the database. For example, if we subscribe to a pattern such as {a: 1}, we want to receive a notification when objects such as {a: 1, b: 3} or {a: 1, c: 'x'} are saved into the database. Let's start building our small plugin by creating a new module called levelSubscribe.js. We will then insert the following code:

module.exports = function levelSubscribe(db) {
  db.subscribe = function(pattern, listener) {             //[1]
    db.on('put', function(key, val) {                      //[2]
      var match = Object.keys(pattern).every(function(k) { //[3]
        return pattern[k] === val[k];
      });
      if(match) {
        listener(key, val);                                //[4]
      }
    });
  };
  return db;
};

That's it for our plugin, and it's extremely simple. Let's see briefly what happens in the preceding code:

1. We decorated the db object with a new method named subscribe(). We simply attached the method directly to the provided db instance (object augmentation).
2. We listen for any put operation performed on the database.
3. We perform a very simple pattern-matching algorithm, which verifies that all the properties in the provided pattern are also available on the data being inserted.
4. If we have a match, we notify the listener.

Let's now create some code, in a new file named levelSubscribeTest.js, to try out our new plugin:

var level = require('level');                        //[1]
var db = level(__dirname + '/db', {valueEncoding: 'json'});
var levelSubscribe = require('./levelSubscribe');    //[2]
db = levelSubscribe(db);

db.subscribe({doctype: 'tweet', language: 'en'},     //[3]
  function(k, val) {
    console.log(val);
  });

db.put('1', {doctype: 'tweet', text: 'Hi', language: 'en'}); //[4]
db.put('2', {doctype: 'company', name: 'ACME Co.'});

This is what we did in the preceding code:

1. First, we initialize our LevelUP database, choosing the directory where the files will be stored and the default encoding for the values.
2. Then, we attach our plugin, which decorates the original db object.
3. At this point, we are ready to use the new feature provided by our plugin, the subscribe() method, where we specify that we are interested in all the objects with doctype: 'tweet' and language: 'en'.
4. Finally, we save some values in the database, so that we can see our plugin in action.

This example shows a real application of the Decorator pattern in its simplest implementation: object augmentation. It might look like a trivial pattern, but it has undoubted power if used appropriately. For simplicity, our plugin works only in combination with put operations, but it can easily be expanded to work even with batch operations (https://github.com/rvagg/node-levelup#batch); a sketch of this extension follows at the end of this article.

In the wild

For more examples of how Decorator is used in the real world, we might want to inspect the code of some more LevelUP plugins:

- level-inverted-index (https://github.com/dominictarr/level-inverted-index): This is a plugin that adds inverted indexes to a LevelUP database, allowing us to perform simple text searches across the values stored in the database
- level-plus (https://github.com/eugeneware/levelplus): This is a plugin that adds atomic updates to a LevelUP database

Summary

To learn more about Node.js, the following books published by Packt Publishing (https://www.packtpub.com/) are recommended:

- Node.js Essentials (https://www.packtpub.com/web-development/nodejs-essentials)
- Node.js Blueprints (https://www.packtpub.com/web-development/nodejs-blueprints)
- Learning Node.js for Mobile Application Development (https://www.packtpub.com/web-development/learning-nodejs-mobile-application-development)
- Mastering Node.js (https://www.packtpub.com/web-development/mastering-nodejs)

Resources for Article:

Further resources on this subject:
- Node.js Fundamentals [Article]
- Learning Node.js for Mobile Application Development [Article]
- Developing a Basic Site with Node.js and Express [Article]
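Before moving on, here is a sketch of the batch extension suggested in the LevelUP example above. It assumes, as described in the LevelUP documentation, that the db instance emits a 'batch' event carrying the array of operations; the listener below would be registered inside subscribe(), next to the existing 'put' listener:

db.on('batch', function(ops) {
  ops.forEach(function(op) {
    //only put operations carry a value that can match the pattern
    if(op.type === 'put') {
      var match = Object.keys(pattern).every(function(k) {
        return pattern[k] === op.value[k];
      });
      if(match) {
        listener(op.key, op.value);
      }
    }
  });
});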
Python Design Patterns in Depth – The Observer Pattern

Packt
16 Feb 2016
11 min read
In this article, you will see how to notify a group of objects when the state of another object changes. A very popular example lies in the Model-View-Controller (MVC) pattern. Assume that we are using the data of the same model in two views, for instance, in a pie chart and in a spreadsheet. Whenever the model is modified, both the views need to be updated. That's the role of the Observer design pattern [Eckel08, page 213]. The Observer pattern describes a publish-subscribe relationship between a single object, the publisher (also known as the subject or observable), and one or more objects, the subscribers (also known as observers). In the MVC example, the publisher is the model and the subscribers are the views. However, MVC is not the only publish-subscribe example. Subscribing to a news feed such as RSS or Atom is another example. Many readers can subscribe to the feed, typically using a feed reader, and every time a new item is added, they receive the update automatically. The ideas behind Observer are the same as the ideas behind MVC and the separation of concerns principle, that is, to increase decoupling between the publisher and subscribers, and to make it easy to add/remove subscribers at runtime. Additionally, the publisher is not concerned about who its observers are. It just sends notifications to all the subscribers [GOF95, page 327].

A real-life example

In reality, an auction resembles Observer. Every auction bidder has a numbered paddle that is raised whenever they want to place a bid. Whenever the paddle is raised by a bidder, the auctioneer acts as the subject by updating the price of the bid and broadcasting the new price to all bidders (subscribers). The following figure, courtesy of www.sourcemaking.com [j.mp/observerpat], shows how the Observer pattern relates to an auction:

A software example

The django-observer package [j.mp/django-obs] is a third-party Django package that can be used to register callback functions that are executed when there are changes in several Django fields. Many different types of fields are supported (CharField, IntegerField, and so forth). RabbitMQ is a library that can be used to add asynchronous messaging support to an application. Several messaging protocols are supported, such as HTTP and AMQP. RabbitMQ can be used in a Python application to implement a publish-subscribe pattern, which is nothing more than the Observer design pattern [j.mp/rabbitmqobs].

Use cases

We generally use the Observer pattern when we want to inform/update one or more objects (observers/subscribers) about a change that happened to another object (subject/publisher/observable). The number of observers, as well as who the observers are, may vary and can be changed dynamically (at runtime). We can think of many cases where Observer can be useful. Whether it is RSS, Atom, or another format, the idea is the same; you follow a feed, and every time it is updated, you receive a notification about the update [Zlobin13, page 60]. The same concept exists in social networking. If you are connected to another person using a social networking service, and your connection updates something, you are notified about it. It doesn't matter if the connection is a Twitter user that you follow, a real friend on Facebook, or a business colleague on LinkedIn. Event-driven systems are another example where Observer can be (and usually is) used. In such systems, listeners are used to "listen" for specific events.
The listeners are triggered when an event they are listening to is created. This can be typing a specific key (of the keyboard), moving the mouse, and more. The event plays the role of the publisher and the listeners play the role of the observers. The key point in this case is that multiple listeners (observers) can be attached to a single event (publisher) [j.mp/magobs].

Implementation

We will implement a data formatter. The ideas described here are based on the ActiveState Python Observer code recipe [j.mp/pythonobs]. There is a default formatter that shows a value in the decimal format. However, we can add/register more formatters. In this example, we will add a hex and a binary formatter. Every time the value of the default formatter is updated, the registered formatters are notified and take action. In this case, the action is to show the new value in the relevant format. Observer is actually one of the patterns where inheritance makes sense. We can have a base Publisher class that contains the common functionality of adding, removing, and notifying observers. Our DefaultFormatter class derives from Publisher and adds the formatter-specific functionality. We can dynamically add and remove observers on demand. The following class diagram shows an instance of the example using two observers: HexFormatter and BinaryFormatter. Note that, because class diagrams are static, they cannot show the whole lifetime of a system, only its state at a specific point in time.

We begin with the Publisher class. The observers are kept in the observers list. The add() method registers a new observer, or reports an error if it already exists. The remove() method unregisters an existing observer, or reports an error if it does not exist. Finally, the notify() method informs all observers about a change:

class Publisher:
    def __init__(self):
        self.observers = []

    def add(self, observer):
        if observer not in self.observers:
            self.observers.append(observer)
        else:
            print('Failed to add: {}'.format(observer))

    def remove(self, observer):
        try:
            self.observers.remove(observer)
        except ValueError:
            print('Failed to remove: {}'.format(observer))

    def notify(self):
        [o.notify(self) for o in self.observers]

Let's continue with the DefaultFormatter class. The first thing that __init__() does is call the __init__() method of the base class, since this is not done automatically in Python. A DefaultFormatter instance has a name, to make it easier for us to track its status. We use the leading-underscore convention on the _data variable to state that it should not be accessed directly. Note that accessing it is still always possible in Python [Lott14, page 54], but fellow developers have no excuse for doing so, since the code already states that they shouldn't. There is a serious reason for marking _data as private in this case. Stay tuned. DefaultFormatter treats the _data variable as an integer, and the default value is zero:

class DefaultFormatter(Publisher):
    def __init__(self, name):
        Publisher.__init__(self)
        self.name = name
        self._data = 0

The __str__() method returns information about the name of the publisher and the value of _data. type(self).__name__ is a handy trick to get the name of a class without hardcoding it. It is one of those things that make the code less readable but easier to maintain. It is up to you to decide if you like it or not:

    def __str__(self):
        return "{}: '{}' has data = {}".format(type(self).__name__, self.name, self._data)

There are two data() methods. The first one uses the @property decorator to give read access to the _data variable.
Using this, we can just execute object.data instead of object.data():

    @property
    def data(self):
        return self._data

The second data() method is more interesting. It uses the @data.setter decorator, which is called every time the assignment (=) operator is used to assign a new value to the _data variable. This method also tries to cast the new value to an integer, and handles the exception in case this operation fails:

    @data.setter
    def data(self, new_value):
        try:
            self._data = int(new_value)
        except ValueError as e:
            print('Error: {}'.format(e))
        else:
            self.notify()

The next step is to add the observers. The functionality of HexFormatter and BinaryFormatter is very similar. The only difference between them is how they format the value of data received by the publisher, that is, in hexadecimal and binary, respectively:

class HexFormatter:
    def notify(self, publisher):
        print("{}: '{}' has now hex data = {}".format(type(self).__name__, publisher.name, hex(publisher.data)))

class BinaryFormatter:
    def notify(self, publisher):
        print("{}: '{}' has now bin data = {}".format(type(self).__name__, publisher.name, bin(publisher.data)))

No example is fun without some test data. The main() function initially creates a DefaultFormatter instance named test1 and afterwards attaches (and detaches) the two available observers. Exception handling is also exercised to make sure that the application does not crash when erroneous data is passed by the user. Moreover, things such as trying to add the same observer twice or removing an observer that does not exist should cause no crashes:

def main():
    df = DefaultFormatter('test1')
    print(df)

    print()
    hf = HexFormatter()
    df.add(hf)
    df.data = 3
    print(df)

    print()
    bf = BinaryFormatter()
    df.add(bf)
    df.data = 21
    print(df)

    print()
    df.remove(hf)
    df.data = 40
    print(df)

    print()
    df.remove(hf)
    df.add(bf)
    df.data = 'hello'
    print(df)

    print()
    df.data = 15.8
    print(df)

Here's how the full code of the example (observer.py) looks:

class Publisher:
    def __init__(self):
        self.observers = []

    def add(self, observer):
        if observer not in self.observers:
            self.observers.append(observer)
        else:
            print('Failed to add: {}'.format(observer))

    def remove(self, observer):
        try:
            self.observers.remove(observer)
        except ValueError:
            print('Failed to remove: {}'.format(observer))

    def notify(self):
        [o.notify(self) for o in self.observers]

class DefaultFormatter(Publisher):
    def __init__(self, name):
        Publisher.__init__(self)
        self.name = name
        self._data = 0

    def __str__(self):
        return "{}: '{}' has data = {}".format(type(self).__name__, self.name, self._data)

    @property
    def data(self):
        return self._data

    @data.setter
    def data(self, new_value):
        try:
            self._data = int(new_value)
        except ValueError as e:
            print('Error: {}'.format(e))
        else:
            self.notify()

class HexFormatter:
    def notify(self, publisher):
        print("{}: '{}' has now hex data = {}".format(type(self).__name__, publisher.name, hex(publisher.data)))

class BinaryFormatter:
    def notify(self, publisher):
        print("{}: '{}' has now bin data = {}".format(type(self).__name__, publisher.name, bin(publisher.data)))

def main():
    df = DefaultFormatter('test1')
    print(df)

    print()
    hf = HexFormatter()
    df.add(hf)
    df.data = 3
    print(df)

    print()
    bf = BinaryFormatter()
    df.add(bf)
    df.data = 21
    print(df)

    print()
    df.remove(hf)
    df.data = 40
    print(df)

    print()
    df.remove(hf)
    df.add(bf)
    df.data = 'hello'
    print(df)

    print()
    df.data = 15.8
    print(df)

if __name__ == '__main__':
    main()

Executing observer.py gives the following output:
>>> python3 observer.py
DefaultFormatter: 'test1' has data = 0

HexFormatter: 'test1' has now hex data = 0x3
DefaultFormatter: 'test1' has data = 3

HexFormatter: 'test1' has now hex data = 0x15
BinaryFormatter: 'test1' has now bin data = 0b10101
DefaultFormatter: 'test1' has data = 21

BinaryFormatter: 'test1' has now bin data = 0b101000
DefaultFormatter: 'test1' has data = 40

Failed to remove: <__main__.HexFormatter object at 0x7f30a2fb82e8>
Failed to add: <__main__.BinaryFormatter object at 0x7f30a2fb8320>
Error: invalid literal for int() with base 10: 'hello'
BinaryFormatter: 'test1' has now bin data = 0b101000
DefaultFormatter: 'test1' has data = 40

BinaryFormatter: 'test1' has now bin data = 0b1111
DefaultFormatter: 'test1' has data = 15

What we see in the output is that as the extra observers are added, more (and relevant) output is shown, and when an observer is removed, it is not notified any longer. That's exactly what we want: runtime notifications that we are able to enable/disable on demand. The defensive programming part of the application also seems to work fine. Trying to do funny things, such as removing an observer that does not exist or adding the same observer twice, is not allowed. The messages shown are not very user-friendly, but I leave that up to you as an exercise. Runtime failures, such as trying to pass a string when the API expects a number, are also properly handled without causing the application to crash/terminate. This example would be much more interesting if it were interactive. Even a simple menu that allows the user to attach/detach observers at runtime and modify the value of DefaultFormatter would be nice, because the runtime aspect becomes much more visible. Feel free to do it. Another nice exercise is to add more observers. For example, you can add an octal formatter (sketched at the end of this article), a roman numeral formatter, or any other observer that uses your favorite representation. Be creative and have fun!

Summary

In this article, we covered the Observer design pattern. We use Observer when we want to be able to inform/notify all stakeholders (an object or a group of objects) when the state of an object changes. An important feature of Observer is that the number of subscribers/observers, as well as who the subscribers are, may vary and can be changed at runtime. You can refer to more books on this topic, mentioned as follows:

- Expert Python Programming: https://www.packtpub.com/application-development/expert-python-programming
- Learning Python Design Patterns: https://www.packtpub.com/application-development/learning-python-design-patterns

Resources for Article:

Further resources on this subject:
- Python Design Patterns in Depth: The Singleton Pattern [article]
- Python Design Patterns in Depth: The Factory Pattern [article]
- Automating Processes with ModelBuilder and Python [article]
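As a minimal sketch of the octal formatter exercise suggested above (the class name OctalFormatter is our own choice, following the structure of HexFormatter):

class OctalFormatter:
    def notify(self, publisher):
        # format the published value in octal, mirroring the other observers
        print("{}: '{}' has now oct data = {}".format(
            type(self).__name__, publisher.name, oct(publisher.data)))

Attaching it works exactly like the other observers: df.add(OctalFormatter()) followed by df.data = 8 would print the value as 0o10 under Python 3.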
Python Design Patterns in Depth: The Factory Pattern

Packt
15 Feb 2016
17 min read
Creational design patterns deal with object creation [j.mp/wikicrea]. The aim of a creational design pattern is to provide better alternatives for situations where direct object creation (which in Python happens through the __init__() function [j.mp/divefunc], [Lott14, page 26]) is not convenient. In the Factory design pattern, a client asks for an object without knowing where the object is coming from (that is, which class is used to generate it). The idea behind a factory is to simplify object creation. It is easier to track which objects are created if this is done through a central function, in contrast to letting a client create objects using direct class instantiation [Eckel08, page 187]. A factory reduces the complexity of maintaining an application by decoupling the code that creates an object from the code that uses it [Zlobin13, page 30]. Factories typically come in two forms: the Factory Method, which is a method (or, in Pythonic terms, a function) that returns a different object per input parameter [j.mp/factorympat], and the Abstract Factory, which is a group of Factory Methods used to create a family of related products [GOF95, page 100], [j.mp/absfpat].

Factory Method

In the Factory Method, we execute a single function, passing a parameter that provides information about what we want. We are not required to know any details about how the object is implemented and where it is coming from.

A real-life example

An example of the Factory Method pattern used in reality is plastic toy construction. The molding powder used to construct plastic toys is the same, but different figures can be produced using different plastic molds. This is like having a Factory Method in which the input is the name of the figure that we want (soldier, dinosaur) and the output is the plastic figure that we requested. The toy construction case is shown in the following figure, which is provided by www.sourcemaking.com [j.mp/factorympat].

A software example

The Django framework uses the Factory Method pattern for creating the fields of a form. The forms module of Django supports the creation of different kinds of fields (CharField, EmailField) and customizations (max_length, required) [j.mp/djangofacm].

Use cases

If you realize that you cannot track the objects created by your application because the code that creates them is in many different places instead of a single function/method, you should consider using the Factory Method pattern [Eckel08, page 187]. The Factory Method centralizes object creation, and tracking your objects becomes much easier. Note that it is absolutely fine to create more than one Factory Method, and this is how it is typically done in practice. Each Factory Method logically groups the creation of objects that have similarities. For example, one Factory Method might be responsible for connecting you to different databases (MySQL, SQLite), another Factory Method might be responsible for creating the geometrical object that you request (circle, triangle), and so on. The Factory Method is also useful when you want to decouple object creation from object usage. We are not coupled/bound to a specific class when creating an object; we just provide partial information about what we want by calling a function. This means that introducing changes to the function is easy and does not require any changes to the code that uses it [Zlobin13, page 30].
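As a minimal sketch of the geometrical-object example just mentioned (the class and function names here are our own illustration, not code from this article's main example):

class Circle:
    def draw(self):
        print('drawing a circle')

class Triangle:
    def draw(self):
        print('drawing a triangle')

def shape_factory(name):
    # a single entry point: the client asks for a shape by name,
    # without knowing (or caring) which class is instantiated
    shapes = {'circle': Circle, 'triangle': Triangle}
    if name not in shapes:
        raise ValueError('Unknown shape: {}'.format(name))
    return shapes[name]()

shape = shape_factory('circle')
shape.draw()

Changing how shapes are created (caching, a different class per platform, and so on) now only requires touching shape_factory(), not its clients.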
Another use case worth mentioning is related to improving the performance and memory usage of an application. A Factory Method can improve performance and memory usage by creating new objects only if it is absolutely necessary [Zlobin13, page 28]. When we create objects using direct class instantiation, extra memory is allocated every time a new object is created (unless the class uses caching internally, which is usually not the case). We can see this in practice in the following code (file id.py), which creates two instances of the same class A and uses the id() function to compare their memory addresses. The addresses are also printed in the output so that we can inspect them. The fact that the memory addresses are different means that two distinct objects are created, as follows:

class A(object):
    pass

if __name__ == '__main__':
    a = A()
    b = A()
    print(id(a) == id(b))
    print(a, b)

Executing id.py on my computer gives the following output:

>>> python3 id.py
False
<__main__.A object at 0x7f5771de8f60> <__main__.A object at 0x7f5771df2208>

Note that the addresses you see if you execute the file are not the same as the ones I see, because they depend on the current memory layout and allocation. But the result must be the same: the two addresses should be different. There's one exception that happens if you write and execute the code in the Python Read-Eval-Print Loop (REPL) (the interactive prompt), but that's a REPL-specific optimization which does not happen normally.

Implementation

Data comes in many forms. There are two main file categories for storing/retrieving data: human-readable files and binary files. Examples of human-readable files are XML, Atom, YAML, and JSON. Examples of binary files are the .sq3 file format used by SQLite and the .mp3 file format used to listen to music. In this example, we will focus on two popular human-readable formats: XML and JSON. Although human-readable files are generally slower to parse than binary files, they make data exchange, inspection, and modification much easier. For this reason, it is advised to prefer working with human-readable files, unless there are other restrictions that do not allow it (mainly unacceptable performance and proprietary binary formats). In this problem, we have some input data stored in an XML and a JSON file, and we want to parse them and retrieve some information. At the same time, we want to centralize the client's connection to those (and all future) external services. We will use the Factory Method to solve this problem. The example focuses only on XML and JSON, but adding support for more services should be straightforward. First, let's take a look at the data files.
The XML file, person.xml, is based on the Wikipedia example [j.mp/wikijson] and contains information about individuals (firstName, lastName, gender, and so on), as follows:

<persons>
  <person>
    <firstName>John</firstName>
    <lastName>Smith</lastName>
    <age>25</age>
    <address>
      <streetAddress>21 2nd Street</streetAddress>
      <city>New York</city>
      <state>NY</state>
      <postalCode>10021</postalCode>
    </address>
    <phoneNumbers>
      <phoneNumber type="home">212 555-1234</phoneNumber>
      <phoneNumber type="fax">646 555-4567</phoneNumber>
    </phoneNumbers>
    <gender>
      <type>male</type>
    </gender>
  </person>
  <person>
    <firstName>Jimy</firstName>
    <lastName>Liar</lastName>
    <age>19</age>
    <address>
      <streetAddress>18 2nd Street</streetAddress>
      <city>New York</city>
      <state>NY</state>
      <postalCode>10021</postalCode>
    </address>
    <phoneNumbers>
      <phoneNumber type="home">212 555-1234</phoneNumber>
    </phoneNumbers>
    <gender>
      <type>male</type>
    </gender>
  </person>
  <person>
    <firstName>Patty</firstName>
    <lastName>Liar</lastName>
    <age>20</age>
    <address>
      <streetAddress>18 2nd Street</streetAddress>
      <city>New York</city>
      <state>NY</state>
      <postalCode>10021</postalCode>
    </address>
    <phoneNumbers>
      <phoneNumber type="home">212 555-1234</phoneNumber>
      <phoneNumber type="mobile">001 452-8819</phoneNumber>
    </phoneNumbers>
    <gender>
      <type>female</type>
    </gender>
  </person>
</persons>

The JSON file, donut.json, comes from the GitHub account of Adobe [j.mp/adobejson] and contains donut information (type, price/unit, that is, ppu, topping, and so on), as follows:

[
  {
    "id": "0001",
    "type": "donut",
    "name": "Cake",
    "ppu": 0.55,
    "batters": {
      "batter": [
        { "id": "1001", "type": "Regular" },
        { "id": "1002", "type": "Chocolate" },
        { "id": "1003", "type": "Blueberry" },
        { "id": "1004", "type": "Devil's Food" }
      ]
    },
    "topping": [
      { "id": "5001", "type": "None" },
      { "id": "5002", "type": "Glazed" },
      { "id": "5005", "type": "Sugar" },
      { "id": "5007", "type": "Powdered Sugar" },
      { "id": "5006", "type": "Chocolate with Sprinkles" },
      { "id": "5003", "type": "Chocolate" },
      { "id": "5004", "type": "Maple" }
    ]
  },
  {
    "id": "0002",
    "type": "donut",
    "name": "Raised",
    "ppu": 0.55,
    "batters": {
      "batter": [
        { "id": "1001", "type": "Regular" }
      ]
    },
    "topping": [
      { "id": "5001", "type": "None" },
      { "id": "5002", "type": "Glazed" },
      { "id": "5005", "type": "Sugar" },
      { "id": "5003", "type": "Chocolate" },
      { "id": "5004", "type": "Maple" }
    ]
  },
  {
    "id": "0003",
    "type": "donut",
    "name": "Old Fashioned",
    "ppu": 0.55,
    "batters": {
      "batter": [
        { "id": "1001", "type": "Regular" },
        { "id": "1002", "type": "Chocolate" }
      ]
    },
    "topping": [
      { "id": "5001", "type": "None" },
      { "id": "5002", "type": "Glazed" },
      { "id": "5003", "type": "Chocolate" },
      { "id": "5004", "type": "Maple" }
    ]
  }
]

We will use two libraries that are part of the Python distribution for working with XML and JSON: xml.etree.ElementTree and json, as follows:

import xml.etree.ElementTree as etree
import json

The JSONConnector class parses the JSON file and has a parsed_data() method that returns all data as a dictionary (dict).
The property decorator is used to make parsed_data() appear as a normal variable instead of a method, as follows:

class JSONConnector:
    def __init__(self, filepath):
        self.data = dict()
        with open(filepath, mode='r', encoding='utf-8') as f:
            self.data = json.load(f)

    @property
    def parsed_data(self):
        return self.data

The XMLConnector class parses the XML file and has a parsed_data() method that returns all data as an ElementTree object, as follows:

class XMLConnector:
    def __init__(self, filepath):
        self.tree = etree.parse(filepath)

    @property
    def parsed_data(self):
        return self.tree

The connection_factory() function is a Factory Method. It returns an instance of JSONConnector or XMLConnector depending on the extension of the input file path, as follows:

def connection_factory(filepath):
    if filepath.endswith('json'):
        connector = JSONConnector
    elif filepath.endswith('xml'):
        connector = XMLConnector
    else:
        raise ValueError('Cannot connect to {}'.format(filepath))
    return connector(filepath)

The connect_to() function is a wrapper of connection_factory(). It adds exception handling, as follows:

def connect_to(filepath):
    factory = None
    try:
        factory = connection_factory(filepath)
    except ValueError as ve:
        print(ve)
    return factory

The main() function demonstrates how the Factory Method design pattern can be used. The first part makes sure that exception handling is effective, as follows:

def main():
    sqlite_factory = connect_to('data/person.sq3')

The next part shows how to work with the XML files using the Factory Method. XPath is used to find all person elements that have the last name Liar. For each matched person, the basic name and phone number information is shown, as follows:

    xml_factory = connect_to('data/person.xml')
    xml_data = xml_factory.parsed_data
    liars = xml_data.findall(".//{}[{}='{}']".format('person', 'lastName', 'Liar'))
    print('found: {} persons'.format(len(liars)))
    for liar in liars:
        print('first name: {}'.format(liar.find('firstName').text))
        print('last name: {}'.format(liar.find('lastName').text))
        [print('phone number ({}):'.format(p.attrib['type']), p.text) for p in liar.find('phoneNumbers')]

The final part shows how to work with the JSON files using the Factory Method.
Here, there's no pattern matching, and therefore the name, price, and topping of all donuts are shown, as follows:

    json_factory = connect_to('data/donut.json')
    json_data = json_factory.parsed_data
    print('found: {} donuts'.format(len(json_data)))
    for donut in json_data:
        print('name: {}'.format(donut['name']))
        print('price: ${}'.format(donut['ppu']))
        [print('topping: {} {}'.format(t['id'], t['type'])) for t in donut['topping']]

For completeness, here is the complete code of the Factory Method implementation (factory_method.py):

import xml.etree.ElementTree as etree
import json

class JSONConnector:
    def __init__(self, filepath):
        self.data = dict()
        with open(filepath, mode='r', encoding='utf-8') as f:
            self.data = json.load(f)

    @property
    def parsed_data(self):
        return self.data

class XMLConnector:
    def __init__(self, filepath):
        self.tree = etree.parse(filepath)

    @property
    def parsed_data(self):
        return self.tree

def connection_factory(filepath):
    if filepath.endswith('json'):
        connector = JSONConnector
    elif filepath.endswith('xml'):
        connector = XMLConnector
    else:
        raise ValueError('Cannot connect to {}'.format(filepath))
    return connector(filepath)

def connect_to(filepath):
    factory = None
    try:
        factory = connection_factory(filepath)
    except ValueError as ve:
        print(ve)
    return factory

def main():
    sqlite_factory = connect_to('data/person.sq3')
    print()

    xml_factory = connect_to('data/person.xml')
    xml_data = xml_factory.parsed_data
    liars = xml_data.findall(".//{}[{}='{}']".format('person', 'lastName', 'Liar'))
    print('found: {} persons'.format(len(liars)))
    for liar in liars:
        print('first name: {}'.format(liar.find('firstName').text))
        print('last name: {}'.format(liar.find('lastName').text))
        [print('phone number ({}):'.format(p.attrib['type']), p.text) for p in liar.find('phoneNumbers')]
    print()

    json_factory = connect_to('data/donut.json')
    json_data = json_factory.parsed_data
    print('found: {} donuts'.format(len(json_data)))
    for donut in json_data:
        print('name: {}'.format(donut['name']))
        print('price: ${}'.format(donut['ppu']))
        [print('topping: {} {}'.format(t['id'], t['type'])) for t in donut['topping']]

if __name__ == '__main__':
    main()

Here is the output of this program:

>>> python3 factory_method.py
Cannot connect to data/person.sq3

found: 2 persons
first name: Jimy
last name: Liar
phone number (home): 212 555-1234
first name: Patty
last name: Liar
phone number (home): 212 555-1234
phone number (mobile): 001 452-8819

found: 3 donuts
name: Cake
price: $0.55
topping: 5001 None
topping: 5002 Glazed
topping: 5005 Sugar
topping: 5007 Powdered Sugar
topping: 5006 Chocolate with Sprinkles
topping: 5003 Chocolate
topping: 5004 Maple
name: Raised
price: $0.55
topping: 5001 None
topping: 5002 Glazed
topping: 5005 Sugar
topping: 5003 Chocolate
topping: 5004 Maple
name: Old Fashioned
price: $0.55
topping: 5001 None
topping: 5002 Glazed
topping: 5003 Chocolate
topping: 5004 Maple

Notice that although JSONConnector and XMLConnector have the same interface, what is returned by parsed_data is not handled in a uniform way. Different code must be used to work with each connector.
Although it would be nice to be able to use the same code for all connectors, this is, most of the time, not realistic, unless we use some kind of common mapping for the data, which is very often provided by external data providers. Assuming that you can use exactly the same code for handling the XML and JSON files, what changes are required to support a third format, for example, SQLite? Find an SQLite file or create your own and try it. As it is now, the code does not forbid direct instantiation of a connector. Is it possible to forbid this? Try doing it (hint: functions in Python can have nested classes; a sketch follows at the end of this article).

Summary

To learn more about design patterns in depth, the following books published by Packt Publishing (https://www.packtpub.com/) are recommended:

- Learning Python Design Patterns (https://www.packtpub.com/application-development/learning-python-design-patterns)
- Learning Python Design Patterns – Second Edition (https://www.packtpub.com/application-development/learning-python-design-patterns-second-edition)

Resources for Article:

Further resources on this subject:
- Recommending Movies at Scale (Python) [article]
- An In-depth Look at Ansible Plugins [article]
- Elucidating the Game-changing Phenomenon of the Docker-inspired Containerization Paradigm [article]
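As one possible sketch of the nested-class hint above (this is our own illustration, not the only solution), the connector classes can be moved inside the factory function, so that client code has no module-level names to instantiate directly:

import json
import xml.etree.ElementTree as etree

def connection_factory(filepath):
    # the connector classes exist only in this function's scope,
    # so they cannot be instantiated directly by client code
    class JSONConnector:
        def __init__(self, filepath):
            with open(filepath, mode='r', encoding='utf-8') as f:
                self.data = json.load(f)

        @property
        def parsed_data(self):
            return self.data

    class XMLConnector:
        def __init__(self, filepath):
            self.tree = etree.parse(filepath)

        @property
        def parsed_data(self):
            return self.tree

    if filepath.endswith('json'):
        return JSONConnector(filepath)
    elif filepath.endswith('xml'):
        return XMLConnector(filepath)
    raise ValueError('Cannot connect to {}'.format(filepath))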

Python Design Patterns in Depth: The Singleton Pattern

Packt
15 Feb 2016
14 min read
There are situations where you need to create only one instance of data throughout the lifetime of a program. This can be a class instance, a list, or a dictionary, for example. The creation of a second instance is undesirable, as it can result in logical errors or malfunctioning of the program. The design pattern that allows you to create only one instance of data is called singleton. In this article, you will learn about module-level, classic, and borg singletons; you'll also learn how they work, when to use them, and how to build a two-threaded web crawler that uses a singleton to access the shared resource.

Singleton is the best candidate when the requirements are as follows:

- Controlling concurrent access to a shared resource
- If you need a global point of access for the resource from multiple or different parts of the system
- When you need to have only one object

Some typical use cases of a singleton are:

- The logging class and its subclasses (a global point of access for the logging class to send messages to the log)
- Printer spooler (your application should only have a single instance of the spooler, in order to avoid conflicting requests for the same resource)
- Managing a connection to a database
- File manager
- Retrieving and storing information on external configuration files
- Read-only singletons storing some global states (user language, time, time zone, application path, and so on)

There are several ways to implement singletons. We will look at the module-level singleton, the classic singleton, and the borg singleton.

Module-level singleton

All modules are singletons by nature because of Python's module importing steps: check whether the module is already imported; if yes, return it; if not, find the module, initialize it, and return it. Initializing a module means executing its code, including all module-level assignments. When you import a module for the first time, all of the initialization is done; however, if you try to import the module a second time, Python returns the already initialized module. Thus, the initialization is not done again, and you get the previously imported module with all of its data. So, if you want to quickly make a singleton, use the following steps and keep the shared data as module attributes.

singleton.py:

only_one_var = "I'm only one var"

module1.py:

import singleton
print singleton.only_one_var
singleton.only_one_var += " after modification"
import module2

module2.py:

import singleton
print singleton.only_one_var

Here, if you import the singleton module and change the value of only_one_var in module1, module2 will see the changed variable. This approach is quick and sometimes is all that you need; however, we need to consider the following points:

- It's pretty error-prone. For example, if you happen to forget the global statement, variables local to the function will be created and the module's variables won't be changed, which is not what you want.
- It's ugly, especially if you have a lot of objects that should remain singletons.
- Such variables pollute the module namespace.
- They don't permit lazy allocation and initialization; all global variables will be loaded during the module import process.
- It's not possible to reuse the code, because you cannot use inheritance.
- There are no special methods and no object-oriented programming benefits at all.

Classic singleton

In the classic singleton in Python, we check whether an instance is already created.
If it is created, we return it; otherwise, we create a new instance, assign it to a class attribute, and return it. Let's try to create a dedicated singleton class:

class Singleton(object):
    def __new__(cls):
        if not hasattr(cls, 'instance'):
            cls.instance = super(Singleton, cls).__new__(cls)
        return cls.instance

Here, we override the special __new__ method, which is called before __init__, and check whether an instance has already been created. If not, we create a new one; otherwise, we return the already created instance. Let's check how it works:

>>> singleton = Singleton()
>>> another_singleton = Singleton()
>>> singleton is another_singleton
True
>>> singleton.only_one_var = "I'm only one var"
>>> another_singleton.only_one_var
I'm only one var

Try to subclass the Singleton class with another one:

class Child(Singleton):
    pass

If it's a successor of Singleton, all of its instances should also be instances of Singleton, thus sharing its state. But this doesn't work, as illustrated in the following code:

>>> child = Child()
>>> child is singleton
False
>>> child.only_one_var
AttributeError: Child instance has no attribute 'only_one_var'

To avoid this situation, the borg singleton is used.

Borg singleton

Borg is also known as monostate. In the borg pattern, all of the instances are different, but they share the same state. In the following code, the shared state is maintained in the _shared_state attribute, and all new instances of the Borg class will have this state, as defined in the __new__ method:

class Borg(object):
    _shared_state = {}

    def __new__(cls, *args, **kwargs):
        obj = super(Borg, cls).__new__(cls, *args, **kwargs)
        obj.__dict__ = cls._shared_state
        return obj

Generally, Python stores the instance state in the __dict__ dictionary and, when instantiated normally, every instance gets its own __dict__. But here, we deliberately assign the class variable _shared_state to all of the created instances. Here is how it works with subclassing:

class Child(Borg):
    pass

>>> borg = Borg()
>>> another_borg = Borg()
>>> borg is another_borg
False
>>> child = Child()
>>> borg.only_one_var = "I'm the only one var"
>>> child.only_one_var
I'm the only one var

So, despite the fact that you can't compare the objects by identity using the is statement, all child objects share the parent's state. If you want to have a class that is a descendant of the Borg class but has a different state, you can reset _shared_state as follows:

class AnotherChild(Borg):
    _shared_state = {}

>>> another_child = AnotherChild()
>>> another_child.only_one_var
AttributeError: AnotherChild instance has no attribute 'only_one_var'

Which type of singleton should be used is up to you. If you expect that your singleton will not be inherited, you can choose the classic singleton; otherwise, it's better to stick with borg.

Implementation in Python

As a practical example, we'll create a simple web crawler that scans a website you open with it, follows all the links that lead to the same website but to other pages, and downloads all of the images it finds. To do this, we'll need two functions: a function that scans a website for links that lead to other pages, in order to build a set of pages to visit, and a function that scans a page for images and downloads them. To make it quicker, we'll download the images in two threads.
These two threads should not interfere with each other's work: don't scan pages that another thread has already scanned, and don't download images that are already downloaded. So, a set with the downloaded images and the scanned web pages will be a shared resource for our application, and we'll keep it in a singleton instance. In this example, you will need a library for parsing and screen-scraping websites, named BeautifulSoup, and an HTTP client library, httplib2. It should be sufficient to install both with either of the following commands:

$ sudo pip install BeautifulSoup httplib2
$ sudo easy_install BeautifulSoup httplib2

First of all, we'll create a Singleton class. Let's use the classic singleton in this example:

import httplib2
import os
import re
import threading
import urllib
from urlparse import urlparse, urljoin
from BeautifulSoup import BeautifulSoup

class Singleton(object):
    def __new__(cls):
        if not hasattr(cls, 'instance'):
            cls.instance = super(Singleton, cls).__new__(cls)
        return cls.instance

It will return the singleton object to all parts of the code that request it. Next, we'll create a class for creating a thread. In this thread, we'll download images from