
How-To Tutorials - Web Development

1802 Articles

How to work with the Selenium IntelliJ IDEA plugin

Amey Varangaonkar
03 Apr 2018
3 min read
Most of the framework components you design and build will be customized to your application under test. However, there are many third-party tools and plugins available which you can use to provide better results processing, reporting, performance, and services to engineers using the framework. In this article, we cover one of the most popular plugins used with Selenium: the Selenium IntelliJ IDEA plugin.

IntelliJ IDEA Selenium plugin

When we covered building page object classes earlier, we discussed how to define the locators on a page for each WebElement or MobileElement using the @FindBy annotations. That required the user to use one of the inspectors or plugins to view the DOM structure and hand-code a robust locator that is cross-platform safe. When using CSS and XPath locators, the hierarchy of the element can get complex, and there is a greater chance of building invalid locators. So, Perfect Test has come up with a Selenium plugin for IntelliJ IDEA that will find and create locators on the fly. Before discussing some of the features of the plugin, let's review where it is located.

Sample project files

There are instructions on the www.perfect-test.com site for installing the plugin. Once that is done, users can create a new project using a sample template, which will auto-generate a series of template files. These files are generic "getting started" files, but you should still follow the structure and design of the framework as outlined in this book. Here is a quick screenshot of the auto-generated file structure of the sample project:

Once the plugin is enabled by simply clicking on the Selenium icon in the toolbar, users can use the Code Generate menu features to create code samples, Java methods, getter/setter methods, WebElements, copyrights for files, locators, and so on.

Generating element locators

The plugin has a nice feature for creating WebElement definitions, adding locators of choice, and validating them in the class. It provides a set of tooltips to tell the user what is incorrect in the syntax of the locator, which is helpful when creating CSS and XPath strings. Here is a screenshot of the locator strategy feature:

Once the WebElement structure is built into the page object class, you can capture and verify the locator, and the plugin will indicate an error with a red underline. When hovering over the invalid syntax, it provides a tooltip and a lightbulb icon to the left of it, where users can use the Check Element Existence on page and Fix Locator Popup features. These are very useful for quickly finding syntax errors and defining locators. Here is a screenshot of the Check Element Existence on page feature:

Here is a screenshot of the Fix Locator Popup feature:

The Selenium IntelliJ plugin deals mostly with creating locators and the differences between CSS and XPath syntax. The tool also provides drop-down lists of examples where users can pick and choose how to build the queries. It's a great way to get started using Selenium to build real page object classes, and it provides a tool to validate complex CSS and XPath structures in locators. Apart from the Selenium IntelliJ plugin, there are other third-party APIs such as the HTML Publisher Plugin, the BrowserMob Proxy Plugin, the ExtentReports Reporter API, and the Sauce Labs Test Cloud services.

This article is an excerpt taken from the book Selenium Framework Design in Data-Driven Testing by Carl Cocchiaro.
It presents a step-by-step approach to design and build a data-driven test framework using Selenium WebDriver, Java, and TestNG.  
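To make the @FindBy discussion concrete, here is a minimal page object sketch of the kind the plugin helps you build and validate. The class name, locators, and page it targets are illustrative only and are not taken from the plugin's sample project.

import org.openqa.selenium.WebDriver;
import org.openqa.selenium.WebElement;
import org.openqa.selenium.support.FindBy;
import org.openqa.selenium.support.PageFactory;

// Hypothetical page object; the locators below are examples of the CSS and
// XPath strings the plugin can generate and validate for you.
public class LoginPage {

    @FindBy(css = "input#username")
    private WebElement usernameField;

    @FindBy(xpath = "//button[@type='submit']")
    private WebElement loginButton;

    public LoginPage(WebDriver driver) {
        // Wires the @FindBy annotations to the live driver session.
        PageFactory.initElements(driver, this);
    }

    public void logIn(String username) {
        usernameField.sendKeys(username);
        loginButton.click();
    }
}

An invalid locator string in either annotation is exactly the kind of mistake the Check Element Existence on page and Fix Locator Popup features are designed to catch before the test ever runs.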


Hands on with Service Fabric

Packt
06 Apr 2017
12 min read
In this article by Rahul Rai and Namit Tanasseri, authors of the book Microservices with Azure, we see that Service Fabric as a platform supports multiple programming models, each of which is best suited to specific scenarios. Each programming model offers a different level of integration with the underlying management framework. Better integration leads to more automation and lower overheads. Picking the right programming model for your application or services is the key to efficiently utilizing the capabilities of Service Fabric as a hosting platform. Let's take a deeper look into these programming models.

To start with, let's look at the least integrated hosting option: Guest Executables. Native Windows applications or application code using Node.js or Java can be hosted on Service Fabric as a guest executable. These executables can be packaged and pushed to a Service Fabric cluster like any other service. As the cluster manager has minimal knowledge about the executable, features like custom health monitoring, load reporting, state store, and endpoint registration cannot be leveraged by the hosted application. However, from a deployment standpoint, a guest executable is treated like any other service. This means that for a guest executable, the Service Fabric cluster manager takes care of high availability, application lifecycle management, rolling updates, automatic failover, high-density deployment, and load balancing.

As an orchestration service, Service Fabric is responsible for deploying and activating an application or application services within a cluster. It is also capable of deploying services within a container image. This programming model is addressed as Guest Containers. The concept of containers is best explained as an implementation of operating-system-level virtualization: containers are encapsulated, deployable components running on isolated process boundaries and sharing the same kernel. Deployed applications and their runtime dependencies are bundled within the container with an isolated view of all operating system constructs. This makes containers highly portable and secure. The guest container programming model is usually chosen when this level of isolation is required for the application. As containers don't have to boot an operating system, they have a fast boot-up time and are comparatively small in size.

A prime benefit of using Service Fabric as a platform is the fact that it supports heterogeneous operating environments. Service Fabric supports two types of containers to be deployed as guest containers: Docker containers on Linux and Windows Server containers. Container images for Docker containers are stored in Docker Hub, and the Docker APIs are used to create and manage the containers deployed on the Linux kernel.

Service Fabric supports two different types of containers in Windows Server 2016 with different levels of isolation: Windows Server containers and Windows Hyper-V containers. Windows Server containers are similar to Docker containers in terms of the isolation they provide. Windows Hyper-V containers offer a higher degree of isolation and security by not sharing the operating system kernel across instances. They are ideally used when a higher level of security isolation is required, such as for systems that must host hostile multi-tenant workloads. The following figure illustrates the different isolation levels achieved by using these containers.
Container isolation levels Service Fabric application model treats containers as an application host which can in turn host service replicas. There are three ways of utilizing containers within a Service Fabric application mode. Existing applications like Node.js, JavaScript application of other executables can be hosted within a container and deployed on Service Fabric as a Guest Container. A Guest Container is treated similar to a Guest Executable by Service Fabric runtime. The second scenario supports deploying stateless services inside a container hosted on Service Fabric. Stateless services using Reliable Services and Reliable actors can be deployed within a container. The third option is to deploy stateful services in containers hosted on Service Fabric. This model also supports Reliable Services and Reliable Actors. Service Fabric offers several features to manage containerized Microservices. These include container deployment and activation, resource governance, repository authentication, port mapping, container discovery and communication and ability to set environment variables. While containers offer a good level of isolation it is still heavy in terms of deployment footprint. Service Fabric offers a simpler, powerful programming model to develop your services which they call Reliable Services. Reliable services let you develop stateful and stateless services which can be directly deployed on Service Fabric clusters. For stateful services, the state can be stored close to the compute by using Reliable Collections. High availability of the state store and replication of the state is taken care by the Service Fabric cluster management services. This contributes substantially to the performance of the system by improving the latency of data access. Reliable services come with a built-in pluggable communication model which supports HTTP with Web API, WebSockets and custom TCP protocols out of the box. A Reliable service is addressed as stateless if it does not maintain any state within it or if the scope of the state stored is limited to a service call and is entirely disposable. This means that a stateless service does not require to persist, synchronize or replicate state. A good example for this service is a weather service like MSN weather service. A weather service can be queried to retrieve weather conditions associated with a specific geographical location. The response is totally based on the parameters supplied to the service. This service does not store any state. Although stateless services are simpler to implement, most of the services in real life are not stateless. They either store state in an external state store or an internal one. Web front end hosting APIs or web applications are good use cases to be hosted as stateless services. A stateful service persists states. The outcome of a service call made to a stateful service is usually influenced by the state persisted by the service. A service exposed by a bank to return the balance on an account is a good example for a stateful service. The state may be stored in an external data store such as Azure SQL Database, Azure Blobs or Azure Table store. Most services prefer to store the state externally considering the challenges around reliability, availability, scalability and consistency of the data store. With Service Fabric, state can be stored close to the compute by using reliable collections. To makes things more lightweight, Service Fabric also offers a programming model based on Virtual actor pattern. 
This programming model is called Reliable Actors. The Reliable Actors programming model is built on top of Reliable Services, which guarantees the scalability and reliability of the services. An actor can be defined as an isolated, independent unit of compute and state with single-threaded execution. Actors can be created, managed, and disposed of independently of each other, and a large number of actors can coexist and execute at a time. Service Fabric Reliable Actors are a good fit for systems that are highly distributed and dynamic by nature. Every actor is defined as an instance of an actor type, in the same way that an object is an instance of a class, and each actor is uniquely identified by an actor ID. The lifetime of Service Fabric actors is not tied to their in-memory state. As a result, actors are automatically created the first time a request for them is made, and the Reliable Actors garbage collector takes care of disposing of unused actors in memory. Now that we understand the programming models, let's take a look at how the services deployed on Service Fabric are discovered and how the communication between services takes place.

Service Fabric discovery and communication

An application built on top of Microservices is usually composed of multiple services, each of which runs multiple replicas. Each service is specialized in a specific task. To achieve an end-to-end business use case, multiple services will need to be stitched together. This requires services to communicate with each other. A simple example would be a web front-end service communicating with middle-tier services, which in turn connect to back-end services to handle a single user request. Some of these middle-tier services can also be invoked by external applications. Services deployed on Service Fabric are distributed across multiple nodes in a cluster of virtual machines, and the services can move across nodes dynamically. This distribution of services can either be triggered by a manual action or be the result of the Service Fabric cluster manager re-balancing services to achieve optimal resource utilization. This makes communication a challenge, as services are not tied to a particular machine. Let's understand how Service Fabric solves this challenge for its consumers.

Service protocols

Service Fabric, as a hosting platform for Microservices, does not interfere in the implementation of the service. On top of this, it also lets services decide on the communication channels they want to open. These channels are addressed as service endpoints. During service initiation, Service Fabric provides the opportunity for the services to set up the endpoints for incoming requests on any protocol or communication stack. The endpoints are defined according to common industry standards, that is, IP:port. It is possible that multiple service instances share a single host process, in which case they either have to use different ports or a port-sharing mechanism. This ensures that every service instance is uniquely addressable.

Service endpoints

Service discovery

Service Fabric can rebalance services deployed on a cluster as a part of orchestration activities. This can be caused by resource balancing activities, failovers, upgrades, scale-outs, or scale-ins, and it will result in a change in service endpoint addresses as the services move across different virtual machines.

Service distribution

The Service Fabric Naming Service is responsible for abstracting this complexity from the consuming service or application.
Naming service takes care of service discovery and resolution. All service instances in Services Fabric are identified by a unique URL like fabric:/MyMicroServiceApp/AppService1. This name stays constant across the lifetime of the service although the endpoint addresses which physically host the service may change. Internally, Service Fabric manages a map between the service names and the physical location where the service is hosted. This is similar to the DNS service which is used to resolve Website URLs to IP addresses. The following figure illustrates the name resolution process for a service hosted on Service Fabric: Name resolution Connections from applications external to Service Fabric Service communications to or between services hosted in Service Fabric can be categorized as internal or external. Internal communication among services hosted on Service Fabric is easily achieved using the Naming Service. External communication, originated from an application or a user outside the boundaries of Service Fabric will need some extra work. To understand how this works, let's dive deeper in to the logical network layout of a typical Service Fabric cluster. Service Fabric cluster is always placed behind an Azure Load Balancer. The Load Balancer acts like a gateway to all traffic which needs to pass to the Service Fabric cluster. The Load Balancer is aware of every post open on every node of a cluster. When a request hits the Load Balancer, it identifies the port the request is looking for and randomly routes the request to one of the nodes which has the requested port open. The Load Balancer is not aware of the services running on the nodes or the ports associated with the services. The following figure illustrates request routing in action. Request routing Configuring ports and protocols The protocol and the ports to be opened by a Service Fabric cluster can be easily configured through the portal. Let's take an example to understand the configuration in detail. If we need a web application to be hosted on a Service Fabric cluster which should have port 80 opened on HTTP to accept incoming traffic, the following steps should be performed. Configuring service manifest Once a service listening to port 80 is authored, we need to configure port 80 in the service manifest to open a listener in the service. This can be done by editing the Service Manifest.xml. <Resources> <Endpoints> <Endpoint Name="WebEndpoint" Protocol="http" Port="80" /> </Endpoints> </Resources> Configuring custom end point On the Service Fabric cluster, configure port 80 as a custom endpoint. This can be easily done through the Azure Management portal. Configuring custom port Configure Azure Load Balancer Once the cluster is configured and created, the Azure Load Balancer can be instructed to forward the traffic to port 80. If the Service Fabric cluster is created through the portal, this step is automatically taken care for every port which is configured on the cluster configuration. Configuring Azure Load Balancer Configure health check Azure Load Balancer probes the ports on the nodes for their availability to ensure reliability of the service. The probes can be configured on the Azure portal. This is an optional step as a default probe configuration is applied for each endpoint when a cluster is created. Configuring probe Built-in Communication API Service Fabric offers many built-in communication options to support inter service communications. Service Remoting is one of them. 
This option allows strongly typed remote procedure calls between Reliable Services and Reliable Actors. It is very easy to set up and operate with, as Service Remoting handles the resolution of service addresses, connections, retries, and error handling. Service Fabric also supports HTTP for language-agnostic communication; the Service Fabric SDK exposes the ICommunicationClient and ServicePartitionClient classes for service resolution, HTTP connections, and retry loops. WCF is also supported by Service Fabric as a communication channel, to enable legacy workloads to be hosted on it. The SDK exposes WcfCommunicationListener for the server side, and the WcfCommunicationClient and ServicePartitionClient classes for the client, to ease programming hurdles.
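As a rough illustration of the Service Remoting option described above, the following sketch shows a strongly typed service contract and a client-side proxy call. The interface, method, and fabric:/ URI are hypothetical placeholders, and the snippet assumes the remoting types shipped in the SDK's Microsoft.ServiceFabric.Services.Remoting packages; treat it as a sketch rather than production code.

using System;
using System.Threading.Tasks;
using Microsoft.ServiceFabric.Services.Remoting;
using Microsoft.ServiceFabric.Services.Remoting.Client;

// Hypothetical contract shared between the service and its callers.
public interface IInventoryService : IService
{
    Task<int> GetStockLevelAsync(string productId);
}

public class InventoryClient
{
    public async Task<int> ReadStockAsync(string productId)
    {
        // The proxy resolves the service address through the Naming Service
        // and then issues a strongly typed remote call with built-in retries.
        IInventoryService proxy = ServiceProxy.Create<IInventoryService>(
            new Uri("fabric:/MyMicroServiceApp/InventoryService"));

        return await proxy.GetStockLevelAsync(productId);
    }
}

The service side would expose the same interface through a remoting listener, which is exactly the address-resolution and retry plumbing described in the paragraph above.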


Understanding WebSockets and Server-sent Events in Detail

Packt
29 Oct 2013
10 min read
(For more resources related to this topic, see here.) Encoders and decoders in Java API for WebSockets As seen in the previous chapter, the class-level annotation @ServerEndpoint indicates that a Java class is a WebSocket endpoint at runtime. The value attribute is used to specify a URI mapping for the endpoint. Additionally the user can add encoder and decoder attributes to encode application objects into WebSocket messages and WebSocket messages into application objects. The following table summarizes the @ServerEndpoint annotation and its attributes: Annotation Attribute Description @ServerEndpoint   This class-level annotation signifies that the Java class is a WebSockets server endpoint.   value The value is the URI with a leading '/.'   encoders The encoders contains a list of Java classes that act as encoders for the endpoint. The classes must implement the Encoder interface.   decoders The decoders contains a list of Java classes that act as decoders for the endpoint. The classes must implement the Decoder interface.   configurator The configurator attribute allows the developer to plug in their implementation of ServerEndpoint.Configurator that is used when configuring the server endpoint.   subprotocols The sub protocols attribute contains a list of sub protocols that the endpoint can support. In this section we shall look at providing encoder and decoder implementations for our WebSockets endpoint. The preceding diagram shows how encoders will take an application object and convert it to a WebSockets message. Decoders will take a WebSockets message and convert to an application object. Here is a simple example where a client sends a WebSockets message to a WebSockets java endpoint that is annotated with @ServerEndpoint and decorated with encoder and decoder class. The decoder will decode the WebSockets message and send back the same message to the client. The encoder will convert the message to a WebSockets message. This sample is also included in the code bundle for the book. Here is the code to define ServerEndpoint with value for encoders and decoders: @ServerEndpoint(value="/book", encoders={MyEncoder.class}, decoders = {MyDecoder.class} ) public class BookCollection { @OnMessage public void onMessage(Book book,Session session) { try { session.getBasicRemote().sendObject(book); } catch (Exception ex) { ex.printStackTrace(); } } @OnOpen public void onOpen(Session session) { System.out.println("Opening socket" +session.getBasicRemote() ); } @OnClose public void onClose(Session session) { System.out.println("Closing socket" + session.getBasicRemote()); } } In the preceding code snippet, you can see the class BookCollection is annotated with @ServerEndpoint. The value=/book attribute provides URI mapping for the endpoint. The @ServerEndpoint also takes the encoders and decoders to be used during the WebSocket transmission. Once a WebSocket connection has been established, a session is created and the method annotated with @OnOpen will be called. When the WebSocket endpoint receives a message, the method annotated with @OnMessage will be called. In our sample the method simply sends the book object using the Session.getBasicRemote() which will get a reference to the RemoteEndpoint and send the message synchronously. Encoders can be used to convert a custom user-defined object in a text message, TextStream, BinaryStream, or BinaryMessage format. 
An implementation of an encoder class for text messages is as follows: public class MyEncoder implements Encoder.Text<Book> { @Override public String encode(Book book) throws EncodeException { return book.getJson().toString(); } } As shown in the preceding code, the encoder class implements Encoder.Text<Book>. There is an encode method that is overridden and which converts a book and sends it as a JSON string. (More on JSON APIs is covered in detail in the next chapter) Decoders can be used to decode WebSockets messages in custom user-defined objects. They can decode in text, TextStream, and binary or BinaryStream format. Here is a code for a decoder class: public class MyDecoder implements Decoder.Text<Book> { @Override public Book decode(String string) throws DecodeException { javax.json.JsonObject jsonObject = javax.json.Json.createReader(new StringReader(string)).readObject(); return new Book(jsonObject); } @Override public boolean willDecode(String string) { try { javax.json.Json.createReader(new StringReader(string)).readObject(); return true; } catch (Exception ex) { } return false; } In the preceding code snippet, the Decoder.Text needs two methods to be overridden. The willDecode() method checks if it can handle this object and decode it. The decode() method decodes the string into an object of type Book by using the JSON-P API javax.json.Json.createReader(). The following code snippet shows the user-defined class Book: public class Book { public Book() {} JsonObject jsonObject; public Book(JsonObject json) { this.jsonObject = json; } public JsonObject getJson() { return jsonObject; } public void setJson(JsonObject json) { this.jsonObject = json; } public Book(String message) { jsonObject = Json.createReader(new StringReader(message)).readObject(); } public String toString () { StringWriter writer = new StringWriter(); Json.createWriter(writer).write(jsonObject); return writer.toString(); } } The Book class is a user-defined class that takes the JSON object sent by the client. Here is an example of how the JSON details are sent to the WebSockets endpoints from JavaScript. var json = JSON.stringify({ "name": "Java 7 JAX-WS Web Services", "author":"Deepak Vohra", "isbn": "123456789" }); function addBook() { websocket.send(json); } The client sends the message using websocket.send() which will cause the onMessage() of the BookCollection.java to be invoked. The BookCollection.java will return the same book to the client. In the process, the decoder will decode the WebSockets message when it is received. To send back the same Book object, first the encoder will encode the Book object to a WebSockets message and send it to the client. The Java WebSocket Client API WebSockets and Server-sent Events , covered the Java WebSockets client API. Any POJO can be transformed into a WebSockets client by annotating it with @ClientEndpoint. Additionally the user can add encoders and decoders attributes to the @ClientEndpoint annotation to encode application objects into WebSockets messages and WebSockets messages into application objects. The following table shows the @ClientEndpoint annotation and its attributes: Annotation Attribute Description @ClientEndpoint   This class-level annotation signifies that the Java class is a WebSockets client that will connect to a WebSockets server endpoint.   value The value is the URI with a leading /.   encoders The encoders contain a list of Java classes that act as encoders for the endpoint. The classes must implement the encoder interface.   
decoders The decoders contain a list of Java classes that act as decoders for the endpoint. The classes must implement the decoder interface.   configurator The configurator attribute allows the developer to plug in their implementation of ClientEndpoint.Configurator, which is used when configuring the client endpoint.   subprotocols The sub protocols attribute contains a list of sub protocols that the endpoint can support. Sending different kinds of message data: blob/binary Using JavaScript we can traditionally send JSON or XML as strings. However, HTML5 allows applications to work with binary data to improve performance. WebSockets supports two kinds of binary data Binary Large Objects (blob) arraybuffer A WebSocket can work with only one of the formats at any given time. Using the binaryType property of a WebSocket, you can switch between using blob or arraybuffer: websocket.binaryType = "blob"; // receive some blob data websocket.binaryType = "arraybuffer"; // now receive ArrayBuffer data The following code snippet shows how to display images sent by a server using WebSockets. Here is a code snippet for how to send binary data with WebSockets: websocket.binaryType = 'arraybuffer'; The preceding code snippet sets the binaryType property of websocket to arraybuffer. websocket.onmessage = function(msg) { var arrayBuffer = msg.data; var bytes = new Uint8Array(arrayBuffer); var image = document.getElementById('image'); image.src = 'data:image/png;base64,'+encode(bytes); } When the onmessage is called the arrayBuffer is initialized to the message.data. The Uint8Array type represents an array of 8-bit unsigned integers. The image.src value is in line using the data URI scheme. Security and WebSockets WebSockets are secured using the web container security model. A WebSockets developer can declare whether the access to the WebSocket server endpoint needs to be authenticated, who can access it, or if it needs an encrypted connection. A WebSockets endpoint which is mapped to a ws:// URI is protected under the deployment descriptor with http:// URI with the same hostname,port path since the initial handshake is from the HTTP connection. So, WebSockets developers can assign an authentication scheme, user roles, and a transport guarantee to any WebSockets endpoints. We will take the same sample as we saw in , WebSockets and Server-sent Events , and make it a secure WebSockets application. Here is the web.xml for a secure WebSocket endpoint: <web-app version="3.0" xsi_schemaLocation="http://java.sun.com/xml/ns/javaee http://java.sun.com/xml/ns/javaee/web-app_3_0.xsd"> <security-constraint> <web-resource-collection> <web-resource-name>BookCollection</web-resource-name> <url-pattern>/index.jsp</url-pattern> <http-method>PUT</http-method> <http-method>POST</http-method> <http-method>DELETE</http-method> <http-method>GET</http-method> </web-resource-collection> <user-data-constraint> <description>SSL</description> <transport-guarantee>CONFIDENTIAL</transport-guarantee> </user-data-constraint> </security-constraint> </web-app> As you can see in the preceding snippet, we used <transport-guarantee>CONFIDENTIAL</transport-guarantee>. The Java EE specification followed by application servers provides different levels of transport guarantee on the communication between clients and application server. 
The three levels are:

- Data Confidentiality (CONFIDENTIAL): We use this level to guarantee that all communication between the client and server goes through the SSL layer and connections won't be accepted over a non-secure channel.
- Data Integrity (INTEGRAL): We can use this level when full encryption is not required, but we want our data to be transmitted to and from a client in such a way that, if anyone changed the data, we could detect the change.
- Any type of connection (NONE): We can use this level to force the container to accept connections on both HTTP and HTTPS.

The following steps should be followed for setting up SSL and running our sample to show a secure WebSockets application deployed in GlassFish.

1. Generate the server certificate:

keytool -genkey -alias server-alias -keyalg RSA -keypass changeit -storepass changeit -keystore keystore.jks

2. Export the generated server certificate in keystore.jks into the file server.cer:

keytool -export -alias server-alias -storepass changeit -file server.cer -keystore keystore.jks

3. Create the trust-store file cacerts.jks and add the server certificate to the trust store:

keytool -import -v -trustcacerts -alias server-alias -file server.cer -keystore cacerts.jks -keypass changeit -storepass changeit

4. Change the following JVM options so that they point to the location and name of the new keystore. Add this in domain.xml under java-config:

<jvm-options>-Djavax.net.ssl.keyStore=${com.sun.aas.instanceRoot}/config/keystore.jks</jvm-options>
<jvm-options>-Djavax.net.ssl.trustStore=${com.sun.aas.instanceRoot}/config/cacerts.jks</jvm-options>

5. Restart GlassFish.

If you go to https://localhost:8181/helloworld-ws/, you can see the secure WebSocket application. Here is how the headers look under Chrome Developer Tools: open the Chrome browser, click on View and then on Developer Tools, click on Network, select book under the element name, and click on Frames. As you can see in the preceding screenshot, since the application is secured using SSL, the WebSockets URI contains wss://, which means WebSockets over SSL.

So far we have seen the encoders and decoders for WebSockets messages. We also covered how to send binary data using WebSockets. Additionally, we have demonstrated a sample of how to secure a WebSockets-based application. We shall now cover the best practices for WebSocket-based applications.
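To round out the @ClientEndpoint table above, here is a minimal client sketch that connects to the book endpoint and prints whatever the server echoes back. The host, context root, and class name are assumptions, and error handling is omitted for brevity.

import java.net.URI;
import javax.websocket.ClientEndpoint;
import javax.websocket.ContainerProvider;
import javax.websocket.OnMessage;
import javax.websocket.Session;
import javax.websocket.WebSocketContainer;

@ClientEndpoint
public class BookClient {

    @OnMessage
    public void onMessage(String message) {
        // Called when the BookCollection endpoint echoes the book back.
        System.out.println("Received: " + message);
    }

    public static void main(String[] args) throws Exception {
        WebSocketContainer container = ContainerProvider.getWebSocketContainer();
        // The ws:// URI is a placeholder for wherever the endpoint is deployed.
        Session session = container.connectToServer(
                BookClient.class, URI.create("ws://localhost:8080/app/book"));
        session.getBasicRemote().sendText(
                "{\"name\":\"Java 7 JAX-WS Web Services\",\"author\":\"Deepak Vohra\",\"isbn\":\"123456789\"}");
        Thread.sleep(1000); // crude wait for the echo before closing
        session.close();
    }
}

A client like this could also declare encoders and decoders on the annotation, mirroring the server-side BookCollection class shown earlier.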


How to Build and Deploy a Node App with Docker

John Oerter
20 Sep 2016
7 min read
How many times have you deployed your app that was working perfectly in your local environment to production, only to see it break? Whether it was directly related to the bug or feature you were working on, or another random issue entirely, this happens all too often for most developers. Errors like this not only slow you down, but they're also embarrassing. Why does this happen? Usually, it's because your development environment on your local machine is different from the production environment you're deploying to. The tenth factor of the Twelve-Factor App is Dev/prod parity. This means that your development, staging, and production environments should be as similar as possible. The authors of the Twelve-Factor App spell out three "gaps" that can be present. They are: The time gap: A developer may work on code that takes days, weeks, or even months to go into production. The personnel gap: Developers write code, ops engineers deploy it. The tools gap: Developers may be using a stack like Nginx, SQLite, and OS X, while the production deployment uses Apache, MySQL, and Linux. (Source) In this post, we will mostly focus on the tools gap, and how to bridge that gap in a Node application with Docker. The Tools Gap In the Node ecosystem, the tools gap usually manifests itself either in differences in Node and npm versions, or differences in package dependency versions. If a package author publishes a breaking change in one of your dependencies or your dependencies' dependencies, it is entirely possible that your app will break on the next deployment (assuming you reinstall dependencies with npm install on every deployment), while it runs perfectly on your local machine. Although you can work around this issue using tools like npm shrinkwrap, adding Docker to the mix will streamline your deployment life cycle and minimize broken deployments to production. Why Docker? Docker is unique because it can be used the same way in development and production. When you enable the architecture of your app to run inside containers, you can easily scale out and create small containers that can be composed together to make one awesome system. Then, you can mimic this architecture in development so you never have to guess how your app will behave in production. In regards to the time gap and the personnel gap, Docker makes it easier for developers to automate deployments, thereby decreasing time to production and making it easier for full-stack teams to own deployments. Tools and Concepts When developing inside Docker containers, the two most important concepts are docker-compose and volumes. docker-compose helps define mulit-container environments and the ability to run them with one command. Here are some of the more often used docker-compose commands: docker-compose build: Builds images for services defined in docker-compose.yml docker-compose up: Creates and starts services. This is the same as running docker-compose create && docker-compose start docker-compose run: Runs a one-off command inside a container Volumes allow you to mount files from the host machine into the container. When the files on your host machine change, they change inside the container as well. This is important so that we don't have to constantly rebuild containers during development every time we make a change. You can also use a tool like node-mon to automatically restart the node app on changes. Let's walk through some tips and tricks with developing Node apps inside Docker containers. 
Set up Dockerfile and docker-compose.yml

When you start a new project with Docker, you'll first want to define a barebones Dockerfile and docker-compose.yml to get you started. Here's an example Dockerfile:

FROM node:6.2.1

RUN useradd --user-group --create-home --shell /bin/false app-user

ENV HOME=/home/app-user

USER app-user
WORKDIR $HOME/app

This Dockerfile displays two best practices:

- Favor exact version tags over floating tags such as latest. Node releases new versions often these days, and you don't want to implicitly upgrade when building your container on another machine. By specifying a version such as 6.2.1, you ensure that anyone who builds the image will always be working from the same Node version.
- Create a new user to run the app inside the container. Without this step, everything would run under root in the container. You certainly wouldn't do that on a physical machine, so don't do it in Docker containers either.

Here's an example starter docker-compose.yml:

web:
  build: .
  volumes:
    - .:/home/app-user/app

Pretty simple, right? Here we are telling Docker to build the web service based on our Dockerfile and create a volume from our current host directory to /home/app-user/app inside the container. This simple setup lets you build the container with docker-compose build and then run bash inside it with docker-compose run --rm web /bin/bash. Now, it's essentially the same as if you were SSH'd into a remote server or working off a VM, except that any file you create inside the container will be on your host machine and vice versa. With that in mind, you can bootstrap your Node app from inside your container using npm init -y and npm shrinkwrap. Then, you can install any modules you need, such as Express.

Install node modules on build

With that done, we need to update our Dockerfile to install dependencies from npm when the image is built. Here is the updated Dockerfile:

FROM node:6.2.1

RUN useradd --user-group --create-home --shell /bin/false app-user

ENV HOME=/home/app-user
COPY package.json npm-shrinkwrap.json $HOME/app/
RUN chown -R app-user:app-user $HOME/*

USER app-user
WORKDIR $HOME/app
RUN npm install

Notice that we had to change the ownership of the copied files to app-user. This is because files copied into a container are automatically owned by root.

Add a volume for the node_modules directory

We also need to make an update to our docker-compose.yml to make sure that our modules are installed inside the container properly.

web:
  build: .
  volumes:
    - .:/home/app-user/app
    - /home/app-user/app/node_modules

Without adding a data volume to /home/app-user/app/node_modules, the node_modules wouldn't exist at runtime in the container, because our host directory, which won't contain the node_modules directory, would be mounted and hide the node_modules directory that was created when the container was built. For more information, see this Stack Overflow post.

Running your app

Once you've got an entry point to your app ready to go, simply add it as a CMD in your Dockerfile:

CMD ["node", "index.js"]

This will automatically start your app on docker-compose up. Running tests inside your container is easy as well:

docker-compose run --rm web npm test

You could easily hook this into CI.

Production

Now going to production with your Docker-powered Node app is a breeze! Just use docker-compose again. You will probably want to define another docker-compose.yml that is especially written for production use. This means removing volumes, binding to different ports, setting NODE_ENV=production, and so on.
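Purely as an illustration of what such a file might contain, here is a hypothetical docker-compose.production.yml that keeps the port mapping, sets NODE_ENV, and leaves out the development volumes; the exact contents will depend on your infrastructure.

# docker-compose.production.yml (illustrative, not from the original article)
web:
  build: .
  ports:
    - '3000:3000'
  environment:
    - NODE_ENV=production
  # No host volumes here: production runs the code baked into the image at build time.

One thing to keep in mind is that when files are merged with -f, volume entries from the base file are merged in rather than dropped, which is why many teams keep the base file lean and add development-only volumes through an override file instead.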
Once you have a production config file, you can tell docker-compose to use it, like so: docker-compose -f docker-compose.yml -f docker-compose.production.yml up The -f lets you specify a list of files that are merged in the order specified. Here is a complete Dockerfile and docker-compose.yml for reference: # Dockerfile FROM node:6.2.1 RUN useradd --user-group --create-home --shell /bin/false app-user ENV HOME=/home/app-user COPY package.json npm-shrinkwrap.json $HOME/app/ RUN chown -R app-user:app-user $HOME/* USER app-user WORKDIR $HOME/app RUN npm install CMD ["node", "index.js"] # docker-compose.yml web: build: . ports: - '3000:3000' volumes: - .:/home/app-user/app - /home/app-user/app/node_modules About the author John Oerter is a software engineer from Omaha, Nebraska, USA. He has a passion for continuous improvement and learning in all areas of software development, including Docker, JavaScript, and C#. He blogs here.
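For reference, the CMD instruction above expects an index.js entry point, which the article never shows. A minimal, purely hypothetical one that matches the '3000:3000' port mapping could look like this, assuming Express was installed during the bootstrap step.

// index.js (illustrative entry point, not part of the original article)
var express = require('express');
var app = express();

app.get('/', function (req, res) {
  res.send('Hello from a Dockerized Node app');
});

app.listen(3000, function () {
  console.log('Listening on port 3000');
});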


Chart Model and Draggable and Droppable Directives

Packt
06 Jul 2017
9 min read
In this article by Sudheer Jonna and Oleg Varaksin, the author of the book Learning Angular UI Development with PrimeNG, we will see how to work with the chart model and learn about draggable and droppable directives. (For more resources related to this topic, see here.) Working with the chart model The chart component provides a visual representation of data using chart on a web page. PrimeNG chart components are based on charts.js 2.x library (as a dependency), which is a HTML5 open source library. The chart model is based on UIChart class name and it can be represented with element name as p-chart. The chart components will work efficiently by attaching the chart model file (chart.js) to the project root folder entry point. For example, in this case it would be index.html file. It can be configured as either CDN resource, local resource, or CLI configuration. CDN resource configuration: <script src="https://cdnjs.cloudflare.com/ajax/libs/Chart.js/2.5.0/Chart.bu ndle.min.js"></script> Angular CLI configuration: "scripts": [ "../node_modules/chart.js/dist/Chart.js", //..others ] More about the chart configuration and options will be available in the official documentation of the chartJS library (http://www.chartjs.org/). Chart types The chart type is defined through the type property. It supports six types of charts with an options such as pie, bar, line, doughnut, polarArea, and radar. Each type has it's own format of data and it can be supplied through the data property. For example, in doughnut chart, the type should refer to doughnut and the data property should bind to the data options as shown here: <p-chart type="doughnut" [data]="doughnutdata"></p-chart> The component class has to define data the options with labels and datasets as follows: this.doughnutdata = { labels: ['PrimeNG', 'PrimeUI', 'PrimeReact'], datasets: [ { data: [3000, 1000, 2000], backgroundColor: [ "#6544a9", "#51cc00", "#5d4361" ], hoverBackgroundColor: [ "#6544a9", "#51cc00", "#5d4361" ] } ] }; Along with labels and data options, other properties related to skinning can be applied too. The legends are closable by default (that is, if you want to visualize only particular data variants then it is possible by collapsing legends which are not required). The collapsed legend is represented with a strike line. The respective data component will be disappeared after click operation on legend. Customization Each series is customized on a dataset basis but you can customize the general or common options via the options attribute. For example, the line chart which customize the default options would be as follows: <p-chart type="line" [data]="linedata" [options]="options"></p-chart> The component class needs to define chart options with customized title and legend properties as follows: this.options = { title: { display: true, text: 'PrimeNG vs PrimeUI', fontSize: 16 }, legend: { position: 'bottom' } }; As per the preceding example, the title option is customized with a dynamic title, font size, and conditional display of the title. Where as legend attribute is used to place the legend in top, left, bottom, and right positions. The default legend position is top. In this example, the legend position is bottom. 
The line chart with preceding customized options would results as a snapshot shown here: The Chart API also supports the couple of utility methods as shown here: refresh Redraws the graph with new data reinit Destroys the existing graph and then creates it again generateLegend Returns an HTML string of a legend for that chart Events The chart component provides a click event on data sets to process the select data using onDataSelect event callback. Let us take a line chart example with onDataSelect event callback by passing an event object as follows: <p-chart type="line" [data]="linedata" (onDataSelect)="selectData($event)"></p-chart> In the component class, an event callback is used to display selected data information in a message format as shown: selectData(event: any) { this.msgs = []; this.msgs.push({ severity: 'info', summary: 'Data Selected', 'detail': this.linedata.datasets[event.element._datasetIndex] .data[event.element._index] }); } In the preceding event callback (onDataSelect), we used an index of the dataset to display information. There are also many other options from an event object: event.element = Selected element event.dataset = Selected dataset event.element._datasetIndex = Index of the dataset in data event.element._index = Index of the data in dataset Learning Draggable and Droppable directives Drag and drop is an action, which means grabbing an object and dragging it to a different location. The components capable of being dragged and dropped enrich the web and make a solid base for modern UI patterns. The drag and drop utilities in PrimeNG allow us to create draggable and droppable user interfaces efficiently. They make it abstract for the developers to deal with the implementation details at the browser level. In this section, we will learn about pDraggable and pDroppable directives. We will introduce a DataGrid component containing some imaginary documents and make these documents draggable in order to drop them onto a recycle bin. The recycle bin is implemented as DataTable component which shows properties of dropped documents. For the purpose of better understanding the developed code, a picture comes first: This picture shows what happens after dragging and dropping three documents. The complete demo application with instructions is available on GitHub at https://github.com/ova2/angular-development-with-primeng/tree/master/chapter9/dragdrop. Draggable pDraggable is attached to an element to add a drag behavior. The value of the pDraggable attribute is required--it defines the scope to match draggables with droppables. By default, the whole element is draggable. We can restrict the draggable area by applying the dragHandle attribute. The value of dragHandle can be any CSS selector. In the DataGrid with available documents, we only made the panel's header draggable: <p-dataGrid [value]="availableDocs"> <p-header> Available Documents </p-header> <ng-template let-doc pTemplate="item"> <div class="ui-g-12 ui-md-4" pDraggable="docs" dragHandle=".uipanel- titlebar" dragEffect="move" (onDragStart)="dragStart($event, doc)" (onDragEnd)="dragEnd($event)"> <p-panel [header]="doc.title" [style]="{'text-align':'center'}"> <img src="/assets/data/images/docs/{{doc.extension}}.png"> </p-panel> </div> </ng-template> </p-dataGrid> The draggable element can fire three events when dragging process begins, proceeds, and ends. These are onDragStart, onDrag, and onDragEnd respectively. 
In the component class, we buffer the dragged document at the beginning and reset it at the end of the dragging process. This task is done in two callbacks: dragStart and dragEnd. class DragDropComponent { availableDocs: Document[]; deletedDocs: Document[]; draggedDoc: Document; constructor(private docService: DocumentService) { } ngOnInit() { this.deletedDocs = []; this.docService.getDocuments().subscribe((docs: Document[]) => this.availableDocs = docs); } dragStart(event: any, doc: Document) { this.draggedDoc = doc; } dragEnd(event: any) { this.draggedDoc = null; } ... } In the shown code, we used the Document interface with the following properties: interface Document { id: string; title: string; size: number; creator: string; creationDate: Date; extension: string; } In the demo application, we set the cursor to move when the mouse is moved over any panel's header. This trick provides a better visual feedback for draggable area: body .ui-panel .ui-panel-titlebar { cursor: move; } We can also set the dragEffect attribute to specifies the effect that is allowed for a drag operation. Possible values are none, copy, move, link, copyMove, copyLink, linkMove, and all. Refer the official documentation to read more details at https://developer.mozilla.org/en-US/docs/Web/API/DataTransfer/effectAllowed. Droppable pDroppable is attached to an element to add a drop behavior. The value of the pDroppable attribute should have the same scope as pDraggable. Droppable scope can also be an array to accept multiple droppables. The droppable element can fire four events. Event name Description onDragEnter Invoked when a draggable element enters the drop area onDragOver Invoked when a draggable element is being dragged over the drop area onDrop Invoked when a draggable is dropped onto the drop area onDragLeave Invoked when a draggable element leaves the drop area In the demo application, the whole code of the droppable area looks as follows: <div pDroppable="docs" (onDrop)="drop($event)" [ngClass]="{'dragged-doc': draggedDoc}"> <p-dataTable [value]="deletedDocs"> <p-header>Recycle Bin</p-header> <p-column field="title" header="Title"></p-column> <p-column field="size" header="Size (bytes)"></p-column> <p-column field="creator" header="Creator"></p-column> <p-column field="creationDate" header="Creation Date"> <ng-template let-col let-doc="rowData" pTemplate="body"> {{doc[col.field].toLocaleDateString()}} </ng-template> </p-column> </p-dataTable> </div> Whenever a document is being dragged and dropped into the recycle bin, the dropped document is removed from the list of all available documents and added to the list of deleted documents. This happens in the onDrop callback: drop(event: any) { if (this.draggedDoc) { // add draggable element to the deleted documents list this.deletedDocs = [...this.deletedDocs, this.draggedDoc]; // remove draggable element from the available documents list this.availableDocs = this.availableDocs.filter((e: Document) => e.id !== this.draggedDoc.id); this.draggedDoc = null; } } Both available and deleted documents are updated by creating new arrays instead of manipulating existing arrays. This is necessary in data iteration components to force Angular run change detection. Manipulating existing arrays would not run change detection and the UI would not be updated. The Recycle Bin area gets a red border while dragging any panel with document. We have achieved this highlighting by setting ngClass as follows: [ngClass]="{'dragged-doc': draggedDoc}". 
The style class dragged-doc is enabled when the draggedDoc object is set. The style class is defined as follows:

.dragged-doc {
  border: solid 2px red;
}

Summary

We started with the chart components: first the chart model API, and then how to create charts programmatically using the various chart types such as pie, bar, line, doughnut, polar area, and radar charts. We also learned the features of the Draggable and Droppable directives.
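As a small illustration of the utility methods listed in the chart API table earlier (refresh, reinit, and generateLegend), the sketch below pushes a new point into the line chart data and redraws it. The component, template reference, and data shape are assumptions based on the examples above rather than code from the book.

import { Component, ViewChild } from '@angular/core';
import { UIChart } from 'primeng/primeng';

@Component({
  selector: 'app-live-chart',
  template: `<p-chart #chart type="line" [data]="linedata"></p-chart>`
})
export class LiveChartComponent {
  @ViewChild('chart') chart: UIChart;

  linedata: any = {
    labels: ['Q1', 'Q2', 'Q3'],
    datasets: [{ label: 'Downloads', data: [120, 150, 180] }]
  };

  addQuarter(label: string, value: number) {
    this.linedata.labels.push(label);
    this.linedata.datasets[0].data.push(value);
    // Redraw the graph with the mutated data, as described in the utility-method table.
    this.chart.refresh();
  }
}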


Quick start - your first Sinatra application

Packt
14 Aug 2013
15 min read
(For more resources related to this topic, see here.) Step 1 – creating the application The first thing to do is set up Sinatra itself, which means creating a Gemfile. Open up a Terminal window and navigate to the directory where you're going to keep your Sinatra applications. Create a directory called address-book using the following command: mkdir address-book Move into the new directory: cd address-book Create a file called Gemfile: source 'https://rubygems.org'gem 'sinatra' Install the gems via bundler: bundle install You will notice that Bundler will not just install the sinatra gem but also its dependencies. The most important dependency is Rack (http://rack.github.com/), which is a common handler layer for web servers. Rack will be receiving requests for web pages, digesting them, and then handing them off to your Sinatra application. If you set up your Bundler configuration as indicated in the previous section, you will now have the following files: .bundle: This is a directory containing the local configuration for Bundler Gemfile: As created previously Gemfile.lock: This is a list of the actual versions of gems that are installed vendor/bundle: This directory contains the gems You'll need to understand the Gemfile.lock file. It helps you know exactly which versions of your application's dependencies (gems) will get installed. When you run bundle install, if Bundler finds a file called Gemfile.lock, it will install exactly those gems and versions that are listed there. This means that when you deploy your application on the Internet, you can be sure of which versions are being used and that they are the same as the ones on your development machine. This fact makes debugging a lot more reliable. Without Gemfile.lock, you might spend hours trying to reproduce behavior that you're seeing on your deployed app, only to discover that it was caused by a glitch in a gem version that you haven't got on your machine. So now we can actually create the files that make up the first version of our application. Create address-book.rb: require 'sinatra/base'class AddressBook < Sinatra::Base get '/' do 'Hello World!' endend This is the skeleton of the first part of our application. Line 1 loads Sinatra, line 3 creates our application, and line 4 says we handle requests to '/'—the root path. So if our application is running on myapp.example.com, this means that this method will handle requests to http://myapp.example.com/. Line 5 returns the string Hello World!. Remember that a Ruby block or a method without explicit use of the return keyword will return the result of its last line of code. Create config.ru: $: << File.dirname(__FILE__)require 'address-book'run AddressBook.new This file gets loaded by rackup, which is part of the Rack gem. Rackup is a tool that runs rack-based applications. It reads the configuration from config.ru and runs our application. Line 1 adds the current directory to the list of paths where Ruby looks for files to load, line 2 loads the file we just created previously, and line 4 runs the application. Let's see if it works. In a Terminal, run the following command: bundle exec rackup -p 3000 Here rackup reads config.ru, loads our application, and runs it. We use the bundle exec command to ensure that only our application's gems (the ones in vendor/bundle) get used. Bundler prepares the environment so that the application only loads the gems that were installed via our Gemfile. The -p 3000 command means we want to run a web server on port 3000 while we're developing. 
Open up a browser and go to http://0.0.0.0:3000; you should see something that looks like the following screenshot: Illustration 1: The Hello World! output from the application Logging Have a look at the output in the Terminal window where you started the application. I got the following (line numbers are added for reference): 1 [2013-03-03 12:30:02] INFO WEBrick 1.3.12 [2013-03-03 12:30:02] INFO ruby 1.9.3 (2013-01-15) [x86_64-linux]3 [2013-03-03 12:30:02] INFO WEBrick::HTTPServer#start: pid=28551 port=30004 127.0.0.1 - - [03/Mar/2013 12:30:06] "GET / HTTP/1.1" 200 12 0.01425 127.0.0.1 - - [03/Mar/2013 12:30:06] "GET /favicon.ico HTTP/1.1" 404 445 0.0018 Like it or not, you'll be seeing a lot of logs such as this while doing web development, so it's a good idea to get used to noticing the information they contain. Line 1 says that we are running the WEBrick web server. This is a minimal server included with Ruby—it's slow and not very powerful so it shouldn't be used for production applications, but it will do for now for application development. Line 2 indicates that we are running the application on Version 1.9.3 of Ruby. Make sure you don't develop with older versions, especially the 1.8 series, as they're being phased out and are missing features that we will be using in this book. Line 3 tells us that the server started and that it is awaiting requests on port 3000, as we instructed. Line 4 is the request itself: GET /. The number 200 means the request succeeded—it is an HTTP status code that means Success . Line 5 is a second request created by our web browser. It's asking if the site has a favicon, an icon representing the site. We don't have one, so Sinatra responded with 404 (not found). When you want to stop the web server, hit Ctrl + C in the Terminal window where you launched it. Step 2 – putting the application under version control with Git When developing software, it is very important to manage the source code with a version control system such as Git or Mercurial. Version control systems allow you to look at the development of your project; they allow you to work on the project in parallel with others and also to try out code development ideas (branches) without messing up the stable application. Create a Git repository in this directory: git init Now add the files to the repository: git add Gemfile Gemfile.lock address-book.rb config.ru Then commit them: git commit -m "Hello World" I assume you created a GitHub account earlier. Let's push the code up to www.github.com for safe keeping. Go to https://github.com/new. Create a repo called sinatra-address-book. Set up your local repo to send code to your GitHub account: git remote add origin git@github.com:YOUR_ACCOUNT/sinatra-address-book.git Push the code: git push You may need to sort out authentication if this is your first time pushing code. So if you get an error such as the following, you'll need to set up authentication on GitHub: Permission denied (publickey) Go to https://github.com/settings/ssh and add the public key that you generated in the previous section. Now you can refresh your browser, and GitHub will show you your code as follows: Note that the code in my GitHub repository is marked with tags. If you want to follow the changes by looking at the repository, clone my repo from //github.com/joeyates/sinatra-address-book.git into a different directory and then "check out" the correct tag (indicated by a footnote) at each stage. 
To see the code at this stage, type in the following command: git checkout 01_hello_world If you type in the following command, Git will tell you that you have "untracked files", for example, .bundle: git status To get rid of the warning, create a file called .gitignore inside the project and add the following content: /.bundle//vendor/bundle/ Git will no longer complain about those directories. Remember to add .gitignore to the Git repository and commit it. Let's add a README file as the page is requesting, using the following steps: Create the README.md file and insert the following text: sinatra-address-book ==================== An example program of various Sinatra functionality. Add the new file to the repo: git add README.md Commit the changes: git commit -m "Add a README explaining the application" Send the update to GitHub: git push Now that we have a README file, GitHub will stop complaining. What's more is other people may see our application and decide to build on it. The README file will give them some information about what the application does. Step 3 – deploying the application We've used GitHub to host our project, but now we're going to publish it online as a working site. In the introduction, I asked you to create a Heroku account. We're now going to use that to deploy our code. Heroku uses Git to receive code, so we'll be setting up our repository to push code to Heroku as well. Now let's create a Heroku app: heroku createCreating limitless-basin-9090... done, stack is cedarhttp://limitless-basin-9090.herokuapp.com/ | git@heroku.com:limitless-basin-9090.gitGit remote heroku added My Heroku app is called limitless-basin-9090. This name was randomly generated by Heroku when I created the app. When you generate an app, you will get a different, randomly generated name. My app will be available on the Web at the http://limitless-basin-9090.herokuapp.com/ address. If you deploy your app, it will be available on an address based on the name that Heroku has generated for it. Note that, on the last line, Git has been configured too. To see what has happened, use the following command: git remote show heroku* remote heroku Fetch URL: git@heroku.com:limitless-basin-9090.git Push URL: git@heroku.com:limitless-basin-9090.git HEAD branch: (unknown) Now let's deploy the application to the Internet: git push heroku master Now the application is online for all to see: The initial version of the application, running on Heroku Step 4 – page layout with Slim The page looks a bit sad. Let's set up a standard page structure and use a templating language to lay out our pages. A templating language allows us to create the HTML for our web pages in a clearer and more concise way. There are many HTML templating systems available to the Sinatra developer: erb , haml , and slim are three popular choices. We'll be using Slim (http://slim-lang.com/). Let's add the gem: Update our Gemfile: gem 'slim' Install the gem: bundle We will be keeping our page templates as .slim files. Sinatra looks for these in the views directory. Let's create the directory, our new home page, and the standard layout for all the pages in the application. Create the views directory: mkdir views Create views/home.slim: p address book – a Sinatra application When run via Sinatra, this will create the following HTML markup: <p>address book – a Sinatra application</p> Create views/layout.slim: doctype html html head title Sinatra Address Book body == yield Note how Slim uses indenting to indicate the structure of the web page. 
The most important line here is as follows: == yield This is the point in the layout where our home page's HTML markup will get inserted. The yield instruction is where our Sinatra handler gets called. The result it returns (that is, the web page) is inserted here by Slim. Finally, we need to alter address-book.rb. Add the following line at the top of the file: require 'slim' Replace the get '/' handler with the following: get '/' do slim :home end Start the local web server as we did before: bundle exec rackup -p 3000 The following is the new home page: Using the Slim Templating Engine Have a look at the source for the page. Note how the results of home.slim are inserted into layout.slim. Let's get that deployed. Add the new code to Git and then add the two new files: git add views/*.slim Also add the changes made to the other files: git add address-book.rb Gemfile Gemfile.lock Commit the changes with a comment: git commit -m "Generate HTML using Slim" Deploy to Heroku: git push heroku master Check online that everything's as expected. Step 5 – styling To give a slightly nicer look to our pages, we can use Bootstrap (http://twitter.github.io/bootstrap/); it's a CSS framework made by Twitter. Let's modify views/layout.slim. After the line that says title Sinatra Address Book, add the following code: link href="//netdna.bootstrapcdn.com/twitter-bootstrap/2.3.1/css/bootstrap-combined.min.css" rel="stylesheet"There are a few things to note about this line. Firstly, we will be using a file hosted on a Content Distribution Network (CDN ). Clearly, we need to check that the file we're including is actually what we think it is. The advantage of a CDN is that we don't need to keep a copy of the file ourselves, but if our users visit other sites using the same CDN, they'll only need to download the file once. Note also the use of // at the beginning of the link address; this is called a "protocol agnostic URL". This way of referencing the document will allow us later on to switch our application to run securely under HTTPS, without having to readjust all our links to the content. Now let's change views/home.slim to the following: div class="container" h1 address book h2 a Sinatra application We're not using Bootstrap to anywhere near its full potential here. Later on we can improve the look of the app using Bootstrap as a starting point. Remember to commit your changes and to deploy to Heroku. Step 6 – development setup As things stand, during local development we have to manually restart our local web server every time we want to see a change. Now we are going to set things up with the following steps so the application reloads after each change: Add the following block to the Gemfile: group :development do gem 'unicorn' gem 'guard' gem 'listen' gem 'rb-inotify', :require => false gem 'rb-fsevent', :require => false gem 'guard-unicorn' endThe group around these gems means they will only be installed and used in development mode and not when we deploy our application to the Web. Unicorn is a web server—it's better than WEBrick —that is used in real production environments. WEBrick's slowness can even become noticeable during development, while Unicorn is very fast. rb-inotify and rb-fsevent are the Linux and Mac OS X components that keep a check on your hard disk. If any of your application's files change, guard restarts the whole application, updating the changes. 
Finally, update your gems: bundle Now add Guardfile: guard :unicorn, :daemonize => true do `git ls-files`.each_line { |s| s.chomp!; watch s }end Add a configuration file for unicorn: mkdir config In config/unicorn.rb, add the following: listen 3000 Run the web server: guard Now if you make any changes, the web server will restart and you will get a notification via a desktop message. To see this, type in the following command: touch address-book.rb You should get a desktop notification saying that guard has restarted the application. Note that to shut guard down, you need to press Ctrl + D . Also, remember to add the new files to Git. Step 7 – testing the application We want our application to be robust. Whenever we make changes and deploy, we want to be sure that it's going to keep working. What's more, if something does not work properly, we want to be able to fix bugs so we know that they won't come back. This is where testing comes in. Tests check that our application works properly and also act as detailed documentation for it; they tell us what the application is intended for. Our tests will actually be called "specs", a term that is supposed to indicate that you write tests as specifications for what your code should do. We will be using a library called RSpec . Let's get it installed. Add the gem to the Gemfile: group :test do gem 'rack-test' gem 'rspec'end Update the gems so RSpec gets installed: bundle Create a directory for our specs: mkdir spec Create the spec/spec_helper.rb file: $: << File.expand_path('../..', __FILE__)require 'address-book'require 'rack/test'def app AddressBook.newendRSpec.configure do |config| config.include Rack::Test::Methodsend Create a directory for the integration specs: mkdir spec/integration Create a spec/integration/home_spec.rb file for testing the home page: require 'spec_helper'describe "Sinatra App" do it "should respond to GET" do get '/' expect(last_response).to be_ok expect(last_response.body).to match(/address book/) endend What we do here is call the application, asking for its home page. We check that the application answers with an HTTP status code of 200 (be_ok). Then we check for some expected content in the resulting page, that is, the address book page. Run the spec: bundle exec rspec Finished in 0.0295 seconds1 example, 0 failures Ok, so our spec is executed without any errors. There you have it. We've created a micro application, written tests for it, and deployed it to the Internet. Summary This article discussed how to perform the core tasks of Sinatra: handling a GET request and rendering a web page. Resources for Article : Further resources on this subject: URL Shorteners – Designing the TinyURL Clone with Ruby [Article] Building tiny Web-applications in Ruby using Sinatra [Article] Setting up environment for Cucumber BDD Rails [Article]  
Getting started with Django and Django REST frameworks to build a RESTful app

Sugandha Lahoti
29 Mar 2018
12 min read
In this article, we will learn how to install Django and Django REST framework in an isolated environment. We will also look at the Django folders, files, and configurations, and how to create an app with Django. We will also introduce various command-line and GUI tools that are use to interact with the RESTful Web Services. Installing Django and Django REST frameworks in an isolated environment First, run the following command to install the Django web framework: pip install django==1.11.5 The last lines of the output will indicate that the django package has been successfully installed. The process will also install the pytz package that provides world time zone definitions. Take into account that you may also see a notice to upgrade pip. The next lines show a sample of the four last lines of the output generated by a successful pip installation: Collecting django Collecting pytz (from django) Installing collected packages: pytz, django Successfully installed django-1.11.5 pytz-2017.2 Now that we have installed the Django web framework, we can install Django REST framework. Django REST framework works on top of Django and provides us with a powerful and flexible toolkit to build RESTful Web Services. We just need to run the following command to install this package: pip install djangorestframework==3.6.4 The last lines for the output will indicate that the djangorestframework package has been successfully installed, as shown here: Collecting djangorestframework Installing collected packages: djangorestframework Successfully installed djangorestframework-3.6.4 After following the previous steps, we will have Django REST framework 3.6.4 and Django 1.11.5 installed in our virtual environment. Creating an app with Django Now, we will create our first app with Django and we will analyze the directory structure that Django creates. First, go to the root folder for the virtual environment: 01. In Linux or macOS, enter the following command: cd ~/HillarDjangoREST/01 If you prefer Command Prompt, run the following command in the Windows command line: cd /d %USERPROFILE%\HillarDjangoREST\01 If you prefer Windows PowerShell, run the following command in Windows PowerShell: cd /d $env:USERPROFILE\HillarDjangoREST\01 In Linux or macOS, run the following command to create a new Django project named restful01. The command won't produce any output: python bin/django-admin.py startproject restful01 In Windows, in either Command Prompt or PowerShell, run the following command to create a new Django project named restful01. The command won't produce any output: python Scripts\django-admin.py startproject restful01 The previous command creates a restful01 folder with other subfolders and Python files. Now, go to the recently created restful01 folder. Just execute the following command on any platform: cd restful01 Then, run the following command to create a new Django app named toys within the restful01 Django project. The command won't produce any output: python manage.py startapp toys The previous command creates a new restful01/toys subfolder, with the following files: views.py tests.py models.py apps.py admin.py   init  .py In addition, the restful01/toys folder will have a migrations subfolder with an init  .py Python script. 
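Assuming both commands completed without errors, the project layout should now look roughly like the following sketch, which matches the diagram shown next:

restful01/
    manage.py
    restful01/
        __init__.py
        settings.py
        urls.py
        wsgi.py
    toys/
        __init__.py
        admin.py
        apps.py
        migrations/
            __init__.py
        models.py
        tests.py
        views.py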
The following diagram shows the folders and files in the directory tree, starting at the restful01 folder with two subfolders - toys and restful01: Understanding Django folders, files, and configurations After we create our first Django project and then a Django app, there are many new folders and files. First, use your favorite editor or IDE to check the Python code in the apps.py file within the restful01/toys folder (restful01\toys in Windows). The following lines show the code for this file: from django.apps import AppConfig class ToysConfig(AppConfig): name = 'toys' The code declares the ToysConfig class as a subclass of the django.apps.AppConfig class that represents a Django application and its configuration. The ToysConfig class just defines the name class attribute and sets its value to 'toys'. Now, we have to add toys.apps.ToysConfig as one of the installed apps in the restful01/settings.py file that configures settings for the restful01 Django project. I built the previous string by concatenating many values as follows: app name + .apps. + class name, which is, toys + .apps. + ToysConfig. In addition, we have to add the rest_framework app to make it possible for us to use Django REST framework. The restful01/settings.py file is a Python module with module-level variables that define the configuration of Django for the restful01 project. We will make some changes to this Django settings file. Open the restful01/settings.py file and locate the highlighted lines that specify the strings list that declares the installed apps. The following code shows the first lines for the settings.py file. Note that the file has more code: """ Django settings for restful01 project. Generated by 'django-admin startproject' using Django 1.11.5. For more information on this file, see https://docs.djangoproject.com/en/1.11/topics/settings/ For the full list of settings and their values, see https://docs.djangoproject.com/en/1.11/ref/settings/ """ import os # Build paths inside the project like this: os.path.join(BASE_DIR, ...) BASE_DIR = os.path.dirname(os.path.dirname(os.path.abspath( file ))) # Quick-start development settings - unsuitable for production # See https://docs.djangoproject.com/en/1.11/howto/deployment/checklist/ # SECURITY WARNING: keep the secret key used in production secret! SECRET_KEY = '+uyg(tmn%eo+fpg+fcwmm&x(2x0gml8)=cs@$nijab%)y$a*xe' # SECURITY WARNING: don't run with debug turned on in production! DEBUG = True ALLOWED_HOSTS = [] # Application definition INSTALLED_APPS = [ 'django.contrib.admin', 'django.contrib.auth', 'django.contrib.contenttypes', 'django.contrib.sessions', 'django.contrib.messages', 'django.contrib.staticfiles', ]         Add the following two strings to the INSTALLED_APPS strings list and save the changes to the restful01/settings.py file: 'rest_framework' 'toys.apps.ToysConfig' The following lines show the new code that declares the INSTALLED_APPS string list with the added lines highlighted and with comments to understand what each added string means. The code file for the sample is included in the hillar_django_restful_01 folder: INSTALLED_APPS = [ 'django.contrib.admin', 'django.contrib.auth', 'django.contrib.contenttypes', 'django.contrib.sessions', 'django.contrib.messages', 'django.contrib.staticfiles', # Django REST framework 'rest_framework', # Toys application 'toys.apps.ToysConfig', ] This way, we have added Django REST framework and the toys application to our initial Django project named restful01. 
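Before we move on to the tooling, a quick sanity check is worthwhile. The following two commands are optional and are not part of the original steps, but they confirm that Django can import both of the newly added apps; run them from the restful01 folder that contains manage.py:

python manage.py check
# Expected output: System check identified no issues (0 silenced).

python manage.py runserver
# Starts the development server on http://127.0.0.1:8000/ (stop it with Ctrl + C)

If either of the new strings in INSTALLED_APPS was mistyped, both commands fail immediately with an import error, which is far easier to diagnose now than after we start writing models and serializers.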
Installing tools Now, we will leave Django for a while and we will install many tools that we will use to interact with the RESTful Web Services that we will develop throughout this book. We will use the following different kinds of tools to compose and send HTTP requests and visualize the responses throughout our book: Command-line tools GUI tools Python code Web browser JavaScript code You can use any other application that allows you to compose and send HTTP requests. There are many apps that run on tablets and smartphones that allow you to accomplish this task. However, we will focus our attention on the most useful tools when building RESTful Web Services with Django. Installing Curl We will start installing command-line tools. One of the key advantages of command-line tools is that you can easily run again the HTTP requests again after we have built them for the first time, and we don't need to use the mouse or tap the screen to run requests. We can also easily build a script with batch requests and run them. As happens with any command-line tool, it can take more time to perform the first requests compared with GUI tools, but it becomes easier once we have performed many requests and we can easily reuse the commands we have written in the past to compose new requests. Curl, also known as cURL, is a very popular open source command-line tool and library that allows us to easily transfer data. We can use the curl command-line tool to easily compose and send HTTP requests and check their responses. In Linux or macOS, you can open a Terminal and start using curl from the command line. In Windows, you have two options. You can work with curl in Command Prompt or you can decide to install curl as part of the Cygwin package installation option and execute it from the Cygwin terminal. You can read more about the Cygwin terminal and its installation procedure at: http://cygwin.com/install.html. Windows Powershell includes a curl alias that calls the Invoke-WebRequest command, and therefore, if you want to work with Windows Powershell with curl, it is necessary to remove the curl alias. If you want to use the curl command within Command Prompt, you just need to download and unzip the latest version of the curl download page: https://curl.haxx.se/download.html. Make sure you download the version that includes SSL and SSH. The following screenshot shows the available downloads for Windows. The Win64 - Generic section includes the versions that we can run in Command Prompt or Windows Powershell. After you unzip the .7zip or .zip file you have downloaded, you can include the folder in which curl.exe is included in your path. For example, if you unzip the Win64 x86_64.7zip file, you will find curl.exe in the bin folder. The following screenshot shows the results of executing curl --version on Command Prompt in Windows 10. The --version option makes curl display its version and all the libraries, protocols, and features it supports: Installing HTTPie Now, we will install HTTPie, a command-line HTTP client written in Python that makes it easy to send HTTP requests and uses a syntax that is easier than curl. By default, HTTPie displays colorized output and uses multiple lines to display the response details. In some cases, HTTPie makes it easier to understand the responses than the curl utility. 
However, one of the great disadvantages of HTTPie as a command-line utility is that it takes more time to load than curl, and therefore, if you want to code scripts with too many commands, you have to evaluate whether it makes sense to use HTTPie. We just need to make sure we run the following command in the virtual environment we have just created and activated. This way, we will install HTTPie only for our virtual environment. Run the following command in the terminal, Command Prompt, or Windows PowerShell to install the httpie package: pip install --upgrade httpie The last lines of the output will indicate that the httpie package has been successfully installed: Collecting httpie Collecting colorama>=0.2.4 (from httpie) Collecting requests>=2.11.0 (from httpie) Collecting Pygments>=2.1.3 (from httpie) Collecting idna<2.7,>=2.5 (from requests>=2.11.0->httpie) Collecting urllib3<1.23,>=1.21.1 (from requests>=2.11.0->httpie) Collecting chardet<3.1.0,>=3.0.2 (from requests>=2.11.0->httpie) Collecting certifi>=2017.4.17 (from requests>=2.11.0->httpie) Installing collected packages: colorama, idna, urllib3, chardet, certifi, requests, Pygments, httpie Successfully installed Pygments-2.2.0 certifi-2017.7.27.1 chardet-3.0.4 colorama-0.3.9 httpie-0.9.9 idna-2.6 requests-2.18.4 urllib3-1.22 Now, we will be able to use the http command to easily compose and send HTTP requests to our future RESTful Web Services build with Django. The following screenshot shows the results of executing http on Command Prompt in Windows 10. HTTPie displays the valid options and indicates that a URL is required: Installing the Postman REST client So far, we have installed two terminal-based or command-line tools to compose and send HTTP requests to our Django development server: cURL and HTTPie. Now, we will start installing Graphical User Interface (GUI) tools. Postman is a very popular API testing suite GUI tool that allows us to easily compose and send HTTP requests, among other features. Postman is available as a standalone app in Linux, macOS, and Windows. You can download the versions of the Postman app from the following URL: https://www.getpostman.com. The following screenshot shows the HTTP GET request builder in Postman: Installing Stoplight Stoplight is a very useful GUI tool that focuses on helping architects and developers to model complex APIs. If we need to consume our RESTful Web Service in many different programming languages, we will find Stoplight extremely helpful. Stoplight provides an HTTP request maker that allows us to compose and send requests and generate the necessary code to make them in different programming languages, such as JavaScript, Swift, C#, PHP, Node, and Go, among others. Stoplight provides a web version and is also available as a standalone app in Linux, macOS, and Windows. You can download the versions of Stoplight from the following URL: http://stoplight.io/. The following screenshot shows the HTTP GET request builder in Stoplight with the code generation at the bottom: Installing iCurlHTTP We can also use apps that can compose and send HTTP requests from mobile devices to work with our RESTful Web Services. For example, we can work with the iCurlHTTP app on iOS devices such as iPad and iPhone: https://itunes.apple.com/us/app/icurlhttp/id611943891. On Android devices, we can work with the HTTP Request app: https://play.google.com/store/apps/details?id=air.http.request&hl=en. 
The following screenshot shows the UI of the iCurlHTTP app running on an iPad Pro. At the time of writing, the mobile apps that allow you to compose and send HTTP requests do not provide all the features you can find in Postman or in the command-line utilities.

In this article, we set up a virtual environment with Django and Django REST framework and created an app with Django. We looked at Django's folders, files, and configuration, and installed command-line and GUI tools to interact with RESTful Web Services.

This article is an excerpt from the book Django RESTful Web Services, written by Gaston C. Hillar. The book serves as an easy guide to building Python RESTful APIs and web services with Django. The code bundle for the article is hosted on GitHub.
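As a parting illustration of the difference in syntax between the command-line tools covered here, the following two commands compose the same GET request with cURL and with HTTPie. The URL is a placeholder for the toys endpoint that is built later in the book, so treat this purely as a syntax comparison:

curl -iX GET localhost:8000/toys/
http GET :8000/toys/

HTTPie's shorthand (:8000 expands to localhost:8000) and its colorized, multi-line output are what make it convenient for exploratory work, while curl remains the faster choice inside scripts.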
Interacting with GNU Octave: Operators

Packt
20 Jun 2011
6 min read
GNU Octave Beginner's Guide Become a proficient Octave user by learning this high-level scientific numerical tool from the ground up The reader will benefit from the previous article on GNU Octave Variables. Basic arithmetic Octave offers easy ways to perform different arithmetic operations. This ranges from simple addition and multiplication to very complicated linear algebra. In this section, we will go through the most basic arithmetic operations, such as addition, subtraction, multiplication, and left and right division. In general, we should think of these operations in the framework of linear algebra and not in terms of arithmetic of simple scalars. Addition and subtraction We begin with addition. Time for action – doing addition and subtraction operations I have lost track of the variables! Let us start afresh and clear all variables first: octave:66> clear (Check with whos to see if we cleared everything). Now, we define four variables in a single command line(!) octave:67> a = 2; b=[1 2 3]; c=[1; 2; 3]; A=[1 2 3; 4 5 6]; Note that there is an important difference between the variables b and c; namely, b is a row vector, whereas c is a column vector. Let us jump into it and try to add the different variables. This is done using the + character: octave:68>a+a ans = 4 octave:69>a+b ans = 3 4 5 octave:70>b+b ans = 2 4 6 octave:71>b+c error: operator +: nonconformant arguments (op1 is 1x3, op2 is 3x1) It is often convenient to enter multiple commands on the same line. Try to test the difference in separating the commands with commas and semicolons. What just happened? The output from Command 68 should be clear; we add the scalar a with itself. In Command 69, we see that the + operator simply adds the scalar a to each element in the b row vector. This is named element-wise addition. It also works if we add a scalar to a matrix or a higher dimensional array. Now, if + is applied between two vectors, it will add the elements together element-wise if and only if the two vectors have the same size, that is, they have same number of rows or columns. This is also what we would expect from basic linear algebra. From Command 70 and 71, we see that b+b is valid, but b+c is not, because b is a row vector and c is a column vector—they do not have the same size. In the last case, Octave produces an error message stating the problem. This would also be a problem if we tried to add, say b with A: octave:72>b+A error: operator +: nonconformant arguments (op1 is 1x3, op2 is 2x3) From the above examples, we see that adding a scalar to a vector or a matrix is a special case. It is allowed even though the dimensions do not match! When adding and subtracting vectors and matrices, the sizes must be the same. Not surprisingly, subtraction is done using the - operator. The same rules apply here, for example: octave:73> b-b ans = 0 0 0 is fine, but: octave:74> b-c error: operator -: nonconformant arguments (op1 is 1x3, op2 is 2x3) produces an error. Matrix multiplication The * operator is used for matrix multiplication. Recall from linear algebra that we cannot multiply any two matrices. Furthermore, matrix multiplication is not commutative. For example, consider the two matrices: The matrix product AB is defined, but BA is not. If A is size n x k and B has size k x m, the matrix product AB will be a matrix with size n x m. From this, we know that the number of columns of the "left" matrix must match the number of rows of the "right" matrix. We may think of this as (n x k)(k x m) = n x m. 
In the example above, the matrix product AB therefore results in a 2 x 3 matrix: Time for action – doing multiplication operations Let us try to perform some of the same operations for multiplication as we did for addition: octave:75> a*a ans = 4 octave:76> a*b ans = 2 4 6 octave:77> b*b error: operator *: nonconformant arguments (op1 is 1x3, op2 is 1x3) octave:78> b*c ans = 14 What just happened? From Command 75, we see that * multiplies two scalar variables just like standard multiplication. In agreement with linear algebra, we can also multiply a scalar by each element in a vector as shown by the output from Command 76. Command 77 produces an error—recall that b is a row vector which Octave also interprets as a 1 x 3 matrix, so we try to perform the matrix multiplication (1 x 3)(1 x 3), which is not valid. In Command 78, on the other hand, we have (1 x 3)(3 x 1) since c is a column vector yielding a matrix with size 1 x 1, that is, a scalar. This is, of course, just the dot product between b and c. Let us try an additional example and perform the matrix multiplication between A and B discussed above. First, we need to instantiate the two matrices, and then we multiply them: octave:79> A=[1 2; 3 4]; B=[1 2 3; 4 5 6]; octave:80> A*B ans = 9 12 15 19 26 33 octave:81> B*A error: operator *: nonconformant arguments (op1 is 2x3, op2 is 2x2) Seems like Octave knows linear algebra! Element-by-element, power, and transpose operations If the sizes of two arrays are the same, Octave provides a convenient way to multiply the elements element-wise. For example, for B: octave:82> B.*B ans = 1 4 9 16 25 36 Notice that the period (full stop) character precedes the multiplication operator. The period character can also be used in connection with other operators. For example: octave:83> B.+B ans = 2 4 6 8 10 12 which is the same as the command B+B. If we wish to raise each element in B to the power 2.1, we use the element-wise power operator.ˆ: octave:84> B.^2.1 ans = 1.0000 4.2871 10.0451 18.3792 29.3655 43.0643 You can perform element-wise power operation on two matrices as well (if they are of the same size, of course): octave:85> B.^B ans = 1 4 27 256 3125 46656 If the power is a real number, you can use ˆ instead of .ˆ; that is, instead of Command 84 above, you can use: octave:84>Bˆ2.1 Transposing a vector or matrix is done via the 'operator. To transpose B, we simply type: octave:86> B' ans = 1 4 2 5 3 6 Strictly, the ' operator is a complex conjugate transpose operator. We can see this in the following examples: octave:87> B = [1 2; 3 4] + I.*eye(2) B = 1 + 1i 2 + 0i 3 + 0i 4 + 1i octave:88> B' ans = 1 - 1i 3 - 0i 2 - 0i 4 - 1i Note that in Command 87, we have used the .* operator to multiply the imaginary unit with all the elements in the diagonal matrix produced by eye(2). Finally, note that the command transpose(B)or the operator .' will transpose the matrix, but not complex conjugate the elements.
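As a final check, we can compare the two transpose operators on the complex matrix from Command 87. Apart from the prompt number and spacing, which may differ slightly on your system, the non-conjugating transpose gives:

octave:89> B.'
ans =
   1 + 1i   3 + 0i
   2 + 0i   4 + 1i

Comparing this with the output of Command 88 shows exactly which elements the ' operator conjugated.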
Understanding the Basics of Gulp

Packt
19 Jun 2017
15 min read
In this article written by Travis Maynard, author of the book Getting Started with Gulp - Second Edition, we will take a look at the basics of gulp and how it works. Understanding some of the basic principles and philosophies behind the tool, it's plugin system will assist you as you begin writing your own gulpfiles. We'll start by taking a look at the engine behind gulp and then follow up by breaking down the inner workings of gulp itself. By the end of this article, you will be prepared to begin writing your own gulpfile. (For more resources related to this topic, see here.) Installing node.js and npm As you learned in the introduction, node.js and npm are the engines that work behind the scenes that allow us to operate gulp and keep track of any plugins we decide to use. Downloading and installing node.js For Mac and Windows, the installation is quite simple. All you need to do is navigate over to http://nodejs.org and click on the big green install button. Once the installer has finished downloading, run the application and it will install both node.js and npm. For Linux, there are a couple more steps, but don't worry; with your newly acquired command-line skills, it should be relatively simple. To install node.js and npm on Linux, you'll need to run the following three commands in Terminal: sudo add-apt-repository ppa:chris-lea/node.js sudo apt-get update sudo apt-get install nodejs The details of these commands are outside the scope of this book, but just for reference, they add a repository to the list of available packages, update the total list of packages, and then install the application from the repository we added. Verify the installation To confirm that our installation was successful, try the following command in your command line: node -v If node.js is successfully installed, node -v will output a version number on the next line of your command line. Now, let's do the same with npm: npm -v Like before, if your installation was successful, npm -v should output the version number of npm on the next line. The versions displayed in this screenshot reflect the latest Long Term Support (LTS) release currently available as of this writing. This may differ from the version that you have installed depending on when you're reading this. It's always suggested that you use the latest LTS release when possible. The -v  command is a common flag used by most command-line applications to quickly display their version number. This is very useful to debug version issues while using command-line applications. Creating a package.json file Having npm in our workflow will make installing packages incredibly easy; however, we should look ahead and establish a way to keep track of all the packages (or dependencies) that we use in our projects. Keeping track of dependencies is very important to keep your workflow consistent across development environments. Node.js uses a file named package.json to store information about your project, and npm uses this same file to manage all of the package dependencies your project requires to run properly. In any project using gulp, it is always a great practice to create this file ahead of time so that you can easily populate your dependency list as you are installing packages or plugins. 
To create the package.json file, we will need to run npm's built in init action using the following command: npm init Now, using the preceding command, the terminal will show the following output: Your command line will prompt you several times asking for basic information about the project, such as the project name, author, and the version number. You can accept the defaults for these fields by simply pressing the Enter key at each prompt. Most of this information is used primarily on the npm website if a developer decides to publish a node.js package. For our purposes, we will just use it to initialize the file so that we can properly add our dependencies as we move forward. The screenshot for the preceding command is as follows: Installing gulp With npm installed and our package.json file created, we are now ready to begin installing node.js packages. The first and most important package we will install is none other than gulp itself. Locating gulp Locating and gathering information about node.js packages is very simple, thanks to the npm registry. The npm registry is a companion website that keeps track of all the published node.js modules, including gulp and gulp plugins. You can find this registry at http://npmjs.org. Take a moment to visit the npm registry and do a quick search for gulp. The listing page for each node.js module will give you detailed information on each project, including the author, version number, and dependencies. Additionally, it also features a small snippet of command-line code that you can use to install the package along with readme information that will outline basic usage of the package and other useful information. Installing gulp locally Before we install gulp, make sure you are in your project's root directory, gulp-book, using the cd and ls commands you learned earlier. If you ever need to brush up on any of the standard commands, feel free to take a moment to step back and review as we progress through the book. To install packages with npm, we will follow a similar pattern to the ones we've used previously. Since we will be covering both versions 3.x and 4.x in this book, we'll demonstrate installing both: For installing gulp 3.x, you can use the following: npm install --save-dev gulp For installing gulp 4.x, you can use the following: npm install --save-dev gulpjs/gulp#4.0 This command is quite different from the 3.x command because this command is installing the latest development release directly from GitHub. Since the 4.x version is still being actively developed, this is the only way to install it at the time of writing this book. Once released, you will be able to run the previous command to without installing from GitHub. Upon executing the command, it will result in output similar to the following: To break this down, let's examine each piece of this command to better understand how npm works: npm: This is the application we are running install: This is the action that we want the program to run. In this case, we are instructing npm to install something in our local folder --save-dev: This is a flag that tells npm to add this module to the dev dependencies list in our package.json file gulp: This is the package we would like to install Additionally, npm has a –-save flag that saves the module to the list of dependencies instead of devDependencies. These dependency lists are used to separate the modules that a project depends on to run, and the modules a project depends on when in development. 
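The screenshot referred to above simply shows the new node_modules directory sitting next to your other files. Just as importantly, because we used the --save-dev flag, npm has recorded gulp in package.json. Trimmed to the relevant fields, the file should now look roughly like this; the exact version number depends on when you run the install:

{
  "name": "gulp-book",
  "version": "1.0.0",
  "devDependencies": {
    "gulp": "^3.9.1"
  }
}

Anyone who clones the project later can simply run npm install to pull down the same tooling, with no manual steps.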
Since we are using gulp to assist us in development, we will always use the --save-dev flag throughout the book. So, this command will use npm to contact the npm registry, and it will install gulp to our local gulp-book directory. After using this command, you will note that a new folder has been created that is named node_modules. It is where node.js and npm store all of the installed packages and dependencies of your project. Take a look at the following screenshot: Installing gulp-cli globally For many of the packages that we install, this will be all that is needed. With gulp, we must install a companion module gulp-cli globally so that we can use the gulp command from anywhere in our filesystem. To install gulp-cli globally, use the following command: npm install -g gulp-cli In this command, not much has changed compared to the original command where we installed the gulp package locally. We've only added a -g flag to the command, which instructs npm to install the package globally. On Windows, your console window should be opened under an administrator account in order to install an npm package globally. At first, this can be a little confusing, and for many packages it won't apply. Similar build systems actually separate these usages into two different packages that must be installed separately; once that is installed globally for command-line use and another installed locally in your project. Gulp was created so that both of these usages could be combined into a single package, and, based on where you install it, it could operate in different ways. Anatomy of a gulpfile Before we can begin writing tasks, we should take a look at the anatomy and structure of a gulpfile. Examining the code of a gulpfile will allow us to better understand what is happening as we run our tasks. Gulp started with four main methods:.task(), .src(), .watch(), and .dest(). The release of version 4.x introduced additional methods such as: .series() and .parallel(). In addition to the gulp API methods, each task will also make use of the node.js .pipe() method. This small list of methods is all that is needed to understand how to begin writing basic tasks. They each represent a specific purpose and will act as the building blocks of our gulpfile. The task() method The .task() method is the basic wrapper for which we create our tasks. Its syntax is .task(string, function). It takes two arguments—string value representing the name of the task and a function that will contain the code you wish to execute upon running that task. The src() method The .src() method is our input or how we gain access to the source files that we plan on modifying. It accepts either a single glob string or an array of glob strings as an argument. Globs are a pattern that we can use to make our paths more dynamic. When using globs, we can match an entire set of files with a single string using wildcard characters as opposed to listing them all separately. The syntax is for this method is .src(string || array).  The watch() method The .watch() method is used to specifically look for changes in our files. This will allow us to keep gulp running as we code so that we don't need to rerun gulp any time we need to process our tasks. This syntax is different between the 3.x and 4.x version. For version 3.x the syntax is—.watch(string || array, array) with the first argument being our paths/globs to watch and the second argument being the array of task names that need to be run when those files change. 
For version 4.x the syntax has changed a bit to allow for two new methods that provide more explicit control of the order in which tasks are executed. When using 4.x instead of passing in an array as the second argument, we will use either the .series() or .parallel() method like so—.watch(string || array, gulp.series() || gulp.parallel()). The dest() method The dest() method is used to set the output destination of your processed file. Most often, this will be used to output our data into a build or dist folder that will be either shared as a library or accessed by your application. The syntax for this method is .dest(string). The pipe() method The .pipe() method will allow us to pipe together smaller single-purpose plugins or applications into a pipechain. This is what gives us full control of the order in which we would need to process our files. The syntax for this method is .pipe(function). The parallel() and series() methods The parallel and series methods were added in version 4.x as a way to easily control whether your tasks are run together all at once or in a sequence one after the other. This is important if one of your tasks requires that other tasks complete before it can be ran successfully. When using these methods the arguments will be the string names of your tasks separated by a comma. The syntax for these methods is—.series(tasks) and .parallel(tasks); Understanding these methods will take you far, as these are the core elements of building your tasks. Next, we will need to put these methods together and explain how they all interact with one another to create a gulp task. Including modules/plugins When writing a gulpfile, you will always start by including the modules or plugins you are going to use in your tasks. These can be both gulp plugins or node.js modules, based on what your needs are. Gulp plugins are small node.js applications built for use inside of gulp to provide a single-purpose action that can be chained together to create complex operations for your data. Node.js modules serve a broader purpose and can be used with gulp or independently. Next, we can open our gulpfile.js file and add the following code: // Load Node Modules/Plugins var gulp = require('gulp'); var concat = require('gulp-concat'); var uglify = require('gulp-uglify'); The gulpfile.js file will look as shown in the following screenshot: In this code, we have included gulp and two gulp plugins: gulp-concat and gulp-uglify. As you can now see, including a plugin into your gulpfile is quite easy. After we install each module or plugin using npm, you simply use node.js' require() function and pass it in the name of the module. You then assign it to a new variable so that you can use it throughout your gulpfile. This is node.js' way of handling modularity, and because a gulpfile is essentially a small node.js application, it adopts this practice as well. Writing a task All tasks in gulp share a common structure. Having reviewed the five methods at the beginning of this section, you will already be familiar with most of it. Some tasks might end up being larger than others, but they still follow the same pattern. To better illustrate how they work, let's examine a bare skeleton of a task. This skeleton is the basic bone structure of each task we will be creating. Studying this structure will make it incredibly simple to understand how parts of gulp work together to create a task. 
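Before we assemble a full task, the following sketch shows how the two ordering methods compose in gulp 4.x. The clean, styles, and scripts names are placeholders for tasks we have not defined yet, so treat this as an illustration of the pattern rather than working project code:

var gulp = require('gulp');

// 'clean' must finish first; 'styles' and 'scripts' then run at the same time
gulp.task('build', gulp.series('clean', gulp.parallel('styles', 'scripts')));

In gulp 3.x there was no built-in equivalent, so ordering was typically handled through task dependencies or plugins such as run-sequence, which is one of the main motivations behind the 4.x syntax.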
An example of a sample task is as follows: gulp.task(name, function() { return gulp.src(path) .pipe(plugin) .pipe(plugin) .pipe(gulp.dest(path)); }); In the first line, we use the new gulp variable that we created a moment ago and access the .task() method. This creates a new task in our gulpfile. As you learned earlier, the task method accepts two arguments: a task name as a string and a callback function that will contain the actions we wish to run when this task is executed. Inside the callback function, we reference the gulp variable once more and then use the .src() method to provide the input to our task. As you learned earlier, the source method accepts a path or an array of paths to the files that we wish to process. Next, we have a series of three .pipe() methods. In each of these pipe methods, we will specify which plugin we would like to use. This grouping of pipes is what we call our pipechain. The data that we have provided gulp with in our source method will flow through our pipechain to be modified by each piped plugin that it passes through. The order of the pipe methods is entirely up to you. This gives you a great deal of control in how and when your data is modified. You may have noticed that the final pipe is a bit different. At the end of our pipechain, we have to tell gulp to move our modified file somewhere. This is where the .dest() method comes into play. As we mentioned earlier, the destination method accepts a path that sets the destination of the processed file as it reaches the end of our pipechain. If .src() is our input, then .dest() is our output. Reflection To wrap up, take a moment to look at a finished gulpfile and reflect on the information that we just covered. This is the completed gulpfile that we will be creating from scratch, so don't worry if you still feel lost. This is just an opportunity to recognize the patterns and syntaxes that we have been studying so far. We will begin creating this file step by step: // Load Node Modules/Plugins var gulp = require('gulp'); var concat = require('gulp-concat'); var uglify = require('gulp-uglify'); // Process Styles gulp.task('styles', function() {     return gulp.src('css/*.css')         .pipe(concat('all.css'))         .pipe(gulp.dest('dist/')); }); // Process Scripts gulp.task('scripts', function() {     return gulp.src('js/*.js')         .pipe(concat('all.js'))         .pipe(uglify())         .pipe(gulp.dest('dist/')); }); // Watch Files For Changes gulp.task('watch', function() {     gulp.watch('css/*.css', 'styles');     gulp.watch('js/*.js', 'scripts'); }); // Default Task gulp.task('default', gulp.parallel('styles', 'scripts', 'watch')); The gulpfile.js file will look as follows: Summary In this article, you installed node.js and learned the basics of how to use npm and understood how and why to install gulp both locally and globally. We also covered some of the core differences between the 3.x and 4.x versions of gulp and how they will affect your gulpfiles as we move forward. To wrap up the article, we took a small glimpse into the anatomy of a gulpfile to prepare us for writing our own gulpfiles from scratch. Resources for Article: Further resources on this subject: Performing Task with Gulp [article] Making a Web Server in Node.js [article] Developing Node.js Web Applications [article]
Top Features You Need to Know About – Responsive Web Design

Packt
10 Oct 2013
3 min read
Responsive web design Nowadays, almost everyone has a smartphone or tablet in hand; this article prepares these individuals to adapt their portfolio to this new reality. Acknowledging that, today, there are tablets that are also phones and some laptops that are also tablets, we use an approach known as device agnostic, where instead of giving devices names, such as mobile, tablet, or desktop, we refer to them as small, medium, or large. With this approach, we can cover a vast array of gadgets from smartphones, tablets, laptops, and desktops, to the displays on refrigerators, cars, watches, and so on. Photoshop Within the pages of this article, you will find two Photoshop templates that I prepared for you. The first is small.psd, which you may use to prepare your layouts for smartphones, small tablets, and even, to a certain extent, displays on a refrigerator. The second is medium.psd, which can be used for tablets, net books, or even displays in cars. I used these templates to lay out all the sizes of our website (portfolio) that we will work on in this article, as you can see in the following screenshot: One of the principle elements of responsive web design is the flexible grid and what I did with Photoshop layout was to mimic those grids, which we will use later. With time, this will be easier and it won't be necessary to lay out every version of every page, but, for now, it is good to understand how things happen. Code Now that we have a preview of how the small version will look, it's time to code it. The first thing we will need is the fluid version of the 960.gs, which you can download from https://raw.github.com/bauhouse/fluid960gs/master/css/grid.css and save as 960_fluid.css in the css folder. After that, let's create two more files in this folder, small.css and medium.css. We will use these files to maintain the organized versions of our portfolio. Lastly, let's link the files to our HTML document as follows: <head> <meta charset="utf-8"> <meta name="viewport" content="width=device-width"> <title>Portfolio</title> <link href="css/reset.css" rel="stylesheet" type="text/css"> <link href="css/960_fluid.css" rel="stylesheet" type="text/css"> <link href="css/main.css" rel="stylesheet" type="text/css"> <link href="css/medium.css" rel="stylesheet" type="text/css"> <link href="css/small.css" rel="stylesheet" type="text/css"> </head> If you reload your browser now, you should see that the portfolio is stretching all over the browser. This occurs because the grid is now fluid. To fix the width to, at most, 960 pixels, we need to insert the following lines at the beginning of the main.css file: Code 2: /* grid ================================ */ .container_12 { max-width: 960px; margin: Once you reload the browser and resize the window, you will see that the display is overly stretched and broken. In order to fix this, keeping in mind the layout we did in Photoshop, we can use the small version and medium version. Summary In this article we saw how to prepare our desktop-only portfolio using Photoshop and the method used to fix the broken and overly stretched display. Resources for Article: Further resources on this subject: Web Design Principles in Inkscape Building HTML5 Pages from Scratch HTML5 Presentations - creating our initial presentation
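For readers who want a starting point for the small.css and medium.css stylesheets mentioned above, media queries along the following lines are the usual next step; the breakpoint and padding values here are illustrative assumptions rather than figures taken from the Photoshop layouts:

/* small.css - rules for the small layout (small.psd) */
@media only screen and (max-width: 767px) {
    .container_12 {
        padding: 0 10px;
    }
}

/* medium.css - rules for the medium layout (medium.psd) */
@media only screen and (min-width: 768px) and (max-width: 991px) {
    .container_12 {
        padding: 0 20px;
    }
}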
Bootstrap 3 and other applications

Packt
21 Apr 2014
10 min read
(For more resources related to this topic, see here.) Bootstrap 3 Bootstrap 3, formerly known as Twitter's Bootstrap, is a CSS and JavaScript framework for building application frontends. The third version of Bootstrap has important changes over the earlier versions of the framework. Bootstrap 3 is not compatible with the earlier versions. Bootstrap 3 can be used to build great frontends. You can download the complete framework, including CSS and JavaScript, and start using it right away. Bootstrap also has a grid. The grid of Bootstrap is mobile-first by default and has 12 columns. In fact, Bootstrap defines four grids: the extra-small grid up to 768 pixels (mobile phones), the small grid between 768 and 992 pixels (tablets), the medium grid between 992 and 1200 pixels (desktop), and finally, the large grid of 1200 pixels and above for large desktops. The grid, all other CSS components, and JavaScript plugins are described and well documented at http://getbootstrap.com/. Bootstrap's default theme looks like the following screenshot: Example of a layout built with Bootstrap 3 The time when all Bootstrap websites looked quite similar is far behind us now. Bootstrap will give you all the freedom you need to create innovative designs. There is much more to tell about Bootstrap, but for now, let's get back to Less. Working with Bootstrap's Less files All the CSS code of Bootstrap is written in Less. You can download Bootstrap's Less files and recompile your own version of the CSS. The Less files can be used to customize, extend, and reuse Bootstrap's code. In the following sections, you will learn how to do this. To download the Less files, follow the links at http://getbootstrap.com/ to Bootstrap's GitHub pages at https://github.com/twbs/bootstrap. On this page, choose Download Zip on the right-hand side column. Building a Bootstrap project with Grunt After downloading the files mentioned earlier, you can build a Bootstrap project with Grunt. Grunt is a JavaScript task runner; it can be used for the automation of your processes. Grunt helps you when performing repetitive tasks such as minifying, compiling, unit testing, and linting your code. Grunt runs on node.js and uses npm, which you saw while installing the Less compiler. Node.js is a standalone JavaScript interpreter built on Google's V8 JavaScript runtime, as used in Chrome. Node.js can be used for easily building fast, scalable network applications. When you unzip the files from the downloaded file, you will find Gruntfile.js and package.json among others. The package.json file contains the metadata for projects published as npm modules. The Gruntfile.js file is used to configure or define tasks and load Grunt plugins. The Bootstrap Grunt configuration is a great example to show you how to set up automation testing for projects containing HTML, Less (CSS), and JavaScript. The parts that are interesting for you as a Less developer are mentioned in the following sections. In package.json file, you will find that Bootstrap compiles its Less files with grunt-contrib-less. At the time of writing this article, the grunt-contrib-less plugin compiles Less with less.js Version 1.7. In contrast to Recess (another JavaScript build tool previously used by Bootstrap), grunt-contrib-less also supports source maps. Apart from grunt-contrib-less, Bootstrap also uses grunt-contrib-csslint to check the compiled CSS for syntax errors. 
The grunt-contrib-csslint plugin also helps improve browser compatibility, performance, maintainability, and accessibility. The plugin's rules are based on the principles of object-oriented CSS (http://www.slideshare.net/stubbornella/object-oriented-css). You can find more information by visiting https://github.com/stubbornella/csslint/wiki/Rules. Bootstrap makes heavy use of Less variables, which can be set by the customizer. Whoever has studied the source of Gruntfile.js may very well also find a reference to the BsLessdocParser Grunt task. This Grunt task is used to build Bootstrap's customizer dynamically based on the Less variables used by Bootstrap. Though the process of parsing Less variables to build, for instance, documentation will be very interesting, this task is not discussed here further. This section ends with the part of Gruntfile.js that does the Less compiling. The following code from Gruntfile.js should give you an impression of how this code will look: less: { compileCore: { options: { strictMath: true, sourceMap: true, outputSourceFiles: true, sourceMapURL: '<%= pkg.name %>.css.map', sourceMapFilename: 'dist/css/<%= pkg.name %>.css.map' }, files: { 'dist/css/<%= pkg.name %>.css': 'less/bootstrap.less' } } Last but not least, let's have a look at the basic steps to run Grunt from the command line and build Bootstrap. Grunt will be installed with npm. Npm checks Bootstrap's package.json file and automatically installs the necessary local dependencies listed there. To build Bootstrap with Grunt, you will have to enter the following commands on the command line: > npm install -g grunt-cli > cd /path/to/extracted/files/bootstrap After this, you can compile the CSS and JavaScript by running the following command: > grunt dist This will compile your files into the /dist directory. The > grunt test command will also run the built-in tests. Compiling your Less files Although you can build Bootstrap with Grunt, you don't have to use Grunt. You will find the Less files in a separate directory called /less inside the root /bootstrap directory. The main project file is bootstrap.less; other files will be explained in the next section. You can include bootstrap.less together with less.js into your HTML for the purpose of testing as follows: <link rel="bootstrap/less/bootstrap.less"type="text/css" href="less/styles.less" /> <script type="text/javascript">less = { env: 'development' };</script> <script src = "less.js" type="text/javascript"></script> Of course, you can compile this file server side too as follows: lessc bootstrap.less > bootstrap.css Dive into Bootstrap's Less files Now it's time to look at Bootstrap's Less files in more detail. The /less directory contains a long list of files. You will recognize some files by their names. You have seen files such as variables.less, mixins.less, and normalize.less earlier. Open bootstrap.less to see how the other files are organized. The comments inside bootstrap.less tell you that the Less files are organized by functionality as shown in the following code snippet: // Core variables and mixins // Reset // Core CSS // Components Although Bootstrap is strongly CSS-based, some of the components don't work without the related JavaScript plugins. The navbar component is an example of this. Bootstrap's plugins require jQuery. You can't use the newest 2.x version of jQuery because this version doesn't have support for Internet Explorer 8. To compile your own version of Bootstrap, you have to change the variables defined in variables.less. 
Because of the last-declaration-wins and lazy-loading rules, it is easy to redeclare some variables.

Creating a custom button with Less

By default, Bootstrap defines seven different buttons, as shown in the following screenshot:

The seven different button styles of Bootstrap 3

Please take a look at the following HTML structure of Bootstrap's buttons before you start writing your Less code:

<!-- Standard button -->
<button type="button" class="btn btn-default">Default</button>

A button has two classes. The first, .btn, only provides layout styles, and the second, .btn-default, adds the colors. In this example, you will only change the colors, and the button's layout will be kept intact.

Open buttons.less in your text editor. In this file, you will find the following Less code for the different buttons:

// Alternate buttons
// --------------------------------------------------

.btn-default {
  .button-variant(@btn-default-color; @btn-default-bg; @btn-default-border);
}

The preceding code makes it clear that you can use the .button-variant() mixin to create your customized buttons. For instance, to define a custom button, you can use the following Less code:

// Customized colored button
// --------------------------------------------------

.btn-colored {
  .button-variant(blue;red;green);
}

If you want to extend Bootstrap with this customized button, add your code to a new file and call this file custom.less. Appending @import "custom.less"; to the list of components inside bootstrap.less will work well. The disadvantage of doing this is that you will have to change bootstrap.less again when updating Bootstrap; so, alternatively, you could create a file such as custombootstrap.less, which contains the following code:

@import "bootstrap.less";
@import "custom.less";

The previous step extends Bootstrap with a custom button; alternatively, you could also change the colors of the default button by redeclaring its variables. To do this, create a new file, again called custombootstrap.less, and add the following code into it:

@import "bootstrap.less";

//== Buttons
//
//## For each of Bootstrap's buttons, define text, background and border color.

@btn-default-color: blue;
@btn-default-bg: red;
@btn-default-border: green;

In some situations you will, for instance, need to use the button styles without everything else of Bootstrap. In these situations, you can use the reference keyword with the @import directive. You can use the following Less code to create a Bootstrap button for your project:

@import (reference) "bootstrap.less";
.btn:extend(.btn){};
.btn-colored {
  .button-variant(blue;red;green);
}

You can see the result of the preceding code by visiting http://localhost/index.html in your browser. Notice that, depending on the version of less.js you use, you may find some unexpected classes in the compiled output. Media queries or extended classes sometimes break the referencing in older versions of less.js.

Use CSS source maps for debugging

When working with large Less code bases, finding the original source can become complex when viewing your results in the browser. Since version 1.5, Less offers support for CSS source maps. CSS source maps enable developer tools to map rules back to their location in the original source files. This also works for compressed files. The latest versions of Google's Chrome Developer Tools offer support for these source files.
Currently, CSS source map debugging won't work for client-side compiling, as used for the examples in this book. The server-side lessc compiler, however, can generate useful CSS source maps. After installing the lessc compiler, you can run:

> lessc --source-map=styles.css.map styles.less > styles.css

The preceding command will generate two files: styles.css.map and styles.css. The last line of styles.css now contains an extra comment that refers to the source map:

/*# sourceMappingURL=styles.css.map */

In your HTML, you only have to include styles.css as you used to:

<link href="styles.css" rel="stylesheet">

When using CSS source maps as described earlier and inspecting your HTML with Google's Chrome Developer Tools, you will see something like the following screenshot:

Inspect source with Google's Chrome Developer Tools and source maps

As you can see, styles now have a reference to their original Less file, such as grid.less, including the line number, which helps you in the process of debugging. The styles.css.map file should be in the same directory as the styles.css file. You don't have to include your Less files in this directory.

Summary

This article has covered the concept of Bootstrap, how to use Bootstrap's Less files, and how the files can be modified to be used according to your convenience.

Resources for Article:

Further resources on this subject:
Getting Started with Bootstrap [Article]
Bootstrap 3.0 is Mobile First [Article]
Downloading and setting up Bootstrap [Article]

Examining the encoding/json Package with Go

Packt
28 Dec 2016
13 min read
In this article by Nic Jackson, author of the book Building Microservices with Go, we will examine the encoding/json package to see just how easy Go makes it for us to use JSON objects for our requests and responses.

(For more resources related to this topic, see here.)

Reading and writing JSON

Thanks to the encoding/json package, which is built into the standard library, encoding and decoding JSON to and from Go types is both fast and easy. It implements the simple Marshal and Unmarshal functions; however, if we need them, the package also provides Encoder and Decoder types, which allow us greater control when reading and writing streams of JSON data. In this section, we are going to examine both of these approaches, but first let's take a look at how simple it is to convert a standard Go struct into its corresponding JSON string.

Marshalling Go structs to JSON

To encode JSON data, the encoding/json package provides the Marshal function, which has the following signature:

func Marshal(v interface{}) ([]byte, error)

This function takes one parameter, which is of the interface type, so that's pretty much any object you can think of, since interface represents any type in Go. It returns a tuple of ([]byte, error). You will see this return style quite frequently in Go. Some languages implement a try...catch approach, which encourages an error to be thrown when an operation cannot be performed. Go suggests the (return type, error) pattern, where the error is nil when an operation succeeds.

In Go, unhandled errors are a bad thing, and while the language does implement the panic and recover functions, which resemble exception handling in other languages, the situations in which you should use them are quite different (The Go Programming Language, Donovan and Kernighan). In Go, panic causes normal execution to stop, and all deferred function calls in the goroutine are executed; the program will then crash with a log message. It is generally used for unexpected errors that indicate a bug in the code, and good, robust Go code will attempt to handle these runtime exceptions and return a detailed error object back to the calling function. This pattern is exactly what is implemented with the Marshal function. In case Marshal cannot create a JSON-encoded byte array from the given object, which could be due to a runtime panic, this is captured and an error object detailing the problem is returned to the caller.

Let's try this out, expanding on our existing example. Instead of simply printing a string from our handler, let's create a simple struct for the response and return that:

type helloWorldResponse struct {
    Message string
}

In our handler, we will create an instance of this object, set the message, and then use the Marshal function to encode it to a string before returning. Let's see what that will look like:

func helloWorldHandler(w http.ResponseWriter, r *http.Request) {
    response := helloWorldResponse{Message: "Hello World"}
    data, err := json.Marshal(response)
    if err != nil {
        panic("Ooops")
    }

    fmt.Fprint(w, string(data))
}

Now when we rerun our program and refresh our browser, we'll see the following output rendered in valid JSON:

{"Message":"Hello World"}
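The listings in this article omit the surrounding imports and the main function. As a minimal, self-contained sketch of how the Marshal-based handler could be wired up (the /helloworld route and port 8080 match the curl examples used later in this article, while the single-file layout itself is an assumption rather than the book's listing), consider the following:

package main

import (
    "encoding/json"
    "fmt"
    "net/http"
)

type helloWorldResponse struct {
    Message string
}

func helloWorldHandler(w http.ResponseWriter, r *http.Request) {
    response := helloWorldResponse{Message: "Hello World"}

    // Marshal the struct into a byte slice; the error branch mirrors
    // the article's example and simply panics on failure.
    data, err := json.Marshal(response)
    if err != nil {
        panic("Ooops")
    }

    fmt.Fprint(w, string(data))
}

func main() {
    // Register the handler and start listening.
    http.HandleFunc("/helloworld", helloWorldHandler)
    http.ListenAndServe(":8080", nil)
}

Running this program and visiting http://localhost:8080/helloworld in a browser produces the JSON shown above.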
This is awesome, but the default behavior of Marshal is to take the literal name of the field and use that as the field name in the JSON output. What if I prefer to use camel case and would rather see message? Could we just rename the field in our struct to message?

Unfortunately, we can't, because in Go, lowercase properties are not exported. Marshal will ignore these and will not include them in the output.

All is not lost: the encoding/json package implements struct field tags, which allow us to change the output for the property to anything we choose. The example code is as follows:

type helloWorldResponse struct {
    Message string `json:"message"`
}

Using the struct field's tags, we can have greater control over how the output will look. In the preceding example, when we marshal this struct, the output from our server would be the following:

{"message":"Hello World"}

This is exactly what we want, but we can use field tags to control the output even further. We can convert output types and even ignore a field altogether if we need to:

type helloWorldResponse struct {
    // change the output field to be "message"
    Message string `json:"message"`

    // do not output this field
    Author string `json:"-"`

    // do not output the field if the value is empty
    Date string `json:",omitempty"`

    // convert output to a string and rename "id"
    Id int `json:"id,string"`
}

Channels, complex types, and functions cannot be encoded in JSON. Attempting to encode these types will result in an UnsupportedTypeError being returned by the Marshal function. Marshal also can't represent cyclic data structures, so if your struct contains a circular reference, then Marshal will result in an infinite recursion, which is never a good thing for a web request.

If we want to export our JSON pretty-printed with indentation, we can use the MarshalIndent function, which allows you to pass additional string parameters to specify the prefix and the indent you would like (two spaces, not a tab, right?):

func MarshalIndent(v interface{}, prefix, indent string) ([]byte, error)
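To see how the field tags and MarshalIndent behave together, here is a small, self-contained sketch; the sample values and the standalone program structure are illustrative and not taken from the book:

package main

import (
    "encoding/json"
    "fmt"
)

type helloWorldResponse struct {
    Message string `json:"message"`
    Author  string `json:"-"`
    Date    string `json:",omitempty"`
    Id      int    `json:"id,string"`
}

func main() {
    response := helloWorldResponse{
        Message: "Hello World",
        Author:  "ignored",
        Id:      1,
    }

    // Compact output: {"message":"Hello World","id":"1"}
    data, err := json.Marshal(response)
    if err != nil {
        panic(err)
    }
    fmt.Println(string(data))

    // Pretty-printed output with no prefix and two-space indentation.
    pretty, err := json.MarshalIndent(response, "", "  ")
    if err != nil {
        panic(err)
    }
    fmt.Println(string(pretty))
}

The Author field is dropped because of the - tag, and the empty Date field is skipped thanks to omitempty.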
The astute reader might have noticed that we are encoding our struct into a byte array and then writing that to the response stream. This does not seem to be particularly efficient, and in fact, it is not. Go provides encoders and decoders, which can write directly to a stream. Since we already have a stream with the ResponseWriter interface, let's do just that.

Before we do so, I think we need to look at the ResponseWriter interface a little to see what is going on there. ResponseWriter is an interface that defines three methods:

// Returns the map of headers which will be sent by the
// WriteHeader method.
Header() Header

// Writes the data to the connection. If WriteHeader has not
// already been called then Write will call
// WriteHeader(http.StatusOK).
Write([]byte) (int, error)

// Sends an HTTP response header with the status code.
WriteHeader(int)

If we have a ResponseWriter, how can we use this with fmt.Fprint(w io.Writer, a ...interface{})? This method requires a Writer interface as a parameter, and we have a ResponseWriter. If we look at the signature for Writer, we can see that it is the following:

Write(p []byte) (n int, err error)

Because the ResponseWriter interface implements this method, it also satisfies the Writer interface; therefore, any object that implements ResponseWriter can be passed to any function that expects Writer. Amazing! Go rocks, but we don't have an answer to our question yet: is there any better way to send our data to the output stream without marshalling to a temporary string before we return it? The encoding/json package has a function called NewEncoder.

This returns an Encoder object, which can be used to write JSON straight to an open writer, and guess what: we have one of those.

func NewEncoder(w io.Writer) *Encoder

So instead of storing the output of Marshal into a byte array, we can write it straight to the HTTP response, as shown in the following code:

func helloWorldHandler(w http.ResponseWriter, r *http.Request) {
    response := helloWorldResponse{Message: "Hello World"}

    encoder := json.NewEncoder(w)
    encoder.Encode(&response)
}

We will look at benchmarking in a later chapter, but to see why this is important, here's a simple benchmark to check the two methods against each other; have a look at the output:

go test -v -run="none" -bench=. -benchtime="5s" -benchmem
testing: warning: no tests to run
PASS
BenchmarkHelloHandlerVariable    10000000    1211 ns/op    248 B/op    5 allocs/op
BenchmarkHelloHandlerEncoder     10000000     662 ns/op      8 B/op    1 allocs/op
ok github.com/nicholasjackson/building-microservices-in-go/chapter1/bench 20.650s

Using the Encoder rather than marshalling to a byte array is nearly 50% faster. We are dealing with nanoseconds here, so that time may seem irrelevant, but it isn't; this was two lines of code. If you have that level of inefficiency throughout the rest of your code, your application will run slower, you will need more hardware to satisfy the load, and that will cost you money. There is nothing clever in the differences between the two methods: all we have done is understood how the standard packages work and chosen the correct option for our requirements. That is not performance tuning, that is understanding the framework.
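The book's full benchmark listing is not reproduced in this article. As a rough sketch of how such a comparison could be written with the standard testing and net/http/httptest packages (the handler wiring, package name, and use of a ResponseRecorder are assumptions here, so the absolute numbers will differ from the output above), consider:

package bench

import (
    "encoding/json"
    "fmt"
    "net/http"
    "net/http/httptest"
    "testing"
)

type helloWorldResponse struct {
    Message string `json:"message"`
}

// Marshals to a byte slice first and then writes the slice to the response.
func helloHandlerVariable(w http.ResponseWriter, r *http.Request) {
    response := helloWorldResponse{Message: "Hello World"}
    data, err := json.Marshal(response)
    if err != nil {
        http.Error(w, "Internal server error", http.StatusInternalServerError)
        return
    }
    fmt.Fprint(w, string(data))
}

// Streams the JSON directly to the ResponseWriter.
func helloHandlerEncoder(w http.ResponseWriter, r *http.Request) {
    response := helloWorldResponse{Message: "Hello World"}
    json.NewEncoder(w).Encode(&response)
}

func BenchmarkHelloHandlerVariable(b *testing.B) {
    r := httptest.NewRequest(http.MethodGet, "/helloworld", nil)
    for i := 0; i < b.N; i++ {
        // Each iteration gets a fresh recorder, which adds a little
        // overhead of its own to both benchmarks.
        helloHandlerVariable(httptest.NewRecorder(), r)
    }
}

func BenchmarkHelloHandlerEncoder(b *testing.B) {
    r := httptest.NewRequest(http.MethodGet, "/helloworld", nil)
    for i := 0; i < b.N; i++ {
        helloHandlerEncoder(httptest.NewRecorder(), r)
    }
}

Running go test -run="none" -bench=. -benchmem against a file like this prints per-operation timings and allocation counts in the same format as the output shown above.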
Unmarshalling JSON to Go structs

Now that we have learned how we can send JSON back to the client, what if we need to read input before returning the output? We could use URL parameters, and we will see what that is all about in the next chapter, but usually you will need more complex data structures, which means the service has to accept JSON as part of an HTTP POST request. If we apply techniques similar to those we learned in the previous section (to write JSON), reading JSON is just as easy.

To decode JSON into a struct, the encoding/json package provides us with the Unmarshal function:

func Unmarshal(data []byte, v interface{}) error

The Unmarshal function works in the opposite way to Marshal: it allocates maps, slices, and pointers as required. Incoming object keys are matched using either the struct field name or its tag and will work with a case-insensitive match; however, an exact match is preferred. Like Marshal, Unmarshal will only set exported struct fields: those that start with an upper case letter.

We start by adding a new struct to represent the request. Unmarshal can also decode JSON into an empty interface, which would then hold one of the following types:

map[string]interface{} // for JSON objects
[]interface{}          // for JSON arrays

Which type it is depends on whether our JSON is an object or an array. In my opinion, it is much clearer to the readers of our code if we explicitly state what we are expecting as a request. We can also save ourselves work by not having to manually cast the data when we come to use it.

Remember two things:

You do not write code for the compiler; you write code for humans to understand
You will spend more time reading code than you do writing it

We are going to do ourselves a favor by taking these two points into account and creating a simple struct to represent our request, which will look like this:

type helloWorldRequest struct {
    Name string `json:"name"`
}

Again, we are going to use struct field tags, because while we could let Unmarshal do case-insensitive matching so that {"name": "World"} would correctly unmarshal into the struct the same as {"Name": "World"}, when we specify a tag, we are being explicit about the request form, and that is a good thing. In terms of speed and performance, it is also about 10% faster, and remember: performance matters.

To access the JSON sent with the request, we need to take a look at the http.Request object passed to our handler. The following listing does not show all the fields in the request, just the ones we are going to be immediately dealing with. For the full documentation, I recommend checking out the docs at https://godoc.org/net/http#Request.

type Request struct {
    ...
    // Method specifies the HTTP method (GET, POST, PUT, etc.).
    Method string

    // Header contains the request header fields received by the server.
    // The Header type is a map[string][]string.
    Header Header

    // Body is the request's body.
    Body io.ReadCloser
    ...
}

The JSON that has been sent with the request is accessible in the Body field. Body implements the io.ReadCloser interface as a stream and does not return []byte or string data directly. If we need the data contained in the body, we can simply read it into a byte array, like the following example:

body, err := ioutil.ReadAll(r.Body)
if err != nil {
    http.Error(w, "Bad request", http.StatusBadRequest)
    return
}

Here is something we'll need to remember: we are not calling Body.Close(). If we were making a call with a client, we would need to do this, as it is not automatically closed; however, when used in a ServeHTTP handler, the server automatically closes the request stream.

To see how this all works, we can look at the following handler:

func helloWorldHandler(w http.ResponseWriter, r *http.Request) {

    body, err := ioutil.ReadAll(r.Body)
    if err != nil {
        http.Error(w, "Bad request", http.StatusBadRequest)
        return
    }

    var request helloWorldRequest
    err = json.Unmarshal(body, &request)
    if err != nil {
        http.Error(w, "Bad request", http.StatusBadRequest)
        return
    }

    response := helloWorldResponse{Message: "Hello " + request.Name}

    encoder := json.NewEncoder(w)
    encoder.Encode(response)
}

Let's run this example and see how it works. To test it, we can simply use curl to send a request to the running server. If you feel more comfortable using a GUI tool, then Postman, which is available for the Google Chrome browser, will work just fine. Otherwise, feel free to use your preferred tool.

$ curl localhost:8080/helloworld -d '{"name":"Nic"}'

You should see the following response:

{"message":"Hello Nic"}

What do you think will happen if you do not include a body with your request?

$ curl localhost:8080/helloworld

If you guessed correctly that you would get an "HTTP status 400 Bad Request" error, then you win a prize.
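To see why the empty body ends up as a 400 before any of our response code runs, here is a small standalone sketch of Unmarshal's behavior; the payloads are invented for illustration and are not part of the book's example:

package main

import (
    "encoding/json"
    "fmt"
)

type helloWorldRequest struct {
    Name string `json:"name"`
}

func main() {
    payloads := []string{
        `{"name": "Nic"}`, // exact match on the field tag
        `{"Name": "Nic"}`, // a case-insensitive match also works
        ``,                // an empty body, as sent by curl without -d
    }

    for _, p := range payloads {
        var request helloWorldRequest
        if err := json.Unmarshal([]byte(p), &request); err != nil {
            // The empty payload fails here, which is the condition our
            // handler translates into http.StatusBadRequest.
            fmt.Println("error:", err)
            continue
        }
        fmt.Printf("decoded %q into %+v\n", p, request)
    }
}

The first two payloads decode successfully, while the empty one returns an error, which is exactly what the handler turns into the 400 response.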
The Error function replies to the request with the given message and status code:

func Error(w ResponseWriter, error string, code int)

Once we have sent this, we need to return, stopping further execution of the function, as Error does not close the ResponseWriter or return flow to the calling function automatically.

You might think you are done, but have a go and see whether you can improve the performance of the handler. Think about the things we were talking about when marshalling JSON. Got it? Well, if not, here is the answer: again, all we are doing is using Decoder, which is the counterpart of the Encoder we used when writing JSON, as shown in the following code example. This nets an instant 33% performance increase, and with less code, too.

func helloWorldHandler(w http.ResponseWriter, r *http.Request) {

    var request helloWorldRequest
    decoder := json.NewDecoder(r.Body)

    err := decoder.Decode(&request)
    if err != nil {
        http.Error(w, "Bad request", http.StatusBadRequest)
        return
    }

    response := helloWorldResponse{Message: "Hello " + request.Name}

    encoder := json.NewEncoder(w)
    encoder.Encode(response)
}

Now that you can see just how easy it is to encode and decode JSON with Go, I would recommend taking 5 minutes to dig through the documentation for the encoding/json package, as there is a whole lot more that you can do with it: https://golang.org/pkg/encoding/json/

Summary

In this article, we looked at encoding and decoding data using the encoding/json package.

Resources for Article:

Further resources on this subject:
Microservices – Brave New World [article]
A capability model for microservices [article]
Breaking into Microservices Architecture [article]

Edge, Chrome, Brave share updates on upcoming releases, recent milestones, and more at State of Browsers event

Bhagyashree R
24 Jun 2019
9 min read
Last month, This Dot Labs, a framework-agnostic JavaScript consultancy, conducted its biannual online live streaming event, This.JavaScript - State of Browsers. In this live stream, representatives of popular browsers talk about the features users can look forward to, upcoming releases, and much more. This time Firefox was missing. However, in attendance were:

Stephanie Drescher, Program Manager, Microsoft Edge
Brian Kardell, Developer Advocate, Igalia, an active contributor to WebKit
Rijubrata Bhaumik, Software Engineer, Intel, who talked about Intel's contributions to the web
Jonathan Sampson, Developer Relations, Brave
Paul Kinlan, Sr. Developer Advocate, Google
Diego Gonzalez, Product Manager, Samsung Internet

The event was moderated by Tracy Lee, the founder of This Dot Labs. Following are some of the updates shared by the browser representatives.

What's new with Edge

In December last year, Microsoft announced that it would be adopting Chromium in the development of Microsoft Edge for desktop, and beginning this year we saw that decision come to fruition. The tech giant made the first preview builds of the Chromium-based Edge available to both macOS and Windows 10 users. These preview builds are available for testing from the Microsoft Edge Insider site. This Chromium-powered Edge is available for iOS and Android users too.

Stephanie Drescher shared what has changed for the Edge team after switching to Chromium. The switch is enabling them to deliver and update the Edge browser across all supported versions of Windows, and to update the browser more frequently, as they are no longer tied to the operating system. The Edge team is not just using Chromium but also contributing all web platform enhancements back to Chromium by default. The team has already made 400+ commits to the Chromium project.

Edge comes with support for cross-platform and installable progressive web apps directly from the browser. The team's next focus area is to improve the Windows experience in terms of accessibility, localization, scrolling, and touch. At Build 2019, Microsoft also announced its new WebView that will be available for Win32 and UWP apps. Drescher said this "will give you the option of an evergreen Chromium platform via edge or the option to bring your own version for AppCompat via a model that's similar to Electron."

Moving on to dev tools, the browser has several new dev tools that are visually aligned with VS Code. The updates include dark mode on by default and control inputs, and the team is further exploring "more ways to align the experience between your browser dev tools and VS Code." The browser's built-in tools can now inspect and debug any Microsoft Edge-powered web content, including PWAs and WebView. No doubt these are some exciting features to look forward to.

Edge has come to iOS and macOS; however, the question of whether it will support Linux in the future remains unanswered. Drescher said that the team has no plans right now to support Linux, but looking at the number of user requests for Linux support, they are starting to think about it.

What's new with Chrome

At I/O 2019, Google shared its vision for Chrome, which is making it "instant, powerful, and safe" to help improve the overall browsing experience. To make Chrome faster and lighter, a bunch of improvements have been made to V8, Chrome's JavaScript engine. Now, JavaScript memory usage is down by 20% for real-world apps.
After addressing the startup bottlenecks, Chrome's loading speed is now 50% better on low-end devices and 10% better across devices. Scrolling performance has also improved by 18%. Along with these speed gains, the team has also introduced a few features in the web platform that aim to take the burden away from developers:

The lazy loading mechanism reduces the initial payload to improve load time. You just need to add loading="lazy" to your image or iframe elements. The idea is simple: the web browser will not download an image or iframe that has the loading attribute until the user scrolls near to it.

The Portals API, first showcased at I/O this year, aims to make navigation between sites and web pages smoother. Portals is very similar to an iframe in that it allows web developers to embed remote content in their pages. The difference is that with Portals you are able to navigate inside the content you are embedding.

As a part of making Chrome more powerful, Google is actively working on bridging the capabilities gap between native and web under Project Fugu. It has already introduced two APIs, Web Share and Web Share Target, and plans to bring more capabilities like a writable file API, event alarms, user idle detection, and more. As the name suggests, the Web Share API allows websites to invoke the native sharing capabilities of the host platform; users will be able to easily share either a URL or text on pretty much any platform they want to. Until now, we were restricted to sharing content with native apps that had registered as a share target. With the Web Share Target API, installed web apps can also register with the underlying OS as a target to receive shared content.

Talking about the safety aspect, Chrome now comes with support for WebAuthn, a new authentication standard by the W3C, starting from version 67. This API allows servers to integrate strong authenticators that are built into devices, for instance, Windows Hello or Apple's Touch ID.

What's new with Brave

Edge, Chrome, and Brave share one common thing: they are all Chromium-based. What sets Brave apart is the Basic Attention Token (BAT). Jonathan Sampson, who was representing Brave, said that we have seen a "Cambrian Explosion" of cryptocurrencies, utility tokens, and blockchain assets like Bitcoin, Litecoin, and Ethereum.

Partnership with Coinbase

Previously, if we wanted to acquire these assets, there was only one way to do it: "mining", which meant a huge investment in expensive GPUs and power bills. Brave believes that the next way to earn these assets is primarily with your "attention". Brave's goal is to take users from mining to earning blockchain assets. As a part of this goal, it has partnered with Coinbase, one of the prominent companies in the blockchain space. Users will get 10 dollars in the form of BAT just for learning about the state of digital advertising and what Brave and attention tokens are doing in that space. Through BAT, Brave is providing its consumers with a direct way to support their content creators. Content creators can customize and personalize this entire experience by signing up on Brave's creators page.

Implementation changes in how BAT is sent to creators

The Brave team has also made some implementation changes in terms of how this whole thing works. Previously, consumers could send these tokens to anyone.
The token would then go into an omnibus settlement wallet and stay there until that creator verified with the program and demonstrated ownership over their web property. Only after all this would the creator get access to these tokens for use. Unfortunately, this could mean that some tokens had to "sit in a state of limbo" for an indefinite amount of time. Now, the team has re-engineered this process to hold these tokens inside your wallet for up to 90 days. If and when that property is verified, the tokens are transmitted out. If the property is never verified, the tokens are released back into your wallet, and you can send them to another creator instead of letting them sit in that omnibus settlement wallet. Sampson further added, "of course the entire process goes through the anonymize protocol so that brave nor anybody else has any idea which websites you're visiting or to whom you are contributing support."

Inner workings of Brave Ads

To improve ad recommendations, Brave ships with an integrated machine learning model. This feature is opt-in, so the user gets to decide when and how many ads they want to see in order to earn BAT from their attention. The ML model can study the user and learn about them each day. Every day, a catalog is downloaded to each user's device, and the individual machines churn away on that catalog to figure out which ads are relevant to that individual. Once the relevant ads are found, users see a small operating system notification. Brave sends 70% of the revenue made from the users' attention back to the user in the form of BAT.

Brave Sync (Beta)

The beta version of Brave Sync is available across platforms, from Windows, macOS, and Linux to Android and iOS. Similar to Brave Ads, this is an opt-in feature that allows you to automatically sync browsing data across devices. Right now it is in beta and supports syncing only bookmarks. In future releases, we can expect support for tabs, history, passwords, and autofill, as well as Brave Rewards. Once you enable it on one device, you just need to scan a QR code or enter a secret phrase to register another device for syncing.

Canary builds available

Like all the other browsers, Brave has also started to share its nightly and dev builds to give developers an "earlier insight" into the work it is doing. You can access them through the Brave download page.

These were some of the major updates discussed in the live stream. Intel and Samsung also talked about their contributions to the web, and Igalia's developer Brian Kardell talked about dark mode, pointer events, and more in WebKit. Watch the full event on YouTube for more details.

https://www.youtube.com/watch?v=olSQai4EUD8

Elvis Pranskevichus on limitations in SQL and how EdgeQL can help
Microsoft makes the first preview builds of Chromium-based Edge available for testing
Brave introduces Brave Ads that share 70% revenue with users for viewing ads

Issues and Wikis in GitLab

Packt
20 Nov 2013
6 min read
(For more resources related to this topic, see here.)

Issues

The built-in features for issue tracking and documentation will be very beneficial to you, especially if you're working on extensive software projects: those with many components, or those that need to be supported in multiple versions at once, for example, stable, testing, and unstable. In this article, we will have a closer look at the formats that are supported for issues and wiki pages (in particular, Markdown), the elements that can be referenced from within these, and how issues can be organized. Furthermore, we will go through the process of assigning issues to team members and keeping documentation in wiki pages, which can also be edited locally. Lastly, we will see how the RSS feeds generated by GitLab can keep your team in a closer loop around the projects they work on.

The metadata covered in this article may seem trivial, but many famous software projects have gained traction due to their extensive and well-written documentation, which initially was written by the core developers. GitLab enables your users to do the same with their projects, even if only internally; it opens up much more efficient collaboration.

GitLab-flavored Markdown

GitLab comes with a Markdown formatting parser that is fairly similar to GitHub's, which makes it very easy to adapt and migrate. Many standalone editors also support this format, such as Mou (http://mouapp.com/) for Mac or MarkdownPad (http://markdownpad.com/) for Windows. On Linux, editors with a split view, such as ReText (http://sourceforge.net/projects/retext/) or the more Zen-writing UberWriter (http://uberwriter.wolfvollprecht.de/), are available. For the popular Vim editor, multiple Markdown plugins are up for grabs on a number of GitHub repositories; one of them is Vim Markdown (https://github.com/tpope/vim-markdown) by Tim Pope. Lastly, I'd like to mention that you don't need a dedicated editor for Markdown, because Markdown files are plain text; the mentioned editors simply enhance the view through syntax highlighting and preview modes.

About Markdown

Markdown was originally written by John Gruber and has since evolved into various flavors. The intention of this very lightweight markup language is to have a source that is easy to edit and can be transformed into meaningful HTML to be displayed on the Web. Different variations of Markdown have made it into a majority of very successful software projects as the default language; readme files, documentation, and even blogging engines adopt it. In Markdown, text styles can be applied, links placed, and images inserted. If Markdown, by default, does not support what you are currently trying to do, you can insert plain HTML, which will not be altered by the Markdown parser.

Referring to elements inside GitLab

When working with source code, it can be important to refer to a line of code, a file, or other things when discussing something. Because many development teams are nowadays spread throughout the world, GitLab adapts to that and makes it easy to refer to and reference many things directly from comments, wiki pages, or issues. Some things, like files or lines, can be referenced via links, because GitLab has unique links to the branches of a repository; others are more directly accessible.
The following items (basically, prefixed strings or IDs) can be referenced through shortcodes:

commit messages
comments
wall posts
issues
merge requests
milestones
wiki pages

To reference items, use the following shortcodes inside any field that supports Markdown or RDoc on the web interface:

@foo for team members
#123 for issues
!123 for merge requests
$123 for snippets
1234567 for commits

Issues, knowing what needs to be done

An issue is a text message of variable length describing a bug in the code, an improvement to be made, or something else that should be done or discussed. By commenting on the issue, developers or project leaders can respond to this request or statement. The meta information attached to an issue can be very valuable to the team, because developers can be assigned to an issue, and it can be tagged or labeled with keywords that describe the content or area to which it belongs. Furthermore, you can also set the milestone in which this fix or feature should be included. In the following screenshot, you can see the interface for issues:

Creating issues

By navigating to the Issues tab of a repository in the web interface, you can easily create new issues. Their title should be brief and precise, because a more elaborate description area is available. The description area supports GitLab-flavored Markdown, as mentioned previously. Upon creation, you can choose a milestone and a user to assign the issue to, but you can also leave these fields unset, possibly to let your developers choose what they want to work on and at what time. Before they begin their work, they can assign the issues to themselves. In the following screenshot, you can see what the issue creation form looks like:

Working with labels or tags

Labels are tags used to organize issues by topic and severity. Creating labels is as easy as inserting them, separated by commas, into the respective field while creating an issue. Currently, in version 5.2, certain keywords trigger a certain background color on the label. Labels like critical or bug turn red, feature turns green, and other labels are blue by default. The following screenshot shows what a list of labeled features looks like:

After the creation of a label, it will be listed under the Labels tab within the Issues page, with a link that lists all the issues that have been given the same label. Filtering by label, assigned user, or milestone is also possible from the list of issues within each project's overview.

Summary

In this article, we have had a look at the project management side of things. You can now make use of the built-in possibilities to distribute tasks across team members through issues, keep track of things that still need to be done around those issues, or enable observers to point out bugs.

Resources for Article:

Further resources on this subject:
Using Gerrit with GitHub [Article]
The architecture of JavaScriptMVC [Article]
Using the OSGi Bundle Repository in OSGi and Apache Felix 3.0 [Article]

Microservices – Brave New World

Packt
17 Mar 2016
9 min read
In this article by David Gonzalez, author of the book Developing Microservices with Node.js, we will cover the need for microservices, explain the monolithic approach, and study how to build and deploy microservices.

(For more resources related to this topic, see here.)

Need for microservices

The world of software development has evolved quickly over the past 40 years. One of the key points of this evolution has been the size of these systems. From the days of MS-DOS, we have taken a hundred-fold leap into our present systems. This growth in size creates a need for better ways of organizing code and software components. Usually, when a company grows due to business needs, which is known as organic growth, the software gets organized into a monolithic architecture, as it is the easiest and quickest way of building software. After a few years (or even months), adding new features becomes harder due to the coupled nature of the created software.

Monolithic software

There are a few companies that have already started building their software using microservices, which is the ideal scenario. The problem is that not all companies can plan their software upfront. Instead of planning, these companies build the software based on the organic growth experienced: a few software components that group business flows by affinity. It is not rare to see companies having two big software components: the user-facing website and the internal administration tools. This is usually known as a monolithic software architecture.

Some of these companies face big problems when trying to scale their engineering teams. It is hard to coordinate teams that build, deploy, and maintain a single software component. Clashes on releases and the reintroduction of bugs are common problems that drain a big chunk of energy from the teams. One of the most interesting solutions to this problem (it also has other benefits) is to split the monolithic software into microservices, so that the teams are able to specialize in a few smaller, autonomous, and isolated software components that can be versioned, updated, and deployed without interfering with the rest of the systems of the company. This enables the engineering team to create isolated and autonomous units of work that are highly specialized in a given task (such as sending e-mails, processing card payments, and so on).

Microservices in the real world

Microservices are small software components that specialize in one task and work together to achieve a higher-level task. Forget about software for a second and think about how a company works. When someone applies for a job in a company, he applies for a given position: software engineer, systems administrator, or office manager. The reason for this can be summarized in one word: specialization. If you work as a software engineer, you will get better with experience and add more value to the company. The fact that you don't know how to deal with a customer won't affect your performance, as it is not your area of expertise and will hardly add any value to your day-to-day work.

A microservice is an autonomous unit of work that can execute one task without interfering with other parts of the system, similar to what a job position is to a company. This has a number of benefits that can be used in favor of the engineering team in order to help scale the systems of a company.
Nowadays, hundreds of systems are built using a microservices-oriented architecture, as follows:

Netflix: They are one of the most popular streaming services and have built an entire ecosystem of applications that collaborate in order to provide a reliable and scalable streaming system used across the globe.

Spotify: They are one of the leading music streaming services in the world and have built this application using microservices. Every single widget of the application (which is a website exposed as a desktop app using the Chromium Embedded Framework (CEF)) is a different microservice that can be updated individually.

First, there was the monolith

A huge percentage (my estimate is around 90%) of modern enterprise software is built following a monolithic approach: huge software components that run in a single container and have a well-defined development life cycle, which goes completely against the agile principles of deliver early and deliver often (https://en.wikipedia.org/wiki/Release_early,_release_often):

Deliver early: The sooner you fail, the easier it is to recover. If you work for two years on a software component before it is released, there is a huge risk of deviation from the original requirements, which are usually wrong and change every few days.

Deliver often: All of the software is delivered to all the stakeholders so that they can give their input and see the changes reflected in the software. Errors can be fixed in a few days and improvements are identified easily.

Companies build big software components instead of smaller ones that work together because it is the natural thing to do, as follows:

The developer has a new requirement.
He builds a new method on an existing class in the service layer.
The method is exposed on the API via HTTP, SOAP, or any other protocol.

Now, repeat this by the number of developers in your company and you will obtain something called organic growth. Organic growth is the type of uncontrolled and unplanned growth of software systems under business pressure without adequate long-term planning, and it is bad.

How to tackle organic growth?

The first thing needed to tackle organic growth is to make sure that business and IT are aligned in the company. Usually, in big companies, IT is not seen as a core part of the business. Organizations outsource their IT systems, keeping the cost in mind, but not the quality, so the partners building these software components are focused on one thing: delivering on time and according to the specification, even if it is incorrect. This produces a less-than-ideal ecosystem for responding to business needs with a working solution for an existing problem. IT is led by people who barely understand how the systems are built and usually overlook the complexity of software development. Fortunately, this is a changing tendency, as IT systems have become the drivers of 99% of the businesses around the world, but we need to be smarter about how we build them.

The first measure to tackle organic growth is to align IT and business stakeholders in order to work together; educating the non-technical stakeholders is the key to success. If we go back to the example from the previous section (few releases with quite big changes), can we do it better? Of course we can. Divide the work into manageable software artifacts that model a single and well-defined business activity and give it an entity.
It does not need to be a microservice at this stage, but keeping the logic inside a separated, well-defined, easily testable, and decoupled module will give us a huge advantage for future changes in the application.

Building microservices – The fallback strategy

When we design a system, we usually think about the replaceability of the existing components. For example, when using a persistence technology in Java, we tend to lean towards the standards (Java Persistence API (JPA)) so that we can replace the underlying implementation without too much effort. Microservices take the same approach, but they isolate the problem instead of working towards easy replaceability. Also, e-mailing is something that, although it seems simple, always ends up giving problems.

Consider that we want to replace Mandrill with a plain SMTP server, such as Gmail. We don't need to do anything special; we just change the implementation and roll out the new version of our microservice, as follows:

var nodemailer = require('nodemailer');
var seneca = require("seneca")();

var transporter = nodemailer.createTransport({
  service: 'Gmail',
  auth: {
    user: 'info@micromerce.com',
    pass: 'verysecurepassword'
  }
});

/**
 * Sends an email including the content.
 */
seneca.add({area: "email", action: "send"}, function(args, done) {
  var mailOptions = {
    from: 'Micromerce Info ✔ <info@micromerce.com>',
    to: args.to,
    subject: args.subject,
    html: args.body
  };
  transporter.sendMail(mailOptions, function(error, info) {
    if (error) {
      return done({code: error}, null);
    }
    done(null, {status: "sent"});
  });
});

To the outside world, our simplest version of the e-mail sender now uses SMTP through Gmail to deliver our e-mails. We could even roll out one server with this version and send some traffic to it in order to validate our implementation without affecting all the customers (in other words, contain the failure).

Deploying microservices

Deployment is usually the ugly friend of the software development life cycle party. There is a missing contact point between development and system administration, which DevOps is going to solve in the following few years (or has already done so and no one told me). The following is the graph showing the cost of fixing software bugs versus the various phases of development:

From continuous integration up to continuous delivery, the process should be automated as much as possible, where "as much as possible" means 100%. Remember, humans are imperfect; if we rely on humans carrying out a manual, repetitive process to get bug-free software, we are walking the wrong path. Remember that a machine will always be error free (as long as the algorithm that is executed is error free), so... why not let a machine control our infrastructure?

Summary

In this article, we saw how microservices are required in complex software systems, how the monolithic approach is useful, and how to build and deploy microservices.

Resources for Article:

Further resources on this subject:
Making a Web Server in Node.js [article]
Node.js Fundamentals and Asynchronous JavaScript [article]
An Introduction to Node.js Design Patterns [article]