How-To Tutorials - Web Development

1802 Articles
How to integrate Angular 2 with React

Mary Gualtieri
01 Sep 2016
5 min read
With so many JavaScript options out there, it can be overwhelming to choose the best framework for the job. In recent years, two popular frameworks have come to fame: React and Angular. React has gained a lot of popularity because it is what Facebook and Instagram are built on. So, which one do you use? Ask most JavaScript developers and they will advise you to use one or the other, and that usually comes down to personal preference. Instead, let's explore Angular and React and see how the two can actually work together.

A common misconception about React is that it is a full JavaScript framework competing with Angular. It is not. React is a user interface library: it is just the view in an 'MVC' architecture, with a little bit of controller logic mixed in. In other words, React is a template (the Angular term for a view) to which you can add some controller logic. It is the same idea as integrating jQuery UI into a JavaScript application. React complements Angular, a frontend framework, and makes it more efficient for the user, because you can write a reusable component that can be plugged into an application.

Angular has many pros, and that is what makes it so appealing to a lot of developers and companies. In my personal experience, Angular has been very powerful for building solid applications. However, one of the cons that bothers me about Angular is how it goes about executing a template. I always want to write DRY code, and that can be a problem with Angular: you can end up writing a bunch of HTML that is complicated and difficult to read, and you can also end up with a spaghetti effect in your CSS. One of Angular's big strengths is its watchers and binders. When executed correctly in well-thought-out places, they are great for fast binding and good responsiveness. But we all make mistakes, and when they are misused and far too many elements in your HTML are bound, you get performance problems and a slow, laggy application that no one wants to use.

There is a way to rectify this: you can use React to shore up Angular's weaknesses. React was designed to work well with other libraries and makes rendering views or templates much faster. The beauty of React is its more efficient algorithm on the virtual DOM. In plain terms, it allows you to change only the parts of the application that need to be updated without touching the rest: you send a command to update the user interface, and React compares these changes to the existing DOM.

Instead of theorizing about how Angular and React complement one another, let's see how it looks in code. First, create an HTML file that includes an Angular script and a React script (remember, your relative paths may be different from the example). Next, create a React component that renders a string entered by the user. What happens here is that we use React to render our model: we create a component that renders the props passed to it, then we create an Angular directive and controller to start the app. The directive calls the React component and tells it to render.

Let's also look at an example that integrates Angular 2. We can demonstrate how a React component can be self-contained yet still be injected into the Angular 2 world. Angular 2 has a lifecycle hook, ngOnInit(), which is a convenient place to trigger the code that renders a React component (a reconstructed sketch of this example follows below).
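The code figures from the original article are not reproduced above, so the following is a minimal sketch of what the Angular 2 host component and the React component it mounts could look like. Everything in it is an assumption for illustration: the names ReactTitleWidget, ReactHostComponent, react-root, and the static initialize helper are hypothetical, and the snippet targets 2016-era Angular 2 and React APIs.

```tsx
// react-angular-bridge.tsx -- single-file sketch; in a real project the React
// widget and the Angular host component would live in separate files.
import * as React from 'react';
import * as ReactDOM from 'react-dom';
import { Component, OnInit } from '@angular/core';

interface TitleProps { title: string; }

// React side: a self-contained component that renders the title it receives
// as a prop, plus a static initialize() helper that mounts it into a DOM node.
export class ReactTitleWidget extends React.Component<TitleProps, {}> {
  static initialize(elementId: string, title: string): void {
    const node = document.getElementById(elementId);
    if (node) {
      ReactDOM.render(<ReactTitleWidget title={title} />, node);
    }
  }

  render() {
    return <h1>{this.props.title}</h1>;
  }
}

// Angular 2 side: a host component whose ngOnInit hook hands one DOM node
// over to React, passing the title down as a prop.
@Component({
  selector: 'react-host',
  template: '<div id="react-root"></div>',
})
export class ReactHostComponent implements OnInit {
  ngOnInit(): void {
    ReactTitleWidget.initialize('react-root', 'Hello from Angular 2 and React');
  }
}
```

Depending on timing, an implementation might prefer ngAfterViewInit and a @ViewChild element reference instead of a hard-coded element id, but the overall shape stays the same.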
As you can see, the host component implements the ngOnInit handler, and that is where you call the static initialize function. Note that initialize passes a title string, which the React component receives as a prop.

Another item that unites React and Angular is TypeScript. This is a big deal, because we can manage both the Angular code and the React code in the same compilation step. When it comes down to it, TypeScript is simply compiled down to regular JavaScript. One thing you have to remember is to tell the compiler that you are using JSX by specifying the JSX flag (a minimal tsconfig sketch follows at the end of this article).

To conclude, Angular will remain a very popular framework for developers. When you need an application to render faster, React is a great way to render templates more quickly and to create a more efficient user experience. React is a great complement to Angular and will only enhance your application.

About the author

Mary Gualtieri is a full-stack web developer and web designer who enjoys all aspects of the web and creating a pleasant user experience. Web development, specifically frontend development, is an interest of hers because it challenges her to think outside of the box and solve problems, all while constantly learning. She can be found on GitHub at MaryGualtieri.
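For reference, here is a hedged sketch of the compiler setting mentioned above. The surrounding options are assumptions that will vary by project; the relevant line is the jsx flag.

```json
{
  "compilerOptions": {
    "target": "es5",
    "module": "commonjs",
    "experimentalDecorators": true,
    "emitDecoratorMetadata": true,
    "jsx": "react"
  },
  "include": ["src/**/*.ts", "src/**/*.tsx"]
}
```

With "jsx": "react", the compiler turns JSX in .tsx files into React.createElement calls in the same pass that compiles the Angular code.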
How to publish a microservice as a service onto Docker

Pravin Dhandre
12 Apr 2018
6 min read
In today's tutorial, we will take a step-by-step approach to publishing a prebuilt microservice as a Docker service so that you can scale and control it further. Once the swarm is ready, we can use it to create services that we can scale, but first we will create a shared registry in our swarm. Then we will build a microservice and publish it to Docker.

Creating a registry

When we create a service in a swarm, we specify the image that will be used, but when we ask Docker to create the instances of the service, it uses the swarm master node to do it. If we have built our Docker images on our own machine, they are not available on the master node of the swarm, so we will create a registry service that we can use to publish our images and reference them when we create our own services.

First, let's create the registry service:

docker service create --name registry --publish 5000:5000 registry

Now, if we list our services, we should see our registry created:

docker service ls
ID            NAME      MODE        REPLICAS  IMAGE            PORTS
os5j0iw1p4q1  registry  replicated  1/1       registry:latest  *:5000->5000/tcp

And if we visit http://localhost:5000/v2/_catalog, we should just get:

{"repositories":[]}

This Docker registry is ephemeral, meaning that if we stop the service or start it again, our images will disappear. For that reason, we recommend using it only for development. For a real registry, you may want to use an external service. We discussed some of them in Chapter 7, Creating Dockers.

Creating a microservice

In order to create a microservice, we will use Spring Initializr, as we have done in previous chapters. We can start by visiting https://start.spring.io/.

We have chosen to create a Maven Project using Kotlin and Spring Boot 2.0.0 M7, set the Group to com.microservices and the Artifact to chapter08, and added Web to the Dependencies. Now we can click Generate Project to download it as a zip file. After we unzip it, we can open it with IntelliJ IDEA to start working on our project. After a few minutes, our project will be ready and we can open the Maven window to see the different lifecycle phases, Maven plugins, and their goals.

We covered how to use Spring Initializr, Maven, and IntelliJ IDEA in Chapter 2, Getting Started with Spring Boot 2.0. You can check out that chapter to learn about topics not covered in this section.

Now we will add a RestController to our project. In IntelliJ IDEA, in the Project window, right-click our com.microservices.chapter08 package and then choose New | Kotlin File/Class. In the pop-up window, set the name to HelloController and choose Class in the Kind drop-down. Let's modify our newly created controller:

package com.microservices.chapter08

import org.springframework.web.bind.annotation.GetMapping
import org.springframework.web.bind.annotation.RestController
import java.util.*
import java.util.concurrent.atomic.AtomicInteger

@RestController
class HelloController {
  private val id: String = UUID.randomUUID().toString()

  companion object {
    val total: AtomicInteger = AtomicInteger()
  }

  @GetMapping("/hello")
  fun hello() = "Hello I'm $id and I have been called ${total.incrementAndGet()} time(s)"
}

This small controller responds to a GET request on the /hello URL. The message displays a unique id that we establish when the object is created, and it shows how many times the controller has been called. We use AtomicInteger to guarantee that concurrent requests do not corrupt our counter.
Finally, we will rename application.properties in the resources folder to application.yml (Shift + F6 in IntelliJ IDEA) and edit it:

logging:
  level:
    org.springframework.web: DEBUG

This sets the log level for the Spring Web framework to DEBUG, so that later we can see the request information we need in our logs.

Now we can run our microservice, either from the IntelliJ IDEA Maven Project window using the spring-boot:run goal, or from the command line:

mvnw spring-boot:run

Either way, when we visit http://localhost:8080/hello, we should see something like:

Hello I'm a0193477-b9dd-4a50-85cc-9d9405e02299 and I have been called 1 time(s)

Consecutive calls will increase the number, and at the end of our log we can see the entries for these requests. Now we will stop the microservice so that we can create our Docker image.

Creating our Docker

Now that our microservice is running, we need to create a Docker image for it. To keep the process simple, we will just use a Dockerfile. To create this file, right-click chapter08 at the top of the Project window, select New | File, and type Dockerfile in the drop-down menu. In the next window, click OK and the file will be created. Now we can add this to our Dockerfile:

FROM openjdk:8-jdk-alpine
ADD target/*.jar app.jar
ENTRYPOINT ["java","-Djava.security.egd=file:/dev/./urandom","-jar","app.jar"]

Here we tell the JVM to use /dev/urandom instead of /dev/random to generate the random numbers required by our ID generation, which makes the operation much faster. You may think that urandom is less secure than random, but read this article for more information: https://www.2uo.de/myths-about-urandom/

From our microservice directory, we will use the command line to create our package:

mvnw package

Then we will build a Docker image for it:

docker build . -t hello

Now we need to tag our image for our shared registry service, using the following command:

docker tag hello localhost:5000/hello

Then we will push our image to the shared registry service:

docker push localhost:5000/hello

Creating the service

Finally, we will add the microservice as a service to our swarm, exposing port 8080:

docker service create --name hello-service --publish 8080:8080 localhost:5000/hello

Now we can navigate to http://localhost:8080/hello and get the same results as before, but this time our microservice runs in a Docker swarm.

If you found this tutorial useful, check out the book Hands-On Microservices with Kotlin to work with Kotlin, Spring, and Spring Boot for building reactive and cloud-native microservices. Also check out these other posts:

How to publish Docker and integrate with Maven
Understanding Microservices
How to build Microservices using REST framework
Getting Started with Zombie.js

Packt
22 May 2013
9 min read
A brief history of software and user interface testing

Software testing is a necessary activity for gathering information about the quality of a certain product or service. In the traditional software development cycle, this activity was delegated to a team whose sole job was to find problems in the software. This type of testing would be required if a generic product was being sold to a domestic end user or if a company was buying a licensed operating system.

In most custom-built pieces of software, the testing team has the responsibility of manually testing the software, but often the client has to do the acceptance testing, in which he or she has to make sure that the software behaves as expected. Every time someone in these teams finds a new problem in the software, the development team has to fix it and put it back into the testing loop one more time. This means that the cost and time required to deliver a final version of the software increase every time a bug is found. Furthermore, the later in the development process the problem is found, the more it will impact the final cost of the product.

The way software is delivered has also changed in the last few years; the Web has enabled us to make the delivery of software and its upgrades easy, shortening the time between when new functionality is developed and when it is put to use. But once you have delivered the first version of a product and have a few customers using it, you can face a dilemma: fewer updates can mean the product quickly becomes obsolete, while introducing many changes in the software increases the chance of something going wrong and your software becoming faulty, which may drive customers away.

There are many versions of, and iterations over, how a development process can mitigate the risk of shipping a faulty product, increase the chances of new functionality being delivered on time, and help the overall product meet a certain quality standard. But everyone involved in building software must agree that the sooner you catch a bug, the better. This means that you should catch problems early on, preferably in the development cycle. Unfortunately, completely testing the software by hand every time it changes would be costly. The solution is to automate the tests in order to maximize test coverage (the percentage of the application code that is tested and the possible input variations) and minimize the time it takes to run each test. If your tests take just a few seconds to run, you can afford to run them every time you make a single change in the code base.

Enter the automation era

Test automation has been around for some years, even before the Web was around. As soon as graphical user interfaces (GUIs) started to become mainstream, tools that allowed you to record, build, and run automated tests against a GUI started appearing. Since there were many languages and GUI libraries for building applications, many tools emerged that covered some of them. Generally, they allowed you to record a testing session that you could later recreate automatically. In such a session, you could automate the pointer to click on things (buttons, checkboxes, places on a window, and so on), select values (from a select box, for instance), input keyboard actions, and test the results. All of these tools were fairly complex to operate and, worst of all, most of them were technology-specific.
But if you're building a web-based application that uses HTML and JavaScript, you have better alternatives. The most well known of these is likely Selenium, which allows you to record, change, and run testing scripts against all the major browsers. You can run tests using Selenium, but you need at least one browser for Selenium to attach itself to in order to load and run the tests. If you run the tests with as many browsers as you possibly can, you will be able to guarantee that your application behaves correctly across all of them. But since Selenium plugs into a browser and commands it, running all the tests for a considerably complex application in as many browsers as possible can take some time, and the last thing you want is to not run the tests as often as possible.

Unit tests versus integration tests

Generally, you can divide automated tests into two categories: unit tests and integration tests.

Unit tests: These are tests where you select a small subset of your application, such as a class or a specific object, and test the interface the class or object provides to the rest of the application. In this way, you can isolate a specific component and make sure it behaves as expected so that other components in the application can use it safely.

Integration tests: These are tests where individual components are combined and tested as a working group. During these tests, you interact with and manipulate the user interface, which in turn interacts with the underlying blocks of your application. The kind of testing you do with Zombie.js falls into this category.

What Zombie.js is

Zombie.js allows you to run these tests without a real web browser. Instead, it uses a simulated browser that stores the HTML code and runs any JavaScript you may have in your HTML page. This means that an HTML page doesn't need to be displayed, saving precious time that would otherwise be spent rendering it. You can then use Zombie.js to drive this simulated browser into loading pages and, once a page is loaded, performing certain actions and observing the results. And you can do all this using JavaScript, never having to switch languages between your client code and your test scripts.

Understanding the server-side DOM

Zombie.js runs on top of Node.js (http://nodejs.org), a platform where you can easily build networking servers using JavaScript. It runs on top of Google's fast V8 JavaScript engine, which also powers their Chrome browsers. At the time of writing, V8 implements the JavaScript ECMA 3 standard and part of the ECMA 5 standard. Not all browsers implement all the features of all the versions of the JavaScript standards equally. This means that even if your tests pass in Zombie.js, it doesn't mean they will pass in all of your target browsers.

On top of Node.js there is a third-party module named JSDOM (https://npmjs.org/package/jsdom) that allows you to parse an HTML document and use an API on top of a representation of that document; this allows you to query and manipulate it. The API provided is the standard Document Object Model (DOM). All browsers implement a subset of the DOM standard, which has been dictated as a set of recommendations by a working group inside the World Wide Web Consortium (W3C). There are three levels of recommendations, and JSDOM implements all three. Web applications, directly or indirectly (by using tools such as jQuery), use this browser-provided DOM API to query and manipulate the document, enabling you to create browser applications that have complex behavior.
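To make the JSDOM idea concrete, here is a minimal sketch using the jsdom package. Treat the exact calls as an assumption: the constructor-based API shown here postdates the 2013-era version the article describes, though the concept is identical.

```typescript
// A minimal server-side DOM sketch with jsdom: parse an HTML string and
// query/manipulate it through the standard DOM API, with no browser involved.
import { JSDOM } from "jsdom";

const dom = new JSDOM(`<body><h1 id="title">Hello</h1><ul></ul></body>`);
const { document } = dom.window;

// Query the document just as client-side code would.
const title = document.querySelector("#title");
console.log(title && title.textContent); // "Hello"

// Manipulate it: append a list item and serialize the result back to HTML.
const li = document.createElement("li");
li.textContent = "added on the server";
document.querySelector("ul")!.appendChild(li);
console.log(dom.serialize());
```

This is exactly the layer Zombie.js builds on: a DOM that lives in the Node.js process instead of in a browser window.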
This means that by using JSDOM you automatically support any JavaScript library that most modern browsers support.

Zombie.js is your headless browser

On top of Node.js and JSDOM lies Zombie.js. Zombie.js provides browser-like functionality and an API you can use for testing. For instance, a typical use of Zombie.js would be to open a browser, ask for a certain URL to be loaded, fill in some values on a form and submit it, and then query the resulting document to see if a success message is present. To make it more concrete, here is a simple example of what the code for a simple Zombie.js test may look like:

browser.visit('http://localhost:8080/form', function() {
  browser.fill('Name', 'Pedro Teixeira')
    .select('Born', '1975')
    .check('Agree with terms and conditions')
    .pressButton('Submit', function() {
      assert.equal(browser.location.pathname, '/success');
      assert.equal(browser.text('#message'),
        'Thank you for submitting this form!');
    });
});

Here you are making typical use of Zombie.js: loading an HTML page containing a form, filling in that form and submitting it, and then verifying that the result is successful (a sketch of wrapping this kind of check in a test runner follows after the resource list).

Zombie.js may be used not only for testing your web app but also by applications that need to behave like browsers, such as HTML scrapers, crawlers, and all sorts of HTML bots. If you are going to use Zombie.js for any of these activities, please be a good Web citizen and use it ethically.

Summary

Creating automated tests is a vital part of the development process of any software application. When creating web applications using HTML, JavaScript, and CSS, you can use Zombie.js to create a set of tests that load, query, manipulate, and provide inputs to any given web page. Given that Zombie.js simulates a browser and does not depend on the actual rendering of the HTML page, the tests run much faster than they would if you instrumented a real browser. It is thus possible for you to run these tests whenever you make any small change to your application. Zombie.js runs on top of Node.js, uses JSDOM to provide a DOM API on top of any HTML document, and simulates browser-like functionality with a simple API that you can use to create your tests in JavaScript.

Resources for Article:

Further resources on this subject:
Understanding and Developing Node Modules [Article]
An Overview of the Node Package Manager [Article]
Build iPhone, Android and iPad Applications using jQTouch [Article]
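Following up on the example above, here is a hedged sketch of how such a Zombie.js check might be wrapped in a Mocha test so it can run on every change. The form URL, field names, and expected messages mirror the illustration above and are assumptions, and the callback-style Zombie.js API shown matches the 2013-era version described in the article.

```typescript
// A sketch of the article's form test wrapped in Mocha, using the
// callback-based Zombie.js API described above.
const Browser = require("zombie");
const assert = require("assert");

describe("signup form", function () {
  const browser = new Browser();

  it("shows a success message after submitting", function (done) {
    browser.visit("http://localhost:8080/form", function () {
      browser.fill("Name", "Pedro Teixeira")
        .select("Born", "1975")
        .check("Agree with terms and conditions")
        .pressButton("Submit", function () {
          assert.equal(browser.location.pathname, "/success");
          assert.equal(browser.text("#message"),
            "Thank you for submitting this form!");
          done(); // tell Mocha the asynchronous test has finished
        });
    });
  });
});
```

Because the whole browser is simulated, a suite of tests like this can run in seconds, which is what makes running it on every code change practical.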
Drupal 8 and Configuration Management

Packt
18 Mar 2015
15 min read
In this article by Stefan Borchert and Anja Schirwinski, the authors of the book Drupal 8 Configuration Management, we will learn the inner workings of the Configuration Management system in Drupal 8. You will learn about config and schema files and read about the difference between simple configuration and configuration entities.

The config directory

During installation, Drupal adds a directory within sites/default/files called config_HASH, where HASH is a long random string of letters and numbers. This sequence is a random hash generated during the installation of your Drupal site. It is used to add some protection to your configuration files, in addition to the default restriction enforced by the .htaccess file within the subdirectories of the config directory, which prevents unauthorized users from seeing the content of the directories. As a result, it would be really hard for someone to guess the folder's name.

Within the config directory, you will see two additional directories that are empty by default (leaving the .htaccess and README.txt files aside). One of the directories is called active. If you change the configuration system to use file storage instead of the database for the active Drupal site configuration, this directory will contain the active configuration. If you did not customize the storage mechanism of the active configuration (we will learn later how to do this), Drupal 8 uses the database to store the active configuration. The other directory is called staging. This directory is empty by default, but it can host the configuration you want to import into your Drupal site from another installation. You will learn how to use this later on in this article.

A simple configuration example

First, we want to become familiar with configuration itself. If you look into the database of your Drupal installation and open up the config table, you will find the entire active configuration of your site. Depending on your site's configuration, table names may be prefixed with a custom string, so you may have to look for a table name that ends with config. Don't worry about the strange-looking text in the data column; this is the serialized content of the corresponding configuration. It expands to single configuration values, for example system.site.name, which holds the name of your site. Changing the site's name in the user interface on admin/config/system/site-information will immediately update the record in the database; put simply, the records in the table are the current state of your site's configuration.

But where does the initial configuration of your site come from? Drupal itself and the modules you install must provide some kind of default configuration that gets added to the active storage during installation.

Config and schema files – what are they and what are they used for?

In order to provide a default configuration during the installation process, Drupal (modules and profiles) comes with a set of files that hold the configuration needed to run your site. To make parsing of these files simple and to enhance their readability, the configuration is stored using the YAML format. YAML (http://yaml.org/) is a data-oriented serialization standard that aims for simplicity. With YAML, it is easy to map common data types such as lists, arrays, or scalar values.
Config files

Directly beneath the root directory of each module and profile defining or overriding configuration (either core or contrib), you will find a directory named config. Within this directory, there may be two more directories (although both are optional): install and schema. Check the image module inside core/modules and take a look at its config directory.

The install directory contains all configuration values that the specific module defines or overrides, stored in files with the extension .yml (one of the default extensions for files in the YAML format). During installation, the values stored in these files are copied to the active configuration of your site. In the case of default configuration storage, the values are added to the config table; in file-based configuration storage mechanisms, on the other hand, the files are copied to the appropriate directories.

Looking at the filenames, you will see that they follow a simple convention: <module name>.<type of configuration>[.<machine name of configuration object>].yml (setting aside <module name>.settings.yml for now). The explanation is as follows:

<module name>: This is the name of the module that defines the settings included in the file. For instance, the image.style.large.yml file contains settings defined by the image module.

<type of configuration>: This can be seen as a type of group for configuration objects. The image module, for example, defines several image styles. These styles are a set of different configuration objects, so the group is defined as style. Hence, all configuration files that contain image styles defined by the image module itself are named image.style.<something>.yml. The same structure applies to blocks (block.block.*.yml), filter formats (filter.format.*.yml), menus (system.menu.*.yml), content types (node.type.*.yml), and so on.

<machine name of configuration object>: The last part of the filename is the unique machine-readable name of the configuration object itself. In our examples from the image module, you see three different items: large, medium, and thumbnail. These are exactly the three image styles you will find on admin/config/media/image-styles after installing a fresh copy of Drupal 8.

Schema files

The primary reason schema files were introduced into Drupal 8 is multilingual support: a tool was needed to identify all translatable strings within the shipped configuration. The secondary reason is to provide actual translation forms for configuration based on your data and to expose translatable configuration pieces to external tools. Each module can have as many configuration .yml files as needed. All of these are explained in one or more schema files that are shipped with the module.

As a simple example of how schema files work, let's look at the system module's maintenance settings in the system.maintenance.yml file at core/modules/system/config/install. The file's contents are as follows:

message: '@site is currently under maintenance. We should be back shortly. Thank you for your patience.'
langcode: en

The system module's schema files live in core/modules/system/config/schema. These define the basic types but, for our example, the most important aspect is that they define the schema for the maintenance settings.
The corresponding schema section from the system.schema.yml file is as follows:

system.maintenance:
  type: mapping
  label: 'Maintenance mode'
  mapping:
    message:
      type: text
      label: 'Message to display when in maintenance mode'
    langcode:
      type: string
      label: 'Default language'

The first line corresponds to the filename of the .yml file, and the nested lines underneath describe the file's contents. Mapping is a basic type for key-value pairs (always the top-level type in .yml). The system.maintenance.yml file is labeled as label: 'Maintenance mode'. Then, the actual elements in the mapping are listed under the mapping key. As shown in the code, the file has two items, so the message and langcode keys are described. These are a text and a string value, respectively. Both values are also given a label in order to identify them in configuration forms.

Learning the difference between active and staging

By now, you know that Drupal works with the two directories active and staging. But what is the intention behind those directories? And how do we use them?

The configuration used by your site is called the active configuration, since it is the configuration that is affecting the site's behavior right now. The current (active) configuration is stored in the database, and direct changes to your site's configuration go into the specific tables. The reason Drupal 8 stores the active configuration in the database is that this enhances performance and security (source: https://www.drupal.org/node/2241059). However, sometimes you might not want to store the active configuration in the database and might need to use a different storage mechanism. For example, using the filesystem as configuration storage will enable you to track changes in the site's configuration using a version control system such as Git or SVN.

Changing the active configuration storage

If you do want to switch your active configuration storage to files, here's how. Note that changing the configuration storage is only possible before installing Drupal. After installation, there is no way to switch to another configuration storage!

To use a different configuration storage mechanism, you have to make some modifications to your settings.php file. First, you'll need to find the section named Active configuration settings. Now you will have to uncomment the line that starts with $settings['bootstrap_config_storage'] to enable file-based configuration storage. Additionally, you need to copy the existing default.services.yml (next to your settings.php file) to a file named services.yml and enable the new configuration storage:

services:
  # Override configuration storage.
  config.storage:
    class: Drupal\Core\Config\CachedStorage
    arguments: ['@config.storage.active', '@cache.config']
  config.storage.active:
    # Use file storage for active configuration.
    alias: config.storage.file

This tells Drupal to override the default service used for configuration storage and use config.storage.file as the active configuration storage mechanism instead of the default database storage. After installing the site with these settings, we will take another look at the config directory in sites/default/files (assuming you didn't change the location of the active and staging directories). As you can see, the active directory now contains the entire site's configuration. The files in this directory are copied here during the website's installation process.
Whenever you make a change to your website, the change is reflected in these files. Exporting a configuration always exports a snapshot of the active configuration, regardless of the storage method.

The staging directory contains the changes you want to add to your site. Drupal compares the staging directory to the active directory and checks for changes between them. When you upload your compressed export file, it actually gets placed inside the staging directory. This means you can save yourself the trouble of using the interface to export and import the compressed file if you're comfortable enough with copying and pasting files to another directory. Just make sure you copy all of the files to the staging directory, even if only one of the files was changed. Any missing files are interpreted as deleted configuration and will mess up your site.

In order to get the contents of staging into active, we simply use the synchronize option at admin/config/development/configuration again. This page shows us what was changed and allows us to import the changes. On importing, your active configuration will be overridden with the configuration in your staging directory. Note that the files inside the staging directory will not be removed after the synchronization is finished. The next time you want to copy and paste from your active directory, make sure you empty staging first. Also note that you cannot override files directly in the active directory; the changes have to be made inside staging and then synchronized.

Changing the storage location of the active and staging directories

In case you do not want Drupal to store your configuration in sites/default/files, you can set the path according to your wishes. This is actually recommended for security reasons, as these directories should never be accessible over the Web or by unauthorized users on your server. Additionally, it makes your life easier if you work with version control. By default, the whole files directory is usually ignored in version-controlled environments because Drupal writes to it, and having the active and staging directories located within sites/default/files would result in them being ignored too.

So how do we change the location of the configuration directories? Before installing Drupal, you will need to create and modify the settings.php file that Drupal uses to load its basic configuration data from (that is, the database connection settings). If you haven't done so yet, copy the default.settings.php file and rename the copy to settings.php. Afterwards, open the new file with the editor of your choice and search for the following line:

$config_directories = array();

Change the preceding line to the following (or simply insert your addition at the bottom of the file):

$config_directories = array(
  CONFIG_ACTIVE_DIRECTORY => './../config/active',   // folder outside the webroot
  CONFIG_STAGING_DIRECTORY => './../config/staging', // folder outside the webroot
);

The directory names can be chosen freely, but it is recommended that you at least use names similar to the default ones so that you or other developers don't get confused when looking at them later. Remember to put these directories outside your webroot, or at least protect them using an .htaccess file (if using Apache as the server). Directly after adding the paths to your settings.php file, make sure you remove write permissions from the file, as it would be a security risk if someone could change it.
Drupal will now use your custom location for its configuration files on installation. You can also change the location of the configuration directories after installing Drupal. Open up your settings.php file and find the two lines near the end of the file that start with $config_directories. Change their paths to something like this:

$config_directories['active'] = './../config/active';
$config_directories['staging'] = './../config/staging';

This places the directories above your Drupal root. Now that you know about active and staging, let's learn more about the different types of configuration you can create on your own.

Simple configuration versus configuration entities

As soon as you want to start storing your own configuration, you need to understand the differences between simple configuration and configuration entities. Here's a short definition of the two types of configuration used in Drupal.

Simple configuration

This configuration type is easier to implement and therefore ideal for basic configuration settings that result in Boolean values, integers, or simple strings of text being stored, as well as global variables that are used throughout your site. A good example would be the value of an on/off toggle for a specific feature in your module, or our previously used example of the site name configured by the system module:

name: 'Configuration Management in Drupal 8'

Simple configuration also includes any settings that your module requires in order to operate correctly. For example, JavaScript aggregation has to be either on or off; if the setting doesn't exist, the system module won't be able to determine the appropriate course of action.

Configuration entities

Configuration entities are much more complicated to implement but far more flexible. They are used to store information about objects that users can create and destroy without breaking the code. A good example of configuration entities is an image style provided by the image module. Take a look at the image.style.thumbnail.yml file:

uuid: fe1fba86-862c-49c2-bf00-c5e1f78a0f6c
langcode: en
status: true
dependencies: {  }
name: thumbnail
label: 'Thumbnail (100×100)'
effects:
  1cfec298-8620-4749-b100-ccb6c4500779:
    uuid: 1cfec298-8620-4749-b100-ccb6c4500779
    id: image_scale
    weight: 0
    data:
      width: 100
      height: 100
      upscale: false
third_party_settings: {  }

This defines a specific style for images, so the system is able to create derivatives of images that a user uploads to the site. Configuration entities also come with a complete set of create, read, update, and delete (CRUD) hooks that are fired just like those of any other entity in Drupal, making them an ideal candidate for configuration that might need to be manipulated or responded to by other modules. As an example, the Views module uses configuration entities to allow for a scenario where, at runtime, hooks are fired that allow any other module to provide configuration (in this case, custom views) to the Views module.

Summary

In this article, you learned how configuration is stored and briefly got to know the two different types of configuration.

Resources for Article:

Further resources on this subject:
Tabula Rasa: Nurturing your Site for Tablets [article]
Components - Reusing Rules, Conditions, and Actions [article]
Introduction to Drupal Web Services [article]
How to build a cross-platform desktop application with Node.js and Electron

Mika Turunen
14 Oct 2015
9 min read
Do you want to make a desktop application, but you have only mastered web development so far? Or maybe you feel overwhelmed by all of the different APIs that different desktop platforms have to offer? Or maybe you want to write a beautiful application in HTML5 and JavaScript and have it working on the desktop? Maybe you want to port an existing web application to the desktop? Luckily for us, there are a number of alternatives, and we are going to look into Node.js and Electron to help us get our HTML5 and JavaScript running on the desktop side with no hiccups.

What are the different parts in an Electron application?

Commonly, all of the different components in Electron run in either the main process (backend) or the rendering process (frontend). The main process can communicate with different parts of the operating system if there's a need for that, and the rendering process mainly focuses on showing the content, pretty much like in any HTML5 application you find on the Internet. The processes communicate with each other through IPC (inter-process communication), which in Node.js terms is just a super simple event emitter and nothing else: you can send events and listen for events (a small IPC sketch appears at the end of this article). You can get the complete source code for this post from here.

Let's start working on it

You need to have Node.js installed; you can install it from https://nodejs.org/. Now that you have Node.js installed, you can start focusing on creating the application. First of all, create an empty directory where you will be placing your code.

# Open up your favourite terminal, command-line tool or any other alternative as we'll be running quite a bit of commands
# Create the directory
mkdir /some/location/that/works/in/your/system
# Go into the directory
cd /some/location/that/works/in/your/system
# Now we need to initialize it for our Electron and Node work
npm init

npm will start asking you questions about the application we are about to make. You can just hit Enter and not answer any of them if you feel like it; we can fill them in manually once we know a bit more about our application. Now we should have a directory structure with the following file in it:

package.json

And that's it, nothing else. We'll start by creating two new files in your favorite text editor or IDE. Leave the files empty:

main.js
index.html

Drop all of the files into the same directory as package.json for easier handling of everything for now. main.js will be our main process file, which is the connecting layer to the underlying desktop operating system for our Electron application.

At this point we need to install Electron as a dependency for our application, which is really easy. Just write:

npm install --save electron-prebuilt

Alternatively, if you cloned/downloaded the associated GitHub repository, you can just go into the directory and write:

npm install

This will install all dependencies from package.json, including electron-prebuilt. Now we have Electron's prebuilt binaries installed as a direct dependency for our application and we can run it on our platform. It's wise to manually update the package.json file that the npm init command generated for us. Open up the package.json file and modify the scripts block to look like this (or, if it's missing, create it):

"main": "main.js",
"scripts": {
  "start": "electron ."
},

The whole package.json file should be roughly something like this (taken from the tutorial repo I linked earlier):

{
  "name": "",
  "version": "1.0.0",
  "description": "",
  "main": "main.js",
  "scripts": {
    "start": "electron ."
  },
  "repository": { },
  "keywords": [ ],
  "author": "",
  "license": "MIT",
  "bugs": { },
  "homepage": "",
  "dependencies": {
    "electron-prebuilt": "^0.25.3"
  }
}

The main property in the file points to main.js, and the start property in the scripts section tells npm to run the command "electron .", which essentially tells Electron to digest the current directory as an application; Electron hardwires the main property as the main process for the application. This means that main.js is now our main process, just like we wanted.

Main and rendering process

We need to write the main process JavaScript and the rendering process HTML to get our application to start. Let's start with the main process, main.js. You can also find all of the code below in the tutorial repository here. The code has been peppered with a good amount of comments to give a deeper understanding of what is going on and what the different parts do in the context of Electron.

// Loads Electron specific app that is not commonly available for node or io.js
var app = require("app");
// Inter process communication -- Used to communicate from Main process (this)
// to the actual rendering process (index.html) -- not really used in this example
var ipc = require("ipc");
// Loads the Electron specific module for browser handling
var BrowserWindow = require("browser-window");
// Report crashes to our server.
var crashReporter = require("crash-reporter");

// Keep a global reference of the window object, if you don't, the window will
// be closed automatically when the javascript object is garbage collected
var mainWindow = null;

// Quit when all windows are closed.
app.on("window-all-closed", function() {
  // OS X specific check
  if (process.platform != "darwin") {
    app.quit();
  }
});

// This event will be called when Electron has done initialization and is ready for creating browser windows.
app.on("ready", function() {
  crashReporter.start();

  // Create the browser window (where the application's visual parts will be)
  mainWindow = new BrowserWindow({
    width: 800,
    height: 600
  });

  // Build the file path to the index.html
  mainWindow.loadUrl("file://" + __dirname + "/index.html");

  // Emitted when the window is closed.
  // The function just dereferences the mainWindow so garbage collection can
  // pick it up
  mainWindow.on("closed", function() {
    mainWindow = null;
  });
});

You can now start the application, but it'll just show an empty window since we have nothing to render in the rendering process. Let's fix that by populating our index.html with some content:

<!DOCTYPE html>
<html>
  <head>
    <title>Hello Tutorial!</title>
  </head>
  <body>
    <h2>Tutorial</h2>
    We are using node.js <script>document.write(process.version)</script>
    and Electron <script>document.write(process.versions["electron"])</script>.
  </body>
</html>

Because this is an Electron application, we have access to the Node.js/io.js process and other content relating to the actual Node.js/io.js setup we have going. The line document.write(process.version) is actually a call to the Node.js process object. This is one of the great things about Electron: we are essentially bridging the gap between desktop applications and HTML5 applications. Now to run the application.
npm start

There is a huge list of different desktop environment integration possibilities you can explore with Electron, and you can read more about them in the Electron documentation at http://electron.atom.io/. Obviously this is still far from a complete application, but it should give you an understanding of how to work with Electron, how it behaves, and what you can do with it. You can start using your favorite JavaScript/CSS frontend framework in the index.html to build a great-looking GUI for your new desktop application, and you can also use all Node.js-specific npm modules in the backend along with the desktop environment integration. Maybe we'll look into writing a great-looking GUI for our application with some additional desktop environment integration in another post.

Packaging and distributing Electron applications

Applications can be packaged into distributable, operating-system-specific containers. For example, .exe files allow them to run on different hardware. The packaging process is fairly simple and well documented in Electron's documentation; it is out of the scope of this post, but worth a look if you want to package your application. To understand more of the application distribution and packaging process, read Electron's official documentation on it here.

Electron and its current use

Electron is still really fresh and right out of GitHub's knowing hands, but it has already been adopted by quite a few companies, and a number of applications have already been built on top of it.

Companies using Electron:
Slack
Microsoft
Github

Applications built with Electron or using Electron:
Visual Studio Code - Microsoft's Visual Studio Code
Heartdash - Hearthdash is a card tracking application for Hearthstone.
Monu - Process monitoring app
Kart - Frontend for RetroArch
Friends - P2P chat powered by the web

Final words on Electron

It's obvious that Electron is still taking its first baby steps, but it's hard to deny the fact that more and more user interfaces will be written in different web technologies with HTML5, and this is one of the great starts for it. It'll be interesting to see how the gap between the desktop and the web application develops as time goes on, and people like you and me will be playing a key role in the development of future applications. With the help of technologies like Electron, desktop application development just got that much easier. For more Node.js content, look no further than our dedicated page!

About the author

Mika Turunen is a software professional hailing from the frozen cold Finland. He spends a good part of his day playing with emerging web and cloud related technologies, but he also has a big knack for games and game development. His hobbies include game collecting, game development and games in general. When he's not playing with technology he is spending time with his two cats and growing his beard.
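As promised earlier, here is a hedged sketch of the IPC mechanism between the main and rendering processes. It uses the ipcMain/ipcRenderer module names from current Electron releases rather than the 2015-era require("ipc") shown above, and the channel names are made up for illustration.

```typescript
// main process: listen for a question from the renderer and reply to it.
import { app, BrowserWindow, ipcMain } from 'electron';

ipcMain.on('ask-version', (event) => {
  // Reply on a separate channel; the payload can be any serializable value.
  event.sender.send('version-reply', process.versions.electron);
});

app.on('ready', () => {
  const win = new BrowserWindow({ width: 800, height: 600 });
  win.loadFile('index.html');
});

// renderer process (for example, a <script> in index.html, with node
// integration enabled or via a preload script): send the question and
// listen for the answer.
//
// import { ipcRenderer } from 'electron';
// ipcRenderer.on('version-reply', (_event, version) => {
//   console.log('Electron version reported by main:', version);
// });
// ipcRenderer.send('ask-version');
```

Newer Electron versions usually route the renderer side through a preload script and contextBridge, but the event-emitter shape of send/on is the same idea the article describes.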
Derivatives Pricing

Packt
18 Nov 2013
10 min read
Derivatives are financial instruments that derive their value from (or are dependent on) the value of another product, called the underlying. The three basic types of derivatives are forward and futures contracts, swaps, and options. In this article we will focus on the latter class and show how basic option pricing models and some related problems can be handled in R. We will start with an overview of how to use the continuous Black-Scholes model and the binomial Cox-Ross-Rubinstein model in R, and then we will proceed to discuss the connection between these models. Furthermore, by calculating and plotting the Greeks, we will show how to analyze the most important types of market risk that options involve. Finally, we will discuss what implied volatility means and illustrate this phenomenon by plotting the volatility smile with the help of real market data.

The most important characteristic of options compared to futures or swaps is that you cannot be sure whether the transaction (buying or selling the underlying) will take place or not. This feature makes option pricing more complex and requires all models to make assumptions regarding the future price movements of the underlying product. The two models we are covering here differ in these assumptions: the Black-Scholes model works with a continuous process, while the Cox-Ross-Rubinstein model works with a discrete stochastic process. However, the remaining assumptions are very similar, and we will see that the results are close too.

The Black-Scholes model

The assumptions of the Black-Scholes model (Black and Scholes, 1973; see also Merton, 1973) are as follows:

The price of the underlying asset (S) follows geometric Brownian motion:

dS = μS dt + σS dW

Here μ (drift) and σ (volatility) are constant parameters and W is a standard Wiener process.
The market is arbitrage-free.
The underlying is a stock paying no dividends.
Buying and (short) selling the underlying asset is possible in any (even fractional) amount.
There are no transaction costs.
The short-term interest rate (r) is known and constant over time.

The main result of the model is that, under these assumptions, the price of a European call option (c) has a closed form:

c = S·N(d1) − X·e^(−r(T−t))·N(d2), where
d1 = [ln(S/X) + (r + σ²/2)(T−t)] / (σ√(T−t)) and d2 = d1 − σ√(T−t)

Here X is the strike price, T−t is the time to maturity of the option, and N denotes the cumulative distribution function of the standard normal distribution. The equation giving the price of the option is usually referred to as the Black-Scholes formula. It is easy to see from put-call parity that the price of a European put option (p) with the same parameters is given by:

p = X·e^(−r(T−t))·N(−d2) − S·N(−d1)

Now consider a call and a put option on a Google stock in June 2013 with a maturity of September 2013 (that is, with 3 months to maturity). Let us assume that the current price of the underlying stock is USD 900, the strike price is USD 950, the volatility of Google is 22%, and the risk-free rate is 2%. We will calculate the value of the call option with the GBSOption function from the fOptions package. Beyond the parameters already discussed, we also have to set the cost of carry (b); in the original Black-Scholes model (with an underlying paying no dividends), it equals the risk-free rate.
> library(fOptions)
> GBSOption(TypeFlag = "c", S = 900, X = 950, Time = 1/4, r = 0.02,
+   sigma = 0.22, b = 0.02)

Title:
 Black Scholes Option Valuation

Call:
 GBSOption(TypeFlag = "c", S = 900, X = 950, Time = 1/4, r = 0.02,
     b = 0.02, sigma = 0.22)

Parameters:
          Value:
 TypeFlag c
 S        900
 X        950
 Time     0.25
 r        0.02
 b        0.02
 sigma    0.22

Option Price:
 21.79275

Description:
 Tue Jun 25 12:54:41 2013

This prolonged output returns the passed parameters with the result just below the Option Price label. Setting TypeFlag to "p" would compute the price of the put option; here we are only interested in the result (found in the price slot, see the str of the object for more details) without the textual output:

> GBSOption(TypeFlag = "p", S = 900, X = 950, Time = 1/4, r = 0.02, sigma = 0.22, b = 0.02)@price
[1] 67.05461

We also have the choice of computing the preceding values with a more user-friendly calculator provided by the GUIDE package. Running the blackscholes() function triggers a modal window with a form where we can enter the same parameters. Please note that this function uses the dividend yield instead of the cost of carry, which is zero in this case.

The Cox-Ross-Rubinstein model

The Cox-Ross-Rubinstein (CRR) model (Cox, Ross and Rubinstein, 1979) assumes that the price of the underlying asset follows a discrete binomial process. The price might go up or down in each period and hence changes according to a binomial tree, where u and d are fixed multipliers measuring the price changes when it goes up and down. The important feature of the CRR model is that u = 1/d and the tree is recombining; that is, the price after two periods will be the same if it first goes up and then goes down or vice versa.

To build a binomial tree, first we have to decide how many steps we are modeling (n); that is, how many steps the time to maturity of the option will be divided into. Alternatively, we can determine the length of one time step ∆t (measured in years) on the tree:

∆t = (T − t) / n

If we know the volatility (σ) of the underlying, the parameters u and d are determined according to the following formulas:

u = exp(σ√∆t) and, consequently, d = 1/u = exp(−σ√∆t)

When pricing an option in a binomial model, we need to determine the tree of the underlying until the maturity of the option. Then, having all the possible prices at maturity, we can calculate the corresponding possible option values, simply given by the following formulas:

call: max(S − X, 0)    put: max(X − S, 0)

To determine the option price with the binomial model, in each node we have to calculate the expected value of the next two possible option values and then discount it. The problem is that it is not trivial what expected return to use for discounting. The trick is that we calculate the expected value with a hypothetical probability, which enables us to discount with the risk-free rate. This probability is called the risk-neutral probability (pn) and can be determined as follows:

pn = (exp(r∆t) − d) / (u − d)

The interpretation of the risk-neutral probability is quite plausible: if the probability that the underlying price goes up from any node were pn, then the expected return of the underlying would be the risk-free rate. Consequently, an expected value calculated with pn can be discounted by r, and the price of the option in any node of the tree is determined as:

g = exp(−r∆t) · (pn·gu + (1 − pn)·gd)

In the preceding formula, g is the price of an option in general (it may be a call or a put) in a given node, and gu and gd are the values of this derivative in the two possible nodes one period later.
For demonstrating the CRR model in R, we will use the same parameters as in the case of the Black-Scholes formula: S = 900, X = 950, σ = 22%, r = 2%, b = 2%, T − t = 0.25. We also have to set n, the number of time steps on the binomial tree. For illustrative purposes, we will work with a 3-period model:

> CRRBinomialTreeOption(TypeFlag = "ce", S = 900, X = 950,
+   Time = 1/4, r = 0.02, b = 0.02, sigma = 0.22, n = 3)@price
[1] 20.33618
> CRRBinomialTreeOption(TypeFlag = "pe", S = 900, X = 950,
+   Time = 1/4, r = 0.02, b = 0.02, sigma = 0.22, n = 3)@price
[1] 65.59803

It is worth observing that the option prices obtained from the binomial model are close to (but not exactly the same as) the Black-Scholes prices calculated earlier. Apart from the final result, that is, the current price of the option, we might be interested in the whole option tree as well:

> CRRTree <- BinomialTreeOption(TypeFlag = "ce", S = 900, X = 950,
+   Time = 1/4, r = 0.02, b = 0.02, sigma = 0.22, n = 3)
> BinomialTreePlot(CRRTree, dy = 1, xlab = "Time steps",
+   ylab = "Number of up steps", xlim = c(0,4))
> title(main = "Call Option Tree")

Here we first computed a matrix with BinomialTreeOption using the given parameters and saved the result in CRRTree, which was then passed to the plot function with labels specified for both the x and y axes and the limits of the x axis set from 0 to 4. The y axis (number of up steps) shows how many times the underlying price has gone up in total; down steps are counted as negative up steps. The European put option tree can be shown similarly by changing the TypeFlag to "pe" in the previous code.

Connection between the two models

After applying the two basic option pricing models, we give some theoretical background to them. We do not aim to give a detailed mathematical derivation, but we intend to emphasize (and then illustrate in R) the similarities of the two approaches. The financial idea behind both continuous and binomial option pricing is the same: if we manage to hedge the option perfectly by holding the appropriate quantity of the underlying asset, we have created a risk-free portfolio. Since the market is supposed to be arbitrage-free, the yield of a risk-free portfolio must equal the risk-free rate.

One important observation is that the correct hedge ratio is to hold ∂c/∂S units of the underlying asset per option. Hence, the ratio is the partial derivative (or its discrete counterpart in the binomial model) of the option value with respect to the underlying price. This partial derivative is called the delta of the option.

Another interesting connection between the two models is that the delta-hedging strategy and the related arbitrage-free argument yield the same pricing principle: the value of the derivative is the risk-neutral expected value of its future possible values, discounted by the risk-free rate. This principle is easily tractable on the binomial tree, where we calculated the discounted expected values node by node; the continuous model follows the same logic as well, even if the expected value is mathematically more complicated to compute. This is the reason why we gave only the final result of this argument, which was the Black-Scholes formula.

Now we know that the two models share the same pricing principles and ideas (delta-hedging and risk-neutral valuation), but we also observed that their numerical results are not equal. The reason is that the stochastic processes assumed to describe the price movements of the underlying asset are not identical.
Nevertheless, they are very similar; if we determine the values of u and d from the volatility parameter as we did in The Cox-Ross-Rubinstein model section, the binomial process approximates the geometric Brownian motion. Consequently, the option price of the binomial model converges to that of the Black-Scholes model as we increase the number of time steps (or, equivalently, decrease the length of the steps).

To illustrate this relationship, we will compute the option price in the binomial model with an increasing number of time steps and compare the results with the Black-Scholes price of the option in the following figure.

The plot was generated by a loop running n from 1 to 200 to compute CRRBinomialTreeOption with otherwise fixed parameters:

> prices <- sapply(1:200, function(n) {
+   CRRBinomialTreeOption(TypeFlag = "ce", S = 900, X = 950,
+     Time = 1/4, r = 0.02, b = 0.02, sigma = 0.22, n = n)@price
+ })

Now the prices variable holds 200 computed values:

> str(prices)
 num [1:200] 26.9 24.9 20.3 23.9 20.4 ...

Let us also compute the price given by the generalized Black-Scholes formula:

> price <- GBSOption(TypeFlag = "c", S = 900, X = 950, Time = 1/4,
+   r = 0.02, sigma = 0.22, b = 0.02)@price

And show the prices in a joint plot, with the Black-Scholes price rendered in red:

> plot(1:200, prices, type = 'l', xlab = 'Number of steps',
+   ylab = 'Prices')
> abline(h = price, col = 'red')
> legend("bottomright", legend = c('CRR-price', 'BS-price'),
+   col = c('black', 'red'), pch = 19)
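As a cross-check of what the red line represents, the generalized Black-Scholes call price can also be written out by hand in a few lines of base R using pnorm, the standard normal distribution function. This is only an illustrative sketch (the helper name gbs_call is ours); with b = r, as in this example, it reduces to the plain Black-Scholes formula and should reproduce the GBSOption value of 21.79275.

# Generalized Black-Scholes call price written out by hand (sketch only)
gbs_call <- function(S, X, Time, r, b, sigma) {
  d1 <- (log(S / X) + (b + sigma^2 / 2) * Time) / (sigma * sqrt(Time))
  d2 <- d1 - sigma * sqrt(Time)
  S * exp((b - r) * Time) * pnorm(d1) - X * exp(-r * Time) * pnorm(d2)
}

gbs_call(S = 900, X = 950, Time = 1/4, r = 0.02, b = 0.02, sigma = 0.22)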

Apple announces ‘WebKit Tracking Prevention Policy’ that considers web tracking as a security vulnerability

Bhagyashree R
19 Aug 2019
5 min read
Inspired by Mozilla’s anti-tracking policy, Apple has announced its intention to implement the WebKit Tracking Prevention Policy in Safari, the details of which it shared last week. This policy outlines the types of tracking techniques that will be prevented in WebKit to ensure user privacy. The anti-tracking mitigations listed in this policy will be applied “universally to all websites, or based on algorithmic, on-device classification.”

https://twitter.com/webkit/status/1161782001839607809

Web tracking is the collection of user data over multiple web pages and websites, which can be linked to individual users via a unique user identifier. All your previous interactions with any website can be recorded and recalled with the help of a tracking mechanism such as cookies. The data tracked includes the things you have searched for, the websites you have visited, the things you have clicked on, the movements of your mouse around a web page, and more.

Organizations and companies rely heavily on web tracking to gain insight into their users’ behavior and preferences. One of the main purposes of these insights is user profiling and targeted marketing. While user tracking helps businesses, it can be pervasive and put to more sinister purposes. In the recent past, we have seen many companies, including big tech players like Facebook and Google, involved in scandals related to violating users’ online privacy—for instance, Facebook’s Cambridge Analytica scandal and Google’s cookie case.

Apple aims to create “a healthy web ecosystem, with privacy by design”

The WebKit Tracking Prevention Policy will prevent several tracking techniques, including cross-site tracking, stateful tracking, covert stateful tracking, navigational tracking, fingerprinting, covert tracking, and other unknown techniques that do not fall under these categories.

If a tracking technique cannot be prevented without undue harm to the user, WebKit will instead limit its capability. If this also does not help, users will be asked for their consent.

Apple will treat any attempt to subvert the anti-tracking methods as a security vulnerability. “We treat circumvention of shipping anti-tracking measures with the same seriousness as an exploitation of security vulnerabilities,” Apple wrote. It warns that it may add further restrictions, without prior notice, against parties who attempt to circumvent the tracking prevention methods.

Apple further mentioned that there won’t be any exception even if you have a valid use for a technique that is also used for tracking. The announcement reads, “But WebKit often has no technical means to distinguish valid uses from tracking, and doesn’t know what the parties involved will do with the collected data, either now or in the future.”

WebKit Tracking Prevention Policy’s unintended impact

With the implementation of this policy, Apple warns of certain unintended repercussions as well. Among the possibly affected features are funding websites using targeted or personalized advertising, federated login using a third-party login provider, fraud prevention, and more. In cases of tradeoffs, WebKit will prioritize user benefits over current website practices. Apple promises to limit this unintended impact and might update the tracking prevention methods to permit certain use cases. In the future, it will also introduce new web technologies that allow these practices without compromising users’ online privacy, such as the Storage Access API and Privacy-Preserving Ad Click Attribution.
What users are saying about Apple’s anti-tracking policy

At a time when there is increasing concern regarding users’ online privacy, this policy comes as a blessing. Many users are appreciating the move, while some fear that it will affect certain user-friendly features.

In an ongoing discussion on Hacker News, a user commented, “The fact that this makes behavioral targeting even harder makes me very happy.”

Some others also believe that focusing on online tracking protection will give competing browsers an edge over Google’s Chrome. A user said, “One advantage of Google's dominance and their business model being so reliant on tracking, is that it's become the moat for its competitors: investing energy into tracking protection is a good way for them to gain a competitive advantage over Google, since it's a feature that Google will not be able to copy. So as long as Google's competitors remain in business, we'll probably at least have some alternatives that take privacy seriously.”

When asked about the added restrictions that will be applied if a party is found circumventing tracking prevention, a member of the WebKit team commented, “We're willing to do specifically targeted mitigations, but only if we have to. So far, nearly everything we've done has been universal or algorithmic. The one exception I know of was to delete tracking data that had already been planted by known circumventors, at the same time as the mitigation to stop anyone else from using that particular hole (HTTPS supercookies).”

Some users had questions about the features that will be impacted by the introduction of this policy. A user wrote, “While I like the sentiment, I hate that Safari drops cookies after a short period of non-use. I wind up having to re-login to sites constantly while Chrome does it automatically.”

Another user added, “So what is going to happen when Apple succeeds in making it impossible to make any money off advertisements shown to iOS users on the web? I'm currently imagining a future where publishers start to just redirect iOS traffic to install their app, where they can actually make money. Good news for the walled garden, I guess?”

Read Apple’s official announcement to know more about the WebKit Tracking Prevention Policy.

Firefox Nightly now supports Encrypted Server Name Indication (ESNI) to prevent 3rd parties from tracking your browsing history

All about Browser Fingerprinting, the privacy nightmare that keeps web developers awake at night

Apple proposes a “privacy-focused” ad click attribution model for counting conversions without tracking users

How to implement Internationalization and localization in your Node.js app

Packt Editorial Staff
07 May 2018
15 min read
Internationalization, often abbreviated as i18n, implies a particular software design capable of adapting to the requirements of target local markets. In other words, if we want to distribute our application to markets other than the USA, we need to take care of translations, formatting of datetime, numbers, addresses, and such. In today's post we will cover the concept of implementing internationalization and localization in a Node.js app and look at the context menu and system clipboard in detail.

Date format by country

Internationalization is a cross-cutting concern. When you are changing the locale, it usually affects multiple modules. So I suggest going with the observer pattern that we already examined while working on DirService. The ./js/Service/I18n.js file contains the following code:

const EventEmitter = require("events");

class I18nService extends EventEmitter {
  constructor() {
    super();
    this.locale = "en-US";
  }
  notify() {
    this.emit("update");
  }
}

exports.I18nService = I18nService;

As you see, we can change the locale by setting a new value to the locale property. As soon as we call the notify method, all the subscribed modules immediately respond. But locale is a public property and therefore we have no control over its access and mutation. We can fix that by using overloading. The ./js/Service/I18n.js file contains the following code:

// ...
constructor() {
  super();
  this._locale = "en-US";
}
get locale() {
  return this._locale;
}
set locale(locale) {
  // validate locale...
  this._locale = locale;
}
// ...

Now if we access the locale property of an I18n instance, it gets delivered by the getter (get locale). When setting it a value, it goes through the setter (set locale). Thus we can add extra functionality, such as validation and logging, on property access and mutation.

Remember, we have a combobox in the HTML for selecting the language. Why not give it a view? The ./js/View/LangSelector.js file contains the following code:

class LangSelectorView {
  constructor(boundingEl, i18n) {
    boundingEl.addEventListener("change", this.onChanged.bind(this), false);
    this.i18n = i18n;
  }
  onChanged(e) {
    const selectEl = e.target;
    this.i18n.locale = selectEl.value;
    this.i18n.notify();
  }
}

exports.LangSelectorView = LangSelectorView;

In the preceding code, we listen for change events on the combobox. When the event occurs, we change the locale property of the passed-in I18n instance and call notify to inform the subscribers. The ./js/app.js file contains the following code:

const i18nService = new I18nService(),
  { LangSelectorView } = require("./js/View/LangSelector");

new LangSelectorView(document.querySelector("[data-bind=langSelector]"), i18nService);

Well, we can change the locale and trigger the event. What about consuming modules? In the FileList view we have a static method, formatTime, that formats the passed-in timeString for printing. We can make it formatted in accordance with the currently chosen locale. The ./js/View/FileList.js file contains the following code:

constructor(boundingEl, dirService, i18nService) {
  // ...
  this.i18n = i18nService;
  // Subscribe on i18nService updates
  i18nService.on("update", () => this.update(dirService.getFileList()));
}
static formatTime(timeString, locale) {
  const date = new Date(Date.parse(timeString)),
    options = {
      year: "numeric", month: "numeric", day: "numeric",
      hour: "numeric", minute: "numeric", second: "numeric",
      hour12: false
    };
  return date.toLocaleString(locale, options);
}
update(collection) {
  // ...
  this.el.insertAdjacentHTML("beforeend", `<li class="file-listli" data-file="${fInfo.fileName}">
    <span class="file-listliname">${fInfo.fileName}</span>
    <span class="file-listlisize">${filesize(fInfo.stats.size)}</span>
    <span class="file-listlitime">${FileListView.formatTime(fInfo.stats.mtime, this.i18n.locale)}</span>
  </li>`);
  // ...
}
// ...

In the constructor, we subscribe to the I18n update event and update the file list every time the locale changes. The static method formatTime converts the passed-in string into a Date object and uses the Date.prototype.toLocaleString() method to format the datetime according to a given locale. This method belongs to the so-called ECMAScript Internationalization API (http://norbertlindenberg.com/2012/12/ecmascript-internationalization-api/index.html). The API describes methods of the built-in objects String, Date, and Number designed to format and compare localized data. What it really does here is format a Date instance with toLocaleString; for the English (United States) locale ("en-US") it returns the date as follows:

3/17/2017, 13:42:23

However, if we feed the method the German locale ("de-DE"), we get quite a different result:

17.3.2017, 13:42:23

To put it into action, we set an identifier on the combobox. The ./index.html file contains the following code:

..
<select class="footerselect" data-bind="langSelector">
..

And of course, we have to create an instance of the I18n service and pass it to LangSelectorView and FileListView:

./js/app.js

// ...
const { I18nService } = require("./js/Service/I18n"),
  { LangSelectorView } = require("./js/View/LangSelector"),
  i18nService = new I18nService();

new LangSelectorView(document.querySelector("[data-bind=langSelector]"), i18nService);
// ...
new FileListView(document.querySelector("[data-bind=fileList]"), dirService, i18nService);

Now we start the application. Yeah! As we change the language in the combobox, the file modification dates adjust accordingly.

Multilingual support

Localizing dates and numbers is a good thing, but it would be more exciting to provide translation into multiple languages. We have a number of terms across the application, namely the column titles of the file list and the tooltips (via the title attribute) on the windowing action buttons. What we need is a dictionary. Normally it implies sets of token-translation pairs mapped to language codes or locales. Thus, when you request a term from the translation service, it can correlate it to a matching translation according to the currently used language/locale. Here I suggest making the dictionary a static module that can be loaded with the require function. The ./js/Data/dictionary.js file contains the following code:

exports.dictionary = {
  "en-US": {
    NAME: "Name",
    SIZE: "Size",
    MODIFIED: "Modified",
    MINIMIZE_WIN: "Minimize window",
    RESTORE_WIN: "Restore window",
    MAXIMIZE_WIN: "Maximize window",
    CLOSE_WIN: "Close window"
  },
  "de-DE": {
    NAME: "Dateiname",
    SIZE: "Grösse",
    MODIFIED: "Geändert am",
    MINIMIZE_WIN: "Fenster minimieren",
    RESTORE_WIN: "Fenster wiederherstellen",
    MAXIMIZE_WIN: "Fenster maximieren",
    CLOSE_WIN: "Fenster schliessen"
  }
};

So we have two locales with translations per term. We are going to inject the dictionary as a dependency into our I18n service. The ./js/Service/I18n.js file contains the following code:

// ...
constructor(dictionary) {
  super();
  this.dictionary = dictionary;
  this._locale = "en-US";
}
translate(token, defaultValue) {
  const dictionary = this.dictionary[this._locale];
  return dictionary[token] || defaultValue;
}
// ...

We also added a new method, translate, that accepts two parameters: a token and a default translation.
The first parameter can be one of the keys from the dictionary, such as NAME. The second one is a guarding value for the case when the requested token does not yet exist in the dictionary; thus we still get meaningful text, at least in English. Let's see how we can use this new method. The ./js/View/FileList.js file contains the following code:

// ...
update(collection) {
  this.el.innerHTML = `<li class="file-listli file-listhead">
    <span class="file-listliname">${this.i18n.translate("NAME", "Name")}</span>
    <span class="file-listlisize">${this.i18n.translate("SIZE", "Size")}</span>
    <span class="file-listlitime">${this.i18n.translate("MODIFIED", "Modified")}</span>
  </li>`;
// ...

In the FileList view, we replace the hardcoded column titles with calls to the translate method of the I18n instance, meaning that every time the view updates it receives the actual translations.

We shall not forget about the TitleBarActions view, where we have the windowing action buttons. The ./js/View/TitleBarActions.js file contains the following code:

constructor(boundingEl, i18nService) {
  this.i18n = i18nService;
  // ...
  // Subscribe on i18nService updates
  i18nService.on("update", () => this.translate());
}
translate() {
  this.unmaximizeEl.title = this.i18n.translate("RESTORE_WIN", "Restore window");
  this.maximizeEl.title = this.i18n.translate("MAXIMIZE_WIN", "Maximize window");
  this.minimizeEl.title = this.i18n.translate("MINIMIZE_WIN", "Minimize window");
  this.closeEl.title = this.i18n.translate("CLOSE_WIN", "Close window");
}

Here we add the method translate, which updates the button title attributes with the actual translations. We subscribe to the i18n update event to call the method every time the user changes the locale.

Context menu

Well, with our application we can already navigate through the file system and open files. Yet one might expect more of a File Explorer. We can add some file-related actions such as delete and copy/paste. Usually these tasks are available via the context menu, which gives us a good opportunity to examine how to build one with NW.js.

With the environment integration API, we can create an instance of a system menu (http://docs.nwjs.io/en/latest/References/Menu/). Then we compose objects representing menu items and attach them to the menu instance (http://docs.nwjs.io/en/latest/References/MenuItem/). This menu can be shown in an arbitrary position:

const menu = new nw.Menu(),
  menuItem = new nw.MenuItem({
    label: "Say hello",
    click: () => console.log("hello!")
  });

menu.append(menuItem);
menu.popup(10, 10);

Yet our task is more specific. We have to display the menu on right mouse click, at the position of the cursor. We achieve that by subscribing a handler to the contextmenu DOM event:

document.addEventListener("contextmenu", (e) => {
  console.log(`Show menu in position ${e.x}, ${e.y}`);
});

Now, whenever we right-click within the application window, the menu shows up. That's not exactly what we want, is it? We need it only when the cursor resides within a particular region, for instance, when it hovers over a file name. That means we have to test whether the target element matches our conditions:

document.addEventListener("contextmenu", (e) => {
  const el = e.target;
  if (el instanceof HTMLElement && el.parentNode.dataset.file) {
    console.log(`Show menu in position ${e.x}, ${e.y}`);
  }
});

Here we ignore the event until the cursor hovers over any cell of a file table row, given that every row is a list item generated by the FileList view and therefore provided with a value for the data-file attribute. This passage explains pretty much how to build a system menu and how to attach it to the file list.
But before starting on a module capable of creating the menu, we need a service to handle file operations. The ./js/Service/File.js file contains the following code:

const fs = require("fs"),
  path = require("path"),
  // Copy file helper
  cp = (from, toDir, done) => {
    const basename = path.basename(from),
      to = path.join(toDir, basename),
      write = fs.createWriteStream(to);
    fs.createReadStream(from)
      .pipe(write);
    write
      .on("finish", done);
  };

class FileService {
  constructor(dirService) {
    this.dir = dirService;
    this.copiedFile = null;
  }
  remove(file) {
    fs.unlinkSync(this.dir.getFile(file));
    this.dir.notify();
  }
  paste() {
    const file = this.copiedFile;
    if (fs.lstatSync(file).isFile()) {
      cp(file, this.dir.getDir(), () => this.dir.notify());
    }
  }
  copy(file) {
    this.copiedFile = this.dir.getFile(file);
  }
  open(file) {
    nw.Shell.openItem(this.dir.getFile(file));
  }
  showInFolder(file) {
    nw.Shell.showItemInFolder(this.dir.getFile(file));
  }
};

exports.FileService = FileService;

What's going on here? FileService receives an instance of DirService as a constructor argument. It uses the instance to obtain the full path to a file by name (this.dir.getFile(file)). It also exploits the notify method of the instance to request that all the views subscribed to DirService update. The method showInFolder calls the corresponding method of nw.Shell to show the file in its parent folder with the system file manager. As you can reckon, the method remove deletes the file. As for copy/paste, we do the following trick: when the user clicks copy, we store the target file path in the copiedFile property, so when the user next clicks paste, we can use it to copy that file to the (possibly changed) current location. The method open evidently opens the file with the default associated program. That is what we currently do in the FileList view directly; since this action belongs to FileService, we refactor the view to use the service instead. The ./js/View/FileList.js file contains the following code:

constructor(boundingEl, dirService, i18nService, fileService) {
  this.file = fileService;
  // ...
}
bindUi() {
  // ...
  this.file.open(el.dataset.file);
  // ...
}

Now we need a module to handle the context menu for a selected file. The module will subscribe to the contextmenu DOM event and build a menu when the user right-clicks on a file. This menu will contain the items Show Item in the Folder, Copy, Paste, and Delete, where Copy and Paste are separated from the other items by delimiters. Besides, Paste will be disabled until we store a file with Copy. The source code follows.
The ./js/View/ContextMenu.js file contains the following code:

class ContextMenuView {
  constructor(fileService, i18nService) {
    this.file = fileService;
    this.i18n = i18nService;
    this.attach();
  }
  getItems(fileName) {
    const file = this.file,
      isCopied = Boolean(file.copiedFile);
    return [
      {
        label: this.i18n.translate("SHOW_FILE_IN_FOLDER", "Show Item in the Folder"),
        enabled: Boolean(fileName),
        click: () => file.showInFolder(fileName)
      },
      {
        type: "separator"
      },
      {
        label: this.i18n.translate("COPY", "Copy"),
        enabled: Boolean(fileName),
        click: () => file.copy(fileName)
      },
      {
        label: this.i18n.translate("PASTE", "Paste"),
        enabled: isCopied,
        click: () => file.paste()
      },
      {
        type: "separator"
      },
      {
        label: this.i18n.translate("DELETE", "Delete"),
        enabled: Boolean(fileName),
        click: () => file.remove(fileName)
      }
    ];
  }
  render(fileName) {
    const menu = new nw.Menu();
    this.getItems(fileName).forEach((item) => menu.append(new nw.MenuItem(item)));
    return menu;
  }
  attach() {
    document.addEventListener("contextmenu", (e) => {
      const el = e.target;
      if (!(el instanceof HTMLElement)) {
        return;
      }
      if (el.classList.contains("file-list")) {
        e.preventDefault();
        this.render()
          .popup(e.x, e.y);
      }
      // If a child of an element matching [data-file]
      if (el.parentNode.dataset.file) {
        e.preventDefault();
        this.render(el.parentNode.dataset.file)
          .popup(e.x, e.y);
      }
    });
  }
}

exports.ContextMenuView = ContextMenuView;

So in the ContextMenuView constructor, we receive instances of FileService and I18nService. During construction we also call the attach method, which subscribes to the contextmenu DOM event, creates the menu, and shows it at the position of the mouse cursor. The event gets ignored unless the cursor hovers over a file or resides in an empty area of the file list component. When the user right-clicks the file list, the menu still appears, but with all items disabled except Paste (in case a file was copied before). The method render creates an instance of the menu and populates it with nw.MenuItems created by the getItems method. That method creates an array representing the menu items. Elements of the array are object literals: the label property accepts the translation for the item caption; the enabled property defines the state of the item depending on our cases (whether we have a copied file or not, whether the cursor is on a file or not); finally, the click property expects the handler for the click event.

Now we need to enable our new components in the main module. The ./js/app.js file contains the following code:

const { FileService } = require("./js/Service/File"),
  { ContextMenuView } = require("./js/View/ContextMenu"),
  fileService = new FileService(dirService);

new FileListView(document.querySelector("[data-bind=fileList]"), dirService, i18nService, fileService);
new ContextMenuView(fileService, i18nService);

Let's now run the application, right-click on a file and voilà! We have the context menu and the new file actions.

System clipboard

Usually Copy/Paste functionality involves the system clipboard. NW.js provides an API to control it (http://docs.nwjs.io/en/latest/References/Clipboard/). Unfortunately it's quite limited: we cannot transfer an arbitrary file between applications, which is what you might expect of a file manager. Yet some things are still available to us.

Transferring text

In order to examine text transfer with the clipboard, we modify the copy method of FileService:

copy(file) {
  this.copiedFile = this.dir.getFile(file);
  const clipboard = nw.Clipboard.get();
  clipboard.set(this.copiedFile, "text");
}
So now, after copying a file within the File Explorer, we can switch to an external program (for example, a text editor) and paste the copied path from the clipboard.

Transferring graphics

It doesn't look very handy, does it? It would be more interesting if we could copy/paste a file. Unfortunately, NW.js doesn't give us many options when it comes to file exchange. Yet we can transfer PNG and JPEG images between an NW.js application and external programs. The ./js/Service/File.js file contains the following code:

// ...
copyImage(file, type) {
  const clip = nw.Clipboard.get(),
    // load file content as Base64
    data = fs.readFileSync(file).toString("base64"),
    // image as HTML
    html = `<img src="file:///${encodeURI(data.replace(/^\//, ""))}">`;
  // write both options (raw image and HTML) to the clipboard
  clip.set([
    { type, data: data, raw: true },
    { type: "html", data: html }
  ]);
}
copy(file) {
  this.copiedFile = this.dir.getFile(file);
  const ext = path.parse(this.copiedFile).ext.substr(1);
  switch (ext) {
    case "jpg":
    case "jpeg":
      return this.copyImage(this.copiedFile, "jpeg");
    case "png":
      return this.copyImage(this.copiedFile, "png");
  }
}
// ...

We extended our FileService with a private method, copyImage. It reads a given file, converts its contents to Base64, and passes the resulting code to a clipboard instance. In addition, it creates HTML with an image tag holding the Base64-encoded image in a data Uniform Resource Identifier (URI). Now, after copying an image (PNG or JPEG) in the File Explorer, we can paste it into an external program such as a graphical editor or a text processor.

Receiving text and graphics

We've learned how to pass text and graphics from our NW.js application to external programs. But how can we receive data from outside? As you can guess, it is accessible through the get method of nw.Clipboard. Text can be retrieved as simply as this:

const clip = nw.Clipboard.get();
console.log(clip.get("text"));

When a graphic is put in the clipboard, we can get it with NW.js only as Base64-encoded content or as HTML. To see it in practice, we add a few methods to FileService. The ./js/Service/File.js file contains the following code:

// ...
hasImageInClipboard() {
  const clip = nw.Clipboard.get();
  return clip.readAvailableTypes().indexOf("png") !== -1;
}
pasteFromClipboard() {
  const clip = nw.Clipboard.get();
  if (this.hasImageInClipboard()) {
    const base64 = clip.get("png", true),
      binary = Buffer.from(base64, "base64"),
      filename = Date.now() + "--img.png";
    fs.writeFileSync(this.dir.getFile(filename), binary);
    this.dir.notify();
  }
}
// ...

The method hasImageInClipboard checks whether the clipboard holds any graphics. The method pasteFromClipboard takes the graphical content from the clipboard as Base64-encoded PNG, converts the content into binary code, writes it into a file, and requests that the DirService subscribers update. To make use of these methods, we need to edit the ContextMenu view. The ./js/View/ContextMenu.js file contains the following code:

getItems(fileName) {
  const file = this.file,
    isCopied = Boolean(file.copiedFile);
  return [
    // ...
    {
      label: this.i18n.translate("PASTE_FROM_CLIPBOARD", "Paste image from clipboard"),
      enabled: file.hasImageInClipboard(),
      click: () => file.pasteFromClipboard()
    },
    // ...
  ];
}

We add to the menu a new item, Paste image from clipboard, which is enabled only when there is a graphic in the clipboard.

[box type="shadow" align="" class="" width=""]This article is an excerpt from the book Cross Platform Desktop Application Development: Electron, Node, NW.js and React written by Dmitry Sheiko.
This book will help you build powerful cross-platform desktop applications with web technologies such as Node, NW.JS, Electron, and React.[/box] Read More: How to deploy a Node.js application to the web using Heroku How is Node.js Changing Web Development? With Node.js, it’s easy to get things done  

1 in 3 developers want to be entrepreneurs. What does it take to make a successful transition?

Fatema Patrawala
04 Jun 2018
9 min read
Much of the change we have seen over the last decade has been driven by a certain type of person: part developer, part entrepreneur. From Steve Jobs, to Bill Gates, Larry Page to Sergey Bin, Mark Zuckerberg to Elon Musk - the list is long. These people have built their entrepreneurial success on software. In doing so, they’ve changed the way we live, and changed the way we look at the global tech landscape. Part of the reason software is eating the world is because of the entrepreneurial spirit of people like these.Silicon Valley is a testament to this! That is why a combination of tech and leadership skills in developers is the holy grail even today. From this perspective, we weren’t surprised to find that a significant number of developers working today have entrepreneurial ambitions. In this year’s Skill Up survey we asked respondents what they hope to be doing in the next 5 years. A little more than one third of the respondents said they would like to be working as a founder of their own company. Source: Packt Skill Up Survey But being a successful entrepreneur requires a completely new set of skills from those that make one a successful developer. It requires a different mindset. True, certain skills you might already possess. Others, though you’re going to have to acquire. With skills realizing the power of purpose and values will be the key factor to building a successful venture. You will need to clearly articulate why you’re doing something and what you’re doing. In this way you will attract and keep customers who will clearly know the purpose of your business. Let us see what it takes to transition from a developer to a successful entrepreneur. Think customer first, then product: Source: Gifer Whatever you are thinking right now, think Bigger! Developers love making things. And that’s a good place to begin. In tech, ideas often originate from technical conversations and problem solving. “Let’s do X on Z  platform because the Y APIs make it possible”. This approach works sometimes, but it isn’t a sustainable business model. That’s because customers don’t think like that. Their thought process is more like this: “Is X for me?”, “Will X solve my problem?”, “Will X save me time or money?”. Think about the customer first and then begin developing your product. You can’t create a product and then check if there is it is in demand - that sounds obvious, but it does happen. Do your research, think about the market and get to know your customer first. In other words, to be a successful entrepreneur, you need to go one step further - yes, care about your products, but think about it from the long term perspective. Think big picture: what’s going to bring in real value to the lives of other people. Search for opportunities to solve problems in other people’s lives adopting a larger and more market oriented approach. Adding a bit of domain knowledge like market research, product-market fit, market positioning etc to your to-do list should be a vital first step in your journey. Remember, being a good technician is not enough Source: Tenor Most developers are comfortable and probably excel at being good technicians. This means you’re good at writing those beautiful codes, produce something tangible, and cranking away on each task, moving one step closer to the project launch date. Great. But it takes more than a technician to run a successful business. It’s critical to look ahead into both the near and the long term. What’s going to provide the best ROI next year? 
Where do I want to be in 5 years time? To do this you need to be organized and focused. It’s of critical importance to determine your goals and objectives. Simply put you need to evolve from being a problem solver/great executioner to a visionary/strategic thinker. The developer-entrepreneur is a rare breed of doer-thinker. Be passionate and focussed Source: Bustle Perseverance is the single most important trait found in successful entrepreneurs. As a developer you are primed to think logically and come to conclusions, which is a great trait to possess in most cases. However, in entrepreneurship there will be times when you will do all the right things, meet the right people and yet meet with failures. To get your company off the ground, you need to be passionate and excited about what you’re working on. This enthusiasm will often be the only thing to carry you through late nights, countless setbacks and tough situations. You must stay open, even if logic dictates otherwise. Most startups fail either because they read the market wrong or they didn’t stay long enough in the race. You need to also know how to focus closely on the very next step to get closer to your ultimate goal. There will be many distracting forces when trying to build a business that focusing on one particular task will not be easy and you will need to religiously master this skill. Become amazing at networking Source: Gfycat It truly isn't about what you know or what you have developed. Sure, what you know is important. But what's far more important is who you know. There's a reason why certain people can make so much progress in such a brief period. These business networking power players command the room by bringing the right people together.  As an entrepreneur, if there's one thing that you should focus on, it's becoming a truly skilled business networker. Imagine having an idea for a business that's so wonderful, that you can pick up the phone and call four or five people who can help you turn that idea into a reality. Sharpen your strategic and critical thinking skills Source: Tenor As entrepreneurs it’s essential to possess sharp critical thinking skills. When you think critically,t you ask the hard, tactical questions while expanding the lens to see the wider picture. Without thinking critically, it’s hard to assess whether the most creative application that you have developed really has a life out in the world. You are always going to love what you have created but it is important to wear multiple hats to think from the users’ perspective. Get it reviewed by the industry stalwarts in your community to catch that silly but important area that you could have missed while planning. With critical thinking it is also important you strategize and plan out things for smooth functioning of your business. Find and manage the right people Source: Gfycat In reality businesses are like living creatures, who’s organs all need to work in harmony. There is no major organ more critical than another, and a failure to one system can bring down all the others. Developers build things, marketers get customers and salespersons sell products. The trick is, to find and employ the right people for your business if you are to create a sustainable business. Your ideas and thinking will need to align with the ideas of people that you are working with. Only by learning to leverage employees, vendors and other resources can you build a scalable and sustainable company. 
Learn the art of selling an idea Source: Patent an idea Every entrepreneur needs to play the role of a sales person whether they like it or not. To build a successful business venture you will need to sell your ideas, products or services to customers, investors or employees. You should be ready to work and be there when customers are ready to buy. Alternatively, you should also know how to let go and move on when they are not. The ability to convince others that you are going to provide them the maximum product value will help you crack that mission critical deal. Be flexible. Create contingency plans Source: Gifer Things rarely go as per plan in software development. Project scope creeps, clients expectations rise, or bugs always seem to mysteriously appear. Even the most seasoned developers can’t predict all possible scenarios so they have to be ready with contingencies. This is also true in the world of business start-ups. Despite the best-laid business plans, entrepreneurs need to be prepared for the unexpected and be able to roll with the punches. You need to very well be prepared with an option A, B or C. Running a business is like sea surfing. You’ve got to be nimble enough both in your thinking and operations to ride the waves, high and low. Always measure performance Source: Tenor "If you can't measure it, you can't improve it." Peter Drucker’s point feels obvious but is one worth bearing in mind always. If you can't measure something, you’ll never improve. Measuring key functions of your business will help you scale it faster. Otherwise you risk running your firm blindly, without any navigational path to guide it. Closing Comments Becoming an entrepreneur and starting your own business is one of life’s most challenging and rewarding journeys. Having said that, for all of the perks that the entrepreneurial path offers, it’s far from being all roses. Being an entrepreneur means being a warrior. It means being clever, hungry and often a ruthless competitor. If you are a developer turned entrepreneur, we’d love to hear your take on the topic and your own personal journey. Share your comments below or write to us at hub@packtpub.com. Developers think managers don’t know enough about technology. And that’s hurting business. 96% of developers believe developing soft skills is important Don’t call us ninjas or rockstars, say developers        

Converting XML to PDF

Packt
23 Oct 2009
5 min read
XML is the most suitable format for data exchange, but not for data presentation. Adobe's PDF and Microsoft Excel's spreadsheet are the commonly used formats for data presentation. If you receive an XML file containing data that needs to be included in a PDF or Excel spreadsheet file, you need to convert the XML file to the relevant format. Some of the commonly used XML-to-PDF conversion tools/APIs are discussed in the following table: Tool/API Description iText iText is a Java library for generating a PDF document. iText may be downloaded from http://www.lowagie.com/iText/. Stylus Studio XML Publisher XML Publisher is a report designer, which supports many data sources, including XML, to generate PDF reports. Stylus Studio XML Editor XML Editor supports XML conversion to PDF. Apache FOP The Apache project provides an open source FO processor called Apache FOP to render an XSL-FO document as a PDF document. We will discuss the Apache FOP processor in this chapter. XMLMill for Java XMLMill may be used to generate PDF documents from XML data combined with XSL and XSLT. XMLMill may be downloaded from http://www.xmlmill.com/. RenderX XEP RenderX provides an XSL-FO processor called XEP that may be used to generate a PDF document.   We can convert an XML file data to a PDF document using any of these tools/APIs in JDeveloper 11g. Here, we will use Apache FOP API. The XSL specification consists of two components: a language for transforming XML documents (XSLT), and XML syntax for specifying formatting objects (XSL-FO). Using XSL-FO, the layout, fonts, and representations of the data may be formatted. Apache FOP (Formatting Objects Processor) is a print formatter for converting XSL formatting objects (XSL-FO) to an output format such as PDF, PCL, PS, SVG, XML, Print, AWT, MIF, or TXT. In this  article, we will convert an XML document to PDF using XSL-FO and the FOP processor in Oracle JDeveloper 11g.The procedure to create a PDF document from an XML file using the Apache FOP processor in JDeveloper is as follows: Create an XML document. Create an XSL stylesheet. Convert the XML document to an XSL-FO document. Convert the XSL-FO document to a PDF file. Setting the environment We need to download the FOP JAR file fop-0.20.5-bin.zip (or a later version) from http://archive.apache.org/dist/xmlgraphics/fop/binaries/ and extract the ZIP file to a directory. To develop an XML-to-PDF conversion application, we need to create an application (ApacheFOP, for example) and a project (ApacheFOP for example) in JDeveloper. In the project add an XML document, catalog.xml, with File | New. In the New Gallery window select Categories | General | XML and Items | XML Document. Click on OK. In the Create XML File window specify a File Name, catalog.xml, and click on OK. A catalog.xml file gets added to the ApacheFOP project. Copy the following catalog.xml listing to catalog.xml: <?xml version="1.0" encoding="UTF-8"?><catalog title="Oracle Magazine" publisher="Oracle Publishing"> <journal edition="September-October 2008"> <article> <title>Share 2.0</title> <author>Alan Joch</author> </article> <article> <title>Restrictions Apply</title> <author>Alan Joch</author> </article> </journal> <journal edition="March-April 2008"> <article> <title>Oracle Database 11g Redux</title> <author>Tom Kyte</author> </article> <article> <title>Declarative Data Filtering</title> <author>Steve Muench</author> </article> </journal></catalog> We also need to add an XSL stylesheet to convert the XML document to an XSL-FO document. 
Create an XSL stylesheet with File | New. In the New Gallery window, select Categories | General | XML and Items | XSL Stylesheet. Click on OK. In the Create XSL File window specify a File Name (catalog.xsl) and click on OK. A catalog.xsl file gets added to the ApacheFOP project. To convert the XML document to an XSL-FO document and subsequently create a PDF file from the XSL-FO file, we need a Java application. Add a Java class,  XMLToPDF.java, with File | New. In the New Gallery window select Categories | General and Items | Java Class. Click on OK. In the Create Java Class window specify a class Name (XMLToPDF for example) and click on OK. A Java class gets added to the ApacheFOP project. The directory structure of the FOP application is shown in the following illustration: Next, add the FOP JAR files to the project. Select the project node (ApacheFOP node) and then Tools | Project Properties. In the Project Properties window, select Libraries and Classpath. Add the Oracle XML Parser v2 library with the  Add Library button. The JAR files required to develop an FOP application are listed in the following table: JAR File Description <FOP>/fop-0.20.5/build/fop.jar Apache FOP API <FOP>/fop-0.20.5/lib/batik.jar Graphics classes <FOP>/fop-0.20.5/lib/ avalon-framework-cvs-20020806.jar Logger classes <FOP>/fop-0.20.5/lib/ xercesImpl-2.2.1.jar The DOMParser and the SAXParser classes
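The excerpt stops before showing the body of XMLToPDF.java. For orientation, here is a rough sketch of how the two-step conversion described above (XML + XSL to XSL-FO via standard JAXP, then XSL-FO to PDF via the FOP 0.20.5 Driver class) could be wired together. Treat it as an assumption-laden outline rather than the book's actual listing: the file paths are placeholders, and the Driver-based API shown is specific to FOP 0.20.x, so the class and method names should be checked against the documentation bundled with the download.

// Hypothetical sketch of XMLToPDF.java (not the book's listing)
import java.io.BufferedOutputStream;
import java.io.File;
import java.io.FileOutputStream;
import java.io.OutputStream;

import javax.xml.transform.Transformer;
import javax.xml.transform.TransformerFactory;
import javax.xml.transform.sax.SAXResult;
import javax.xml.transform.stream.StreamSource;

import org.apache.fop.apps.Driver;

public class XMLToPDF {

    public static void main(String[] args) throws Exception {
        File xml = new File("catalog.xml");   // placeholder paths
        File xsl = new File("catalog.xsl");
        File pdf = new File("catalog.pdf");

        OutputStream out = new BufferedOutputStream(new FileOutputStream(pdf));
        try {
            // FOP driver that renders the incoming XSL-FO as PDF
            Driver driver = new Driver();
            driver.setRenderer(Driver.RENDER_PDF);
            driver.setOutputStream(out);

            // Apply catalog.xsl to catalog.xml and stream the resulting
            // XSL-FO directly into the FOP driver's content handler
            Transformer transformer = TransformerFactory.newInstance()
                    .newTransformer(new StreamSource(xsl));
            transformer.transform(new StreamSource(xml),
                    new SAXResult(driver.getContentHandler()));
        } finally {
            out.close();
        }
    }
}

Running the class from the project with the listed JAR files on the classpath should produce catalog.pdf next to the source files, assuming the stylesheet emits well-formed XSL-FO.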

Getting Started with NW.js

Packt
21 May 2015
19 min read
In this article by Alessandro Benoit, author of the book NW.js Essentials, we will learn that until a while ago, developing a desktop application that was compatible with the most common operating systems required an enormous amount of expertise, different programming languages, and logics for each platform. (For more resources related to this topic, see here.) Yet, for a while now, the evolution of web technologies has brought to our browsers many web applications that have nothing to envy from their desktop alternative. Just think of Google apps such as Gmail and Calendar, which, for many, have definitely replaced the need for a local mail client. All of this has been made possible thanks to the amazing potential of the latest implementations of the Browser Web API combined with the incredible flexibility and speed of the latest server technologies. Although we live in a world increasingly interconnected and dependent on the Internet, there is still the need for developing desktop applications for a number of reasons: To overcome the lack of vertical applications based on web technologies To implement software solutions where data security is essential and cannot be compromised by exposing data on the Internet To make up for any lack of connectivity, even temporary Simply because operating systems are still locally installed Once it's established that we cannot completely get rid of desktop applications and that their implementation on different platforms requires an often prohibitive learning curve, it comes naturally to ask: why not make desktop applications out of the very same technologies used in web development? The answer, or at least one of the answers, is NW.js! NW.js doesn't need any introduction. With more than 20,000 stars on GitHub (in the top four hottest C++ projects of the repository-hosting service) NW.js is definitely one of the most promising projects to create desktop applications with web technologies. Paraphrasing the description on GitHub, NW.js is a web app runtime that allows the browser DOM to access Node.js modules directly. Node.js is responsible for hardware and operating system interaction, while the browser serves the graphic interface and implements all the functionalities typical of web applications. Clearly, the use of the two technologies may overlap; for example, if we were to make an asynchronous call to the API of an online service, we could use either a Node.js HTTP client or an XMLHttpRequest Ajax call inside the browser. Without going into technical details, in order to create desktop applications with NW.js, all you need is a decent understanding of Node.js and some expertise in developing HTML5 web apps. In this article, we are going to dissect the topic dwelling on these points: A brief technical digression on how NW.js works An analysis of the pros and cons in order to determine use scenarios Downloading and installing NW.js Development tools Making your first, simple "Hello World" application Important notes about NW.js (also known as Node-Webkit) and io.js Before January 2015, since the project was born, NW.js was known as Node-Webkit. Moreover, with Node.js getting a little sluggish, much to the concern of V8 JavaScript engine updates, from version 0.12.0, NW.js is not based on Node.js but on io.js, an npm-compatible platform originally based on Node.js. For the sake of simplicity, we will keep referring to Node.js even when talking about io.js as long as this does not affect a proper comprehension of the subject. 
NW.js under the hood As we stated in the introduction, NW.js, made by Roger Wang of Intel's Open Source Technology Center (Shanghai office) in 2011, is a web app runtime based on Node.js and the Chromium open source browser project. To understand how it works, we must first analyze its two components: Node.js is an efficient JavaScript runtime written in C++ and based on theV8 JavaScript engine developed by Google. Residing in the operating system's application layer, Node.js can access hardware, filesystems, and networking functionalities, enabling its use in a wide range of fields, from the implementation of web servers to the creation of control software for robots. (As we stated in the introduction, NW.js has replaced Node.js with io.js from version 0.12.0.) WebKit is a layout engine that allows the rendering of web pages starting from the DOM, a tree of objects representing the web page. NW.js is actually not directly based on WebKit but on Blink, a fork of WebKit developed specifically for the Chromium open source browser project and based on the V8 JavaScript engine as is the case with Node.js. Since the browser, for security reasons, cannot access the application layer and since Node.js lacks a graphical interface, Roger Wang had the insight of combining the two technologies by creating NW.js. The following is a simple diagram that shows how Node.js has been combined with WebKit in order to give NW.js applications access to both the GUI and the operating system: In order to integrate the two systems, which, despite speaking the same language, are very different, a couple of tricks have been adopted. In the first place, since they are both event-driven (following a logic of action/reaction rather than a stream of operations), the event processing has been unified. Secondly, the Node context was injected into WebKit so that it can access it. The amazing thing about it is that you'll be able to program all of your applications' logic in JavaScript with no concerns about where Node.js ends and WebKit begins. Today, NW.js has reached version 0.12.0 and, although still young, is one of the most promising web app runtimes to develop desktop applications adopting web technologies. Features and drawbacks of NW.js Let's check some of the features that characterize NW.js: NW.js allows us to realize modern desktop applications using HTML5, CSS3, JS, WebGL, and the full potential of Node.js, including the use of third-party modules The Native UI API allows you to implement native lookalike applications with the support of menus, clipboards, tray icons, and file binding Since Node.js and WebKit run within the same thread, NW.js has excellent performance With NW.js, it is incredibly easy to port existing web applications to desktop applications Thanks to the CLI and the presence of third-party tools, it's really easy to debug, package, and deploy applications on Microsoft Windows, Mac OS, and Linux However, all that glitters is not gold. There are some cons to consider when developing an application with NW.js: Size of the application: Since a copy of NW.js (70-90 MB) must be distributed along with each application, the size of the application makes it quite expensive compared to native applications. Anyway, if you're concerned about download times, compressing NW.js for distribution will save you about half the size. 
Difficulties in distributing your application through Mac App Store: In this article, it will not be discussed (just do a search on Google), but even if the procedure is rather complex, you can distribute your NW.js application through Mac App Store. At the moment, it is not possible to deploy a NW.js application on Windows Store due to the different architecture of .appx applications. Missing support for iOS or Android: Unlike other SDKs and libraries, at the moment, it is not possible to deploy an NW.js application on iOS or Android, and it does not seem to be possible to do so in the near future. However, the portability of the HTML, JavaScript, and CSS code that can be distributed on other platforms with tools such as PhoneGap or TideSDK should be considered. Unfortunately, this is not true for all of the features implemented using Node.js. Stability: Finally, the platform is still quite young and not bug-free. NW.js – usage scenarios The flexibility and good performance of NW.js allows its use in countless scenarios, but, for convenience, I'm going to report only a few notable ones: Development tools Implementation of the GUI around existing CLI tools Multimedia applications Web services clients Video games The choice of development platform for a new project clearly depends only on the developer; for the overall aim of confronting facts, it may be useful to consider some specific scenarios where the use of NW.js might not be recommended: When developing for a specific platform, graphic coherence is essential, and, perhaps, it is necessary to distribute the application through a store If the performance factor limits the use of the preceding technologies If the application does a massive use of the features provided by the application layer via Node.js and it has to be distributed to mobile devices Popular NW.js applications After summarizing the pros and cons of NW.js, let's not forget the real strength of the platform—the many applications built on top of NW.js that have already been distributed. We list a few that are worth noting: Wunderlist for Windows 7: This is a to-do list / schedule management app used by millions Cellist: This is an HTTP debugging proxy available on Mac App Store Game Dev Tycoon: This is one of the first NW.js games that puts you in the shoes of a 1980s game developer Intel® XDK: This is an HTML5 cross-platform solution that enables developers to write web and hybrid apps Downloading and installing NW.js Installing NW.js is pretty simple, but there are many ways to do it. One of the easiest ways is probably to run npm install nw from your terminal, but for the educational purposes, we're going to manually download and install it in order to properly understand how it works. You can find all the download links on the project website at http://nwjs.io/ or in the Downloads section on the GitHub project page at https://github.com/nwjs/nw.js/; from here, download the package that fits your operating system. For example, as I'm writing this article, Node-Webkit is at version 0.12.0, and my operating system is Mac OS X Yosemite 10.10 running on a 64-bit MacBook Pro; so, I'm going to download the nwjs-v0.12.0-osx-x64.zip file. Packages for Mac and Windows are zipped, while those for Linux are in the tar.gz format. Decompress the files and proceed, depending on your operating system, as follows. 
Installing NW.js on Mac OS X Inside the archive, we're going to find three files: Credits.html: This contains credits and licenses of all the dependencies of NW.js nwjs.app: This is the actual NW.js executable nwjc: This is a CLI tool used to compile your source code in order to protect it Before v0.12.0, the filename of nwjc was nwsnapshot. Currently, the only file that interests us is nwjs.app (the extension might not be displayed depending on the OS configuration). All we have to do is copy this file in the /Applications folder—your main applications folder. If you'd rather install NW.js using Homebrew Cask, you can simply enter the following command in your terminal: $ brew cask install nw If you are using Homebrew Cask to install NW.js, keep in mind that the Cask repository might not be updated and that the nwjs.app file will be copied in ~/Applications, while a symlink will be created in the /Applications folder. Installing NW.js on Microsoft Windows Inside the Microsoft Windows NW.js package, we will find the following files: credits.html: This contains the credits and licenses of all NW.js dependencies d3dcompiler_47.dll: This is the Direct3D library ffmpegsumo.dll: This is a media library to be included in order to use the <video> and <audio> tags icudtl.dat: This is an important network library libEGL.dll: This is the WebGL and GPU acceleration libGLESv2.dll: This is the WebGL and GPU acceleration locales/: This is the languages folder nw.exe: This is the actual NW.js executable nw.pak: This is an important JS library pdf.dll: This library is used by the web engine for printing nwjc.exe: This is a CLI tool to compile your source code in order to protect it Some of the files in the folder will be omitted during the final distribution of our application, but for development purposes, we are simply going to copy the whole content of the folder to C:/Tools/nwjs. Installing NW.js on Linux On Linux, the procedure can be more complex depending on the distribution you use. First, copy the downloaded archive into your home folder if you have not already done so, and then open the terminal and type the following command to unpack the archive (change the version accordingly to the one downloaded): $ gzip -dc nwjs-v0.12.0-linux-x64.tar.gz | tar xf - Now, rename the newly created folder in nwjs with the following command: $ mv ~/nwjs-v0.12.0-linux-x64 ~/nwjs Inside the nwjs folder, we will find the following files: credits.html: This contains the credits and licenses of all the dependencies of NW.js icudtl.dat This is an important network library libffmpegsumo.so: This is a media library to be included in order to use the <video> and <audio> tags locales/: This is a languages folder nw: This is the actual NW.js executable nw.pak: This is an important JS library nwjc: This is a CLI tool to compile your source code in order to protect it Open the folder inside the terminal and try to run NW.js by typing the following: $ cd nwjs$ ./nw If you get the following error, you are probably using a version of Ubuntu later than 13.04, Fedora later than 18, or another Linux distribution that uses libudev.so.1 instead of libudev.so.0: otherwise, you're good to go to the next step: error while loading shared libraries: libudev.so.0: cannot open shared object file: No such file or directory Until NW.js is updated to support libudev.so.1, there are several solutions to solve the problem. 
For me, the easiest solution is to type the following terminal command inside the directory containing nw: $ sed -i 's/udev.so.0/udev.so.1/g' nw This will replace the string related to libudev, within the application code, with the new version. The process may take a while, so wait for the terminal to return the cursor before attempting to enter the following: $ ./nw Eventually, the NW.js window should open properly. Development tools As you'll make use of third-party modules of Node.js, you're going to need npm in order to download and install all the dependencies; so, Node.js (http://nodejs.org/) or io.js (https://iojs.org/) must be obviously installed in your development environment. I know you cannot wait to write your first application, but before you start, I would like to introduce you to Sublime Text 2. It is a simple but sophisticated IDE, which, thanks to the support for custom build scripts, allows you to run (and debug) NW.js applications from inside the editor itself. If I wasn't convincing and you'd rather keep using your favorite IDE, you can skip to the next section; otherwise, follow these steps to install and configure Sublime Text 2: Download and install Sublime Text 2 for your platform from http://www.sublimetext.com/. Open it and from the top menu, navigate to Tools | Build System | New Build System. A new edit screen will open; paste the following code depending on your platform: On Mac OS X: {"cmd": ["nwjs", "--enable-logging",     "${project_path:${file_path}}"],"working_dir": "${project_path:${file_path}}","path": "/Applications/nwjs.app/Contents/MacOS/"} On Microsoft Windows: {"cmd": ["nw.exe", "--enable-logging",     "${project_path:${file_path}}"],"working_dir": "${project_path:${file_path}}","path": "C:/Tools/nwjs/","shell": true} On Linux: {"cmd": ["nw", "--enable-logging",     "${project_path:${file_path}}"],"working_dir": "${project_path:${file_path}}","path": "/home/userName/nwjs/"} Type Ctrl + S (Cmd + S on Mac) and save the file as nw-js.sublime-build. Perfect! Now you are ready to run your applications directly from the IDE. There are a lot of packages, such as SublimeLinter, LiveReload, and Node.js code completion, available to Sublime Text 2. In order to install them, you have to install Package Control first. Just open https://sublime.wbond.net/installation and follow the instructions. Writing and running your first "Hello World" app Finally, we are ready to write our first simple application. We're going to revisit the usual "Hello World" application by making use of a Node.js module for markdown parsing. "Markdown is a plain text formatting syntax designed so that it can be converted to HTML and many other formats using a tool by the same name."                                                                                                              – Wikipedia Let's create a Hello World folder and open it in Sublime Text 2 or in your favorite IDE. Now open a new package.json file and type in the following JSON code: {"name": "nw-hello-world","main": "index.html","dependencies": {   "markdown": "0.5.x"}} The package.json manifest file is essential for distribution as it determines many of the window properties and primary information about the application. Moreover, during the development process, you'll be able to declare all of the dependencies. In this specific case, we are going to assign the application name, the main file, and obviously our dependency, the markdown module, written by Dominic Baggott. 
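Since the manifest also controls window properties, it is worth seeing a slightly fuller example before moving on. The following is only a sketch that extends the manifest above with an optional window block of my own; the exact set of keys (title, width, height, toolbar, and so on) should be checked against the NW.js manifest documentation for the version you downloaded:
{
   "name": "nw-hello-world",
   "main": "index.html",
   "window": {
      "title": "Hello World",
      "width": 800,
      "height": 600,
      "toolbar": true,
      "resizable": true
   },
   "dependencies": {
      "markdown": "0.5.x"
   }
}
JSON does not allow comments, so a quick note instead: only name, main, and dependencies come from the original example; the window block is the optional addition, and its toolbar flag is what shows or hides the Chromium navigation bar, something we will come back to at the end of the article.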
If you so wish, you can create the package.json manifest file using the npm init command from the terminal, as you're probably used to doing when creating npm packages. Once you've saved the package.json file, create an index.html file that will be used as the main application file and type in the following code:
<!DOCTYPE html>
<html>
<head>
   <title>Hello World!</title>
</head>
<body>
   <script>
   // Here goes your code
   </script>
</body>
</html>
As you can see, it's a very common HTML5 boilerplate. Inside the script tag, let's add the following:
var markdown = require("markdown").markdown,
   div = document.createElement("div"),
   content = "#Hello World!\n" +
   "We are using **io.js** " +
   "version *" + process.version + "*";

div.innerHTML = markdown.toHTML(content);
document.body.appendChild(div);
What we do here is require the markdown module and then parse the content variable through it. To keep it as simple as possible, I've been using vanilla JavaScript to output the parsed HTML to the screen. You may have noticed that we are using process.version, a property that is part of the Node.js context. If you try to open index.html in a browser, you'll get the ReferenceError: require is not defined error, as Node.js has not been injected into the WebKit process. Once you have saved the index.html file, all that is left is to install the dependencies by running the following command from the terminal inside the project folder:
$ npm install
And we are ready to run our first application!
Running NW.js applications on Sublime Text 2
If you opted for Sublime Text 2 and followed the procedure in the development tools section, simply navigate to Project | Save Project As and save the hello-world.sublime-project file inside the project folder. Now, in the top menu, navigate to Tools | Build System and select nw-js. Finally, press Ctrl + B (or Cmd + B on Mac) to run the program. If you have opted for a different IDE, just follow the upcoming steps depending on your operating system.
Running NW.js applications on Microsoft Windows
Open the command prompt and type:
C:\> c:\Tools\nwjs\nw.exe c:\path\to\the\project
On Microsoft Windows, you can also drag the folder containing package.json onto nw.exe in order to open it.
Running NW.js applications on Mac OS
Open the terminal and type:
$ /Applications/nwjs.app/Contents/MacOS/nwjs /path/to/the/project/
Or, if you are running the command from inside the directory containing package.json, type:
$ /Applications/nwjs.app/Contents/MacOS/nwjs .
As you can see, on Mac OS X the NW.js executable binary is in a hidden directory within the .app file.
Running NW.js applications on Linux
Open the terminal and type:
$ ~/nwjs/nw /path/to/the/project/
Or, if you are running the command from inside the directory containing package.json, type:
$ ~/nwjs/nw .
Running the application, you may notice that a few errors are thrown depending on your platform. As I stated in the pros and cons section, NW.js is still young, so that's quite normal, and we're probably talking about minor issues. However, you can search the NW.js GitHub issues page to check whether they've already been reported; otherwise, open a new issue; your help would be much appreciated. Now, regardless of the operating system, a window similar to the following one should appear:
As illustrated, the process.version object variable has been printed properly, as Node.js has correctly been injected and can be accessed from the DOM.
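One optional convenience before we move on: if you installed the nw package from npm (the npm install nw route mentioned in the download section), you can add a scripts section to package.json so you don't have to type the full binary path on every platform. This assumes that particular setup and is not part of the original example:
"scripts": {
   "start": "nw ."
}
With that in place, running the app from the project folder becomes:
$ npm start
npm resolves the locally installed nw binary from node_modules/.bin, so the same command works on Windows, Mac, and Linux.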
Back to the window itself: the result is perhaps a little different from what you expected, since the top navigation bar of Chromium is visible. Do not worry! You can get rid of the navigation bar at any time simply by adding the window.toolbar = false parameter to the manifest file, but for now, it's important that the bar stays visible in order to debug the application.
Summary
In this article, you discovered how NW.js works under the hood, the recommended tools for development, a few usage scenarios of the library, and finally, how to run your first simple application using third-party Node.js modules. I really hope I haven't bored you too much with the theoretical concepts underlying the functioning of NW.js; I really did my best to keep it short.


Media Queries with Less

Packt
21 Oct 2014
9 min read
In this article by Alex Libby, author of Learning Less.js, we'll see how Less can make creating media queries a cinch; we will cover the following topics: How media queries work What's wrong with CSS? Creating a simple example (For more resources related to this topic, see here.) Introducing media queries If you've ever spent time creating content for sites, particularly for display on a mobile platform, then you might have come across media queries. For those of you who are new to the concept, media queries are a means of tailoring the content that is displayed on screen when the viewport is resized to a smaller size. Historically, websites were always built at a static size—with more and more people viewing content on smartphones and tablets, this means viewing them became harder, as scrolling around a page can be a tiresome process! Thankfully, this became less of an issue with the advent of media queries—they help us with what should or should not be displayed when viewing content on a particular device. Almost all modern browsers offer native support for media queries—the only exception being IE Version 8 or below, where it is not supported natively: Media queries always begin with @media and consist of two parts: The first part, only screen, determines the media type where a rule should apply—in this case, it will only show the rule if we're viewing content on screen; content viewed when printed can easily be different. The second part, or media feature, (min-width: 530px) and (max-width: 949px), means the rule will only apply between a screen size set at a minimum of 530px and a maximum of 949px. This will rule out any smartphones and will apply to larger tablets, laptops, or PCs. There are literally dozens of combinations of media queries to suit a variety of needs—for some good examples, visit http://cssmediaqueries.com/overview.html, where you can see an extensive list, along with an indication whether it is supported in the browser you normally use. Media queries are perfect to dynamically adjust your site to work in multiple browsers—indeed, they are an essential part of a responsive web design. While browsers support media queries, there are some limitations we need to consider; let's take a look at these now. The limitations of CSS If we spend any time working with media queries, there are some limitations we need to consider; these apply equally if we were writing using Less or plain CSS: Not every browser supports media features uniformly; to see the differences, visit http://cssmediaqueries.com/overview.html using different browsers. Current thinking is that a range of breakpoints has to be provided; this can result in a lot of duplication and a constant battle to keep up with numerous different screen sizes! The @media keyword is not supported in IE8 or below; you will need to use JavaScript or jQuery to achieve the same result, or a library such as Modernizr to provide a graceful fallback option. Writing media queries will tie your design to a specific display size; this increases the risk of duplication as you might want the same element to appear in multiple breakpoints, but have to write individual rules to cover each breakpoint. Breakpoints are points where your design will break if it is resized larger or smaller than a particular set of given dimensions. The traditional thinking is that we have to provide different style rules for different breakpoints within our style sheets. While this is valid, ironically it is something we should not follow! 
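To make that two-part structure concrete, here is the query described above written out in plain CSS. The selector and declaration inside it are placeholders of my own; only the @media line itself follows the example just discussed:
/* Applies only when content is rendered on screen, between 530px and 949px wide */
@media only screen and (min-width: 530px) and (max-width: 949px) {
   .sidebar {
      display: none; /* placeholder rule; any declarations can go here */
   }
}
Keep this shape in mind, because writing and maintaining many of these blocks by hand is exactly where the trouble starts.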
The reason for this is the potential proliferation of breakpoint rules that you might need to add, just to manage a site. With care and planning and a design-based breakpoints mindset, we can often get away with a fewer number of rules. There is only one breakpoint given, but it works in a range of sizes without the need for more breakpoints. The key to the process is to start small, then increase the size of your display. As soon as it breaks your design (this is where your first breakpoint is) add a query to fix it, and then, keep doing it until you get to your maximum size. Okay, so we've seen what media queries are; let's change tack and look at what you need to consider when working with clients, before getting down to writing the queries in code. Creating a simple example The best way to see how media queries work is in the form of a simple demo. In this instance, we have a simple set of requirements, in terms of what should be displayed at each size: We need to cater for four different sizes of content The small version must be shown to the authors as plain text e-mail links, with no decoration For medium-sized screens, we will add an icon before the link On large screens, we will add an e-mail address after the e-mail links On extra-large screens, we will combine the medium and large breakpoints together, so both icons and e-mail addresses are displayed In all instances, we will have a simple container in which there will be some dummy text and a list of editors. The media queries we create will control the appearance of the editor list, depending on the window size of the browser being used to display the content. Next, add the following code to a new document. We'll go through it section by section, starting with the variables created for our media queries: @small: ~"(max-width: 699px) and (min-width: 520px)"; @medium: ~"(max-width: 1000px) and (min-width: 700px)"; @large: ~"(min-width: 1001px)"; @xlarge: ~"(min-width: 1151px)"; Next comes some basic styles to define margins, font sizes, and styles: * { margin: 0; padding: 0; } body { font: 14px Georgia, serif; } h3 { margin: 0 0 8px 0; } p { margin: 0 25px } We need to set sizes for each area within our demo, so go ahead and add the following styles: #fluid-wrap { width: 70%; margin: 60px auto; padding: 20px; background: #eee; overflow: hidden; } #main-content { width: 65%; float: right; }  #sidebar { width: 35%; float: left; ul { list-style: none; } ul li a { color: #900; text-decoration: none; padding: 3px 0; display: block; } } Now that the basic styles are set, we can add our media queries—beginning with the query catering for small screens, where we simply display an e-mail logo: @media @small { #sidebar ul li a { padding-left: 21px; background: url(../img/email.png) left center no-repeat; } } The medium query comes next; here, we add the word Email before the e-mail address instead: @media @medium { #sidebar ul li a:before { content: "Email: "; font-style: italic; color: #666; } } In the large media query, we switch to showing the name first, followed by the e-mail (the latter extracted from the data-email attribute): @media @large { #sidebar ul li a:after { content: " (" attr(data-email) ")"; font-size: 11px; font-style: italic; color: #666; } } We finish with the extra-large query, where we use the e-mail address format shown in the large media query, but add an e-mail logo to it: @media @xlarge { #sidebar ul li a { padding-left: 21px; background: url(../img/email.png) left center no-repeat; } } Save the file as 
simple.less. Now that our files are prepared, let's preview the results in a browser. For this, I recommend that you use Responsive Design View within Firefox (activated by pressing Ctrl + Shift + M). Once activated, resize the view to 416 x 735; here we can see that only the name is displayed as an e-mail link: Increasing the size to 544 x 735 adds an e-mail logo, while still keeping the same name/e-mail format as before: If we increase it further to 716 x 735, the e-mail logo changes to the word Email, as seen in the following screenshot: Let's increase the size even further to 735 x 1029; the format changes again, to a name/e-mail link, followed by an e-mail address in parentheses: In our final change, increase the size to 735 x 1182. Here, we can see the previous style being used, but with the addition of an e-mail logo: These screenshots illustrate perfectly how you can resize your screen and still maintain a suitable layout for each device you decide to support; let's take a moment to consider how the code works. The normal accepted practice for developers is to work on the basis of "mobile first", or create the smallest view so it is perfect, then increase the size of the screen and adjust the content until the maximum size is reached. This works perfectly well for new sites, but the principle might have to be reversed if a mobile view is being retrofitted to an existing site. In our instance, we've produced the content for a full-size screen first. From a Less perspective, there is nothing here that isn't new—we've used nesting for the #sidebar div, but otherwise the rest of this part of the code is standard CSS. The magic happens in two parts—immediately at the top of the file, we've set a number of Less variables, which encapsulate the media definition strings we use in the queries. Here, we've created four definitions, ranging from @small (for devices between 520px to 699px), right through to @xlarge for widths of 1151px or more. We then take each of the variables and use them within each query as appropriate, for example, the @small query is set as shown in the following code: @media @small { #sidebar ul li a { padding-left: 21px; background: url(../img/email.png) left center no-repeat; } } In the preceding code, we have standard CSS style rules to display an e-mail logo before the name/e-mail link. Each of the other queries follows exactly the same principle; they will each compile as valid CSS rules when running through Less. Summary Media queries have rapidly become a de facto part of responsive web design. We started our journey through media queries with a brief introduction, with a review of some of the limitations that we must work around and considerations that need to be considered when working with clients. We then covered how to create a simple media query. Resources for Article: Further resources on this subject: Creating Blog Content in WordPress [Article] Customizing WordPress Settings for SEO [Article] Introduction to a WordPress application's frontend [Article]


OpenTracing and OpenCensus merge into OpenTelemetry project; Google introduces OpenCensus Web

Sugandha Lahoti
13 Aug 2019
4 min read
Google has introduced an extension of OpenCensus called the OpenCensus Web which is a library for collecting application performance and behavior monitoring data of web pages. This library focuses on the frontend web application code that executes in the browser allowing it to collect user-side performance data. It is still in alpha stage with the API subject to change. This is great news for websites that are heavy by nature, such as media-driven pages like Instagram, Facebook, YouTube, and Amazon, and WebApps. OpenCensus Web interacts with three application components, the Frontend web server, the Browser JS, and the OpenCensus Agent. The agent receives traces from the frontend web server proxy endpoint or directly from the browser JS, and exports them to a trace backend. Features of OpenCensus Web OpenCensus Web traces spans for initial load including server-side HTML rendering The OpenCensus Web spans also includes detailed annotations for DOM load events as well as network events It automatically traces all the click events as long as the click is done in a DOM element and it is not disabled OC Web traces route transitions between the different sections of your page by monkey-patching the History API It allows users to create custom spans for their web application for tasks or code involved in user interaction It performs automatic spans for HTTP requests and browser performance data OC web relates user interactions back to the initial page load tracing. Along with this release, the OpenCensus family of projects is merging with OpenTracing into OpenTelemetry. This means all of the OpenCensus community will be moving over to OpenTelemetry, Google and Omnition included. OpenCensus Web’s functionality will be migrated into OpenTelemetry JS once this project is ready. Omnition founder wrote on Hacker News, “Although Google will be heavily involved in both the client libraries and agent development, Omnition, Microsoft, and others will also be major contributors.” Another comment on Hacker News, explains the merger more in detail. “OpenCensus is a Google project to standardize metrics and distributed tracing. It's an API spec and libraries for various languages with varying backend support. OpenTracing is a CNCF project as an API for distributed tracing with a separate project called OpenMetrics for the metrics API. Neither include libraries and rely on the community to provide them.  The industry decided for once that we don't need all this competing work and is consolidating everything into OpenTelemetry that combines an API for tracing and metrics along with libraries. Logs (the 3rd part of observability) are in the planning phase.  OpenCensus Web is bringing the tracing/metrics part to your frontend JS so you can measure how your webapp works in addition to your backend apps and services.” By September 2019, OpenTelemetry plans to reach parity with existing projects for C#, Golang, Java, NodeJS, and Python. When each language reaches parity, the corresponding OpenTracing and OpenCensus projects will be sunset (old projects will be frozen, but the new project will continue to support existing instrumentation for two years, via a backwards compatibility bridge). Read more on the OpenTelemetry roadmap. Public reaction for OpenCensus Web has been positive. People have expressed their opinions on a Hacker News thread. 
“This is great, as the title says, this means that web applications can now have tracing across the whole stack, all within the same platform.” “I am also glad to know that the merge between OpenTracing and OpenCensus is still going well. I started adding telemetry to the projects I maintain in my current job and so far it has been very helpful to detect not only bottlenecks in the operations but also sudden spikes in the network traffic since we depend on so many 3rd-party web API that we have no control over. Thank you OpenCensus team for providing me with the tools to learn more.” For more information about OpenCensus Web, visit Google’s blog. CNCF Sandbox, the home for evolving cloud-native projects, accepts Google’s OpenMetrics Project Google open sources ClusterFuzz, a scalable fuzzing tool Google open sources its robots.txt parser to make Robots Exclusion Protocol an official internet standard

Oracle Web Services Manager: Authentication and Authorization

Packt
23 Oct 2009
6 min read
Here, we will see:
Steps involved in the authentication and authorization process
Learning file authentication and authorization
Implementing active directory authentication and authorization
Details of the policy template
Steps Involved in the Authentication and Authorization Process
Oracle Web Services Manager can authenticate a web services request by validating the credentials against a data store. The credentials (for example, username and password, SAML token, certificate, and so on) that are attached to the web service request are validated against a data store such as the file system, a database, Active Directory, or any LDAP-compliant directory. Once authentication is successful, the next step is to perform authorization by validating the username against a set of pre-defined groups which have access to the web service. The following figure shows the process where the user accesses an application which acts as a client for the web service. The client application then attaches the username and password to make the web service request. The username and password are then validated against the file system or LDAP directory by Oracle WSM, either using the gateway or the agent. The authentication and authorization against different directory stores can be configured using Oracle WSM policy steps. Oracle Web Services Manager has predefined policy steps for:
File Authenticate and Authorize
Active Directory Authenticate and Authorize
LDAP Authenticate and Authorize
In the previous figure, the Oracle WSM Gateway is used to protect the web services and externalize the security. In order to authenticate and authorize requests to web services, the web services can be registered within the gateway, and the request pipeline of the gateway will validate the credentials and authorize the access before it forwards the request to the actual web service provider. The gateway steps for authentication and authorization can be summarized as:
Log incoming request (optional)
Extract credentials (get the credentials from the SOAP message or HTTP header)
Authenticate (file authenticate, active directory authenticate, and so on)
Authorize (file authorize, active directory authorize, and so on)
Forward the request to the web service provider
The response from the web service also flows through a similar response pipeline, where you can implement logging, encryption, or signing of the response. While it is not required to implement any steps in the response pipeline, there should be a response pipeline even if it does nothing.
Oracle WSM: File Authenticate and Authorize
Oracle Web Services Manager can authenticate web services requests against a file that holds the list of usernames and passwords. In this example, the username and password information is part of the SOAP message; however, one can also send the username and password as an HTTP header, or as any XML data that is part of the web services message. While file-based authentication can easily be compromised, it is often used as a jump start or testing process to validate the authentication and authorization setup. Authentication and authorization of web service requests against a file requires three main steps, which are described below. Before walking through them, it helps to see what the credentials attached to a request actually look like on the wire.
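A typical request carrying these credentials uses the WS-Security UsernameToken header. The following is a hand-written sketch for illustration only (namespaces abbreviated, body omitted); the exact envelope your client produces will differ:
<soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">
   <soap:Header>
      <wsse:Security xmlns:wsse="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-secext-1.0.xsd">
         <wsse:UsernameToken>
            <wsse:Username>bob</wsse:Username>
            <wsse:Password>password</wsse:Password>
         </wsse:UsernameToken>
      </wsse:Security>
   </soap:Header>
   <soap:Body>
      <!-- the actual web service request payload goes here -->
   </soap:Body>
</soap:Envelope>
The Extract Credentials step simply pulls the username and password out of this header so that the subsequent authenticate and authorize steps can work with them.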
There is a default log step that will log all the request and response messages, and you can also include that log step at any point to log messages. The three steps are:
Extract Credentials
File Authenticate
File Authorize
The first step to authenticate a web service request against a password file (file authenticate) is to extract the username and password credentials from the SOAP message. The client application attaches the username and password to the SOAP message, as per the UsernameToken profile. In the policy to authenticate the web service against the file, add the step in the request process to extract credentials. Since this is a web service request, as opposed to an HTTP POST, configure the Credentials location to WS-BASIC (refer to the following screenshot).
Note: WS-BASIC means that it is WS-Security compliant. WS-Security is the OASIS specification that specifies how security tokens are inserted as a part of the SOAP message. In other words, WS-BASIC means that the username and password can be found in the SOAP message, as per the UsernameToken profile of the WS-Security specification.
Once the credentials are extracted, the next step is to validate them against the file. The default implementation of the Oracle WSM File Authenticate step requires the username and password to be in a comma-separated format, and the password should be a hash value using an MD5 or SHA1 algorithm. In order to authenticate the credentials against the data store, the next step is to configure the File Authenticate step in Oracle WSM. In this step, the options are straightforward. We have to configure the location of the password file and the hash algorithm format as either MD5 or SHA1 (see the next screenshot). A sample file with a username and password is:
bob:{MD5}jK2x5HPF1b3NIjcmjdlDNA==
You can use the wsmadmin tool provided as part of Oracle WSM (standalone or SOA suite). Type:
wsmadmin md5encode bob password c:\.htpasswd
Now that the authentication steps are configured, the next step is to configure the authorization policy step to ensure that only valid users can access the web service. The file authorization method is no different from the file authenticate method, that is, even the user-to-role mappings are kept in a file. The following figure shows the File Authorize policy step. In this step, we have to define the location of the XML file that contains the user-to-role mappings, and also the list of roles that should be allowed to access the service. The roles XML file should look like:
<?xml version="1.0" encoding="utf-8"?>
<UserRoles>
<user username="joe" roles="guest"/>
<user username="Bob" roles="Admin,guest"/>
</UserRoles>
In the previous XML file, the list of roles each user belongs to is defined as the value of the roles attribute and is comma separated. Now that we have completed the steps to extract credentials, authenticate the request, and authorize the request, the next step is to save the policy steps and commit the policy changes. Once the policy is committed, any request to that web service will require a username and password, and the user should have the necessary privileges to access the service.
Oracle WSM: Active Directory Authenticate and Authorize
In the previous section, we discussed authenticating and authorizing web service requests against a file. Though it's an easy start, security based on a file system can easily be compromised and will be tough to maintain.
Authentication and authorization of web services are better handled when integrated with a native LDAP directory, such as Active Directory, so that the AD administrator can manage users and group membership. In this section, we will discuss how to authenticate and authorize web service requests against an Active Directory. Active-Directory-based authentication and authorization of web service requests involves the same steps as file-based authentication and authorization, and they are:
Extract Credentials
Active Directory Authenticate
Active Directory Authorize
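Before configuring those Active Directory steps, it can save time to confirm, outside of Oracle WSM, that the directory is reachable and that the user you plan to test with actually resolves. The following ldapsearch command is a generic sanity check, not an Oracle WSM tool; the host, bind DN, password, and search base are placeholders you must replace with your own values:
$ ldapsearch -x \
   -H ldap://ad.example.com:389 \
   -D "cn=svc-wsm,cn=Users,dc=example,dc=com" \
   -w 'bindPassword' \
   -b "cn=Users,dc=example,dc=com" \
   "(sAMAccountName=bob)" dn memberOf
If this returns the user's DN and group memberships (memberOf), the same connection details can then be used when filling in the Active Directory Authenticate and Authorize policy steps.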


CRUD Operations in REST

Packt
16 Sep 2015
11 min read
In this article by Ludovic Dewailly, the author of Building a RESTful Web Service with Spring, we will learn how requests to retrieve data from a RESTful endpoint, created to access the rooms in a sample property management system, are typically mapped to the HTTP GET method in RESTful web services. We will expand on this by implementing some of the endpoints to support all the CRUD (Create, Read, Update, Delete) operations. In this article, we will cover the following topics:
Mapping the CRUD operations to the HTTP methods
Creating resources
Updating resources
Deleting resources
Testing the RESTful operations
Emulating the PUT and DELETE methods
(For more resources related to this topic, see here.)
Mapping the CRUD operations to HTTP methods
The HTTP 1.1 specification defines the following methods:
OPTIONS: This method represents a request for information about the communication options available for the requested URI. This is, typically, not directly leveraged with REST. However, this method can be used as a part of the underlying communication. For example, this method may be used when consuming web services from a web page (as a part of the cross-origin resource sharing mechanism).
GET: This method retrieves the information identified by the request URI. In the context of RESTful web services, this method is used to retrieve resources. This is the method used for read operations (the R in CRUD).
HEAD: The HEAD requests are semantically identical to the GET requests except the body of the response is not transmitted. This method is useful for obtaining meta-information about resources. Similar to the OPTIONS method, this method is not typically used directly in REST web services.
POST: This method is used to instruct the server to accept the entity enclosed in the request as a new resource. The create operations are typically mapped to this HTTP method.
PUT: This method requests the server to store the enclosed entity under the request URI. To support the updating of REST resources, this method can be leveraged. As per the HTTP specification, the server can create the resource if the entity does not exist. It is up to the web service designer to decide whether this behavior should be implemented or resource creation should only be handled by POST requests.
DELETE: The last operation not yet mapped is for the deletion of resources. The HTTP specification defines a DELETE method that is semantically aligned with the deletion of RESTful resources.
TRACE: This method is used to perform actions on web servers. These actions are often aimed at aiding the development and testing of HTTP applications. The TRACE requests aren't usually mapped to any particular RESTful operations.
CONNECT: This HTTP method is defined to support HTTP tunneling through a proxy server. Since it deals with transport layer concerns, this method has no natural semantic mapping to the RESTful operations.
The RESTful architecture does not mandate the use of HTTP as a communication protocol. Furthermore, even if HTTP is selected as the underlying transport, no provisions are made regarding the mapping of the RESTful operations to the HTTP methods. Developers could feasibly support all operations through POST requests. This being said, the following CRUD to HTTP method mapping is commonly used in REST web services:
Create: POST
Read: GET
Update: PUT
Delete: DELETE
Our sample web service will use these HTTP methods to support CRUD operations; the curl sketch below previews how the mapping looks in practice for the rooms resource we are about to build.
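Assuming the service built in this article is running locally on port 8080, the mapping translates into requests like the following; the /rooms paths and JSON payloads match the examples that follow, and the commands themselves are just a convenience sketch:
# Create a room (POST)
curl -X POST http://localhost:8080/rooms \
   -H "Content-Type: application/json" \
   -d '{"name": "Cool Room", "description": "A room that is very cool indeed", "room_category_id": 1}'

# Read a room (GET)
curl http://localhost:8080/rooms/2

# Update a room (PUT)
curl -X PUT http://localhost:8080/rooms/2 \
   -H "Content-Type: application/json" \
   -d '{"id": 2, "name": "Cool Room", "description": "A room that is really very cool indeed", "room_category_id": 1}'

# Delete a room (DELETE)
curl -X DELETE http://localhost:8080/rooms/2
We will use Postman for the walkthrough later in the article; curl is shown here only because it makes the verb-to-operation mapping explicit.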
The rest of this article will illustrate how to build such operations.
Creating resources
The inventory component of our sample property management system deals with rooms. We have already built an endpoint to access the rooms; let's take a look at how to define an endpoint to create new resources:
@RestController
@RequestMapping("/rooms")
public class RoomsResource {
  @RequestMapping(method = RequestMethod.POST)
  public ApiResponse addRoom(@RequestBody RoomDTO room) {
    Room newRoom = createRoom(room);
    return new ApiResponse(Status.OK, new RoomDTO(newRoom));
  }
}
We've added a new method to our RoomsResource class to handle the creation of new rooms. @RequestMapping is used to map requests to the Java method. Here we map the POST requests to addRoom(). Not specifying a value (that is, a path) in @RequestMapping is equivalent to using "/". We pass the new room as @RequestBody. This annotation instructs Spring to map the body of the incoming web request to the method parameter. Jackson is used here to convert the JSON request body to a Java object. With this new method, POSTing a request to http://localhost:8080/rooms with the following JSON body will result in the creation of a new room:
{
  name: "Cool Room",
  description: "A room that is very cool indeed",
  room_category_id: 1
}
Our new method will return the newly created room:
{
  "status":"OK",
  "data":{
    "id":2,
    "name":"Cool Room",
    "room_category_id":1,
    "description":"A room that is very cool indeed"
  }
}
We could decide to return only the ID of the new resource in response to the resource creation. However, since we may sanitize or otherwise manipulate the data that was sent over, it is a good practice to return the full resource.
Quickly testing endpoints
For the purpose of quickly testing our newly created endpoint, let's look at testing the new rooms created using Postman. Postman (https://www.getpostman.com) is a Google Chrome extension that provides tools to build and test web APIs. The following screenshot illustrates how Postman can be used to test this endpoint:
In Postman, we specify the URL to send the POST request to, http://localhost:8080/rooms, with the "application/json" content type header and the body of the request. Sending this request will result in a new room being created and returned, as shown in the following:
We have successfully added a room to our inventory service using Postman. It is equally easy to create incomplete requests to ensure our endpoint performs any necessary sanity checks before persisting data into the database.
JSON versus form data
Posting forms is the traditional way of creating new entities on the web and could easily be used to create new RESTful resources. We can change our method to the following:
@RequestMapping(method = RequestMethod.POST, consumes = MediaType.APPLICATION_FORM_URLENCODED_VALUE)
public ApiResponse addRoom(String name, String description, long roomCategoryId) {
  Room room = createRoom(name, description, roomCategoryId);
  return new ApiResponse(Status.OK, new RoomDTO(room));
}
The main difference from the previous method is that we tell Spring to map form requests (that is, requests with the application/x-www-form-urlencoded content type) instead of JSON requests. In addition, rather than expecting an object as a parameter, we receive each field individually. By default, Spring will use the Java method attribute names to map incoming form inputs. Developers can change this behavior by annotating an attribute with @RequestParam("…") to specify the input name.
In situations where the main web service consumer is a web application, using form requests may be more applicable. In most cases, however, the former approach is more in line with RESTful principles and should be favored. Besides, when complex resources are handled, form requests will prove cumbersome to use. From a developer standpoint, it is easier to delegate object mapping to a third-party library such as Jackson. Now that we have created a new resource, let's see how we can update it.
Updating resources
Choosing URI formats is an important part of designing RESTful APIs. As seen previously, rooms are accessed using the /rooms/{roomId} path and created under /rooms. You may recall that, as per the HTTP specification, PUT requests can result in the creation of entities if they do not exist. The decision to create new resources on update requests is up to the service designer. It does, however, affect the choice of path to be used for such requests. Semantically, PUT requests update entities stored under the supplied request URI. This means the update requests should use the same URI as the GET requests: /rooms/{roomId}. However, this approach hinders the ability to support resource creation on update, since no room identifier will be available. The alternative path we can use is /rooms, with the room identifier passed in the body of the request. With this approach, the PUT requests can be treated as POST requests when the resource does not contain an identifier. Given that the first approach is semantically more accurate, we will choose not to support resource creation on update, and we will use the following path for the PUT requests: /rooms/{roomId}
Update endpoint
The following method provides the necessary endpoint to modify the rooms:
@RequestMapping(value = "/{roomId}", method = RequestMethod.PUT)
public ApiResponse updateRoom(@PathVariable long roomId, @RequestBody RoomDTO updatedRoom) {
  try {
    Room room = updateRoom(updatedRoom);
    return new ApiResponse(Status.OK, new RoomDTO(room));
  } catch (RecordNotFoundException e) {
    return new ApiResponse(Status.ERROR, null, new ApiError(999, "No room with ID " + roomId));
  }
}
As discussed in the beginning of this article, we map update requests to the HTTP PUT verb. Annotating this method with @RequestMapping(value = "/{roomId}", method = RequestMethod.PUT) instructs Spring to direct the PUT requests here. The room identifier is part of the path and is mapped to the first method parameter. In a fashion similar to the resource creation requests, we map the body to our second parameter with the use of @RequestBody.
Testing update requests
With Postman, we can quickly create a test case to update the room we created. To do so, we send a PUT request with the following body:
{
  id: 2,
  name: "Cool Room",
  description: "A room that is really very cool indeed",
  room_category_id: 1
}
The resulting response will be the updated room, as shown here:
{
  "status": "OK",
  "data": {
    "id": 2,
    "name": "Cool Room",
    "room_category_id": 1,
    "description": "A room that is really very cool indeed."
  }
}
Should we attempt to update a nonexistent room, the server will generate the following response:
{
  "status": "ERROR",
  "error": {
    "error_code": 999,
    "description": "No room with ID 3"
  }
}
Since we do not support resource creation on update, the server returns an error indicating that the resource cannot be found.
Deleting resources
It will come as no surprise that we will use the DELETE verb to delete REST resources.
Similarly, the reader will have already figured out that the path for delete requests will be /rooms/{roomId}. The Java method that deals with room deletion is as follows:
@RequestMapping(value = "/{roomId}", method = RequestMethod.DELETE)
public ApiResponse deleteRoom(@PathVariable long roomId) {
  try {
    Room room = inventoryService.getRoom(roomId);
    inventoryService.deleteRoom(room.getId());
    return new ApiResponse(Status.OK, null);
  } catch (RecordNotFoundException e) {
    return new ApiResponse(Status.ERROR, null, new ApiError(999, "No room with ID " + roomId));
  }
}
By declaring the request mapping method to be RequestMethod.DELETE, Spring will make this method handle the DELETE requests. Since the resource is deleted, returning it in the response would not make a lot of sense. Service designers may choose to return a boolean flag to indicate that the resource was successfully deleted. In our case, we leverage the status element of our response to carry this information back to the consumer. The response to deleting a room will be as follows:
{
  "status": "OK"
}
With this operation, we now have a full-fledged CRUD API for our Inventory Service. Before we conclude this article, let's discuss how REST developers can deal with situations where not all HTTP verbs can be utilized.
HTTP method override
In certain situations (for example, when the service or its consumers are behind an overzealous corporate firewall, or if the main consumer is a web page), only the GET and POST HTTP methods might be available. In such cases, it is possible to emulate the missing verbs by passing a custom header in the requests. For example, resource updates can be handled using POST requests by setting a custom header (for example, X-HTTP-Method-Override) to PUT to indicate that we are emulating a PUT request via a POST request. The following method will handle this scenario:
@RequestMapping(value = "/{roomId}", method = RequestMethod.POST, headers = {"X-HTTP-Method-Override=PUT"})
public ApiResponse updateRoomAsPost(@PathVariable("roomId") long id, @RequestBody RoomDTO updatedRoom) {
  return updateRoom(id, updatedRoom);
}
By setting the headers attribute on the mapping annotation, Spring request routing will intercept the POST requests with our custom header and invoke this method. Normal POST requests will still map to the Java method we put together to create new rooms.
Summary
In this article, we've completed the implementation of our sample RESTful web service by adding all the CRUD operations necessary to manage the room resources. We've discussed how to organize URIs to best embody the REST principles and looked at how to quickly test endpoints using Postman. Now that we have a fully working component of our system, we can take some time to discuss performance.
Resources for Article:
Further resources on this subject:
Introduction to Spring Web Application in No Time [article]
Aggregators, File exchange Over FTP/FTPS, Social Integration, and Enterprise Messaging [article]
Time Travelling with Spring [article]