
How-To Tutorials


Anatomy of an automated machine learning algorithm (AutoML)

Kunal Chaudhari
10 May 2018
7 min read
Machine learning has always been dependent on the selection of the right features within a given model, and even on the selection of the right algorithm. But deep learning changed this: the selection process is now built into the models themselves, and researchers and engineers are now shifting their focus from feature engineering to network engineering. Out of this, AutoML, or meta-learning, has become an increasingly important part of deep learning. AutoML is an emerging research topic which aims at auto-selecting the most efficient neural network for a given learning task. In other words, AutoML represents a set of methodologies for learning how to learn efficiently. Consider, for instance, the tasks of machine translation, image recognition, or game playing. Typically, the models are manually designed by a team of engineers, data scientists, and domain experts. If you consider that a typical 10-layer network can have ~10^10 candidate networks, you understand how expensive, error-prone, and ultimately sub-optimal the process can be.

This article is an excerpt from a book written by Antonio Gulli and Amita Kapoor titled TensorFlow 1.x Deep Learning Cookbook. This book is an easy-to-follow guide that lets you explore reinforcement learning, GANs, autoencoders, multilayer perceptrons and more.

AutoML with recurrent networks and with reinforcement learning

The key idea for tackling this problem is to have a controller network which proposes a child model architecture with probability p, given a particular network as input. The child is trained and evaluated for the particular task to be solved (say, for instance, that the child achieves accuracy R). This evaluation R is passed back to the controller which, in turn, uses R to improve the next candidate architecture. Given this framework, it is possible to model the feedback from the candidate child to the controller as the task of computing the gradient of p and then scaling this gradient by R. The controller can be implemented as a Recurrent Neural Network (see the following figure). In doing so, the controller will, iteration after iteration, tend to privilege candidate areas of the architecture space that achieve better R, and will tend to assign a lower probability to candidate areas that do not score so well.

For instance, a controller recurrent neural network can sample a convolutional network. The controller can predict many hyperparameters, such as filter height, filter width, stride height, stride width, and the number of filters for one layer, and can then repeat. Every prediction can be carried out by a softmax classifier and then fed into the next RNN time step as input. This is well expressed by the following images taken from Neural Architecture Search with Reinforcement Learning, Barret Zoph, Quoc V. Le:

Predicting hyperparameters is not enough, as it would be optimal to define a set of actions to create new layers in the network. This is particularly difficult because the reward function that describes the new layers is most likely not differentiable, which makes it impossible to optimize using standard techniques such as SGD. The solution comes from reinforcement learning: it consists of adopting a policy gradient network. Besides that, parallelism can be used to optimize the parameters of the controller RNN. Quoc Le and Barret Zoph proposed adopting a parameter-server scheme, where a parameter server of S shards stores the shared parameters for K controller replicas.
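To make the reward-scaled policy gradient concrete, here is a toy REINFORCE sketch in TypeScript. It is a minimal illustration rather than the paper's setup: the controller is collapsed from an RNN into a single categorical distribution over one hyperparameter (filter size), and childReward is an invented stand-in for training a child network and measuring its accuracy R.

```ts
// Toy REINFORCE controller over one architectural choice (filter size).
const filterSizes = [1, 3, 5, 7];
const logits = new Array(filterSizes.length).fill(0); // controller parameters

function softmax(z: number[]): number[] {
  const m = Math.max(...z);
  const e = z.map((v) => Math.exp(v - m));
  const s = e.reduce((a, b) => a + b, 0);
  return e.map((v) => v / s);
}

function sample(p: number[]): number {
  let r = Math.random();
  for (let i = 0; i < p.length; i++) { r -= p[i]; if (r <= 0) return i; }
  return p.length - 1;
}

// Invented stand-in for "train the child network and measure accuracy R";
// here we simply pretend filter size 5 gives the best child.
const childReward = (fs: number) =>
  1.0 - 0.1 * Math.abs(fs - 5) + 0.05 * (Math.random() - 0.5);

let baseline = 0;
const lr = 0.5;
for (let step = 0; step < 200; step++) {
  const p = softmax(logits);
  const a = sample(p);                    // controller proposes a child
  const R = childReward(filterSizes[a]);  // child trained/evaluated -> R
  baseline = 0.9 * baseline + 0.1 * R;    // moving-average baseline
  for (let i = 0; i < logits.length; i++) {
    const gradLogP = (i === a ? 1 : 0) - p[i];   // d log p(a) / d logit_i
    logits[i] += lr * (R - baseline) * gradLogP; // scale gradient by reward
  }
}
console.log("preferred filter size:",
  filterSizes[logits.indexOf(Math.max(...logits))]);
```

Subtracting a moving-average baseline from R is the usual variance-reduction trick for this kind of gradient estimator.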
Each controller replica samples m different child architectures that are trained in parallel, as illustrated in the following images, taken from Neural Architecture Search with Reinforcement Learning, Barret Zoph, Quoc V. Le:

Quoc and Barret applied AutoML techniques for Neural Architecture Search to the Penn Treebank dataset, a well-known benchmark for language modeling. Their results improve on the manually designed networks currently considered the state of the art. In particular, they achieve a test set perplexity of 62.4 on the Penn Treebank, which is 3.6 perplexity better than the previous state-of-the-art model. Similarly, on the CIFAR-10 dataset, starting from scratch, the method can design a novel network architecture that rivals the best human-invented architecture in terms of test set accuracy. The proposed CIFAR-10 model achieves a test error rate of 3.65, which is 0.09 percent better and 1.05x faster than the previous state-of-the-art model that used a similar architectural scheme.

Meta-learning blocks

In Learning Transferable Architectures for Scalable Image Recognition (Barret Zoph, Vijay Vasudevan, Jonathon Shlens, Quoc V. Le, 2017), the authors propose to learn an architectural building block on a small dataset that can be transferred to a large dataset. They propose to search for the best convolutional layer (or cell) on the CIFAR-10 dataset and then apply this learned cell to the ImageNet dataset by stacking together more copies of this cell, each with its own parameters. Precisely, all convolutional networks are made of convolutional layers (or cells) with identical structures but different weights. Searching for the best convolutional architecture is therefore reduced to searching for the best cell structure, which is faster and more likely to generalize to other problems. Although the cell is not learned directly on ImageNet, an architecture constructed from the best learned cell achieves, among the published work, state-of-the-art accuracy of 82.7 percent top-1 and 96.2 percent top-5 on ImageNet. The model is 1.2 percent better in top-1 accuracy than the best human-invented architectures, while having 9 billion fewer FLOPS, a 28 percent reduction from the previous state-of-the-art model. It is also important to notice that the model learned with RNN+RL (Recurrent Neural Networks + Reinforcement Learning) beats the baseline represented by Random Search (RS), as shown in the figure taken from the paper: in the mean performance of the top-5 and top-25 models identified by RL versus RS, RL always wins.

AutoML and learning new tasks

Meta-learning systems can be trained to achieve a large number of tasks and are then tested for their ability to learn new tasks. A famous example of this kind of meta-learning is transfer learning, where networks can successfully learn new image-based tasks from relatively small datasets. However, there is no analogous pre-training scheme for non-vision domains such as speech, language, and text. Model-Agnostic Meta-Learning for Fast Adaptation of Deep Networks (Chelsea Finn, Pieter Abbeel, Sergey Levine, 2017) proposes a model-agnostic approach named MAML, compatible with any model trained with gradient descent and applicable to a variety of different learning problems, including classification, regression, and reinforcement learning. The goal of meta-learning is to train a model on a variety of learning tasks, such that it can solve new learning tasks using only a small number of training samples.
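MAML's update rule is spelled out in the next paragraph; as a preview, here is a toy, first-order sketch of the two-level gradient idea in TypeScript. Everything concrete here is invented for illustration: the task family (1-D linear regression y = w*x), the linear model f_θ(x) = θ*x, and the step sizes. Full MAML also backpropagates through the inner update; this sketch drops that second-order term.

```ts
// First-order MAML toy: adapt per task with one inner gradient step, then
// update the shared initialization from the post-adaptation loss gradients.
const alpha = 0.01, beta = 0.001;
let theta = 1.5; // shared initialization being meta-learned

const randn = () =>
  Math.sqrt(-2 * Math.log(1 - Math.random())) * Math.cos(2 * Math.PI * Math.random());

function sampleTask(w: number, n = 5): [number[], number[]] {
  const xs = Array.from({ length: n }, randn);
  return [xs, xs.map((x) => w * x)];
}

// d/dtheta mean((theta*x - y)^2) = 2 * mean((theta*x - y) * x)
function lossGrad(t: number, xs: number[], ys: number[]): number {
  let g = 0;
  for (let i = 0; i < xs.length; i++) g += 2 * (t * xs[i] - ys[i]) * xs[i];
  return g / xs.length;
}

for (let step = 0; step < 1000; step++) {
  let metaGrad = 0;
  for (let k = 0; k < 4; k++) {
    const w = 4 * Math.random() - 2;                           // sample task T_i
    const [xTr, yTr] = sampleTask(w);
    const thetaI = theta - alpha * lossGrad(theta, xTr, yTr);  // inner update
    const [xTe, yTe] = sampleTask(w);
    metaGrad += lossGrad(thetaI, xTe, yTe);  // gradient at adapted parameters
  }
  theta -= beta * metaGrad;                  // outer (meta) update
}
console.log("meta-learned initialization:", theta.toFixed(3));
```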
The meta-learner aims at finding an initialization that adapts to various problems quickly (in a small number of steps) and efficiently (using only a few examples). Consider a model represented by a parametrized function f_θ with parameters θ. When adapting to a new task T_i, the model's parameters θ become θ′_i. In MAML, the updated parameter vector θ′_i is computed using one or more gradient descent updates on task T_i. For example, when using one gradient update:

θ′_i = θ − α ∇_θ L_{T_i}(f_θ)

where L_{T_i} is the loss function for task T_i and α is a meta-learning step-size parameter. The MAML algorithm is reported in this figure:

MAML was able to substantially outperform a number of existing approaches on popular few-shot image classification benchmarks. Few-shot image classification is a quite challenging problem aiming at learning new concepts from one or a few instances of those concepts. As an example, Human-level Concept Learning Through Probabilistic Program Induction (Brenden M. Lake, Ruslan Salakhutdinov, Joshua B. Tenenbaum, 2015) suggested that humans can learn to identify novel two-wheel vehicles from a single picture, such as the one contained in the box as follows:

If you enjoyed this excerpt, check out the book TensorFlow 1.x Deep Learning Cookbook to skill up and implement tricky neural networks using Google's TensorFlow 1.x.

Read also:
AmoebaNets: Google's new evolutionary AutoML
AutoML: Developments and where it is heading
What is Automated Machine Learning (AutoML)?


Analyzing CloudTrail Logs using Amazon Elasticsearch

Vijin Boricha
10 May 2018
5 min read
Log management and analysis for many organizations starts and ends with just three letters: E, L, and K, which stand for Elasticsearch, Logstash, and Kibana. In today's tutorial, we will learn how to analyze CloudTrail logs using this ELK stack.

This tutorial is an excerpt from the book AWS Administration - The Definitive Guide - Second Edition, written by Yohan Wadia. This book will help you enhance your application delivery skills with the latest AWS services while also securing and monitoring the environment workflow.

The three open source products are essentially used together to aggregate, parse, search, and visualize logs at an enterprise scale:

Logstash: Logstash is primarily used as a log collection tool. It is designed to collect, parse, and store logs originating from multiple sources, such as applications, infrastructure, operating systems, tools, services, and so on.

Elasticsearch: With all the logs collected in one place, you now need a query engine to filter and search through these logs for particular events. That's exactly where Elasticsearch comes into play. Elasticsearch is basically a search server based on the popular information retrieval software library, Lucene. It provides a distributed, full-text search engine along with a RESTful web interface for querying your logs.

Kibana: Kibana is an open source data visualization plugin, used in conjunction with Elasticsearch. It provides you with the ability to create and export your logs into various visual graphs, such as bar charts, scatter graphs, pie charts, and so on.

You can easily download and install each of these components in your AWS environment and get up and running with your very own ELK stack in a matter of hours! Alternatively, you can leverage AWS's own Elasticsearch service. Amazon Elasticsearch is a managed ELK service that enables you to quickly deploy, operate, and scale an ELK stack as per your requirements. Using Amazon Elasticsearch, you eliminate the need to install and manage the ELK stack's components on your own, which in the long run can be a painful experience.

For this particular use case, we will leverage a simple CloudFormation template that will essentially set up an Amazon Elasticsearch domain to filter and visualize the captured CloudTrail Log files, as depicted in the following diagram:

To get started, log in to the CloudFormation dashboard at https://console.aws.amazon.com/cloudformation. Next, select the option Create Stack to bring up the CloudFormation template selector page. Paste http://s3.amazonaws.com/concurrencylabs-cfn-templates/cloudtrail-es-cluster/cloudtrail-es-cluster.json into the Specify an Amazon S3 template URL field, and click on Next to continue. In the Specify Details page, provide a suitable Stack name and fill out the following required parameters:

AllowedIPForEsCluster: Provide the IP address that will have access to the nginx proxy and, in turn, to your Elasticsearch cluster. In my case, I've provided my laptop's IP. Note that you can change this IP at a later stage by visiting the security group of the nginx proxy once it has been created by the CloudFormation template.

CloudTrailName: The name of the CloudTrail that we set up at the beginning of this chapter.

KeyName: Select a key pair for obtaining SSH access to your nginx proxy instance.

LogGroupName: The name of the CloudWatch Log Group that will act as the input to our Elasticsearch cluster.
ProxyInstanceTypeParameter: The EC2 instance type for your proxy instance. Since this is a demonstration, I've opted for the t2.micro instance type. Alternatively, you can select a different instance type as well.

Once done, click on Next to continue. Review the settings of your stack and hit Create to complete the process. The stack takes a good few minutes to deploy, as a new Elasticsearch domain is created. You can monitor the progress of the deployment by viewing either the CloudFormation Output tab or the Elasticsearch dashboard. Note that, for this deployment, a default t2.micro.elasticsearch instance type is selected for deploying Elasticsearch. You should change this value to a larger instance type before deploying the stack for production use. You can view information on Elasticsearch Supported Instance Types at http://docs.aws.amazon.com/elasticsearch-service/latest/developerguide/aes-supported-instance-types.html.

With the stack deployed successfully, copy the Kibana URL from the CloudFormation Output tab:

"KibanaProxyEndpoint": "http://<NGINX_PROXY>/_plugin/kibana/"

The Kibana UI may take a few minutes to load. Once it is up and running, you will need to configure a few essential parameters before you can actually proceed. Select Settings and hit the Indices option. Here, fill in the following details:

Index contains time-based events: Enable this checkbox to index time-based events
Use event times to create index names: Enable this checkbox as well
Index pattern interval: Set the index pattern interval to Daily from the drop-down list
Index name or pattern: Type [cwl-]YYYY.MM.DD into this field
Time-field name: Select the @timestamp value from the drop-down list

Once completed, hit Create to complete the process. With this, you should now start seeing logs populate Kibana's dashboard. Feel free to have a look around and try out the various options and filters provided by Kibana.

Phew! That was definitely a lot to cover, but wait, there's more! AWS provides yet another extremely useful governance and configuration management service, AWS Config; learn more from the book AWS Administration - The Definitive Guide - Second Edition.

Read also:
The Cloud and the DevOps Revolution
Serverless computing wars: AWS Lambdas vs Azure Functions


Tensor Processing Unit (TPU) 3.0: Google’s answer to cloud-ready Artificial Intelligence

Amey Varangaonkar
10 May 2018
5 min read
It won't be wrong to say that the first day of the ongoing Google I/O 2018 conference was largely dominated by Artificial Intelligence. CEO Sundar Pichai didn't hide his excitement in giving us a sneak peek of a whole host of AI-driven features across various products, which Google plans to roll out to the masses in the coming days. One of the biggest announcements was the unveiling of the next-gen silicon chip, the Tensor Processing Unit 3.0. The TPU has been central to Google's AI market dominance strategy since its inception in 2016, and the latest iteration of this custom-made silicon chip promises to deliver faster and more powerful machine learning capabilities.

What's new in TPU 3.0?

The TPU is Google's premium AI hardware offering for its cloud platform, with the objective of making it easier to run machine learning systems in a fast, cheap, and easy manner. In his I/O 2018 keynote, Sundar Pichai declared that TPU 3.0 will be 8x more powerful than its predecessor.

https://www.youtube.com/watch?v=ogfYd705cRs&t=5750

A TPU 3.0 pod is expected to crunch numbers at approximately 100 petaflops, compared to the 11.5 petaflops delivered by TPU 2.0. Pichai did not comment on the precision of the processing in these benchmarks, something which can make a lot of difference in real-world applications. Not many other TPU 3.0 features were disclosed. The chip is still only for Google's use in powering its own applications. It also powers the Google Cloud Platform to handle customers' workloads.

High performance - but at a cost?

An important takeaway from Pichai's announcement is that TPU 3.0 is expected to be power-hungry, so much so that Google's data centers deploying the chips now require liquid cooling to take care of the heat dissipation problem. This is not necessarily a good thing, as the need for dedicated cooling systems will only increase as Google scales up its infrastructure. A few analysts and experts, including Patrick Moorhead, tech analyst and founder of Moor Insights & Strategy, have raised concerns about this on Twitter:

https://twitter.com/PatrickMoorhead/status/993905641390391296
https://twitter.com/ryanshrout/status/993906310197506048
https://twitter.com/LostInBrittany/status/993904650888724480

The TPU is keeping up with Google's growing AI needs

The evolution of the Tensor Processing Unit has been rather interesting. When the TPU was initially released back in 2016, it was just a simple math accelerator used for training models, supporting barely eight software instructions. However, Google needed more computing power to keep up with the neural networks powering its applications on the cloud. TPU 2.0 added support for single-precision floating-point calculations and 8 GB of HBM (High Bandwidth Memory) for faster, more improved performance. With 3.0, the TPU has stepped up another notch, delivering the power and performance needed to process data and run AI models effectively. The significant increase in processing capability from 11.5 petaflops to more than 100 petaflops is a clear step in this direction. Optimized for TensorFlow, the most popular machine learning framework out there, it is clear that TPU 3.0 will have an important role to play as Google looks to infuse AI into all its major offerings. Proof of this is in some of the smart features announced at the conference: the Smart Compose option in Gmail, the improved Google Assistant, Gboard, Google Duplex, and more.
TPU 3.0 was needed, with the competition getting serious

It comes as no surprise that almost all the major tech giants are investing in cloud-ready AI technology. These companies are specifically investing in hardware to make machine learning faster and more efficient, to make sense of data at scale, and to give intelligent predictions which are used to improve their operations. There are quite a few examples to demonstrate this. Facebook's infrastructure is being optimized for the Caffe2 and PyTorch frameworks, designed to process the massive amount of information it handles on a day-to-day basis. Intel has come up with its neural network processors in a bid to redefine AI. It is also common knowledge that cloud giants like Amazon want to build an efficient cloud infrastructure powered by Artificial Intelligence. Just a few days back, Microsoft previewed its Project Brainwave at the Build 2018 conference, claiming super-fast Artificial Intelligence capabilities that rivaled Google's very own TPU.

We can safely infer that Google needed TPU 3.0-like hardware to stay on the elite list of prime enablers of Artificial Intelligence in the cloud, empowering efficient data management and processing.

Check out our coverage of Google I/O 2018 for some exciting announcements on other Google products in store for developers and Android fans.

Read also:
AI chip wars: Is Brainwave Microsoft's Answer to Google's TPU?
How machine learning as a service is transforming cloud
The Deep Learning Framework Showdown: TensorFlow vs CNTK


How to secure a private cloud using IAM

Savia Lobo
10 May 2018
11 min read
In this article, we look at securing the private cloud using IAM. For IAM, OpenStack uses the Keystone project. Keystone provides the identity, token, catalog, and policy services, which are used specifically by OpenStack services. It is organized as a group of internal services exposed on one or many endpoints. For example, an authentication call validates the user and project credentials with the identity service.

This article is an excerpt from the book 'Cloud Security Automation'. In this book, you'll learn how to work with OpenStack security modules and how private cloud security functions can be automated for better time and cost-effectiveness.

Authentication

Authentication is an integral part of an OpenStack deployment, so we must be careful about the system design. Authentication is the process of confirming a user's identity, that is, verifying that a user actually is who they claim to be; for example, by providing a username and a password when logging into a system. Keystone supports authentication using a username and password, LDAP, and external authentication methods. After successful authentication, the identity service provides the user with an authorization token, which is then used for subsequent service requests.

Transport Layer Security (TLS) provides authentication between services and users using X.509 certificates. The default mode for TLS is server-side-only authentication, but we can also use certificates for client authentication.

However, there can also be the case where an attacker tries to access the console by guessing your username and password. If we have not enabled a policy to handle this, it can be disastrous. For this, we can use a failed-login policy, which allows only a maximum number of failed login attempts; after that, the account is blocked for a certain number of hours and the user is also notified. However, the identity service provided in Keystone does not itself offer a method to limit access to accounts after repeated unsuccessful login attempts. For this, we need to rely on an external authentication system that blocks an account after a configured number of failed login attempts. The account might then only be unlocked with further side-channel intervention, or on request, or after a certain duration.

Detection techniques are only fully useful when we have a prevention method available to limit the damage. In the detection process, we frequently review the access control logs to identify unauthorized attempts to access accounts. During the review of access control logs, if we find any hints of a brute-force attack (where the user tries to guess the username and password to log in to the system), we can enforce a strong username and password policy or block the source IP of the attack through firewall rules. When we define firewall rules on the Keystone node, they restrict connections, which helps to reduce the attack surface. Apart from this, reviewing access control logs also helps to examine account activity for unusual logins and suspicious actions, so that we can take corrective actions such as disabling the account. To increase the level of security, we can also utilize MFA for network access to privileged user accounts. Keystone supports external authentication services through the Apache web server, which can provide this functionality.
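As an illustration of what such an authentication call looks like on the wire, here is a sketch of Keystone's v3 password authentication API in TypeScript. The endpoint URL, user, and project names are placeholders, not values from the article.

```ts
// Minimal sketch of Keystone v3 password authentication over REST.
async function getToken(): Promise<string | null> {
  const body = {
    auth: {
      identity: {
        methods: ["password"],
        password: {
          user: { name: "demo", domain: { id: "default" }, password: "secret" },
        },
      },
      scope: { project: { name: "demo", domain: { id: "default" } } },
    },
  };

  const res = await fetch("https://keystone.example.com:5000/v3/auth/tokens", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(body),
  });
  // On success, Keystone returns the token in the X-Subject-Token header;
  // the response body carries the service catalog, roles, and expiry.
  return res.headers.get("X-Subject-Token");
}

getToken().then((token) => console.log("token issued:", token !== null));
```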
Servers can also enforce client-side authentication using certificates. This helps to guard against brute-force and phishing attacks that may compromise administrator passwords.

Authentication methods - internal and external

Keystone stores user credentials in a database or may use an LDAP-compliant directory server. The Keystone identity database can be kept separate from the databases used by other OpenStack services, to reduce the risk of a compromise of the stored credentials. When we use the username and password to authenticate, identity does not apply policies for password strength, expiration, or failed authentication attempts. For this, we need to implement external authentication services.

To integrate an external authentication system, or to use an existing directory service for user account management, we can use LDAP. LDAP simplifies the integration process. In OpenStack, the authentication and authorization policy may be delegated to another service. For example, consider an organization that is going to deploy a private cloud and already has a database of employees and users in an LDAP system. Using this LDAP as an authentication authority, requests to the Identity service (Keystone) are transferred to the LDAP system, which allows or denies requests based on its policies. After successful authentication, the identity service generates a token for access to the authorized services. Now, if the LDAP has already defined attributes for the user, such as the admin, finance, and HR departments, these must be mapped into roles and groups within identity for use by the various OpenStack services. We need to define this mapping in the Keystone configuration file, stored at /etc/keystone/keystone.conf.

Keystone must not be allowed to write to the LDAP used for authentication outside of the OpenStack scope, as a sufficiently privileged Keystone user could otherwise make changes to the LDAP directory, which is not desirable from a security point of view. This could also lead to unauthorized access to other information and resources. So, if we have other authentication providers, such as LDAP or Active Directory, user provisioning always happens in those authentication provider systems.

For external authentication, we have the following methods:

MFA: The MFA service requires the user to provide additional layers of information for authentication, such as a one-time password token or an X.509 certificate (called an MFA token). Once MFA is implemented, the user will have to enter the MFA token after entering the user ID and password for a successful login.

Password policy enforcement: Once the external authentication service is in place, we can define the strength of user passwords to conform to minimum standards for length, diversity of characters, expiration, or failed login attempts.

Keystone also supports TLS-based client authentication. TLS client authentication provides an additional authentication factor, apart from the username and password, which provides greater reliability in user identification. It reduces the risk of unauthorized access when usernames and passwords are compromised. However, TLS-based authentication is not cost-effective, as we need a certificate for each of the clients.

Authorization

Keystone also provides the option of groups and roles. Users belong to groups, and a group has a list of roles. All of the OpenStack services, such as Cinder, Glance, Nova, and Horizon, reference the roles of the user attempting to access the service.
OpenStack policy enforcers always consider the policy rule associated with each resource and use the user's group or role, and their association, to determine whether to allow or deny access to the service.

Before configuring roles, groups, and users, we should document the required access control policies for the OpenStack installation. The policies must be in line with the regulatory or legal requirements of the organization. Additional changes to the access control configuration should be made as per the formal policies. These policies must include the conditions and processes for creating, deleting, disabling, and enabling accounts, and for assigning privileges to the accounts. One needs to review these policies from time to time and ensure that the configuration is in compliance with the approved policies.

For user creation and administration, there must be a user created with the admin role in Keystone for each OpenStack service. This account provides the service with the authorization to authenticate users. Nova (compute) and Swift (object storage) can be configured to use the Identity service to store authentication information. For a test environment, we can use tempAuth, which records user credentials in a text file, but it is not recommended for a production environment.

The OpenStack administrator must protect sensitive configuration files from unauthorized modification with access control frameworks such as SELinux (mandatory access control) or DAC (discretionary access control). We also need to protect the Keystone configuration files, which are stored at /etc/keystone/keystone.conf, as well as the X.509 certificates.

It is recommended that cloud admin users authenticate using the identity service (Keystone) and an external authentication service that supports two-factor authentication. Getting authenticated with two-factor authentication reduces the risk of compromised passwords. This is also recommended in the NIST guideline NIST 800-53 IA-2(1), which defines MFA for network access to privileged accounts, where one factor is provided by a device separate from the system being accessed.

Policy, tokens, and domains

In OpenStack, every service defines the access policies for its resources in a policy file, where a resource can be an API action, such as creating and attaching a Cinder volume or creating an instance. The policy rules are defined in JSON format in a file called policy.json. Only administrators can modify the service-based policy.json file, to control access to the various resources. However, one also has to ensure that any changes to the access control policies do not unintentionally breach, or create an option to breach, the security of any resource. Any changes made to policy.json are applied immediately and do not require a service restart.

After a user is authenticated, a token is generated for authorization and access to an OpenStack environment. A token can have a variable lifespan, but the default value is 1 hour. It is also recommended to lower the lifespan of the token to a level at which the internal services can still complete their tasks within the specified timeframe; if the token expires before task completion, the system can become unresponsive. Keystone also supports token revocation. For this, it uses an API to revoke a token and to list the revoked tokens.

In the OpenStack Newton release, there were four supported token types: UUID, PKI, PKIZ, and fernet. Since the OpenStack Ocata release, there are two supported token types: UUID and fernet.
We'll see all of these token types in detail here:

UUID: These are persistent tokens. UUID tokens are 32 bytes in length and must be persisted in the backend. They are stored in the Keystone backend, along with the metadata for authentication. All clients must pass their UUID token to Keystone (the identity service) in order to validate it.

PKI and PKIZ: These are signed documents that contain the authentication content, as well as the service catalog. The difference between PKI and PKIZ is that PKIZ tokens are compressed to help mitigate the size issues of PKI (PKI tokens can become very long). Both of these token types became obsolete after the Ocata release. The length of PKI and PKIZ tokens typically exceeds 1,600 bytes. The Identity service uses public and private key pairs and certificates in order to create and validate these tokens.

Fernet: These tokens are the default token provider as of the OpenStack Pike release. Fernet is a secure messaging format explicitly designed for use in API tokens. Fernet tokens are non-persistent and lightweight (in the range of 180 to 240 bytes), and they reduce operational overhead. Authentication and authorization metadata is neatly bundled into a message-packed payload, which is then encrypted and signed as a fernet token.

In OpenStack, a Keystone domain is a high-level container for projects, users, and groups. Domains are used to centrally manage all Keystone-based identity components. Compute, storage, and other resources can be logically grouped into multiple projects, which can further be grouped under a master account. Users of different domains can be represented in different authentication backends and can have different attributes that must be mapped to a single set of roles and privileges in the policy definitions to access the various service resources. Domain-specific authentication drivers allow the identity service to be configured for multiple domains, using domain-specific configuration options in keystone.conf.

Federated identity

Federated identity enables you to establish trust between identity providers and the cloud environment (here, the OpenStack cloud). It gives you secure access to cloud resources using your existing identity, so you do not need to remember multiple credentials to access your applications. Now, the question is, what are the reasons for using federated identity? They are as follows:

It enables your security team to manage all users (cloud or non-cloud) from a single identity application.
It removes the need to set up separate identity providers for each application, which would otherwise create an additional workload for the security team and introduce security risk.
It makes users' lives easier by providing a single credential for all of their apps, saving them the time they would otherwise spend on the forgot-password page.

Federated identity enables you to have a single sign-on mechanism. We can implement it using SAML 2.0. To do this, you need to run the identity service provider under Apache.

We learned about securing your private cloud and the authentication process therein. If you've enjoyed this article, do check out 'Cloud Security Automation' for a hands-on experience of automating your cloud security and governance.

Read also:
Top 5 cloud security threats to look out for in 2018
Cloud Security Tips: Locking Your Account Down with AWS Identity Access Manager (IAM)


Getting started with Angular CLI and building your first Angular component

Amarabha Banerjee
09 May 2018
13 min read
When it comes to Angular development, there are some things that are good to know and some things that we need to know to embark on our great journey. One of the things that is good to know is semantic versioning, because it is the way the Angular team has chosen to deal with changes. This will hopefully make it easier to find the right solutions to future app development challenges when you go to https://angular.io/ or Stack Overflow and other sites to search for solutions. In this tutorial, we will discuss Angular components and a few practical examples to help you gain a real-world understanding of them.

This article is an excerpt from the book Learning Angular Second Edition, written by Christoffer Noring & Pablo Deeleman.

Web components

Web components is a concept that encompasses four technologies designed to be used together to build feature elements with a higher level of visual expressivity and reusability, thereby leading to a more modular, consistent, and maintainable web. These four technologies are as follows:

Templates: These are pieces of HTML that structure the content we aim to render.
Custom elements: These templates not only contain traditional HTML elements, but also custom wrapper items that provide further presentation elements or API functionality.
Shadow DOM: This provides a sandbox to encapsulate the CSS layout rules and JavaScript behaviors of each custom element.
HTML imports: HTML documents are no longer constrained to hosting HTML elements; they can import other HTML documents as well.

In theory, an Angular component is indeed a custom element that contains a template to host the HTML structure of its layout, the latter being governed by a scoped CSS style sheet encapsulated within a shadow DOM container. Let's try to rephrase this in plain English. Think of the range input control type in HTML5. It is a handy way to give our users a convenient input control for entering a value ranging between two predefined boundaries. If you have not used it before, insert the following piece of markup in a blank HTML template and load it in your browser:

<input id="mySlider" type="range" min="0" max="100" step="10">

You will see a nice input control featuring a horizontal slider in your browser. Inspecting such a control with the browser developer tools will unveil a concealed set of HTML tags that were not present at the time you edited your HTML template. There you have an example of shadow DOM in action, with an actual HTML template governed by its own encapsulated CSS, with advanced dragging functionality. You will probably agree that it would be cool to do that yourself. Well, the good news is that Angular gives you the tool set required for delivering this very same functionality, letting you build your own custom elements (input controls, personalized tags, and self-contained widgets). We can feature the inner HTML markup of our choice and our very own style sheet that is not affected by (nor impacts) the CSS of the page hosting our component.

Why TypeScript over other syntaxes to code Angular apps?

Angular applications can be coded in a wide variety of languages and syntaxes: ECMAScript 5, Dart, ECMAScript 6, TypeScript, or ECMAScript 7. TypeScript is a typed superset of ECMAScript 6 (also known as ECMAScript 2015) that compiles to plain JavaScript, which is widely supported across modern browsers. It features a sound object-oriented design and supports annotations, decorators, and type checking.
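To make those features concrete, here is a tiny, self-contained example of type annotations, an interface, and the compiler catching a type error at build time; the names in it are invented for illustration.

```ts
// Type annotations and an interface: the compiler checks every usage.
interface Book {
  title: string;
  pages: number;
}

function describe(book: Book): string {
  return `${book.title} (${book.pages} pages)`;
}

const sample: Book = { title: "An Example Book", pages: 123 };
console.log(describe(sample));

// The next line would fail to compile: property 'pages' is missing.
// describe({ title: "Oops" });
```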
The reason why we picked (and obviously recommend) TypeScript as the syntax of choice for learning how to develop Angular applications is based on the fact that Angular itself is written in this language. Being proficient in TypeScript gives the developer an enormous advantage when it comes to understanding the guts of the framework. On the other hand, it is worth remarking that TypeScript's support for annotations and type introspection turns out to be paramount when it comes to managing dependency injection and type binding between components with a minimum code footprint. Check out the book Learning Angular 2nd Edition to learn how to do this.

Ultimately, you can carry out your Angular projects in plain ECMAScript 6 syntax if that is your preference. Even the examples provided in this book can be easily ported to ES6 by removing type annotations and interfaces, or by replacing the way dependency injection is handled in TypeScript with the more verbose ES6 way. For the sake of brevity, we will only cover examples written in TypeScript. We recommend its usage because of its higher expressivity thanks to type annotations, and its neat way of approaching dependency injection based on type introspection of those annotations.

Setting up our workspace with Angular CLI

There are different ways to get started: using the Angular quickstart repository on the https://angular.io/ site, installing the scaffolding tool Angular CLI, or using Webpack to set up your project. It is worth pointing out that the standard way of creating a new Angular project is to use Angular CLI to scaffold it. SystemJS, used by the quickstart repository, used to be the default way of building Angular projects. Its use is now rapidly diminishing, but it is still a valid way of setting up an Angular project.

Setting up a frontend project today is more cumbersome than ever. We used to just include the necessary script tags for our JavaScript code, a link tag for our CSS, an img tag for our assets, and so on. Life used to be simple. Then frontend development became more ambitious: we started splitting up our code into modules, and we started using preprocessors for both our code and CSS. All in all, our projects became more complicated, and we started to rely on build systems such as Grunt, Gulp, Webpack, and so on. Most developers out there are not huge fans of configuration; they just want to focus on building apps. Modern browsers do more to support the latest ECMAScript standard, and some browsers have even started to support modules, which are resolved at runtime, but this is far from widely supported. In the meantime, we still have to rely on tools for bundling and module support.

Setting up a project with leading frameworks such as React or Angular can be quite difficult. You need to know which libraries to import and ensure that files are processed in the correct order, which leads us to the topic of scaffolding tools. For AngularJS, it was quite popular to use Yeoman to quickly scaffold a new application and get a lot of nice things preconfigured. React has a scaffolding tool called create-react-app, which you have probably used, and it saves countless hours for React developers. Scaffolding tools become almost a necessity as complexity grows, and where every hour counts towards producing business value rather than fighting configuration problems.
The main motivation behind creating the Angular CLI tool was to help developers focus on app building and not so much on configuration. Essentially, with a simple command, you should be able to scaffold an application, add a new construct to it, run tests, or create a production-grade bundle. Angular CLI supports all of that.

Prerequisites for installing Angular CLI

What you need to get started is to have Git and Node.js installed. Installing Node.js will also install NPM, the Node Package Manager, which you will use later to install the dependencies your project needs. The easiest way to install Node.js is to go to the official Node.js site and download the installation files. The Angular CLI requires Node 6.9.0 and NPM 3 or higher. Currently, on the site, you can choose between an LTS version and the current version; the LTS version should be enough.

Angular CLI installation

Installing the Angular CLI is as easy as running the following command in your Terminal:

npm install -g @angular/cli

On some systems, you may need elevated permissions to do so; in that case, run your Terminal window as an administrator, or on Linux/macOS run the command like this:

sudo npm install -g @angular/cli

Building your first app

Once the Angular CLI is in place, the time has come to create your first project. To do so, place yourself in a directory of your choice and type the following:

ng new <give it a name here>

For example, type the following:

ng new TodoApp

This will create a directory called TodoApp. After you have run the preceding command, there are two things you need to do to see your app in a browser: navigate to the just-created directory, and serve up the application. This is accomplished by the following commands:

cd TodoApp
npm start

At this point, open up your browser at http://localhost:4200 and you should see your app running.

Testing your app

The Angular CLI doesn't just come with code that makes your app work. It also comes with code that sets up testing, and includes a test. Running the said test is as easy as typing the following in the Terminal:

npm test

How come this works? Let's have a look at the package.json file that was just created, and at its scripts tag. Everything specified here can be run using the following syntax:

npm run <key>

In some cases, it is not necessary to type run; it is enough to just type:

npm <key>

This is the case with the start and test commands. The following listing makes it clear that it is possible to run more commands than the start and test commands we just learned about:

"scripts": {
  "ng": "ng",
  "start": "ng serve",
  "build": "ng build",
  "test": "ng test",
  "lint": "ng lint",
  "e2e": "ng e2e"
}

So far, we have learned how to install the Angular CLI. Using the Angular CLI, we have learned to:

Scaffold a new project.
Serve up the project and see it displayed in a browser.
Run tests.

That is quite an accomplishment. We will revisit the Angular CLI in a later chapter, as it is a very competent tool capable of a lot more.

Hello Angular

We are about to take the first trembling steps into building our first component. The Angular CLI has already scaffolded our project and thereby carried out a lot of the heavy lifting. All we need to do is create a new file and start filling it with content. The million-dollar question is: what do we type?
So let's venture into building our first component. There are three steps to take when creating a component:

1. Import the component decorator construct.
2. Decorate a class with the component decorator.
3. Add the component to its module (this might be in two different places).

Creating the component

First off, let's import the component decorator:

import { Component } from '@angular/core';

Then create the class for your component:

class AppComponent {
  title: string = 'hello app';
}

Then decorate your class using the Component decorator:

@Component({
  selector: 'app',
  template: `<h1>{{ title }}</h1>`
})
export class AppComponent {
  title: string = 'hello app';
}

We give the Component decorator, which is a function, an object literal as an input parameter. The object literal at this point consists of the selector and template keys, so let's explain what those are.

Selector

A selector is what the component is referred to by when used in a template somewhere else. As we call it app, we would refer to it as:

<app></app>

Template/templateUrl

The template or templateUrl is your view. Here you can write HTML markup. Using the template keyword in our object literal means we get to define the HTML markup in the same file as the component class. Were we to use templateUrl, we would place our HTML markup in a separate file. The preceding example also includes double curly braces in the markup:

<h1>{{ title }}</h1>

This is treated as an interpolation, and the expression is replaced with the value of AppComponent's title field. The component, when rendered, will therefore look like this:

hello app

Telling the module

Now we need to introduce a completely new concept: an Angular module. All types of constructs that you create in Angular should be registered with a module. An Angular module serves as a facade to the outside world, and it is nothing more than a class decorated by the @NgModule decorator. Just like the @Component decorator, the @NgModule decorator takes an object literal as an input parameter. To register our component with our Angular module, we need to give the object literal a declarations property. The declarations property is of an array type, and by adding our component to that array we register it with the Angular module. The following code shows the creation of an Angular module and the component being registered with it by being added to the declarations keyword array:

import { NgModule } from '@angular/core';
import { AppComponent } from './app.component';

@NgModule({
  declarations: [AppComponent]
})
export class AppModule {}

At this point, our Angular module knows about the component. We need to add one more property to our module: bootstrap. The bootstrap keyword states that whatever is placed here serves as the entry component for the entire application. Because we only have one component so far, it makes sense to register our component with this bootstrap keyword:

@NgModule({
  declarations: [AppComponent],
  bootstrap: [AppComponent]
})
export class AppModule {}

It's definitely possible to have more than one entry component, but the usual scenario is that there is only one. For any future components, however, we will only need to add them to the declarations property to ensure the module knows about them.

So far, we have created a component and an Angular module, and registered the component with said module. We don't really have a working application yet, as there is one more step we need to take: we need to set up the bootstrapping.
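A minimal sketch of that final bootstrapping step is shown below. It uses Angular's standard platformBrowserDynamic entry point and follows the Angular CLI main.ts layout, so treat the exact file paths as assumptions about your project structure.

```ts
// main.ts: boot the application with AppModule as the root module.
// AppModule must also import BrowserModule from @angular/platform-browser
// to run in a browser.
import { platformBrowserDynamic } from '@angular/platform-browser-dynamic';
import { AppModule } from './app/app.module';

platformBrowserDynamic()
  .bootstrapModule(AppModule)
  .catch(err => console.error(err));
```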
To summarize, we have shown how to get started with the Angular CLI and create your first Angular component efficiently. If you are interested in knowing more, check out Learning Angular Second Edition to work your way through Angular and create dynamic applications with it.

Read also:
Building Components Using Angular
Why switch to Angular for web development - Interview Insights
8 built-in Angular Pipes in Angular 4 that you should know


Microsoft’s Azure Container Service (ACS) is now Azure Kubernetes Services (AKS)

Savia Lobo
09 May 2018
2 min read
At Build 2018, Microsoft announced that Azure Container Service (ACS), its managed Kubernetes service, is now Azure Kubernetes Service (AKS), which is currently in preview and will soon be generally available. AKS is also a part of the Kubernetes Conformance Program, a certification program run by the Cloud Native Computing Foundation.

Azure Kubernetes Service (AKS) adds automated support for upgrades and scaling capabilities. It also includes self-healing aspects that aim to make spinning up containers on Kubernetes easier for developers. Developers now have added advantages with AKS, which include:

DevOps Projects support for AKS: Now, with a few clicks, developers can create a new AKS cluster, containerize their applications, deploy with a VSTS CI/CD pipeline, and view integrated App Insights telemetry with the DevOps project.
New Azure Portal experience for AKS: This includes AKS create and browse experiences inside the Azure Portal, which makes it easier for cluster operators to configure and manage Kubernetes.

Some features of AKS:

Custom VNET with Azure CNI: AKS now supports deploying Kubernetes nodes into custom VNETs using Azure CNI, with configurable IP ranges for Kubernetes networking components.
Integration with Azure Monitor: AKS is now integrated directly into Azure Monitor for control plane telemetry, log aggregation, and container health monitoring. This provides operational visibility into one's Kubernetes environment directly from the Azure portal.
HTTP application routing: AKS also supports exposing public applications natively, using an Azure-integrated Kubernetes ingress controller. With this, customers can access their applications without having to configure DNS records and nameservers.

Microsoft has also introduced a new Dev Spaces capability. With AKS and Dev Spaces, all a new developer needs is their IDE and the Azure CLI. Developers can simply create a new Dev Space inside AKS and begin working on any component of their microservice environment safely, without impeding production traffic flows. Dev Spaces for AKS makes developing against a complex microservices environment simple. It is now available in private preview.

To know more about AKS, visit the Microsoft Azure Blog. Here is a quick recap of what happened on Day 1 of the Microsoft Build Conference 2018, if you are interested.

Read also:
Everything you need to know about Jenkins X, the new cloud native CI/CD solution on Kubernetes
Kubernetes 1.10 released
The key differences between Kubernetes and Docker Swarm

How to create your own AWS CloudTrail

Vijin Boricha
09 May 2018
8 min read
AWS provides a wide variety of tools and managed services which allow you to safeguard your applications running on the cloud, such as AWS WAF and AWS Shield. This, however, forms just one important piece of a much larger jigsaw puzzle! What about compliance monitoring, risk auditing, and the overall governance of your environments? How do you effectively analyze events occurring in your environment and mitigate against them? Luckily for us, AWS has the answer to our problems in the form of AWS CloudTrail. In today's post, we will explore AWS CloudTrail and learn how to create our own CloudTrail trail.

This tutorial is an excerpt from the book AWS Administration - The Definitive Guide - Second Edition, written by Yohan Wadia. This book will help you create a highly secure, fault-tolerant, and scalable Cloud environment for your applications to run on.

AWS CloudTrail provides you with the ability to log every single action taken by a user, service, role, or even API from within your AWS account. Each action recorded is treated as an event, which can then be analyzed to enhance the security of your AWS environment. The following are some of the key benefits you can obtain by enabling CloudTrail for your AWS accounts:

In-depth visibility: Using CloudTrail, you can easily gain better insights into your account's usage by recording each user's activities, such as which user initiated a new resource creation, from which IP address the request was initiated, which resources were created, at what time, and much more!

Easier compliance monitoring: With CloudTrail, you can easily record and log events occurring within your AWS account, whether they originate from the Management Console, the AWS CLI, or other AWS tools and services. The best thing about this is that you can integrate CloudTrail with another AWS service, such as Amazon CloudWatch, to alert on and respond to out-of-compliance events.

Security automation: Automating responses to security threats not only enables you to mitigate potential threats faster, but also provides you with a mechanism to stop further attacks. The same applies to AWS CloudTrail! With its easy integration with Amazon CloudWatch Events, you can create corresponding Lambda functions that trigger automatically each time a compliance rule is not met, all in a matter of seconds.

CloudTrail's essential concepts and terminologies

With these key points in mind, let's have a quick look at some of CloudTrail's essential concepts and terminologies.

Events

Events are the basic unit of measurement in CloudTrail. Essentially, an event is nothing more than a record of a particular activity, initiated by an AWS service, a role, or an AWS user. These activities are all logged as API calls that can originate from the Management Console, the AWS SDKs, or the AWS CLI. By default, events are stored by CloudTrail in S3 buckets for a period of 7 days. You can view, search, and even download these events by leveraging the events history feature provided by CloudTrail.

Trails

Trails are essentially the delivery mechanism by which events are dumped to S3 buckets. You can use these trails to log specific events within specific buckets, as well as to filter events and encrypt the transmitted log files. By default, you can have a maximum of five trails per AWS region, and this limit cannot be increased.
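Although the walkthrough below uses the Management Console, the same trail can be created programmatically. Here is a hedged sketch with the AWS SDK for JavaScript (v2); the trail and bucket names are hypothetical, and the bucket must already exist with a bucket policy that allows CloudTrail to write to it.

```ts
import * as AWS from 'aws-sdk';

const cloudtrail = new AWS.CloudTrail({ region: 'us-east-1' });

async function createDemoTrail() {
  const trail = await cloudtrail.createTrail({
    Name: 'my-demo-trail',
    S3BucketName: 'my-cloudtrail-logs-bucket',
    IsMultiRegionTrail: false,       // single-region, as in the walkthrough
    EnableLogFileValidation: true,   // enables log file integrity validation
  }).promise();

  // A newly created trail does not record events until logging is started.
  await cloudtrail.startLogging({ Name: 'my-demo-trail' }).promise();
  console.log('created:', trail.TrailARN);
}

createDemoTrail().catch(console.error);
```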
CloudTrail Logs

Once your CloudTrail starts capturing events, it sends these events to an S3 bucket in the form of a CloudTrail Log file. The log files are JSON text files that are compressed using the .gzip format. Each file can contain one or more events. Here is a simple representation of what a CloudTrail Log looks like. In this case, the event was created when I tried to add an existing user by the name of Mike to an administrator group using the AWS Management Console:

{"Records": [{
  "eventVersion": "1.0",
  "userIdentity": {
    "type": "IAMUser",
    "principalId": "12345678",
    "arn": "arn:aws:iam::012345678910:user/yohan",
    "accountId": "012345678910",
    "accessKeyId": "AA34FG67GH89",
    "userName": "Alice",
    "sessionContext": {"attributes": {
      "mfaAuthenticated": "false",
      "creationDate": "2017-11-08T13:01:44Z"
    }}
  },
  "eventTime": "2017-11-08T13:09:44Z",
  "eventSource": "iam.amazonaws.com",
  "eventName": "AddUserToGroup",
  "awsRegion": "us-east-1",
  "sourceIPAddress": "127.0.0.1",
  "userAgent": "AWSConsole",
  "requestParameters": {
    "userName": "Mike",
    "groupName": "administrator"
  },
  "responseElements": null
}]}

You can view your own CloudTrail Log files by visiting the S3 bucket that you specify during the trail's creation. Each log file is named uniquely using the following format:

AccountID_CloudTrail_RegionName_YYYYMMDDTHHmmZ_UniqueString.json.gz

Where:

AccountID: Your AWS account ID.
RegionName: The AWS region where the event was captured: us-east-1, and so on.
YYYYMMDDTHHmmZ: The year, month, day, hour (24-hour clock), and minutes; the Z indicates time in UTC.
UniqueString: A randomly generated 16-character string that simply ensures log files are not overwritten.

With the basics in mind, let's quickly have a look at how you can get started with CloudTrail for your own AWS environments!

Creating your first CloudTrail Trail

To get started, log in to your AWS Management Console and filter the CloudTrail service from the AWS services filter. On the CloudTrail dashboard, select the Create Trail option to get started. This brings up the Create Trail wizard; using this wizard, you can create a maximum of five trails per region.

Type a suitable name for the trail into the Trail name field, to begin with. Next, you can opt to either Apply trail to all regions or only to the region out of which you are currently operating. Selecting all regions enables CloudTrail to record events from each region and dump the corresponding log files into an S3 bucket that you specify. Alternatively, selecting to record out of one region captures only the events that occur in the region out of which you are currently operating. In my case, I have opted to enable the trail only for the region I'm currently working out of; in the subsequent sections, we will learn how to change this value using the AWS CLI.

Next, in the Management events section, select the type of events you wish to capture from your AWS environment. By default, CloudTrail records all management events that occur within your AWS account. These events can be API operations, such as events caused by the invocation of an EC2 RunInstances or TerminateInstances operation, or even non-API-based events, such as a user logging into the AWS Management Console. For this particular use case, I've opted to record All management events.
Selecting the Read-only option will capture all the GET API operations, whereas the Write-only option will capture only the PUT API operations that occur within your AWS environment.

Moving on, in the Storage location section, provide a suitable name for the S3 bucket that will store your CloudTrail Log files. This bucket will store all your CloudTrail Log files, irrespective of the region the logs originated from. You can alternatively select an existing bucket from the S3 bucket selection field.

Next, in the Advanced section, you can optionally configure a Log file prefix. By default, the logs are stored under a folder-like hierarchy that is usually of the form AWSLogs/ACCOUNT_ID/CloudTrail/REGION. You can also opt to Encrypt log files with the help of an AWS KMS key; enabling this feature is highly recommended for production use. Selecting Yes in the Enable log file validation field enables you to verify the integrity of the log files once they are delivered to the S3 bucket. Finally, you can even have CloudTrail send you a notification each time a new log file is delivered to your S3 bucket by selecting Yes against the Send SNS notification for every log file delivery option. This provides an additional option to either select a predefined SNS topic or create a new one specifically for this particular CloudTrail. Once all the required fields are filled in, click on Create to continue.

With this, you should be able to see the newly created Trail by selecting the Trails option from the CloudTrail dashboard's navigation pane.

We learned to create a new trail and enable notifications each time a new log file is delivered. If you are interested in learning more about CloudTrail Logs and AWS Config, you may refer to the book AWS Administration - The Definitive Guide - Second Edition.

AWS SAM (AWS Serverless Application Model) is now open source!
How to run Lambda functions on AWS Greengrass
AWS Greengrass brings machine learning to the edge
NativeScript: What is it, and how to set it up

Amey Varangaonkar
09 May 2018
9 min read
In this tutorial, we introduce you to the NativeScript framework, which allows you to create and deploy a web application on a mobile device and use it like a native mobile app, rather than as a web or hybrid application.

[box type="shadow" align="" class="" width=""]The following excerpt is taken from the book TypeScript 2.x By Example written by Sachin Ohri. This book presents essential techniques to leverage the power of TypeScript 2.x to build efficient web applications.[/box]

What is NativeScript?
NativeScript is an open source framework for building native Android and iOS applications with web technologies. This means we can develop native mobile applications with JavaScript, TypeScript, and/or Angular. It is based on the idea of write once, run everywhere. Applications developed with NativeScript are pure mobile apps compared to applications developed with technologies such as PhoneGap. As they are native mobile applications, we can use all the richness of the mobile platform and get the performance associated with it. We use native APIs and native controls to render, which allows us to create more sophisticated applications than a hybrid approach allows. Hybrid applications do not provide the same level of flexibility or performance because they are hosted on a separate framework and do not get to interact with low-level mobile APIs directly. The best part is that NativeScript does not require us to learn a new programming language, unlike developing an iOS application, for which you need to know Objective-C or Swift. So, we can use our existing skills to develop mobile applications.

NativeScript design
NativeScript is a runtime that sits on top of the native mobile operating system and uses the V8 JavaScript engine on Android and JavaScriptCore on iOS. Having access to these engines allows NativeScript to expose a unified API system for developers, which is then converted into the native API at runtime. This translation between the JavaScript APIs and the native platform APIs is possible through reflection, which NativeScript uses to create its own set of interfaces. Another advantage of NativeScript using JavaScript is its independence from specific editors. You can use any of your favorite editors to develop a NativeScript application, and you will have access to all the native APIs, rather than having to use Xcode for iOS apps and Android Studio for Android apps.

Architecture
At a high level, the NativeScript runtime is responsible for converting JavaScript application code to the native platform code. It has various components that work together to convert and call the native APIs. Because NativeScript uses V8 and JavaScriptCore, it has access to all the latest ECMAScript language specifications, which allows us to use the latest ES6 feature set. One of the main components that we need to understand in NativeScript design is modules.

Modules
The NativeScript team made sure that the platform was developed in a modular fashion, much like plugins, which allows us to include only the modules that we need in our development. These modules provide us with an abstraction of the native APIs and allow us to write code that works on both platforms. There are separate APIs for each logical piece of functionality. For example, if you want to use SQLite for your storage needs, there is a package for that; if you want to use a filesystem, there is a package for that.
Let's take an example to see how these modules help us write consistent code for a multiplatform environment. If you want to access a filesystem on the native platform using NativeScript, you will write code similar to the following snippet:

var fileSystem = require("file-system");
new fileSystem.File(path);

This code is written in pure JavaScript: it first gets a reference to the file-system module, and then calls the module's File API. When executed by the NativeScript runtime, this code first checks the platform it is running on and then converts the code accordingly, as shown in the following snippets.

The Android version of the code will be as follows:

new java.io.File(path);

The iOS version of the code will be as follows:

var fileManager = NSFileManager.defaultManager();
fileManager.createFileAtPathContentsAttributes(path);

If you have worked on any of the mobile platforms before, you will recognize this code as using the native filesystem API to access the file path.

NativeScript versus web applications
Until now, we have been saying that we can use our web technologies to write mobile applications with the help of NativeScript. So, can we write a pure web application and use the NativeScript runtime to create a mobile application? Yes and no. Yes, because we can use the same code base to write with NativeScript, as we will see with our application. No, because not all components of web applications can be used directly. NativeScript allows us to use our existing JavaScript/TypeScript and CSS skills for developing the business logic and the design of our application. But because the native platforms are not web-based and do not have a DOM, we cannot use HTML as the template for our applications. Although you will see that the extension of our template files is still .html, the element tags are somewhat different. To give you a brief example, NativeScript does not have UI elements such as <div> or <span>, but has elements such as <StackLayout> and <DockLayout>, which allow us to arrange our UI components. Another thing to note here is that these UI elements are converted into native elements based on the platform. So, if we use the <Button> control in NativeScript, it will get converted into android.widget.Button on the Android platform and UIButton on iOS.

Setting up your NativeScript environment
NativeScript provides very good documentation about installing and setting up your development environment. You can find the documentation at https://docs.nativescript.org/angular/start/quick-setup. We will briefly go through the setup process here, but recommend that you go through the documentation to understand the process.

NativeScript CLI
The best way to use NativeScript is through the NativeScript CLI. You can install it from npm using the following command:

npm install -g nativescript

This command installs the NativeScript library in your global scope. To confirm that the installation has been successful, you can try running the following command from the command-line window:

tns

The tns command is short for Telerik NativeScript, and will show the array of commands associated with it. The NativeScript CLI comes with a host of commands to assist in our development, such as create, which helps us create a basic startup project, and deploy, which tells the NativeScript CLI to deploy the application to a device (the device can be a connected device or an emulator).
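As an illustration, a typical first session with the CLI might look like the following sketch; HelloWorld is a placeholder project name here, and the exact set of commands and flags can vary between CLI versions:

# Scaffold a starter project
tns create HelloWorld
cd HelloWorld
# Build and run it on a connected Android device or emulator
tns run android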
You can check all the commands available with the NativeScript CLI by using the help flag as follows:

tns --help

Installing mobile platform dependencies
To build native applications, we need to install the dependencies for those mobile platforms. It is important to remember that if we want to build a NativeScript application for iOS and run it on an iOS-compatible device, we need to use macOS; for building Android applications, we can use both Windows and macOS. NativeScript provides a single script each for Windows and macOS that takes care of installing all the required tools and frameworks. The script for Windows is as shown in the following code:

@powershell -NoProfile -ExecutionPolicy Bypass -Command "iex ((new-object net.webclient).DownloadString('https://www.nativescript.org/setup/win'))"

The script for macOS is as shown in the following code:

ruby -e "$(curl -fsSL https://www.nativescript.org/setup/mac)"

It's important to note that these scripts require administrator-level privileges, so you may need to run them using the sudo command. NativeScript also provides a step-by-step guide to installing all these dependencies manually; details can be found at https://docs.nativescript.org/start/ns-setup-win. Once you have installed all the packages, you can check whether the installation was successful by running the following command:

tns doctor

This command checks all the required prerequisites for building a NativeScript application, and if there are no issues identified, it returns a success message, No issues were detected.

Installing an Android Virtual Device
Once you have installed all the dependencies, the next step is to install an Android emulator, which can be used for testing instead of connecting real devices. To be able to create an emulator, you need to have Android Studio on your machine, which you can install from https://developer.android.com/studio/index.html. Once you have installed Android Studio, you can check whether you have the correct Android SDK version. The NativeScript CLI needs Android SDK version 25 or higher; if you do not have the required Android SDK version, you can install it either using the following command or using the Android Studio IDE:

"%ANDROID_HOME%\tools\bin\sdkmanager" "tools" "platform-tools" "platforms;android-25" "build-tools;25.0.2" "extras;android;m2repository" "extras;google;m2repository"

To install the Android emulator, we use Android Studio, the details of which can be found at https://docs.nativescript.org/tooling/android-virtual-devices. On macOS, we need to make sure we have Xcode installed, or else we will not be able to run iOS-based applications. Again, you can use the tns doctor command to check whether your installation was successful.

And that's it! You have successfully installed and set up the NativeScript environment. Want to learn how to develop native web apps? We've got it covered. All you have to do is check out the book TypeScript 2.x By Example to create and deploy a web app as a native app in a step-by-step manner.

Tools in TypeScript
Introducing Object Oriented Programming with TypeScript
Writing SOLID JavaScript code with TypeScript
Building functional programs with F#

Kunal Chaudhari
08 May 2018
23 min read
Functional programming treats programs as mathematical expressions and evaluates expressions. It focuses on functions and constants, which don't change, unlike variables and states. Functional programming solves complex problems with simple code; it is a very efficient technique for writing bug-free applications; for example, null exceptions can be avoided using this technique. In today's tutorial, we will learn how to build functional programs with F# that leverage .NET Core.

Here are some rules to help you understand functional programming better:

In functional programming, a function's output never gets affected by outside code changes, and the function always gives the same result for the same parameters. This gives us confidence in the function's behavior, that it will give the expected result in all scenarios, and it is helpful for multithreaded or parallel programming.
In functional programming, variables are immutable, which means we cannot modify a variable once it is initialized, so it is easy to determine the value of a variable at any given point at program runtime.
Functional programming works on referential transparency, which means it doesn't use assignment statements in a function. For example, a function that assigns a new value to a variable, such as the following, breaks referential transparency:

public int sum(int x) {
    x = x + 20;
    return x;
}

But if we write it as shown here, the variable's value is not changed and the function returns the same result:

public int sum(int x) {
    return x + 20;
}

Functional programming uses recursion for looping. A recursive function calls itself and runs until its condition is satisfied.

Functional programming features
Let's discuss some functional programming features:

Higher-order functions
Purity
Recursion
Currying
Closure
Function composition

Higher-order functions (HOF)
A function can take another function as an input argument, and it can return a function. This concept originated in calculus and is widely used in functional programming. The order of a function can be determined by its domain and range: order 0 has no function data, order 1 has a domain and range of order 0, and a function whose order is higher than 1 is called a higher-order function. For example, the ComplexCalc function takes another function as input and produces a different result depending on that function:

open System
let sum x = x + x
let divide x = x / x
let ComplexCalc func = func 2
printfn "%d" (ComplexCalc sum)    // 4
printfn "%d" (ComplexCalc divide) // 1

In the previous example, we created two functions, sum and divide. We pass these two functions as parameters to the ComplexCalc function, and it returns a value of 4 and 1, respectively.

Purity
In functional programming, a function is referred to as a pure function if all its input arguments are known and all its output results are also well known and declared; or we can say the function has no side effects. Now, you must be curious to know what a side effect could be; let's discuss it. Consider the following example:

public int sum(int x) {
    return x + x;
}

In the previous example, the function sum takes an integer input and returns an integer value and a predictable result. This kind of function is referred to as a pure function.
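For comparison, here is the earlier C# sum example expressed as a pure function in F#; this is just an illustrative sketch, and the name addTwenty is ours:

// Pure: the same input always yields the same output, with no hidden state
let addTwenty x = x + 20
printfn "%d" (addTwenty 5) // 25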
Now let's investigate an example that is not pure:

public void verifyData() {
    Employee emp = OrgQueue.getEmp();
    if (emp != null) {
        ProcessForm(emp);
    }
}

In the preceding example, the verifyData() function does not take any input parameter and does not return anything, but it internally calls the getEmp() function, so verifyData() depends on getEmp(). If the output of getEmp() is not null, it calls another function, ProcessForm(), and we pass the getEmp() output as input to ProcessForm(emp). In this example, both functions, getEmp() and ProcessForm(), are unknown at the verifyData() call site, and emp is a hidden value. A program like this, which has hidden inputs and outputs, is treated as having side effects. We cannot tell what such functions do. This is different from encapsulation; encapsulation hides complexity, but in such functions the functionality is not clear and the inputs and outputs are unreliable. These kinds of functions are referred to as impure functions.

Let's look at the main concepts of pure functions:

Immutable data: Functional programming works on immutable data; it removes the side effects of variable state changes and guarantees an expected result.
Referential transparency: Large modules can be replaced by small code blocks, and existing modules can be reused. For example, if a = b*c and d = b*c*e, then the value of d can be written as d = a*e.
Lazy evaluation: Referential transparency and immutable data give us the flexibility to calculate the function at any given point in time with the same result, because a variable will not change its state at any time.

Recursion
In functional programming, looping is performed by recursive functions. In F#, to make a function recursive, we need to use the rec keyword. By default, functions are not recursive in F#; we have to specify this explicitly using the rec keyword. Let's take an example:

let rec summation x =
    if x = 0 then 0
    else x + summation (x - 1)
printfn "The summation of the first 10 integers is %A" (summation 10)

In this code, we used the keyword rec for the recursive function; if the value passed is 0, the sum is 0; otherwise it adds x + summation(x-1), like 1+0, then 2+1, and so on. We should take care with recursion because it can consume memory heavily.

Currying
Currying converts a function with multiple input parameters into a function which takes one parameter at a time; or we can say it breaks the function into multiple functions, each taking one parameter at a time. Here is an example in C#-style pseudocode:

int sum = (a, b) => a + b
int sumcurry = (a) => (b) => a + b
sumcurry(5)(6)         // 11
int sum8 = sumcurry(8) // b => 8 + b
sum8(5)                // 13

Closure
Closure is a feature which allows us to access a variable which is not within the scope of the current module. It is a way of implementing lexically scoped named binding, for example:

int add = x => y => x + y
int addTen = add(10)
addTen(5) // this will return 15

In this example, the add() function is internally called by the addTen() function. In an ideal world, the variables x and y should not be accessible after the add() function finishes its execution, but when we call the function addTen(), it returns 15. So, the state of the function add() is saved even though code execution has finished; otherwise, there would be no way of knowing the add(10) value, where x = 10. We are able to find the value of x because of lexical scoping, and this is called closure.
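In F# itself, currying and closures come almost for free, because every multi-parameter function is curried by default. Here is a minimal sketch of the same idea in real F# syntax (the names are ours):

let add x y = x + y     // type: int -> int -> int, curried by default
let addTen = add 10     // partial application closes over x = 10
printfn "%d" (addTen 5) // prints 15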
Function composition
As we discussed earlier with HOF, function composition means putting two functions together to create a third, new function, where the output of one function becomes the input of another function. There are many more functional programming features. Functional programming is a technique for solving problems and writing code in an efficient way. It is not language-specific, but many languages support functional programming. We can also use non-functional languages (such as C#) to write programs in a functional way. F# is a Microsoft programming language with concise and declarative syntax.

Getting started with F#
In this section, we will discuss F# in more detail.

Classes
Classes are types of object which can contain functions, properties, and events. An F# class must have a parameter and a function attached as a member. Both properties and functions use the member keyword. The following is the class definition syntax:

type [access-modifier] type-name [type-params] [access-modifier] (parameter-list) [ as identifier ] =
    [ class ]
    [ inherit base-type-name(base-constructor-args) ]
    [ let-bindings ]
    [ do-bindings ]
    member-list
    [ end ]
// Mutually recursive class definitions:
type [access-modifier] type-name1 ...
and [access-modifier] type-name2 ...

Let's discuss the preceding syntax for class declaration:

type: In the F# language, a class definition starts with the type keyword.
access-modifier: The F# language supports three access modifiers: public, private, and internal. By default, it assumes the public modifier if no other access modifier is provided. The protected keyword is not used in the F# language, because it would make the code object-oriented rather than functional. For example, F# usually calls a member using a lambda expression, and if we made a member protected and called it on an object of a different instance, it would not work.
type-name: Any valid identifier, as previously mentioned; the default access modifier is public.
type-params: Defines optional generic type parameters.
parameter-list: Defines constructor parameters; the default access modifier for the primary constructor is public.
identifier: Used with the optional as keyword; the as keyword gives a name to an instance variable which can be used in the type definition to refer to the instance of the type.
inherit: This keyword allows us to specify the base class for a class.
let-bindings: Used to declare fields or function values in the context of a class.
do-bindings: Useful for the execution of code to create an object.
member-list: Comprises extra constructors, instance and static method declarations, abstract bindings, interface declarations, and event and property declarations.

Here is an example of a class:

type StudentName(firstName, lastName) =
    member this.FirstName = firstName
    member this.LastName = lastName

In the previous example, we have not defined the parameter types. By default, the program considers them string values, but we can explicitly define a data type, as follows:

type StudentName(firstName: string, lastName: string) =
    member this.FirstName = firstName
    member this.LastName = lastName

Constructor of a class
In F#, the constructor works in a different way to other .NET languages. The constructor creates an instance of a class. A parameter list defines the arguments of the primary constructor and class. The constructor contains let and do bindings, which we will discuss next.
We can add multiple constructors, apart from the primary constructor, using the new keyword, and they must invoke the primary constructor, which is defined with the class declaration. The syntax for defining a new constructor is as shown:

new (argument-list) = constructor-body

Here is an example to explain the concept. In the following code, the StudentDetail class has two constructors: a primary constructor that takes two arguments and another constructor that takes no arguments:

type StudentDetail(x: int, y: int) =
    do printfn "%d %d" x y
    new() = StudentDetail(0, 0)

let and do bindings
The let and do bindings create the primary constructor of a class and run when an instance of a class is created. A function is compiled into a member if it has a let binding. If the let binding is a value which is not used in any function or member, then it is compiled into a local variable of the constructor; otherwise, it is compiled into a field of the class. The do expression executes the initialization code. As any extra constructors always call the primary constructor, let and do bindings always execute, irrespective of which constructor is called. Fields created by let bindings can be accessed through the methods and properties of the class, though they cannot be accessed from static methods, even if the static methods take an instance variable as a parameter:

type Student(name) as self =
    let data = name
    do self.PrintMessage()
    member this.PrintMessage() =
        printf "Student name is %s" data

Generic type parameters
F# also supports generic type parameters. We can specify multiple generic type parameters separated by commas. The syntax of a generic parameter declaration is as follows:

type MyGenericClassExample<'a> (x: 'a) =
    do printfn "%A" x

The type of the parameter is inferred from where it is used. In the following code, we call the MyGenericClassExample constructor and pass a sequence of tuples, so here the parameter type becomes a sequence of tuples:

let g1 = MyGenericClassExample( seq { for i in 1 .. 10 -> (i, i*i) } )

Properties
Values related to an object are represented by properties. In object-oriented programming, properties represent data associated with an instance of an object. The following snippets show two kinds of property syntax:

// Property that has both get and set defined.
[ attributes ]
[ static ] member [accessibility-modifier] [self-identifier.]PropertyName
    with [accessibility-modifier] get() = get-function-body
    and [accessibility-modifier] set parameter = set-function-body

// Alternative syntax for a property that has get and set.
[ attributes-for-get ]
[ static ] member [accessibility-modifier-for-get] [self-identifier.]PropertyName = get-function-body
[ attributes-for-set ]
[ static ] member [accessibility-modifier-for-set] [self-identifier.]PropertyName
    with set parameter = set-function-body

There are two kinds of property declaration:

Explicitly specify the value: We should use the explicit way to implement the property if it has a non-trivial implementation. We should use the member keyword for an explicit property declaration.
Automatically generate the value: We should use this when the property is just a simple wrapper for a value.

There are many ways of implementing explicit property syntax, based on need:

Read-only: Only the get() method
Write-only: Only the set() method
Read/write: Both get() and set() methods

An example is shown as follows:

// A read-only property.
member this.MyReadOnlyProperty = myInternalValue
// A write-only property.
member this.MyWriteOnlyProperty
    with set (value) = myInternalValue <- value
// A read-write property.
member this.MyReadWriteProperty
    with get () = myInternalValue
    and set (value) = myInternalValue <- value

Backing stores are private values that contain the data for properties. The member val keyword instructs the compiler to create a backing store automatically and then takes an expression to initialize the property. The F# language favors immutable types, but if we want to make a property mutable, we should use get and set. As shown in the following example, the MyClassExample class has two properties: PropExample1 is read-only and is initialized to the argument provided to the primary constructor, and PropExample2 is a settable property initialized with the string value ".Net Core 2.0":

type MyClassExample(propExample1 : int) =
    member val PropExample1 = propExample1
    member val PropExample2 = ".Net Core 2.0" with get, set

Automatically implemented properties don't work efficiently with some libraries, for example, Entity Framework. In these cases, we should use explicit properties.

Static and instance properties
We can further categorize properties as static or instance properties. Static properties, as the name suggests, can be invoked without an instance. The self-identifier is omitted for a static property, while it is necessary for an instance property. The following is an example of a static property:

static member MyStaticProperty
    with get() = myStaticValue
    and set(value) = myStaticValue <- value

Abstract properties
Abstract properties have no implementation and are fully abstract. They can be virtual. An abstract property should not be private, and if one accessor is abstract, all others must be abstract. The following is an example of an abstract property and how to use it:

// Abstract property in an abstract class.
// The property is an int type that has a get and set method
[<AbstractClass>]
type AbstractBase() =
    abstract Property1 : int with get, set

// Implementation of the abstract property
type Derived1() =
    inherit AbstractBase()
    let mutable value = 10
    override this.Property1
        with get() = value
        and set(v : int) = value <- v

// A type with a "virtual" property.
type Base1() =
    let mutable value = 10
    abstract Property1 : int with get, set
    default this.Property1
        with get() = value
        and set(v : int) = value <- v

// A derived type that overrides the virtual property
type Derived2() =
    inherit Base1()
    let mutable value2 = 11
    override this.Property1
        with get() = value2
        and set(v) = value2 <- v

Inheritance and casts
In F#, the inherit keyword is used when declaring a class. The following is the syntax:

type MyDerived(...) =
    inherit MyBase(...)

In a derived class, we can access all methods and members of the base class, as long as they are not private. To refer to base class instances in the F# language, the base keyword is used.

Virtual methods and overrides
In F#, the abstract keyword is used to declare a virtual member, and a complete default definition of the member can be supplied alongside it. F# differs from other .NET languages in this respect. Let's have a look at the following example:

type MyClassExampleBase() =
    let mutable x = 0
    abstract member virtualMethodExample : int -> int
    default u.virtualMethodExample (a : int) =
        x <- x + a
        x

type MyClassExampleDerived() =
    inherit MyClassExampleBase()
    override u.virtualMethodExample (a: int) = a + 1

In the previous example, we declared a virtual method, virtualMethodExample, in a base class, MyClassExampleBase, and overrode it in a derived class, MyClassExampleDerived.

Constructors and inheritance
An inherited class constructor must be called in a derived class. If a base class constructor contains arguments, then it takes parameters of the derived class as input. In the following example, we see how derived class arguments are passed to the base class constructor with inheritance:

type MyClassBase2(x: int) =
    let mutable z = x * x
    do for i in 1..z do printf "%d " i

type MyClassDerived2(y: int) =
    inherit MyClassBase2(y * 2)
    do for i in 1..y do printf "%d " i

If a class has multiple constructors, such as new(str) and new(), and this class is inherited by a derived class, we can use a base class constructor to assign values. For example, DerivedClass, which inherits BaseClass, has new(str1, str2), and in place of the first string, we pass inherit BaseClass(str1). Similarly, for the parameterless case, we write inherit BaseClass(). Let's explore the following example in more detail:

type BaseClass =
    val string1 : string
    new (str) = { string1 = str }
    new () = { string1 = "" }

type DerivedClass =
    inherit BaseClass
    val string2 : string
    new (str1, str2) = { inherit BaseClass(str1); string2 = str2 }
    new (str2) = { inherit BaseClass(); string2 = str2 }

let obj1 = DerivedClass("A", "B")
let obj2 = DerivedClass("A")

Functions and lambda expressions
A lambda expression is a kind of anonymous function, which means it doesn't have a name attached to it. But if we want to create a function which can be called, we can use the fun keyword with a lambda expression. We can pass any kind of parameter to a lambda function created using the fun keyword. Such a function is quite similar to a normal F# function. Let's see a normal F# function and a lambda function:

// Normal F# function
let addNumbers a b = a + b
// Evaluating values
let sumResult = addNumbers 5 6

// Lambda function and evaluating values
let sumResult = (fun (a:int) (b:int) -> a + b) 5 6

// Both functions will return sumResult = 11

Handling data – tuples, lists, record types, and data manipulation
F# supports many kinds of data types, for example:

Primitive types: bool, int, float, string values.
Aggregate types: class, struct, union, record, and enum
Arrays: int[], int[ , ], and float[ , , ]
Tuples: type1 * type2 * ...; for example, ('a', 1, 2, true) has type char * int * int * bool
Generics: list<'x>, dictionary<'key, 'value>

In an F# function, we can pass one tuple instead of multiple parameters of different types. Declaration of a tuple is very simple, and we can assign the values of a tuple to different variables, for example:

let tuple1 = 1, 2, 3
// assigning values to variables: v1 = 1, v2 = 2, v3 = 3
let v1, v2, v3 = tuple1
// if we want to assign only two values out of three, use "_" to skip the
// value. Assigned values: v1 = 1, v3 = 3
let v1, _, v3 = tuple1

In the preceding examples, we saw that tuples support pattern matching. F# also has option types; an option type supports the idea that a value may or may not be present at runtime.

List
List is a generic type implementation. An F# list is similar to a linked list implementation in any other functional language. It has a special opening and closing bracket construct, a short form of the standard empty list ([ ]) syntax:

let empty = [] // This is an empty list of untyped, or generic, type.
Here the type is 'a list.

let intList = [10; 20; 30; 40] // this is an integer-type list

The cons operator (::) is used to prepend an item to the head of a list. To append one list to another, we use the append operator, @:

// prepend item x onto a list
let addItem xs x = x :: xs
// add item 50 to the above list "intList"; the final result is
// [50; 10; 20; 30; 40]
let newIntList = addItem intList 50

// using @ to append two lists
// result – ["hi"; "team"; "how"; "are"; "you"]
printfn "%A" (["hi"; "team"] @ ["how"; "are"; "you"])

Lists are decomposable using pattern matching into a head and a tail part, where the head is the first item in the list and the tail is the remaining list, for example:

printfn "%A" newIntList.Head
printfn "%A" newIntList.Tail
printfn "%A" newIntList.Tail.Tail.Head
let rec listLength (l: 'a list) =
    if l.IsEmpty then 0
    else 1 + (listLength l.Tail)
printfn "%d" (listLength newIntList)

Record type
The class, struct, union, record, and enum types come under aggregate types. The record type is one of them; it can have any number of members, of any individual types. Record type members are immutable by default, but we can make them mutable. In general, a record type uses its members as immutable data. There is no way to execute logic during instantiation, as a record type doesn't have constructors. A record type also supports match expressions; depending on the values inside those records, the match can decompose those values for individual handling, for example:

type Box = { width: float; height: int }
let giftbox = { width = 6.2; height = 3 }

In the previous example, we declared a Box with a float width and an integer height. When we declare giftbox, the compiler automatically detects its type as Box by matching the value types. We can also specify the type like this:

let giftbox = { Box.width = 6.2; Box.height = 3 }

or

let giftbox : Box = { width = 6.2; height = 3 }

This kind of type declaration is used when we have the same fields or field types declared in more than one type. This declaration is called a record expression.

Object-oriented programming in F#
F# also supports implementation inheritance, and the creation of object and interface instances. In F#, constructed types are fully compatible .NET classes which support one or more constructors. We can implement a do block with code logic, which can run at the time of class instance creation. The constructed type supports inheritance for class hierarchy creation. We use the inherit keyword to inherit a class. If a member doesn't have an implementation, we can use the abstract keyword in its declaration. We need to use the AbstractClass attribute on the class to inform the compiler that it is abstract. If the AbstractClass attribute is not used and the type has all abstract members, the F# compiler automatically creates an interface type; the interface is inferred by the compiler. The override keyword is used to override the base class implementation; to use the base class implementation of the same member, we use the base keyword. In F#, interfaces can be inherited from another interface. In a class, if we use the interface construct, we have to implement all the members of the interface in that class as well. In general, it is not possible to use interface members from outside the class instance, unless we upcast the instance type to the required interface type.
To create an instance of a class or interface, the object expression syntax is used. We need to override virtual members if we are creating a class instance, and we need member implementations for interface instantiation:

type IExampleInterface =
    abstract member IntValue: int with get
    abstract member HelloString: unit -> string

type PrintValues() =
    interface IExampleInterface with
        member x.IntValue = 15
        member x.HelloString() =
            sprintf "Hello friends %d" (x :> IExampleInterface).IntValue

let example =
    let varValue = PrintValues() :> IExampleInterface
    { new IExampleInterface with
        member x.IntValue = varValue.IntValue
        member x.HelloString() =
            sprintf "<b>%s</b>" (varValue.HelloString()) }

printfn "%A" (example.HelloString())

Exception handling
The exception keyword is used to create a custom exception in F#; these exceptions adhere to Microsoft best practices, such as supplying constructors, serialization support, and so on. The raise keyword is used to throw an exception. Apart from this, F# has some helper functions, such as failwith, which throws a failure exception at F# runtime, and invalidOp and invalidArg, which throw the .NET Framework standard invalid operation and invalid argument exceptions, respectively.

try/with is used to catch an exception; if an exception occurs in an expression or while evaluating a value, the try/with expression can be used on the right-hand side of the value evaluation to assign a value anyway. try/with also supports pattern matching to check individual exception types and extract items from them. Whether try/finally is appropriate depends on the actual code block. Let's take an example of declaring and using a custom exception:

exception MyCustomExceptionExample of int * string

raise (MyCustomExceptionExample(10, "Error!"))

In the previous example, we created a custom exception called MyCustomExceptionExample, using the exception keyword, with the value fields we want to pass. Then we used the raise keyword to raise the exception, passing the values we want to display while running the application or throwing the exception. However, when running this code, we don't get our custom message in the error output; the standard exception message is displayed instead, without the message that we passed. In order to display our custom error message, we need to override the standard Message property on the exception type. We will use a pattern matching assignment to get the two values and upcast the actual type, due to the internal representation of the exception object. If we run this program again, we will get the custom message in the exception:

exception MyCustomExceptionExample of int * string with
    override x.Message =
        let (MyCustomExceptionExample(i, s)) = upcast x
        sprintf "Int: %d Str: %s" i s

raise (MyCustomExceptionExample(20, "MyCustomErrorMessage!"))

Now the output includes our custom message, with the integer and string values, in the exception. We can also use the helper function failwith to raise a failure exception, which includes our message as the error message, as follows:

failwith "An error has occurred"

An example of the invalidArg helper function follows. In this factorial function, we are checking that the value of x is greater than zero.
For cases where x is less than 0, we call invalidArg, passing "x" as the name of the invalid parameter and an error message saying the value should be greater than zero. The invalidArg helper function throws an invalid argument exception from the standard System namespace in .NET:

let rec factorial x =
    if x < 0 then invalidArg "x" "Value should be greater than zero"
    match x with
    | 0 -> 1
    | _ -> x * (factorial (x - 1))

To summarize, we discussed functional programming and its features, such as higher-order functions, purity, and lazy evaluation, and we looked at how to write functions and lambda expressions in F#, exception handling, and so on. You enjoyed an excerpt from the book .NET Core 2.0 By Example, written by Rishabh Verma and Neha Shrivastava. This book gives a detailed walkthrough of functional programming with F# and .NET Core from scratch.

What is functional reactive programming?
Functional Programming in C#
8 recipes to master Promises in ECMAScript 2018

Richa Tripathi
08 May 2018
17 min read
What are Promises in ECMAScript?
In earlier versions of JavaScript, the callback pattern was the most common way to organize asynchronous code. It got the job done, but it didn't scale well. With callbacks, as more asynchronous functions are added, the code becomes more deeply nested, and it becomes more difficult to add to, refactor, and understand. This situation is commonly known as callback hell. Promises were introduced to improve on this situation. Promises allow the relationships of asynchronous operations to be rearranged and organized with more freedom and flexibility. In this context, today we will learn about Promises and how to use them to create and organize asynchronous functions. We will also explore how to handle error conditions.

Creating and waiting for Promises
Promises provide a way to compose and combine asynchronous functions in an organized and easier-to-read way. This recipe demonstrates a very basic usage of promises. All of the recipes below assume that you already have a workspace that allows you to create and run ES modules in your browser.

How to do it...
Open your command-line application and navigate to your workspace.
Create a new folder named 03-01-creating-and-waiting-for-promises.
Copy or create an index.html that loads and runs a main function from main.js.
Create a main.js file that creates a promise and logs messages before and after the promise is created, as well as while the promise is executing and after it has been resolved:

// main.js
export function main () {
  console.log('Before promise created');
  new Promise(function (resolve) {
    console.log('Executing promise');
    resolve();
  }).then(function () {
    console.log('Finished promise');
  });
  console.log('After promise created');
}

Start your Python web server and open the following link in your browser: http://localhost:8000/.

How it works...
By looking at the order of the log messages, you can clearly see the order of operations. First, the initial log is executed. Next, the promise is created with an executor method. The executor method takes resolve as an argument. The resolve function fulfills the promise. Promises adhere to an interface named thenable, which means we can chain then callbacks. The callback we attached with this method is executed after the resolve function is called. This function executes asynchronously (not immediately after the promise has been resolved). Finally, there is a log after the promise has been created. The order in which the log messages appear reveals the asynchronous nature of the code. All of the logs are seen in the order they appear in the code, except the Finished promise message. That function is executed asynchronously, after the main function has exited!

Resolving Promise results
In the previous recipe, we saw how to use promises to execute asynchronous code. However, that code was pretty basic: it just logged a message and then called resolve. Often, we want to use asynchronous code to perform some long-running operation and then return the resulting value. This recipe demonstrates how to use resolve in order to return the result of a long-running operation.

How to do it...
Open your command-line application and navigate to your workspace.
Create a new folder named 3-02-resolving-promise-results.
Copy or create an index.html that loads and runs a main function from main.js.
Create a main.js file that creates a promise and logs messages before and after the promise is created:

// main.js
export function main () {
  console.log('Before promise created');
  new Promise(function (resolve) {
  });
  console.log('After promise created');
}

Within the promise, resolve a random number after a 5-second timeout:

new Promise(function (resolve) {
  setTimeout(function () {
    resolve(Math.random());
  }, 5000);
})

Chain a then call off the promise. Pass a function that logs out the value of its only argument:

new Promise(function (resolve) {
  setTimeout(function () {
    resolve(Math.random());
  }, 5000);
}).then(function (result) {
  console.log('Long running job returned: %s', result);
});

Start your Python web server and open the following link in your browser: http://localhost:8000/.

How it works...
Just as in the previous recipe, the promise is not fulfilled until resolve is executed (this time after 5 seconds). This time, however, we called resolve with a random number as its argument. When this happens, the argument is provided to the callback of the subsequent then function. We'll see in future recipes how this can be continued to create promise chains.

Rejecting Promise errors
In the previous recipe, we saw how to use resolve to provide a result from a successfully fulfilled promise. Unfortunately, code doesn't always run as expected. Network connections can be down, data can be corrupted, and uncountable other errors can occur. We need to be able to handle those situations as well. This recipe demonstrates how to use reject when errors arise.

How to do it...
Open your command-line application and navigate to your workspace.
Create a new folder named 3-03-rejecting-promise-errors.
Copy or create an index.html that loads and runs a main function from main.js.
Create a main.js file that creates a promise, and logs messages before and after the promise is created and when the promise is fulfilled:

new Promise(function (resolve) {
  resolve();
}).then(function (result) {
  console.log('Promise Completed');
});

Add a second argument to the promise callback named reject, and call reject with a new error:

new Promise(function (resolve, reject) {
  reject(new Error('Something went wrong'));
}).then(function (result) {
  console.log('Promise Completed');
});

Chain a catch call off the promise. Pass a function that logs out its only argument:

new Promise(function (resolve, reject) {
  reject(new Error('Something went wrong'));
}).then(function (result) {
  console.log('Promise Completed');
}).catch(function (error) {
  console.error(error);
});

Start your Python web server and open the following link in your browser: http://localhost:8000/.

How it works...
Previously, we saw how to use resolve to return a value in the case of a successful fulfillment of a promise. In this case, we called reject, so the promise finished in an error state before it could resolve. When a promise completes in an error state, the then callbacks are not executed. Instead, we have to use catch in order to receive the error that the promise rejects. You'll also notice that the catch callback is only executed after the main function has returned. Like successful fulfillments, listeners for unsuccessful ones execute asynchronously.
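As a side note, when you already have an error value in hand, Promise.reject offers a shorthand for creating an already-rejected promise. A small sketch:

// Equivalent shorthand for a promise that immediately rejects
Promise.reject(new Error('Something went wrong'))
  .catch(function (error) {
    console.error(error);
  });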
See also
Handle errors with Promise.catch
Simulating finally with Promise.then

Chaining Promises
So far in this article, we've seen how to use promises to run single asynchronous tasks. This is helpful, but doesn't provide a significant improvement over the callback pattern. The real advantage that promises offer comes when they are composed. In this recipe, we'll use promises to combine asynchronous functions in series.

How to do it...
Open your command-line application and navigate to your workspace.
Create a new folder named 3-04-chaining-promises.
Copy or create an index.html that loads and runs a main function from main.js.
Create a main.js file that creates a promise. Resolve a random number from the promise:

new Promise(function (resolve) {
  resolve(Math.random());
});

Chain a then call off of the promise. Return true from the callback if the random value is greater than or equal to 0.5:

new Promise(function (resolve, reject) {
  resolve(Math.random());
}).then(function (value) {
  return value >= 0.5;
});

Chain a final then call after the previous one. Log out a different message depending on whether the argument is true or false:

new Promise(function (resolve, reject) {
  resolve(Math.random());
}).then(function (value) {
  return value >= 0.5;
}).then(function (isReadyForLaunch) {
  if (isReadyForLaunch) {
    console.log('Start the countdown! ');
  } else {
    console.log('Abort the mission. ');
  }
});

Start your Python web server and open the following link in your browser: http://localhost:8000/. If you are lucky, you'll see the countdown message; if you are unlucky, you'll see the abort message.

How it works...
We've already seen how to use then to wait for the result of a promise. Here, we are doing the same thing multiple times in a row. This is called a promise chain. After the promise chain is started with the new promise, all of the subsequent links in the promise chain return promises as well. That is, the return value of each then callback is resolved like another promise.

See also
Using Promise.all to resolve multiple Promises
Handle errors with Promise.catch
Simulating finally with a final Promise.then call

Starting a Promise chain with Promise.resolve
In this article's preceding recipes, we've been creating new promise objects with the constructor. This gets the job done, but it creates a problem: the first callback in the promise chain has a different shape than the subsequent callbacks. In the first callback, the arguments are the resolve and reject functions that trigger the subsequent then or catch callbacks. In subsequent callbacks, the returned value is propagated down the chain, and thrown errors are captured by catch callbacks. This difference adds mental overhead. It would be nice to have all of the functions in the chain behave in the same way. In this recipe, we'll see how to use Promise.resolve to start a promise chain.

How to do it...
Open your command-line application and navigate to your workspace.
Create a new folder named 3-05-starting-with-resolve.
Copy or create an index.html that loads and runs a main function from main.js.
Create a main.js file that calls Promise.resolve with an empty object as the first argument:

export function main () {
  Promise.resolve({})
}

Chain a then call off of resolve, and attach rocket boosters to the passed object:

export function main () {
  Promise.resolve({}).then(function (rocket) {
    console.log('attaching boosters');
    rocket.boosters = [{
      count: 2,
      fuelType: 'solid'
    }, {
      count: 1,
      fuelType: 'liquid'
    }];
    return rocket;
  })
}

Add a final then call to the chain that lets you know when the boosters have been added:

export function main () {
  Promise.resolve({})
    .then(function (rocket) {
      console.log('attaching boosters');
      rocket.boosters = [{
        count: 2,
        fuelType: 'solid'
      }, {
        count: 1,
        fuelType: 'liquid'
      }];
      return rocket;
    })
    .then(function (rocket) {
      console.log('boosters attached');
      console.log(rocket);
    })
}

Start your Python web server and open the following link in your browser: http://localhost:8000/.

How it works...
Promise.resolve creates a new promise that resolves to the value passed to it. The subsequent then method will receive that resolved value as its argument. This method can seem a little roundabout, but it can be very helpful for composing asynchronous functions. In effect, the constituents of the promise chain don't need to be aware that they are in the chain (including the first step). This makes transitioning from code that doesn't use promises to code that does much easier.

Using Promise.all to resolve multiple promises
So far, we've seen how to use promises to perform asynchronous operations in sequence. This is useful when the individual steps are long-running operations. However, this might not always be the most efficient configuration. Quite often, we can perform multiple asynchronous operations at the same time. In this recipe, we'll see how to use Promise.all to start multiple asynchronous operations, without waiting for the previous one to complete.

How to do it...
Open your command-line application and navigate to your workspace.
Create a new folder named 3-06-using-promise-all.
Copy or create an index.html that loads and runs a main function from main.js.
Create a main.js file that creates an object named rocket, and calls Promise.all with an empty array as the first argument:

export function main() {
  console.log('Before promise created');
  const rocket = {};
  Promise.all([]);
  console.log('After promise created');
}

Create a function named addBoosters that adds boosters to an object:

function addBoosters (rocket) {
  console.log('attaching boosters');
  rocket.boosters = [{
    count: 2,
    fuelType: 'solid'
  }, {
    count: 1,
    fuelType: 'liquid'
  }];
  return rocket;
}

Create a function named performGuidanceDiagnostic that returns a promise of a successfully completed task:

function performGuidanceDiagnostic (rocket) {
  console.log('performing guidance diagnostic');
  return new Promise(function (resolve) {
    setTimeout(function () {
      console.log('guidance diagnostic complete');
      rocket.guidanceDiagnostic = 'Completed';
      resolve(rocket);
    }, 2000);
  });
}

Create a function named loadCargo that adds a payload to the cargoBay:

function loadCargo (rocket) {
  console.log('loading satellite');
  rocket.cargoBay = [{ name: 'Communication Satellite' }];
  return rocket;
}

Use Promise.resolve to pass the rocket object to these functions within Promise.all:

export function main() {
  console.log('Before promise created');
  const rocket = {};
  Promise.all([
    Promise.resolve(rocket).then(addBoosters),
    Promise.resolve(rocket).then(performGuidanceDiagnostic),
    Promise.resolve(rocket).then(loadCargo)
  ]);
  console.log('After promise created');
}

Attach a then call to the chain and log that the rocket is ready for launch:

const rocket = {};
Promise.all([
  Promise.resolve(rocket).then(addBoosters),
  Promise.resolve(rocket).then(performGuidanceDiagnostic),
  Promise.resolve(rocket).then(loadCargo)
]).then(function (results) {
  console.log('Rocket ready for launch');
  console.log(results);
});

Start your Python web server and open the following link in your browser: http://localhost:8000/.

How it works...
Promise.all is similar to Promise.resolve; the arguments are resolved as promises. The difference is that instead of a single result, Promise.all accepts an iterable argument, each member of which is resolved individually. In the preceding example, you can see that each of the promises is initiated immediately. Two of them are able to complete while performGuidanceDiagnostic continues. The promise returned by Promise.all is fulfilled when all of the constituent promises have been resolved. The results of the promises are combined into an array and propagated down the chain. You can see that three references to rocket are packed into the results argument, and that the operations of each promise have been performed on the resulting object.

There's more
As you may have guessed, the results of the constituent promises don't have to be the same value. This can be useful, for example, when performing multiple independent network requests. The index of the result for each promise corresponds to the index of the operation within the argument to Promise.all.
In these cases, it can be useful to use array destructuring to name the arguments of the then callback:

Promise.all([
  findAstronomers,
  findAvailableTechnicians,
  findAvailableEquipment
]).then(function ([astronomers, technicians, equipment]) {
  // use results for astronomers, technicians, and equipment
});

Handling errors with Promise.catch
In a previous recipe, we saw how to fulfill a promise with an error state using reject, and we saw that this triggers the next catch callback in the promise chain. Because promises are relatively easy to compose, we need to be able to handle errors that are reported in different ways. Luckily, promises are able to handle this seamlessly. In this recipe, we'll see how Promise.catch can handle errors that are reported either by being thrown or through rejection.

How to do it...
Open your command-line application and navigate to your workspace.
Create a new folder named 3-07-handle-errors-promise-catch.
Copy or create an index.html that loads and runs a main function from main.js.
Create a main.js file with a main function that creates an object named rocket:

export function main() {
  console.log('Before promise created');
  const rocket = {};
  console.log('After promise created');
}

Create a function addBoosters that throws an error:

function addBoosters (rocket) {
  throw new Error('Unable to add Boosters');
}

Create a function performGuidanceDiagnostic that returns a promise that rejects with an error:

function performGuidanceDiagnostic (rocket) {
  return new Promise(function (resolve, reject) {
    reject(new Error('Unable to finish guidance diagnostic'));
  });
}

Use Promise.resolve to pass the rocket object to these functions, and chain a catch off each of them:

export function main() {
  console.log('Before promise created');
  const rocket = {};
  Promise.resolve(rocket).then(addBoosters)
    .catch(console.error);
  Promise.resolve(rocket).then(performGuidanceDiagnostic)
    .catch(console.error);
  console.log('After promise created');
}

Start your Python web server and open the following link in your browser: http://localhost:8000/.

How it works...
As we saw before, when a promise is fulfilled in a rejected state, the callbacks of the catch functions are triggered. In the preceding recipe, we see that this can happen when the reject method is called (as with performGuidanceDiagnostic). It also happens when a function in the chain throws an error (as with addBoosters). This has a similar benefit to how Promise.resolve can normalize asynchronous functions: this handling allows asynchronous functions not to know about the promise chain, and to announce error states in a way that is familiar to developers who are new to promises. This makes expanding the use of promises much easier.

Simulating finally with the promise API
In a previous recipe, we saw how catch can be used to handle errors, whether a promise has rejected or a callback has thrown an error. Sometimes, it is desirable to execute code whether or not an error state has been detected. In the context of try/catch blocks, the finally block can be used for this purpose. We have to do a little more work to get the same behavior when working with promises. In this recipe, we'll see how a final then call can execute some code in both successful and failing fulfillment states.

How to do it...
Open your command-line application and navigate to your workspace.
Create a new folder named 3-08-simulating-finally.
Copy or create an index.html that loads and runs a main function from main.js.
Create a main.js file with a main function that logs out messages for before and after promise creation:

export function main() {
  console.log('Before promise created');
  console.log('After promise created');
}

Create a function named addBoosters that throws an error if its first parameter is false:

function addBoosters(shouldFail) {
  if (shouldFail) {
    throw new Error('Unable to add Boosters');
  }
  return {
    boosters: [{
      count: 2,
      fuelType: 'solid'
    }, {
      count: 1,
      fuelType: 'liquid'
    }]
  };
}

Use Promise.resolve to pass a Boolean value that is true if a random number is greater than 0.5 to addBoosters:

export function main() {
  console.log('Before promise created');
  Promise.resolve(Math.random() > 0.5)
    .then(addBoosters)
  console.log('After promise created');
}

Add a then function to the chain that logs a success message:

export function main() {
  console.log('Before promise created');
  Promise.resolve(Math.random() > 0.5)
    .then(addBoosters)
    .then(() => console.log('Ready for launch: '))
  console.log('After promise created');
}

Add a catch to the chain and log out the error if thrown:

export function main() {
  console.log('Before promise created');
  Promise.resolve(Math.random() > 0.5)
    .then(addBoosters)
    .then(() => console.log('Ready for launch: '))
    .catch(console.error)
  console.log('After promise created');
}

Add a then after the catch, and log out that we need to make an announcement:

export function main() {
  console.log('Before promise created');
  Promise.resolve(Math.random() > 0.5)
    .then(addBoosters)
    .then(() => console.log('Ready for launch: '))
    .catch(console.error)
    .then(() => console.log('Time to inform the press.'));
  console.log('After promise created');
}

Start your Python web server and open the following link in your browser: http://localhost:8000/. If you are lucky and the boosters are added successfully, you'll see the success message; if you are unlucky, you'll see an error message instead. In both cases, the final log statement is printed.

How it works...

We can see in the preceding output that whether or not the asynchronous function completes in an error state, the last then callback is executed. This is possible because the catch method doesn't stop the promise chain. It simply catches any error states from the previous links in the chain, and then propagates a new value forward. This catch therefore protects the final then from being bypassed by an error state. And so, regardless of the fulfillment state of prior links in the chain, we can be sure that the callback of this final then will be executed. To summarize, we learned how to use the Promise API to organize asynchronous programs. We also looked at how to propagate results through promise chains and handle errors. You read an excerpt from a book written by Ross Harrison, titled ECMAScript Cookbook. It's a complete guide on how to become a better web programmer by writing efficient and modular code using ES6 and ES8. What's new in ECMAScript 2018 (ES9)? ECMAScript 7 – What to expect? Modular Programming in ECMAScript 6
How to implement Internationalization and localization in your Node.js app

Packt Editorial Staff
07 May 2018
15 min read
Internationalization, often abbreviated as i18n, implies a particular software design capable of adapting to the requirements of target local markets. In other words, if we want to distribute our application to markets other than the USA, we need to take care of translations, formatting of datetime, numbers, addresses, and such. In today's post we will cover the concept of implementing internationalization and localization in a Node.js app, and look at the context menu and system clipboard in detail. Date format by country Internationalization is a cross-cutting concern. When you are changing the locale, it usually affects multiple modules. So I suggest going with the observer pattern that we already examined while working on DirService. The ./js/Service/I18n.js file contains the following code:

const EventEmitter = require("events");

class I18nService extends EventEmitter {
  constructor() {
    super();
    this.locale = "en-US";
  }
  notify() {
    this.emit("update");
  }
}

exports.I18nService = I18nService;

As you see, we can change the locale by setting a new value to the locale property. As soon as we call the notify method, all the subscribed modules immediately respond. But locale is a public property and therefore we have no control over its access and mutation. We can fix it by using a getter and a setter. The ./js/Service/I18n.js file contains the following code:

//...
constructor() {
  super();
  this._locale = "en-US";
}
get locale() {
  return this._locale;
}
set locale(locale) {
  // validate locale...
  this._locale = locale;
}
//...

Now if we access the locale property of an I18n instance, it gets delivered by the getter (get locale). When setting it a value, it goes through the setter (set locale). Thus we can add extra functionality, such as validation and logging, on property access and mutation.
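For instance, a minimal sketch of a validating setter might look like the following (ValidatingI18nService and SUPPORTED_LOCALES are hypothetical names introduced only for illustration; the book's own service stays as shown above):

const EventEmitter = require("events");

// hypothetical list of locales we ship translations for
const SUPPORTED_LOCALES = ["en-US", "de-DE"];

class ValidatingI18nService extends EventEmitter {
  constructor() {
    super();
    this._locale = "en-US";
  }
  get locale() {
    return this._locale;
  }
  set locale(locale) {
    if (!SUPPORTED_LOCALES.includes(locale)) {
      // reject anything we have no translations for
      throw new RangeError(`Unsupported locale: ${locale}`);
    }
    this._locale = locale;
  }
  notify() {
    this.emit("update");
  }
}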
Remember we have in the HTML a combobox for selecting the language. Why not give it a view? The ./js/View/LangSelector.js file contains the following code:

class LangSelectorView {
  constructor(boundingEl, i18n) {
    boundingEl.addEventListener("change", this.onChanged.bind(this), false);
    this.i18n = i18n;
  }
  onChanged(e) {
    const selectEl = e.target;
    this.i18n.locale = selectEl.value;
    this.i18n.notify();
  }
}

exports.LangSelectorView = LangSelectorView;

In the preceding code, we listen for change events on the combobox. When the event occurs, we change the locale property of the passed-in I18n instance and call notify to inform the subscribers. The ./js/app.js file contains the following code:

const i18nService = new I18nService(),
  { LangSelectorView } = require("./js/View/LangSelector");

new LangSelectorView(document.querySelector("[data-bind=langSelector]"), i18nService);

Well, we can change the locale and trigger the event. What about consuming modules? In the FileList view we have a static method, formatTime, that formats the passed-in timeString for printing. We can make it format the time in accordance with the currently chosen locale. The ./js/View/FileList.js file contains the following code:

constructor(boundingEl, dirService, i18nService) {
  //...
  this.i18n = i18nService;
  // Subscribe on i18nService updates
  i18nService.on("update", () => this.update(dirService.getFileList()));
}

static formatTime(timeString, locale) {
  const date = new Date(Date.parse(timeString)),
    options = {
      year: "numeric",
      month: "numeric",
      day: "numeric",
      hour: "numeric",
      minute: "numeric",
      second: "numeric",
      hour12: false
    };
  return date.toLocaleString(locale, options);
}

update(collection) {
  //...
  this.el.insertAdjacentHTML("beforeend", `<li class="file-list li" data-file="${fInfo.fileName}">
      <span class="file-list li name">${fInfo.fileName}</span>
      <span class="file-list li size">${filesize(fInfo.stats.size)}</span>
      <span class="file-list li time">${FileListView.formatTime(fInfo.stats.mtime, this.i18n.locale)}</span>
    </li>`);
  //...
}
//...

In the constructor, we subscribe to the I18n update event and update the file list every time the locale changes. The static method formatTime converts the passed-in string into a Date object and uses the Date.prototype.toLocaleString() method to format the datetime according to a given locale. This method belongs to the so-called ECMAScript Internationalization API (http://norbertlindenberg.com/2012/12/ecmascript-internationalization-api/index.html). The API describes methods of the built-in objects String, Date and Number designed to format and compare localized data. What it really does here is format a Date instance with toLocaleString; for the English (United States) locale ("en-US") it returns the date as follows:

3/17/2017, 13:42:23

However, if we feed the method the German locale ("de-DE") we get quite a different result:

17.3.2017, 13:42:23

To put it into action we set an identifier on the combobox. The ./index.html file contains the following code:

..
<select class="footer select" data-bind="langSelector">
..

And of course, we have to create an instance of the I18n service and pass it to LangSelectorView and FileListView. The ./js/app.js file contains the following code:

//...
const { I18nService } = require("./js/Service/I18n"),
  { LangSelectorView } = require("./js/View/LangSelector"),
  i18nService = new I18nService();

new LangSelectorView(document.querySelector("[data-bind=langSelector]"), i18nService);
//...
new FileListView(document.querySelector("[data-bind=fileList]"), dirService, i18nService);

Now we start the application. Yeah! As we change the language in the combobox, the file modification dates adjust accordingly. Multilingual support Localizing dates and numbers is a good thing, but it would be more exciting to provide translation to multiple languages. We have a number of terms across the application, namely the column titles of the file list and the tooltips (via the title attribute) on the windowing action buttons. What we need is a dictionary. Normally it implies sets of token-translation pairs mapped to language codes or locales. Thus when you request a term from the translation service, it can correlate to a matching translation according to the currently used language/locale. Here I have suggested making the dictionary a static module that can be loaded with the require function. The ./js/Data/dictionary.js file contains the following code:

exports.dictionary = {
  "en-US": {
    NAME: "Name",
    SIZE: "Size",
    MODIFIED: "Modified",
    MINIMIZE_WIN: "Minimize window",
    RESTORE_WIN: "Restore window",
    MAXIMIZE_WIN: "Maximize window",
    CLOSE_WIN: "Close window"
  },
  "de-DE": {
    NAME: "Dateiname",
    SIZE: "Grösse",
    MODIFIED: "Geändert am",
    MINIMIZE_WIN: "Fenster minimieren",
    RESTORE_WIN: "Fenster wiederherstellen",
    MAXIMIZE_WIN: "Fenster maximieren",
    CLOSE_WIN: "Fenster schliessen"
  }
};

So we have two locales with translations per term. We are going to inject the dictionary as a dependency into our I18n service. The ./js/Service/I18n.js file contains the following code:

//...
constructor(dictionary) {
  super();
  this.dictionary = dictionary;
  this._locale = "en-US";
}

translate(token, defaultValue) {
  const dictionary = this.dictionary[this._locale];
  return dictionary[token] || defaultValue;
}
//...

We also added a new method, translate, that accepts two parameters: a token and a default translation.
The first parameter can be one of the keys from the dictionary, like NAME. The second one is a guarding value for the case when the requested token does not yet exist in the dictionary. Thus we still get meaningful text, at least in English. Let's see how we can use this new method. The ./js/View/FileList.js file contains the following code:

//...
update(collection) {
  this.el.innerHTML = `<li class="file-list li file-list head">
      <span class="file-list li name">${this.i18n.translate("NAME", "Name")}</span>
      <span class="file-list li size">${this.i18n.translate("SIZE", "Size")}</span>
      <span class="file-list li time">${this.i18n.translate("MODIFIED", "Modified")}</span>
    </li>`;
  //...

We change the hardcoded column titles in the FileList view into calls to the translate method of the I18n instance, meaning that every time the view updates, it receives the actual translations. We shall not forget about the TitleBarActions view, where we have the windowing action buttons. The ./js/View/TitleBarActions.js file contains the following code:

constructor(boundingEl, i18nService) {
  this.i18n = i18nService;
  //...
  // Subscribe on i18nService updates
  i18nService.on("update", () => this.translate());
}

translate() {
  this.unmaximizeEl.title = this.i18n.translate("RESTORE_WIN", "Restore window");
  this.maximizeEl.title = this.i18n.translate("MAXIMIZE_WIN", "Maximize window");
  this.minimizeEl.title = this.i18n.translate("MINIMIZE_WIN", "Minimize window");
  this.closeEl.title = this.i18n.translate("CLOSE_WIN", "Close window");
}

Here we add a translate method, which updates the button title attributes with actual translations. We subscribe to the i18n update event to call the method every time the user changes the locale. Context menu Well, with our application we can already navigate through the file system and open files. Yet, one might expect more of a File Explorer. We can add some file-related actions like delete and copy/paste. Usually these tasks are available via the context menu, which gives us a good opportunity to examine how to make one with NW.js. With the environment integration API we can create an instance of the system menu (http://docs.nwjs.io/en/latest/References/Menu/). Then we compose objects representing menu items and attach them to the menu instance (http://docs.nwjs.io/en/latest/References/MenuItem/). This menu can be shown at an arbitrary position:

const menu = new nw.Menu(),
  menuItem = new nw.MenuItem({
    label: "Say hello",
    click: () => console.log("hello!")
  });

menu.append(menuItem);
menu.popup(10, 10);

Yet our task is more specific. We have to display the menu on a right mouse click, at the position of the cursor. That we achieve by subscribing a handler to the contextmenu DOM event:

document.addEventListener("contextmenu", (e) => {
  console.log(`Show menu in position ${e.x}, ${e.y}`);
});

Now whenever we right-click within the application window, the menu shows up. It's not exactly what we want, is it? We need it only when the cursor resides within a particular region, for instance, when it hovers over a file name. That means we have to test whether the target element matches our conditions:

document.addEventListener("contextmenu", (e) => {
  const el = e.target;
  if (el instanceof HTMLElement && el.parentNode.dataset.file) {
    console.log(`Show menu in position ${e.x}, ${e.y}`);
  }
});

Here we ignore the event until the cursor hovers over a cell of a file table row, given that every row is a list item generated by the FileList view and therefore provided with a value for the data-file attribute. This passage explains pretty much how to build a system menu and how to attach it to the file list.
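As a side note, a slightly more robust hit test could rely on Element.closest, which walks up the DOM tree for us and does not depend on the clicked node being a direct child of the row. This is only a sketch of an alternative, not a change the book itself makes:

document.addEventListener("contextmenu", (e) => {
  // closest() returns the nearest ancestor (or the element itself) matching the selector
  const row = e.target instanceof HTMLElement ? e.target.closest("[data-file]") : null;
  if (row) {
    console.log(`Show menu for ${row.dataset.file} at ${e.x}, ${e.y}`);
  }
});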
But before starting on a module capable of creating the menu, we need a service to handle file operations. The ./js/Service/File.js file contains the following code:

const fs = require("fs"),
  path = require("path"),
  // Copy file helper
  cp = (from, toDir, done) => {
    const basename = path.basename(from),
      to = path.join(toDir, basename),
      write = fs.createWriteStream(to);
    fs.createReadStream(from)
      .pipe(write);
    write
      .on("finish", done);
  };

class FileService {
  constructor(dirService) {
    this.dir = dirService;
    this.copiedFile = null;
  }
  remove(file) {
    fs.unlinkSync(this.dir.getFile(file));
    this.dir.notify();
  }
  paste() {
    const file = this.copiedFile;
    if (fs.lstatSync(file).isFile()) {
      cp(file, this.dir.getDir(), () => this.dir.notify());
    }
  }
  copy(file) {
    this.copiedFile = this.dir.getFile(file);
  }
  open(file) {
    nw.Shell.openItem(this.dir.getFile(file));
  }
  showInFolder(file) {
    nw.Shell.showItemInFolder(this.dir.getFile(file));
  }
}

exports.FileService = FileService;

What's going on here? FileService receives an instance of DirService as a constructor argument. It uses the instance to obtain the full path to a file by name (this.dir.getFile(file)). It also exploits the notify method of the instance to request that all the views subscribed to DirService update. The showInFolder method calls the corresponding method of nw.Shell to show the file in its parent folder with the system file manager. As you can reckon, the remove method deletes the file. As for copy/paste, we do the following trick: when the user clicks copy, we store the target file path in the copiedFile property, so when the user next clicks paste, we can use it to copy that file to the (possibly changed) current location. The open method evidently opens the file with the default associated program. That is what we did in the FileList view directly. Actually, this action belongs in FileService, so we refactor the view to use the service. The ./js/View/FileList.js file contains the following code:

constructor(boundingEl, dirService, i18nService, fileService) {
  this.file = fileService;
  //...
}

bindUi() {
  //...
  this.file.open(el.dataset.file);
  //...
}

Now we have a module to handle the context menu for a selected file. The module will subscribe to the contextmenu DOM event and build a menu when the user right-clicks on a file. This menu will contain the items Show Item in the Folder, Copy, Paste, and Delete, whereas copy and paste are separated from the other items with delimiters. Besides, Paste will be disabled until we store a file with copy. Next comes the source code.
The ./js/View/ContextMenu.js file contains the following code:

class ConextMenuView {
  constructor(fileService, i18nService) {
    this.file = fileService;
    this.i18n = i18nService;
    this.attach();
  }
  getItems(fileName) {
    const file = this.file,
      isCopied = Boolean(file.copiedFile);
    return [
      {
        label: this.i18n.translate("SHOW_FILE_IN_FOLDER", "Show Item in the Folder"),
        enabled: Boolean(fileName),
        click: () => file.showInFolder(fileName)
      },
      {
        type: "separator"
      },
      {
        label: this.i18n.translate("COPY", "Copy"),
        enabled: Boolean(fileName),
        click: () => file.copy(fileName)
      },
      {
        label: this.i18n.translate("PASTE", "Paste"),
        enabled: isCopied,
        click: () => file.paste()
      },
      {
        type: "separator"
      },
      {
        label: this.i18n.translate("DELETE", "Delete"),
        enabled: Boolean(fileName),
        click: () => file.remove(fileName)
      }
    ];
  }
  render(fileName) {
    const menu = new nw.Menu();
    this.getItems(fileName).forEach((item) => menu.append(new nw.MenuItem(item)));
    return menu;
  }
  attach() {
    document.addEventListener("contextmenu", (e) => {
      const el = e.target;
      if (!(el instanceof HTMLElement)) {
        return;
      }
      if (el.classList.contains("file-list")) {
        e.preventDefault();
        this.render()
          .popup(e.x, e.y);
      }
      // If a child of an element matching [data-file]
      if (el.parentNode.dataset.file) {
        e.preventDefault();
        this.render(el.parentNode.dataset.file)
          .popup(e.x, e.y);
      }
    });
  }
}

exports.ConextMenuView = ConextMenuView;

So in the ConextMenuView constructor, we receive instances of FileService and I18nService. During construction we also call the attach method, which subscribes to the contextmenu DOM event, creates the menu and shows it at the position of the mouse cursor. The event gets ignored unless the cursor hovers over a file or resides in an empty area of the file list component. When the user right-clicks the file list, the menu still appears, but with all items disabled except Paste (in case a file was copied before). The render method creates an instance of the menu and populates it with nw.MenuItems created by the getItems method. That method creates an array representing the menu items. Elements of the array are object literals. The label property accepts the translation for the item caption. The enabled property defines the state of the item depending on our case (whether we have a copied file or not, whether the cursor is on a file or not). Finally, the click property expects the handler for the click event. Now we need to enable our new components in the main module. The ./js/app.js file contains the following code:

const { FileService } = require("./js/Service/File"),
  { ConextMenuView } = require("./js/View/ConextMenu"),
  fileService = new FileService(dirService);

new FileListView(document.querySelector("[data-bind=fileList]"), dirService, i18nService, fileService);
new ConextMenuView(fileService, i18nService);

Let's now run the application, right-click on a file and voilà! We have the context menu and the new file actions. System clipboard Usually Copy/Paste functionality involves the system clipboard. NW.js provides an API to control it (http://docs.nwjs.io/en/latest/References/Clipboard/). Unfortunately it's quite limited: we cannot transfer an arbitrary file between applications, which you might expect of a file manager. Yet some things are still available to us. Transferring text In order to examine text transfer with the clipboard, we modify the copy method of FileService:

copy(file) {
  this.copiedFile = this.dir.getFile(file);
  const clipboard = nw.Clipboard.get();
  clipboard.set(this.copiedFile, "text");
}

What does it do? As soon as we obtain the file's full path, we create an instance of nw.Clipboard and save the file path there as text.
So now, after copying a file within the File Explorer, we can switch to an external program (for example, a text editor) and paste the copied path from the clipboard. Transferring graphics It doesn't look very handy, does it? It would be more interesting if we could copy/paste a file. Unfortunately NW.js doesn't give us many options when it comes to file exchange. Yet we can transfer PNG and JPEG images between an NW.js application and external programs. The ./js/Service/File.js file contains the following code:

//...
copyImage(file, type) {
  const clip = nw.Clipboard.get(),
    // load file content as Base64
    data = fs.readFileSync(file).toString("base64"),
    // image as HTML
    html = `<img src="file:///${encodeURI(data.replace(/^\//, ""))}">`;
  // write both options (raw image and HTML) to the clipboard
  clip.set([
    { type, data: data, raw: true },
    { type: "html", data: html }
  ]);
}

copy(file) {
  this.copiedFile = this.dir.getFile(file);
  const ext = path.parse(this.copiedFile).ext.substr(1);
  switch (ext) {
    case "jpg":
    case "jpeg":
      return this.copyImage(this.copiedFile, "jpeg");
    case "png":
      return this.copyImage(this.copiedFile, "png");
  }
}
//...

We extended our FileService with a private method, copyImage. It reads a given file, converts its contents into Base64 and passes the resulting code to a clipboard instance. In addition, it creates HTML with an image tag containing the Base64-encoded image in a data Uniform Resource Identifier (URI). Now after copying an image (PNG or JPEG) in the File Explorer, we can paste it into an external program such as a graphical editor or a text processor. Receiving text and graphics We've learned how to pass text and graphics from our NW.js application to external programs. But how can we receive data from outside? As you can guess, it is accessible through the get method of nw.Clipboard. Text can be retrieved as simply as this:

const clip = nw.Clipboard.get();
console.log(clip.get("text"));

When a graphic is put in the clipboard, we can get it with NW.js only as Base64-encoded content or as HTML. To see it in practice we add a few methods to FileService. The ./js/Service/File.js file contains the following code:

//...
hasImageInClipboard() {
  const clip = nw.Clipboard.get();
  return clip.readAvailableTypes().indexOf("png") !== -1;
}

pasteFromClipboard() {
  const clip = nw.Clipboard.get();
  if (this.hasImageInClipboard()) {
    const base64 = clip.get("png", true),
      binary = Buffer.from(base64, "base64"),
      filename = Date.now() + "--img.png";
    fs.writeFileSync(this.dir.getFile(filename), binary);
    this.dir.notify();
  }
}
//...

The hasImageInClipboard method checks whether the clipboard holds any graphics. The pasteFromClipboard method takes the graphical content from the clipboard as Base64-encoded PNG, converts the content into binary code, writes it into a file and requests that DirService subscribers update. To make use of these methods we need to edit the ContextMenu view. The ./js/View/ContextMenu.js file contains the following code:

getItems(fileName) {
  const file = this.file,
    isCopied = Boolean(file.copiedFile);
  return [
    //...
    {
      label: this.i18n.translate("PASTE_FROM_CLIPBOARD", "Paste image from clipboard"),
      enabled: file.hasImageInClipboard(),
      click: () => file.pasteFromClipboard()
    },
    //...
  ];
}

We add to the menu a new item, Paste image from clipboard, which is enabled only when there is a graphic in the clipboard.
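As a quick check while developing, a sketch like the following can confirm what formats the clipboard actually reports on your platform before you rely on hasImageInClipboard (the types shown in the comment are only an example of possible output):

const clip = nw.Clipboard.get();
// prints the formats currently available, for example: ["text", "html", "png"]
console.log(clip.readAvailableTypes());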
[box type="shadow" align="" class="" width=""]This article is an excerpt from the book Cross Platform Desktop Application Development: Electron, Node, NW.js and React written by Dmitry Sheiko. This book will help you build powerful cross-platform desktop applications with web technologies such as Node, NW.js, Electron, and React.[/box] Read More: How to deploy a Node.js application to the web using Heroku How is Node.js Changing Web Development? With Node.js, it's easy to get things done
Applying Single Responsibility principle from SOLID in .NET Core

Aaron Lazar
07 May 2018
7 min read
In today's tutorial, we'll learn how to apply the Single Responsibility principle from the SOLID principles, to .NET Core Applications. This brings us to another interesting concept in OOP called SOLID design principles. These design principles are applied to any OOP design and are intended to make software easier to understand, more flexible, and easily maintainable. [box type="shadow" align="" class="" width=""]This article is an extract from the book C# 7 and .NET Core Blueprints, authored by Dirk Strauss and Jas Rademeyer. The book is a step-by-step guide that will teach you the essential .NET Core and C# concepts with the help of real-world projects.[/box] The term SOLID is a mnemonic for: Single responsibility principle Open/closed principle Liskov substitution principle Interface segregation principle Dependency inversion principle In this article, we will take a look at the first of the principles—the single responsibility principle. Let's look at the single responsibility principle now. Single responsibility principle Simply put, a module or class should have the following characteristics only: It should do one single thing and only have a single reason to change It should do its one single thing well The functionality provided needs to be entirely encapsulated by that class or module What is meant when saying that a module must be responsible for a single thing? The Google definition of a module is: "Each of a set of standardized parts or independent units that can be used to construct a more complex structure, such as an item of furniture or a building." From this, we can understand that a module is a simple building block. It can be used or reused to create something bigger and more complex when used with other modules. In C# therefore, the module does closely resemble a class, but I will go so far as to say that a module can also be extended to be a method. The function that the class or module performs can only be one thing. That is to say that it has a narrow responsibility. It is not concerned with anything else other than doing that one thing it was designed to do. If we had to apply the single responsibility principle to a person, then that person would be only a software developer, for example. But what if a software developer also was a doctor and a mechanic and a school teacher? Would that person be effective in any of those roles? That would contravene the single responsibility principle. The same is true for code. Having a look at our AllRounder and Batsman classes, you will notice that in AllRounder, we have the following code: private double CalculateStrikeRate(StrikeRate strikeRateType) { switch (strikeRateType) { case StrikeRate.Bowling: return (BowlerBallsBowled / BowlerWickets); case StrikeRate.Batting: return (BatsmanRuns * 100) / BatsmanBallsFaced; default: throw new Exception("Invalid enum"); } } public override int CalculatePlayerRank() { return 0; } In Batsman, we have the following code: public double BatsmanBattingStrikeRate => (BatsmanRuns * 100) / BatsmanBallsFaced; public override int CalculatePlayerRank() { return 0; } Using what we have learned about the single responsibility principle, we notice that there is an issue here. To illustrate the problem, let's compare the code side by side: We are essentially repeating code in the Batsman and AllRounder classes. This doesn't really bode well for single responsibility, does it? I mean, the one principle is that a class must only have a single function to perform. 
At the moment, both the Batsman and AllRounder classes are taking care of calculating strike rates. They also both take care of calculating the player rank. They even both have exactly the same code for calculating the strike rate of a batsman! The problem comes in when the strike rate calculation changes (not that it easily would, but let's assume it does). We now know that we have to change the calculation in both places. As soon as the developer changes one calculation and not the other, a bug is introduced into our application. Let's simplify our classes. In the BaseClasses folder, create a new abstract class called Statistics. The code should look as follows: namespace cricketScoreTrack.BaseClasses { public abstract class Statistics { public abstract double CalculateStrikeRate(Player player); public abstract int CalculatePlayerRank(Player player); } } In the Classes folder, create a new derived class called PlayerStatistics (that is to say it inherits from the Statistics abstract class). The code should look as follows: using cricketScoreTrack.BaseClasses; using System; namespace cricketScoreTrack.Classes { public class PlayerStatistics : Statistics { public override int CalculatePlayerRank(Player player) { return 1; } public override double CalculateStrikeRate(Player player) { switch (player) { case AllRounder allrounder: return (allrounder.BowlerBallsBowled / allrounder.BowlerWickets); case Batsman batsman: return (batsman.BatsmanRuns * 100) / batsman.BatsmanBallsFaced; default: throw new ArgumentException("Incorrect argument supplied"); } } } } You will see that the PlayerStatistics class is now solely responsible for calculating player statistics for the player's rank and the player's strike rate. You will see that I have not included much of an implementation for calculating the player's rank. I briefly commented the code on GitHub for this method on how a player's rank is determined. It is quite a complicated calculation and differs for batsmen and bowlers. I have therefore omitted it for the purposes of this chapter on OOP. Your Solution should now look as follows: Swing back over to your Player abstract class and remove abstract public int CalculatePlayerRank(); from the class. In the IBowler interface, remove the double BowlerStrikeRate { get; } property. In the IBatter interface, remove the double BatsmanBattingStrikeRate { get; } property. In the Batsman class, remove public double BatsmanBattingStrikeRate and public override int CalculatePlayerRank() from the class. The code in the Batsman class will now look as follows: using cricketScoreTrack.BaseClasses; using cricketScoreTrack.Interfaces; namespace cricketScoreTrack.Classes { public class Batsman : Player, IBatter { #region Player public override string FirstName { get; set; } public override string LastName { get; set; } public override int Age { get; set; } public override string Bio { get; set; } #endregion #region IBatsman public int BatsmanRuns { get; set; } public int BatsmanBallsFaced { get; set; } public int BatsmanMatch4s { get; set; } public int BatsmanMatch6s { get; set; } #endregion } } Looking at the AllRounder class, remove the public enum StrikeRate { Bowling = 0, Batting = 1 } enum as well as the public double BatsmanBattingStrikeRate and public double BowlerStrikeRate properties. Lastly, remove the private double CalculateStrikeRate(StrikeRate strikeRateType) and public override int CalculatePlayerRank() methods. 
The code for the AllRounder class now looks as follows: using cricketScoreTrack.BaseClasses; using cricketScoreTrack.Interfaces; using System; namespace cricketScoreTrack.Classes { public class AllRounder : Player, IBatter, IBowler { #region Player public override string FirstName { get; set; } public override string LastName { get; set; } public override int Age { get; set; } public override string Bio { get; set; } #endregion #region IBatsman public int BatsmanRuns { get; set; } public int BatsmanBallsFaced { get; set; } public int BatsmanMatch4s { get; set; } public int BatsmanMatch6s { get; set; } #endregion #region IBowler public double BowlerSpeed { get; set; } public string BowlerType { get; set; } public int BowlerBallsBowled { get; set; } public int BowlerMaidens { get; set; } public int BowlerWickets { get; set; } public double BowlerEconomy => BowlerRunsConceded / BowlerOversBowled; public int BowlerRunsConceded { get; set; } public int BowlerOversBowled { get; set; } #endregion } } Looking back at our AllRounder and Batsman classes, the code is clearly simplified. It is definitely more flexible and is starting to look like a well-constructed set of classes. Give your solution a rebuild and make sure that it is all working. So now you know how to simplify your .NET Core applications by applying the Single Responsibility principle. If you found this tutorial helpful and you'd like to learn more, go ahead and pick up the book C# 7 and .NET Core Blueprints, authored by Dirk Strauss and Jas Rademeyer. What is ASP.NET Core? How to call an Azure function from an ASP.NET Core MVC application How to dockerize an ASP.NET Core application  
Distributed TensorFlow: Working with multiple GPUs and servers

Kunal Chaudhari
07 May 2018
12 min read
Some neural network models are so large that they cannot fit in the memory of a single device (GPU). Such models need to be split over many devices, carrying out the training in parallel on those devices. This means anyone can now scale out distributed training to hundreds of GPUs using TensorFlow. But that's not the only advantage of distributed TensorFlow: you can also massively reduce your experimentation time by running many experiments in parallel on many GPUs and servers. Today, we will discuss distributed TensorFlow and present a number of recipes for working with TensorFlow, GPUs, and multiple servers. Working with TensorFlow and GPUs We will learn how to use TensorFlow with GPUs: the operation performed is a simple matrix multiplication, either on CPU or on GPU. Getting ready The first step is to install a version of TensorFlow that supports GPUs. The official TensorFlow installation instructions are your starting point. Remember that you need an environment supporting GPUs via CUDA and cuDNN. How to do it... We proceed with the recipe as follows: Start by importing a few modules:

import sys
import numpy as np
import tensorflow as tf
from datetime import datetime

Get from the command line the type of processing unit that you desire to use (either "gpu" or "cpu"):

device_name = sys.argv[1]  # Choose device from cmd line. Options: gpu or cpu
shape = (int(sys.argv[2]), int(sys.argv[2]))
if device_name == "gpu":
    device_name = "/gpu:0"
else:
    device_name = "/cpu:0"

Execute the matrix multiplication either on GPU or on CPU. The key instruction is with tf.device(device_name). It creates a new context manager, telling TensorFlow to perform those actions on either the GPU or the CPU:

with tf.device(device_name):
    random_matrix = tf.random_uniform(shape=shape, minval=0, maxval=1)
    dot_operation = tf.matmul(random_matrix, tf.transpose(random_matrix))
    sum_operation = tf.reduce_sum(dot_operation)

startTime = datetime.now()
with tf.Session(config=tf.ConfigProto(log_device_placement=True)) as session:
    result = session.run(sum_operation)
    print(result)

Print some debug timing just to verify the difference between CPU and GPU:

print("Shape:", shape, "Device:", device_name)
print("Time taken:", datetime.now() - startTime)

How it works... This recipe explains how to assign TensorFlow computations either to CPUs or to GPUs. The code is pretty simple and it will be used as a basis for the next recipe.
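As a quick sanity check before timing anything, you can ask TensorFlow which devices it can actually see. The following is a minimal sketch using the device_lib module of TensorFlow 1.x (an internal but commonly used utility):

from tensorflow.python.client import device_lib

# prints one entry per visible device, e.g. "/cpu:0" and "/gpu:0"
for device in device_lib.list_local_devices():
    print(device.name)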
Playing with Distributed TensorFlow: multiple GPUs and one CPU We will show an example of data parallelism where data is split across multiple GPUs. Getting ready This recipe is inspired by a good blog post written by Neil Tenenholtz and available online: https://clindatsci.com/blog/2017/5/31/distributed-tensorflow How to do it... We proceed with the recipe as follows: Consider this piece of code which runs a matrix multiplication on a single GPU:

# single GPU (baseline)
import tensorflow as tf

# place the initial data on the cpu
with tf.device('/cpu:0'):
    input_data = tf.Variable([[1., 2., 3.],
                              [4., 5., 6.],
                              [7., 8., 9.],
                              [10., 11., 12.]])
    b = tf.Variable([[1.], [1.], [2.]])

# compute the result on the 0th gpu
with tf.device('/gpu:0'):
    output = tf.matmul(input_data, b)

# create a session and run
with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    print sess.run(output)

Partition the code with in-graph replication between 2 different GPUs, as in the following snippet. Note that the CPU is acting as the master node, distributing the graph and collecting the final results:

# in-graph replication
import tensorflow as tf

num_gpus = 2

# place the initial data on the cpu
with tf.device('/cpu:0'):
    input_data = tf.Variable([[1., 2., 3.],
                              [4., 5., 6.],
                              [7., 8., 9.],
                              [10., 11., 12.]])
    b = tf.Variable([[1.], [1.], [2.]])

# split the data into chunks for each gpu
inputs = tf.split(input_data, num_gpus)
outputs = []

# loop over available gpus and pass input data
for i in range(num_gpus):
    with tf.device('/gpu:'+str(i)):
        outputs.append(tf.matmul(inputs[i], b))

# merge the results of the devices
with tf.device('/cpu:0'):
    output = tf.concat(outputs, axis=0)

# create a session and run
with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    print sess.run(output)

How it works... This is a very simple recipe where the graph is split in two parts by the CPU acting as master and distributed to two GPUs acting as distributed workers. The result of the computation is collected back to the CPU. Playing with Distributed TensorFlow: multiple servers We will learn how to distribute a TensorFlow computation across multiple servers. The key assumption is that the code is the same for both the workers and the parameter servers. Therefore the role of each computation node is passed by a command-line argument. Getting ready Again, this recipe is inspired by a good blog post written by Neil Tenenholtz and available online: https://clindatsci.com/blog/2017/5/31/distributed-tensorflow How to do it... We proceed with the recipe as follows: Consider this piece of code where we specify the cluster architecture with one master running on 192.168.1.1:1111 and two workers running on 192.168.1.2:1111 and 192.168.1.3:1111 respectively:

import sys
import tensorflow as tf

# specify the cluster's architecture
cluster = tf.train.ClusterSpec({'ps': ['192.168.1.1:1111'],
                                'worker': ['192.168.1.2:1111',
                                           '192.168.1.3:1111']})

Note that the code is replicated on multiple machines and therefore it is important to know the role of the current execution node. This information we get from the command line. A machine can be either a worker or a parameter server (ps):

# parse command-line to specify machine
job_type = sys.argv[1]  # job type: "worker" or "ps"
task_idx = sys.argv[2]  # index job in the worker or ps list
                        # as defined in the ClusterSpec

Run the training server where, given the cluster, we bless each computational node with a role (either worker or ps) and an id:

# create TensorFlow Server. This is how the machines communicate.
server = tf.train.Server(cluster, job_name=job_type, task_index=task_idx)

The computation is different according to the role of the specific computation node. If the role is a parameter server, then the condition is to join the server. Note that in this case there is no code to execute because the workers will continuously push updates and the only thing that the parameter server has to do is wait. Otherwise the worker code is executed on a specific device within the cluster. This part of the code is similar to the one executed on a single machine, where we first build the model and then we train it locally. Note that all the distribution of the work and the collection of the updated results is done transparently by TensorFlow. Note that TensorFlow provides a convenient tf.train.replica_device_setter that automatically assigns operations to devices:
# parameter server is updated by remote clients.
# will not proceed beyond this if statement.
if job_type == 'ps':
    server.join()
else:
    # workers only
    with tf.device(tf.train.replica_device_setter(
            worker_device='/job:worker/task:'+task_idx,
            cluster=cluster)):
        # build your model here as if you only were using a single machine
        with tf.Session(server.target):
            # train your model here
            pass

How it works... We have seen how to create a cluster with multiple computation nodes. A node can play either the role of a parameter server or the role of a worker. In both cases the executed code is the same, but the execution differs according to the parameters collected from the command line. The parameter server only needs to wait until the workers send updates. Note that tf.train.replica_device_setter(..) takes the role of automatically assigning operations to available devices, while tf.train.ClusterSpec(..) is used for cluster setup. There is more... An example of distributed training for MNIST is available online. In addition to that, you can decide to have more than one parameter server for efficiency reasons: using multiple parameter servers can provide better network utilization and allows scaling models to more parallel machines. The interested reader can have a look online. Training a Distributed TensorFlow MNIST classifier In this recipe, we train a full MNIST classifier in a distributed way. The recipe is inspired by the blog post at http://ischlag.github.io/2016/06/12/async-distributed-tensorflow/ and the code, running on TensorFlow 1.2, is available at https://github.com/ischlag/distributed-tensorflow-example Getting ready This recipe is based on the previous one, so it might be convenient to read them in order. How to do it... We proceed with the recipe as follows: Import a few standard modules and define the TensorFlow cluster where the computation is run. Then start a server for a specific task:

import tensorflow as tf
import sys
import time

# cluster specification
parameter_servers = ["pc-01:2222"]
workers = ["pc-02:2222", "pc-03:2222", "pc-04:2222"]
cluster = tf.train.ClusterSpec({"ps": parameter_servers, "worker": workers})

# input flags
tf.app.flags.DEFINE_string("job_name", "", "Either 'ps' or 'worker'")
tf.app.flags.DEFINE_integer("task_index", 0, "Index of task within the job")
FLAGS = tf.app.flags.FLAGS

# start a server for a specific task
server = tf.train.Server(
    cluster,
    job_name=FLAGS.job_name,
    task_index=FLAGS.task_index)

Read the MNIST data and define the hyperparameters used for training:

# config
batch_size = 100
learning_rate = 0.0005
training_epochs = 20
logs_path = "/tmp/mnist/1"

# load mnist data set
from tensorflow.examples.tutorials.mnist import input_data
mnist = input_data.read_data_sets('MNIST_data', one_hot=True)

Check if your role is Parameter Server or Worker. If worker, then define a simple dense neural network, define an optimizer, and the metric used for evaluating the classifier (for example, accuracy).
if FLAGS.job_name == "ps":
    server.join()
elif FLAGS.job_name == "worker":
    # Between-graph replication
    with tf.device(tf.train.replica_device_setter(
            worker_device="/job:worker/task:%d" % FLAGS.task_index,
            cluster=cluster)):

        # count the number of updates
        global_step = tf.get_variable(
            'global_step',
            [],
            initializer=tf.constant_initializer(0),
            trainable=False)

        # input images
        with tf.name_scope('input'):
            # None -> batch size can be any size, 784 -> flattened mnist image
            x = tf.placeholder(tf.float32, shape=[None, 784], name="x-input")
            # target 10 output classes
            y_ = tf.placeholder(tf.float32, shape=[None, 10], name="y-input")

        # model parameters will change during training so we use tf.Variable
        tf.set_random_seed(1)
        with tf.name_scope("weights"):
            W1 = tf.Variable(tf.random_normal([784, 100]))
            W2 = tf.Variable(tf.random_normal([100, 10]))

        # bias
        with tf.name_scope("biases"):
            b1 = tf.Variable(tf.zeros([100]))
            b2 = tf.Variable(tf.zeros([10]))

        # implement model
        with tf.name_scope("softmax"):
            # y is our prediction
            z2 = tf.add(tf.matmul(x, W1), b1)
            a2 = tf.nn.sigmoid(z2)
            z3 = tf.add(tf.matmul(a2, W2), b2)
            y = tf.nn.softmax(z3)

        # specify cost function
        with tf.name_scope('cross_entropy'):
            # this is our cost
            cross_entropy = tf.reduce_mean(
                -tf.reduce_sum(y_ * tf.log(y), reduction_indices=[1]))

        # specify optimizer
        with tf.name_scope('train'):
            # optimizer is an "operation" which we can execute in a session
            grad_op = tf.train.GradientDescentOptimizer(learning_rate)
            train_op = grad_op.minimize(cross_entropy, global_step=global_step)

        with tf.name_scope('Accuracy'):
            # accuracy
            correct_prediction = tf.equal(tf.argmax(y, 1), tf.argmax(y_, 1))
            accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))

        # create a summary for our cost and accuracy
        tf.summary.scalar("cost", cross_entropy)
        tf.summary.scalar("accuracy", accuracy)

        # merge all summaries into a single "operation" which we can execute in a session
        summary_op = tf.summary.merge_all()
        init_op = tf.global_variables_initializer()
        print("Variables initialized ...")

Start a supervisor which acts as the chief machine for the distributed setting. The chief is the worker machine which manages all the rest of the cluster. The session is maintained by the chief and the key instruction is sv = tf.train.Supervisor(is_chief=(FLAGS.task_index == 0)). Also, with prepare_or_wait_for_session(server.target) the supervisor will wait for the model to be ready for use. Note that each worker will take care of different batches, and the final model is then available for the chief.
sv = tf.train.Supervisor(is_chief=(FLAGS.task_index == 0))
begin_time = time.time()
frequency = 100

with sv.prepare_or_wait_for_session(server.target) as sess:
    # create log writer object (this will log on every machine)
    writer = tf.summary.FileWriter(logs_path, graph=tf.get_default_graph())

    # perform training cycles
    start_time = time.time()
    for epoch in range(training_epochs):
        # number of batches in one epoch
        batch_count = int(mnist.train.num_examples / batch_size)
        count = 0
        for i in range(batch_count):
            batch_x, batch_y = mnist.train.next_batch(batch_size)
            # perform the operations we defined earlier on batch
            _, cost, summary, step = sess.run(
                [train_op, cross_entropy, summary_op, global_step],
                feed_dict={x: batch_x, y_: batch_y})
            writer.add_summary(summary, step)
            count += 1
            if count % frequency == 0 or i + 1 == batch_count:
                elapsed_time = time.time() - start_time
                start_time = time.time()
                print("Step: %d," % (step + 1),
                      " Epoch: %2d," % (epoch + 1),
                      " Batch: %3d of %3d," % (i + 1, batch_count),
                      " Cost: %.4f," % cost,
                      "AvgTime: %3.2fms" % float(elapsed_time * 1000 / frequency))
                count = 0

    print("Test-Accuracy: %2.2f" % sess.run(
        accuracy,
        feed_dict={x: mnist.test.images, y_: mnist.test.labels}))
    print("Total Time: %3.2fs" % float(time.time() - begin_time))
    print("Final Cost: %.4f" % cost)

sv.stop()
print("done")

How it works... This recipe describes an example of a distributed MNIST classifier. In this example, TensorFlow allows us to define a cluster of machines: one acts as parameter server and the other machines are used as workers, working on separate batches of the training data. If you enjoyed this excerpt, check out the book TensorFlow 1.x Deep Learning Cookbook, to become an expert in implementing deep learning techniques in real-world applications. Emoji Scavenger Hunt showcases TensorFlow.js The 5 biggest announcements from TensorFlow Developer Summit 2018 Setting up Logistic Regression model using TensorFlow
Implementing 3 Naive Bayes classifiers in scikit-learn

Packt Editorial Staff
07 May 2018
13 min read
Scikit-learn provides three naive Bayes implementations: Bernoulli, multinomial and Gaussian. The only difference is in the probability distribution adopted. The first one is a binary algorithm, particularly useful when a feature can be present or not. Multinomial naive Bayes assumes a feature vector where each element represents the number of times it appears (or, very often, its frequency). This technique is very efficient in natural language processing or whenever the samples are composed starting from a common dictionary. Gaussian naive Bayes, instead, is based on a continuous distribution and it's suitable for more generic classification tasks. Ok, now that we have established that naive Bayes variants are a handy set of algorithms to have in our machine learning arsenal and that scikit-learn is a good tool to implement them, let's rewind a bit. What is Naive Bayes? Naive Bayes classifiers are a family of powerful and easy-to-train classifiers which determine the probability of an outcome, given a set of conditions, using the Bayes' theorem. In other words, the conditional probabilities are inverted so that the query can be expressed as a function of measurable quantities. The approach is simple, and the adjective naive has been attributed not because these algorithms are limited or less efficient, but because of a fundamental assumption about the causal factors that we will discuss. Naive Bayes classifiers are multi-purpose and it's easy to find their application in many different contexts. However, the performance is particularly good in all those situations where the probability of a class is determined by the probabilities of some causal factors. A good example is given by natural language processing, where a text can be considered as a particular instance of a dictionary and the relative frequencies of all terms provide enough information to infer a belonging class. Our examples are kept generic, to let you understand the application of naive Bayes in various contexts. The Bayes' theorem Let's consider two probabilistic events A and B. We can correlate the marginal probabilities P(A) and P(B) with the conditional probabilities P(A|B) and P(B|A) using the product rule:

P(A ∩ B) = P(A|B) P(B)
P(B ∩ A) = P(B|A) P(A)

Considering that the intersection is commutative, the first members are equal, so we can derive the Bayes' theorem:

P(A|B) = P(B|A) P(A) / P(B)

This formula has very deep philosophical implications and it's a fundamental element of statistical learning. First of all, let's consider the marginal probability P(A): this is normally a value that determines how probable a target event is, like P(Spam) or P(Rain). As there are no other elements, this kind of probability is called Apriori, because it's often determined by mathematical considerations or simply by a frequency count. For example, imagine we want to implement a very simple spam filter and we've collected 100 emails. We know that 30 are spam and 70 are regular. So we can say that P(Spam) = 0.3. However, we'd like to evaluate using some criteria (for simplicity, let's consider a single one), for example, e-mail text is shorter than 50 characters. Therefore, our query becomes:

P(Spam | Text < 50 chars) = P(Text < 50 chars | Spam) P(Spam) / P(Text < 50 chars)

The first term is similar to P(Spam) because it's the probability of spam given a certain condition. For this reason, it's called a posteriori (in other words, it's a probability that we can estimate after knowing some additional elements). On the right side, we need to calculate the missing values, but it's simple. Let's suppose that 35 emails have a text shorter than 50 characters, so P(Text < 50 chars) = 0.35, and, looking only into our spam folder, we discover that only 25 spam emails have a short text, so that P(Text < 50 chars | Spam) = 25/30 = 0.83. The result is:

P(Spam | Text < 50 chars) = (0.83 x 0.3) / 0.35 = 0.71

So, after receiving a very short email, there is a 71% probability that it's spam.
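As a quick sanity check, the arithmetic can be reproduced in a few lines of plain Python (no library assumptions, just the counts from the example above):

# P(Spam) = 30/100, P(Text < 50 chars) = 35/100, P(Text < 50 chars | Spam) = 25/30
p_spam = 30 / 100
p_short = 35 / 100
p_short_given_spam = 25 / 30

# Bayes' theorem: P(Spam | Text < 50 chars)
p_spam_given_short = p_short_given_spam * p_spam / p_short
print(round(p_spam_given_short, 2))  # 0.71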
Now we can understand the role of P(Text < 50 chars | Spam): as we have actual data, we can measure how probable our hypothesis is given the query; in other words, we have defined a likelihood (compare this with logistic regression), which is a weight between the Apriori probability and the a posteriori one (the term in the denominator is less important because it works as a normalizing factor):

P(A|B) ∝ P(B|A) P(A)

The normalization factor is often represented by the Greek letter alpha, so the formula becomes:

P(A|B) = α P(B|A) P(A)

The last step is considering the case when there are more concurrent conditions (that is more realistic in real-life problems):

P(A | C1, C2, ..., Cn)

A common assumption is called conditional independence (in other words, the effects produced by every cause are independent of each other) and allows us to write a simplified expression:

P(A | C1, C2, ..., Cn) = α P(A) P(C1|A) P(C2|A) ... P(Cn|A)

Naive Bayes classifiers A naive Bayes classifier is called this way because it's based on a naive condition, which implies the conditional independence of causes. This can seem very difficult to accept in many contexts where the probability of a particular feature is strictly correlated to another one. For example, in spam filtering, a text shorter than 50 characters can increase the probability of the presence of an image, or if the domain has been already blacklisted for sending the same spam emails to millions of users, it's likely to find particular keywords. In other words, the presence of a cause isn't normally independent from the presence of other ones. However, in Zhang H., The Optimality of Naive Bayes, AAAI 1, no. 2 (2004): 3, the author showed that under particular conditions (not so rare to happen), different dependencies clear one another, and a naive Bayes classifier succeeds in achieving very high performance even if its naiveness is violated. Let's consider a dataset of n feature vectors:

X = {x1, x2, ..., xn}

Every feature vector, for simplicity, will be represented with m components:

x = (x1, x2, ..., xm)

We also need a target dataset:

Y = {y1, y2, ..., yn}

where each y can belong to one of P different classes. Considering the Bayes' theorem under conditional independence, we can write:

P(y | x1, x2, ..., xm) = α P(y) P(x1|y) P(x2|y) ... P(xm|y)

The values of the marginal Apriori probability P(y) and of the conditional probabilities P(xi|y) are obtained through a frequency count; therefore, given an input vector x, the predicted class is the one whose a posteriori probability is maximum. Naive Bayes in scikit-learn scikit-learn implements three naive Bayes variants based on the same number of different probabilistic distributions: Bernoulli, multinomial, and Gaussian. The first one is a binary distribution, useful when a feature can be present or absent. The second one is a discrete distribution used whenever a feature must be represented by a whole number (for example, in natural language processing, it can be the frequency of a term), while the latter is a continuous distribution characterized by its mean and variance. Bernoulli naive Bayes If X is a random variable which is Bernoulli-distributed, it can assume only two values (for simplicity, let's call them 0 and 1) and their probability is:

P(X = 1) = p, P(X = 0) = 1 - p

To try this algorithm with scikit-learn, we're going to generate a dummy dataset.
Bernoulli naive Bayes expects binary feature vectors; however, the class BernoulliNB has a binarize parameter, which allows specifying a threshold that will be used internally to transform the features:

from sklearn.datasets import make_classification

>>> nb_samples = 300
>>> X, Y = make_classification(n_samples=nb_samples, n_features=2, n_informative=2, n_redundant=0)

We have generated a bidimensional dataset. We have decided to use 0.0 as a binary threshold, so each point can be characterized by the quadrant where it's located. Of course, this is a rational choice for our dataset, but Bernoulli naive Bayes is designed for binary feature vectors or continuous values which can be precisely split with a predefined threshold:

from sklearn.naive_bayes import BernoulliNB
from sklearn.model_selection import train_test_split

>>> X_train, X_test, Y_train, Y_test = train_test_split(X, Y, test_size=0.25)
>>> bnb = BernoulliNB(binarize=0.0)
>>> bnb.fit(X_train, Y_train)
>>> bnb.score(X_test, Y_test)
0.85333333333333339

The score is rather good, but if we want to understand how the binary classifier worked, it's useful to see how the data have been internally binarized. Now, checking the naive Bayes predictions, we obtain:

>>> data = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
>>> bnb.predict(data)
array([0, 0, 1, 1])

Which is exactly what we expected. Multinomial naive Bayes A multinomial distribution is useful to model feature vectors where each value represents, for example, the number of occurrences of a term or its relative frequency. If the feature vectors have n elements and each of them can assume k different values with probability pk, then:

P(x1, x2, ..., xk) = n! / (x1! x2! ... xk!) * p1^x1 * p2^x2 * ... * pk^xk

The conditional probabilities P(xi|y) are computed with a frequency count (which corresponds to applying a maximum likelihood approach), but in this case, it's important to consider the alpha parameter (called the Laplace smoothing factor), whose default value is 1.0 and which prevents the model from setting null probabilities when the frequency is zero. It's possible to assign any non-negative value; however, larger values will assign higher probabilities to the missing features, and this choice could alter the stability of the model. In our example, we're going to consider the default value of 1.0. For our purposes, we're going to use DictVectorizer. There are automatic instruments to compute the frequencies of terms, but we're going to discuss them later. Let's consider only two records: the first one representing a city, while the second one the countryside. Our dictionary contains hypothetical frequencies, as if the terms were extracted from a text description:

from sklearn.feature_extraction import DictVectorizer

>>> data = [
...   {'house': 100, 'street': 50, 'shop': 25, 'car': 100, 'tree': 20},
...   {'house': 5, 'street': 5, 'shop': 0, 'car': 10, 'tree': 500, 'river': 1}
... ]
>>> dv = DictVectorizer(sparse=False)
>>> X = dv.fit_transform(data)
>>> Y = np.array([1, 0])
>>> X
array([[ 100., 100., 0., 25., 50., 20.],
       [ 10., 5., 1., 0., 5., 500.]])

Note that the term 'river' is missing from the first set, so it's useful to keep alpha equal to 1.0 to give it a small probability. The output classes are 1 for city and 0 for the countryside. Now we can train a MultinomialNB instance:

from sklearn.naive_bayes import MultinomialNB

>>> mnb = MultinomialNB()
>>> mnb.fit(X, Y)
MultinomialNB(alpha=1.0, class_prior=None, fit_prior=True)

To test the model, we create a dummy city with a river and a dummy country place without any river.
To test the model, we create a dummy city with a river and a dummy countryside place without any river:

>>> test_data = [
 {'house': 80, 'street': 20, 'shop': 15, 'car': 70, 'tree': 10, 'river': 1},
 {'house': 10, 'street': 5, 'shop': 1, 'car': 8, 'tree': 300, 'river': 0}
]

>>> mnb.predict(dv.transform(test_data))
array([1, 0])

As expected, the prediction is correct. Later on, when discussing some elements of natural language processing, we're going to use multinomial naive Bayes for text classification with larger corpora. Even if the multinomial distribution is based on the number of occurrences, it can be successfully used with frequencies or more complex functions.

Gaussian naive Bayes

Gaussian naive Bayes is useful when working with continuous values whose probabilities can be modeled using a Gaussian distribution:

P(x) = (1 / sqrt(2πσ²)) exp(-(x - μ)² / 2σ²)

The conditional probabilities P(xi|y) are also Gaussian distributed; therefore, it's necessary to estimate the mean and variance of each of them using the maximum likelihood approach. This is quite easy; in fact, considering the properties of a Gaussian, we work with the log-likelihood:

L(μ, σ²; xi) = Σk log P(xi(k)|y)

where the k index refers to the samples in our dataset and P(xi|y) is a Gaussian itself. By maximizing this expression (in Russell S., Norvig P., Artificial Intelligence: A Modern Approach, Pearson, there's a complete analytical explanation), we get the mean and variance for each Gaussian associated with P(xi|y), and the model is hence trained.

As an example, we compare Gaussian naive Bayes with logistic regression using ROC curves. The dataset has 300 samples with two features. Each sample belongs to a single class:

from sklearn.datasets import make_classification

>>> nb_samples = 300
>>> X, Y = make_classification(n_samples=nb_samples, n_features=2, n_informative=2, n_redundant=0)

A plot of the dataset is shown in the following figure:

Now we can train both models and generate the ROC curves (the Y scores for naive Bayes are obtained through the predict_proba method):

from sklearn.naive_bayes import GaussianNB
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_curve, auc
from sklearn.model_selection import train_test_split

>>> X_train, X_test, Y_train, Y_test = train_test_split(X, Y, test_size=0.25)

>>> gnb = GaussianNB()
>>> gnb.fit(X_train, Y_train)
>>> Y_gnb_score = gnb.predict_proba(X_test)

>>> lr = LogisticRegression()
>>> lr.fit(X_train, Y_train)
>>> Y_lr_score = lr.decision_function(X_test)

>>> fpr_gnb, tpr_gnb, thresholds_gnb = roc_curve(Y_test, Y_gnb_score[:, 1])
>>> fpr_lr, tpr_lr, thresholds_lr = roc_curve(Y_test, Y_lr_score)

The resulting ROC curves are shown in the following figure:

Naive Bayes performance is slightly better than logistic regression; however, the two classifiers have similar accuracy and Area Under the Curve (AUC).

It's interesting to compare the performance of Gaussian and multinomial naive Bayes on the scikit-learn digits dataset. Each sample (belonging to one of 10 classes) is an 8x8 image encoded as unsigned integers (pixel values from 0 to 16); therefore, even if each feature doesn't represent an actual count, it can be considered as a sort of magnitude or frequency:

from sklearn.datasets import load_digits
from sklearn.model_selection import cross_val_score

>>> digits = load_digits()

>>> gnb = GaussianNB()
>>> mnb = MultinomialNB()

>>> cross_val_score(gnb, digits.data, digits.target, scoring='accuracy', cv=10).mean()
0.81035375835678214

>>> cross_val_score(mnb, digits.data, digits.target, scoring='accuracy', cv=10).mean()
0.88193962163008377

The multinomial naive Bayes performs better than the Gaussian variant, and the result is not really surprising. In fact, each sample can be thought of as a feature vector derived from a dictionary of 64 symbols. The value can be the count of each occurrence, so a multinomial distribution can better fit the data, while a Gaussian is slightly more limited by its mean and variance.
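This intuition is easy to verify (a small check added here, not in the original text): the pixel intensities are small non-negative integers, which behave very much like per-symbol counts:

>>> digits.data.min(), digits.data.max()
(0.0, 16.0)

>>> digits.data.shape
(1797, 64)

Each of the 1,797 samples is therefore a 64-dimensional vector of count-like values, which is exactly the kind of input the multinomial model assumes.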
We've presented the generic naive Bayes approach, starting from Bayes' theorem and its intrinsic philosophy. The naiveness of such an algorithm is due to the decision to assume that all the causes are conditionally independent. This means that each contribution is the same in every combination and the presence of a specific cause cannot alter the probability of the other ones. This is not often realistic; however, under some assumptions, it's possible to show that internal dependencies cancel each other out, so that the resulting probability appears unaffected by their relations.

You read an excerpt from the book, Machine Learning Algorithms, written by Giuseppe Bonaccorso. This book will help you build a strong foundation to enter the world of machine learning and data science. You will learn to build a data model and see how it behaves using different ML algorithms, explore support vector machines, recommendation systems, and even create a machine learning architecture from scratch. Grab your copy today!

What is Naïve Bayes classifier?
Machine Learning Algorithms: Implementing Naive Bayes with Spark MLlib
Implementing Apache Spark MLlib Naive Bayes to classify digital breath test data for drunk driving

Kunal Chaudhari
07 May 2018
12 min read

Unit Testing in .NET Core with Visual Studio 2017 for better code quality

The famous Java programmer, Bruce Eckel, came up with a slogan which highlights the importance of testing software: If it ain't tested, it's broken. Though a confident programmer may challenge this, it beautifully highlights the ability to determine that code works as expected over and over again, without any exception.

How do we know that the code we are shipping to the end user or customer is of good quality and that all the user requirements will work? By testing? Yes, by testing, we can be confident that the software works as per customer requirements and specifications. If there are any discrepancies between expected and actual behavior, they are referred to as bugs/defects in the software. The earlier the discrepancies are caught, the more easily they can be fixed before the software is shipped, and the better the resulting quality. No wonder software testers are also referred to as quality control analysts in various software firms. The mantra for a good software tester is: In God we trust, the rest we test.

In this article, we will understand the testing deployment model of .NET Core applications and the Live Unit Testing feature of Visual Studio 2017. We will look briefly at the types of testing methods and write our own unit tests, which a software developer must write after writing any program.

Software testing is conducted at various levels:

Unit testing: While coding, the developer conducts tests on a unit of a program to validate that the code they have written is error-free. We will write a few unit tests shortly.

Integration testing: In a team where a number of developers are working, there may be different components that the developers are working on. Even if all developers perform unit testing and ensure that their units are working fine, there is still a need to ensure that, upon integration of these components, they work without any error. This is achieved through integration testing.

System testing: The entire software product is tested as a whole. This is accomplished using one or more of the following:

 - Functionality testing: Tests all the functionality of the software against the business requirement document.
 - Performance testing: Tests how performant the software is. It measures the average time, resource utilization, and so on, taken by the software to complete a desired business use case. This is done by means of load testing and stress testing, where the software is put under high user and data load.
 - Security testing: Tests how secure the software is against common and well-known security threats.
 - Accessibility testing: Tests whether the user interface is accessible and user-friendly to specially-abled people or not.

User acceptance testing: When the software is ready to be handed over to the customer, it goes through a round of testing by the customer for user interaction and response.

Regression testing: Whenever a piece of code is added/updated in the software to add new functionality or fix existing functionality, it is tested to detect any side-effects from the newly added/updated code.

Of all these different types of testing, we will focus on unit testing, as it is done by the developer while coding the functionality.

Unit testing

.NET Core has been designed with testability in mind. .NET Core 2.0 has unit test project templates for VB, F#, and C#. We can also pick the testing framework of our choice amongst xUnit, NUnit, and MSTest. Unit tests that test single programming parts are the most minimal-level tests.
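Before getting into conventions, here is what such a minimal-level test looks like in xUnit (a hypothetical sketch; the Calculator class is invented purely for illustration):

using Xunit;

// A trivial, hypothetical class under test
public class Calculator
{
    public int Add(int a, int b) => a + b;
}

// Its unit test: one small, isolated behavior is verified
public class CalculatorTest
{
    [Fact]
    public void Add_TwoNumbers_ReturnsTheirSum()
    {
        var calculator = new Calculator();
        Assert.Equal(5, calculator.Add(2, 3));
    }
}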
Unit tests should just test code inside the developer's control and ought not to test infrastructure concerns, for example, databases, filesystems, or network resources. Unit tests may be composed using test-driven development (TDD), or they can be added to existing code to confirm its correctness.

By naming convention, test class names should end with Test and reside in the same namespace as the class being tested. For instance, the unit tests for the Microsoft.Example.AspNetCore class would be in the Microsoft.Example.AspNetCoreTest class in the test assembly. Also, unit test method names must be descriptive about what is being tested, under what conditions, and what the expectations are.

A good unit test has three main parts, in the following specified order:

 1. Arrange
 2. Act
 3. Assert

We first arrange the code, then act on it, and then do a series of asserts to check if the actual output matches the expected output. Let's have a look at them in detail:

Arrange: All the parameter building and method invocations needed for making the call in the act section must be declared in the arrange section.

Act: The act stage should be one statement and as simple as possible. This one statement should be a call to the method that we are trying to test.

Assert: The only reason a method invocation may fail is if the method itself throws an exception; otherwise, there should always be some state change or output from any meaningful method invocation. When we write the act statement, we anticipate an output and then do assertions to check whether the actual output is the same as expected. If the method under test should throw an exception under normal circumstances, we can do assertions on the type of exception and the error message that should be thrown.

While writing unit test cases, we should be watchful that we don't inject any dependencies on the infrastructure. Infrastructure dependencies should be taken care of in integration test cases, not in unit tests. We can avoid these hidden dependencies in our application code by following the Explicit Dependencies Principle and utilizing Dependency Injection to request our dependencies from the framework. We can likewise keep our unit tests in a separate project from our integration tests and guarantee that our unit test project doesn't have references to the infrastructure.

Testing using xUnit

In this section, we will learn to write unit and integration tests for our controllers. There are a number of options available to us for choosing the test framework. We will use xUnit for all our unit tests and Moq for mocking objects. Let's create an xUnit test project by doing the following:

 1. Open the Let's Chat project in Visual Studio 2017
 2. Create a new folder named Test
 3. Right-click the Test folder and click Add | New Project
 4. Select xUnit Test Project (.NET Core) under Visual C# project templates, as shown here:
 5. Delete the default test class that gets created with the template
 6. Create a test class AuthenticationControllerTest inside this project for the unit tests

We need to add some NuGet packages. Right-click the project in VS 2017 to edit the project file and add the references manually, or use the NuGet Package Manager to add these packages (the <PackageReference> entries go inside an <ItemGroup> element of the test project's .csproj file):

// This package contains dependencies to ASP.NET Core
<PackageReference Include="Microsoft.AspNetCore.All" Version="2.0.0" />
// This package is useful for integration testing, to build a test host for the project to test.
<PackageReference Include="Microsoft.AspNetCore.TestHost" Version="2.0.0" />
// Moq is used to create fake objects
<PackageReference Include="Moq" Version="4.7.63" />
With this, we are now ready to write our unit tests. Let's start doing this, but before we do that, here's some quick theory about xUnit and Moq.

The documentation from the xUnit website and Wikipedia tells us that xUnit.net is a free, open source, community-focused unit testing tool for the .NET Framework. It is the latest technology for unit testing C#, F#, Visual Basic .NET, and other .NET languages. All xUnit frameworks share the following basic component architecture:

Test runner: An executable program that runs tests implemented using an xUnit framework and reports the test results.

Test case: The most elementary class. All unit tests inherit from here.

Test fixtures: Test fixtures (also known as a test context) are the set of preconditions or state needed to run a test. The developer should set up a known good state before the tests, and return to the original state after the tests.

Test suites: A set of tests that all share the same fixture. The order of the tests shouldn't matter.

xUnit.net includes support for two different major types of unit test:

Facts: Tests which are always true. They test invariant conditions, that is, data-independent tests.

Theories: Tests which are only true for a particular set of data (see the short sketch that follows).
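To make the distinction concrete, here is a small, hypothetical example (not from the chapter's project) showing a fact next to a data-driven theory:

using Xunit;

public class MathTest
{
    // A fact: an invariant, data-independent condition
    [Fact]
    public void Abs_OfNegativeOne_IsOne()
    {
        Assert.Equal(1, System.Math.Abs(-1));
    }

    // A theory: runs once per [InlineData] row
    [Theory]
    [InlineData(2, 3, 5)]
    [InlineData(-1, 1, 0)]
    [InlineData(0, 0, 0)]
    public void Add_ReturnsExpectedSum(int a, int b, int expected)
    {
        Assert.Equal(expected, a + b);
    }
}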
Moq is a mocking framework for C#/.NET. It is used in unit testing to isolate the class under test from its dependencies and ensure that the proper methods on the dependent objects are being called. Recall that in unit tests, we only test a unit or a layer/part of the software in isolation and hence do not bother about external dependencies, so we assume they work fine and just mock them using the mocking framework of our choice.

Let's put this theory into action by writing a unit test for the following action in AuthenticationController:

public class AuthenticationController : Controller
{
    private readonly ILogger<AuthenticationController> logger;

    public AuthenticationController(ILogger<AuthenticationController> logger)
    {
        this.logger = logger;
    }

    [Route("signin")]
    public IActionResult SignIn()
    {
        logger.LogInformation($"Calling {nameof(this.SignIn)}");
        return Challenge(new AuthenticationProperties { RedirectUri = "/" });
    }
}

The unit test code depends on how the method to be tested is written. To understand this, let's write a unit test for the SignIn action. To test the SignIn method, we need to invoke the SignIn action in AuthenticationController. To do so, we need an instance of the AuthenticationController class, on which the SignIn action can be invoked. To create the instance of AuthenticationController, we need a logger object, as the AuthenticationController constructor expects it as a parameter. Since we are only testing the SignIn action, we do not bother about the logger, and so we can mock it. Let's do it:

/// <summary>
/// Authentication Controller Unit Test - Notice the naming convention {ControllerName}Test
/// </summary>
public class AuthenticationControllerTest
{
    /// <summary>
    /// Mock the dependency needed to initialize the controller.
    /// </summary>
    private Mock<ILogger<AuthenticationController>> mockedLogger = new Mock<ILogger<AuthenticationController>>();

    /// <summary>
    /// Tests the SignIn action.
    /// </summary>
    [Fact]
    public void SignIn_Pass_Test()
    {
        // Arrange - Initialize the controller. Notice the mocked logger object passed as the parameter.
        var controller = new AuthenticationController(mockedLogger.Object);

        // Act - Invoke the method to be tested.
        var actionResult = controller.SignIn();

        // Assert - Make assertions if actual output is the same as expected output.
        Assert.NotNull(actionResult);
        Assert.IsType<ChallengeResult>(actionResult);
        Assert.Equal(1, ((ChallengeResult)actionResult).Properties.Items.Count);
    }
}

Reading the comments will explain the unit test code. The previous example shows how easy it is to write a unit test. Agreed, depending on the method to be tested, things can get complicated, but the complexity is likely to be around mocking the objects and, with some experience of the mocking framework and a bit of searching around, mocking should not be a difficult task. The unit test for the SignOut action would be a bit more complicated in terms of mocking, as it uses HttpContext. The unit test for the SignOut action is left to the reader as an exercise.

Let's explore a new feature introduced in Visual Studio 2017 called Live Unit Testing.

Live Unit Testing

It may disappoint you, but Live Unit Testing (LUT) is available only in the Visual Studio 2017 Enterprise edition and not in the Community edition. What is Live Unit Testing? It's a new productivity feature, introduced in the Visual Studio 2017 Enterprise edition, that provides real-time feedback directly in the Visual Studio editor on how code changes are impacting unit tests and code coverage. All this happens live, while you write the code, and hence it is called Live Unit Testing. This will help in maintaining quality by keeping tests passing as changes are made. It will also remind us when we need to write additional unit tests as we make bug fixes or add features.

To start Live Unit Testing:

 1. Go to the Test menu item
 2. Click Live Unit Testing
 3. Click Start, as shown here:

On clicking this, your CPU usage may go higher, as Visual Studio spawns the MSBuild and test runner processes in the background. In a short while, the editor will display the code coverage of the individual lines of code that are covered by the unit test. The following image displays the lines of code in AuthenticationController that are covered by the unit test. On clicking the right icon, it displays the tests covering this line of code and also provides the option to run and debug the test.

Similarly, if we open the test file, it will show the indicator there as well. Super cool, right! If we navigate to Test | Live Unit Testing now, we will see the options to Stop and Pause. So, in case we wish to save our resources after getting the data once, we can pause or stop Live Unit Testing.

There are numerous icons which indicate the code coverage status of individual lines of code. These are:

 - Red cross: Indicates that the line is covered by at least one failing test
 - Green check mark: Indicates that the line is covered by only passing tests
 - Blue dash: Indicates that the line is not covered by any test

If you see a clock-like icon just below any of these icons, it indicates that the data is not up to date.

With this productivity-enhancing feature, we conclude our discussion on basic unit testing. Next, we will learn about containers and how we can do the deployment and testing of our .NET Core 2.0 applications in containers.

To summarize, we learned the importance of testing and how we can write unit tests using Moq and xUnit. We saw a new productivity-enhancing feature introduced in the Visual Studio 2017 Enterprise edition called Live Unit Testing and how it helps us write better-quality code.
You read an excerpt from a book written by Rishabh Verma and Neha Shrivastava, titled .NET Core 2.0 By Example. This book will help you build cross-platform solutions with .NET Core 2.0 through real-life scenarios.

More on Testing:

Unit Testing and End-To-End Testing
Testing RESTful Web Services with Postman
Selenium and data-driven testing: Interview insights