
How-To Tutorials


The orchestration service for OpenStack

Packt
24 Sep 2015
6 min read
This article by Adnan Ahmed, the author of the book OpenStack Orchestration, discusses the orchestration service for OpenStack.

Orchestration is a core feature provided and supported by OpenStack. It is used to orchestrate cloud resources, including applications, disk resources, IP addresses, load balancers, and so on. Heat contains a template engine that supports text files in which cloud resources are defined. These text files use a special format compatible with Amazon CloudFormation. A new OpenStack-native standard for orchestration templates, called HOT (Heat Orchestration Template), has also been developed (a minimal example template appears at the end of this article). Heat provides two types of clients: a command-line client and a web-based client integrated into the OpenStack dashboard.

The orchestration project (Heat) itself is composed of several subcomponents, listed as follows:

- Heat
- Heat engine
- Heat API
- Heat API-CFN

Heat uses the term stack to define a group of services, resources, parameter inputs, constraints, and dependencies. A stack can be defined using a text file; the important point is to use the correct format. The JSON format used by AWS CloudFormation is also supported by Heat.

Heat workflow

Heat provides two types of interfaces: a web-based interface integrated into the OpenStack dashboard, and a command-line interface (CLI) that can be used from inside a Linux shell. The interfaces use the Heat API to send commands to the Heat engine via the messaging service (for example, RabbitMQ). A metering service such as Ceilometer or the CloudWatch API is used to monitor the performance of resources in the stack. These monitoring/metering services can trigger actions when a certain threshold is reached. An example of this could be automatically launching a redundant web server behind a load balancer when the CPU load on the primary web server rises above 90 percent.

The orchestration authorization model

The Heat component of OpenStack uses an authorization model composed of mainly two types:

- Password-based authorization
- Authorization based on OpenStack identity trusts

This process is known as orchestration authorization.

Password authorization

In this type of authorization, a password is expected from the user. This password must match the password stored by the Heat engine in a database in encrypted form. The following are the steps used to generate a username/password:

- A request is made to the Heat engine for a token or an authorization password. Normally, the Heat command-line client or the dashboard is used.
- The validation checks will fail if the stack contains any resources under deferred operations.
- If everything is normal, then a username/password is provided.
- The username/password is stored in the database in encrypted form.

In some cases, the Heat engine, after obtaining the credentials, requests another token on the user's behalf, and thereafter access to all of the stack owner's roles is provided.

Keystone trusts authorization

Keystone trusts are extensions to the OpenStack identity service that enable delegation of resources. There are two parties to a trust: the trustor, who is the user delegating authority, and the trustee, who is the user to whom authority is delegated.
The following information from the trustor is required by the identity service to delegate to a trustee:

- The ID of the trustee (the user being delegated to; in the case of Heat, this will be the Heat service user)
- The roles to be delegated (the roles are configured in the Heat configuration file; for example, the role needed to launch a new instance for auto-scaling when a threshold is reached)

Trusts authorization execution

Trust-based authorization is executed as part of the same flow as creating a stack via an API request. A token is used to create a trust between the stack owner (the trustor) and the Heat service user (the trustee in this case). A special role is delegated. This role must be predefined in the trusts_delegated_roles list inside the heat.conf file. By default, all the roles available to the trustor are made available to the trustee, unless this is modified using a local RBAC policy. The trust ID is stored in an encrypted form in the database and is retrieved from the database whenever an operation requires it.

Authorization model configuration

Heat supported password-based authorization by default up to the Kilo release of OpenStack. In the Kilo release, the following changes can be made to the Heat configuration file to enable trusts-based authorization:

- Default setting in heat.conf: deferred_auth_method=password
- Replace it with the following to enable trusts-based authentication: deferred_auth_method=trusts
- The following parameter needs to be set to specify the trustor roles: trusts_delegated_roles =

As mentioned earlier, all roles available to the trustor will be assigned to the trustee if no specific roles are mentioned in the heat.conf file.

Stack domain users

The Heat stack domain user is used to authorize a user to carry out certain operations inside a virtual machine. Agents running inside virtual machine instances are provided with metadata. These agents report and share the performance statistics of the VM on which they are running, and they use this metadata to apply any changes or configuration expressed in the metadata. A signal is passed to the Heat engine when an event completes, either successfully or with a failed status. A typical example could be generating an alert when the installation of an application completes on a specific virtual machine after its first reboot. Heat provides features for encapsulating all the stack-defined users into a separate domain. This domain is usually created to store the information related to the Heat service. A domain admin is created, which is used by Heat for the management of the stack domain users.

Summary

In this article, we learned that Heat is the orchestration service for OpenStack. We learned about the Heat authorization models, including password authorization and Keystone trusts authorization, and how these models work. For more information on OpenStack, you can visit:

https://www.packtpub.com/virtualization-and-cloud/mastering-openstack
https://www.packtpub.com/virtualization-and-cloud/openstack-essentials

Further resources on this subject: Using OpenStack Swift, Installing OpenStack Swift, Securing OpenStack Networking
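To make the HOT format mentioned at the beginning of this article concrete, here is a minimal, illustrative template sketch. It is not taken from the book, and the resource name, image, and flavor values are placeholders that would need to match what is available in your cloud:

heat_template_version: 2013-05-23

description: A minimal illustrative stack containing a single compute instance

parameters:
  flavor:
    type: string
    default: m1.small

resources:
  web_server:
    type: OS::Nova::Server
    properties:
      image: cirros-0.3.4-x86_64   # placeholder image name
      flavor: { get_param: flavor }

outputs:
  server_ip:
    description: First IP address of the instance
    value: { get_attr: [web_server, first_address] }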


Creating a JEE Application with EJB

Packt
24 Sep 2015
11 min read
In this article by Ram Kulkarni, author of Java EE Development with Eclipse (e2), we will be using EJBs (Enterprise Java Beans) to implement business logic. This is ideal in scenarios where you want components that process business logic to be distributed across different servers. But that is just one of the advantages of EJB. Even if you use EJBs on the same server as the web application, you may gain from a number of services that the EJB container provides to the applications through EJBs. You can specify security constraints for calling EJB methods declaratively (using annotations), and you can also easily specify transaction boundaries (specify which method calls from a part of one transaction) using annotations. In addition to this, the container handles the life cycle of EJBs, including pooling of certain types of EJB objects so that more objects can be created when the load on the application increases. (For more resources related to this topic, see here.) In this article, we will create the same application using EJBs and deploy it in a Glassfish 4 server. But before that, you need to understand some basic concepts of EJBs. Types of EJB EJBs can be of following types as per the EJB 3 specifications: Session bean: Stateful session bean Stateless session bean Singleton session bean Message-driven bean In this article, we will focus on session beans. Session beans In general, session beans are meant for containing methods used to execute the main business logic of enterprise applications. Any Plain Old Java Object (POJO) can be annotated with the appropriate EJB-3-specific annotations to make it a session bean. Session beans come in three types, as follows. Stateful session bean One stateful session bean serves requests for one client only. There is a one-to-one mapping between the stateful session bean and the client. Therefore, stateful beans can hold state data for the client between multiple method calls. In our CourseManagement application, we can use a stateful bean to hold the Student data (student profile and the courses taken by him/her) after a student logs-in. The state maintained by the Stateful bean is lost when the server restarts or when the session times out. Since there is one stateful bean per client, using a stateful bean might impact the scalability of the application. We use the @Stateful annotation to create a stateful session bean. Stateless session bean A stateless session bean does not hold any state information for any client. Therefore, one session bean can be shared across multiple clients. The EJB container maintains pools of stateless beans, and when a client request comes, it takes out a bean from the pool, executes methods, and returns the bean to the pool. Stateless session beans provide excellent scalability because they can be shared and need not be created for each client. We use the @Stateless annotation to create a stateless session bean. Singleton session bean As the name suggests, there is only one instance of a singleton bean class in the EJB container (this is true in the clustered environment too; each EJB container will have an instance of a singleton bean). This means that they are shared by multiple clients, and they are not pooled by EJB containers (because there can be only one instance). Since a singleton session bean is a shared resource, we need to manage concurrency in it. Java EE provides two concurrency management options for singleton session beans: container-managed concurrency and bean-managed concurrency. 
Container-managed concurrency can easily be specified by annotations. See https://docs.oracle.com/javaee/7/tutorial/ejb-basicexamples002.htm#GIPSZ for more information on managing concurrency in a singleton session bean. Using a singleton bean could have an impact on the scalability of the application if there are resource contentions in the code. We use the @Singleton annotation to create a singleton session bean Accessing a session bean from the client Session beans can be designed to be accessed locally (within the same application as a session bean) or remotely (from a client running in a different application or JVM) or both. In the case of remote access, session beans are required to implement a remote interface. For local access, session beans can implement a local interface or no interface (the no-interface view of a session bean). Remote and local interfaces that session beans implement are sometimes also called business interfaces, because they typically expose the primary business functionality. Creating a no-interface session bean To create a session bean with a no-interface view, create a POJO and annotate it with the appropriate EJB annotation type and @LocalBean. For example, we can create a local stateful Student bean as follows: import javax.ejb.LocalBean; import javax.ejb.Singleton; @Singleton @LocalBean public class Student { ... } Accessing a session bean using dependency injection You can access session beans by either using the @EJBannotation (for dependency injection) or performing a Java Naming and Directory Interface (JNDI) lookup. EJB containers are required to make the JNDI URLs of EJBs available to clients. Dependency injection of session beans using @EJB work only for managed components, that is, components of the application whose life cycle is managed by the EJB container. When a component is managed by the container, it is created (instantiated) by the container and also destroyed by the container. You do not create managed components using the new operator. JEE-managed components that support direct injection of EJBs are servlets, managed beans of JSF pages and EJBs themselves (one EJB can have other EJBs injected into it). Unfortunately, you cannot have a web container injecting EJBs into JSPs or JSP beans. Also, you cannot have EJBs injected into any custom classes that you create and are instantiated using the new operator. We can use the Student bean (created previously) from a managed bean of JSF, as follows: import javax.ejb.EJB; import javax.faces.bean.ManagedBean; @ManagedBean public class StudentJSFBean { @EJB private Student studentEJB; } Note that if you create an EJB with a no-interface view, then all the public methods in that EJB will be exposed to the clients. If you want to control which methods can be called by clients, then you should implement the business interface. Creating a session bean using a local business interface A business interface for EJB is a simple Java interface with either the @Remote or @Local annotation. 
So we can create a local interface for the Student bean as follows: import java.util.List; import javax.ejb.Local; @Local public interface StudentLocal { public List<Course> getCourses(); } We implement a session bean like this: import java.util.List; import javax.ejb.Local; import javax.ejb.Stateful; @Stateful @Local public class Student implements StudentLocal { @Override public List<CourseDTO> getCourses() { //get courses are return … } } Clients can access the Student EJB only through the local interface: import javax.ejb.EJB; import javax.faces.bean.ManagedBean; @ManagedBean public class StudentJSFBean { @EJB private StudentLocal student; } The session bean can implement multiple business interfaces. Accessing a session bean using a JNDI lookup Though accessing EJB using dependency injection is the easiest way, it works only if the container manages the class that accesses the EJB. If you want to access EJB from a POJO that is not a managed bean, then dependency injection will not work. Another scenario where dependency injection does not work is when EJB is deployed in a separate JVM (this could be on a remote server). In such cases, you will have to access EJB using a JNDI lookup (visit https://docs.oracle.com/javase/tutorial/jndi/ for more information on JNDI). JEE applications can be packaged in an Enterprise Application Archive (EAR), which contains a .jar file for EJBs and a WAR file for web applications (and the lib folder contains the libraries required for both). If, for example, the name of an EAR file is CourseManagement.ear and the name of an EJB JAR file in it is CourseManagementEJBs.jar, then the name of the application is CourseManagement (the name of the EAR file) and the module name is CourseManagementEJBs. The EJB container uses these names to create a JNDI URL for lookup EJBs. A global JNDI URL for EJB is created as follows: "java:global/<application_name>/<module_name>/<bean_name>![<bean_interface>]" java:global: Indicates that it is a global JNDI URL. <application_name>: The application name is typically the name of the EAR file. <module_name>: This is the name of the EJB JAR. <bean_name>: This is the name of the EJB bean class. <bean_interface>: This is optional if EJB has a no-interface view, or if it implements only one business interface. Otherwise, it is a fully qualified name of a business interface. EJB containers are also required to publish two more variations of JNDI URLs for each EJB. These are not global URLs, which means that they can't be used to access EJBs from clients that are not in the same JEE application (in the same EAR): "java:app/[<module_name>]/<bean_name>![<bean_interface>]" "java:module/<bean_name>![<bean_interface>]" The first URL can be used if the EJB client is in the same application, and the second URL can be used if the client is in the same module (the same JAR file as the EJB). Before you look up any URL in a JNDI server, you need to create an InitialContext that includes information, among other things such as the hostname of JNDI server and the port on which it is running. 
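The no-argument InitialContext shown next works for a client running in the same server. For a client in a different JVM, the hostname and port mentioned above are typically supplied as environment properties. The following sketch is not from the article: the property names are GlassFish-specific ORB settings, the host value is hypothetical, and StudentRemote is an assumed remote business interface (sketched at the end of this article) rather than one defined in the excerpt.

import java.util.Properties;
import javax.naming.Context;
import javax.naming.InitialContext;
import javax.naming.NamingException;

public class RemoteStudentClient {
    public static void main(String[] args) throws NamingException {
        // GlassFish-specific ORB properties for a remote JNDI lookup; other servers
        // use different property names and URL schemes. Host and port are assumptions.
        Properties props = new Properties();
        props.setProperty("org.omg.CORBA.ORBInitialHost", "ejb.example.com");
        props.setProperty("org.omg.CORBA.ORBInitialPort", "3700"); // GlassFish default IIOP port

        Context ctx = new InitialContext(props);

        // StudentRemote is a hypothetical @Remote counterpart of StudentLocal.
        StudentRemote student = (StudentRemote) ctx.lookup(
            "java:global/CourseManagement/CourseManagementEJBs/Student!packt.jee.book.ch6.StudentRemote");
        System.out.println("Courses: " + student.getCourses());
    }
}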
If you are creating InitialContext in the same server, then there is no need to specify these attributes:

InitialContext initCtx = new InitialContext();
Object obj = initCtx.lookup("jndi_url");

We can use the following JNDI URLs to access a no-interface (LocalBean) Student EJB (assuming that the name of the EAR file is CourseManagement and the name of the JAR file for EJBs is CourseManagementEJBs):

- URL: java:global/CourseManagement/CourseManagementEJBs/Student
  When to use: The client can be anywhere in the EAR file, because we are using a global URL. Note that we haven't specified the interface name, because we are assuming that the Student bean provides a no-interface view in this example.
- URL: java:app/CourseManagementEJBs/Student
  When to use: The client can be anywhere in the EAR. We skipped the application name because the client is expected to be in the same application; this is because the namespace of the URL is java:app.
- URL: java:module/Student
  When to use: The client must be in the same JAR file as the EJB.

We can use the following JNDI URLs to access the Student EJB that implements a local interface, StudentLocal:

- URL: java:global/CourseManagement/CourseManagementEJBs/Student!packt.jee.book.ch6.StudentLocal
  When to use: The client can be anywhere in the EAR file, because we are using a global URL.
- URL: java:global/CourseManagement/CourseManagementEJBs/Student
  When to use: The client can be anywhere in the EAR. We skipped the interface name because the bean implements only one business interface. Note that the object returned from this call will be of the StudentLocal type, and not Student.
- URL: java:app/CourseManagementEJBs/Student or java:app/CourseManagementEJBs/Student!packt.jee.book.ch6.StudentLocal
  When to use: The client can be anywhere in the EAR. We skipped the application name because the JNDI namespace is java:app.
- URL: java:module/Student or java:module/Student!packt.jee.book.ch6.StudentLocal
  When to use: The client must be in the same module (JAR file) as the EJB.

Here is an example of how we can call the Student bean with the local business interface from one of the objects (that is not managed by the web container) in our web application:

InitialContext ctx = new InitialContext();
StudentLocal student = (StudentLocal) ctx.lookup("java:app/CourseManagementEJBs/Student");
return student.getCourses(id); //get courses from Student EJB

Summary

EJBs are ideal for writing business logic in web applications. They can act as the perfect bridge between web interface components, such as JSF, servlets, or JSPs, and data access objects, such as JDO. EJBs can be distributed across multiple JEE application servers (this could improve application scalability), and their life cycle is managed by the container. EJBs can easily be injected into managed objects or looked up using JNDI. Eclipse JEE makes creating and consuming EJBs very easy. The JEE application server GlassFish can also be managed, and applications deployed, from within Eclipse.

Further resources on this subject: Contexts and Dependency Injection in NetBeans, WebSockets in Wildfly, Creating Java EE Applications
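The excerpt only shows the @Local business interface. For completeness with the remote lookup sketch shown earlier, here is a hedged sketch of what a remote counterpart might look like; the StudentRemote name and the reuse of CourseDTO are assumptions, not code from the book:

import java.util.List;
import javax.ejb.Remote;
import javax.ejb.Stateful;

// Hypothetical remote business interface, mirroring StudentLocal.
@Remote
public interface StudentRemote {
    List<CourseDTO> getCourses();
}

// The same bean class can expose both local and remote views
// (shown together here for brevity; each public type goes in its own source file).
@Stateful
public class Student implements StudentLocal, StudentRemote {
    @Override
    public List<CourseDTO> getCourses() {
        // fetch and return the student's courses...
        return null;
    }
}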


The Zombie Attacks!

Packt
24 Sep 2015
9 min read
 In this article by Jamie Dean author of the book Unity Character Animation with Mecanim: RAW, we will demonstrate the process of importing and animating a rigged character in Unity. In this article, we will cover: Starting a blank Unity project and importing the necessary packages Importing a rigged character model in the FBX format and adjusting import settings Typically, an enemy character such as this will have a series of different animation sequences, which will be imported separately or together from a 3D package. In this case, our animation sequences are included in separate files. We will begin, by creating the Unity project. (For more resources related to this topic, see here.) Setting up the project Before we start exploring the animation workflow with Mecanim's tools, we need to set up the Unity project: Create a new project within Unity by navigating to File | New Project.... When prompted, choose an appropriate name and location for the project. In the Unity - Project Wizard dialog that appears, check the relevant boxes for the Character Controller.unityPackage and Scripts.unityPackage packages. Click on the Create button. It may take a few minutes for Unity to initialize. When the Unity interface appears, import the PACKT_cawm package by navigating to Assets | Import Package | Custom Package.... The Import package... window will appear. Navigate to the location where you unzipped the project files, select the unity package, and click on Open.The assets package will take a little time to decompress. When the Importing Package checklist appears, click on the Import button in the bottom-right of the window. Once the assets have finished importing, you will start with a default blank scene. Importing our enemy Now, it is time to import our character model: Minimize Unity. Navigate to the location where you unzipped the project files. Double-click on the Models folder to view its contents. Double-click on the zombie_m subfolder to view its contents.The folder contains an FBX file containing the rigged male zombie model and a separate subfolder containing the associated textures. Open Unity and resize the window so that both Unity and the zombie_m folder contents are visible. In Unity, click on the Assets folder in the Project panel. Drag the zombie_m FBX asset into the Assets panel to import it.Because the FBX file contains a normal map, a window will pop up asking if you want to set this file's import settings to read it correctly. Click on the Fix Now button. FBX files can contain embedded bitmap textures, which can be imported with the model. This will create subfolders containing the materials and textures within the folder where the model has been imported. Leaving the materials and textures as subfolders of the model will make them difficult to find within the project. The zombie model and two folders should now be visible in the FBX_Imports folder in the Assets panel. In the next step, we will move the imported material and texture assets into the appropriate folders in the Unity project. Organizing the material and textures The material and textures associated with the zombie_m model are currently located within the FBX_Imports folder. We will move these into different folders to organize them within the hierarchy of our project: Double-click on the Materials folder and drag the material asset contained within it into the PACKT_Materials folder in the Project panel. Return to the FBX_Imports folder by clicking on its title at the top of the Assets panel interface. 
Double-click on the textures folder. This will be named to be consistent with the model. Drag the two bitmap textures into the PACKT_Textures folder in the Project panel. Return to the FBX_Imports folder and delete the two empty subfolders.The moved material and textures will still be linked to the model. We will make sure of this by instancing it in the current empty scene. Drag the zombie_m asset into the Hierarchy panel. It may not be immediately visible within the Scene view due to the default import scale settings. We will take care of this in the next step. Adjusting the import scale Unity's import settings can be adjusted to account for the different tools commonly used to create 2D and 3D assets. Import settings are adjusted in the Inspector panel, which will appear on the right of the unity interface by default: Click on the zombie_m game object within the Hierarchy panel.This will bring up the file's import settings in the Inspector panel. Click on the Model tab. In the Scale Factor field, highlight the current number and type 1. The character model has been modeled to scale in meters to make it compatible with Unity's units. All 3D software applications have their own native scale. Unity does a pretty good job at accommodating all of them, but it often helps to know which software was used to create them. Scroll down until the Materials settings are visible. Uncheck the Import Materials checkbox.Now that we have got our textures and materials organized within the project, we want to make sure they are not continuously imported into the same folder as the model. Leave the remaining Model Import settings at their default values.We will be discussing these later on in the article, when we demonstrate the animation import. Click on the Apply button. You may need to scroll down within the Inspector panel to see this: The zombie_m character should now be visible in the Scene view: This character model is a medium resolution model—4410 triangles—and has a single 1024 x 1024 albedo texture and separate 1024 x 1024 specular and normal maps. The character has been rigged with a basic skeleton. The rigging process is essential if the model is to be animated. We need to save our progress, before we get any further: Save the scene by navigating to File | Save Scene as.... Choose an appropriate filename for the scene. Click on the Apply button. Despite the fact that we have only added a single game object to the default scene, there are more steps that we will need to take to set up the character and it will be convenient for us to save the current set up in case anything goes wrong. In the character animation, there are looping and single-shot animation sequences. Some animation sequences such as walk, run, idle are usually seamless loops designed to play back-to-back without the player being aware of where they start and end. Other sequences, typically, shooting, hitting, being injured or dying are often single-shot animations, which do not need to loop. We will start with this kind, and discuss looping animation sequences later in the article. In order to use Mecanim's animation tools, we need to set up the character's Avatar so that the character's hierarchy of bones is recognized and can be used correctly within Unity. Adjusting the rig import settings and creating the Avatar Now that we have imported the model, we will need to adjust the import settings so that the character functions correctly within our scene: Select zombie_m in the Assets panel. 
The asset's import settings should become visible within the Inspector panel. This settings rollout contains three tabs: Model, Rig, and Animations. Since we have already adjusted the Scale Factor within the Model Import settings, we will move on to the Rig import settings where we can define what kind of skeleton our character has. Choosing the appropriate rig import settings Mecanim has three options for importing rigged models: Legacy, Generic, and Humanoid. It also has a none option that should be applied to models that are not intended to be animated. Legacy format was previously the only option for importing skeletal animation in Unity. It is not possible to retarget animation sequences between models using Legacy, and setting up functioning state machines requires quite a bit of scripting. It is still a useful tool for importing models with fewer animation sequences and for simple mechanical animations. Legacy format animations are not compatible with Mecanim. Generic is one of the new animation formats that are compatible with Mecanim's animator controllers. It does not have the full functionality of Mecanim's character animation tools. Animations sequences imported with the generic format cannot be retargeted and are best used for quadrupeds, mechanical devices, pretty much anything except a character with two arms and two legs. The Humanoid animation type allows the full use of Mecanim's powerful toolset. It requires a minimum of 15 bones, and assumes that your rig is roughly human shaped with a pair of arms and legs. It can accommodate many more intermediary joints and some basic facial animation. One of the greatest benefits of using the Humanoid type is that it allows animation sequences to be retargeted or adapted to work with different rigs. For instance, you may have a detailed player character model with a full skeletal rig (including fingers and toes joints), maybe you want to reuse this character's idle sequence with a background character that is much less detailed, and has a simpler arrangement of bones. Mecanim makes it possible reuse purpose built motion sequences and even create useable sequences from motion capture data. Now that we have introduced these three rig types, we need to choose the appropriate setting for our imported zombie character, which in this case is Humanoid: In the Inspector panel, click on the Rig tab. Set the Animation Type field to Humanoid to suit our character skeleton type. Leave Avatar Definition set to Create From This Model. Optimize Game Objects can be left checked. Click on the Apply button to save the settings and transfer all of the changes that you have made to the instance in the scene.  The Humanoid animation type is the only one that supports retargeting. So if you are importing animations that are not unique and will be used for multiple characters, it is a good idea to use this setting. Summary In this article, we covered the major steps involved in animating a premade character using the Mecanim system in Unity. We started with FBX import settings for the model and the rig. We covered the creation of the Avatar by defining the bones in the Avatar Definition settings. Resources for Article: Further resources on this subject: Adding Animations[article] 2D Twin-stick Shooter[article] Skinning a character [article]
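The import settings in this article are applied by hand in the Inspector. As an optional aside that is not part of the book excerpt, the same settings could be applied automatically whenever the model is reimported by using an editor script; the sketch below assumes the zombie_m file name used above and the ModelImporter API available in the Unity 4/5 editor:

using UnityEditor;
using UnityEngine;

// Editor script (place it in an Editor folder). A sketch of how the manual import
// settings chosen above could be applied automatically to the zombie FBX file.
// The file-name check is an assumption made for illustration.
public class ZombieImportSettings : AssetPostprocessor
{
    void OnPreprocessModel()
    {
        if (!assetPath.Contains("zombie_m"))
            return;

        ModelImporter importer = (ModelImporter)assetImporter;
        importer.globalScale = 1f;                                 // Scale Factor = 1
        importer.importMaterials = false;                          // uncheck Import Materials
        importer.animationType = ModelImporterAnimationType.Human; // Humanoid rig
    }
}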


Scripting Strategies

Packt
24 Sep 2015
9 min read
 In this article by Chris Dickinson, the author of Unity 5 Game Optimization, you will learn how scripting consumes a great deal of our development time and how it will be enormously beneficial to learn some best practices in optimizing scripts. Scripting is a very broad term, so we will try to limit our exposure in this article to situations that are Unity specific, focussing on problems arising from within the Unity APIs and Engine design. Whether you have some specific problems in mind that we wish to solve or whether you just want to learn some techniques for future reference, this article will introduce you to methods that you can use to improve your scripting effort now and in the future. In each case, we will explore how and why the performance issue arises, an example situation where the problem is occurring, and one or more solutions to combat the issue. (For more resources related to this topic, see here.) Cache Component references A common mistake when scripting in Unity is to overuse the GetComponent() method. For example, the following script code is trying to check a creature's health value, and if its health goes below 0, then disable a series of components to prepare it for a death animation: void TakeDamage() { if (GetComponent<HealthComponent>().health < 0) { GetComponent<Rigidbody>().enabled = false; GetComponent<Collider>().enabled = false; GetComponent<AIControllerComponent>().enabled = false; GetComponent<Animator>().SetTrigger("death"); } } Each time this method executes, it will reacquire five different Component references. This is good in terms of heap memory consumption (in that, it doesn't cost any), but it is not very friendly on CPU usage. This is particularly problematic if the main method were called during Update(). Even if it is not, it still might coincide with other important events such as creating particle effects, replacing an object with a ragdoll (thus invoking various activity in the physics engine), and so on. This coding style can seem harmless, but it could cause a lot of long-term problems and runtime work for very little benefit. It costs us very little memory space (only 32 or 64 bits each; Unity version, platform and fragmentation-permitting) to cache these references for future usage. So, unless we're extremely bottlenecked on memory, a better approach will be to acquire the references during initialization and keep them until they are needed: private HealthComponent _healthComponent; private Rigidbody _rigidbody; private Collider _collider; private AIControllerComponent _aiController; private Animator _animator; void Awake() { _healthComponent = GetComponent<HealthComponent>(); _rigidbody = GetComponent<Rigidbody>(); _collider = GetComponent<Collider>(); _aiController = GetComponent<AIControllerComponent>(); _animator = GetComponent<Animator>(); } void TakeDamage() { if (_healthComponent.health < 0) { _rigidbody.detectCollisions = false; _collider.enabled = false; _aiController.enabled = false; _animator.SetTrigger("death"); } } Caching the Component references in this way spares us from reacquiring them each time they're needed, saving us some CPU overhead each time, at the expense of some additional memory consumption. Obtain components using the fastest method There are several variations of the GetComponent() method, and it becomes prudent to call the fastest version of this method as possible. The three overloads available are GetComponent(string), GetComponent<T>(), and GetComponent(typeof(T)). 
It turns out that the fastest version depends on which version of Unity we are running. In Unity 4, the GetComponent(typeof(T)) method is the fastest of the available options by a reasonable margin. Let's prove this with some simple testing: int numTests = 1000000; TestComponent test; using (new CustomTimer("GetComponent(string)", numTests)) { for (var i = 0; i < numTests; ++i) { test = (TestComponent)GetComponent("TestComponent"); } } using (new CustomTimer("GetComponent<ComponentName>", numTests)) { for (var i = 0; i < numTests; ++i) { test = GetComponent<TestComponent>(); } } using (new CustomTimer("GetComponent(typeof(ComponentName))", numTests)) { for (var i = 0; i < numTests; ++i) { test = (TestComponent)GetComponent(typeof(TestComponent)); } } This code tests each of the GetComponent() overloads one million times. This is far more tests than would be sensible for a typical project, but it is enough tests to prove the point. Here is the result we get when the test completes: As we can see, GetComponent(typeof(T)) is significantly faster than GetComponent<T>(), which is around five times faster than GetComponent(string). This test was performed against Unity 4.5.5, but the behavior should be equivalent all the way back to Unity 3.x. The GetComponent(string) method should not be used, since it is notoriously slow and is only included for completeness. These results change when we run the exact same test in Unity 5. Unity Technologies made some performance enhancements to how System.Type references are passed around in Unity 5.0 and as a result, GetComponent<T>() and GetComponent(typeof(T)) become essentially equivalent: As we can see, the GetComponent<T>() method is only a tiny fraction faster than GetComponent(typeof(T)), while GetComponent(string) is now around 30 times slower than the alternatives (interestingly, it became even slower than it was in Unity 4). Multiple tests will probably yield small variations in these results, but ultimately we can favor either of the type-based versions of GetComponent() when we're working in Unity 5 and the outcome will be about the same. However, there is one caveat. If we're running Unity 4, then we still have access to a variety of quick accessor properties such as collider, rigidbody, camera, and so on. These properties behave like precached Component member variables, which are significantly faster than all of the traditional GetComponent() methods: int numTests = 1000000; Rigidbody test; using (new CustomTimer("Cached reference", numTests)) { for (var i = 0; i < numTests; ++i) { test = gameObject.rigidbody; } } Note that this code is intended for Unity 4 and cannot be compiled in Unity 5 due to the removal of the rigidbody property. Running this test in Unity 4 gives us the following result: In an effort to reduce dependencies and improve code modularization in the Engine's backend, Unity Technologies deprecated all of these quick accessor variables in Unity5. Only the transform property remains. Unity 4 users considering an upgrade to Unity 5 should know that upgrading will automatically modify any of these properties to use the GetComponent<T>() method. However, this will result in un-cached GetComponent<T>() calls scattered throughout our code, possibly requiring us to revisit the techniques introduced in the earlier section titled Cache Component References. The moral of the story is that if we are running Unity 4, and the required Component is one of GameObject's built-in accessor properties, then we should use that version. 
If not, then we should favor GetComponent(typeof(T)). Meanwhile, if we're running Unity 5, then we can favor either of the type-based versions: GetComponent<T>() or GetComponent(typeof(T)). Remove empty callback declarations When we create new MonoBehaviour script files in Unity, irrespective of whether we're using Unity 4 or Unity 5, it creates two boilerplate methods for us: // Use this for initialization void Start () { } // Update is called once per frame void Update () { } The Unity Engine hooks into these methods during initialization and adds them to a list of methods to call back to at key moments. But, if we leave these as empty declarations in our codebase, then they will cost us a small overhead whenever the Engine invokes them. The Start() method is only called when the GameObject is instantiated for the first time, which could be whenever the Scene is loaded, or a new GameObject is instantiated from a Prefab. Therefore, leaving the empty Start() declaration may not be particularly noticeable unless there are a lot of GameObjects in the Scene invoking them at startup time. But, it also adds unnecessary overhead to any GameObject.Instantiate() call, which typically happens during key events, so they could potentially contribute to, and exacerbate, an already poor performance situation when lots of events are happening simultaneously. Meanwhile, the Update() method is called every time the Scene is rendered. If our Scene contains thousands of GameObjects owning components with these empty Update() declarations, then we can be wasting a lot of CPU cycles and causing havoc with our frame rate. Let's prove this with a simple test. Our test Scene should have GameObjects with two types of components. One type has an empty Update() declaration, and the other has no methods defined: public class CallbackTestComponent : MonoBehaviour { void Update () {} } public class EmptyTestComponent : MonoBehaviour { } Here are the test results for 32,768 components of each type. If we enable all objects with no stub methods during runtime, then nothing interesting happens with CPU usage in the Profiler. We may note that some memory consumption changes and a slight difference in the VSync activity, but nothing very concerning. However, as soon as we enable all the objects with empty Unity callback declarations, then we will observe a huge increase in CPU usage: The fix for this is simple: delete the empty declarations. Unity will have nothing to hook into, and nothing will be called. Sometimes, finding such empty declarations in an expansive codebase can be difficult, but using some basic regular expressions (regex), we should be able to find what we're looking for relatively easily. All common code-editing tools for Unity, such as MonoDevelop, Visual Studio, and even Notepad++, provide a way to perform a regex-based search on the entire codebase; check the tool's documentation for more information, since the method can vary greatly depending on the tool and its version. The following regex search should find any empty Update() declarations in our code: void\s*Update\s*?\(\s*?\)\s*?\n*?\{\n*?\s*?\} This regex checks for a standard method definition of the Update() method, while including any surplus whitespace and newline characters that can be distributed throughout the method declaration. Naturally, all of the above is also true for non-boilerplate Unity callbacks, such as OnGUI(), OnEnable(), OnDestroy(), FixedUpdate(), and so on.
Check the MonoBehaviour Unity Documentation page for a complete list of these callbacks at http://docs.unity3d.com/ScriptReference/MonoBehaviour.html. It might seem unlikely that someone generated empty versions of these callbacks in our codebase, but never say never. For example, if we use a common base class MonoBehaviour throughout all of our custom components, then a single empty callback declaration in that base class will permeate the entire game, which could cost us dearly. Be particularly careful of the OnGUI() method, as it can be invoked multiple times within the same frame or user interface (UI) event. Summary In this article, you have learned how you can optimize scripts while creating less CPU and memory-intensive applications and games. You learned about the Cache Component references and how you can optimize a code using the fastest method. For more information on code optimization, you can visit: http://www.paladinstudios.com/2012/07/30/4-ways-to-increase-performance-of-your-unity-game/ http://docs.unity3d.com/Manual/OptimizingGraphicsPerformance.html Resources for Article: Further resources on this subject: Components in Unity[article] Saying Hello to Unity and Android[article] Unity 3-0 Enter the Third Dimension [article]
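The timing code earlier in this article wraps each loop in a using (new CustomTimer(...)) block, but the CustomTimer class itself is not defined in this excerpt. The following is a minimal sketch of such a helper; the per-test reporting format is an assumption, so the author's actual implementation may differ:

using System;
using System.Diagnostics;
using UnityEngine;

// A minimal IDisposable timer so that "using (new CustomTimer(...)) { ... }"
// logs the elapsed time when the block ends. Not taken from the book excerpt.
public class CustomTimer : IDisposable
{
    private readonly string _name;
    private readonly int _numTests;
    private readonly Stopwatch _watch;

    public CustomTimer(string name, int numTests)
    {
        _name = name;
        _numTests = Mathf.Max(1, numTests);
        _watch = Stopwatch.StartNew();
    }

    public void Dispose()
    {
        _watch.Stop();
        float ms = _watch.ElapsedMilliseconds;
        UnityEngine.Debug.Log(string.Format("{0}: {1} ms total, {2} ms per test",
            _name, ms, ms / _numTests));
    }
}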


Integration with Spark SQL

Packt
24 Sep 2015
11 min read
 In this article by Sumit Gupta, the author of the book Learning Real-time Processing with Spark Streaming, we will discuss the integration of Spark Streaming with various other advance Spark libraries such as Spark SQL. (For more resources related to this topic, see here.) No single software in today's world can fulfill the varied, versatile, and complex demands/needs of the enterprises, and to be honest, neither should it! Software are made to fulfill specific needs arising out of the enterprises at a particular point in time, which may change in future due to many other factors. These factors may or may not be controlled like government policies, business/market dynamics, and many more. Considering all these factors integration and interoperability of any software system with internal/external systems/software's is pivotal in fulfilling the enterprise needs. Integration and interoperability are categorized as nonfunctional requirements, which are always implicit and may or may not be explicitly stated by the end users. Over the period of time, architects have realized the importance of these implicit requirements in modern enterprises, and now, all enterprise architectures provide support due diligence and provisions in fulfillment of these requirements. Even the enterprise architecture frameworks such as The Open Group Architecture Framework (TOGAF) defines the specific set of procedures and guidelines for defining and establishing interoperability and integration requirements of modern enterprises. Spark community realized the importance of both these factors and provided a versatile and scalable framework with certain hooks for integration and interoperability with the different systems/libraries; for example; data consumed and processed via Spark streams can also be loaded into the structured (table: rows/columns) format and can be further queried using SQL. Even the data can be stored in the form of Hive tables in HDFS as persistent tables, which will exist even after our Spark program has restarted. In this article, we will discuss querying streaming data in real time using Spark SQL. Querying streaming data in real time Spark Streaming is developed on the principle of integration and interoperability where it not only provides a framework for consuming data in near real time from varied data sources, but at the same time, it also provides the integration with Spark SQL where existing DStreams can be converted into structured data format for querying using standard SQL constructs. There are many such use cases where SQL on streaming data is a much needed feature; for example, in our distributed log analysis use case, we may need to combine the precomputed datasets with the streaming data for performing exploratory analysis using interactive SQL queries, which is difficult to implement only with streaming operators as they are not designed for introducing new datasets and perform ad hoc queries. Moreover SQL's success at expressing complex data transformations derives from the fact that it is based on a set of very powerful data processing primitives that do filtering, merging, correlation, and aggregation, which is not available in the low-level programming languages such as Java/ C++ and may result in long development cycles and high maintenance costs. Let's move forward and first understand few things about Spark SQL, and then, we will also see the process of converting existing DStreams into the Structured formats. 
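Before digging into Spark SQL itself, it may help to see roughly how the two libraries meet. The following is a hedged sketch, not code from the book, showing each micro-batch of a DStream being registered as a temporary table and queried with SQL using the Spark 1.3-era APIs described later in this article; the LogLine case class, the socket source, and the field names are assumptions made for illustration:

import org.apache.spark.SparkConf
import org.apache.spark.streaming.{Seconds, StreamingContext}
import org.apache.spark.sql.SQLContext

// Hypothetical record type for the incoming text stream.
case class LogLine(host: String, status: Int)

object StreamingSqlSketch {
  def main(args: Array[String]): Unit = {
    val conf = new SparkConf().setAppName("Streaming SQL Sketch")
    val ssc = new StreamingContext(conf, Seconds(10))
    val sqlCtx = new SQLContext(ssc.sparkContext)
    import sqlCtx.implicits._

    // Assume a socket source emitting lines such as "host42 200".
    val lines = ssc.socketTextStream("localhost", 9999)
    val records = lines.map(_.split(" ")).map(a => LogLine(a(0), a(1).toInt))

    // For each micro-batch, convert the RDD into a DataFrame and query it with SQL.
    records.foreachRDD { rdd =>
      val df = rdd.toDF()
      df.registerTempTable("requests")
      sqlCtx.sql("SELECT status, COUNT(*) AS hits FROM requests GROUP BY status").show()
    }

    ssc.start()
    ssc.awaitTermination()
  }
}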
Understanding Spark SQL Spark SQL is one of the modules developed over the Spark framework for processing structured data, which is stored in the form of rows and columns. At a very high level, it is similar to the data residing in RDBMS in the form rows and columns, and then SQL queries are executed for performing analysis, but Spark SQL is much more versatile and flexible as compared to RDBMS. Spark SQL provides distributed processing of SQL queries and can be compared to frameworks Hive/Impala or Drill. Here are the few notable features of Spark SQL: Spark SQL is capable of loading data from variety of data sources such as text files, JSON, Hive, HDFS, Parquet format, and of course RDBMS too so that we can consume/join and process datasets from different and varied data sources. It supports static and dynamic schema definition for the data loaded from various sources, which helps in defining schema for known data structures/types, and also for those datasets where the columns and their types are not known until runtime. It can work as a distributed query engine using the thrift JDBC/ODBC server or command-line interface where end users or applications can interact with Spark SQL directly to run SQL queries. Spark SQL provides integration with Spark Streaming where DStreams can be transformed into the structured format and further SQL Queries can be executed. It is capable of caching tables using an in-memory columnar format for faster reads and in-memory data processing. It supports Schema evolution so that new columns can be added/deleted to the existing schema, and Spark SQL still maintains the compatibility between all versions of the schema. Spark SQL defines the higher level of programming abstraction called DataFrames, which is also an extension to the existing RDD API. Data frames are the distributed collection of the objects in the form of rows and named columns, which is similar to tables in the RDBMS, but with much richer functionality containing all the previously defined features. The DataFrame API is inspired by the concepts of data frames in R (http://www.r-tutor.com/r-introduction/data-frame) and Python (http://pandas.pydata.org/pandas-docs/stable/dsintro.html#dataframe). Let's move ahead and understand how Spark SQL works with the help of an example: As a first step, let's create sample JSON data about the basic information about the company's departments such as Name, Employees, and so on, and save this data into the file company.json. The JSON file would look like this: [ { "Name":"DEPT_A", "No_Of_Emp":10, "No_Of_Supervisors":2 }, { "Name":"DEPT_B", "No_Of_Emp":12, "No_Of_Supervisors":2 }, { "Name":"DEPT_C", "No_Of_Emp":14, "No_Of_Supervisors":3 }, { "Name":"DEPT_D", "No_Of_Emp":10, "No_Of_Supervisors":1 }, { "Name":"DEPT_E", "No_Of_Emp":20, "No_Of_Supervisors":5 } ] You can use any online JSON editor such as http://codebeautify.org/online-json-editor to see and edit data defined in the preceding JSON code. Next, let's extend our Spark-Examples project and create a new package by the name chapter.six, and within this new package, create a new Scala object and name it as ScalaFirstSparkSQL.scala. 
Next, add the following import statements just below the package declaration: import org.apache.spark.SparkConf import org.apache.spark.SparkContext import org.apache.spark.sql._ import org.apache.spark.sql.functions._ Further, in your main method, add following set of statements to create SQLContext from SparkContext: //Creating Spark Configuration val conf = new SparkConf() //Setting Application/ Job Name conf.setAppName("My First Spark SQL") // Define Spark Context which we will use to initialize our SQL Context val sparkCtx = new SparkContext(conf) //Creating SQL Context val sqlCtx = new SQLContext(sparkCtx) SQLContext or any of its descendants such as HiveContext—for working with Hive tables or CassandraSQLContext—for working with Cassandra tables is the main entry point for accessing all functionalities of Spark SQL. It allows the creation of data frames, and also provides functionality to fire SQL queries over data frames. Next, we will define the following code to load the JSON file (company.json) using the SQLContext, and further, we will also create a data frame: //Define path of your JSON File (company.json) which needs to be processed val path = "/home/softwares/spark/data/company.json"; //Use SQLCOntext and Load the JSON file. //This will return the DataFrame which can be further Queried using SQL queries. val dataFrame = sqlCtx.jsonFile(path) In the preceding piece of code, we used the jsonFile(…) method for loading the JSON data. There are other utility method defined by SQLContext for reading raw data from filesystem or creating data frames from the existing RDD and many more. Spark SQL supports two different methods for converting the existing RDDs into data frames. The first method uses reflection to infer the schema of an RDD from the given data. This approach leads to more concise code and helps in instances where we already know the schema while writing Spark application. We have used the same approach in our example. The second method is through a programmatic interface that allows to construct a schema. Then, apply it to an existing RDD and finally generate a data frame. This method is more verbose, but provides flexibility and helps in those instances where columns and data types are not known until the data is received at runtime. Refer to https://spark.apache.org/docs/1.3.0/api/scala/index.html#org.apache.spark.sql.SQLContext for a complete list of methods exposed by SQLContext. Once the DataFrame is created, we need to register DataFrame as a temporary table within the SQL context so that we can execute SQL queries over the registered table. Let's add the following piece of code for registering our DataFrame with our SQL context and name it company: //Register the data as a temporary table within SQL Context //Temporary table is destroyed as soon as SQL Context is destroyed. dataFrame.registerTempTable("company"); And we are done… Our JSON data is automatically organized into the table (rows/column) and is ready to accept the SQL queries. Even the data types is inferred from the type of data entered within the JSON file itself. Now, we will start executing the SQL queries on our table, but before that let's see the schema being created/defined by SQLContext: //Printing the Schema of the Data loaded in the Data Frame dataFrame.printSchema(); The execution of the preceding statement will provide results similar to mentioned illustration: The preceding illustration shows the schema of the JSON data loaded by Spark SQL. Pretty simple and straight, isn't it? 
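The original article presents the printSchema() output as a screenshot that is not reproduced here. For the company.json data defined above, the inferred schema would look roughly like the following (Spark infers JSON integer values as the long type):

root
 |-- Name: string (nullable = true)
 |-- No_Of_Emp: long (nullable = true)
 |-- No_Of_Supervisors: long (nullable = true)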
Spark SQL has automatically created our schema based on the data defined in our company.json file. It has also defined the data type of each of the columns. We can also define the schema using reflection (https://spark.apache.org/docs/1.3.0/sql-programming-guide.html#inferring-the-schema-using-reflection) or can also programmatically define the schema (https://spark.apache.org/docs/1.3.0/sql-programming-guide.html#inferring-the-schema-using-reflection). Next, let's execute some SQL queries to see the data stored in DataFrame, so the first SQL would be to print all records: //Executing SQL Queries to Print all records in the DataFrame println("Printing All records") sqlCtx.sql("Select * from company").collect().foreach(print) The execution of the preceding statement will produce the following results on the console where the driver is executed: Next, let's also select only few columns instead of all records and print the same on console: //Executing SQL Queries to Print Name and Employees //in each Department println("n Printing Number of Employees in All Departments") sqlCtx.sql("Select Name, No_Of_Emp from company").collect().foreach(println) The execution of the preceding statement will produce the following results on the Console where the driver is executed: Now, finally let's do some aggregation and count the total number of all employees across the departments: //Using the aggregate function (agg) to print the //total number of employees in the Company println("n Printing Total Number of Employees in Company_X") val allRec = sqlCtx.sql("Select * from company").agg(Map("No_Of_Emp"->"sum")) allRec.collect.foreach ( println ) In the preceding piece of code, we used the agg(…) function and performed the sum of all employees across the departments, where sum can be replaced by avg, max, min, or count. The execution of the preceding statement will produce the following results on the console where the driver is executed: The preceding images shows the results of executing the aggregation on our company.json data. Refer to the Data Frame API at https://spark.apache.org/docs/1.3.0/api/scala/index.html#org.apache.spark.sql.DataFrame for further information on the available functions for performing aggregation. As a last step, we will stop our Spark SQL context by invoking the stop() function on SparkContext—sparkCtx.stop(). This is required so that your application can notify master or resource manager to release all resources allocated to the Spark job. It also ensures the graceful shutdown of the job and avoids any resource leakage, which may happen otherwise. Also, as of now, there can be only one Spark context active per JVM, and we need to stop() the active SparkContext class before creating a new one. Summary In this article, we have seen the step-by-step process of using Spark SQL as a standalone program. Though we have considered JSON files as an example, but we can also leverage Spark SQL with Cassandra (https://github.com/datastax/spark-cassandra-connector/blob/master/doc/2_loading.md) or MongoDB (https://github.com/Stratio/spark-mongodb) or Elasticsearch (http://chapeau.freevariable.com/2015/04/elasticsearch-and-spark-1-dot-3.html). Resources for Article: Further resources on this subject: Getting Started with Apache Spark DataFrames[article] Sabermetrics with Apache Spark[article] Getting Started with Apache Spark [article]


Snap – The Code Snippet Sharing Application

Packt
24 Sep 2015
8 min read
In this article by Joel Perras, author of the book Flask Blueprints, we will build our first fully functional, database-backed application. This application, codenamed Snap, will allow users to create an account with a username and password. In this account, users will be allowed to add, update, and delete so-called semiprivate snaps of text (with a focus on lines of code) that can be shared with others.

For this, you should be familiar with at least one of the following relational database systems: PostgreSQL, MySQL, or SQLite. Additionally, some knowledge of the SQLAlchemy Python library, which acts as an abstraction layer and object-relational mapper for these (and several other) databases, will be an asset. If you are not well versed in the usage of SQLAlchemy, fear not. We will have a gentle introduction to the library that will bring new developers up to speed and serve as a refresher for the more experienced folks.

The SQLite database will be our relational database of choice due to its very simple installation and operation. The other database systems that we listed are all client/server-based, with a multitude of configuration options that may need adjustment depending on the system they are installed in, while SQLite's default mode of operation is self-contained, serverless, and zero-configuration. Any major relational database supported by SQLAlchemy as a first-class citizen will do.

Diving In

To make sure things start correctly, let's create a folder where this project will exist and a virtual environment to encapsulate any dependencies that we will require:

$ mkdir -p ~/src/snap && cd ~/src/snap
$ mkvirtualenv snap -i flask

This will create a folder called snap at the given path and take us to this newly created folder. It will then create the snap virtual environment and install Flask in this environment. Remember that the mkvirtualenv tool will create the virtual environment, which will be the default set of locations to install packages from pip, but the mkvirtualenv command does not create the project folder for you. This is why we run a command to create the project folder first and then create the virtual environment. Virtual environments, by virtue of the $PATH manipulation performed once they are activated, are completely independent of where in your filesystem your project files exist.

We will then create our basic blueprint-based project layout with an empty users blueprint:

application
├── __init__.py
├── run.py
└── users
    ├── __init__.py
    ├── models.py
    └── views.py

Flask-SQLAlchemy

Once this has been established, we need to install the next important set of dependencies: SQLAlchemy, and the Flask extension that makes interacting with this library a bit more Flask-like, Flask-SQLAlchemy:

$ pip install flask-sqlalchemy

This will install the Flask extension to SQLAlchemy, along with the base distribution of the latter and several other necessary dependencies in case they are not already present. Now, if we were using a relational database system other than SQLite, this is the point where we would create the database entity in, say, PostgreSQL, along with the proper users and permissions, so that our application can create tables and modify the contents of these tables. SQLite, however, does not require any of that. Instead, it assumes that any user that has access to the filesystem location that the database is stored in should also have permission to modify the contents of this database.
For the sake of completeness, however, here is how one would create an empty database in the current folder of your filesystem: $ sqlite3 snap.db # hit control-D to escape out of the interactive SQL console if necessary. As mentioned previously, we will be using SQLite as the database for our example applications and the directions given will assume that SQLite is being used; the exact name of the binary may differ on your system. You can substitute the equivalent commands to create and administer the database of your choice if anything other than SQLite is being used. Now, we can begin the basic configuration of the Flask-SQLAlchemy extension. Configuring Flask-SQLAlchemy First, we must register the Flask-SQLAlchemy extension with the application object in the application/__init__.py: from flask import Flask from flask.ext.sqlalchemy import SQLAlchemy app = Flask(__name__) app.config['SQLALCHEMY_DATABASE_URI'] = 'sqlite:///../snap.db' db = SQLAlchemy(app) The value of app.config['SQLALCHEMY_DATABASE_URI'] is the escaped relative path to the snap.db SQLite database that we created previously. Once this simple configuration is in place, we will be able to create the SQLite database automatically via the db.create_all() method, which can be invoked in an interactive Python shell: $ python >>> from application import db >>> db.create_all() This should be an idempotent operation, which means that nothing would change even if the database already exists. If the local database file did not exist, however, it would be created. This also applies to adding new data models: running db.create_all() will add their definitions to the database, ensuring that the relevant tables have been created and are accessible. It does not, however, take into account the modification of an existing model/table definition that already exists in the database. For this, you will need to use the relevant tools (for example, the sqlite CLI) to modify the corresponding table definitions to match those that have been updated in your models, or use a more general schema tracking and updating tool such as Alembic to do the majority of the heavy lifting for you. SQLAlchemy basics SQLAlchemy is, first and foremost, a toolkit for interacting with relational databases in Python. While it provides an incredible number of features (including SQL connection handling and pooling for various database engines, the ability to handle custom datatypes, and a comprehensive SQL expression API), the one feature that most developers are familiar with is the Object Relational Mapper. This mapper allows a developer to connect a Python object definition to a SQL table in the database of their choice, thus allowing them the flexibility to control the domain models in their own application and requiring only minimal coupling to the database product and the engine-specific SQLisms that each of them exposes. While debating the usefulness (or the lack thereof) of an object relational mapper is outside the scope of this article, for those who are unfamiliar with SQLAlchemy we will provide a list of benefits that using this tool brings to the table, as follows: Your domain models are written to interface with one of the most well-respected, tested, and deployed Python packages ever created: SQLAlchemy. Onboarding new developers to a project becomes an order of magnitude easier due to the extensive documentation, tutorials, books, and articles that have been written about using SQLAlchemy. 
Import-time validation of queries written using the SQLAlchemy expression language; instead of having to execute each query string against the database to determine if there is a syntax error present. The expression language is in Python and can thus be validated with your usual set of tools and IDE. Thanks to the implementation of design patterns such as the Unit of Work, the Identity Map, and various lazy loading features, the developer can often be saved from performing more database/network roundtrips than necessary. Considering that the majority of a request/response cycle in a typical web application can easily be attributed to network latency of one form or another, minimizing the number of database queries in a typical response is a net performance win on many fronts. While many successful, performant applications can be built entirely on the ORM, SQLAlchemy does not force it upon you. If, for some reason, it is preferable to write raw SQL query strings or to use the SQLAlchemy expression language directly, then you can do that and still benefit from the connection pooling and the Python DBAPI abstraction functionality that is the core of SQLAlchemy itself. Now that we've given you several reasons why you should be using this database query and domain data abstraction layer, let's look at how we would go about defining a basic data model. Summary After having gone through this article we have seen several facets of how Flask may be augmented with the use of extensions. While Flask itself is relatively spartan, the ecology of extensions that are available make it such that building a fully fledged user-authenticated application may be done quickly and relatively painlessly. Resources for Article: Further resources on this subject: Creating Controllers with Blueprints[article] Deployment and Post Deployment [article] Man, Do I Like Templates! [article]
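As a closing illustration for the Snap application above, here is a minimal sketch of the kind of basic data model the text points toward. The User fields shown here are illustrative assumptions, not the book's actual schema:

from application import db

class User(db.Model):
    # A minimal, hypothetical user model for Snap.
    id = db.Column(db.Integer, primary_key=True)
    # Username and a hashed password; the column lengths are arbitrary choices.
    username = db.Column(db.String(40), unique=True, nullable=False)
    password_hash = db.Column(db.String(128), nullable=False)

    def __repr__(self):
        return '<User {}>'.format(self.username)

With a model like this in place, re-running db.create_all() in the interactive shell would create the corresponding table, exactly as described in the idempotency note earlier.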
Packt
23 Sep 2015
5 min read

Learning Node.js for Mobile Application Development

In Learning Node.js for Mobile Application Development by Christopher Svanefalk and Stefan Buttigieg, the overarching goal of this article is to give you the tools and know-how to install Node.js on multiple OS platforms and to verify the installation. After reading this article, you will know how to install, configure, and use the fundamental software components. You will also have a good understanding of why these tools are appropriate for developing modern applications. (For more resources related to this topic, see here.) Why Node.js? Modern apps have several requirements which cannot be provided by the app itself, such as central data storage, communication routing, and user management. In order to provide such services, apps rely on an external software component known as a backend. The backend we will use for this is Node.js, a powerful but strange beast in its category. Node.js is known for being both reliable and high-performing. Node.js comes with its own package management system, NPM (Node Package Manager), through which you can easily install, remove, and manage packages for your project. What this article covers This article covers the installation of Node.js on multiple OS platforms and how to verify the installation. The installation Node.js is delivered as a set of JavaScript libraries, executing on a C/C++ runtime that is built around the Google V8 JavaScript Engine. The two come bundled together for most major operating systems, and we will look at the specifics of installing it. The Google V8 JavaScript Engine is the same JavaScript engine that is used in the Chrome browser, built for speed and efficiency. Windows For Windows, there is a dedicated MSI wizard that can be used to install Node.js, which can be downloaded from the project's official website. To do so, go to the main page, navigate to Downloads, and then select Windows Installer. After it is downloaded, run the MSI, follow the steps given to select the installation options, and conclude the install. Keep in mind that you will need to restart your system in order to make the changes effective. Linux Most major Linux distributions provide convenient installs of Node.js through their own package management systems. However, it is important to keep in mind that for many of them, NPM will not come bundled with the main Node.js package. Rather, it will be provided as a separate package. We will show how to install both in the following section. Ubuntu/Debian Open a terminal and issue sudo apt-get update to make sure that you have the latest package listings. After this, issue sudo apt-get install nodejs npm in order to install both Node.js and NPM in one swoop. Fedora/RHEL/CentOS On Fedora 18 or later, open a terminal and issue sudo yum install nodejs npm. The system will perform the full setup for you. If you are running RHEL or CentOS, you will need to enable the optional EPEL repository. This can be done in conjunction with the install process, so that you do not need to do it again while upgrading, by issuing the sudo yum install nodejs npm --enablerepo=epel command. Verifying your installation Now that we have finished the install, let's do a sanity check and make sure that everything works as expected. To do so, we can use the Node.js shell, which is an interactive runtime environment that is used to execute JavaScript code. To open it, first open a terminal, and then issue the following on it: node This will start the interpreter, which will appear as a shell, with the input line starting with the > sign. 
Once you are in it, type the following: console.log("Hello world!"); Then, press Enter. The Hello world! phrase will appear on the next line. Congratulations, your system is now set up to run Node.js! Mac OS X For Mac OS X, you can find a ready-to-install PKG file by going to www.nodejs.org, navigating to Downloads, and selecting the Mac OS X Installer option. Otherwise, you can click on Install, and your package file will automatically be downloaded, as shown in the following screenshot: Once you have downloaded the file, run it and follow the instructions on the screen. It is recommended that you keep all the offered default settings, unless there are compelling reasons for you to change something with regard to your specific machine. Verifying your installation for Mac OS X After the install finishes, open a terminal and start the Node.js shell by issuing the following command: node This will start the interactive node shell where you can execute JavaScript code. To make sure that everything works, try issuing the following command to the interpreter: console.log("Hello world!"); After pressing Enter, the Hello world! phrase will appear on your screen. Congratulations, Node.js is all set up and good to go! Who this article is written for This article is intended for web developers of all levels of expertise who want to dive deep into cross-platform mobile application development without going through the pain of understanding the languages and native frameworks that form an integral part of developing for different mobile platforms. It will give readers the basic grounding they need to develop mobile applications with near-native functionality and help them understand the process of developing a successful cross-platform mobile application. Summary In this article, we learned the different techniques that can be used to install Node.js across different platforms. Read Learning Node.js for Mobile Application Development to dive into cross-platform mobile application development. The following are some other related titles: Node.js Design Patterns Web Development with MongoDB and Node.js Deploying Node.js Node Security Resources for Article: Further resources on this subject: Welcome to JavaScript in the full stack[article] Introduction and Composition[article] Deployment and Maintenance [article]

Packt
23 Sep 2015
10 min read

User Interface

This article, written by John Doran, the author of the Unreal Engine Game Development Cookbook, covers the following recipes: Creating a main menu Animating a menu (For more resources related to this topic, see here.) In order to create a good game project, you need to be able to communicate information to the player. To do this, we need to create a user interface (UI), which will allow us to display information such as the player's health, inventory, and so on. Inside Unreal 4, we use the Slate UI framework to create user interfaces, however, it's a very complex system. To make things easier for end users, Unreal also released the Unreal Motion Graphics (UMG) UI Designer which is a visual UI authoring tool with a much easier workflow. This is what we will be using in this article. For more information on Slate, refer to https://docs.unrealengine.com/latest/INT/Programming/Slate/index.html. Creating a main menu A main menu can serve as an introduction to your game and is a great place for us to discuss some additional things that UMG has, such as Texts and Buttons. We'll also learn how we can make buttons do things. Let's spend some time to see just how easy it is to create one! For more information on the client-server model, refer to https://en.wikipedia.org/wiki/Client%E2%80%93server_model. How to do it… To give you an idea of how it works, let's take a simple example of a coin collectable: Create a new level by going to File | New Level and select Empty Level. Next, inside the Content Browser tab, go to our UI folder, then to Add New | User Interface | Widget Blueprint, and give it a name of MainMenu. Double-click on it to open the editor. In this menu, we are going to have the title of the game and then a series of buttons the player can press: From the Palette tab, open up the Common section and drag and drop a Button onto the middle of the screen. Select the button and change its Size X to 400 and Size Y to 80. We will also rename the button to Play Game. Drag and drop a Text object onto the Play Game button and you should see it snap on to the button as a child. Under Content, change Text to Play Game. From here under Appearance, change the color of the button to black and change the Font size to 32. From the Hierarchy tab, select the Play Game button and copy and paste it to create duplicate. Move the button down, rename it to Quit Game, and change the Text to Content as well. Move both of the objects so that they're on the bottom part of the HUD, slightly above and side by side, as shown in the following image: Lastly, we'll want to set our pivots and anchors accordingly. When you select either the Quit Game or Play Game buttons, you may notice a sun-like looking widget that displays the Anchors of the object (known as the Anchor Medallion). In our case, open Anchors from the Details panel and click on the bottom-center option. Now that we have the buttons created, we want them to actually do something when we click on them. Select the Play Game button and from the Details tab, scroll down until you see the Events component. There should be a series of big green + buttons. Click on the green button beside OnClicked. Next, it will take us to the Event Graph with the appropriate event created for us. To the right of the event, right-click and create an Open Level action. Under Level Name, put in whatever level you like (for example, StarterMap) and then connect the output of the OnClicked action to the input of the Open Level action. 
To the right of that, create a Remove from Parent action to make sure that when we leave that, the menu doesn't stay. Finally, create a Get Player Controller action and to the right of it a Set Show Mouse Cursor action, which should be disabled, so that the mouse will no longer be visible since we will want to see the mouse in the menu. (Drag Return Value from the Get Player Controller action to create a new node and search for the mouse cursor action.) Now, go back to the Designer button and then select the Quit Game button. Click on the OnClicked button as well and to the right of this one, create a Quit Game action and connect the output of the OnClicked action to the input of the Quit Game action. Lastly, as a bit of polish, let's add our game's title to the screen. Drag and drop another Text object onto the scene, this time with Anchor at the top-center. From here, change Position X to 0 and Position Y to 176. Change Alignment in the X axis to .5 and check the Size to Content option for it to automatically resize. Set the Content component's Text property to the game's name (in my case, Game Name). Under the Appearance component, set the Font size to 93 and set Justification to Center. There are a number of other styling options that you may wish to use when developing your HUDs. For more information about it, refer to https://docs.unrealengine.com/latest/INT/Engine/UMG/UserGuide/Styling/index.html. Compile the menu, and saveit. Now we need to actually have the widget show up. To do so, we'll need to take the same steps as we did earlier. Open up Level Blueprint by going to Blueprints | Open Level Blueprint and create an EventBeginPlay event. Then, to the right of this, right-click and create a Create Widget action. From the dropdown under Class, select MainMenu and connect the arrow from Event Begin Play to the input of Create MainMenu_C Widget. After this, click and drag the output arrow and create an Add to Viewport event. Then, connect Return Value of our Create Widget action to Target of the Add to Viewport action. Now lastly, we also want to display the player's cursor on the screen to show buttons. To do this, right-click and select Get Player Controller. Then, from Return Value of that, create a Show Mouse Cursor object in Set. Connect the output of the Add to Viewport action to the input of the Show Mouse Cursor action. Compile, save, and run the project! With this, our menu is completed! We can quit the game without any problem, and pressing the Play Game button will start our level! Animating a menu You may have created a menu or UI element at some point, but rather than having it static and non-moving, let's spend some time looking at how we can animate the menus by having them fly in and out or animating them in some way. This will help add to the polish of the title as well as enable players to notice things easier as they move in. Getting ready Before we start working on this, we need to have a project created and set up. Do the previous recipe all the way to completion. How to do it… Open up the MainMenu blueprint once more and from the bottom-left in the Animations tab, click on the +Animation button and give the new animation a name of MenuFlyIn. Select the newly created animation and you should see the window on the right-hand side brighten up. Next, click on the Auto Key toggle to have the animation editor automatically set keys that are appropriate for our implementation. 
If it's not there already, move the timeline bar (the white line with two orange ends on the top and bottom) to the 0.00 mark on the animation timeline. Next, select the Game Name object and under Color and Opacity, open it and change the A (alpha) value to 0. Now move the timeline bar to the 1.00 mark and then open the color again and set the A value to 1. You'll notice a transition—going from a completely transparent text to a fully shown one. This is a good start. Let's have the buttons fly in after the text appears. Next, move the Time bar to the 2.00 mark and select the Play Game button. Now from the Details tab, you'll notice that under the variables, there are new + icons to the left of variables. This value will save the value for use in the animations. Click on the + icon by the Position Y value. If you use your scroll wheel while inside the dark grey portion of the timeline bar (where the keyframe numbers are displayed), it zooms in and out. This can be quite useful when you create more complex animations. Now move the Time bar to the 1.00 mark and move the Play Game button off the screen. By doing the animation in this way, we are saving where we want it to be first at the end, and then going back in time to do the animations. Do the same animation for the Quit Game button. Now that our animation is created, let's make it in a way so that when the object starts, this animation is played. Click on the Graph button and from the MyBlueprint tab under the Graphs section, double-click on the Event Construct event, which is called as soon as we add the menu to the scene. Grab the pin on the end of it and create a Play Animation action. Drag and drop a MenuFlyIn animation into the scene and select Get. Connect its output pin to the In Animation property of the Play Animation action. Now that we have the animation work when we create the menu, let's have it play when we leave the menu. Select the Play Animation and Menu Fly In variables and copy them. Then move to the OnClicked (Play Game) action. Drag the OnClicked event over to the left and remove its original connection to the Open Level action by holding down Alt and clicking. Now paste (Ctrl + V) the new objects and connect the out pin of OnClicked (Play Game) to the input of Play Animation. Now change Play Mode to Reverse. To the right of this, create a Delay action. For the Duration variable, we want it to wait as long as the animation is, so from the Menu Fly In variable, create another pin and create a Get End Time action. Connect Return Value of Get End Time to the input of the Delay action. Connect the output of the Play Animation action to the input of the Delay action and the Completed output of the Delay action to the input of the Open Level action. Now we need to do the same for the OnClicked (Quit Game) event. Now compile, save, and run the game! Our menu is now completed and we've learned about how animation works inside UMG! For more examples of using UMG for animation, refer to https://docs.unrealengine.com/latest/INT/Engine/UMG/UserGuide/Animation/index.html. Summary This article gave you some insight on Slate and the UMG Editor to create a number of UI elements and an animated main menu to tie your whole game together. We created a main menu and also learned how to make buttons do things. We spent some time looking at how we can animate menus by having them fly in and out. 
Resources for Article: Further resources on this subject: The Blueprint Class[article] Adding Fog to Your Games [article] Overview of Unreal Engine 4 [article]

Packt
23 Sep 2015
13 min read

Debugging Applications with PDB and Log Files

 In this article by Dan Nixon of the book Getting Started with Python and Raspberry Pi, we will learn more about how to debug Python code using the Python Debugger (PDB) tool and how we can use the Python logging framework to make complex applications written in Python easier to debug when they fail. (For more resources related to this topic, see here.) We will also look at the technique of unit testing and how the unittest Python module can be used to test small sections of a Python application to ensure that it is functioning as expected. These techniques are commonly used in applications written in other languages and are good skills to learn if you are often going to be developing applications. The Python debugger PDB is a tool that allows real time debugging of running Python code. It can help to track down issues with the logic of a program to help find the cause of a crash or unexpected behavior. PDB can be launched with the following command: pdb2.7 do_calculaton.py This will open a new PDB shell, as shown in the following screenshot: We can use the continue command (which can be shortened to c) to execute the next section of the code until a breakpoint is hit. As we are yet to declare any breakpoints, this will run the script until it exits normally, as shown in the following screenshot: We can set breakpoints in the application, where the program will be stopped, and you will be taken back to the PDB shell in order to debug the control flow of the program. The easiest way to set a breakpoint is by giving a specific line in a file, for example: break Operation.py:7 This command will add a breakpoint on line 7 of Operation.py. When this is added, PDB will confirm the file and the line number, as shown in the following screenshot: Now, when we run the application, we will see the program stop each time the breakpoint is reached. When a breakpoint is reached, we can resume the program using the c command: When paused at a breakpoint, we can view the details of the local variables in the current scope. For example, in the breakpoint we have added, there is a variable named name, which we can see the value of by using the following command: p name This outputs the value of the variable, as shown in the following screenshot: When at a breakpoint, we can also get a stack trace of the functions that have been called so far. This is done using the bt command and gives output like that shown in the following screenshot: We can also modify the values of the variables when paused at a breakpoint. To do this, simply assign a value to the variable name as you would in a regular Python script: name = 'subtract' In the following screenshot, this was used to change the first operation in the do_calculation.py script from add to subtract; the effect on the calculation is seen in the different result value: When at a breakpoint, we can also use the l command to see the current line the program is paused at. An example of this is shown in the following screenshot: We can also setup a series of commands to be executed when we hit a breakpoint. This can allow debugging to be automated to an extent by automatically recording or modifying the values of the variables at certain points in the program's execution. This can be demonstrated using the following commands on a new instance of PDB with no breakpoints set (first, quit PDB using the q command, and then re-launch it): break Operation.py:7 commands p name c This gives the following output. 
Note that the commands are entered on a terminal prefixed (com) rather than the PDB terminal prefixed (pdb). This set of commands tells PDB to print the value of the name variable and continue execution when the last added breakpoint was hit. This gives the output shown in the following screenshot: Within PDB, you can also use the ? command to get a full list of the available commands and help on using them, as shown in the following screenshot: Further information and full documentation on PDB is available at https://docs.python.org/2/library/pdb.html. Writing log files The next technique we will look at is having our application output a log file. This allows us to get a better understanding of what was happening at the time an application failed, which can provide key information into finding the cause of the failure, especially when the failure is being reported by a user of your application. We will add some logging statements to the Calculator.py and Operation.py files. To do this, we must first add the import for the logging module (https://docs.python.org/2/library/logging.html) to the start of each python file, which is simply: import logging In the Operation.py file, we will add two logging calls in the evaluate function, as shown in the following code: def evaluate(self, a, b): logging.getLogger(__name__).info("Evaluating operation: %s" % (self._operation)) logging.getLogger(__name__).debug("RHS: %f, LHS: %f" % (a, b)) This will output two logging statements: one at the debug level and one at the information level. There are in total five unique levels at which messages can be output. In increasing severity, they are: debug() info() warning() error() critical() Log handlers can be filtered to only process the log messages of a certain severity if required. We will see this in action later in this section. The logging.getLogger(__name__) call is used to retrieve the Logger class for the current module (where the name of the module is given by the __name__ variable). By default, each module uses its own Logger class identified by the name of the module. Next, we can add some debugging statements to the Calculator.py file in the same way. 
Here, we will add logging to the enter_value, enter_operation, evaluate, and all_clear functions, as shown in the following code snippet: def enter_value(self, value): if len(self._input_list) > 0 and not isinstance(self._input_list[-1], Operation): raise RuntimeError("Must enter an operation next") logging.getLogger(__name__).info("Adding value: %f" % (value)) self._input_list.append(float(value)) def enter_operation(self, operation_name): if len(self._input_list) == 0 or isinstance(self._input_list[-1], Operation): raise RuntimeError("Must enter a value next") logging.getLogger(__name__).info("Adding operation: %s" % (operation_name)) self._input_list.append(Operation(operation_name)) def evaluate(self): logging.getLogger(__name__).info("Evaluating calculation") if len(self._input_list) % 2 == 0: raise RuntimeError("Input length mismatch") self._result = self._input_list[0] for idx in range(1, len(self._input_list), 2): operation = self._input_list[idx] next_value = self._input_list[idx + 1] logging.getLogger(__name__).debug("Next function: %f %s %f" % ( self._result, str(operation), next_value)) self._result = operation.evaluate(self._result, next_value) logging.getLogger(__name__).info("Result is: %f" % (self._result)) return self._result def all_clear(self): logging.getLogger(__name__).info("Clearing calculator") self._input_list = [] self._result = 0.0 Finally, we need to configure a handler for the log messages. This is what will handle the messages sent by each logger and output them to a suitable destination; for example, the standard output or a file. We will configure this in the do_conversion.py file. First, we will configure a basic handler that will print all the log messages to the standard output so that they appear on the terminal. This can be achieved with the following code: logging.basicConfig(level=logging.DEBUG) We will also add the following line to the end of the script. This is used to close any open log handlers and should be included at the very end of an application (the logging framework should not be used after calling this function). logging.shutdown() Now, we can see the effects by running the script using the following command: python do_calculation.py This will give an output to the terminal, as shown in the following screenshot: We can also have the log output written to a file instead of printed to the terminal by adding a filename to the logger configuration. This helps to keep the terminal free of unnecessary information. logging.basicConfig(level=logging.DEBUG, filename='calc.log') When executed, this will give no additional output other than the result of the calculation, but will have created an additional file, calc.log, which contains the log messages, as shown in the following screenshot: Unit testing Unit testing is a technique for automated testing of small sections ("units") of code to ensure that the components of a larger application are working as intended, independently of each other. There are many frameworks for this in almost every language. In Python, we will be using the unittest module, as this is included with the language and is the most common framework used in the Python applications. To add unit tests to our calculator module, we will create an additional module in the same directory named test. Inside that will be three files: __init__.py (used to denote that a directory is a Python package), test_Calculator.py, and test_Operation.py. 
After creating this additional module, the structure of the code will be the same as shown in the following image: Next, we will modify the test_Operation.py file to include a test case for the Operation class. As always, this will start with the required imports for the modules we will be using: import unittest from calculator.Operation import Operation We will be creating a class, test_Operation, which inherits from the TestCase class provided by the unittest module. This contains the logic required to run the functions of the class as individual unit tests. class test_Operation(unittest.TestCase): Now, we will define four tests to test the creation of a new Operation instance for each of the operations that are supported by the class. Here, the assertEquals function is used to test for equality between two variables; this determines if the test passes or not. def test_create_add(self): op = Operation('add') self.assertEqual(str(op), 'add') def test_create_subtract(self): op = Operation('subtract') self.assertEqual(str(op), 'subtract') def test_create_multiply(self): op = Operation('multiply') self.assertEqual(str(op), 'multiply') def test_create_divide(self): op = Operation('divide') self.assertEqual(str(op), 'divide') In this test we are checking that a RuntimeError is raised when an unknown operation is given to the Operation constructor. We will do this using the assertRaises function. def test_create_fails(self): self.assertRaises(ValueError, Operation, 'not_a_function') Next, we will create four tests to ensure that each of the known operations evaluates to the correct result: def test_add(self): op = Operation('add') result = op.evaluate(5, 2) self.assertEqual(result, 7) def test_subtract(self): op = Operation('subtract') result = op.evaluate(5, 2) self.assertEqual(result, 3) def test_multiply(self): op = Operation('multiply') result = op.evaluate(5, 2) self.assertEqual(result, 10) def test_divide(self): op = Operation('divide') result = op.evaluate(5, 2) self.assertEqual(result, 2) This will form the test case for the Operation class. Typically, the test file for a module should have the name of the module prefixed by test, and the name of each test function within a test case class should start with test. Next, we will create a test case for the Calculator class in the test_Calculator.py file. This again starts by importing the required modules and defining the class: import unittest from calculator.Calculator import Calculator class test_Operation(unittest.TestCase): We will now add two test cases that test the correct handling of errors when operations and values are entered in the incorrect order. This time, we will use the assertRaises function to create a context to test for RuntimeError being raised. In this case, the error must be raised by any of the code within the context. def test_add_value_out_of_order_fails(self): with self.assertRaises(RuntimeError): calc = Calculator() calc.enter_value(5) calc.enter_value(5) calc.evaluate() def test_add_operation_out_of_order_fails(self): with self.assertRaises(RuntimeError): calc = Calculator() calc.enter_operation('add') calc.evaluate() This test is to ensure that the all_clear function works as expected. Note that, here, we have multiple test assertions in the function, and all assertions have to pass for the test to pass. 
def test_all_clear(self): calc = Calculator() calc.enter_value(5) calc.evaluate() self.assertEqual(calc.get_result(), 5) calc.all_clear() self.assertEqual(calc.get_result(), 0) This test ensured that the evaluate() function works as expected and checks the output of a known calculation. Note, here, that we are using the assertAlmostEqual function, which ensures that two numerical variables are equal within a given tolerance, in this case 13 decimal places. def test_evaluate(self): calc = Calculator() calc.enter_value(5.0) calc.enter_operation('multiply') calc.enter_value(2.0) calc.enter_operation('divide') calc.enter_value(5.0) calc.enter_operation('add') calc.enter_value(18.0) calc.enter_operation('subtract') calc.enter_value(5.0) self.assertAlmostEqual(calc.evaluate(), 15.0, 13) self.assertAlmostEqual(calc.get_result(), 15.0, 13) These two tests will test that the errors are handled correctly when the evaluate() function is called, when there are values missing from the input or the input is empty: def test_evaluate_failure_empty(self): with self.assertRaises(RuntimeError): calc = Calculator() calc.enter_operation('add') calc.evaluate() def test_evaluate_failure_missing_value(self): with self.assertRaises(RuntimeError): calc = Calculator() calc.enter_value(5) calc.enter_operation('add') calc.evaluate() That completes the test case for the Calculator class. Note that we have only used a small subset of the available test assertions over our two test classes. A full list of all the test assertions is available in the unittest module documentation at https://docs.python.org/2/library/unittest.html#test-cases. Once all the tests are written, they can be executed using the following command in the directory containing both the calculator and tests directories: python -m unittest discover -v Here, we have the unit test framework discover all the tests automatically (which is why following the expected naming convention of prefixing names with "test" is important). We also request verbose output with the -v parameter, which shows all the tests executed and their results, as shown in the following screenshot: Summary In this article, we looked at how the PDB tool can be used to find faults in Python code and applications. We also looked at using the logging module to have Python code output a log file during execution and how this can make debugging the failures easier, as well as automated unit testing for portions of the application. Resources for Article: Further resources on this subject: Basic Image Processing[article] IRemote Desktop to Your Pi from Everywhere[article] Scraping the Data [article]
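One point from the logging section above is worth a small supplement: each handler can filter messages independently. The following is a rough sketch (the file name and format string are arbitrary choices, not taken from the book) that sends INFO and above to the terminal while writing full DEBUG detail to a file:

import logging

# The root logger accepts everything; each handler then applies its own level filter.
root = logging.getLogger()
root.setLevel(logging.DEBUG)

# Console handler: INFO and above, to keep the terminal readable.
console = logging.StreamHandler()
console.setLevel(logging.INFO)

# File handler: full DEBUG output for post-mortem inspection.
file_handler = logging.FileHandler('calc_debug.log')
file_handler.setLevel(logging.DEBUG)

formatter = logging.Formatter('%(asctime)s %(name)s %(levelname)s: %(message)s')
console.setFormatter(formatter)
file_handler.setFormatter(formatter)

root.addHandler(console)
root.addHandler(file_handler)

Used in place of the single logging.basicConfig() call in do_calculation.py, a configuration along these lines keeps the terminal output uncluttered while still capturing every debug message on disk.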

Eugene Safronov
23 Sep 2015
7 min read

Introduction to React Native

React is an open-sourced JavaScript library made by Facebook for building UI applications. The project has a strong emphasis on the component-based approach and utilizes the full power of JavaScript for constructing all elements. The React Native project was introduced during the first React conference in January 2015. It allows you to build native mobile applications using the same concepts from React. In this post I am going to explain the main building blocks of React Native through the example of an iOS demo application. I assume that you have previous experience in writing web applications with React. Setup Please go through the getting started section on the React Native website if you would like to build an application on your machine. Quick start When all of the necessary tools are installed, let's initialize the new React application with the following command: react-native init LastFmTopArtists After the command fetches the code and the dependencies, you can open the new project (LastFmTopArtists/LastFmTopArtists.xcodeproj) in Xcode. Then you can build and run the app with cmd+R. You will see a similar screen on the iOS simulator: You can make changes in index.ios.js, then press cmd+R and see instant changes in the simulator. Demo app In this post I will show you how to build a list of popular artists using the Last.fm API. We will display them with the help of the ListView component and redirect to the artist page using WebView. First screen Let's start by adding a new screen to our application. For now, it will contain dummy text. Create a file named ArtistListScreen with the following code: var React = require('react-native'); var { ListView, StyleSheet, Text, View, } = React; class ArtistListScreen extends React.Component { render() { return ( <View style={styles.container}> <Text>Artist list would be here</Text> </View> ); } } var styles = StyleSheet.create({ container: { flex: 1, backgroundColor: 'white', marginTop: 64 } }) module.exports = ArtistListScreen; Here are some things to note: I declare React components with the ES6 class syntax. The ES6 destructuring assignment syntax is used for the React object declarations. FlexBox is the default layout system in React Native. Flex values can be either integers or doubles, indicating the relative size of the box. So, when you have multiple elements, they will fill the relative proportion of the view based on their flex value. ListView is declared but will be used later. From index.ios.js we call ArtistListScreen using the NavigatorIOS component: var React = require('react-native'); var ArtistListScreen = require('./ArtistListScreen'); var { AppRegistry, NavigatorIOS, StyleSheet } = React; var LastFmArtists = React.createClass({ render: function() { return ( <NavigatorIOS style={styles.container} initialRoute={{ title: "last.fm Top Artists", component: ArtistListScreen }} /> ); } }); var styles = StyleSheet.create({ container: { flex: 1, backgroundColor: 'white', }, }); Switch to the iOS Simulator, refresh with cmd+R, and you will see: ListView After we have got the empty screen, let's render some mock data in a ListView component. This component has a number of performance improvements, such as rendering only the visible elements and removing those that are off screen. 
The new version of ArtistListScreen looks like the following: class ArtistListScreen extends React.Component { constructor(props) { super(props) this.state = { isLoading: false, dataSource: new ListView.DataSource({ rowHasChanged: (row1, row2) => row1 !== row2 }) } } componentDidMount() { this.loadArtists(); } loadArtists() { this.setState({ dataSource: this.getDataSource([{name: 'Muse'}, {name: 'Radiohead'}]) }) } getDataSource(artists: Array<any>): ListView.DataSource { return this.state.dataSource.cloneWithRows(artists); } renderRow(artist) { return ( <Text>{artist.name}</Text> ); } render() { return ( <View style={styles.container}> <ListView dataSource={this.state.dataSource} renderRow={this.renderRow.bind(this)} automaticallyAdjustContentInsets={false} /> </View> ); } } Side notes: The DataSource is an interface that ListView uses to determine which rows have changed over the course of updates. The ES6 constructor is an analog of getInitialState. The end result of the changes: API token The Last.fm web API is free to use, but you will need a personal API token in order to access it. First, it is necessary to join Last.fm and then get an API account. Fetching real data I assume you have successfully set up the API account. Let's call a real web service using the fetch API: const API_KEY='put token here'; const API_URL = 'http://ws.audioscrobbler.com/2.0/?method=geo.gettopartists&country=ukraine&format=json&limit=40'; const REQUEST_URL = API_URL + '&api_key=' + API_KEY; loadArtists() { this.setState({ isLoading: true }); fetch(REQUEST_URL) .then((response) => response.json()) .catch((error) => { console.error(error); }) .then((responseData) => { this.setState({ isLoading: false, dataSource: this.getDataSource(responseData.topartists.artist) }) }) .done(); } After a refresh, the iOS simulator should display: ArtistCell Since we have real data, it is time to add the artists' images and ranks to the display. Let's move the artist cell display logic into a separate component, ArtistCell: 'use strict'; var React = require('react-native'); var { Image, View, Text, TouchableHighlight, StyleSheet } = React; class ArtistCell extends React.Component { render() { return ( <View> <View style={styles.container}> <Image source={{uri: this.props.artist.image[2]["#text"]}} style={styles.artistImage} /> <View style={styles.rightContainer}> <Text style={styles.rank}>## {this.props.artist["@attr"].rank}</Text> <Text style={styles.name}>{this.props.artist.name}</Text> </View> </View> <View style={styles.separator}/> </View> ); } } var styles = StyleSheet.create({ container: { flex: 1, flexDirection: 'row', justifyContent: 'center', alignItems: 'center', padding: 5 }, artistImage: { height: 84, width: 126, marginRight: 10 }, rightContainer: { flex: 1 }, name: { textAlign: 'center', fontSize: 14, color: '#999999' }, rank: { textAlign: 'center', marginBottom: 2, fontWeight: '500', fontSize: 16 }, separator: { height: 1, backgroundColor: '#E3E3E3', flex: 1 } }) module.exports = ArtistCell; Changes in ArtistListScreen: // declare new component var ArtistCell = require('./ArtistCell'); // use it in renderRow method: renderRow(artist) { return ( <ArtistCell artist={artist} /> ); } Press cmd+R in the iOS Simulator: WebView The last piece of the application would be to open a web page by clicking in ListView. 
Declare a new Web component that wraps WebView: 'use strict'; var React = require('react-native'); var { View, WebView, StyleSheet } = React; class Web extends React.Component { render() { return ( <View style={styles.container}> <WebView url={this.props.url}/> </View> ); } } var styles = StyleSheet.create({ container: { flex: 1, backgroundColor: '#F6F6EF', flexDirection: 'column', }, }); Web.propTypes = { url: React.PropTypes.string.isRequired }; module.exports = Web; Then, by using TouchableHighlight, we will call onOpenPage from ArtistCell: class ArtistCell extends React.Component { render() { return ( <View> <TouchableHighlight onPress={this.props.onOpenPage} underlayColor='transparent'> <View style={styles.container}> <Image source={{uri: this.props.artist.image[2]["#text"]}} style={styles.artistImage} /> <View style={styles.rightContainer}> <Text style={styles.rank}>## {this.props.artist["@attr"].rank}</Text> <Text style={styles.name}>{this.props.artist.name}</Text> </View> </View> </TouchableHighlight> <View style={styles.separator}/> </View> ); } } Finally, open the web page from the ArtistListScreen component: // declare new component var WebView = require('WebView'); class ArtistListScreen extends React.Component { // will be called on touch from ArtistCell openPage(url) { this.props.navigator.push({ title: 'Web View', component: WebView, passProps: {url} }); } renderRow(artist) { return ( <ArtistCell artist={artist} // specify artist's url on render onOpenPage={this.openPage.bind(this, artist.url)} /> ); } } Now a touch on any cell in the ListView will load a web page for the selected artist: Conclusion You can explore the source code of the app in the GitHub repo. For me, it was real fun to play with React Native. I found debugging in Chrome and the error stack messages extremely easy to work with. By using React's component-based approach, you can build a complex UI without much effort. I highly recommend exploring this technology for rapid prototyping and maybe for your next awesome project. Useful links Building a flashcard app with React Native Examples of React Native apps React Native Videos Video course on React Native Want more JavaScript? Visit our dedicated page here. About the author Eugene Safronov is a software engineer with a proven record of delivering high-quality software. He has extensive experience building successful teams and adjusting development processes to a project's needs. His primary focuses are the web (.NET, Node.js stacks) and cross-platform mobile development (native and hybrid). He can be found on Twitter @sejoker.
Packt
22 Sep 2015
8 min read

Enhancing Your Blog with Advanced Features

In this article by Antonio Melé, the author of the Django by Example book shows how to use the Django forms, and ModelForms. You will let your users share posts by e-mail, and you will be able to extend your blog application with a comment system. You will also learn how to integrate third-party applications into your project, and build complex QuerySets to get useful information from your models. In this article, you will learn how to add tagging functionality using a third-party application. (For more resources related to this topic, see here.) Adding tagging functionality After implementing our comment system, we are going to create a system for adding tags to our posts. We are going to do this by integrating in our project a third-party Django tagging application. django-taggit is a reusable application that primarily offers you a Tag model, and a manager for easily adding tags to any model. You can take a look at its source code at https://github.com/alex/django-taggit. First, you need install django-taggit via pip by running the pip install django-taggit command. Then, open the settings.py file of the project, and add taggit to your INSTALLED_APPS setting as the following: INSTALLED_APPS = ( # ... 'mysite.blog', 'taggit', ) Then, open the models.py file of your blog application, and add to the Post model the TaggableManager manager, provided by django-taggit as the following: from taggit.managers import TaggableManager # ... class Post(models.Model): # ... tags = TaggableManager() You just added tags for this model. The tags manager will allow you to add, retrieve, and remove tags from the Post objects. Run the python manage.py makemigrations blog command to create a migration for your model changes. You will get the following output: Migrations for 'blog': 0003_post_tags.py: Add field tags to post Now, run the python manage.py migrate command to create the required database tables for django-taggit models and synchronize your model changes. You will see an output indicating that the migrations have been applied: Operations to perform: Apply all migrations: taggit, admin, blog, contenttypes, sessions, auth Running migrations: Applying taggit.0001_initial... OK Applying blog.0003_post_tags... OK Your database is now ready to use django-taggit models. Open the terminal with the python manage.py shell command, and learn how to use the tags manager. First, we retrieve one of our posts (the one with the ID as 1): >>> from mysite.blog.models import Post >>> post = Post.objects.get(id=1) Then, add some tags to it and retrieve its tags back to check that they were successfully added: >>> post.tags.add('music', 'jazz', 'django') >>> post.tags.all() [<Tag: jazz>, <Tag: django>, <Tag: music>] Finally, remove a tag and check the list of tags again: >>> post.tags.remove('django') >>> post.tags.all() [<Tag: jazz>, <Tag: music>] This was easy, right? Run the python manage.py runserver command to start the development server again, and open http://127.0.0.1:8000/admin/taggit/tag/ in your browser. You will see the admin page with the list of the Tag objects of the taggit application: Navigate to http://127.0.0.1:8000/admin/blog/post/ and click on a post to edit it. You will see that the posts now include a new Tags field as the following one where you can easily edit tags: Now, we are going to edit our blog posts to display the tags. 
Open the blog/post/list.html template and add the following HTML code below the post title: <p class="tags">Tags: {{ post.tags.all|join:", " }}</p> The join template filter works as the Python string join method to concatenate elements with the given string. Open http://127.0.0.1:8000/blog/ in your browser. You will see the list of tags under each post title: Now, we are going to edit our post_list view to let users see all posts tagged with a tag. Open the views.py file of your blog application, import the Tag model form django-taggit, and change the post_list view to optionally filter posts by tag as the following: from taggit.models import Tag def post_list(request, tag_slug=None): post_list = Post.published.all() if tag_slug: tag = get_object_or_404(Tag, slug=tag_slug) post_list = post_list.filter(tags__in=[tag]) # ... The view now takes an optional tag_slug parameter that has a None default value. This parameter will come in the URL. Inside the view, we build the initial QuerySet, retrieving all the published posts. If there is a given tag slug, we get the Tag object with the given slug using the get_object_or_404 shortcut. Then, we filter the list of posts by the ones which tags are contained in a given list composed only by the tag we are interested in. Remember that QuerySets are lazy. The QuerySet for retrieving posts will only be evaluated when we loop over the post list to render the template. Now, change the render function at the bottom of the view to pass all the local variables to the template using locals(). The view will finally look as the following: def post_list(request, tag_slug=None): post_list = Post.published.all() if tag_slug: tag = get_object_or_404(Tag, slug=tag_slug) post_list = post_list.filter(tags__in=[tag]) paginator = Paginator(post_list, 3) # 3 posts in each page page = request.GET.get('page') try: posts = paginator.page(page) except PageNotAnInteger: # If page is not an integer deliver the first page posts = paginator.page(1) except EmptyPage: # If page is out of range deliver last page of results posts = paginator.page(paginator.num_pages) return render(request, 'blog/post/list.html', locals()) Now, open the urls.py file of your blog application, and make sure you are using the following URL pattern for the post_list view: url(r'^$', post_list, name='post_list'), Now, add another URL pattern as the following one for listing posts by tag: url(r'^tag/(?P<tag_slug>[-w]+)/$', post_list, name='post_list_by_tag'), As you can see, both the patterns point to the same view, but we are naming them differently. The first pattern will call the post_list view without any optional parameters, whereas the second pattern will call the view with the tag_slug parameter. Let’s change our post list template to display posts tagged with a specific tag, and also link the tags to the list of posts filtered by this tag. Open blog/post/list.html and add the following lines before the for loop of posts: {% if tag %} <h2>Posts tagged with "{{ tag.name }}"</h2> {% endif %} If the user is accessing the blog, he will the list of all posts. If he is filtering by posts tagged with a specific tag, he will see this information. 
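Before changing how the tags themselves are rendered in the next step, it can be useful to sanity-check the new filtering behavior from the Django shell (python manage.py shell). The following is a rough sketch that assumes the 'jazz' tag added in the earlier shell session still exists:

>>> from taggit.models import Tag
>>> from mysite.blog.models import Post
>>> tag = Tag.objects.get(slug='jazz')
>>> Post.published.filter(tags__in=[tag])

This mirrors what the post_list view now does when it receives a tag_slug, so the QuerySet returned here should contain the same posts that are rendered at the corresponding /blog/tag/jazz/ URL.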
Now, change the way the tags are displayed into the following: <p class="tags"> Tags: {% for tag in post.tags.all %} <a href="{% url "blog:post_list_by_tag" tag.slug %}">{{ tag.name }}</a> {% if not forloop.last %}, {% endif %} {% endfor %} </p> Notice that now we are looping through all the tags of a post, and displaying a custom link to the URL for listing posts tagged with this tag. We build the link with {% url "blog:post_list_by_tag" tag.slug %} using the name that we gave to the URL, and the tag slug as parameter. We separate the tags by commas. The complete code of your template will look like the following: {% extends "blog/base.html" %} {% block title %}My Blog{% endblock %} {% block content %} <h1>My Blog</h1> {% if tag %} <h2>Posts tagged with "{{ tag.name }}"</h2> {% endif %} {% for post in posts %} <h2><a href="{{ post.get_absolute_url }}">{{ post.title }}</a></h2> <p class="tags"> Tags: {% for tag in post.tags.all %} <a href="{% url "blog:post_list_by_tag" tag.slug %}">{{ tag.name }}</a> {% if not forloop.last %}, {% endif %} {% endfor %} </p> <p class="date">Published {{ post.publish }} by {{ post.author }}</p> {{ post.body|truncatewords:30|linebreaks }} {% endfor %} {% include "pagination.html" with page=posts %} {% endblock %} Open http://127.0.0.1:8000/blog/ in your browser, and click on any tag link. You will see the list of posts filtered by this tag as the following: Summary In this article, you added tagging to your blog posts by integrating a reusable application. The book Django By Example, hands-on-guide will also show you how to integrate other popular technologies with Django in a fun and practical way. Resources for Article: Further resources on this subject: Code Style in Django[article] So, what is Django? [article] Share and Share Alike [article]

Packt
22 Sep 2015
11 min read

Stata as Data Analytics Software

In this article by Prasad Kothari, the author of the book Data Analysis with STATA, the overall goal is to cover the STATA related topics such as data management, graphs and visualization and programming in STATA. The article will give a detailed description of STATA starting with an introduction to STATA and Data analytics and then talks about STATA programming and data management. After which it takes you through Data visualization and all the important statistical tests in STATA. Then the article will cover the Linear and the Logistics regression in STATA and in the end it will take you through few analyses like Survey analysis, Time Series analysis and Survival analysis in STATA. It also teaches different types of statistical modelling techniques and how to implement these techniques in STATA. (For more resources related to this topic, see here.) These days, many people use Stata for econometric and medical research purposes, among other things. There are many people who use different packages, such as Statistical Package for the Social Sciences (SPSS) and EViews, Micro, RATS/CATS (used by time series experts), and R for Matlab/Guass/Fortan (used for hardcore analysis). One should know the usage of Stata and then apply it in their relative fields. Stata is a command-driven language; there are over 500 different commands and menu options, and each has a particular syntax required to invoke any of the various options. Learning these commands is a time-consuming process, but it is not hard. At the end of each class, your do-file will contain all the commands that we have covered, but there is no way we will cover all of these commands in this short introductory course. Stata is a combined statistical analytical tool that is intended for use by research scholars and analytics practitioners. Stata has many strengths, but we are going to talk about the most important one: managing, adjusting, and arranging large sets of data. Stata has many versions, and with every version, it keeps on improving; for example, in Stata versions 11 to 14, there are changes and progress in the computing speed, capabilities and functionalities, as well as flexible graphic capabilities. Over a period of time, Stata keeps on changing and updating the model as per users' suggestions. In short, the regression method is based on a nonstandard feature, which means that you can easily get help from the Web if another person has written a program that can be integrated with their software for the purpose of analysis. The following topics will be covered in this articler: Introducing Data analytics Introducing the Stata interface and basic techniques Introducing data analytics We analyze data everyday for various reasons. To predict an event or forecast the key indicators, such as the revenue for given organization, is fast becoming a major requirement in the industry. There are various types of techniques and tools that can be leveraged to analyze the data. Here are the techniques that will be covered in this article using Stata as a tool: Stata Programming and Data management: Before predicting anything, we need to manage and massage the data in order to make it good enough to be something through which insights can be derived. The programming aspect helps in creating new variables to treat data in such a way that finding patterns in historical data or predicting the outcome of given event becomes much easier. 
Data visualization: After the data preparation, we need to visualize the data for the following reasons:

To view what patterns in the data look like
To check whether there are any outliers in the data
To understand the data better
To draw preliminary insights from the data

Important statistical tests in Stata: After data visualization, and based on your observations, you can try to come up with various hypotheses about the data. We need to test these hypotheses on the datasets to check whether they are statistically significant and whether we can depend on and apply them in future situations as well.

Linear regression in Stata: Once done with hypothesis testing, there is always a business need to predict one of the variables, such as what the revenue of a financial organization will be under specific conditions, and so on. These predictions about continuous variables, such as the revenue, the default amount on a credit card, and the number of items sold in a given store, come through linear regression. Linear regression is the most basic and widely used prediction methodology. We will go into the details of linear regression in a later chapter.

Logistic regression in Stata: When you need to predict the outcome of a particular event along with its probability, logistic regression is the best and most acknowledged method by far. Predicting which team will win a football or cricket match, or predicting whether a customer will default on a loan payment, can be decided through the probabilities given by logistic regression.

Survey analysis in Stata: Understanding customer sentiment and consumer experience is one of the biggest requirements of the retail industry. The research industry also needs data about people's opinions in order to derive the effect of a certain event or the sentiments of the affected people. All of this can be achieved by conducting and analyzing surveys. Survey analysis has various subtechniques, such as factor analysis, principal component analysis, panel data analysis, and so on.

Time series analysis in Stata: When you try to forecast a time-dependent variable with reasonably cyclic or seasonal behavior, time series analysis comes in handy. There are many techniques of time series analysis, but we will talk about a couple of them: Autoregressive Integrated Moving Average (ARIMA) and Box-Jenkins. Forecasting the amount of rainfall based on the rainfall of the past 5 years is a classic time series analysis problem.

Survival analysis in Stata: These days, lots of customers leave telecom plans, healthcare plans, and so on and join the competitors. When you need to develop a churn or attrition model to check who will attrite, survival analysis is the best approach.

The Stata interface

Let's discuss the location and layout of Stata. It is very easy to locate Stata on a computer or laptop: after installing the software, go to the start menu, open the search box, and type Stata. You can find the path where the program is installed; this depends on which version has been installed. Another way to find Stata on a computer is through the quick launch button, as well as through the start programs list.

The preceding diagram represents the Stata layout. The four flavors of Stata are Stata/MP (multiprocessor, for two or four cores), Stata/SE (special edition), Intercooled Stata (Stata/IC), and Small Stata. Stata/MP is the most efficient of these.
Though all flavors function in a similar fashion, the number of variables and regressors that they can handle increases as you move up the editions. At present, Stata version 11 is in wide use on various computers. It is a piece of software that runs on commands. The newer versions of Stata add other ways of working, such as menus and a search facility; however, typing a command is the simplest and quickest way to learn Stata. The more you leverage typed commands, the better your learning will be, and programming for analytics becomes easy and simple. Sometimes, it is difficult to remember the exact syntax of a command; in that case, it is advisable to use the menus, and later on you can copy the generated command for further use.

There are three ways to enter commands, as follows:

Use a do-file. This is a type of program in which one has to inform the computer (through a command) that it needs to run the do-file.
Type the command manually.
Enter the command interactively by clicking through the menu screen.

Though all three types discussed in the preceding list are used, the do-file is the most frequently used one. The reason is that, for a bigger piece of work, it is faster as compared to manual typing. Secondly, it stores the commands and keeps them in the same form in which they were written. Suppose you make a mistake and want to rectify it; what would you do? In this case, a do-file is useful: you can correct it and run the program once again. Generally, an interactive command is used to find out the problem, and later on a do-file is used to solve it.

Data-storing techniques in Stata

Stata is a multipurpose program, which can read not only its own data, but also data in other simple formats, for example, ASCII. Regardless of the original format (Excel or another statistical package), data can be exported to an ASCII file, which means that it can then easily be imported into Stata. The data entered in Stata is held in variables: vectors with an individual observation in every row, holding strings or numeric values. Every row is a detailed observation of the individual, country, firm, or whatever entity is entered in Stata. As the data is stored in variables, this makes variables the most efficient way to store information in Stata.

Sometimes, it is better to save the data in a different storage form, such as the following:

Matrices
Macros

Matrices should be used carefully, as they consume more memory as compared to variables, so there might be a possibility of running low on memory before work is started. Another form is macros; these are similar to variables in other programming languages and are named containers, which means they can contain information of any type. There are two flavors of macros: local/temporary and global. Global macros are flexible and easy to manage; once they are defined on a computer or laptop, they can be accessed by all commands. On the other hand, local macros are temporary objects that are formed for a particular environment and cannot be used in another area. For example, if you use a local macro in a do-file, that macro will only exist within that particular do-file.

Directories and folders in Stata

Stata has a tree-style structure to organize directories as well as folders, similar to other operating systems, such as Windows, Linux, Unix, and Mac OS.
This makes it easy to organize your work and retrieve it later whenever convenient. For example, a data folder can be used to save entire datasets, with subfolders for every single dataset, and so on. Stata accepts DOS-, Linux-, and Unix-style commands for working with directories. For example, if you need to change the directory, you can use the cd command:

cd C:\Statafolder

You can also create a new directory inside the current directory. For example:

mkdir "newstata"

You can leverage the dir command to list the contents of the directory. If you need the name and path of the current directory, you can utilize the pwd or cd command.

The use of paths in Stata depends on the situation; usually, there are two kinds of path: absolute and relative. The absolute path contains the full address denoting the folder. In the earlier example, we leveraged the cd command using an absolute path. On the contrary, the relative path gives the location of a file relative to the current folder. The following example of mkdir uses a relative path:

mkdir "Stata\Stata1"

Using relative paths is beneficial, especially when working on different devices, such as a PC at home, a library machine, or a server. To separate folders, Windows and DOS use a backslash (\), whereas Linux and Unix use a slash (/). Sometimes, these conventions might be troublesome when working on the server where Stata is installed. As a general rule, it is advisable to use slashes in relative paths, as Stata understands the slash as a separator on all platforms. The following is an example of this:

mkdir "/Stata1/Data"

This is how you create a new folder for your Stata work.

Summary

In this article, we discussed the basic commands that can be leveraged while performing Stata programming. Read Data Analysis with Stata to learn about the different data management techniques and programming in detail. As you learn more about Stata, you will understand the various commands and functions and their business applications.

Resources for Article: Further resources on this subject: Big Data Analysis (R and Hadoop) [article] Financial Management with Microsoft Dynamics AX 2012 R3 [article] Taming Big Data using HDInsight [article]

Introduction to Penetration Testing and Kali Linux

Packt
22 Sep 2015
4 min read
In this article by Juned A Ansari, the author of the book Web Penetration Testing with Kali Linux, Second Edition, we will learn about the following topics:

Introduction to penetration testing
An Overview of Kali Linux
Using Tor for penetration testing

(For more resources related to this topic, see here.)

Introduction to penetration testing

Penetration testing, or ethical hacking, is a proactive way of testing your web applications by simulating an attack that's similar to a real attack that could occur on any given day. We will use the tools provided in Kali Linux to accomplish this. Kali Linux is the rebranded version of Backtrack and is now a Debian-derived Linux distribution. It comes preinstalled with a large list of popular hacking tools that are ready to use, with all the prerequisites installed. We will delve deep into the tools that help pentest web applications, and also attack websites in a lab that is vulnerable to the major flaws found in real-world web applications.

An Overview of Kali Linux

Kali Linux is a security-focused Linux distribution based on Debian. It's a rebranded version of the famous Linux distribution known as Backtrack, which came with a huge repository of open source hacking tools for network, wireless, and web application penetration testing. Although Kali Linux contains most of the tools from Backtrack, its main aim is to be portable so that it can be installed on devices based on ARM architectures, such as tablets and Chromebooks, which puts the tools at your disposal with much more ease.

Using open source hacking tools comes with a major drawback: they pull in a whole lot of dependencies when installed on Linux, and they need to be installed in a predefined sequence; moreover, the authors of some tools have not released accurate documentation, which makes our life difficult. Kali Linux simplifies this process; it contains many tools preinstalled with all their dependencies, in ready-to-use condition, so that you can pay more attention to the actual attack and not to installing the tools. Updates for the tools installed in Kali Linux are released frequently, which helps you keep the tools up to date. A noncommercial toolkit that has all the major hacking tools preinstalled to test real-world networks and applications is the dream of every ethical hacker, and the authors of Kali Linux make every effort to make our life easy, enabling us to spend more time on finding the actual flaws rather than on building a toolkit.

Using Tor for penetration testing

The main aim of a penetration test is to hack into a web application in the way that a real-world malicious hacker would. Tor provides an interesting option for emulating the steps that a black hat hacker uses to protect his identity and location. Although an ethical hacker trying to improve the security of a web application should not need to be concerned about hiding his location, Tor gives an additional option of testing the edge security systems, such as network firewalls, web application firewalls, and IPS devices. Black hat hackers try every method to protect their location and true identity; they do not use a permanent IP address and constantly change it to fool cybercrime investigators. You will find port scanning requests coming from different ranges of IP addresses, and the actual exploitation will have a source IP address that your edge security systems are logging for the first time.
With the necessary written approval from the client, you can use Tor to emulate an attacker by connecting to the web application from an unknown IP address that the system does not usually see connections from. Using Tor makes it more difficult to trace the intrusion attempt back to the actual attacker. Tor uses a virtual circuit of interconnected network relays to bounce encrypted data packets. The encryption is multilayered, and the final network relay releasing the data to the public Internet cannot identify the source of the communication, as the entire packet was encrypted and only a part of it is decrypted at each node. The destination computer sees the final exit point of the data packet as the source of the communication, thus protecting the real identity and location of the user. The following figure shows the working of Tor:

Summary

This article served as an introduction to penetration testing of web applications and Kali Linux. At the end, we looked at how to use Tor for penetration testing.

Resources for Article: Further resources on this subject: An Introduction to WEP [article] WLAN Encryption Flaws [article] What is Kali Linux [article]

Internet Connected Smart Water Meter

Packt
22 Sep 2015
13 min read
In this article by Pradeeka Seneviratne, author of the book Internet of Things with Arduino Blueprints, we see that for many years, and even now, water meter readings have been collected manually. To do this, a person has to visit the location where the water meter is installed. In this article, we learn how to make a smart water meter with an LCD screen that has the ability to connect to the Internet wirelessly and serve meter readings to the utility company as well as the consumer. (For more resources related to this topic, see here.)

In this article, we will:

Learn about water flow meters and their basic operation
Learn how to mount and plumb a water flow meter to the pipeline
Read and count water flow sensor pulses
Calculate water flow rate and volume
Learn about LCD displays and connecting them with Arduino
Convert a water flow meter to a simple web server and serve meter readings over the Internet

Prerequisites

The following are the prerequisites:

One Arduino UNO board (the latest version is REV 3)
One Arduino Wi-Fi Shield (the latest version is REV 3)
One Adafruit Liquid Flow Meter or a similar one
One Hitachi HD44780 driver-compatible LCD screen (16x2)
One 10K ohm resistor
One 10K ohm potentiometer
A few jumper wires with male and female headers (https://www.sparkfun.com/products/9140)

Water Flow Meters

The heart of a water flow meter is a Hall effect sensor that outputs pulses as the magnetic field changes. Inside the housing, there is a small pinwheel with a permanent magnet attached. When water flows through the housing, the pinwheel begins to spin, and the magnet attached to it passes very close to the Hall effect sensor in every cycle. The Hall effect sensor is covered with a separate plastic housing to protect it from the water. The result is an electric pulse that transitions from low voltage to high voltage, or from high voltage to low voltage, depending on the attached permanent magnet's polarity. The resulting pulses can be read and counted using Arduino. For this project, we will be using the Adafruit Liquid Flow Meter. You can visit the product page at http://www.adafruit.com/products/828. The following image shows the Adafruit Liquid Flow Meter:

This image is taken from http://www.adafruit.com/products/828

Pinwheel attached inside the water flow meter

A little bit about plumbing

Typically, the direction of the water flow is indicated by an arrow mark on top of the water flow meter's enclosure. Also, you can mount the water flow meter either horizontally or vertically according to its specifications. Some water flow meters can be mounted both horizontally and vertically. You can install your water flow meter on a half-inch pipeline using normal BSP pipe connectors. The outer diameter of the connector is 0.78" and the inner thread size is half an inch. The water flow meter has threaded ends on both sides. Connect the threaded side of the PVC connectors to both ends of the water flow meter. Use thread seal tape to seal the connection, and then connect the other ends to an existing half-inch pipeline using PVC pipe glue or solvent cement. Make sure to connect the water flow meter to the pipeline in the correct direction; see the arrow mark on top of the water flow meter for the flow direction.

BSP pipeline connector made of PVC

Securing the connection between the water flow meter and the BSP pipe connector using thread seal tape

PVC solvent cement used to secure the connection between the pipeline and the BSP pipe connector.
Wiring the water flow meter with Arduino

The water flow meter that we are using in this project has three wires, which are as follows:

The red wire indicates the positive terminal
The black wire indicates the negative terminal
The yellow wire indicates the DATA terminal

All three wire ends are connected to a JST connector. Always refer to the datasheet before connecting them to the microcontroller and the power source. Use jumper wires with male and female headers as follows:

Connect the positive terminal of the water flow meter to Arduino 5V.
Connect the negative terminal of the water flow meter to Arduino GND.
Connect the DATA terminal of the water flow meter to Arduino digital pin 2 through a 10K ohm resistor.

You can power the water flow sensor directly from the Arduino, since most residential-type water flow sensors operate under 5V and consume a very low amount of current. Read the product manual for more information about the supply voltage and supply current range to save your Arduino from high current consumption by the water flow sensor. If your water flow sensor requires a supply current of more than 200mA or a supply voltage of more than 5V to function correctly, use a separate power source with it. The following image illustrates jumper wires with male and female headers:

Reading pulses

The water flow meter produces digital pulses according to the amount of water flowing through it, and these pulses can be detected and counted using Arduino. According to the datasheet, the water flow meter that we are using for this project will generate approximately 450 pulses per liter. So, 1 pulse approximately equals [1000 ml / 450 pulses] 2.22 ml. These values can differ depending on the speed of the water flow and the mounting orientation. Arduino can read the digital pulses generated by the water flow meter through the DATA line.

Rising edge and falling edge

There are two types of pulses, which are as follows:

Positive-going pulse: In an idle state, the logic level is normally LOW. It goes to the HIGH state, stays at the HIGH state for time t, and comes back to the LOW state.
Negative-going pulse: In an idle state, the logic level is normally HIGH. It goes to the LOW state, stays at the LOW state for time t, and comes back to the HIGH state.

The rising edge and falling edge of a pulse are vertical. The transition from the LOW state to the HIGH state is called the RISING edge, and the transition from the HIGH state to the LOW state is called the FALLING edge. You can capture digital pulses using the rising edge or the falling edge, and in this project, we will be using the rising edge.

Reading and counting pulses with Arduino

In the previous section, you attached the water flow meter to the Arduino. The pulse can be read on digital pin 2, and interrupt 0 is attached to digital pin 2. The following sketch counts pulses per second and displays the count on the Arduino Serial Monitor. Using the Arduino IDE, upload the following sketch into your Arduino board:

int pin = 2;
volatile int pulse;
const int pulses_per_litre = 450;

void setup() {
  Serial.begin(9600);
  pinMode(pin, INPUT);
  attachInterrupt(0, count_pulse, RISING);
}

void loop() {
  pulse = 0;
  interrupts();
  delay(1000);
  noInterrupts();
  Serial.print("Pulses per second: ");
  Serial.println(pulse);
}

void count_pulse() {
  pulse++;
}

Calculating the water flow rate

The water flow rate is the amount of water flowing at a given time and can be expressed in gallons per second or liters per second.
The number of pulses generated per liter of water flowing through the sensor can be found in the water flow sensor's specification sheet; let's say m. You can also count the number of pulses generated by the sensor per second; let's say n. Thus, the water flow rate R can be expressed as follows:

R = n / m (litres per second)

The water flow rate is measured in liters per second. Also, you can calculate the water flow rate in liters per minute as follows:

R = (n / m) * 60 (litres per minute)

For example, if your water flow sensor generates 450 pulses for one liter of water flowing through it and you get 10 pulses for the first second, then the elapsed water flow rate is 10/450 = 0.022 liters per second, or 0.022 * 1000 = 22 milliliters per second. Using your Arduino IDE, upload the following sketch into your Arduino board. It will output the water flow rate in liters per second on the Arduino Serial Monitor (the pulse count is cast to float so that the integer division does not truncate the result):

int pin = 2;
volatile int pulse;
const int pulses_per_litre = 450;

void setup() {
  Serial.begin(9600);
  pinMode(pin, INPUT);
  attachInterrupt(0, count_pulse, RISING);
}

void loop() {
  pulse = 0;
  interrupts();
  delay(1000);
  noInterrupts();
  Serial.print("Pulses per second: ");
  Serial.println(pulse);
  Serial.print("Water flow rate: ");
  Serial.print((float) pulse / pulses_per_litre);
  Serial.println(" litres per second");
}

void count_pulse() {
  pulse++;
}

Calculating water flow volume

The water flow volume can be calculated by adding up the flow rate measured each second, and can be expressed as follows:

Volume = ∑ Flow Rates

The following Arduino sketch will calculate and output the total water volume since startup. Upload the sketch into your Arduino board using the Arduino IDE.

int pin = 2;
volatile int pulse;
float volume = 0;
float flow_rate = 0;
const int pulses_per_litre = 450;

void setup() {
  Serial.begin(9600);
  pinMode(pin, INPUT);
  attachInterrupt(0, count_pulse, RISING);
}

void loop() {
  pulse = 0;
  interrupts();
  delay(1000);
  noInterrupts();
  Serial.print("Pulses per second: ");
  Serial.println(pulse);
  flow_rate = (float) pulse / pulses_per_litre;
  Serial.print("Water flow rate: ");
  Serial.print(flow_rate);
  Serial.println(" litres per second");
  // volume accumulates across loop iterations
  volume = volume + flow_rate;
  Serial.print("Volume: ");
  Serial.print(volume);
  Serial.println(" litres");
}

void count_pulse() {
  pulse++;
}

To measure the water flow rate and volume accurately, the water flow meter will need careful calibration. The sensor inside the water flow meter is not a precision sensor, and the pulse rate does vary a bit depending on the flow rate, fluid pressure, and sensor orientation.

Adding an LCD screen to the water meter

You can add an LCD screen to your water meter to display readings rather than displaying them on the Arduino Serial Monitor. You can then disconnect your water meter from the computer after uploading the sketch onto your Arduino. Using a Hitachi HD44780 driver-compatible LCD screen and the Arduino LiquidCrystal library, you can easily integrate it with your water meter. Typically, this type of LCD screen has 16 interface connectors. The display has 2 rows and 16 columns, so each row can display up to 16 characters. Use the 10K potentiometer to control the contrast of the LCD screen. Perform the following steps to connect your LCD screen to your Arduino:

LCD RS pin to digital pin 8
LCD Enable pin to digital pin 7
LCD D4 pin to digital pin 6
LCD D5 pin to digital pin 5
LCD D6 pin to digital pin 4
LCD D7 pin to digital pin 3
Wire a 10K pot to +5V and GND, with its wiper (output) to the LCD screen's VO pin (pin 3).
Now, upload the following sketch into your Arduino board using the Arduino IDE, and then remove the USB cable from your computer. Make sure that water is flowing through the water meter and press the Arduino reset button. You can see the number of pulses per second, the water flow rate per second, and the total water volume since startup displayed on the LCD screen.

#include <LiquidCrystal.h>

int pin = 2;
volatile int pulse;
float volume = 0;
float flow_rate = 0;
const int pulses_per_litre = 450;

// initialize the library with the numbers of the interface pins
LiquidCrystal lcd(8, 7, 6, 5, 4, 3);

void setup() {
  Serial.begin(9600);
  pinMode(pin, INPUT);
  attachInterrupt(0, count_pulse, RISING);
  // set up the LCD's number of columns and rows:
  lcd.begin(16, 2);
  // Print a message to the LCD.
  lcd.print("Welcome");
}

void loop() {
  pulse = 0;
  interrupts();
  delay(1000);
  noInterrupts();
  // first row: pulses per second
  lcd.setCursor(0, 0);
  lcd.print("Pulses/s: ");
  lcd.print(pulse);
  // second row: flow rate and accumulated volume
  flow_rate = (float) pulse / pulses_per_litre;
  lcd.setCursor(0, 1);
  lcd.print(flow_rate, 2);
  lcd.print(" l/s");
  volume = volume + flow_rate;
  lcd.setCursor(8, 1);
  lcd.print(volume, 2);
  lcd.print(" l");
}

void count_pulse() {
  pulse++;
}

Converting your water meter to a web server

In the previous steps, you learned how to display your water flow sensor's readings and calculate the water flow rate and total volume on the Arduino Serial Monitor. In this step, we learn how to integrate a simple web server with your water flow sensor and read your water flow sensor's readings remotely. You can make a wireless web server with the Arduino Wi-Fi shield, or an Ethernet-connected web server with the Arduino Ethernet shield. Remove all the wires you have connected to your Arduino in the previous sections of this article. Stack the Arduino Wi-Fi shield on the Arduino board using wire-wrap headers. Make sure the Wi-Fi shield is properly seated on the Arduino board. Now reconnect the wires from the water flow sensor to the Wi-Fi shield, using the same pin numbers as in the previous step. Connect a 9V DC power supply to the Arduino board. Connect your Arduino to your PC using the USB cable and upload the following sketch. Once the upload is complete, remove the USB cable from the water flow meter.
Upload the following Arduino sketch into your Arduino board using the Arduino IDE:

#include <SPI.h>
#include <WiFi.h>

char ssid[] = "yourNetwork";
char pass[] = "secretPassword";
int keyIndex = 0;
int pin = 2;
volatile int pulse;
float volume = 0;
float flow_rate = 0;
const int pulses_per_litre = 450;
int status = WL_IDLE_STATUS;
WiFiServer server(80);

void setup() {
  Serial.begin(9600);
  while (!Serial) {
    ;
  }
  // set up the flow sensor input and interrupt, as in the earlier sketches
  pinMode(pin, INPUT);
  attachInterrupt(0, count_pulse, RISING);
  if (WiFi.status() == WL_NO_SHIELD) {
    Serial.println("WiFi shield not present");
    while (true);
  }
  // attempt to connect to Wifi network:
  while (status != WL_CONNECTED) {
    Serial.print("Attempting to connect to SSID: ");
    Serial.println(ssid);
    status = WiFi.begin(ssid, pass);
    delay(10000);
  }
  server.begin();
}

void loop() {
  WiFiClient client = server.available();
  if (client) {
    Serial.println("new client");
    boolean currentLineIsBlank = true;
    while (client.connected()) {
      if (client.available()) {
        char c = client.read();
        Serial.write(c);
        if (c == '\n' && currentLineIsBlank) {
          client.println("HTTP/1.1 200 OK");
          client.println("Content-Type: text/html");
          client.println("Connection: close");
          client.println("Refresh: 5");
          client.println();
          client.println("<!DOCTYPE HTML>");
          client.println("<html>");
          if (WiFi.status() != WL_CONNECTED) {
            client.println("Couldn't get a wifi connection");
            while (true);
          } else {
            // print meter readings on web page
            pulse = 0;
            interrupts();
            delay(1000);
            noInterrupts();
            client.print("Pulses per second: ");
            client.println(pulse);
            flow_rate = (float) pulse / pulses_per_litre;
            client.print("Water flow rate: ");
            client.print(flow_rate);
            client.println(" litres per second");
            volume = volume + flow_rate;
            client.print("Volume: ");
            client.print(volume);
            client.println(" litres");
            // end
          }
          client.println("</html>");
          break;
        }
        if (c == '\n') {
          currentLineIsBlank = true;
        } else if (c != '\r') {
          currentLineIsBlank = false;
        }
      }
    }
    delay(1);
    client.stop();
    Serial.println("client disconnected");
  }
}

void count_pulse() {
  pulse++;
}

Open the water valve and make sure the water flows through the meter. Click on the RESET button on the Wi-Fi shield. In your web browser, type your Wi-Fi shield's IP address and press Enter. You can see your water flow sensor's flow rate and total volume on the web page. The page refreshes every 5 seconds to display the updated information.

Summary

In this article, you gained hands-on experience and knowledge about water flow sensors and counting pulses while calculating and displaying them. Finally, you made a simple web server to allow users to read the water meter through the Internet. You can apply this to any type of liquid, but make sure to select the correct flow sensor, because some liquids react chemically with the material the sensor is made of. You can search on Google to find which flow sensors support your preferred liquid type.

Resources for Article: Further resources on this subject: Getting Started with Arduino [article] Arduino Development [article] Prototyping Arduino Projects using Python [article]

Putting the Function in Functional Programming

Packt
22 Sep 2015
27 min read
 In this article by Richard Reese, the author of the book Learning Java Functional Programming, we will cover lambda expressions in more depth. We will explain how they satisfy the mathematical definition of a function and how we can use them in supporting Java applications. In this article, you will cover several topics, including: Lambda expression syntax and type inference High-order, pure, and first-class functions Referential transparency Closure and currying (For more resources related to this topic, see here.) Our discussions cover high-order functions, first-class functions, and pure functions. Also examined are the concepts of referential transparency, closure, and currying. Examples of nonfunctional approaches are followed by their functional equivalent where practical. Lambda expressions usage A lambda expression can be used in many different situations, including: Assigned to a variable Passed as a parameter Returned from a function or method We will demonstrate how each of these are accomplished and then elaborate on the use of functional interfaces. Consider the forEach method supported by several classes and interfaces, including the List interface. In the following example, a List interface is created and the forEach method is executed against it. The forEach method expects an object that implements the Consumer interface. This will display the three cartoon character names: List<String> list = Arrays.asList("Huey", "Duey", "Luey"); list.forEach(/* Implementation of Consumer Interface*/); More specifically, the forEach method expects an object that implements the accept method, the interface's single abstract method. This method's signature is as follows: void accept(T t) The interface also has a default method, andThen, which is passed and returns an instance of the Consumer interface. We can use any of three different approaches for implementing the functionality of the accept method: Use an instance of a class that implements the Consumer interface Use an anonymous inner class Use a lambda expression We will demonstrate each method so that it will be clear how each technique works and why lambda expressions will often result in a better solution. We will start with the declaration of a class that implements the Consumer interface as shown next: public class ConsumerImpl<T> implements Consumer<T> { @Override public void accept(T t) { System.out.println(t); } } We can then use it as the argument of the forEach method: list.forEach(new ConsumerImpl<>()); Using an explicit class allows us to reuse the class or its objects whenever an instance is needed. The second approach uses an anonymous inner function as shown here: list.forEach(new Consumer<String>() { @Override public void accept(String t) { System.out.println(t); } }); This was a fairly common approach used prior to Java 8. It avoids having to explicitly declare and instantiate a class, which implements the Consumer interface. A simple statement that uses a lambda expression is shown next: list.forEach(t->System.out.println(t)); The lambda expression accepts a single argument and returns void. This matches the signature of the Consumer interface. Java 8 is able to automatically perform this matching process. This latter technique obviously uses less code, making it more succinct than the other solutions. 
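To make the idea of a functional interface concrete before moving on, here is a minimal sketch that defines a custom functional interface and uses two of the standard interfaces from the java.util.function package, Predicate and Supplier. The StringTransformer interface and the sample values are illustrative assumptions introduced for this example; they are not part of the original text.

import java.util.function.Predicate;
import java.util.function.Supplier;

public class FunctionalInterfaceSketch {

    // A user-defined functional interface: exactly one abstract method.
    @FunctionalInterface
    interface StringTransformer {
        String transform(String input);
    }

    public static void main(String[] args) {
        // A lambda expression supplies the implementation of transform.
        StringTransformer shout = s -> s.toUpperCase() + "!";
        System.out.println(shout.transform("Huey"));     // HUEY!

        // Standard functional interfaces from java.util.function.
        Predicate<String> isShortName = s -> s.length() <= 4;
        Supplier<String> defaultName = () -> "Duey";

        System.out.println(isShortName.test("Luey"));    // true
        System.out.println(defaultName.get());           // Duey
    }
}

Any interface with a single abstract method can be the target of a lambda expression; the @FunctionalInterface annotation simply asks the compiler to enforce that property.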
If we desire to reuse this lambda expression elsewhere, we could have assigned it to a variable first and then used it in the forEach method as shown here: Consumer consumer = t->System.out.println(t); list.forEach(consumer); Anywhere a functional interface is expected, we can use a lambda expression. Thus, the availability of a large number of functional interfaces will enable the frequent use of lambda expressions and programs that exhibit a functional style of programming. While developers can define their own functional interfaces, which we will do shortly, Java 8 has added a large number of functional interfaces designed to support common operations. Most of these are found in the java.util.function package. We will use several of these throughout the book and will elaborate on their purpose, definition, and use as we encounter them. Functional programming concepts in Java In this section, we will examine the underlying concept of functions and how they are implemented in Java 8. This includes high-order, first-class, and pure functions. A first-class function is a function that can be used where other first-class entities can be used. These types of entities include primitive data types and objects. Typically, they can be passed to and returned from functions and methods. In addition, they can be assigned to variables. A high-order function either takes another function as an argument or returns a function as the return value. Languages that support this type of function are more flexible. They allow a more natural flow and composition of operations. Pure functions have no side effects. The function does not modify nonlocal variables and does not perform I/O. High-order functions We will demonstrate the creation and use of the high-order function using an imperative and a functional approach to convert letters of a string to lowercase. The next code sequence reuses the list variable, developed in the previous section, to illustrate the imperative approach. The for-each statement iterates through each element of the list using the String class' toLowerCase method to perform the conversion: for(String element : list) { System.out.println(element.toLowerCase()); } The output will be each name in the list displayed in lowercase, each on a separate line. To demonstrate the use of a high-order function, we will create a function called, processString, which is passed a function as the first parameter and then apply this function to the second parameter as shown next:   public String processString(Function<String,String> operation,String target) { return operation.apply(target); } The function passed will be an instance of the java.util.function package's Function interface. This interface possesses an accept method that is passed one data type and returns a potentially different data type. With our definition, it is passed String and returns String. In the next code sequence, a lambda expression using the toLowerCase method is passed to the processString method. As you may remember, the forEach method accepts a lambda expression, which matches the Consumer interface's accept method. The lambda expression passed to the processString method matches the Function interface's accept method. The output is the same as produced by the equivalent imperative implementation. 
list.forEach(s ->System.out.println( processString(t->t.toLowerCase(), s))); We could have also used a method reference as show next: list.forEach(s ->System.out.println( processString(String::toLowerCase, s))); The use of the high-order function may initially seem to be a bit convoluted. We needed to create the processString function and then pass either a lambda expression or a method reference to perform the conversion. While this is true, the benefit of this approach is flexibility. If we needed to perform a different string operation other than converting the target string to lowercase, we will need to essentially duplicate the imperative code and replace toLowerCase with a new method such as toUpperCase. However, with the functional approach, all we need to do is replace the method used as shown next: list.forEach(s ->System.out.println(processString(t- >t.toUpperCase(), s))); This is simpler and more flexible. A lambda expression can also be passed to another lambda expression. Let's consider another example where high-order functions can be useful. Suppose we need to convert a list of one type into a list of a different type. We might have a list of strings that we wish to convert to their integer equivalents. We might want to perform a simple conversion or perhaps we might want to double the integer value. We will use the following lists:   List<String> numberString = Arrays.asList("12", "34", "82"); List<Integer> numbers = new ArrayList<>(); List<Integer> doubleNumbers = new ArrayList<>(); The following code sequence uses an iterative approach to convert the string list into an integer list:   for (String num : numberString) { numbers.add(Integer.parseInt(num)); } The next sequence uses a stream to perform the same conversion: numbers.clear(); numberString .stream() .forEach(s -> numbers.add(Integer.parseInt(s))); There is not a lot of difference between these two approaches, at least from a number of lines perspective. However, the iterative solution will only work for the two lists: numberString and numbers. To avoid this, we could have written the conversion routine as a method. We could also use lambda expression to perform the same conversion. The following two lambda expression will convert a string list to an integer list and from a string list to an integer list where the integer has been doubled:   Function<List<String>, List<Integer>> singleFunction = s -> { s.stream() .forEach(t -> numbers.add(Integer.parseInt(t))); return numbers; }; Function<List<String>, List<Integer>> doubleFunction = s -> { s.stream() .forEach(t -> doubleNumbers.add( Integer.parseInt(t) * 2)); return doubleNumbers; }; We can apply these two functions as shown here: numbers.clear(); System.out.println(singleFunction.apply(numberString)); System.out.println(doubleFunction.apply(numberString)); The output follows: [12, 34, 82] [24, 68, 164] However, the real power comes from passing these functions to other functions. In the next code sequence, a stream is created consisting of a single element, a list. This list contains a single element, the numberString list. The map method expects a Function interface instance. Here, we use the doubleFunction function. The list of strings is converted to integers and then doubled. The resulting list is displayed: Arrays.asList(numberString).stream() .map(doubleFunction) .forEach(s -> System.out.println(s)); The output follows: [24, 68, 164] We passed a function to a method. We could easily pass other functions to achieve different outputs. 
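As a quick illustration of that flexibility, the following self-contained sketch swaps a different conversion function into the same map-based pipeline. The plainFunction and tripleFunction names are our own illustrative assumptions; only the technique of passing the function as an argument comes from the text above.

import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;
import java.util.function.Function;

public class PassingFunctionsSketch {
    public static void main(String[] args) {
        List<String> numberString = Arrays.asList("12", "34", "82");

        // Two interchangeable conversions expressed as Function instances.
        Function<List<String>, List<Integer>> plainFunction = s -> {
            List<Integer> result = new ArrayList<>();
            s.forEach(t -> result.add(Integer.parseInt(t)));
            return result;
        };
        Function<List<String>, List<Integer>> tripleFunction = s -> {
            List<Integer> result = new ArrayList<>();
            s.forEach(t -> result.add(Integer.parseInt(t) * 3));
            return result;
        };

        // The same pipeline accepts either function; only the argument changes.
        Arrays.asList(numberString).stream()
                .map(plainFunction)
                .forEach(System.out::println);   // [12, 34, 82]

        Arrays.asList(numberString).stream()
                .map(tripleFunction)
                .forEach(System.out::println);   // [36, 102, 246]
    }
}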
Returning a function When a value is returned from a function or method, it is intended to be used elsewhere in the application. Sometimes, the return value is used to determine how subsequent computations should proceed. To illustrate how returning a function can be useful, let's consider a problem where we need to calculate the pay of an employee based on the numbers of hours worked, the pay rate, and the employee type. To facilitate the example, start with an enumeration representing the employee type: enum EmployeeType {Hourly, Salary, Sales}; The next method illustrates one way of calculating the pay using an imperative approach. A more complex set of computation could be used, but these will suffice for our needs: public float calculatePay(int hoursWorked, float payRate, EmployeeType type) { switch (type) { case Hourly: return hoursWorked * payRate; case Salary: return 40 * payRate; case Sales: return 500.0f + 0.15f * payRate; default: return 0.0f; } } If we assume a 7 day workweek, then the next code sequence shows an imperative way of calculating the total number of hours worked: int hoursWorked[] = {8, 12, 8, 6, 6, 5, 6, 0}; int totalHoursWorked = 0; for (int hour : hoursWorked) { totalHoursWorked += hour; } Alternatively, we could have used a stream to perform the same operation as shown next. The Arrays class's stream method accepts an array of integers and converts it into a Stream object. The sum method is applied fluently, returning the number of hours worked: totalHoursWorked = Arrays.stream(hoursWorked).sum(); The latter approach is simpler and easier to read. To calculate and display the pay, we can use the following statement which, when executed, will return 803.25.    System.out.println( calculatePay(totalHoursWorked, 15.75f, EmployeeType.Hourly)); The functional approach is shown next. A calculatePayFunction method is created that is passed by the employee type and returns a lambda expression. This will compute the pay based on the number of hours worked and the pay rate. This lambda expression is based on the BiFunction interface. It has an accept method that takes two arguments and returns a value. Each of the parameters and the return type can be of different data types. It is similar to the Function interface's accept method, except that it is passed two arguments instead of one. The calculatePayFunction method is shown next. It is similar to the imperative's calculatePay method, but returns a lambda expression: public BiFunction<Integer, Float, Float> calculatePayFunction( EmployeeType type) { switch (type) { case Hourly: return (hours, payRate) -> hours * payRate; case Salary: return (hours, payRate) -> 40 * payRate; case Sales: return (hours, payRate) -> 500f + 0.15f * payRate; default: return null; } } It can be invoked as shown next: System.out.println( calculatePayFunction(EmployeeType.Hourly) .apply(totalHoursWorked, 15.75f)); When executed, it will produce the same output as the imperative solution. The advantage of this approach is that the lambda expression can be passed around and executed in different contexts. First-class functions To demonstrate first-class functions, we use lambda expressions. Assigning a lambda expression, or method reference, to a variable can be done in Java 8. Simply declare a variable of the appropriate function type and use the assignment operator to do the assignment. 
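To illustrate what "passed around and executed in different contexts" can look like, here is a small sketch that caches the returned lambdas in a map keyed by employee type and applies them on demand. The payRules map and the sample figures are our own additions; the enum and the pay rules themselves come from the text above.

import java.util.EnumMap;
import java.util.Map;
import java.util.function.BiFunction;

public class PayFunctionSketch {

    enum EmployeeType { Hourly, Salary, Sales }

    // A stand-in for the calculatePayFunction method described above.
    static BiFunction<Integer, Float, Float> calculatePayFunction(EmployeeType type) {
        switch (type) {
            case Hourly: return (hours, payRate) -> hours * payRate;
            case Salary: return (hours, payRate) -> 40 * payRate;
            case Sales:  return (hours, payRate) -> 500f + 0.15f * payRate;
            default:     return null;
        }
    }

    public static void main(String[] args) {
        // Build the pay rules once, then reuse them in different contexts.
        Map<EmployeeType, BiFunction<Integer, Float, Float>> payRules =
                new EnumMap<>(EmployeeType.class);
        for (EmployeeType type : EmployeeType.values()) {
            payRules.put(type, calculatePayFunction(type));
        }

        System.out.println(payRules.get(EmployeeType.Hourly).apply(51, 15.75f));  // 803.25
        System.out.println(payRules.get(EmployeeType.Salary).apply(51, 25.35f));  // 1014.0
        System.out.println(payRules.get(EmployeeType.Sales).apply(51, 8.75f));    // 501.3125
    }
}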
In the following statement, a reference variable to the previously defined BiFunction-based lambda expression is declared along with the number of hours worked: BiFunction<Integer, Float, Float> calculateFunction; int hoursWorked = 51; We can easily assign a lambda expression to this variable. Here, we use the lambda expression returned from the calculatePayFunction method: calculateFunction = calculatePayFunction(EmployeeType.Hourly); The reference variable can then be used as shown in this statement: System.out.println( calculateFunction.apply(hoursWorked, 15.75f)); It produces the same output as before. One shortcoming of the way an hourly employee's pay is computed is that overtime pay is not handled. We can add this functionality to the calculatePayFunction method. However, to further illustrate the use of reference variables, we will assign one of two lambda expressions to the calculateFunction variable based on the number of hours worked as shown here: if(hoursWorked<=40) { calculateFunction = (hours, payRate) -> 40 * payRate; } else { calculateFunction = (hours, payRate) -> hours*payRate + (hours-40)*1.5f*payRate; } When the expression is evaluated as shown next, it returns a value of 1063.125: System.out.println( calculateFunction.apply(hoursWorked, 15.75f)); Let's rework the example developed in the High-order functions section, where we used lambda expressions to display the lowercase values of an array of string. Part of the code has been duplicated here for your convenience: list.forEach(s ->System.out.println( processString(t->t.toLowerCase(), s))); Instead, we will use variables to hold the lambda expressions for the Consumer and Function interfaces as shown here: Consumer<String> consumer; consumer = s -> System.out.println(toLowerFunction.apply(s)); Function<String,String> toLowerFunction; toLowerFunction= t -> t.toLowerCase(); The declaration and initialization could have been done with one statement for each variable. To display all of the names, we simply use the consumer variable as the argument of the forEach method: list.forEach(consumer); This will display the names as before. However, this is much easier to read and follow. The ability to use lambda expressions as first-class entities makes this possible. We can also assign method references to variables. Here, we replaced the initialization of the function variable with a method reference: function = String::toLowerCase; The output of the code will not change. The pure function The pure function is a function that has no side effects. By side effects, we mean that the function does not modify nonlocal variables and does not perform I/O. A method that squares a number is an example of a pure method with no side effects as shown here: public class SimpleMath { public static int square(int x) { return x * x; } } Its use is shown here and will display the result, 25: System.out.println(SimpleMath.square(5)); An equivalent lambda expression is shown here: Function<Integer,Integer> squareFunction = x -> x*x; System.out.println(squareFunction.apply(5)); The advantages of pure functions include the following: They can be invoked repeatedly producing the same results There are no dependencies between functions that impact the order they can be executed They support lazy evaluation They support referential transparency We will examine each of these advantages in more depth. Support repeated execution Using the same arguments will produce the same results. The previous square operation is an example of this. 
Since the operation does not depend on other external values, re-executing the code with the same arguments will return the same results. This supports the optimization technique called memoization. This is the process of caching the results of an expensive execution sequence and retrieving them when they are needed again. An imperative technique for implementing this approach involves using a hash map to store values that have already been computed and retrieving them when they are used again. Let's demonstrate this using the square function. The technique should be used for functions that are compute intensive; however, using the square function will allow us to focus on the technique. Declare a cache to hold the previously computed values as shown here:

private final Map<Integer, Integer> memoizationCache = new HashMap<>();

We need to declare two methods. The first method, called doComputeExpensiveSquare, does the actual computation, as shown here. A display statement is included only to verify the correct operation of the technique; otherwise, it is not needed. The method should only be called once for each unique value passed to it.

private Integer doComputeExpensiveSquare(Integer input) {
    System.out.println("Computing square");
    return input * input;
}

A second method is used to detect when a value is used a subsequent time and return the previously computed value instead of calling the square method. This is shown next. The containsKey method checks to see if the input value has already been used. If it hasn't, then the doComputeExpensiveSquare method is called. Otherwise, the cached value is returned.

public Integer computeExpensiveSquare(Integer input) {
    if (!memoizationCache.containsKey(input)) {
        memoizationCache.put(input, doComputeExpensiveSquare(input));
    }
    return memoizationCache.get(input);
}

The use of the technique is demonstrated with the next code sequence:

System.out.println(computeExpensiveSquare(4));
System.out.println(computeExpensiveSquare(4));

The output follows, which demonstrates that the square method was only called once:

Computing square
16
16

The problem with this approach is the declaration of a hash map. This object may be inadvertently used by other elements of the program, and a new hash map has to be declared explicitly for each memoization usage. In addition, it does not offer flexibility in handling multiple memoized functions. A better approach is available in Java 8. This new approach wraps the hash map in a class and allows easier creation and use of memoization. Let's examine a memoization class as adapted from http://java.dzone.com/articles/java-8-automatic-memoization. It is called Memoizer. It uses ConcurrentHashMap to cache values and supports concurrent access from multiple threads. Two methods are defined. The doMemoize method returns a lambda expression that does all of the work. The memoize method creates an instance of the Memoizer class and passes the lambda expression implementing the expensive operation to the doMemoize method. The doMemoize method uses the ConcurrentHashMap class's computeIfAbsent method to determine if the computation has already been performed.
If the value has not been computed, it executes the Function interface's apply method against the function argument: public class Memoizer<T, U> { private final Map<T, U> memoizationCache = new ConcurrentHashMap<>(); private Function<T, U> doMemoize(final Function<T, U> function) { return input -> memoizationCache.computeIfAbsent(input, function::apply); } public static <T, U> Function<T, U> memoize(final Function<T, U> function) { return new Memoizer<T, U>().doMemoize(function); } } A lambda expression is created for the square operation: Function<Integer, Integer> squareFunction = x -> { System.out.println("In function"); return x * x; }; The memoizationFunction variable will hold the lambda expression that is subsequently used to invoke the square operations: Function<Integer, Integer> memoizationFunction = Memoizer.memoize(squareFunction); System.out.println(memoizationFunction.apply(2)); System.out.println(memoizationFunction.apply(2)); System.out.println(memoizationFunction.apply(2)); The output of this sequence follows where the square operation is performed only once: In function 4 4 4 We can easily use the Memoizer class for a different function as shown here: Function<Double, Double> memoizationFunction2 = Memoizer.memoize(x -> x * x); System.out.println(memoizationFunction2.apply(4.0)); This will square the number as expected. Functions that are recursive present additional problems. Eliminating dependencies between functions When dependencies between functions are eliminated, then more flexibility in the order of execution is possible. Consider these Function and BiFunction declarations, which define simple expressions for computing hourly, salaried, and sales type pay, respectively: BiFunction<Integer, Double, Double> computeHourly = (hours, rate) -> hours * rate; Function<Double, Double> computeSalary = rate -> rate * 40.0; BiFunction<Double, Double, Double> computeSales = (rate, commission) -> rate * 40.0 + commission; These functions can be executed, and their results are assigned to variables as shown here: double hourlyPay = computeHourly.apply(35, 12.75); double salaryPay = computeSalary.apply(25.35); double salesPay = computeSales.apply(8.75, 2500.0); These are pure functions as they do not use external values to perform their computations. In the following code sequence, the sum of all three pays are totaled and displayed: System.out.println(computeHourly.apply(35, 12.75) + computeSalary.apply(25.35) + computeSales.apply(8.75, 2500.0)); We can easily reorder their execution sequence or even execute them concurrently, and the results will be the same. There are no dependencies between the functions that restrict them to a specific execution ordering. Supporting lazy evaluation Continuing with this example, let's add an additional sequence, which computes the total pay based on the type of employee. The variable, hourly, is set to true if we want to know the total of the hourly employee pay type. It will be set to false if we are interested in salary and sales-type employees: double total = 0.0; boolean hourly = ...; if(hourly) { total = hourlyPay; } else { total = salaryPay + salesPay; } System.out.println(total); When this code sequence is executed with an hourly value of false, there is no need to execute the computeHourly function since it is not used. The runtime system could conceivably choose not to execute any of the lambda expressions until it knows which one is actually used. 
While all three functions are actually executed in this example, it illustrates the potential for lazy evaluation. Functions are not executed until needed. Referential transparency Referential transparency is the idea that a given expression is made up of subexpressions. The value of the subexpression is important. We are not concerned about how it is written or other details. We can replace the subexpression with its value and be perfectly happy. With regards to pure functions, they are said to be referentially transparent since they have same effect. In the next declaration, we declare a pure function called pureFunction: Function<Double,Double> pureFunction = t -> 3*t; It supports referential transparency. Consider if we declare a variable as shown here: int num = 5; Later, in a method we can assign a different value to the variable: num = 6; If we define a lambda expression that uses this variable, the function is no longer pure: Function<Double,Double> impureFunction = t -> 3*t+num; The function no longer supports referential transparency. Closure in Java The use of external variables in a lambda expression raises several interesting questions. One of these involves the concept of closures. A closure is a function that uses the context within which it was defined. By context, we mean the variables within its scope. This sometimes is referred to as variable capture. We will use a class called ClosureExample to illustrate closures in Java. The class possesses a getStringOperation method that returns a Function lambda expression. This expression takes a string argument and returns an augmented version of it. The argument is converted to lowercase, and then its length is appended to it twice. In the process, both an instance variable and a local variable are used. In the implementation that follows, the instance variable and two local variables are used. One local variable is a member of the getStringOperation method and the second one is a member of the lambda expression. They are used to hold the length of the target string and for a separator string: public class ClosureExample { int instanceLength; public Function<String,String> getStringOperation() { final String seperator = ":"; return target -> { int localLength = target.length(); instanceLength = target.length(); return target.toLowerCase() + seperator + instanceLength + seperator + localLength; }; } } The lambda expression is created and used as shown here: ClosureExample ce = new ClosureExample(); final Function<String,String> function = ce.getStringOperation(); System.out.println(function.apply("Closure")); Its output follows: closure:7:7 Variables used by the lambda expression are restricted in their use. Local variables or parameters cannot be redefined or modified. These variables need to be effectively final. That is, they must be declared as final or not be modified. If the local variable and separator, had not been declared as final, the program would still be executed properly. 
However, if we try to modify a captured local variable, the following syntax error is generated, indicating that such a variable is not permitted within a lambda expression:

local variables referenced from a lambda expression must be final or effectively final

If we add the following statements to the previous example and remove the final keyword from the function declaration, we get the same syntax error message:

function = String::toLowerCase;
Consumer<String> consumer = s -> System.out.println(function.apply(s));

This is because the function variable is used in the Consumer lambda expression. It also needs to be effectively final, but we tried to assign a second value to it: the method reference for the toLowerCase method.

Closure refers to functions that enclose variables external to the function. This permits the function to be passed around and used in different contexts.

Currying

Some functions have multiple arguments. It is possible to evaluate these arguments one by one, a process called currying, which normally involves creating new functions, each of which has one fewer argument than the previous one. The advantage of this process is the ability to subdivide the execution sequence and work with intermediate results, which allows the function to be used in a more flexible manner. Consider a simple function such as:

f(x, y) = x + y

The evaluation of f(2, 3) will produce 5. We could use the following, where the 2 is "hardcoded":

f(2, y) = 2 + y

If we define:

g(y) = 2 + y

Then the following are equivalent:

f(2, y) = g(y) = 2 + y

Substituting 3 for y, we get:

f(2, 3) = g(3) = 2 + 3 = 5

This is the process of currying. An intermediate function, g(y), was introduced, which we can pass around. Let's see how something similar can be done in Java 8. Start with a BiFunction designed to concatenate strings. A BiFunction takes two parameters and returns a single value:

BiFunction<String, String, String> biFunctionConcat = (a, b) -> a + b;

The use of the function is demonstrated with the following statement:

System.out.println(biFunctionConcat.apply("Cat", "Dog"));

The output is the CatDog string. Next, let's define a reference variable called curryConcat. This variable is of the Function interface type, which is based on two data types. The first one is String and represents the value passed to the Function interface's apply method. The second data type represents the apply method's return type; here, that return type is itself a Function that takes a string and returns a string. In other words, the curryConcat function is passed a string and returns a function that takes a string and returns a string:

Function<String, Function<String, String>> curryConcat;

We then assign an appropriate lambda expression to the variable:

curryConcat = (a) -> (b) -> biFunctionConcat.apply(a, b);

This may seem a bit confusing initially, so let's take it one piece at a time. First of all, the lambda expression needs to return a function. The lambda expression assigned to curryConcat follows, where the ellipsis represents the body of the function. The parameter, a, is passed to the body:

(a) -> ...;

The actual body follows:

(b) -> biFunctionConcat.apply(a, b);

This is the lambda expression, or function, that is returned. It uses two values, a and b, but only b is its parameter; the value of a is captured when the function is created. The function can be evaluated later, when the value for b is supplied.
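This pattern can also be captured generically. The following is a minimal sketch (CurryUtil and curry are hypothetical names, not from the original text) of a helper that curries any BiFunction into a chain of one-argument functions:

import java.util.function.BiFunction;
import java.util.function.Function;

public class CurryUtil {
    // Converts a two-argument function into a function that returns a function.
    public static <A, B, R> Function<A, Function<B, R>> curry(BiFunction<A, B, R> f) {
        return a -> b -> f.apply(a, b);
    }

    public static void main(String[] args) {
        Function<String, Function<String, String>> curried = curry((a, b) -> a + b);
        System.out.println(curried.apply("Cat").apply("Dog"));   // prints CatDog
    }
}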
Returning to curryConcat: the function it returns is an instance of the Function interface that takes a single string and returns a string. To illustrate this, define an intermediate variable to hold the returned function:

Function<String, String> intermediateFunction;

We can assign the result of executing the curryConcat lambda expression using its apply method, as shown here, where a value of Cat is specified for the a parameter:

intermediateFunction = curryConcat.apply("Cat");

The next two statements will display the returned function:

System.out.println(intermediateFunction);
System.out.println(curryConcat.apply("Cat"));

The output will look something similar to the following:

packt.Chapter2$$Lambda$3/798154996@5305068a
packt.Chapter2$$Lambda$3/798154996@1f32e575

Note that these are the values representing the functions, as returned by the implicit toString method. They are different, indicating that two distinct function instances were returned, each of which can be passed around. Now that we have confirmed that a function has been returned, we can supply a value for the b parameter, as shown here:

System.out.println(intermediateFunction.apply("Dog"));

The output will be CatDog. This illustrates how we can split a two-parameter function into two distinct functions, which can be evaluated when desired. They can also be used together, as shown in these statements:

System.out.println(curryConcat.apply("Cat").apply("Dog"));
System.out.println(curryConcat.apply("Flying ").apply("Monkeys"));

The output of these statements is as follows:

CatDog
Flying Monkeys

We can define a similar curried operation for doubles, this time multiplying its arguments:

Function<Double, Function<Double, Double>> curryMultiply = (a) -> (b) -> a * b;
System.out.println(curryMultiply.apply(3.0).apply(4.0));

This will display 12.0 as the returned value. Currying is a valuable approach when the arguments of a function need to be evaluated at different times.

Summary

In this article, we investigated the use of lambda expressions and how they support the functional style of programming in Java 8. Where possible, we used examples to contrast the use of classes and methods against the use of functions. This frequently led to simpler and more maintainable functional implementations.

We illustrated how lambda expressions support the functional concepts of higher-order, first-class, and pure functions. Examples were used to help clarify the concept of referential transparency. The concepts of closure and currying are found in most functional programming languages, and we provided examples of how they are supported in Java 8.

Lambda expressions have a specific syntax, which we examined in more detail, along with several variations in the form an expression can take. Lambda expressions are based on functional interfaces using type inference, so it is important to understand how to create functional interfaces and to know which standard functional interfaces are available in Java 8.