
How-To Tutorials


Building microservices from a monolith Java EE app [Tutorial]

Aaron Lazar
03 Aug 2018
11 min read
Microservices are one of the top buzzwords these days. It's easy to understand why: in a growing software industry where the amount of services, data, and users increases crazily, we really need a way to build and deliver faster, decoupled, and scalable solutions. In this tutorial, we'll help you get started with microservices or go deeper into your ongoing project. This article is an extract from the book Java EE 8 Cookbook, authored by Elder Moraes. One common question that I have heard dozens of times is, "how do I break down my monolith into microservices?", or, "how do I migrate from a monolith approach to microservices?" Well, that's what this recipe is all about. Getting ready with monolith and microservice projects For both monolith and microservice projects, we will use the same dependency: <dependency> <groupId>javax</groupId> <artifactId>javaee-api</artifactId> <version>8.0</version> <scope>provided</scope> </dependency> Working with entities and beans First, we need the entities that will represent the data kept by the application. Here is the User entity: @Entity public class User implements Serializable { private static final long serialVersionUID = 1L; @Id @GeneratedValue(strategy = GenerationType.AUTO) private Long id; @Column private String name; @Column private String email; public User(){ } public User(String name, String email) { this.name = name; this.email = email; } public Long getId() { return id; } public void setId(Long id) { this.id = id; } public String getName() { return name; } public void setName(String name) { this.name = name; } public String getEmail() { return email; } public void setEmail(String email) { this.email = email; } } Here is the UserAddress entity: @Entity public class UserAddress implements Serializable { private static final long serialVersionUID = 1L; @Id @GeneratedValue(strategy = GenerationType.AUTO) private Long id; @Column @ManyToOne private User user; @Column private String street; @Column private String number; @Column private String city; @Column private String zip; public UserAddress(){ } public UserAddress(User user, String street, String number, String city, String zip) { this.user = user; this.street = street; this.number = number; this.city = city; this.zip = zip; } public Long getId() { return id; } public void setId(Long id) { this.id = id; } public User getUser() { return user; } public void setUser(User user) { this.user = user; } public String getStreet() { return street; } public void setStreet(String street) { this.street = street; } public String getNumber() { return number; } public void setNumber(String number) { this.number = number; } public String getCity() { return city; } public void setCity(String city) { this.city = city; } public String getZip() { return zip; } public void setZip(String zip) { this.zip = zip; } } Now we define one bean to deal with the transaction over each entity. 
Here is the UserBean class: @Stateless public class UserBean { @PersistenceContext private EntityManager em; public void add(User user) { em.persist(user); } public void remove(User user) { em.remove(user); } public void update(User user) { em.merge(user); } public User findById(Long id) { return em.find(User.class, id); } public List<User> get() { CriteriaBuilder cb = em.getCriteriaBuilder(); CriteriaQuery<User> cq = cb.createQuery(User.class); Root<User> pet = cq.from(User.class); cq.select(pet); TypedQuery<User> q = em.createQuery(cq); return q.getResultList(); } } Here is the UserAddressBean class: @Stateless public class UserAddressBean { @PersistenceContext private EntityManager em; public void add(UserAddress address){ em.persist(address); } public void remove(UserAddress address){ em.remove(address); } public void update(UserAddress address){ em.merge(address); } public UserAddress findById(Long id){ return em.find(UserAddress.class, id); } public List<UserAddress> get() { CriteriaBuilder cb = em.getCriteriaBuilder(); CriteriaQuery<UserAddress> cq = cb.createQuery(UserAddress.class); Root<UserAddress> pet = cq.from(UserAddress.class); cq.select(pet); TypedQuery<UserAddress> q = em.createQuery(cq); return q.getResultList(); } } Finally, we build two services to perform the communication between the client and the beans. Here is the UserService class: @Path("userService") public class UserService { @EJB private UserBean userBean; @GET @Path("findById/{id}") @Consumes(MediaType.APPLICATION_JSON) @Produces(MediaType.APPLICATION_JSON) public Response findById(@PathParam("id") Long id){ return Response.ok(userBean.findById(id)).build(); } @GET @Path("get") @Consumes(MediaType.APPLICATION_JSON) @Produces(MediaType.APPLICATION_JSON) public Response get(){ return Response.ok(userBean.get()).build(); } @POST @Path("add") @Consumes(MediaType.APPLICATION_JSON) @Produces(MediaType.APPLICATION_JSON) public Response add(User user){ userBean.add(user); return Response.accepted().build(); } @DELETE @Path("remove/{id}") @Consumes(MediaType.APPLICATION_JSON) @Produces(MediaType.APPLICATION_JSON) public Response remove(@PathParam("id") Long id){ userBean.remove(userBean.findById(id)); return Response.accepted().build(); } } Here is the UserAddressService class: @Path("userAddressService") public class UserAddressService { @EJB private UserAddressBean userAddressBean; @GET @Path("findById/{id}") @Consumes(MediaType.APPLICATION_JSON) @Produces(MediaType.APPLICATION_JSON) public Response findById(@PathParam("id") Long id){ return Response.ok(userAddressBean.findById(id)).build(); } @GET @Path("get") @Consumes(MediaType.APPLICATION_JSON) @Produces(MediaType.APPLICATION_JSON) public Response get(){ return Response.ok(userAddressBean.get()).build(); } @POST @Path("add") @Consumes(MediaType.APPLICATION_JSON) @Produces(MediaType.APPLICATION_JSON) public Response add(UserAddress address){ userAddressBean.add(address); return Response.accepted().build(); } @DELETE @Path("remove/{id}") @Consumes(MediaType.APPLICATION_JSON) @Produces(MediaType.APPLICATION_JSON) public Response remove(@PathParam("id") Long id){ userAddressBean.remove(userAddressBean.findById(id)); return Response.accepted().build(); } } Now let's break it down! Building microservices from the monolith Our monolith deals with User and UserAddress. So we will break it down into three microservices: A user microservice A user address microservice A gateway microservice A gateway service is an API between the application client and the services. 
Using it allows you to simplify this communication, also giving you the freedom of doing whatever you like with your services without breaking the API contracts (or at least minimizing it). The user microservice The User entity, UserBean, and UserService will remain exactly as they are in the monolith. Only now they will be delivered as a separated unit of deployment. The user address microservice The UserAddress classes will suffer just a single change from the monolith version, but keep their original APIs (that is great from the point of view of the client). Here is the UserAddress entity: @Entity public class UserAddress implements Serializable { private static final long serialVersionUID = 1L; @Id @GeneratedValue(strategy = GenerationType.AUTO) private Long id; @Column private Long idUser; @Column private String street; @Column private String number; @Column private String city; @Column private String zip; public UserAddress(){ } public UserAddress(Long user, String street, String number, String city, String zip) { this.idUser = user; this.street = street; this.number = number; this.city = city; this.zip = zip; } public Long getId() { return id; } public void setId(Long id) { this.id = id; } public Long getIdUser() { return idUser; } public void setIdUser(Long user) { this.idUser = user; } public String getStreet() { return street; } public void setStreet(String street) { this.street = street; } public String getNumber() { return number; } public void setNumber(String number) { this.number = number; } public String getCity() { return city; } public void setCity(String city) { this.city = city; } public String getZip() { return zip; } public void setZip(String zip) { this.zip = zip; } } Note that User is no longer a property/field in the UserAddress entity, but only a number (idUser). We will get into more details about it in the following section. 
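Since the address microservice now keeps only idUser, a consumer that needs the full user record has to ask the user microservice for it. The sketch below is a hypothetical illustration of such a cross-service lookup using the standard JAX-RS client API; the base URL mirrors the one the gateway targets in the next section, but the helper class itself (UserLookup) is not part of the book's code:

import javax.ws.rs.client.Client;
import javax.ws.rs.client.ClientBuilder;
import javax.ws.rs.core.MediaType;

public class UserLookup {

    // Deployment URL taken from the gateway configuration shown in the next
    // section; adjust it if your context root differs.
    private static final String USER_SERVICE =
            "http://localhost:8080/ch08-micro_x_mono-micro-user/webresources/userService/";

    public static String findUserById(Long idUser) {
        Client client = ClientBuilder.newClient();
        try {
            // Calls GET userService/findById/{id}, matching the @Path
            // annotations of the UserService class above. The raw JSON
            // payload is returned as a String.
            return client.target(USER_SERVICE)
                    .path("findById/" + idUser)
                    .request(MediaType.APPLICATION_JSON)
                    .get(String.class);
        } finally {
            client.close();
        }
    }
}

Keeping the call behind a small helper like this makes it easy to point it at the gateway later, so the address service never needs to know where the user service is actually deployed.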
The gateway microservice First, we create a class that helps us deal with the responses: public class GatewayResponse { private String response; private String from; public String getResponse() { return response; } public void setResponse(String response) { this.response = response; } public String getFrom() { return from; } public void setFrom(String from) { this.from = from; } } Then, we create our gateway service: @Consumes(MediaType.APPLICATION_JSON) @Path("gatewayResource") @RequestScoped public class GatewayResource { private final String hostURI = "http://localhost:8080/"; private Client client; private WebTarget targetUser; private WebTarget targetAddress; @PostConstruct public void init() { client = ClientBuilder.newClient(); targetUser = client.target(hostURI + "ch08-micro_x_mono-micro-user/"); targetAddress = client.target(hostURI + "ch08-micro_x_mono-micro-address/"); } @PreDestroy public void destroy(){ client.close(); } @GET @Path("getUsers") @Produces(MediaType.APPLICATION_JSON) public Response getUsers() { WebTarget service = targetUser.path("webresources/userService/get"); Response response; try { response = service.request().get(); } catch (ProcessingException e) { return Response.status(408).build(); } GatewayResponse gatewayResponse = new GatewayResponse(); gatewayResponse.setResponse(response.readEntity(String.class)); gatewayResponse.setFrom(targetUser.getUri().toString()); return Response.ok(gatewayResponse).build(); } @POST @Path("addAddress") @Produces(MediaType.APPLICATION_JSON) public Response addAddress(UserAddress address) { WebTarget service = targetAddress.path("webresources/userAddressService/add"); Response response; try { response = service.request().post(Entity.json(address)); } catch (ProcessingException e) { return Response.status(408).build(); } return Response.fromResponse(response).build(); } } As we receive the UserAddress entity in the gateway, we have to have a version of it in the gateway project too. For brevity, we will omit the code, as it is the same as in the UserAddress project. Transformation to microservices The monolith application couldn't be simpler: just a project with two services using two beans to manage two entities. The microservices So we split the monolith into three projects (microservices): the user service, the user address service, and the gateway service. The user service classes remained unchanged after the migration from the monolith version. So there's nothing to comment on. The UserAddress class had to be changed to become a microservice. The first change was made on the entity. Here is the monolith version: @Entity public class UserAddress implements Serializable { ... @Column @ManyToOne private User user; ... public UserAddress(User user, String street, String number, String city, String zip) { this.user = user; this.street = street; this.number = number; this.city = city; this.zip = zip; } ... public User getUser() { return user; } public void setUser(User user) { this.user = user; } ... } Here is the microservice version: @Entity public class UserAddress implements Serializable { ... @Column private Long idUser; ... public UserAddress(Long user, String street, String number, String city, String zip) { this.idUser = user; this.street = street; this.number = number; this.city = city; this.zip = zip; } public Long getIdUser() { return idUser; } public void setIdUser(Long user) { this.idUser = user; } ... 
} Note that in the monolith version, user was an instance of the User entity: private User user; In the microservice version, it became a number: private Long idUser; This happened for two main reasons: In the monolith, we have the two tables in the same database (User and UserAddress), and they both have physical and logical relationships (foreign key). So it makes sense to also keep the relationship between both the objects. The microservice should have its own database, completely independent from the other services. So we choose to keep only the user ID, as it is enough to load the address properly anytime the client needs. This change also resulted in a change in the constructor. Here is the monolith version: public UserAddress(User user, String street, String number, String city, String zip) Here is the microservice version: public UserAddress(Long user, String street, String number, String city, String zip) This could lead to a change of contract with the client regarding the change of the constructor signature. But thanks to the way it was built, it wasn't necessary. Here is the monolith version: public Response add(UserAddress address) Here is the microservice version: public Response add(UserAddress address) Even if the method is changed, it could easily be solved with @Path annotation, or if we really need to change the client, it would be only the method name and not the parameters (which used to be more painful). Finally, we have the gateway service, which is our implementation of the API gateway design pattern. Basically it is the one single point to access the other services. The nice thing about it is that your client doesn't need to care about whether the other services changed the URL, the signature, or even whether they are available. The gateway will take care of them. The bad part is that it is also on a single point of failure. Or, in other words, without the gateway, all services are unreachable. But you can deal with it using a cluster, for example. So now you've built a microservice in Java EE code, that was once a monolith! If you found this tutorial helpful and would like to learn more, head over to this book Java EE 8 Cookbook, authored by Elder Moraes. Oracle announces a new pricing structure for Java Design a RESTful web API with Java [Tutorial] How to convert Java code into Kotlin
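As a closing illustration of the gateway idea, here is a rough, hypothetical client that adds an address through the gateway rather than talking to the address service directly. The endpoint name (gatewayResource/addAddress) comes from the GatewayResource class above; the "gateway/webresources" part of the URL and the sample JSON payload are assumptions, not code from the book:

import javax.ws.rs.client.Client;
import javax.ws.rs.client.ClientBuilder;
import javax.ws.rs.client.Entity;
import javax.ws.rs.core.MediaType;
import javax.ws.rs.core.Response;

public class GatewayAddAddressClient {

    public static void main(String[] args) {
        Client client = ClientBuilder.newClient();
        try {
            // POST to the gateway's addAddress endpoint; the gateway forwards
            // the request to the user address microservice.
            Response response = client
                    .target("http://localhost:8080/gateway/webresources/gatewayResource/addAddress")
                    .request(MediaType.APPLICATION_JSON)
                    .post(Entity.json(
                            "{\"idUser\": 1, \"street\": \"Main St\", \"number\": \"10\","
                            + " \"city\": \"Springfield\", \"zip\": \"12345\"}"));

            System.out.println("HTTP status: " + response.getStatus());
        } finally {
            client.close();
        }
    }
}

If the address service is later moved or renamed, only the gateway's init() method changes; a client like this keeps working untouched.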


Building custom views in vRealize Operations Manager [Tutorial]

Vijin Boricha
02 Aug 2018
11 min read
A view in vRealize Operations manager consists of the view type, subject, and data components: In this tutorial, the view is a trend view, which gets its data from the CPU Demand (%) metric. The subject is the object type, which a view is associated with. A view presents the data of the subject. For example, if the selected object is a host and you select the view named Host CPU Demand (%) Trend View, the result is a trend of the host's CPU demand over a period of time. Today we will walk through the parts needed to define and build custom views in vRealize Operations manager, and learn to apply them to real work situations. This article is an excerpt from Mastering vRealize Operations Manager – Second Edition written by Spas Kaloferov, Scott Norris, Christopher Slater.  Adding Name and description fields Although it might seem obvious, the first thing you need to define when creating a report is the name and description. Before you dismiss this requirement and simply enter My View or Scott's Test, the name and description fields are very useful in defining the scope and target of the view. This is because many views are not really designed to be run/applied on the subject, but rather on one of its parents. This is especially true for lists and distributions, which we will cover below: What are different View types The presentation is the format the view is created in and how the information is displayed. The following types of views are available: List: A list view provides tabular data about specific objects in the environment that correspond to the selected view. Summary: A summary view presents tabular information about the current use of resources in the environment. Trend: A trend view uses historic data to generate trends and forecasts for resource use and availability in the environment. Distribution: A distribution view provides aggregated data about resource distribution in the monitored environment. Pie charts or bar charts are used to present the data. Text: A text view displays text that you provide when you create the view. Image: An image view allows you to insert a static image. List A list is one of the simplest presentation types to use and understand, and at the same time, is one of the most useful. A list provides a tabular layout of values for each data type, with the ability to provide an aggregation row such as sum or average at the end. Lists are the most useful presentation type for a large number of objects, and are able to provide information in the form of metrics and/or properties. Lists are also the most commonly used presentation when showing a collection of objects relative to its parent. An example of a list can be found in the following screenshot: List summary A summary is similar to a list, however the rows are the data types (rather than the objects) and the columns are aggregated values of all children of that subject type. Unlike a list, a summary field is compulsory, as the individual objects are not presented in the view. The summary view type is probably the least commonly used, but it is useful when you simply care about the end result and not the detail of how it was calculated. 
The following example shows Datastore Space Usage from the cluster level; information such as the average used GB across each Datastore can be displayed without the need to show each Datastore present in a list: Although it will be discussed in more detail in the next chapter, the availability of creating simple summary views of child resources has partially removed the need for creating super metrics for simply rolling up data to parent objects. Trend A trend view is a line graph representation of metrics showing historical data that can be used to generate trends and forecasts. Unlike some of the other presentation types, a trend can only show data from that subject type. As such, trend views do not filter up to parent objects. A trend view, in many ways, is similar to a standard metric chart widget with a set of combined preconfigured data types, with one major exception. The trend view has the ability to forecast data into the future for a specified period of time, as well as show the trend line for historical data for any object type. This allows the trend view to provide detailed and useful capacity planning data for any object in the vRealize Operations inventory. When selecting the data types to use in the view, it is recommended that, if multiple data types are used, that they support the same unit of measurement. Although this is not a requirement, views that have different unit types on the same scale are relatively hard to compare. An example of a trend view is shown as follows: Distribution A distribution view is a graphical representation of aggregated data which shows how resources fall within those aggregation groups. This essentially means that vRealize Operations finds a way of graphically representing a particular metric or property for a group of objects. In this example, it is the distribution of VM OS types in a given vSphere cluster. A distribution like a summary is very useful in displaying a small amount of information about a large number of objects. Distribution views can also be shown as bar charts. In this example, the distribution of Virtual Machine Memory Configuration Distribution is shown in a given vSphere cluster. This view can help spot virtual machines configured with a large amount of memory. An important point when creating distribution views is that the subject must be a child of the preview or target object. This means that you can only see a view for the distribution on one of the subject's parent objects. Both visualization methods essentially group the subjects into buckets, with the number of buckets and their values based on the distribution type. The three distribution types are as follows: Dynamic distribution: vRealize Operations automatically determines how many buckets to create based on an interval, a min/max value, or a logarithmic equation. When dealing with varying data values, this is generally the recommended display. Manual distribution: Allows the administrator to manually set the range of each bucket in the display. Discrete distribution: Used for displaying exact values of objects rather than ranges. A discrete distribution is recommended if most objects only have a few possible values, such as properties or other binary values. Text and images The text and image views are used to insert static text or image content for the purpose of reports and dashboards. They allow an administrator to add context to a report in combination with the dynamic views that are inserted when the reports are generated. 
Adding Subjects to View Although the subjects are generally selected after the presentation, it makes sense to describe them first. The subject is the base object for which the view shows information. In other words, the subject is the object type that the data is coming from for the view. Any object type from any adapter can be selected. It is important to keep in mind that you may be designing a view for a parent object, however, the subject is actually the data of a child object. For example, if you wish to list all the Datastore free space in a vSphere Cluster itemized by Datastore, the subject will be a Datastore, not a Cluster Compute Resource. This is because although the list will always be viewed in the context of a cluster, the data listed is from Datastore objects themselves. When selecting a subject, an option is provided to select multiple object types. If this is done, only data that is common to both types will be available. Adding Data to View Data is the content that makes up the view based on the selected subject. The type of data that can be added and any additional options available depend on the select presentation type. An important feature with views is that they are able to display and filter based on properties, and not just standard metrics. This is particularly useful when filtering a list or distribution group. For example, the following screenshot shows badge information in a given vSphere Cluster, as long as they contain a vSphere tag of BackupProtectedVM. This allows a view to be filtered only to virtual machines that are deployed and managed by vRealize Automation: Adding Visibility layer One of the most useful features about views is that you have the ability to decide where they show up and where they can be linked from. The visibility layer defines where you can see a hyperlink to a view in vRealize Operations based on a series of checkboxes. The visibility step is broken into three categories, which are Availability, Further Analysis, and Blacklist, as shown in the following screenshot: Subsequently, you can also make the view available in a dashboard. To make this view available inside a dashboard, you can either edit an existing one or create a new dashboard by navigating to Home, Actions, then Create Dashboard. You can add the desired view within your dashboard configuration. Availability The availability checkboxes allow an administrator to devise how their view can be used and if there are cases where they wish to restrict its availability: Dashboard through the view widget: The view widget allows any created view to be displayed on a dashboard. This essentially allows an unlimited amount of data types to be displayed on the classic dashboards, with the flexibility of the different presentation types. Report template creation and modification: This setting allows views to be used in reports. If you are creating views explicitly to use in reports, ensure this box is checked. Details tab in the environment: The Details tab in the environment is the default location where administrators will use views. It is also the location where the Further Analysis links will take an administrator if selected. In most cases, it is recommended that this option be enabled, unless a view is not yet ready to be released to other users. Further Analysis The Further Analysis checkbox is a feature that allows an administrator to link views that they have created to the minor badges in the Object Analysis tab. 
Although this feature may seem comparatively small, it allows users to create relevant views for certain troubleshooting scenarios and link them directly to where administrators will be working. This allows administrators to leverage views more quickly for troubleshooting rather than simply jumping to the All Metrics tab and looking for dynamic threshold breaches. Blacklist The blacklist allows administrators to ensure that views cannot be used against certain object types. This is useful if you want to ensure that a view is only partially promoted up to a parent and not, for example, to a grandparent. How to Delete a View Views show up in multiple places. When you're tempted to delete a view, ask yourself: Do I want to delete this entire view, or do I just want to no longer show it in one part of the UI? Don't delete a view when you just want to hide it in one part of the UI. When you delete a view, areas in the UI that use the view are adjusted: Report templates: The view is removed from the report template Dashboards: The view widget displays the message The view does not exist Further Analysis panel of badges on the Analysis tab: The link to the view is removed Details > Views tab for the selected object: The view is removed from the list vRealize Operations will display a message informing you that deleting the view will modify the report templates that are using the view. Now, you have learned the new powerful features available in views and the different view presentation types. To know more about handling alerts and notifications in vRealize Operations, check out this book Mastering vRealize Operations Manager - Second Edition. VMware Kubernetes Engine (VKE) launched to offer Kubernetes-as-a-Service Introducing VMware Integrated OpenStack (VIO) 5.0, a new Infrastructure-as-a-Service (IaaS) cloud Are containers the end of virtual machines?


IoT project: Design a Multi-Robot Cooperation model with Swarm Intelligence [Tutorial]

Sugandha Lahoti
02 Aug 2018
7 min read
Collective intelligence (CI) is shared or group intelligence that emerges from the collaboration, collective efforts, and competition of many individuals, and appears in consensus decision making. Swarm intelligence (SI) is a subset of collective intelligence and describes the collective behavior of decentralized, self-organized systems, natural or artificial. In this tutorial, we will look at how to design a multi-robot cooperation model using swarm intelligence. This article is an excerpt from Intelligent IoT Projects in 7 Days by Agus Kurniawan. In this book, you will learn how to build your own intelligent Internet of Things projects.

What is swarm intelligence

Swarm intelligence is inspired by the collective behavior of social animal colonies such as ants, birds, wasps, and honey bees. These animals work together to achieve a common goal. Swarm intelligence phenomena can be found throughout our environment, including in marine animals; one example is a school of fish swimming in formation, captured by a photographer in Cabo Pulmo (image source: http://octavioaburto.com/cabo-pulmo).

Drawing on swarm intelligence studies, the same ideas are applied to coordinate autonomous robots. Each robot can be described as a self-organizing system, and each one negotiates with the others on how to achieve the goal. There are various algorithms that implement swarm intelligence. The following are types of swarm intelligence that researchers and developers apply to their problems:

Particle swarm optimization
Ant system
Ant colony system
Bees algorithm
Bacterial foraging optimization algorithm

The Particle Swarm Optimization (PSO) algorithm is inspired by the social foraging behavior of some animals, such as the flocking behavior of birds and the schooling behavior of fish. A sample PSO implementation in Python can be found at https://gist.github.com/btbytes/79877. This program needs the numpy library; numpy (Numerical Python) is a package for scientific computing with Python. Your computer should have Python installed; if not, you can download and install it from https://www.python.org. If your computer does not have numpy, you can install it by typing this command in the terminal (Linux and Mac platforms):

$ pip install numpy

For the Windows platform, install numpy by following the instructions at https://www.scipy.org/install.html. You can copy the following code into your editor.
Save it as code_1.py and then run it on your computer using terminal: from numpy import array from random import random from math import sin, sqrt iter_max = 10000 pop_size = 100 dimensions = 2 c1 = 2 c2 = 2 err_crit = 0.00001 class Particle: pass def f6(param): '''Schaffer's F6 function''' para = param*10 para = param[0:2] num = (sin(sqrt((para[0] * para[0]) + (para[1] * para[1])))) * (sin(sqrt((para[0] * para[0]) + (para[1] * para[1])))) - 0.5 denom = (1.0 + 0.001 * ((para[0] * para[0]) + (para[1] * para[1]))) * (1.0 + 0.001 * ((para[0] * para[0]) + (para[1] * para[1]))) f6 = 0.5 - (num/denom) errorf6 = 1 - f6 return f6, errorf6; #initialize the particles particles = [] for i in range(pop_size): p = Particle() p.params = array([random() for i in range(dimensions)]) p.fitness = 0.0 p.v = 0.0 particles.append(p) # let the first particle be the global best gbest = particles[0] err = 999999999 while i < iter_max : for p in particles: fitness,err = f6(p.params) if fitness > p.fitness: p.fitness = fitness p.best = p.params if fitness > gbest.fitness: gbest = p v = p.v + c1 * random() * (p.best - p.params) + c2 * random() * (gbest.params - p.params) p.params = p.params + v i += 1 if err < err_crit: break #progress bar. '.' = 10% if i % (iter_max/10) == 0: print '.' print 'nParticle Swarm Optimisationn' print 'PARAMETERSn','-'*9 print 'Population size : ', pop_size print 'Dimensions : ', dimensions print 'Error Criterion : ', err_crit print 'c1 : ', c1 print 'c2 : ', c2 print 'function : f6' print 'RESULTSn', '-'*7 print 'gbest fitness : ', gbest.fitness print 'gbest params : ', gbest.params print 'iterations : ', i+1 ## Uncomment to print particles for p in particles: print 'params: %s, fitness: %s, best: %s' % (p.params, p.fitness, p.best) You can run this program by typing this command: $ python code_1.py.py This program will generate PSO output parameters based on input. You can see PARAMETERS value on program output.  At the end of the code, we can print all PSO particle parameter while iteration process. Introducing multi-robot cooperation Communicating and negotiating among robots is challenging. We should ensure our robots address collision while they are moving. Meanwhile, these robots should achieve their goals collectively. For example, Keisuke Uto has created a multi-robot implementation to create a specific formation. They take input from their cameras. Then, these robots arrange themselves to create a formation. To get the correct robot formation, this system uses a camera to detect the current robot formation. Each robot has been labeled so it makes the system able to identify the robot formation. By implementing image processing, Keisuke shows how multiple robots create a formation using multi-robot cooperation. If you are interested, you can read about the project at https://www.digi.com/blog/xbee/multi-robot-formation-control-by-self-made-robots/. Designing a multi-robot cooperation model using swarm intelligence A multi-robot cooperation model enables some robots to work collectively to achieve a specific purpose. Having multi-robot cooperation is challenging. Several aspects should be considered in order to get an optimized implementation. The objective, hardware, pricing, and algorithm can have an impact on your multi-robot design. In this section, we will review some key aspects of designing multi-robot cooperation. This is important since developing a robot needs multi-disciplinary skills. 
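Before committing to any hardware, it can be useful to prototype the cooperative behaviour itself in software. The following is a minimal, self-contained Python 3 sketch written for this article rather than taken from the book: a handful of simulated robots move toward a shared goal using a PSO-style velocity update, with all parameters and the goal position chosen arbitrarily.

import random

# Hypothetical parameters, not taken from the book.
NUM_ROBOTS = 5
ITERATIONS = 50
C1, C2 = 1.5, 1.5    # pull toward each robot's own best and the swarm's best
GOAL = (10.0, 10.0)  # target position the swarm should reach


def distance_to_goal(pos):
    return ((pos[0] - GOAL[0]) ** 2 + (pos[1] - GOAL[1]) ** 2) ** 0.5


def main():
    # Each simulated robot starts at a random position with zero velocity.
    positions = [(random.uniform(0, 5), random.uniform(0, 5)) for _ in range(NUM_ROBOTS)]
    velocities = [(0.0, 0.0)] * NUM_ROBOTS
    personal_best = list(positions)
    swarm_best = min(positions, key=distance_to_goal)

    for _ in range(ITERATIONS):
        for i in range(NUM_ROBOTS):
            x, y = positions[i]
            vx, vy = velocities[i]
            bx, by = personal_best[i]
            gx, gy = swarm_best

            # PSO-style update: steer toward the robot's own best position
            # and toward the best position found by the whole swarm.
            vx = vx + C1 * random.random() * (bx - x) + C2 * random.random() * (gx - x)
            vy = vy + C1 * random.random() * (by - y) + C2 * random.random() * (gy - y)
            positions[i] = (x + 0.1 * vx, y + 0.1 * vy)  # 0.1 damps the step size
            velocities[i] = (vx, vy)

            if distance_to_goal(positions[i]) < distance_to_goal(personal_best[i]):
                personal_best[i] = positions[i]
            if distance_to_goal(positions[i]) < distance_to_goal(swarm_best):
                swarm_best = positions[i]

    print("best position reached:", swarm_best)
    print("distance to goal:", round(distance_to_goal(swarm_best), 3))


if __name__ == "__main__":
    main()

On physical robots the loop shape stays the same; the position update becomes a motor command and the positions come back from sensing, which is the sense-compute-actuate cycle described below.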
Define objectives

The first step in developing multi-robot swarm intelligence is to define the objectives. We should state clearly what the goal of the multi-robot implementation is; for instance, we can develop a multi-robot system for soccer games or to find and fight fire. After defining the objectives, we can gather all the material needed to achieve them: the robot platform, sensors, and algorithms are components that we should have.

Selecting a robot platform

The robot platform is the MCU model that will be used. There are several MCU platforms you can use for a multi-robot implementation: Arduino, Raspberry Pi, ESP8266, ESP32, TI LaunchPad, and BeagleBone are examples of MCU platforms that can probably be applied to your case. Sometimes you may need to consider price when deciding on a robot platform. Some researchers and makers build their robot devices with minimal hardware to get optimized functionality, and they also share their hardware and software designs. I recommend you visit Open Robotics, https://www.osrfoundation.org, to explore robot projects that might fit your problem. Alternatively, you can consider using robot kits. Using a kit means you don't need to solder electronic components; it is ready to use. You can find robot kits in online stores such as Pololu (https://www.pololu.com), SparkFun (https://www.sparkfun.com), DFRobot (https://www.dfrobot.com), and Makeblock (http://www.makeblock.com). (The original article includes photos of my own robots from Pololu and DFRobot.)

Selecting the algorithm for swarm intelligence

The choice of algorithm, especially for swarm intelligence, should be connected to the kind of robot platform being used. We already know that some robot hardware has computational limitations, and applying complex algorithms to devices with limited computation can drain the hardware's battery. You must research the best parameters for implementing a multi-robot system. Implementing swarm intelligence in swarm robots can be described as a loop: a swarm robot system performs sensing to gather information about its environment, including detecting the presence of peer robots. By combining inputs from sensors and peers, we actuate the robots based on the result of our swarm intelligence computation. Actuation can be movement and other actions.

We have now designed a multi-robot cooperation model using swarm intelligence. To learn how to create more smart IoT projects, check out the book Intelligent IoT Projects in 7 Days.

AI-powered Robotics: Autonomous machines in the making
How to assemble a DIY selfie drone with Arduino and ESP8266
Tips and tricks for troubleshooting and flying drones safely


Implementing React Component Lifecycle methods [Tutorial]

Sugandha Lahoti
01 Aug 2018
14 min read
All the React component’s lifecycle methods can be split into four phases: initialization, mounting, updating and unmounting. The process where all these stages are involved is called the component’s lifecycle and every React component goes through it. React provides several methods that notify us when a certain stage of this process occurs. These methods are called the component’s lifecycle methods and they are invoked in a predictable order. In this article we will learn about the lifecycle of React components and how to write code that responds to lifecycle events. We'll kick things off with a brief discussion on why components need a lifecycle. And then we will implement several example components that will  initialize their properties and state using these methods. This article is an excerpt from React and React Native by Adam Boduch.  Why components need a lifecycle React components go through a lifecycle, whether our code knows about it or not. Rendering is one of the lifecycle events in a React component. For example, there are lifecycle events for when the component is about to be mounted into the DOM, for after the component has been mounted to the DOM, when the component is updated, and so on. Lifecycle events are yet another moving part, so you'll want to keep them to a minimum. Some components do need to respond to lifecycle events to perform initialization, render heuristics, or clean up after the component when it's unmounted from the DOM. The following diagram gives you an idea of how a component flows through its lifecycle, calling the corresponding methods in turn: These are the two main lifecycle flows of a React component. The first happens when the component is initially rendered. The second happens whenever the component is re-rendered. However, the componentWillReceiveProps() method is only called when the component's properties are updated. This means that if the component is re-rendered because of a call to setState(), this lifecycle method isn't called, and the flow starts with shouldComponentUpdate() instead. The other lifecycle method that isn't included in this diagram is componentWillUnmount(). This is the only lifecycle method that's called when a component is about to be removed. Initializing properties and state In this section, you'll see how to implement initialization code in React components. This involves using lifecycle methods that are called when the component is first created. First, we'll walk through a basic example that sets the component up with data from the API. Then, you'll see how state can be initialized from properties, and also how state can be updated as properties change. Fetching component data One of the first things you'll want to do when your components are initialized is populate their state or properties. Otherwise, the component won't have anything to render other than its skeleton markup. For instance, let's say you want to render the following user list component: import React from 'react'; import { Map as ImmutableMap } from 'immutable'; // This component displays the passed-in "error" // property as bold text. If it's null, then // nothing is rendered. const ErrorMessage = ({ error }) => ImmutableMap() .set(null, null) .get( error, (<strong>{error}</strong>) ); // This component displays the passed-in "loading" // property as italic text. If it's null, then // nothing is rendered. 
const LoadingMessage = ({ loading }) => ImmutableMap() .set(null, null) .get( loading, (<em>{loading}</em>) ); export default ({ error, loading, users, }) => ( <section> { /* Displays any error messages... */ } <ErrorMessage error={error} /> { /* Displays any loading messages, while waiting for the API... */ } <LoadingMessage loading={loading} /> { /* Renders the user list... */ } <ul> {users.map(i => ( <li key={i.id}>{i.name}</li> ))} </ul> </section> ); There are three pieces of data that this JSX relies on: loading: This message is displayed while fetching API data error: This message is displayed if something goes wrong users: Data fetched from the API There's also two helper components used here: ErrorMessage and LoadingMessage. They're used to format the error and the loading state, respectively. However, if error or loading are null, neither do we want to render anything nor do we want to introduce imperative logic into these simple functional components. This is why we're using a cool little trick with Immutable.js maps. First, we create a map that has a single key-value pair. The key is null, and the value is null. Second, we call get() with either an error or a loading property. If the error or loading property is null, then the key is found and nothing is rendered. The trick is that get() accepts a second parameter that's returned if no key is found. This is where we pass in our truthy value and avoid imperative logic all together. This specific component is simple, but the technique is especially powerful when there are more than two possibilities. How should we go about making the API call and using the response to populate the users collection? The answer is to use a container component, introduced in the preceding chapter that makes the API call and then renders the UserList component: import React, { Component } from 'react'; import { fromJS } from 'immutable'; import { users } from './api'; import UserList from './UserList'; export default class UserListContainer extends Component { state = { data: fromJS({ error: null, loading: 'loading...', users: [], }), } // Getter for "Immutable.js" state data... get data() { return this.state.data; } // Setter for "Immutable.js" state data... set data(data) { this.setState({ data }); } // When component has been rendered, "componentDidMount()" // is called. This is where we should perform asynchronous // behavior that will change the state of the component. // In this case, we're fetching a list of users from // the mock API. componentDidMount() { users().then( (result) => { // Populate the "users" state, but also // make sure the "error" and "loading" // states are cleared. this.data = this.data .set('loading', null) .set('error', null) .set('users', fromJS(result.users)); }, (error) => { // When an error occurs, we want to clear // the "loading" state and set the "error" // state. this.data = this.data .set('loading', null) .set('error', error); } ); } render() { return ( <UserList {...this.data.toJS()} /> ); } } Let's take a look at the render() method. It's sole job is to render the <UserList> component, passing in this.state as its properties. The actual API call happens in the componentDidMount() method. This method is called after the component is mounted into the DOM. This means that <UserList> will have rendered once, before any data from the API arrives. But this is fine, because we've set up the UserListContainer state to have a default loading message, and UserList will display this message while waiting for API data. 
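One practical detail worth adding here, as a hedged sketch of my own rather than something from the book: if a container like this can be unmounted while the API call is still in flight, the resolved promise will end up calling setState() on an unmounted component. A simple flag flipped in componentWillUnmount() avoids that:

import React, { Component } from 'react';
import { fromJS } from 'immutable';
import { users } from './api';
import UserList from './UserList';

// Hypothetical variant of the container that ignores the API response
// if the component has already been removed from the DOM.
export default class SafeUserListContainer extends Component {
  state = {
    data: fromJS({ error: null, loading: 'loading...', users: [] }),
  }

  componentDidMount() {
    this.mounted = true;
    users().then(
      (result) => {
        if (!this.mounted) {
          return; // too late, the container is gone
        }
        this.setState(({ data }) => ({
          data: data
            .set('loading', null)
            .set('error', null)
            .set('users', fromJS(result.users)),
        }));
      },
      (error) => {
        if (!this.mounted) {
          return;
        }
        this.setState(({ data }) => ({
          data: data.set('loading', null).set('error', error),
        }));
      }
    );
  }

  componentWillUnmount() {
    this.mounted = false;
  }

  render() {
    return <UserList {...this.state.data.toJS()} />;
  }
}

With or without that guard, the happy path is the same as in the UserListContainer shown above.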
Once the API call returns with data, the users collection is populated, causing the UserList to re-render itself, only this time, it has the data it needs. So, why would we want to make this API call in componentDidMount() instead of in the component constructor, for example? The rule-of-thumb here is actually very simple to follow. Whenever there's asynchronous behavior that changes the state of a React component, it should be called from a lifecycle method. This way, it's easy to reason about how and when a component changes state. Let's take a look at the users() mock API function call used here: // Returns a promise that's resolved after 2 // seconds. By default, it will resolve an array // of user data. If the "fail" argument is true, // the promise is rejected. export function users(fail) { return new Promise((resolve, reject) => { setTimeout(() => { if (fail) { reject('epic fail'); } else { resolve({ users: [ { id: 0, name: 'First' }, { id: 1, name: 'Second' }, { id: 2, name: 'Third' }, ], }); } }, 2000); }); } It simply returns a promise that's resolved with an array after 2 seconds. Promises are a good tool for mocking things like API calls because this enables you to use more than simple HTTP calls as a data source in your React components. For example, you might be reading from a local file or using some library that returns promises that resolve data from unknown sources. Here's what the UserList component renders when the loading state is a string, and the users state is an empty array: Here's what it renders when loading is null and users is non-empty: I can't promise that this is the last time I'm going to make this point in the book, but I'll try to keep it to a minimum. I want to hammer home the separation of responsibilities between the UserListContainer and the UserList components. Because the container component handles the lifecycle management and the actual API communication, this enables us to create a very generic user list component. In fact, it's a functional component that doesn't require any state, which means this is easy to reuse throughout our application. Initializing state with properties The preceding example showed you how to initialize the state of a container component by making an API call in the componentDidMount() lifecycle method. However, the only populated part of the component state is the users collection. You might want to populate other pieces of state that don't come from API endpoints. For example, the error and loading state messages have default values set when the state is initialized. This is great, but what if the code that is rendering UserListContainer wants to use a different loading message? You can achieve this by allowing properties to override the default state. Let's build on the UserListContainer component: import React, { Component } from 'react'; import { fromJS } from 'immutable'; import { users } from './api'; import UserList from './UserList'; class UserListContainer extends Component { state = { data: fromJS({ error: null, loading: null, users: [], }), } // Getter for "Immutable.js" state data... get data() { return this.state.data; } // Setter for "Immutable.js" state data... set data(data) { this.setState({ data }); } // Called before the component is mounted into the DOM // for the first time. componentWillMount() { // Since the component hasn't been mounted yet, it's // safe to change the state by calling "setState()" // without causing the component to re-render. 
this.data = this.data .set('loading', this.props.loading); } // When component has been rendered, "componentDidMount()" // is called. This is where we should perform asynchronous // behavior that will change the state of the component. // In this case, we're fetching a list of users from // the mock API. componentDidMount() { users().then( (result) => { // Populate the "users" state, but also // make sure the "error" and "loading" // states are cleared. this.data = this.data .set('loading', null) .set('error', null) .set('users', fromJS(result.users)); }, (error) => { // When an error occurs, we want to clear // the "loading" state and set the "error" // state. this.data = this.data .set('loading', null) .set('error', error); } ); } render() { return ( <UserList {...this.data.toJS()} /> ); } } UserListContainer.defaultProps = { loading: 'loading...', }; export default UserListContainer; You can see that loading no longer has a default string value. Instead, we've introduced defaultProps, which provide default values for properties that aren't passed in through JSX markup. The new lifecycle method we've added is componentWillMount(), and it uses the loading property to initialize the state. Since the loading property has a default value, it's safe to just change the state. However, calling setState() (via this.data) here doesn't cause the component to re-render itself. The method is called before the component mounts, so the initial render hasn't happened yet. Let's see how we can pass state data to UserListContainer now: import React from 'react'; import { render } from 'react-dom'; import UserListContainer from './UserListContainer'; // Renders the component with a "loading" property. // This value ultimately ends up in the component state. render(( <UserListContainer loading="playing the waiting game..." /> ), document.getElementById('app') ); Pretty cool, right? Just because the component has state, doesn't mean that we can't be flexible and allow for customization of this state. We'll look at one more variation on this theme—updating component state through properties. Here's what the initial loading message looks like when UserList is first rendered: Updating state with properties You've seen how the componentWillMount() and componentDidMount() lifecycle methods help get your component the data it needs. There's one more scenario that we should consider here—re-rendering the component container. Let's take a look at a simple button component that tracks the number of times it's been clicked: import React from 'react'; export default ({ clicks, disabled, text, onClick, }) => ( <section> { /* Renders the number of button clicks, using the "clicks" property. */ } <p>{clicks} clicks</p> { /* Renders the button. It's disabled state is based on the "disabled" property, and the "onClick()" handler comes from the container component. */} <button disabled={disabled} onClick={onClick} > {text} </button> </section> ); Now, let's implement a container component for this feature: import React, { Component } from 'react'; import { fromJS } from 'immutable'; import MyButton from './MyButton'; class MyFeature extends Component { state = { data: fromJS({ clicks: 0, disabled: false, text: '', }), } // Getter for "Immutable.js" state data... get data() { return this.state.data; } // Setter for "Immutable.js" state data... set data(data) { this.setState({ data }); } // Sets the "text" state before the initial render. 
// If a "text" property was provided to the component, // then it overrides the initial "text" state. componentWillMount() { this.data = this.data .set('text', this.props.text); } // If the component is re-rendered with new // property values, this method is called with the // new property values. If the "disabled" property // is provided, we use it to update the "disabled" // state. Calling "setState()" here will not // cause a re-render, because the component is already // in the middle of a re-render. componentWillReceiveProps({ disabled }) { this.data = this.data .set('disabled', disabled); } // Click event handler, increments the "click" count. onClick = () => { this.data = this.data .update('clicks', c => c + 1); } // Renders the "<MyButton>" component, passing it the // "onClick()" handler, and the state as properties. render() { return ( <MyButton onClick={this.onClick} {...this.data.toJS()} /> ); } } MyFeature.defaultProps = { text: 'A Button', }; export default MyFeature; The same approach as the preceding example is taken here. Before the component is mounted, set the value of the text state to the value of the text property. However, we also set the text state in the componentWillReceiveProps() method. This method is called when property values change, or in other words, when the component is re-rendered. Let's see how we can re-render this component and whether or not the state behaves as we'd expect it to: import React from 'react'; import { render as renderJSX } from 'react-dom'; import MyFeature from './MyFeature'; // Determines the state of the button // element in "MyFeature". let disabled = true; function render() { // Toggle the state of the "disabled" property. disabled = !disabled; renderJSX( (<MyFeature {...{ disabled }} />), document.getElementById('app') ); } // Re-render the "<MyFeature>" component every // 3 seconds, toggling the "disabled" button // property. setInterval(render, 3000); render(); Sure enough, everything goes as planned. Whenever the button is clicked, the click counter is updated. But as you can see, <MyFeature> is re-rendered every 3 seconds, toggling the disabled state of the button. When the button is re-enabled and clicking resumes, the counter continues from where it left off. Here is what the MyButton component looks like when first rendered: Here's what it looks like after it has been clicked a few times and the button has moved into a disabled state: We learned about the lifecycle of React components. We also discussed why React components need a lifecycle. It turns out that React can't do everything automatically for us, so we need to write some code that's run at the appropriate time during the components' lifecycles. To know more about how to take the concepts of React and apply them to building Native UIs using React Native, read this book React and React Native. What is React.js and how does it work? What is the Reactive Manifesto? Is React Native is really a Native framework?


Writing PostGIS functions in Python language [Tutorial]

Pravin Dhandre
01 Aug 2018
5 min read
In this tutorial, you will learn to write a Python function for PostGIS and PostgreSQL using the PL/Python language and effective libraries like urllib2 and simplejson. You will use Python to query the http://openweathermap.org/ web services to get the weather for a PostGIS geometry from within a PostgreSQL function. This tutorial is an excerpt from a book written by Mayra Zurbaran,Pedro Wightman, Paolo Corti, Stephen Mather, Thomas Kraft and Bborie Park titled PostGIS Cookbook - Second Edition. Adding Python support to database Verify your PostgreSQL server installation has PL/Python support. In Windows, this should be already included, but this is not the default if you are using, for example, Ubuntu 16.04 LTS, so you will most likely need to install it: $ sudo apt-get install postgresql-plpython-9.1 Install PL/Python on the database (you could consider installing it in your template1 database; in this way, every newly created database will have PL/Python support by default): You could alternatively add PL/Python support to your database, using the createlang shell command (this is the only way if you are using PostgreSQL version 9.1 or lower): $ createlang plpythonu postgis_cookbook $ psql -U me postgis_cookbook postgis_cookbook=# CREATE EXTENSION plpythonu; How to do it... Carry out the following steps: In this tutorial, as with the previous one, you will use a http://openweathermap.org/ web service to get the temperature for a point from the closest weather station. The request you need to run (test it in a browser) is http://api.openweathermap.org/data/2.5/find?lat=55&lon=37&cnt=10&appid=YOURKEY. You should get the following JSON output (the closest weather station's data from which you will read the temperature to the point, with the coordinates of the given longitude and latitude): { message: "", cod: "200", calctime: "", cnt: 1, list: [ { id: 9191, dt: 1369343192, name: "100704-1", type: 2, coord: { lat: 13.7408, lon: 100.5478 }, distance: 6.244, main: { temp: 300.37 }, wind: { speed: 0, deg: 141 }, rang: 30, rain: { 1h: 0, 24h: 3.302, today: 0 } } ] } Create the following PostgreSQL function in Python, using the PL/Python language: CREATE OR REPLACE FUNCTION chp08.GetWeather(lon float, lat float) RETURNS float AS $$ import urllib2 import simplejson as json data = urllib2.urlopen( 'http://api.openweathermap.org/data/ 2.1/find/station?lat=%s&lon=%s&cnt=1' % (lat, lon)) js_data = json.load(data) if js_data['cod'] == '200': # only if cod is 200 we got some effective results if int(js_data['cnt'])>0: # check if we have at least a weather station station = js_data['list'][0] print 'Data from weather station %s' % station['name'] if 'main' in station: if 'temp' in station['main']: temperature = station['main']['temp'] - 273.15 # we want the temperature in Celsius else: temperature = None else: temperature = None return temperature $$ LANGUAGE plpythonu; Now, test your function; for example, get the temperature from the weather station closest to Wat Pho Templum in Bangkok: postgis_cookbook=# SELECT chp08.GetWeather(100.49, 13.74); getweather ------------ 27.22 (1 row) If you want to get the temperature for the point features in a PostGIS table, you can use the coordinates of each feature's geometry: postgis_cookbook=# SELECT name, temperature, chp08.GetWeather(ST_X(the_geom), ST_Y(the_geom)) AS temperature2 FROM chp08.cities LIMIT 5; name | temperature | temperature2 -------------+-------------+-------------- Minneapolis | 275.15 | 15 Saint Paul | 274.15 | 16 Buffalo | 274.15 | 19.44 
New York | 280.93 | 19.44 Jersey City | 282.15 | 21.67 (5 rows) Now it would be nice if our function could accept not only the coordinates of a point, but also a true PostGIS geometry as well as an input parameter. For the temperature of a feature, you could return the temperature of the weather station closest to the centroid of the feature geometry. You can easily get this behavior using function overloading. Add a new function, with the same name, supporting a PostGIS geometry directly as an input parameter. In the body of the function, call the previous function, passing the coordinates of the centroid of the geometry. Note that in this case, you can write the function without using Python, with the PL/PostgreSQL language: CREATE OR REPLACE FUNCTION chp08.GetWeather(geom geometry) RETURNS float AS $$ BEGIN RETURN chp08.GetWeather(ST_X(ST_Centroid(geom)), ST_Y(ST_Centroid(geom))); END; $$ LANGUAGE plpgsql; Now, test the function, passing a PostGIS geometry to the function: postgis_cookbook=# SELECT chp08.GetWeather( ST_GeomFromText('POINT(-71.064544 42.28787)')); getweather ------------ 23.89 (1 row) If you use the function on a PostGIS layer, you can pass the feature's geometries to the function directly, using the overloaded function written in the PL/PostgreSQL language: postgis_cookbook=# SELECT name, temperature, chp08.GetWeather(the_geom) AS temperature2 FROM chp08.cities LIMIT 5; name | temperature | temperature2 -------------+-------------+-------------- Minneapolis | 275.15 | 17.22 Saint Paul | 274.15 | 16 Buffalo | 274.15 | 18.89 New York | 280.93 | 19.44 Jersey City | 282.15 | 21.67 (5 rows) In this tutorial, you wrote a Python function in PostGIS, using the PL/Python language. Using Python inside PostgreSQL and PostGIS functions gives you the great advantage of being able to use any Python library you wish. Therefore, you will be able to write much more powerful functions compared to those written using the standard PL/PostgreSQL language. In fact, in this case, you used the urllib2 and simplejson Python libraries to query a web service from within a PostgreSQL function—this would be an impossible operation to do using plain PL/PostgreSQL. You have also seen how to overload functions in order to provide the function's user a different way to access the function, using input parameters in a different way. To get armed with all the tools and instructions you need for managing entire spatial database systems, read PostGIS Cookbook - Second Edition. Top 7 libraries for geospatial analysis Learning R for Geospatial Analysis
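A side note on the recipe above: it targets the Python 2 flavor of PL/Python (plpythonu) together with the urllib2 and simplejson libraries. On servers that only ship the Python 3 variant (plpython3u), a roughly equivalent function can be written with nothing but the standard library. The sketch below is an adaptation for illustration only, not code from the book; the function name GetWeatherPy3, the use of the /data/2.5 endpoint from the browser test, and the YOURKEY placeholder are assumptions you would adjust to your own setup.

CREATE OR REPLACE FUNCTION chp08.GetWeatherPy3(lon float, lat float)
RETURNS float AS $$
# PL/Python3 body: plain Python 3, standard library only
import urllib.request
import json

url = ('http://api.openweathermap.org/data/2.5/find'
       '?lat=%s&lon=%s&cnt=1&appid=YOURKEY' % (lat, lon))
with urllib.request.urlopen(url) as response:
    js_data = json.loads(response.read().decode('utf-8'))

temperature = None
# 'cod' comes back as the string '200' on success
if js_data.get('cod') == '200' and int(js_data.get('cnt', 0)) > 0:
    station = js_data['list'][0]
    main = station.get('main', {})
    if 'temp' in main:
        # the API reports Kelvin; convert to Celsius
        temperature = main['temp'] - 273.15
return temperature
$$ LANGUAGE plpython3u;

The calling convention stays the same, for example SELECT chp08.GetWeatherPy3(100.49, 13.74);, and the PL/pgSQL overloading trick shown above for passing a geometry applies to it in exactly the same way.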

Time for Facebook, Twitter and other social media to take responsibility or face regulation

Sugandha Lahoti
01 Aug 2018
9 min read
Of late, the world has been shaken over the rising number of data related scandals and attacks that have overshadowed social media platforms. This shakedown was experienced in Wall Street last week when tech stocks came crashing down after Facebook’s Q2 earnings call on 25th July and then further down after Twitter’s earnings call on 27th July. Social media regulation is now at the heart of discussions across the tech sector. The social butterfly effect is real 2018 began with the Cambridge Analytica scandal where the data analytics company was alleged to have not only been influencing the outcome of UK and US Presidential elections but also of harvesting copious amounts of data from Facebook (illegally).  Then Facebook fell down the rabbit hole with Muller’s indictment report that highlighted the role social media played in election interference in 2016. ‘Fake news’ on Whatsapp triggered mob violence in India while Twitter has been plagued with fake accounts and tweets that never seem to go away. Fake news and friends crash the tech stock party Last week, social media stocks fell in double digits (Facebook by 20% and Twitter by 21%) bringing down the entire tech sector; a fall that continues to keep tech stocks in a bearish market and haunt tech shareholders even today. Wall Street has been a nervous wreck this week hoping for the bad news to stop spirally downwards with good news from Apple to undo last week’s nightmare. Amidst these reports, lawmakers, regulators and organizations alike are facing greater pressure for regulation of social media platforms. How are lawmakers proposing to regulate social media? Even though lawmakers have started paying increased attention to social networks over the past year, there has been little progress made in terms of how much they actually understand them. This could soon change as Axios’ David McCabe published a policy paper from the office of Senator Mark Warner. This paper describes a comprehensive regulatory policy covering almost every aspect of social networks. The paper-proposal is designed to address three broad categories: combating misinformation, privacy and data protection, and promoting competition in tech space. Misinformation, disinformation, and the exploitation of technology covers ideas such as: Networks are to label automated bots. Platforms are to verify identities, Platforms are to make regular disclosures about how many fake accounts they’ve deleted. Platforms are to create APIs for academic research. Privacy and data protection include policies such as: Create a US version of the GDPR. Designate platforms as information fiduciaries with the legal responsibility of protecting user’s data. Empowering the Federal Trade Commission to make rules around data privacy. Create a legislative ban on dark patterns that trick users into accepting terms and conditions without reading them. Allow the government to audit corporate algorithms. Promoting competition in tech space that requires: Tech companies to continuously disclose to consumers how their data is being used. Social network data to be made portable. Social networks to be interoperable. Designate certain products as essential facilities and demand that third parties get fair access to them. 
Although these proposals and more of them (British parliamentary committee recommended imposing much stricter guidelines on social networks) remain far from becoming the law, they are an assurance that legal firms and lawmakers are serious about taking steps to ensure that social media platforms don’t go out of hand. Taking measures to ensure data regulations by lawmakers and legal authorities is only effective if the platforms themselves care about the issues themselves and are motivated to behave in the right way. Losing a significant chunk of their user base in EU lately seems to have provided that very incentive. Social network platforms, themselves have now started seeking ways to protecting user data and improve their platforms in general to alleviate some of the problems they helped create or amplify. How is Facebook planning to course correct it’s social media Frankenstein? Last week, Mark Zuckerberg started the fated earnings call by saying, “I want to start by talking about all the investments we've made over the last six months to improve safety, security, and privacy across our services. This has been a lot of hard work, and it's starting to pay off.” He then goes on to elaborate key areas of focus for Facebook in the coming months, the next 1.5 years to be more specific. Ad transparency tools: All ads can be viewed by anyone, even if they are not targeted at them. Facebook is also developing an archive of ads with political or issue content which will be labeled to show who paid for them, what the budget was and how many people viewed the ads, and will also allow one to search ads by an advertiser for the past 7 years. Disallow and report known election interference attempts: Facebook will proactively look for and eliminate fake accounts, pages, and groups that violated their policies. This could minimize election interference, says Zuckerberg. Fight against misinformation: Remove the financial incentives for spammers to create fake news.  Stop pages that repeatedly spread false information from buying ads. Shift from reactive to proactive detection with AI: Use AI to prevent fake accounts that generate a lot of the problematic content from ever being created in the first place.  They can now remove more bad content quickly because we don't have to wait until after it's reported. In Q1, for example, almost 90% of graphic violence content that Facebook removed or added a warning label to was identified using AI. Invest heavily in security and privacy. No further elaboration on this aspect was given on the call. This week, Facebook reported that they’d  detected and removed 32 pages and fake accounts that had engaged in a coordinated inauthentic behavior. These accounts and pages were of a political influence campaign that was potentially built to disrupt the midterm elections. According to Facebook’s Head of Cybersecurity, Nathaniel Gleicher, “So far, the activity encompasses eight Facebook Pages, 17 profiles and seven accounts on Instagram.” Facebook’s action is a change from last year when it was widely criticized for failing to detect Russian interference in the 2016 presidential election. Although the current campaign hasn’t been linked to Russia (yet), Facebook officials pointed out that some of the tools and techniques used by the accounts were similar to those used by the Russian government-linked Internet Research Agency. 
How Twitter plans to make its platform a better place for real and civilized conversation

"We want people to feel safe freely expressing themselves and have launched new tools to address problem behaviors that distort and distract from the public conversation. We're also continuing to make it easier for people to find and follow breaking news and events…" said Jack Dorsey, Twitter's CEO, on the Q2 2018 earnings call. The letter to Twitter shareholders elaborates on this point: We continue to invest in improving the health of the public conversation on Twitter, making the service better by integrating new behavioral signals to remove spammy and suspicious accounts and continuing to prioritize the long-term health of the platform over near-term metrics. We also acquired Smyte, a company that specializes in spam prevention, safety, and security.

Unlike Facebook, which offered mostly anecdotal support for its claims, Twitter provided quantitative evidence to show the seriousness of its endeavor. Here are some key metrics from the shareholders' letter this quarter:

- Early experiments with new tools to address behaviors that distort and distract from the public conversation show a 4% drop in abuse reports from search and 8% fewer abuse reports from conversations
- More than 9 million potentially spammy or automated accounts identified and challenged per week
- 8k fewer average spam reports per day
- More than 2x the number of accounts removed for violating Twitter's spam policies compared with last year

It is clear that Twitter has been quite active in looking for ways to eliminate toxicity from its network. In a series of tweets, CEO Jack Dorsey stated that the company did not always meet users' expectations: "We aren't proud of how people have taken advantage of our service, or our inability to address it fast enough," adding that the company needs a "systemic framework."

Back in March 2018, Twitter invited external experts to measure the health of the platform in order to encourage healthier conversation, debate, and critical thinking. Twitter asked them to create proposals taking inspiration from the concept of measuring conversation health defined by the non-profit firm Cortico. As of yesterday, Twitter has finalized its team of researchers, who are ready to take up the challenge of identifying echo chambers and unhealthy behavior on Twitter and then translating their findings into practical algorithms down the line.

With social media here to stay, both lawmakers and social media platforms are looking for new ways to regulate. Any misstep by these social media sites will have solid repercussions, including not only closer scrutiny by the government and private watchdogs but also a loss of stock value, a damaged reputation, and being linked to other forms of data misuse and accusations of political bias. Lastly, let's not forget the responsibility that lies with the 'social' side of these platforms. Individuals need to play their part by proactively reporting fake news and stories, and by being more selective about the content they share on social media.

Why Wall Street unfriended Facebook: Stocks fell $120 billion in market value after Q2 2018 earnings call
Facebook must stop discriminatory advertising in the US, declares Washington AG, Ferguson
Facebook is investigating data analytics firm Crimson Hexagon over misuse of data

Using Transactions with Asynchronous Tasks in JavaEE [Tutorial]

Aaron Lazar
31 Jul 2018
5 min read
Threading is a common issue in most software projects, no matter which language or other technology is involved. When talking about enterprise applications, things become even more important and sometimes harder. Using asynchronous tasks could be a challenge: what if you need to add some spice and add a transaction to it? Thankfully, the Java EE environment has some great features for dealing with this challenge, and this article will show you how. This article is an extract from the book Java EE 8 Cookbook, authored by Elder Moraes. Usually, a transaction means something like code blocking. Isn't it awkward to combine two opposing concepts? Well, it's not! They can work together nicely, as shown here. Adding Java EE 8 dependency Let's first add our Java EE 8 dependency: <dependency> <groupId>javax</groupId> <artifactId>javaee-api</artifactId> <version>8.0</version> <scope>provided</scope> </dependency> Let's first create a User POJO: public class User { private Long id; private String name; public Long getId() { return id; } public void setId(Long id) { this.id = id; } public String getName() { return name; } public void setName(String name) { this.name = name; } public User(Long id, String name) { this.id = id; this.name = name; } @Override public String toString() { return "User{" + "id=" + id + ", name=" + name + '}'; } } And here is a slow bean that will return User: @Stateless public class UserBean { public User getUser(){ try { TimeUnit.SECONDS.sleep(5); long id = new Date().getTime(); return new User(id, "User " + id); } catch (InterruptedException ex) { System.err.println(ex.getMessage()); long id = new Date().getTime(); return new User(id, "Error " + id); } } } Now we create a task to be executed that will return User using some transaction stuff: public class AsyncTask implements Callable<User> { private UserTransaction userTransaction; private UserBean userBean; @Override public User call() throws Exception { performLookups(); try { userTransaction.begin(); User user = userBean.getUser(); userTransaction.commit(); return user; } catch (IllegalStateException | SecurityException | HeuristicMixedException | HeuristicRollbackException | NotSupportedException | RollbackException | SystemException e) { userTransaction.rollback(); return null; } } private void performLookups() throws NamingException{ userBean = CDI.current().select(UserBean.class).get(); userTransaction = CDI.current() .select(UserTransaction.class).get(); } } And finally, here is the service endpoint that will use the task to write the result to a response: @Path("asyncService") @RequestScoped public class AsyncService { private AsyncTask asyncTask; @Resource(name = "LocalManagedExecutorService") private ManagedExecutorService executor; @PostConstruct public void init(){ asyncTask = new AsyncTask(); } @GET public void asyncService(@Suspended AsyncResponse response){ Future<User> result = executor.submit(asyncTask); while(!result.isDone()){ try { TimeUnit.SECONDS.sleep(1); } catch (InterruptedException ex) { System.err.println(ex.getMessage()); } } try { response.resume(Response.ok(result.get()).build()); } catch (InterruptedException | ExecutionException ex) { System.err.println(ex.getMessage()); response.resume(Response.status(Response .Status.INTERNAL_SERVER_ERROR) .entity(ex.getMessage()).build()); } } } To try this code, just deploy it to GlassFish 5 and open this URL: http://localhost:8080/ch09-async-transaction/asyncService How the Asynchronous execution works The magic happens in the AsyncTask class, where we 
will first take a look at the performLookups method: private void performLookups() throws NamingException{ Context ctx = new InitialContext(); userTransaction = (UserTransaction) ctx.lookup("java:comp/UserTransaction"); userBean = (UserBean) ctx.lookup("java:global/ ch09-async-transaction/UserBean"); } It will give you the instances of both UserTransaction and UserBean from the application server. Then you can relax and rely on the things already instantiated for you. As our task implements a Callabe<V> object that it needs to implement the call() method: @Override public User call() throws Exception { performLookups(); try { userTransaction.begin(); User user = userBean.getUser(); userTransaction.commit(); return user; } catch (IllegalStateException | SecurityException | HeuristicMixedException | HeuristicRollbackException | NotSupportedException | RollbackException | SystemException e) { userTransaction.rollback(); return null; } } You can see Callable as a Runnable interface that returns a result. Our transaction code lives here: userTransaction.begin(); User user = userBean.getUser(); userTransaction.commit(); And if anything goes wrong, we have the following: } catch (IllegalStateException | SecurityException | HeuristicMixedException | HeuristicRollbackException | NotSupportedException | RollbackException | SystemException e) { userTransaction.rollback(); return null; } Now we will look at AsyncService. First, we have some declarations: private AsyncTask asyncTask; @Resource(name = "LocalManagedExecutorService") private ManagedExecutorService executor; @PostConstruct public void init(){ asyncTask = new AsyncTask(); } We are asking the container to give us an instance from ManagedExecutorService, which It is responsible for executing the task in the enterprise context. Then we call an init() method, and the bean is constructed (@PostConstruct). This instantiates the task. Now we have our task execution: @GET public void asyncService(@Suspended AsyncResponse response){ Future<User> result = executor.submit(asyncTask); while(!result.isDone()){ try { TimeUnit.SECONDS.sleep(1); } catch (InterruptedException ex) { System.err.println(ex.getMessage()); } } try { response.resume(Response.ok(result.get()).build()); } catch (InterruptedException | ExecutionException ex) { System.err.println(ex.getMessage()); response.resume(Response.status(Response. Status.INTERNAL_SERVER_ERROR) .entity(ex.getMessage()).build()); } } Note that the executor returns Future<User>: Future<User> result = executor.submit(asyncTask); This means this task will be executed asynchronously. Then we check its execution status until it's done: while(!result.isDone()){ try { TimeUnit.SECONDS.sleep(1); } catch (InterruptedException ex) { System.err.println(ex.getMessage()); } } And once it's done, we write it down to the asynchronous response: response.resume(Response.ok(result.get()).build()); The full source code of this recipe is at Github. So now, using Transactions with Asynchronous Tasks in JavaEE isn't such a daunting task, is it? If you found this tutorial helpful and would like to learn more, head on to this book Java EE 8 Cookbook. Oracle announces a new pricing structure for Java Design a RESTful web API with Java [Tutorial] How to convert Java code into Kotlin
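One possible refinement, which is not part of the recipe itself: the while (!result.isDone()) loop above keeps a request thread busy while it polls. Because ManagedExecutorService implements java.util.concurrent.Executor, the same endpoint could resume the AsyncResponse from a callback instead. The following is only a sketch of that alternative, under the assumption that the surrounding AsyncService class stays as shown; it additionally needs imports for java.util.concurrent.CompletableFuture and java.util.concurrent.CompletionException.

@GET
public void asyncService(@Suspended AsyncResponse response) {
    CompletableFuture
        .supplyAsync(() -> {
            try {
                // runs the transactional task on a container-managed thread
                return asyncTask.call();
            } catch (Exception e) {
                throw new CompletionException(e);
            }
        }, executor)
        .whenComplete((user, error) -> {
            if (error != null) {
                // mirror the error handling of the original endpoint
                response.resume(Response
                        .status(Response.Status.INTERNAL_SERVER_ERROR)
                        .entity(error.getMessage())
                        .build());
            } else {
                response.resume(Response.ok(user).build());
            }
        });
}

The behavior seen by the client is the same; the difference is that no thread sits in a sleep loop waiting for the Future to complete.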

Ansible 2 for automating networking tasks on Google Cloud Platform [Tutorial]

Vijin Boricha
31 Jul 2018
8 min read
Google Cloud Platform is one of the largest and most innovative cloud providers out there. It is used by various industry leaders such as Coca-Cola, Spotify, and Philips. Amazon Web Services and Google Cloud are always involved in a price war, which benefits consumers greatly. Google Cloud Platform covers 12 geographical regions across four continents with new regions coming up every year. In this tutorial, we will learn about Google compute engine and network services and how Ansible 2 can be leveraged to automate common networking tasks. This is an excerpt from Ansible 2 Cloud Automation Cookbook written by Aditya Patawari, Vikas Aggarwal.  Managing network and firewall rules By default, inbound connections are not allowed to any of the instances. One way to allow the traffic is by allowing incoming connections to a certain port of instances carrying a particular tag. For example, we can tag all the webservers as http and allow incoming connections to port 80 and 8080 for all the instances carrying the http tag. How to do it… We will create a firewall rule with source tag using the gce_net module: - name: Create Firewall Rule with Source Tags gce_net: name: my-network fwname: "allow-http" allowed: tcp:80,8080 state: "present" target_tags: "http" subnet_region: us-west1 service_account_email: "{{ service_account_email }}" project_id: "{{ project_id }}" credentials_file: "{{ credentials_file }}" tags: - recipe6 Using tags for firewalls is not possible all the time. A lot of organizations whitelist internal IP ranges or allow office IPs to reach the instances over the network. A simple way to allow a range of IP addresses is to use a source range: - name: Create Firewall Rule with Source Range gce_net: name: my-network fwname: "allow-internal" state: "present" src_range: ['10.0.0.0/16'] subnet_name: public-subnet allowed: 'tcp' service_account_email: "{{ service_account_email }}" project_id: "{{ project_id }}" credentials_file: "{{ credentials_file }}" tags: - recipe6 How it works... In step 1, we have created a firewall rule called allow-http to allow incoming requests to TCP port 80 and 8080. Since our instance app is tagged with http, it can accept incoming traffic to port 80 and 8080. In step 2, we have allowed all the instances with IP 10.0.0.0/16, which is a private IP address range. Along with connection parameters and the source IP address CIDR, we have defined the network name and subnet name. We have allowed all TCP connections. If we want to restrict it to a port or a range of ports, then we can use tcp:80 or tcp:4000-5000 respectively. Managing load balancer An important reason to use a cloud is to achieve scalability at a relatively low cost. Load balancers play a key role in scalability. We can attach multiple instances behind a load balancer to distribute the traffic between the instances. Google Cloud load balancer also supports health checks which helps to ensure that traffic is sent to healthy instances only. How to do it… Let us create a load balancer and attach an instance to it: - name: create load balancer and attach to instance gce_lb: name: loadbalancer1 region: us-west1 members: ["{{ zone }}/app"] httphealthcheck_name: hc httphealthcheck_port: 80 httphealthcheck_path: "/" service_account_email: "{{ service_account_email }}" project_id: "{{ project_id }}" credentials_file: "{{ credentials_file }}" tags: - recipe7 For creating a load balancer, we need to supply a comma separated list of instances. 
We also need to provide health check parameters including a name, a port and the path on which a GET request can be sent. Managing GCE images in Ansible 2 Images are a collection of a boot loader, operating system, and a root filesystem. There are public images provided by Google and various open source communities. We can use these images to create an instance. GCE also provides us capability to create our own image which we can use to boot instances. It is important to understand the difference between an image and a snapshot. A snapshot is incremental but it is just a disk snapshot. Due to its incremental nature, it is better for creating backups. Images consist of more information such as a boot loader. Images are non-incremental in nature. However, it is possible to import images from a different cloud provider or datacenter to GCE. Another reason we recommend snapshots for backup is that taking a snapshot does not require us to shut down the instance, whereas building an image would require us to shut down the instance. Why build images at all? We will discover that in subsequent sections. How to do it… Let us create an image for now: - name: stop the instance gce: instance_names: app zone: "{{ zone }}" machine_type: f1-micro image: centos-7 state: stopped service_account_email: "{{ service_account_email }}" credentials_file: "{{ credentials_file }}" project_id: "{{ project_id }}" disk_size: 15 metadata: "{{ instance_metadata }}" tags: - recipe8 - name: create image gce_img: name: app-image source: app zone: "{{ zone }}" state: present service_account_email: "{{ service_account_email }}" pem_file: "{{ credentials_file }}" project_id: "{{ project_id }}" tags: - recipe8 - name: start the instance gce: instance_names: app zone: "{{ zone }}" machine_type: f1-micro image: centos-7 state: started service_account_email: "{{ service_account_email }}" credentials_file: "{{ credentials_file }}" project_id: "{{ project_id }}" disk_size: 15 metadata: "{{ instance_metadata }}" tags: - recipe8 How it works... In these tasks, we are stopping the instance first and then creating the image. We just need to supply the instance name while creating the image, along with the standard connection parameters. Finally, we start the instance back. The parameters of these tasks are self-explanatory. Creating instance templates Instance templates define various characteristics of an instance and related attributes. Some of these attributes are: Machine type (f1-micro, n1-standard-1, custom) Image (we created one in the previous tip, app-image) Zone (us-west1-a) Tags (we have a firewall rule for tag http) How to do it… Once a template is created, we can use it to create a managed instance group which can be auto-scale based on various parameters. Instance templates are typically available globally as long as we do not specify a restrictive parameter like a specific subnet or disk: - name: create instance template named app-template gce_instance_template: name: app-template size: f1-micro tags: http,http-server image: app-image state: present subnetwork: public-subnet subnetwork_region: us-west1 service_account_email: "{{ service_account_email }}" credentials_file: "{{ credentials_file }}" project_id: "{{ project_id }}" tags: - recipe9 We have specified the machine type, image, subnets, and tags. This template can be used to create instance groups. Creating managed instance groups Traditionally, we have managed virtual machines individually. 
Instance groups let us manage a group of identical virtual machines as a single entity. These virtual machines are created from an instance template, like the one which we created in the previous tip. Now, if we have to make a change in instance configuration, that change would be applied to all the instances in the group. How to do it… Perhaps, the most important feature of an instance group is auto-scaling. In event of high resource requirements, the instance group can scale up to a predefined number automatically: - name: create an instance group with autoscaling gce_mig: name: app-mig zone: "{{ zone }}" service_account_email: "{{ service_account_email }}" credentials_file: "{{ credentials_file }}" project_id: "{{ project_id }}" state: present size: 2 named_ports: - name: http port: 80 template: app-template autoscaling: enabled: yes name: app-autoscaler policy: min_instances: 2 max_instances: 5 cool_down_period: 90 cpu_utilization: target: 0.6 load_balancing_utilization: target: 0.8 tags: - recipe10 How it works... The preceding task creates an instance group with an initial size of two instances, defined by size. We have named port 80 as HTTP. This can be used by other GCE components to route traffic. We have used the template that we created in the previous recipe. We also enable autoscaling with a policy to allow scaling up to five instances. At any given point, at least two instances would be running. We are scaling on two parameters, cpu_utilization, where 0.6 would trigger scaling after the utilization exceeds 60% and load_balancing_utilization where the scaling will trigger after 80% of the requests per minutes capacity is reached. Typically, when an instance is booted, it might take some time for initialization and startup. Data collected during that period might not make much sense. The parameter, cool_down_period, indicates that we should start collecting data from the instance after 90 seconds and should not trigger scaling based on data before. We learnt a few networking tricks to manage public cloud infrastructure effectively. You can know more about building the public cloud infrastructure by referring to this book Ansible 2 Cloud Automation Cookbook. Why choose Ansible for your automation and configuration management needs? Getting Started with Ansible 2 Top 7 DevOps tools in 2018
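A practical note on running these recipes: every task above reads the same connection variables, {{ service_account_email }}, {{ credentials_file }}, {{ project_id }}, {{ zone }}, and {{ instance_metadata }}. One convenient place to define them is a group_vars file. The values below are illustrative placeholders only, not values from the book; replace them with your own project details.

# group_vars/all.yml (illustrative placeholders; replace with your own values)
service_account_email: ansible@my-project.iam.gserviceaccount.com
credentials_file: /home/me/ansible/my-project-credentials.json
project_id: my-project
zone: us-west1-a
instance_metadata:
  startup-script: |
    #!/bin/bash
    yum -y install httpd

With the variables in place, each recipe can be run selectively through its tag, for example ansible-playbook site.yml --tags recipe10 to create only the managed instance group (assuming your playbook file is named site.yml).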

What leaders at successful agile Enterprises share in common

Packt Editorial Staff
30 Jul 2018
11 min read
Adopting agile ways of working is easier said than done. Firms like Barclays, C.H.Robinson, Ericsson, Microsoft, and Spotify are considered as agile enterprises and are operating entrepreneurially on a large scale. Do you think the leadership of these firms have something in common? Let us take a look at it in this article. The leadership of a firm has a very high bearing on the extent of Enterprise Agility which the company can achieve. Leaders are in a position to influence just about every aspect of a business, including vision, mission, strategy, structure, governance, processes, and more importantly, the culture of the enterprise and the mindset of the employees. This article is an extract from the Enterprise Agility written by Sunil Mundra. In this article we’ll explore the personal traits of leaders that are critical for Enterprise Agility. Personal traits are by definition intrinsic in nature. They enable the personal development of an individual and are also enablers for certain behaviors. We explore the various personal traits in detail. #1 Willingness to expand mental models Essentially, a mental model is an individual's perception of reality and how something works in that reality. A mental model represents one way of approaching a situation and is a form of deeply-held belief. The critical point is that a mental model represents an individual's view, which may not be necessarily true. Leaders must also consciously let go of mental models that are no longer relevant today. This is especially important for those leaders who have spent a significant part of their career leading enterprises based on mechanistic modelling, as these models will create impediments for Agility in "living" businesses. For example, using monetary rewards as a primary motivator may work for physical work, which is repetitive in nature. However, it does not work as a primary motivator for knowledge workers, for whom intrinsic motivators, namely, autonomy, mastery, and purpose, are generally more important than money. Examining the values and assumptions underlying a mental model can help in ascertaining the relevance of that model. #2 Self-awareness Self-awareness helps leaders to become cognizant of their strengths and weaknesses. This will enable the leaders to consciously focus on utilizing their strengths and leveraging the strengths of their peers and teams, in areas where they are not strong. Leaders should validate the view of strengths and weaknesses by seeking feedback regularly from people that they work with. According to a survey of senior executives, by Cornell's School of Industrial and Labor Relations: "Leadership searches give short shrift to 'self-awareness,' which should actually be a top criterion. Interestingly, a high self-awareness score was the strongest predictor of overall success. This is not altogether surprising as executives who are aware of their weaknesses are often better able to hire subordinates who perform well in categories in which the leader lacks acumen. These leaders are also more able to entertain the idea that someone on their team may have an idea that is even better than their own." Self-awareness, a mostly underrated trait, is a huge enabler for enhancing other personal traits. #3 Creativity Since emergence is a primary property of complexity, leaders will often be challenged to deal with unprecedented circumstances emerging from within the enterprise and also in the external environment. 
This implies that what may have worked in the past is less likely to work in the new circumstances, and new approaches will be needed to deal with them. Hence, the ability to think creatively, that is, "out of the box," for coming up with innovative approaches and solutions is critical. The creativity of an individual will have its limitations, and hence leaders must harness the creativity of a broader group of people in the enterprise. A leader can be a huge enabler to this by ideating jointly with a group of people and also by facilitating discussions by challenging status quo and spurring the teams to suggest improvements. Leaders can also encourage innovation through experimentation. With the fast pace of change in the external environment, and consequently the continuous evolution of businesses, leaders will often find themselves out of their comfort zone. Leaders will therefore have to get comfortable with being uncomfortable. It will be easier for leaders to think more creatively once they accept this new reality. #4 Emotional intelligence Emotional intelligence (EI), also known as emotional quotient (EQ), is defined by Wikipedia as "the capability of individuals to recognize their own emotions and those of others, discern between different feelings and label them appropriately, use emotional information to guide thinking and behavior, and manage and/or adjust emotions to adapt to environments or achieve one's goal/s". [iii] EI is made up of four core skills: Self-awareness Social awareness Self-management Relationship management The importance of EI in people-centric enterprises, especially for leaders, cannot be overstated. While people in a company may be bound by purpose and by being a part of a team, people are inherently different from each other in terms of personality types and emotions. This can have a significant bearing on how people in a business deal with and react to circumstances, especially adverse ones. Having high EI enables leaders to understand people "from the inside." This helps leaders to build better rapport with people, thereby enabling them to bring out the best in employees and support them as needed. #5 Courage An innovative approach to dealing with an unprecedented circumstance will, by definition, carry some risk. The hypothesis about the appropriateness of that approach can only be validated by putting it to the test against reality. Leaders will therefore need to be courageous as they take the calculated risky bets, strike hard, and own the outcome of those bets. According to Guo Xiao, the President and CEO of ThoughtWorks, "There are many threats—and opportunities—facing businesses in this age of digital transformation: industry disruption from nimble startups, economic pressure from massive digital platforms, evolving security threats, and emerging technologies. Today's era, in which all things are possible, demands a distinct style of leadership. It calls for bold individuals who set their company's vision and charge ahead in a time of uncertainty, ambiguity, and boundless opportunity. It demands courage." Taking risks does not mean being reckless. Rather, leaders need to take calculated risks, after giving due consideration to intuition, facts, and opinions. Despite best efforts and intentions, some decisions will inevitably go wrong. Leaders must have the courage and humility to admit that the decision went wrong and own the outcomes of that decision, and not let these failures deter them from taking risks in the future. 
#6 Passion for learning Learnability is the ability to upskill, reskill, and deskill. In today's highly dynamic era, it is not what one knows, or what skills one has, that matters as much as the ability to quickly adapt to a different skill set. It is about understanding what is needed to optimize success and what skills and abilities are necessary, from a leadership perspective, to make the enterprise as a whole successful. Leaders need to shed inhibitions about being seen as "novices" while they acquire and practice new skills. The fact that leaders are willing to acquire new skills can be hugely impactful in terms of encouraging others in the enterprise to do the same. This is especially important in terms of bringing in and encouraging the culture of learnability across the business. #7 Awareness of cognitive biases Cognitive biases are flaws in thinking that can lead to suboptimal decisions. Leaders need to become aware of these biases so that they can objectively assess whether their decisions are being influenced by any biases. Cognitive biases lead to shortcuts in decision-making. Essentially, these biases are an attempt by the brain to simplify information processing. Leaders today are challenged with an overload of information and also the need to make decisions quickly. These factors can contribute to decisions and judgements being influenced by cognitive biases. Over decades, psychologists have discovered a huge number of biases. However, the following biases are more important from decision-making perspective: Confirmation bias This is the tendency of selectively seeking and holding onto information to reaffirm what you already believe to be true. For example, a leader believes that a recently launched product is doing well, based on the initial positive response. He has developed a bias that this product is successful. However, although the product is succeeding in attracting new customers, it is also losing existing customers. The confirmation bias is making the leader focus only on data pertaining to new customers, so he is ignoring data related to the loss of existing customers. Bandwagon effect bias Bandwagon effect bias, also known as "herd mentality," encourages doing something because others are doing it. The bias creates a feeling of not wanting to be left behind and hence can lead to irrational or badly-thought-through decisions. Enterprises launching the Agile transformation initiative, without understanding the implications of the long and difficult journey ahead, is an example of this bias. "Guru" bias Guru bias leads to blindly relying on an expert's advice. This can be detrimental, as the expert could be wrong in their assessment and therefore the advice could also be wrong. Also, the expert might give advice which is primarily furthering his or her interests over the interests of the enterprise. Projection bias Projection bias leads the person to believe that other people have understood and are aligned with their thinking, while in reality this may not be true. This bias is more prevalent in enterprises where employees are fearful of admitting that they have not understood what their "bosses" have said, asking questions to clarify or expressing disagreement. Stability bias Stability bias, also known as "status quo" bias, leads to a belief that change will lead to unfavorable outcomes, that is, the risk of loss is greater than the possibility of benefit. It makes a person believe that stability and predictability lead to safety. 
For decades, the mandate for leaders was to strive for stability and hence, many older leaders are susceptible to this bias. Leaders must encourage others in the enterprise to challenge biases, which can uncover "blind spots" arising from them. Once decisions are made, attention should be paid to information coming from feedback. #8 Resilience Resilience is the capacity to quickly recover from difficulties. Given the turbulent business environment, rapidly changing priorities, and the need to take calculated risks, leaders are likely to encounter difficult and challenging situations quite often. Under such circumstances, having resilience will help the leader to "take knocks on the chin" and keep moving forward. Resilience is also about maintaining composure when something fails, analyzing the failure with the team in an objective manner and leaning from that failure. The actions of leaders are watched by the people in the enterprise even more closely in periods of crisis and difficulty, and hence leaders showing resilience go a long way in increasing resilience across the company. #9 Responsiveness Responsiveness, from the perspective of leadership, is the ability to quickly grasp and respond to both challenges and opportunities. Leaders must listen to feedback coming from customers and the marketplace, learn from it, and adapt accordingly. Leaders must be ready to enable the morphing of the enterprise's offerings in order to stay relevant for customers and also to exploit opportunities. This implies that leaders must be willing to adjust the "pivot" of their offerings based on feedback, for example, the journey of Amazon Web Services, which was an internal system but has now grown into a highly successful business. Other prominent examples are Twitter, which was an offshoot of Odeo, a website focused on sound and podcasting, and PayPal's move from transferring money via PalmPilots to becoming a highly robust online payment service. We discovered that leaders are the primary catalysts for any enterprise aspiring to enhance its Agility. Leaders need specific capabilities, which are over and above the standard leadership capabilities, in order to take the business on the path of enhanced Enterprise Agility. These capabilities comprise of personal traits and behaviors that are intrinsic in nature and enable leadership Agility, which is the foundation of Enterprise Agility. Want to know more about how an enterprise can thrive in a dynamic business environment, check out the book Enterprise Agility. Skill Up 2017: What we learned about tech pros and developers 96% of developers believe developing soft skills is important Soft skills every data scientist should teach their child

How does Elasticsearch work? [Tutorial]

Savia Lobo
30 Jul 2018
12 min read
Elasticsearch is much more than just a search engine; it supports complex aggregations, geo filters, and the list goes on. Best of all, you can run all your queries at a speed you have never seen before.  Elasticsearch, like any other open source technology, is very rapidly evolving, but the core fundamentals that power Elasticsearch don't change. In this article, we will briefly discuss how Elasticsearch works internally and explain the basic query APIs.  All the data in Elasticsearch is internally stored in  Apache Lucene as an inverted index. Although data is stored in Apache Lucene, Elasticsearch is what makes it distributed and provides the easy-to-use APIs. This Elasticsearch tutorial is an excerpt taken from the book,'Learning Elasticsearch' written by Abhishek Andhavarapu. Inverted index in Elasticsearch Inverted index will help you understand the limitations and strengths of Elasticsearch compared with the traditional database systems out there. Inverted index at the core is how Elasticsearch is different from other NoSQL stores, such as MongoDB, Cassandra, and so on. We can compare an inverted index to an old library catalog card system. When you need some information/book in a library, you will use the card catalog, usually at the entrance of the library, to find the book. An inverted index is similar to the card catalog. Imagine that you were to build a system like Google to search for the web pages mentioning your search keywords. We have three web pages with Yoda quotes from Star Wars, and you are searching for all the documents with the word fear. Document1: Fear leads to anger Document2: Anger leads to hate Document3: Hate leads to suffering In a library, without a card catalog to find the book you need, you would have to go to every shelf row by row, look at each book title, and see whether it's the book you need. Computer-based information retrieval systems do the same. Without the inverted index, the application has to go through each web page and check whether the word exists in the web page. An inverted index is similar to the following table. It is like a map with the term as a key and list of the documents the term appears in as value. Term Document Fear 1 Anger 1,2 Hate 2,3 Suffering 3 Leads 1,2,3 Once we construct an index, as shown in this table, to find all the documents with the term fear is now just a lookup. Just like when a library gets a new book, the book is added to the card catalog, we keep building an inverted index as we encounter a new web page. The preceding inverted index takes care of simple use cases, such as searching for the single term. But in reality, we query for much more complicated things, and we don't use the exact words. Now let's say we encountered a document containing the following: Yosemite national park may be closed for the weekend due to forecast of substantial rainfall We want to visit Yosemite National Park, and we are looking for the weather forecast in the park. But when we query for it in the human language, we might query something like weather in yosemite or rain in yosemite. With the current approach, we will not be able to answer this query as there are no common terms between the query and the document, as shown: Document Query rainfall rain To be able to answer queries like this and to improve the search quality, we employ various techniques such as stemming, synonyms discussed in the following sections. Stemming Stemming is the process of reducing a derived word into its root word. 
For example, rain, raining, rained, rainfall has the common root word "rain". When a document is indexed, the root word is stored in the index instead of the actual word. Without stemming, we end up storing rain, raining, rained in the index, and search relevance would be very low. The query terms also go through the stemming process, and the root words are looked up in the index. Stemming increases the likelihood of the user finding what he is looking for. When we query for rain in yosemite, even though the document originally had rainfall, the inverted index will contain term rain. We can configure stemming in Elasticsearch using Analyzers. Synonyms Similar to rain and raining, weekend and sunday mean the same thing. The document might not contain Sunday, but if the information retrieval system can also search for synonyms, it will significantly improve the search quality. Human language deals with a lot of things, such as tense, gender, numbers. Stemming and synonyms will not only improve the search quality but also reduce the index size by removing the differences between similar words. More examples: Pen, Pen[s] -> Pen Eat, Eating  -> Eat Phrase search As a user, we almost always search for phrases rather than single words. The inverted index in the previous section would work great for individual terms but not for phrases. Continuing the previous example, if we want to query all the documents with a phrase anger leads to in the inverted index, the previous index would not be sufficient. The inverted index for terms anger and leads is shown below: Term Document Anger 1,2 Leads 1,2,3 From the preceding table, the words anger and leads exist both in document1 and document2. To support phrase search along with the document, we also need to record the position of the word in the document. The inverted index with word position is shown here: Term Document Fear 1:1 Anger 1:3, 2:1 Hate 2:3, 3:1 Suffering 3:3 Leads 1:2, 2:2, 3:2 Now, since we have the information regarding the position of the word, we can search if a document has the terms in the same order as the query. Term Document anger 1:3, 2:1 leads 1:2, 2:2 Since document2 has anger as the first word and leads as the second word, the same order as the query, document2 would be a better match than document1. With the inverted index, any query on the documents is just a simple lookup. This is just an introduction to inverted index; in real life, it's much more complicated, but the fundamentals remain the same. When the documents are indexed into Elasticsearch, documents are processed into the inverted index. Scalability and availability in Elasticsearch Let's say you want to index a billion documents; having just a single machine might be very challenging. Partitioning data across multiple machines allows Elasticsearch to scale beyond what a single machine do and support high throughput operations. Your data is split into small parts called shards. When you create an index, you need to tell Elasticsearch the number of shards you want for the index and Elasticsearch handles the rest for you. As you have more data, you can scale horizontally by adding more machines. We will go in to more details in the sections below. There are type of shards in Elasticsearch - primary and replica. The data you index is written to both primary and replica shards. Replica is the exact copy of the primary. In case of the node containing the primary shard goes down, the replica takes over. This process is completely transparent and managed by Elasticsearch. 
We will discuss this in detail in the Failure Handling section below. Since primary and replicas are the exact copies, a search query can be answered by either the primary or the replica shard. This significantly increases the number of simultaneous requests Elasticsearch can handle at any point in time. As the index is distributed across multiple shards, a query against an index is executed in parallel across all the shards. The results from each shard are then gathered and sent back to the client. Executing the query in parallel greatly improves the search performance. Now, we will discuss the relation between node, index and shard. Relation between node, index, and shard Shard is often the most confusing topic when I talk about Elasticsearch at conferences or to someone who has never worked on Elasticsearch. In this section, I want to focus on the relation between node, index, and shard. We will use a cluster with three nodes and create the same index with multiple shard configuration, and we will talk through the differences. Three shards with zero replicas We will start with an index called esintroduction with three shards and zero replicas. The distribution of the shards in a three node cluster is as follows: In the above screenshot, shards are represented by the green squares. We will talk about replicas towards the end of this discussion. Since we have three nodes(servers) and three shards, the shards are evenly distributed across all three nodes. Each node will contain one shard. As you index your documents into the esintroduction index, data is spread across the three shards. Six shards with zero replicas Now, let's recreate the same esintroduction index with six shards and zero replicas. Since we have three nodes (servers) and six shards, each node will now contain two shards. The esintroduction index is split between six shards across three nodes. The distribution of shards for an index with six shards is as follows: The esintroduction index is spread across three nodes, meaning these three nodes will handle the index/query requests for the index. If these three nodes are not able to keep up with the indexing/search load, we can scale the esintroduction index by adding more nodes. Since the index has six shards, you could add three more nodes, and Elasticsearch automatically rearranges the shards across all six nodes. Now, index/query requests for the esintroduction index will be handled by six nodes instead of three nodes. If this is not clear, do not worry, we will discuss more about this as we progress in the book. Six shards with one replica Let's now recreate the same esintroduction index with six shards and one replica, meaning the index will have 6 primary shards and 6 replica shards, a total of 12 shards. Since we have three nodes (servers) and twelve shards, each node will now contain four shards. The esintroduction index is split between six shards across three nodes. The green squares represent shards in the following figure. The solid border represents primary shards, and replicas are the dotted squares: As we discussed before, the index is distributed into multiple shards across multiple nodes. In a distributed environment, a node/server can go down due to various reasons, such as disk failure, network issue, and so on. To ensure availability, each shard, by default, is replicated to a node other than where the primary shard exists. 
If the node containing the primary shard goes down, the shard replica is promoted to primary, and the data is not lost, and you can continue to operate on the index. In the preceding figure, the esintroduction index has six shards split across the three nodes. The primary of shard 2 belongs to node elasticsearch 1, and the replica of the shard 2 belongs to node elasticsearch 3. In the case of the elasticsearch 1 node going down, the replica in elasticsearch 3 is promoted to primary. This switch is completely transparent and handled by Elasticsearch. Distributed search One of the reasons queries executed on Elasticsearch are so fast is because they are distributed. Multiple shards act as one index. A search query on an index is executed in parallel across all the shards. Let's take an example: in the following figure, we have a cluster with two nodes: Node1, Node2 and an index named chapter1 with two shards: S0, S1 with one replica: Assuming the chapter1 index has 100 documents, S1 would have 50 documents, and S0 would have 50 documents. And you want to query for all the documents that contain the word Elasticsearch. The query is executed on S0 and S1 in parallel. The results are gathered back from both the shards and sent back to the client. Imagine, you have to query across million of documents, using Elasticsearch the search can be distributed. For the application I'm currently working on, a query on more than 100 million documents comes back within 50 milliseconds; which is simply not possible if the search is not distributed. Failure handling in Elasticsearch Elasticsearch handles failures automatically. This section describes how the failures are handled internally. Let's say we have an index with two shards and one replica. In the following diagram, the shards represented in solid line are primary shards, and the shards in the dotted line are replicas: As shown in preceding diagram, we initially have a cluster with two nodes. Since the index has two shards and one replica, shards are distributed across the two nodes. To ensure availability, primary and replica shards never exist in the same node. If the node containing both primary and replica shards goes down, the data cannot be recovered. In the preceding diagram, you can see that the primary shard S0 belongs to Node 1 and the replica shard S0 to the Node 2. Next, just like we discussed in the Relation between Node, Index and Shard section, we will add two new nodes to the existing cluster, as shown here: The cluster now contains four nodes, and the shards are automatically allocated to the new nodes. Each node in the cluster will now contain either a primary or replica shard. Now, let's say Node2, which contains the primary shard S1, goes down as shown here: Since the node that holds the primary shard went down, the replica of S1, which lives in Node3, is promoted to primary. To ensure the replication factor of 1, a copy of the shard S1 is made on Node1. This process is known as rebalancing of the cluster. Depending on the application, the number of shards can be configured while creating the index. The process of rebalancing the shards to other nodes is entirely transparent to the user and handled automatically by Elasticsearch. We discussed inverted indexes, relation between nodes, index and shard, distributed search and how failures are handled automatically in Elasticsearch. Check out this book, 'Learning Elasticsearch' to know about handling document relationships, working with geospatial data, and much more. 
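To make the inverted index with positions described earlier more concrete, here is a toy Python sketch that indexes the three Yoda documents and answers the "anger leads to" phrase query, treating "to" as a stop word so that the positions match the table above. This is purely an illustration of the idea; it is not how Lucene actually builds or stores its index.

from collections import defaultdict

docs = {
    1: "Fear leads to anger",
    2: "Anger leads to hate",
    3: "Hate leads to suffering",
}
STOP_WORDS = {"to"}

# term -> {doc_id: [positions of the term in that document]}
index = defaultdict(lambda: defaultdict(list))
for doc_id, text in docs.items():
    terms = [t for t in text.lower().split() if t not in STOP_WORDS]
    for position, term in enumerate(terms, start=1):
        index[term][doc_id].append(position)

def phrase_match(*terms):
    """Return doc ids where the terms appear consecutively and in order."""
    hits = []
    for doc_id, positions in index.get(terms[0], {}).items():
        for start in positions:
            if all(start + offset in index.get(term, {}).get(doc_id, [])
                   for offset, term in enumerate(terms)):
                hits.append(doc_id)
                break
    return hits

print(dict(index["anger"]))            # {1: [3], 2: [1]}
print(phrase_match("anger", "leads"))  # [2]: only document2 has the phrase in order

Looking up a term is a dictionary access, and a phrase query only has to compare positions, which is essentially why the inverted index makes both kinds of search so cheap.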
How to install Elasticsearch in Ubuntu and Windows
Working with Kibana in Elasticsearch 5.x
CRUD (Create, Read, Update and Delete) Operations with Elasticsearch

Firefox Nightly browser: Debugging your app is now fun with Mozilla’s new ‘time travel’ feature

Natasha Mathur
30 Jul 2018
3 min read
Earlier this month, Mozilla announced a fancy new feature called “time travel debugging” for its Firefox Nightly web browser at JSConf EU 2018. With time travel debugging, you can easily track down bugs in your code or app, as it lets you pause and rewind to the exact moment when your app broke down. Time travel debugging is particularly useful for local web development, where it allows you to pause and step forward or backward, pause and rewind to a previous state, rewind to the time a console message was logged, and rewind to the time when an element had a certain style. It is also great for cases where you might want to save user recordings or view a test recording when a test fails. With time travel debugging, you can record a tab in your browser and later replay it using WebReplay, an experimental project which allows you to record, rewind, and replay processes for the web. According to Jason Laster, a Senior Software Engineer at Mozilla, “with time travel, we have a full recording of time, you can jump to any point in the path and see it immediately, you don’t have to refresh or re-click or pause or look at logs”. There is also a video from JSConf of Jason Laster talking about the potential of time travel debugging. He also mentioned how time travel is “not a new thing” and that he was inspired by Dan Abramov, the creator of Redux, when he showcased Redux at JSConf EU, saying how he wanted “time travel” to “reduce his action over time”. With Redux, you get a slider that shows you all the actions over time, and as you move it, you get to see the UI update as well. In fact, Mozilla rebuilt the debugger to use React and Redux for its time travel feature. The debugger comes equipped with Redux dev tools, which show a list of all the actions for the debugger. So, the dev tools show you the state of the app, the sources, and the pause data. Finally, Laster added that “this is just the beginning” and that “they hope to pull this off well in the future”. To use the new time travel debugging feature, you must install the Firefox Nightly browser first. For more details on the new feature, check out the official documentation.
Mozilla is building a bridge between Rust and JavaScript
Firefox has made a password manager for your iPhone
Firefox 61 builds on Firefox Quantum, adds Tab Warming, WebExtensions, and TLS 1.3


Setting Gradle properties to build a project [Tutorial]

Savia Lobo
30 Jul 2018
10 min read
A Gradle script is a program. We use a Groovy DSL to express our build logic. Gradle has several useful built-in methods to handle files and directories, as we often deal with files and directories in our build logic. In today's post, we will take a look at how to set Gradle properties in a project build. We will also see how to use the Gradle Wrapper task to distribute a configurable Gradle with our build scripts. This article is an excerpt taken from 'Gradle Effective Implementations Guide - Second Edition', written by Hubert Klein Ikkink.

Setting Gradle project properties

In a Gradle build file, we can access several properties that are defined by Gradle, but we can also create our own properties. We can set the value of our custom properties directly in the build script, and we can also do this by passing values via the command line. The default properties that we can access in a Gradle build are listed here as name (type): default value:

project (Project): The project instance.
name (String): The name of the project directory. The name is read-only.
path (String): The absolute path of the project.
description (String): The description of the project.
projectDir (File): The directory containing the build script. The value is read-only.
buildDir (File): The directory with the build name in the directory containing the build script.
rootDir (File): The directory of the project at the root of a project structure.
group (Object): Not specified.
version (Object): Not specified.
ant (AntBuilder): An AntBuilder instance.

The following build file has a task showing the value of the properties:

version = '1.0'
group = 'Sample'
description = 'Sample build file to show project properties'

task defaultProperties << {
    println "Project: $project"
    println "Name: $name"
    println "Path: $path"
    println "Project directory: $projectDir"
    println "Build directory: $buildDir"
    println "Version: $version"
    println "Group: $project.group"
    println "Description: $project.description"
    println "AntBuilder: $ant"
}

When we run the build, we get the following output:

$ gradle defaultProperties
:defaultProperties
Project: root project 'props'
Name: defaultProperties
Path: :defaultProperties
Project directory: /Users/mrhaki/gradle-book/Code_Files/props
Build directory: /Users/mrhaki/gradle-book/Code_Files/props/build
Version: 1.0
Group: Sample
Description: Sample build file to show project properties
AntBuilder: org.gradle.api.internal.project.DefaultAntBuilder@3c95cbbd
BUILD SUCCESSFUL
Total time: 1.458 secs

Defining custom properties in script

To add our own properties, we have to define them in an ext{} script block in a build file. Prefixing the property name with ext. is another way to set the value. To read the value of the property, we don't have to use the ext. prefix; we can simply refer to the name of the property. The property is automatically added to the internal project property as well. In the following script, we add a customProperty property with the String value custom. In the showProperties task, we show the value of the property:

// Define new property.
ext.customProperty = 'custom'

// Or we can use ext{} script block.
ext {
    anotherCustomProperty = 'custom'
}

task showProperties {
    ext {
        customProperty = 'override'
    }
    doLast {
        // We can refer to the property
        // in different ways:
        println customProperty
        println project.ext.customProperty
        println project.customProperty
    }
}

After running the script, we get the following output:

$ gradle showProperties
:showProperties
override
custom
custom
BUILD SUCCESSFUL
Total time: 1.469 secs

Defining properties using an external file

We can also set the properties for our project in an external file. The file needs to be named gradle.properties, and it should be a plain text file with the name of the property and its value on separate lines. We can place the file in the project directory or in the Gradle user home directory. The default Gradle user home directory is $USER_HOME/.gradle. A property defined in the properties file in the Gradle user home directory overrides the property values defined in a properties file in the project directory. We will now create a gradle.properties file in our project directory, with the following contents:

version = 4.0
customProperty = Property value from gradle.properties

We use our build file to show the property values:

task showProperties {
    doLast {
        println "Version: $version"
        println "Custom property: $customProperty"
    }
}

If we run the build file, we don't have to pass any command-line options; Gradle will use gradle.properties to get the values of the properties:

$ gradle showProperties
:showProperties
Version: 4.0
Custom property: Property value from gradle.properties
BUILD SUCCESSFUL
Total time: 1.676 secs

Passing properties via the command line

Instead of defining the property directly in the build script or an external file, we can use the -P command-line option to add an extra property to a build. We can also use the -P command-line option to set a value for an existing property. If we define a property using the -P command-line option, we can override a property with the same name defined in the external gradle.properties file. The following build script has a showProperties task that shows the value of an existing property and a new property:

task showProperties {
    doLast {
        println "Version: $version"
        println "Custom property: $customProperty"
    }
}

Let's run our script and pass the values for the existing version property and the non-existent customProperty:

$ gradle -Pversion=1.1 -PcustomProperty=custom showProperties
:showProperties
Version: 1.1
Custom property: custom
BUILD SUCCESSFUL
Total time: 1.412 secs

Defining properties via system properties

We can also use Java system properties to define properties for our Gradle build. We use the -D command-line option just like in a normal Java application. The name of the system property must start with org.gradle.project, followed by the name of the property we want to set, and then by the value. We can use the same build script that we created before:

task showProperties {
    doLast {
        println "Version: $version"
        println "Custom property: $customProperty"
    }
}

However, this time we use different command-line options to get a result:

$ gradle -Dorg.gradle.project.version=2.0 -Dorg.gradle.project.customProperty=custom showProperties
:showProperties
Version: 2.0
Custom property: custom
BUILD SUCCESSFUL
Total time: 1.218 secs

Adding properties via environment variables

Using the command-line options provides much flexibility; however, sometimes we cannot use the command-line options because of environment restrictions or because we don't want to retype the complete command-line options each time we invoke the Gradle build.
Gradle can also use environment variables set in the operating system to pass properties to a Gradle build. The environment variable name starts with ORG_GRADLE_PROJECT_ and is followed by the property name. We use our build file to show the properties:

task showProperties {
    doLast {
        println "Version: $version"
        println "Custom property: $customProperty"
    }
}

Firstly, we set the ORG_GRADLE_PROJECT_version and ORG_GRADLE_PROJECT_customProperty environment variables, then we run our showProperties task, as follows:

$ ORG_GRADLE_PROJECT_version=3.1 ORG_GRADLE_PROJECT_customProperty="Set by environment variable" gradle showProp
:showProperties
Version: 3.1
Custom property: Set by environment variable
BUILD SUCCESSFUL
Total time: 1.373 secs

Using the Gradle Wrapper

Normally, if we want to run a Gradle build, we must have Gradle installed on our computer. Also, if we distribute our project to others and they want to build the project, they must have Gradle installed on their computers. The Gradle Wrapper can be used to allow others to build our project even if they don't have Gradle installed on their computers. The wrapper is a batch script on Microsoft Windows operating systems or a shell script on other operating systems that will download Gradle and run the build using the downloaded Gradle. By using the wrapper, we can make sure that the correct Gradle version for the project is used. We can define the Gradle version, and if we run the build via the wrapper script file, the version of Gradle that we defined is used.

Creating wrapper scripts

To create the Gradle Wrapper batch and shell scripts, we can invoke the built-in wrapper task. This task is already available if we have installed Gradle on our computer. Let's invoke the wrapper task from the command line:

$ gradle wrapper
:wrapper
BUILD SUCCESSFUL
Total time: 0.61 secs

After the execution of the task, we have two script files—gradlew.bat and gradlew—in the root of our project directory. These scripts contain all the logic needed to run Gradle. If Gradle is not downloaded yet, the Gradle distribution will be downloaded and installed locally. In the gradle/wrapper directory, relative to our project directory, we find the gradle-wrapper.jar and gradle-wrapper.properties files. The gradle-wrapper.jar file contains a couple of class files necessary to download and invoke Gradle. The gradle-wrapper.properties file contains settings, such as the URL, to download Gradle. The gradle-wrapper.properties file also contains the Gradle version number. If a new Gradle version is released, we only have to change the version in the gradle-wrapper.properties file and the Gradle Wrapper will download the new version so that we can use it to build our project. All the generated files are now part of our project. If we use a version control system, then we must add these files to version control. Other people that check out our project can use the gradlew scripts to execute tasks from the project. The specified Gradle version is downloaded and used to run the build file. If we want to use another Gradle version, we can invoke the wrapper task with the --gradle-version option. We must specify the Gradle version that the Wrapper files are generated for. By default, the Gradle version that is used to invoke the wrapper task is the Gradle version used by the wrapper files. To specify a different download location for the Gradle installation file, we must use the --gradle-distribution-url option of the wrapper task.
For example, we could have a customized Gradle installation on our local intranet, and with this option, we can generate the Wrapper files that will use the Gradle distribution on our intranet. In the following example, we generate the wrapper files for Gradle 2.12 explicitly:

$ gradle wrapper --gradle-version=2.12
:wrapper
BUILD SUCCESSFUL
Total time: 0.61 secs

Customizing the Gradle Wrapper

If we want to customize properties of the built-in wrapper task, we must add a new task to our Gradle build file with the org.gradle.api.tasks.wrapper.Wrapper type. We will not change the default wrapper task, but create a new task with the new settings that we want to apply. We need to use our new task to generate the Gradle Wrapper shell scripts and support files. We can change the names of the script files that are generated with the scriptFile property of the Wrapper task. To change the name and location of the generated JAR and properties files, we can change the jarFile property:

task createWrapper(type: Wrapper) {
    // Set Gradle version for wrapper files.
    gradleVersion = '2.12'
    // Rename shell scripts to
    // startGradle instead of the default gradlew.
    scriptFile = 'startGradle'
    // Change location and name of JAR file
    // with wrapper bootstrap code and
    // accompanying properties files.
    jarFile = "${projectDir}/gradle-bin/gradle-bootstrap.jar"
}

If we run the createWrapper task, we get a Windows batch file and a shell script, and the Wrapper bootstrap JAR file with the accompanying properties file is stored in the gradle-bin directory:

$ gradle createWrapper
:createWrapper
BUILD SUCCESSFUL
Total time: 0.605 secs

$ tree .
.
├── gradle-bin
│   ├── gradle-bootstrap.jar
│   └── gradle-bootstrap.properties
├── startGradle
├── startGradle.bat
└── build.gradle
2 directories, 5 files

To change the URL from where the Gradle version must be downloaded, we can alter the distributionUrl property. For example, we could publish a fixed Gradle version on our company intranet and use the distributionUrl property to reference a download URL on our intranet. This way we can make sure that all developers in the company use the same Gradle version:

task createWrapper(type: Wrapper) {
    // Set URL with custom Gradle distribution.
    distributionUrl = 'http://intranet/gradle/dist/gradle-custom-2.12.zip'
}

We discussed the Gradle properties and how to use the Gradle Wrapper to allow users to build our projects even if they don't have Gradle installed. We discussed how to customize the Wrapper to download a specific version of Gradle and use it to run our build. If you've enjoyed reading this post, do check out our book 'Gradle Effective Implementations Guide - Second Edition' to know more about how to use Gradle for Java projects.
Top 7 Python programming books you need to read
4 operator overloading techniques in Kotlin you need to know
5 Things you need to know about Java 10


DeepCube: A new deep reinforcement learning approach solves the Rubik’s cube with no human help

Savia Lobo
29 Jul 2018
4 min read
Humans have been excellent players in most games, be they indoor or outdoor. Over recent years, however, we have increasingly come across machines that play and win the popular board games Go and chess against humans using machine learning algorithms. If you think machines are only good at solving the black and whites, you are wrong. The most recent example of a machine solving a complex game, the Rubik's cube, is DeepCube. The Rubik's cube is a challenging puzzle that has captivated everyone since childhood, and solving it is a brag-worthy accomplishment for most adults. A group of UC Irvine researchers has now developed a new algorithm (used by DeepCube) known as Autodidactic Iteration, which can solve a Rubik's cube with no human assistance.

The Erno Rubik's cube conundrum

The Rubik's cube, a popular three-dimensional puzzle, was developed by Erno Rubik in 1974. Rubik worked for a month to figure out the first algorithm to solve the cube. The researchers at UC Irvine state that "Since then, the Rubik's Cube has gained worldwide popularity and many human-oriented algorithms for solving it have been discovered. These algorithms are simple to memorize and teach humans how to solve the cube in a structured, step-by-step manner." After the cube became popular among mathematicians and computer scientists, questions around how to solve the cube with the least possible turns became mainstream. In 2014, it was proved that the least number of steps needed to solve the cube puzzle was 26. More recently, computer scientists have tried to find ways for machines to solve the Rubik's cube. As a first step, they tried and tested the same approach that had been successful in the games Go and chess. However, this approach did not work well for the Rubik's cube.

The approach: Rubik vs Chess and Go

Algorithms used in Go and chess are fed the rules of the game and then play against themselves. The deep learning machine here is rewarded based on its performance at every step it takes. The reward process is important, as it helps the machine to distinguish between a good and a bad move. Following this, the machine starts playing well, that is, it learns how to play well. On the other hand, the rewards in the case of the Rubik's cube are hard to determine. This is because the cube can be turned in random ways and it is hard to judge whether a new configuration is any closer to a solution. The random turns can be unlimited, and hence earning an end-state reward is very rare. Both chess and Go have a large search space, but each move can be evaluated and rewarded accordingly. This isn't the case for the Rubik's cube! The UC Irvine researchers have found a way for machines to create their own set of rewards in the Autodidactic Iteration method for DeepCube.

Autodidactic Iteration: solving the Rubik's cube without human knowledge

DeepCube's Autodidactic Iteration (ADI) is a form of deep learning known as deep reinforcement learning (DRL). It combines classic reinforcement learning, deep learning, and Monte Carlo Tree Search (MCTS). When DeepCube gets an unsolved cube, it decides whether a specific move is an improvement on the existing configuration. To do this, it must be able to evaluate the move. Autodidactic Iteration starts with the finished cube and works backwards to find a configuration that is similar to the proposed move. Although this process is imperfect, deep learning helps the system figure out which moves are generally better than others.
Researchers trained a network using ADI for 2,000,000 iterations. They further reported, "The network witnessed approximately 8 billion cubes, including repeats, and it trained for a period of 44 hours. Our training machine was a 32-core Intel Xeon E5-2620 server with three NVIDIA Titan XP GPUs." After training, the network uses a standard search tree to hunt for suggested moves for each configuration. The researchers said in their paper, "Our algorithm is able to solve 100% of randomly scrambled cubes while achieving a median solve length of 30 moves — less than or equal to solvers that employ human domain knowledge." They also wrote, "DeepCube is able to teach itself how to reason in order to solve a complex environment with only one reward state using pure reinforcement learning." Furthermore, this approach has the potential to provide approximate solutions to a broad class of combinatorial optimization problems. To explore deep reinforcement learning, check out our latest releases, Hands-On Reinforcement Learning with Python and Deep Reinforcement Learning Hands-On.
How greedy algorithms work
Creating a reference generator for a job portal using Breadth First Search (BFS) algorithm
Anatomy of an automated machine learning algorithm (AutoML)

Wireshark for analyzing issues and malicious emails in POP, IMAP, and SMTP [Tutorial]

Vijin Boricha
29 Jul 2018
10 min read
One of the contributing factors in the evolution of digital marketing and business is email. Email allows users to exchange real-time messages and other digital information, such as files and images, over the internet in an efficient manner. Each user is required to have a human-readable email address in the form of username@domainname.com. There are various email providers available on the internet, and any user can register to get a free email address. There are different email application-layer protocols available for sending and receiving mail, and the combination of these protocols enables end-to-end email exchange between users in the same or different mail domains. In this article, we will look at the normal operation of email protocols and how to use Wireshark for basic analysis and troubleshooting. This article is an excerpt from Network Analysis using Wireshark 2 Cookbook - Second Edition, written by Nagendra Kumar Nainar, Yogesh Ramdoss, and Yoram Orzach. The three most commonly used application layer protocols are POP3, IMAP, and SMTP:

POP3: Post Office Protocol 3 (POP3) is an application layer protocol used by email systems to retrieve mail from email servers. The email client uses POP3 commands such as LOGIN, LIST, RETR, DELE, and QUIT to access and manipulate (retrieve or delete) the email on the server. POP3 uses TCP port 110 and wipes the mail from the server once it is downloaded to the local client.

IMAP: Internet Message Access Protocol (IMAP) is another application layer protocol used to retrieve mail from the email server. Unlike POP3, IMAP allows the user to read and access the mail concurrently from more than one client device. With current trends, it is very common to see users with more than one device for accessing email (laptop, smartphone, and so on), and the use of IMAP allows the user to access mail at any time, from any device. The current version of IMAP is 4 and it uses TCP port 143.

SMTP: Simple Mail Transfer Protocol (SMTP) is an application layer protocol that is used to send email from the client to the mail server. When the sender and receiver are in different email domains, SMTP helps to exchange the mail between servers in different domains. It uses TCP port 25:

As shown in the preceding diagram, SMTP is used by the email client to send the mail to the mail server, and POP3 or IMAP is used to retrieve the email from the server. The email server uses SMTP to exchange the mail between different domains. In order to maintain the privacy of end users, most email servers use encryption mechanisms at the transport layer. The transport layer port numbers differ from the traditional email protocols when they are used over a secured transport layer (TLS). For example, POP3 over TLS uses TCP port 995, IMAP4 over TLS uses TCP port 993, and SMTP over TLS uses port 465.

Normal operation of mail protocols

As we saw above, the common mail protocols for mail client to server and server to server communication are POP3, SMTP, and IMAP4. Another common method for accessing email is web access to mail, where you have common mail servers such as Gmail, Yahoo!, and Hotmail. Examples include Outlook Web Access (OWA) and RPC over HTTPS for the Outlook web client from Microsoft. In this recipe, we will talk about the most common client-server and server-server protocols, POP3 and SMTP, and the normal operation of each protocol.

Getting ready

Port mirroring to capture the packets can be done either on the email client side or on the server side.

How to do it...
POP3 is usually used for client to server communications, while SMTP is usually used for server to server communications.

POP3 communications

POP3 is usually used for mail client to mail server communications. The normal operation of POP3 is as follows:
1. Open the email client and enter the username and password for login access.
2. Use pop as a display filter to list all the POP packets. It should be noted that this display filter will only list packets that use TCP port 110. If TLS is used, the filter will not list the POP packets; we may need to use tcp.port == 995 to list the POP3 packets over TLS.
3. Check that the authentication has passed correctly. In the following screenshot, you can see a session opened with a username that starts with doronn@ (all IDs were deleted) and a password that starts with u6F. To see the TCP stream shown in the following screenshot, right-click on one of the packets in the stream and choose Follow TCP Stream from the drop-down menu: Any error messages in the authentication stage will prevent communications from being established. You can see an example of this in the following screenshot, where user authentication failed. In this case, we see that when the client gets a Logon failure, it closes the TCP connection:
4. Use relevant display filters to list specific packets. For example, pop.request.command == "USER" will list the POP request packets with the username, and pop.request.command == "PASS" will list the POP packets carrying the password. A sample snapshot is as follows:
5. During the mail transfer, be aware that mail clients can easily fill a narrow-band communications line. You can check this by simply configuring the I/O graphs with a filter on pop.
6. Always check for common TCP indications: retransmissions, zero-window, window-full, and others. They can indicate a busy communication line, a slow server, and other problems coming from the communication lines or the end nodes and servers. These problems will mostly cause slow connectivity.
When the POP3 protocol uses TLS for encryption, the payload details are not visible. We explain how the SSL captures can be decrypted in the There's more... section.

IMAP communications

IMAP is similar to POP3 in that it is used by the client to retrieve mail from the server. The normal behavior of IMAP communication is as follows:
1. Open the email client and enter the username and password for the relevant account.
2. Compose a new message and send it from any email account.
3. Retrieve the email on the client that is using IMAP. Different clients may have different ways of retrieving the email; use the relevant button to trigger it.
4. Check that you received the email on your local client.

SMTP communications

SMTP is commonly used for the following purposes:
Server to server communications, in which SMTP is the mail protocol that runs between the servers.
In some clients, POP3 or IMAP4 are configured for incoming messages (messages from the server to the client), while SMTP is configured for outgoing messages (messages from the client to the server).
The normal behavior of SMTP communication is as follows:
1. The local email client resolves the IP address of the configured SMTP server address. This triggers a TCP connection to port number 25 if SSL/TLS is not enabled. If SSL/TLS is enabled, a TCP connection is established over port 465.
2. It exchanges SMTP messages to authenticate with the server. The client sends AUTH LOGIN to trigger the login authentication. Upon successful login, the client will be able to send mails.
3. It sends SMTP messages such as "MAIL FROM:<>" and "RCPT TO:<>", carrying the sender and receiver email addresses. Upon successful queuing, we get an OK response from the SMTP server.
The following is a sample SMTP message flow between client and server:

How it works...

In this section, let's look into the normal operation of different email protocols with the use of Wireshark. Mail clients will mostly use POP3 for communication with the server. In some cases, they will use SMTP as well. IMAP4 is used when server manipulation is required, for example, when you need to see messages that exist on a remote server without downloading them to the client. Server to server communication is usually implemented by SMTP. The difference between IMAP and POP is that in IMAP, the mail is always stored on the server; if you delete it, it will be unavailable from any other machine. In POP, deleting a downloaded email may or may not delete that email on the server. In general, SMTP status codes are divided into three categories, which are structured in a way that helps you understand what exactly went wrong. The methods and details of SMTP status codes are discussed in the following section.

POP3

POP3 is an application layer protocol used by mail clients to retrieve email messages from the server. A typical POP3 session will look like the following screenshot: It has the following steps:
1. The client opens a TCP connection to the server.
2. The server sends an OK message to the client (OK Messaging Multiplexor).
3. The user sends the username and password.
4. The protocol operations begin. NOOP (no operation) is a message sent to keep the connection open, and STAT (status) is sent from the client to the server to query the message status. The server answers with the number of messages and their total size (in packet 1042, OK 0 0 means no messages with a total size of zero).
5. When there are no mail messages on the server, the client sends a QUIT message (1048), the server confirms it (packet 1136), and the TCP connection is closed (packets 1137, 1138, and 1227).
In an encrypted connection, the process will look nearly the same (see the following screenshot). After the establishment of a connection (1), there are several POP messages (2), TLS connection establishment (3), and then the encrypted application data:

IMAP

The normal operation of IMAP is as follows:
1. The email client resolves the IP address of the IMAP server: As shown in the preceding screenshot, the client establishes a TCP connection to port 143 when SSL/TLS is disabled. When SSL is enabled, the TCP session will be established over port 993.
2. Once the session is established, the client sends an IMAP capability message requesting that the server send the capabilities it supports.
3. This is followed by authentication for access to the server. When the authentication is successful, the server replies with response code 3 stating the login was a success:
4. The client now sends the IMAP FETCH command to fetch any mails from the server.
5. When the client is closed, it sends a logout message and clears the TCP session.

SMTP

The normal operation of SMTP is as follows:
1. The email client resolves the IP address of the SMTP server:
2. The client opens a TCP connection to the SMTP server on port 25 when SSL/TLS is not enabled. If SSL is enabled, the client will open the session on port 465:
3. Upon successful TCP session establishment, the client will send an AUTH LOGIN message to prompt for the account username/password.
4. The username and password will be sent to the SMTP server for account verification. SMTP will send a response code of 235 if authentication is successful:
5. The client now sends the sender's email address to the SMTP server. The SMTP server responds with a response code of 250 if the sender's address is valid.
6. Upon receiving an OK response from the server, the client will send the receiver's address. The SMTP server will respond with a response code of 250 if the receiver's address is valid.
7. The client will now push the actual email message. SMTP will respond with a response code of 250 and the response parameter OK: queued. The successfully queued message ensures that the mail is successfully sent and queued for delivery to the receiver's address.
We have learned how to analyze issues and malicious emails in POP, IMAP, and SMTP. Get to know more about DNS protocol analysis and FTP, HTTP/1, and HTTP/2 from our book Network Analysis using Wireshark 2 Cookbook - Second Edition.
What's new in Wireshark 2.6?
Analyzing enterprise application behavior with Wireshark 2
Capturing Wireshark Packets
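To generate this kind of traffic yourself and follow the exchanges in Wireshark, a small script is often easier than clicking through a mail client. The following Python sketch uses the standard library's smtplib and poplib modules; the server name and credentials are placeholder assumptions, and many real servers will refuse plain-text authentication on port 25 (requiring STARTTLS instead), so treat it purely as a lab illustration of the command sequences described above.

import poplib
import smtplib

# Assumed lab mail server and throwaway credentials; replace with your own.
SERVER = "mail.example.com"
USER, PASSWORD = "user@example.com", "secret"

# SMTP: TCP connection on port 25, EHLO, AUTH, then MAIL FROM / RCPT TO / DATA.
# Capture it in Wireshark with the display filter: tcp.port == 25
smtp = smtplib.SMTP(SERVER, 25, timeout=10)
smtp.set_debuglevel(1)       # prints each command and the reply codes (235, 250, ...)
smtp.ehlo()
smtp.login(USER, PASSWORD)   # AUTH; expect reply code 235 on success
smtp.sendmail(USER, ["friend@example.org"],
              "Subject: Wireshark test\r\n\r\nHello from smtplib")
smtp.quit()

# POP3: TCP connection on port 110, USER/PASS, STAT, then QUIT.
# Capture it with the display filter: pop (or tcp.port == 110)
pop = poplib.POP3(SERVER, 110, timeout=10)
pop.user(USER)               # USER command
pop.pass_(PASSWORD)          # PASS command
print(pop.stat())            # STAT returns (message count, mailbox size)
pop.quit()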


Creating effective dashboards using Splunk [Tutorial]

Sunith Shetty
28 Jul 2018
10 min read
Splunk is easy to use for developing a powerful analytical dashboard with multiple panels. A dashboard with too many panels, however, will require scrolling down the page and can cause the viewer to miss crucial information. An effective dashboard should generally meet the following conditions: Single screen view: The dashboard fits in a single window or page, with no scrolling Multiple data points: Charts and visualizations should display a number of data points Crucial information highlighted: The dashboard points out the most important information, using appropriate titles, labels, legends, markers, and conditional formatting as required Created with the user in mind: Data is presented in a way that is meaningful to the user Loads quickly: The dashboard returns results in 10 seconds or less Avoid redundancy: The display does not repeat information in multiple places In this tutorial, we learn to create different types of dashboards using Splunk. We will also discuss how to gather business requirements for your dashboards. Types of Splunk dashboards There are three kinds of dashboards typically created with Splunk: Dynamic form-based dashboards Real-time dashboards Dashboards as scheduled reports Dynamic form-based dashboards allow Splunk users to modify the dashboard data without leaving the page. This is accomplished by adding data-driven input fields (such as time, radio button, textbox, checkbox, dropdown, and so on) to the dashboard. Updating these inputs changes the data based on the selections. Dynamic form-based dashboards have existed in traditional business intelligence tools for decades now, so users who frequently use them will be familiar with changing prompt values on the fly to update the dashboard data. Real-time dashboards are often kept on a big panel screen for constant viewing, simply because they are so useful. You see these dashboards in data centers, network operations centers (NOCs), or security operations centers (SOCs) with constant format and data changing in real time. The dashboard will also have indicators and alerts for operators to easily identify and act on a problem. Dashboards like this typically show the current state of security, network, or business systems, using indicators for web performance and traffic, revenue flow, login failures, and other important measures. Dashboards as scheduled reports may not be exposed for viewing; however, the dashboard view will generally be saved as a PDF file and sent to email recipients at scheduled times. This format is ideal when you need to send information updates to multiple recipients at regular intervals, and don't want to force them to log in to Splunk to capture the information themselves. We will create the first two types of dashboards, and you will learn how to use the Splunk dashboard editor to develop advanced visualizations along the way. Gathering business requirements As a Splunk administrator, one of the most important responsibilities is to be responsible for the data. As a custodian of data, a Splunk admin has significant influence over how to interpret and present information to users. It is common for the administrator to create the first few dashboards. A more mature implementation, however, requires collaboration to create an output that is beneficial to a variety of user requirements and may be completed by a Splunk development resource with limited administrative rights. 
Make it a habit to consistently request users' input regarding the Splunk-delivered dashboards and reports and what makes them useful. Sit down with day-to-day users and lay out, on a drawing board, for example, the business process flows or system diagrams to understand how the underlying processes and systems you're trying to measure really work. Look for key phrases like these, which signify what data is most important to the business:
If this is broken, we lose tons of revenue...
This is a constant point of failure...
We don't know what's going on here...
If only I can see the trend, it will make my work easier...
This is what my boss wants to see...
Splunk dashboard users may come from many areas of the business. You want to talk to all the different users, no matter where they are on the organizational chart. When you make friends with the architects, developers, business analysts, and management, you will end up building dashboards that benefit the organization, not just individuals. With an initial dashboard version, ask for users' thoughts as you observe them using it in their work, and ask what can be improved upon, added, or changed. We hope that at this point, you realize the importance of dashboards and are ready to get started creating some, as we will do in the following sections.

Dynamic form-based dashboard

In this section, we will create a dynamic form-based dashboard in our Destinations app to allow users to change input values and rerun the dashboard, presenting updated data. Here is a screenshot of the final output of this dynamic form-based dashboard: Let's begin by creating the dashboard itself and then generate the panels:
1. Go to the search bar in the Destinations app.
2. Run this search command: SPL> index=main status_type="*" http_uri="*" server_ip="*" | top status_type, status_description, http_uri, server_ip
Be careful when copying commands with quotation marks. It is best to type in the entire search command to avoid problems.
3. Go to Save As | Dashboard Panel.
4. Fill in the information based on the following screenshot:
5. Click on Save.
6. Close the pop-up window that appears (indicating that the dashboard panel was created) by clicking on the X in the top-right corner of the window.

Creating a Status Distribution panel

We will return to the dashboard after all the panel searches have been generated. Let's go ahead and create the second panel:
1. In the search window, type in the following search command: SPL> index=main status_type="*" http_uri=* server_ip=* | top status_type
2. You will save this as a dashboard panel in the newly created dashboard. In the Dashboard option, click on the Existing button and look for the new dashboard, as seen here. Don't forget to fill in the Panel Title as Status Distribution:
3. Click on Save when you are done and again close the pop-up window, signaling the addition of the panel to the dashboard.

Creating the Status Types Over Time panel

Now, we'll move on to create the third panel:
1. Type in the following search command and be sure to run it so that it is the active search: SPL> index=main status_type="*" http_uri=* server_ip=* | timechart count by http_status_code
2. You will save this as a Dynamic Form-based Dashboard panel as well. Type in Status Types Over Time in the Panel Title field:
3. Click on Save and close the pop-up window, signaling the addition of the panel to the dashboard.

Creating the Hits vs Response Time panel

Now, on to the final panel.
Run the following search command: SPL> index=main status_type="*" http_uri=* server_ip=* | timechart count, avg(http_response_time) as response_time Save this dashboard panel as Hits vs Response Time: Arrange the dashboard We'll move on to look at the dashboard we've created and make a few changes: Click on the View Dashboard button. If you missed out on the View Dashboard button, you can find your dashboard by clicking on Dashboards in the main navigation bar. Let's edit the panel arrangement. Click on the Edit button. Move the Status Distribution panel to the upper-right row. Move the Hits vs Response Time panel to the lower-right row. Click on Save to save your layout changes. Look at the following screenshot. The dashboard framework you've created should now look much like this. The dashboard probably looks a little plainer than you expected it to. But don't worry; we will improve the dashboard visuals one panel at a time: Panel options in dashboards In this section, we will learn how to alter the look of our panels and create visualizations. Go to the edit dashboard mode by clicking on the Edit button. Each dashboard panel will have three setting options to work with: edit search, select visualization, and visualization format options. They are represented by three drop-down icons: The Edit Search window allows you to modify the search string, change the time modifier for the search, add auto-refresh and progress bar options, as well as convert the panel into a report: The Select Visualization dropdown allows you to change the type of visualization to use for the panel, as shown in the following screenshot: Finally, the Visualization Options dropdown will give you the ability to fine-tune your visualization. These options will change depending on the visualization you select. For a normal statistics table, this is how it will look: Pie chart – Status Distribution Go ahead and change the Status Distribution visualization panel to a pie chart. You do this by selecting the Select Visualization icon and selecting the Pie icon. Once done, the panel will look like the following screenshot: Stacked area chart – Status Types Over Time We will change the view of the Status Types Over Time panel to an area chart. However, by default, area charts will not be stacked. We will update this through adjusting the visualization options: Change the Status Types Over Time panel to an Area Chart using the same Select Visualization button as the prior pie chart exercise. Make the area chart stacked using the Format Visualization icon. In the Stack Mode section, click on Stacked. For Null Values, select Zero. Use the chart that follows for guidance: Click on Apply. The panel will change right away. Remove the _time label as it is already implied. You can do this in the X-Axis section by setting the Title to None. Close the Format Visualization window by clicking on the X in the upper-right corner: Here is the new stacked area chart panel: Column with overlay combination chart – Hits vs Response Time When representing two or more kinds of data with different ranges, using a combination chart—in this case combining a column and a line—can tell a bigger story than one metric and scale alone. 
We'll use the Hits vs Response Time panel to explore the combination charting options: In the Hits vs Response Time panel, change the chart panel visualization to Column In the Visualization Options window, click on Chart Overlay In the Overlay selection box, select response_time Turn on View as Axis Click on X-Axis from the list of options on the left of the window and change the Title to None Click on Legend from the list of options on the left Change the Legend Position to Bottom Click on the X in the upper-right-hand corner to close the Visualization Options window The new panel will now look similar to the following screenshot. From this and the prior screenshot, you can see there was clearly an outage in the overnight hours: Click on Done to save all the changes you made and exit the Edit mode The dashboard has now come to life. This is how it should look now: To summarize we saw how to create different types of dashboards. To know more about core Splunk functionalities to transform machine data into powerful insights, check out this book Splunk 7 Essentials, Third Edition. Splunk leverages AI in its monitoring tools Splunk Industrial Asset Intelligence (Splunk IAI) targets Industrial IoT marketplace Create a data model in Splunk to enable interactive reports and dashboards
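The panel searches used above can also be run outside the dashboard, which is a handy way to confirm that a panel's underlying SPL returns what you expect before spending time on layout and formatting. The sketch below is one way to do that from Python against Splunk's REST API; the localhost:8089 address and the admin/changeme credentials are assumptions for a local test instance, and the only part taken from this section is the Hits vs Response Time search string.

import requests

SPLUNK = "https://localhost:8089"   # assumed local Splunk management port
AUTH = ("admin", "changeme")        # assumed test credentials

# The Hits vs Response Time search from this section, run as an export job
# that streams its results back as JSON.
search = ('search index=main status_type="*" http_uri=* server_ip=* '
          '| timechart count, avg(http_response_time) as response_time')

response = requests.post(
    f"{SPLUNK}/services/search/jobs/export",
    auth=AUTH,
    verify=False,                   # Splunk ships with a self-signed certificate
    params={"output_mode": "json"},
    data={"search": search},
    stream=True,
)
for line in response.iter_lines():
    if line:
        print(line.decode())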