How-To Tutorials
AI for Unity game developers: How to emulate real-world senses in your NPC agent behavior

Kunal Chaudhari
06 Jun 2018
19 min read
An AI character system needs to be aware of its environment: where the obstacles are, where the enemy is, whether the enemy is visible in the player's sight, and so on. The quality of our non-player character's (NPC's) AI depends entirely on the information it can get from the environment. Nothing breaks the level of immersion in a game like an NPC getting stuck behind a wall. Based on the information the NPC can collect, the AI system can decide which logic to execute in response to that data. If the sensory systems do not provide enough data, or the AI system is unable to properly act on that data, the agent can begin to glitch, or behave in a way contrary to what the developer, or more importantly the player, would expect. Some games have become infamous for their comically bad AI glitches, and it's worth a quick internet search to find some videos of AI glitches for a good laugh.

In this article, we'll learn to implement AI behavior using the concept of a sensory system similar to what living entities have. We will learn the basics of sensory systems, along with some of the different sensory systems that exist.

You are reading an extract from Unity 2017 Game AI Programming - Third Edition, written by Ray Barrera, Aung Sithu Kyaw, and Thet Naing Swe.

Basic sensory systems

Our agent's sensory systems should believably emulate real-world senses such as vision, sound, and so on, to build a model of its environment, much like we do as humans. Have you ever tried to navigate a room in the dark after shutting off the lights? It gets more and more difficult as you move from your initial position, because your perspective shifts and you have to rely more and more on your fuzzy memory of the room's layout. While our senses take in a constant stream of data as we navigate our environment, our agent's AI is a lot more forgiving, giving us the freedom to examine the environment at predetermined intervals. This allows us to build a more efficient system in which we can focus only on the parts of the environment that are relevant to the agent.

The basic sensory system has two components: Aspect and Sense. Our AI characters will have senses, such as perception, smell, and touch, and these senses will look out for specific aspects such as enemies and bandits. For example, you could have a patrol guard AI with a perception sense that's looking for other game objects with an enemy aspect, or it could be a zombie entity with a smell sense looking for other entities with an aspect defined as a brain.

For our demo, this is basically what we are going to implement: a base interface called Sense that will be implemented by other custom senses. In this article, we'll implement perspective and touch senses. Perspective is what animals use to see the world around them. If our AI character sees an enemy, we want to be notified so that we can take some action. Likewise with touch: when an enemy gets too close, we want to be able to sense that, almost as if our AI character can hear that the enemy is nearby. Then we'll write a minimal Aspect class that our senses will be looking for.

Cone of sight

A raycast is a feature in Unity that allows you to determine which objects are intersected by a line cast from a point in a given direction. While this is a fairly efficient way to handle visual detection in a simple way, it doesn't accurately model the way vision works for most entities.
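To make the raycast idea concrete before we improve on it, here is a minimal sketch of a raycast visibility check. This is not the book's code; the player reference and the viewDistance value are assumptions for illustration:

```
using UnityEngine;

// Minimal line-of-sight sketch: cast a ray from this agent toward the player
// and report whether anything else blocks the view.
public class LineOfSight : MonoBehaviour
{
    public Transform player;          // assign the player tank in the inspector
    public float viewDistance = 100f; // hypothetical maximum view range

    void Update()
    {
        Vector3 direction = player.position - transform.position;
        RaycastHit hit;
        if (Physics.Raycast(transform.position, direction, out hit, viewDistance))
        {
            // The ray stops at the first collider it hits, so if it hit the
            // player, nothing is standing between the agent and the player.
            bool visible = hit.transform == player;
            Debug.Log(visible ? "Player visible" : "View blocked by " + hit.collider.name);
        }
    }
}
```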
An alternative to using the line of sight is using a cone-shaped field of vision. As the following figure illustrates, the field of vision is literally modeled using a cone shape. This can be in 2D or 3D, as appropriate for your type of game.

The preceding figure illustrates the concept of a cone of sight. Beginning with the source, that is, the agent's eyes, the cone grows, but becomes less accurate with distance, as represented by the fading color of the cone. The actual implementation of the cone can vary from a basic overlap test to a more complex, realistic model that mimics eyesight. In a simple implementation, it is only necessary to test whether an object overlaps with the cone of sight, ignoring distance and periphery. A complex implementation mimics eyesight more closely: as the cone widens away from the source, the field of vision grows, but the chance of seeing things toward the edges of the cone diminishes compared to those near the center.

Hearing, feeling, and smelling using spheres

One very simple yet effective way of modeling sounds, touch, and smell is via the use of spheres. For sounds, for example, we can imagine the center as being the source, with the loudness dissipating the farther the listener is from the center. Inversely, the listener can be modeled instead of, or in addition to, the source of the sound: the listener's hearing is represented by a sphere, and the sounds closest to the listener are more likely to be "heard." We can modify the size and position of the sphere relative to our agent to accommodate feeling and smelling. The following figure represents our sphere and how our agent fits into the setup.

As with sight, the probability of an agent registering the sensory event can be modified based on the distance from the sensor, or implemented as a simple overlap event, where the sensory event is always detected as long as the source overlaps the sphere.

Expanding AI through omniscience

In a nutshell, omniscience is really just a way to make your AI cheat. Your agent doesn't necessarily know everything; it simply means that it can know anything. In some ways, this can seem like the antithesis of realism, but often the simplest solution is the best solution. Allowing our agent access to seemingly hidden information about its surroundings or other entities in the game world can be a powerful tool to provide an extra layer of complexity.

In games, we tend to model abstract concepts using concrete values. For example, we may represent a player's health with a numeric value ranging from 0 to 100. Giving our agent access to this type of information allows it to make realistic decisions, even though having access to that information is not realistic. You can also think of omniscience as your agent being able to use the force, or sense events in your game world without having to physically experience them. While omniscience is not necessarily a specific pattern or technique, it's another tool in your toolbox as a game developer to cheat a bit and make your game more interesting by, in essence, bending the rules of AI and giving your agent data that it may not otherwise have had access to through physical means.

Getting creative with sensing

While cones, spheres, and lines are among the most basic ways an agent can see, hear, and perceive their environment, they are by no means the only ways to implement these senses. If your game calls for other types of sensing, feel free to combine these patterns.
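As a concrete starting point, here is what the sphere-based hearing test described above might look like in code. This is a minimal sketch, not the book's implementation; the hearingRadius value is a hypothetical, and it uses the Aspect component we define later in this article:

```
using UnityEngine;

// Sphere-based "hearing" sketch: anything with an Aspect component inside
// the radius counts as a detected sensory event.
public class HearingSphere : MonoBehaviour
{
    public float hearingRadius = 10f; // hypothetical hearing range

    void Update()
    {
        // OverlapSphere returns every collider within the radius.
        foreach (Collider other in Physics.OverlapSphere(transform.position, hearingRadius))
        {
            Aspect aspect = other.GetComponent<Aspect>();
            if (aspect != null)
            {
                Debug.Log("Heard something with aspect: " + aspect.aspectType);
            }
        }
    }
}
```

This is the simple overlap variant; for the probabilistic variant described above, you could weight each detection by its distance from the sphere's center.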
Want to use a cylinder or a sphere to represent a field of vision? Go for it. Want to use boxes to represent the sense of smell? Sniff away! Using the tools at your disposal, come up with creative ways to model sensing in terms relative to your player. Combine different approaches to create unique gameplay mechanics for your games by mixing and matching these concepts. For example, a magic-sensitive but blind creature could completely ignore a character right in front of it until they cast or receive the effect of a magic spell. Maybe certain NPCs can track the player using smell, and walking through a collider marked water can clear the scent from the player so that the NPC can no longer track him. As you progress through the book, you'll be given all the tools to pull these and many other mechanics off—sensing, decision-making, pathfinding, and so on. As we cover some of these techniques, start thinking about creative twists for your game.

Setting up the scene

In order to get started with implementing the sensing system, you can jump right into the example provided for this article, or set up the scene yourself by following these steps:

1. Create a few barriers to block the line of sight from our AI character to the tank. These will be short but wide cubes grouped under an empty game object called Obstacles.
2. Add a plane to be used as a floor.
3. Add a directional light so that we can see what is going on in our scene.

As you can see in the example, there is a tank 3D model, which we use for our player, and we represent our AI agent using a simple cube. We will also have a Target object to show us where the tank will move to in our scene. For simplicity, our example provides a point light as a child of the Target so that we can easily see our target destination in the game view. Our scene hierarchy will look similar to the following screenshot after you've set everything up correctly.

Now position the tank, the AI character, and the walls randomly in the scene. Increase the size of the plane to something that looks good. Fortunately, in this demo, our objects float, so nothing will fall off the plane. Also, be sure to adjust the camera so that we have a clear view of the scene. With the essential setup out of the way, we can begin tackling the code that drives the various systems.

Setting up the player tank and aspect

Our Target object is a simple sphere game object with the mesh renderer removed so that we end up with only the Sphere Collider. Look at the following code in the Target.cs file:

```
using UnityEngine;

public class Target : MonoBehaviour
{
    public Transform targetMarker;

    void Start() { }

    void Update()
    {
        int button = 0;
        // Get the point of the hit position when the mouse is clicked
        if (Input.GetMouseButtonDown(button))
        {
            Ray ray = Camera.main.ScreenPointToRay(Input.mousePosition);
            RaycastHit hitInfo;
            if (Physics.Raycast(ray.origin, ray.direction, out hitInfo))
            {
                Vector3 targetPosition = hitInfo.point;
                targetMarker.position = targetPosition;
            }
        }
    }
}
```

You'll notice we left an empty Start method in the code. While there is a cost to having empty Start, Update, and other MonoBehaviour events that don't do anything, we can sometimes choose to leave the Start method in during development, so that the component shows an enable/disable toggle in the inspector.

Attach this script to our Target object, which is what we assigned in the inspector to the targetMarker variable.
The script detects the mouse click event and then, using a raycast, finds the point clicked on the plane in 3D space. After that, it moves the Target object to that position in world space. A raycast is a feature of the Unity Physics API that shoots a virtual ray from a given origin in a given direction and returns data on any colliders hit along the way.

Implementing the player tank

Our player tank is the simple tank model with a kinematic Rigidbody component attached. The Rigidbody component is needed in order to generate trigger events whenever we do collision detection with any AI characters. The first thing we need to do is assign the tag Player to our tank.

The isKinematic flag on Unity's Rigidbody component makes the body ignore external forces, so that you can control the Rigidbody entirely from code or from an animation while still having access to the Rigidbody API.

The tank is controlled by the PlayerTank script, which we will create in a moment. This script retrieves the target position on the map and updates the tank's destination point and direction accordingly. The code in the PlayerTank.cs file is as follows:

```
using UnityEngine;

public class PlayerTank : MonoBehaviour
{
    public Transform targetTransform;
    public float targetDistanceTolerance = 3.0f;

    private float movementSpeed;
    private float rotationSpeed;

    // Use this for initialization
    void Start()
    {
        movementSpeed = 10.0f;
        rotationSpeed = 2.0f;
    }

    // Update is called once per frame
    void Update()
    {
        if (Vector3.Distance(transform.position, targetTransform.position) < targetDistanceTolerance)
        {
            return;
        }

        Vector3 targetPosition = targetTransform.position;
        targetPosition.y = transform.position.y;
        Vector3 direction = targetPosition - transform.position;
        Quaternion tarRot = Quaternion.LookRotation(direction);
        transform.rotation = Quaternion.Slerp(transform.rotation, tarRot, rotationSpeed * Time.deltaTime);
        transform.Translate(new Vector3(0, 0, movementSpeed * Time.deltaTime));
    }
}
```

The preceding screenshot shows a snapshot of our script in the inspector once applied to our tank. This script queries the position of the Target object on the map and updates the tank's destination point and direction accordingly. After we assign this script to our tank, be sure to assign our Target object to the targetTransform variable.

Implementing the Aspect class

Next, let's take a look at the Aspect.cs class. Aspect is a very simple class with just one public enum of type AspectTypes, called aspectType. That's the only variable we need in this component. Whenever our AI character senses something, we'll check the aspectType to see whether it's the aspect that the AI has been looking for. The code in the Aspect.cs file looks like this:

```
using UnityEngine;

public class Aspect : MonoBehaviour
{
    public enum AspectTypes
    {
        PLAYER,
        ENEMY,
    }

    public AspectTypes aspectType;
}
```

Attach this aspect script to our player tank and set the aspectType to PLAYER, as shown in the following screenshot.

Creating an AI character

Our NPC will be roaming around the scene in a random direction. It'll have the following two senses:

- The perspective sense will check whether the tank aspect is within a set visible range and distance
- The touch sense will detect if the enemy aspect has collided with its box collider, which we'll be adding to the tank in a later step

Because our player tank will have the PLAYER aspect type, the NPC will be looking for any aspectType not equal to its own.
The code in the Wander.cs file is as follows:

```
using UnityEngine;

public class Wander : MonoBehaviour
{
    private Vector3 targetPosition;
    private float movementSpeed = 5.0f;
    private float rotationSpeed = 2.0f;
    private float targetPositionTolerance = 3.0f;
    private float minX;
    private float maxX;
    private float minZ;
    private float maxZ;

    void Start()
    {
        minX = -45.0f;
        maxX = 45.0f;
        minZ = -45.0f;
        maxZ = 45.0f;

        // Get Wander Position
        GetNextPosition();
    }

    void Update()
    {
        if (Vector3.Distance(targetPosition, transform.position) <= targetPositionTolerance)
        {
            GetNextPosition();
        }

        Quaternion targetRotation = Quaternion.LookRotation(targetPosition - transform.position);
        transform.rotation = Quaternion.Slerp(transform.rotation, targetRotation, rotationSpeed * Time.deltaTime);
        transform.Translate(new Vector3(0, 0, movementSpeed * Time.deltaTime));
    }

    void GetNextPosition()
    {
        targetPosition = new Vector3(Random.Range(minX, maxX), 0.5f, Random.Range(minZ, maxZ));
    }
}
```

The Wander script is rather simplistic: it generates a new random position in a specified range whenever the AI character reaches its current destination point. The Update method then rotates our enemy and moves it toward this new destination. Attach this script to our AI character so that it can move around in the scene.

Using the Sense class

The Sense class is the interface of our sensory system that the other custom senses can implement. It defines two virtual methods, Initialize and UpdateSense, which will be implemented in custom senses, and are executed from the Start and Update methods, respectively.

Virtual methods are methods that can be overridden using the override modifier in derived classes. Unlike abstract methods, virtual methods do not require you to override them.

The code in the Sense.cs file looks like this:

```
using UnityEngine;

public class Sense : MonoBehaviour
{
    public bool enableDebug = true;
    public Aspect.AspectTypes aspectName = Aspect.AspectTypes.ENEMY;
    public float detectionRate = 1.0f;

    protected float elapsedTime = 0.0f;

    protected virtual void Initialize() { }
    protected virtual void UpdateSense() { }

    // Use this for initialization
    void Start()
    {
        elapsedTime = 0.0f;
        Initialize();
    }

    // Update is called once per frame
    void Update()
    {
        UpdateSense();
    }
}
```

The basic properties include the detection rate at which to execute the sensing operation, as well as the name of the aspect it should look for. This script will not be attached to any of our objects, since we'll be deriving from it for our actual senses.

Giving a little perspective

The perspective sense will detect whether a specific aspect is within its field of view and visible distance. If it sees anything, it will take the specified action, which in this case is to print a message to the console.
The code in the Perspective.cs file looks like this:

```
using UnityEngine;

public class Perspective : Sense
{
    public int fieldOfView = 45;
    public int viewDistance = 100;

    private Transform playerTransform;
    private Vector3 rayDirection;

    protected override void Initialize()
    {
        playerTransform = GameObject.FindGameObjectWithTag("Player").transform;
    }

    protected override void UpdateSense()
    {
        elapsedTime += Time.deltaTime;
        if (elapsedTime >= detectionRate)
        {
            // Reset the timer so the check runs once per detection interval
            elapsedTime = 0.0f;
            DetectAspect();
        }
    }

    // Detect perspective field of view for the AI character
    void DetectAspect()
    {
        RaycastHit hit;
        rayDirection = playerTransform.position - transform.position;
        if ((Vector3.Angle(rayDirection, transform.forward)) < fieldOfView)
        {
            // Detect if player is within the field of view
            if (Physics.Raycast(transform.position, rayDirection, out hit, viewDistance))
            {
                Aspect aspect = hit.collider.GetComponent<Aspect>();
                if (aspect != null)
                {
                    // Check the aspect
                    if (aspect.aspectType != aspectName)
                    {
                        print("Enemy Detected");
                    }
                }
            }
        }
    }
```

We need to implement the Initialize and UpdateSense methods, which are called from the Start and Update methods of the parent Sense class, respectively. In the DetectAspect method, we first check the angle between the player and the AI's current facing direction. If the player is within the field-of-view range, we shoot a ray in the direction of the player tank. The ray length is the value of the view distance property. The Raycast method returns when it first hits another object; this way, even if the player is in visible range, the AI character will not see the player if it is hidden behind a wall. We then check for an Aspect component; the sense reports a detection only if the object that was hit has an Aspect component and its aspectType is different from its own.

The OnDrawGizmos method draws lines based on the perspective field-of-view angle and viewing distance so that we can see the AI character's line of sight in the editor window during play testing. Attach this script to our AI character and be sure that the aspect type is set to ENEMY. The method looks like this:

```
    void OnDrawGizmos()
    {
        if (playerTransform == null)
        {
            return;
        }

        Debug.DrawLine(transform.position, playerTransform.position, Color.red);

        Vector3 frontRayPoint = transform.position + (transform.forward * viewDistance);

        // Approximate perspective visualization
        Vector3 leftRayPoint = frontRayPoint;
        leftRayPoint.x += fieldOfView * 0.5f;

        Vector3 rightRayPoint = frontRayPoint;
        rightRayPoint.x -= fieldOfView * 0.5f;

        Debug.DrawLine(transform.position, frontRayPoint, Color.green);
        Debug.DrawLine(transform.position, leftRayPoint, Color.green);
        Debug.DrawLine(transform.position, rightRayPoint, Color.green);
    }
}
```

Touching is believing

The next sense we'll be implementing is Touch.cs, which triggers when the player tank entity is within a certain area near the AI entity. Our AI character has a Box Collider component with its IsTrigger flag turned on. We need to implement the OnTriggerEnter event, which is called whenever another collider enters the collision area of this game object's collider. Since our tank entity also has a collider and a Rigidbody component, collision events will be raised as soon as the colliders of the AI character and the player tank collide.

Unity provides two other trigger events besides OnTriggerEnter: OnTriggerExit and OnTriggerStay. Use these to detect when a collider leaves a trigger, and to fire off every frame that a collider is inside the trigger, respectively.
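As an aside, here is a minimal sketch of how those two callbacks look in a script of their own. This is not part of the book's demo; the logging is purely hypothetical:

```
using UnityEngine;

// Sketch of the two other trigger callbacks Unity provides.
public class TriggerEventsDemo : MonoBehaviour
{
    void OnTriggerStay(Collider other)
    {
        // Fires every physics step while 'other' remains inside the trigger.
        Debug.Log(other.name + " is still inside the trigger");
    }

    void OnTriggerExit(Collider other)
    {
        // Fires once when 'other' leaves the trigger volume.
        Debug.Log(other.name + " left the trigger");
    }
}
```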
The code in the Touch.cs file is as follows:

```
using UnityEngine;

public class Touch : Sense
{
    void OnTriggerEnter(Collider other)
    {
        Aspect aspect = other.GetComponent<Aspect>();
        if (aspect != null)
        {
            // Check the aspect
            if (aspect.aspectType != aspectName)
            {
                print("Enemy Touch Detected");
            }
        }
    }
}
```

Our sample NPC and tank already have BoxCollider components on them. The NPC has its sensor collider set to IsTrigger = true. If you're setting up the scene on your own, make sure you add the BoxCollider component yourself, and that it covers a wide enough area to trigger easily for testing purposes. Our trigger can be seen in the following screenshot.

The previous screenshot shows the box collider on our enemy AI that we'll use to trigger the touch sense event. In the following screenshot, we can see how our AI character is set up. For demo purposes, we just print out that the enemy aspect has been detected by the touch sense, but in your own games, you can implement any events and logic that you want.

Testing the results

Hit play in the Unity editor and move the player tank near the wandering AI NPC by clicking on the ground to direct the tank to the clicked location. You should see the "Enemy Touch Detected" message in the console log window whenever our AI character gets close to our player tank. The previous screenshot shows an AI agent with touch and perspective senses looking for another aspect. Move the player tank in front of the NPC, and you'll get the "Enemy Detected" message. If you switch to the editor view while running the game, you should see the debug lines being rendered; this is the OnDrawGizmos method implemented in the Perspective sense class at work.

To summarize, we introduced the concept of using sensors and implemented two distinct senses—perspective and touch—for our AI character. If you enjoyed this excerpt, check out the book Unity 2017 Game AI Programming - Third Edition, to explore the brand-new features in Unity 2017.

How to use arrays, lists, and dictionaries in Unity for 3D game development
How to create non-player Characters (NPC) with Unity 2018


Troubleshooting techniques in vRealize Operations components

Vijin Boricha
06 Jun 2018
11 min read
vRealize Operations Manager helps in managing the health, efficiency, and compliance of virtualized environments. In this tutorial, we cover the latest troubleshooting techniques for vRealize Operations. Let's see what we can do to monitor and troubleshoot the key vRealize Operations components and services.

This is an excerpt from Mastering vRealize Operations Manager - Second Edition, written by Spas Kaloferov, Scott Norris, and Christopher Slater.

Troubleshooting services

Let's first take a look at some of the most important vRealize Operations services.

The Apache2 service

vRealize Operations internally uses the Apache2 web server to host the Admin UI, Product UI, and Suite API. Here are some useful service log locations:

Log file name | Location | Purpose
access_log & error_log | storage/log/var/log/apache2 | Stores log information for the Apache service

The Watchdog service

Watchdog is a vRealize Operations service that maintains the necessary daemons/services and attempts to restart them as necessary should there be any failure. The vcops-watchdog is a Python script that runs every 5 minutes by means of the vcops-watchdog-daemon, with the purpose of monitoring the various vRealize Operations services, including CaSA.

The Watchdog service performs the following checks:
- PID file of the service
- Service status

Here are some useful service log locations:

Log file name | Location | Purpose
vcops-watchdog.log | /usr/lib/vmware-vcops/user/log/vcops-watchdog/ | Stores log information for the Watchdog service

The Collector service

The Collector service sends a heartbeat to the controller every 30 seconds. By default, the Collector will wait 30 minutes for adapters to synchronize. The collector properties, including enabling or disabling Self Protection, can be configured in the collector.properties file located in /usr/lib/vmware-vcops/user/conf/collector.

Here are some useful service log locations:

Log file name | Location | Purpose
collector.log | /storage/log/vcops/log/ | Stores log information for the Collector service

The Controller service

The Controller service is part of the analytics engine. The controller does the decision making on where new objects should reside, and it manages the storage and retrieval of the inventory of objects within the system. The Controller service determines the following:
- The collector status, which it monitors every minute
- How long a deleted resource remains available in the inventory
- How long a non-existing resource is stored in the database

Here are some useful service file locations:

File name | Location | Purpose
controller.properties | /usr/lib/vmware-vcops/user/conf/controller/ | Stores properties information for the Controller service

Databases

As we learned, vRealize Operations contains quite a few databases, all of which are of great importance for the function of the product. Let's take a deeper look into those databases.

Cassandra DB

Currently, Cassandra DB stores the following information:
- User preferences and configuration
- Alert definitions
- Customizations
- Dashboards, policies, and views
- Reports and licensing
- Shard maps
- Activities

Cassandra stores all the information which we see in the content folder; basically, any settings which are applied globally. You can log in to the Cassandra database from any analytics node; the information is the same across nodes. There are two ways to connect to the Cassandra database. The first is cqlshrc, a command-line tool used to query the data within Cassandra in a SQL-like fashion (inbuilt).
To connect to the DB, run the following from the command line (SSH):

```
$VMWARE_PYTHON_BIN $ALIVE_BASE/cassandra/apache-cassandra-2.1.8/bin/cqlsh --ssl --cqlshrc $ALIVE_BASE/user/conf/cassandra/cqlshrc
```

Once you are connected to the DB, navigate to the globalpersistence keyspace using the following command:

```
vcops_user@cqlsh> use globalpersistence ;
```

The nodetool command-line tool

Once you are logged on to the Cassandra DB, you can run the following commands to see information:

Command syntax | Purpose
describe tables | Lists all the relations (tables) in the current database instance
describe <table_name> | Lists the content of that particular table
exit | Exits the Cassandra command line
select | Selects column data from a table
delete | Deletes column data from a table

Some of the important tables in Cassandra are:

Table name | Purpose
activity_2_tbl | Stores all the activities
tbl_2a8b303a3ed03a4ebae2700cbfae90bf | Stores the shard mapping information of an object (the table name may differ in each environment)
supermetric | Stores the defined super metrics
policy | Stores all the defined policies
auth | Stores all the user details in the cluster
global_settings | Stores all the configured global settings
namespace_to_classtype | Indicates what type of data is stored in which table under Cassandra
symptomproblemdefinition | Stores all the defined symptoms
certificates | Stores all the adapter and data source certificates

The Cassandra database has the following configuration files:

File type | Location
Cassandra.yaml | /usr/lib/vmware-vcops/user/conf/cassandra/
vcops_cassandra.properties | /usr/lib/vmware-vcops/user/conf/cassandra/
Cassandra conf scripts | /usr/lib/vmware_vcopssuite/utilities/vmware/vcops/cassandra/

The Cassandra.yaml file stores certain information, such as the default location to save data (/storage/db/vcops/cassandra/data). The file contains information about all the nodes; when a new node joins the cluster, it refers to this file to make sure it contacts the right (master) node. It also holds all the SSL certificate information.

Cassandra is started and stopped via the CaSA service, but just because CaSA is running does not mean that the Cassandra service is necessarily running. The command to check the status of the Cassandra DB service from the command line (SSH) is:

```
service vmware-vcops status cassandra
```

The Cassandra cassandraservice.sh is located in $VCOPS_BASE/cassandra/bin/.

Here are some useful Cassandra DB log locations:

Log file name | Location | Purpose
System.log | /usr/lib/vmware-vcops/user/log/cassandra | Stores all the Cassandra-related activities
watchdog_monitor.log | /usr/lib/vmware-vcops/user/log/cassandra | Stores Cassandra Watchdog logs
wrapper.log | /usr/lib/vmware-vcops/user/log/cassandra | Stores Cassandra start- and stop-related logs
configure.log | /usr/lib/vmware-vcops/user/log/cassandra | Stores logs related to Python scripts of vROps
initial_cluster_setup.log | /usr/lib/vmware-vcops/user/log/cassandra | Stores logs related to Python scripts of vROps
validate_cluster.log | /usr/lib/vmware-vcops/user/log/cassandra | Stores logs related to Python scripts of vROps

To enable debug logging for the Cassandra database, edit the logback.xml file located in /usr/lib/vmware-vcops/user/conf/cassandra/ and change <root level="INFO"> to <root level="DEBUG">. Note that System.log will not show the debug logs for Cassandra.
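Before moving on, here is how the cqlsh commands covered above fit together in a typical inspection session. This is a sketch only; prompts and output will vary by environment, and the table queried is simply one of those listed in the table above:

```
vcops_user@cqlsh> use globalpersistence ;
vcops_user@cqlsh:globalpersistence> describe tables ;
vcops_user@cqlsh:globalpersistence> select * from global_settings ;
vcops_user@cqlsh:globalpersistence> exit ;
```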
If you experience any of the following issues, the database may have performance problems, and you may need to take steps to resolve them:
- Relationship changes are not reflected immediately
- Deleted objects still show up in the tree
- Views and reports are slow to process and take a long time to open
- Logging on to the Product UI takes a long time for any user
- An alert was supposed to trigger, but it never happened

Cassandra can tolerate 5 ms of latency at any given point in time. You can validate the cluster by running the following commands from the command line (SSH). First, navigate to the following folder:

```
/usr/lib/vmware-vcopssuite/utilities/
```

Then run the following command to validate the cluster:

```
$VMWARE_PYTHON_BIN -m vmware.vcops.cassandra.validate_cluster <IP_ADDRESS_1 IP_ADDRESS_2>
```

You can also use nodetool to perform a health check, and possibly resolve database load issues. The command to check the load status of the activity tables in vRealize Operations version 6.3 and older is as follows:

```
$VCOPS_BASE/cassandra/apache-cassandra-2.1.8/bin/nodetool --port 9008 status
```

For the 6.5 release and newer, VMware added the requirement of using a maintenanceAdmin user along with a password file. The new command is as follows:

```
$VCOPS_BASE/cassandra/apache-cassandra-2.1.8/bin/nodetool -p 9008 --ssl -u maintenanceAdmin --password-file /usr/lib/vmware-vcops/user/conf/jmxremote.password status
```

Regardless of which method you choose to perform the health check, if any of the nodes has over 600 MB of load, you should consult with VMware Global Support Services on the next steps to take and how to alleviate the load issues.

Central (repl) DB

The Postgres database was introduced in 6.1. It has two instances in version 6.6: the Central Postgres DB, also called repl, and the Alerts/HIS Postgres DB, also called data, are two separate database instances under the database called vcopsdb.

The central DB exists only on the master and master replica nodes when HA is enabled. It is accessible via port 5433 and is located in /storage/db/vcops/vpostgres/repl. Currently, the database stores the resources inventory.

You can connect to the central DB from the command line (SSH). Log in on the analytics node you wish to connect to and run:

```
su - postgres
```

The command should not prompt for a password if run as root. Once logged in, connect to the database instance by running:

```
/opt/vmware/vpostgres/current/bin/psql -d vcopsdb -p 5433
```

The service command to start the central DB from the command line (SSH) is:

```
service vpostgres-repl start
```

Here are some useful log locations:

Log file name | Location | Purpose
postgresql-<xx>.log | /storage/db/vcops/vpostgres/data/pg_log | Provides information on Postgres database cleanup and other disk-related information

Alerts/HIS (data) DB

The Alerts DB is called data on all the data nodes, including the master and master replica nodes. It was likewise introduced in 6.1. Starting from 6.2, the Historical Inventory Service xDB was merged into the Alerts DB. It is accessible via port 5432 and is located in /storage/db/vcops/vpostgres/data. Currently, the database stores:
- Alerts and alarm history
- History of resource property data
- History of resource relationships

You can connect to the Alerts DB from the command line (SSH). Log in on the analytics node you wish to connect to and run:

```
su - postgres
```

The command should not prompt for a password if run as root.
Once logged in, connect to the database instance by running:

```
/opt/vmware/vpostgres/current/bin/psql -d vcopsdb -p 5432
```

The service command to start the Alerts DB from the command line (SSH) is:

```
service vpostgres start
```

FSDB

The File System Database (FSDB) contains all raw time series metrics and super metric data for the discovered resources. What is FSDB in vRealize Operations Manager?
- FSDB is a GemFire server and runs inside the analytics JVM.
- FSDB in vRealize Operations uses the Sharding Manager to distribute data (new objects) between nodes. (We will discuss what vRealize Operations cluster nodes are later in this chapter.)
- The File System Database is available on all the nodes of a vROps cluster deployment.
- It has its own properties file.

FSDB stores data (time series data) collected by adapters, as well as data which is generated or calculated (system, super, badge, and CIQ metrics, and so on) based on analysis of that data.

If you are troubleshooting FSDB performance issues, you should start from the Self Performance Details dashboard, more precisely the FSDB Data Processing widget. We covered both of these earlier in this chapter. You can also take a look at the metrics provided by the FSDB metric picker. You can access it by navigating to the Environment tab, then vRealize Operations Clusters, selecting a node, and selecting vRealize Operations Manager FSDB. Then, select the All Metrics tab.

You can check the synchronization state of the FSDB, and thereby the overall health of the cluster, by running the following command from the command line (SSH):

```
$VMWARE_PYTHON_BIN /usr/lib/vmware-vcops/tools/vrops-platform-cli/vrops-platform-cli.py getShardStateMappingInfo
```

By restarting the FSDB, you can trigger synchronization of all the data, fetching missing data from the other FSDBs. Synchronization takes place only when you have vRealize Operations HA configured.

Here are some useful log locations for FSDB:

Log file name | Location | Purpose
Analytics-<UUID>.log | /usr/lib/vmware-vcops/user/log | Used to check the Sharding Module functionality; can trace which objects have synced
ShardingManager_<UUID>.log | /usr/lib/vmware-vcops/user/log | Can be used to get the total time the sync took
fsdb-accessor-<UUID>.log | /usr/lib/vmware-vcops/user/log | Provides information on FSDB database cleanup and other disk-related information

Platform-cli

Platform-cli is a tool with which we can get information from various databases, including the GemFire cache, Cassandra, and the Alerts/HIS persistence databases. In order to run this Python script, use the following command:

```
$VMWARE_PYTHON_BIN /usr/lib/vmware-vcops/tools/vrops-platform-cli/vrops-platform-cli.py
```

The following example of using this command lists all the resources in ascending order and also shows you which shard each is stored on:

```
$VMWARE_PYTHON_BIN /usr/lib/vmware-vcops/tools/vrops-platform-cli/vrops-platform-cli.py getResourceToShardMapping
```

We discussed how to troubleshoot some of the most important components, like services and databases, along with troubleshooting failures in the upgrade process. To know more about self-monitoring dashboards and infrastructure compliance, check out the book Mastering vRealize Operations Manager - Second Edition.

What to expect from vSphere 6.7
Introduction to SDN – Transformation from legacy to SDN
Learning Basic PowerCLI Concepts


Building VR objects in React VR 2.0: Getting started with polygons in Blender

Sunith Shetty
05 Jun 2018
11 min read
A polygon is an n-sided object composed of vertices (points), edges, and faces. A face can face in or out, or be double-sided. For most real-time VR, we use single-sided polygons; we noticed this when we first placed a plane in the world: depending on the orientation, you may not see it. In today's tutorial, we will understand why polygons are the best way to present real-time graphics.

To really show how this all works, I'm going to show the internal format of an OBJ file. Normally, you won't hand edit these—we are beyond the days of VR constructed with a few thousand polygons (my first VR world had a train that represented downloads, and it had six polygons, each point lovingly crafted by hand), so hand editing things isn't necessary, but you may need to edit the OBJ files to include the proper paths or make changes your modeler may not do natively—so let's dive in!

This article is an excerpt from a book written by John Gwinner titled Getting Started with React VR. In this book, you'll gain a deeper understanding of virtual reality and build a full-fledged VR app to add to your profile.

Polygons are constructed by creating points in 3D space and connecting them with faces. You can consider that vertices are connected by lines (most modelers work this way), but in the native WebGL that React VR is based on, it's really just faces. The points don't really exist by themselves, but more or less "anchor" the corners of the polygon. For example, here is a simple triangle, modeled in Blender. In this case, I have constructed a triangle with three vertices and one face (with just a flat color, in this case green). The edges, shown in yellow or a lighter shade, are there for the convenience of the modeler and won't be explicitly rendered. Here is what the triangle looks like inside our gallery.

If you look closely in the Blender photograph, you'll notice that the object is not centered in the world. When it exports, it will export with the translations that you have applied in Blender. This is why the triangle is slightly off center on the pedestal. The good news is that we are in outer space, floating in orbit, and therefore do not have to worry about gravity. (React VR does not have a physics engine, although it is straightforward to add one.)

The second thing you may notice is that the yellow lines (lighter gray lines in print) around the triangle in Blender do not persist in the VR world. This is because the file is exported as one face, which connects three vertices.

The plural of vertex is vertices, not vertexes. If someone asks you about vertexes, you can laugh at them almost as much as when someone pronounces Bézier curve as "bez ee er." Okay, to be fair, I did that once; now I always say "Beh zee a."

Okay, all levity aside, now let's make it look more interesting than a flat green triangle. This is done through something usually called texture mapping. Honestly, the phrases "textures" and "materials" often get swapped around interchangeably, although lately they have sort of settled down to materials meaning anything about an object's physical appearance except its shape: a material could be how shiny it is, how transparent it is, and so on. A texture is usually just the colors of the object—tile is red, skin may have freckles—and is therefore usually called a texture map, which is represented with a JPG, TGA, or other image format. There is no real cross-software file format for materials or shaders (which are usually computer code that represents the material).
When it comes time to render, there are some shader languages that are standard, although these are not always used in CAD programs. You will need to learn what your CAD program uses, and become proficient in how it handles materials (and texture maps). This is far beyond the scope of this book.

The OBJ file format (which is what React VR usually uses) allows the use of several different texture maps to properly construct the material. It can also indicate the material itself via parameters coded in the file. First, let's take a look at what the triangle consists of. We imported the OBJ file via the Model keyword:

```
<Model
  source={{
    obj: asset('OneTri.obj'),
    mtl: asset('OneTri.mtl'),
  }}
  style={{
    transform: [
      { translate: [-0, -1, -5] },
      { scale: 0.1 },
    ]
  }}
/>
```

First, let's open the MTL (material) file (as the .obj file uses the .mtl file). The OBJ file format was developed by Wavefront:

```
# Blender MTL File: 'OneTri.blend'
# Material Count: 1

newmtl BaseMat
Ns 96.078431
Ka 1.000000 1.000000 1.000000
Kd 0.040445 0.300599 0.066583
Ks 0.500000 0.500000 0.500000
Ke 0.000000 0.000000 0.000000
Ni 1.000000
d 1.000000
illum 2
```

A lot of this is housekeeping, but the important things are the following parameters:

- Ka: ambient color, in RGB format
- Kd: diffuse color, in RGB format
- Ks: specular color, in RGB format
- Ns: specular exponent, from 0 to 1,000
- d: transparency (d meant dissolved). Note that WebGL cannot normally show refractive materials, or display real volumetric materials and raytracing, so d is simply the percentage of how much light is blocked. 1 (the default) is fully opaque. Note that d in the .obj specification works for illum mode 2.
- Tr: alternate representation of transparency; 0 is fully opaque.
- illum <#> (a number from 0 to 10). Not all illumination models are supported by WebGL. The current list is:
  0. Color on and Ambient off.
  1. Color on and Ambient on.
  2. Highlight on (and colors) <= this is the normal setting.
  There are other illumination modes, but they are currently not used by WebGL. This, of course, could change.
- Ni: optical density. This is important for CAD systems, but the chances of it being supported in VR without a lot of tricks are pretty low. Computers and video cards get faster and faster all the time, though, so maybe optical density and real-time raytracing will be supported in VR eventually, thanks to Moore's law (statistically, computing power roughly doubles every two years or so).

Very important: make sure you include the "lit" keyword with all of your model declarations, otherwise the loader will assume you have only an emissive (glowing) object and will ignore most of the parameters in the material file! YOU HAVE BEEN WARNED. It'll look very weird and you'll be completely confused. Don't ask me why I know!

The OBJ file itself has a description of the geometry. These are not usually something you can hand edit, but it's useful to see the overall structure. For the simple object shown before, it's quite manageable:

```
# Blender v2.79 (sub 0) OBJ File: 'OneTri.blend'
# www.blender.org
mtllib OneTri.mtl
o Triangle
v -7.615456 0.218278 -1.874056
v -4.384528 15.177612 -6.276536
v 4.801097 2.745610 3.762014
vn -0.445200 0.339900 0.828400
usemtl BaseMat
s off
f 3//1 2//1 1//1
```

First, you see a comment (marked with #) that tells you what software made it and the name of the original file; this can vary. The mtllib line is a call out to a particular material file, which we already looked at.
The o line (and the g line, if there is a group) defines the name of the object and group; although React VR doesn't really use these (currently), in most modeling packages they will be listed in the hierarchy of objects. The v and vn keywords are where it gets interesting, although these are still not something visible. The v keyword creates a vertex in x, y, z space; the vertices built will later be connected into polygons. The vn keyword establishes the normals for those objects, and vt will create the texture coordinates of the same points. More on texture coordinates in a bit.

The usemtl BaseMat line establishes what material, specified in your .mtl file, will be used for the following faces. The s off line means smoothing is turned off. Smoothing and vertex normals can make objects look smooth, even if they are made with very few polygons. For example, take a look at these two teapots; the first is without smoothing. Looks pretty computer-graphics like, right? Now, have a look at the same teapot with the "s 1" parameter specified throughout, and normals included in the file. This is pretty normal (pun intended): what I mean is that most CAD software will compute normals for you. You can make normals smooth or sharp, and add edges where needed. This adds detail without excess polygons and is fast to render. The smooth teapot looks much more real, right? Well, we haven't seen anything yet! Let's discuss texture.

I didn't used to like sushi because of the texture. We're not talking about that kind of texture. Texture mapping is a lot like taking a piece of Christmas wrapping paper and putting it around an odd-shaped object. Just like when you get that weird-looking present at Christmas and don't know quite what to do, sometimes the wrapping doesn't have a clear right way to do it. Boxes are easy, but most interesting objects aren't always a box. I found this picture online with the caption "I hope it's an X-Box."

The "wrapping" is done via U, V coordinates in the CAD system. Let's take a look at a triangle with proper UV coordinates. We then go get our wrapping paper, that is to say, we take an image file we are going to use as the texture. We then wrap that in our CAD program by specifying it as a texture map. We'll then export the triangle and put it in our world.

You would probably have expected to see "left and bottom" on the texture map. Taking a closer look in our modeling package (Blender still), we see that the default UV mapping (using Blender's standard tools) tries to use as much of the texture map as possible, but from an artistic standpoint, it may not be what we want. This is not to show that Blender is "yer doin' it wrong," but to make the point that you've got to check the texture mapping before you export. Also, if you are attempting to import objects without U, V coordinates, double-check them!

If you are hand editing an .mtl file and your textures are not showing up, double-check your .obj file and make sure you have vt lines; if you don't, the texture will not show up. This means the U, V coordinates for the texture mapping were not set.

Texture mapping is non-trivial; there is quite an art to it, and entire books have been written about texturing and lighting. Having said that, you can get pretty far with Blender and any OBJ file if you've downloaded something from the internet and want to make it look a little better. We'll show you how to fix it. The end goal is to get a UV map that is more usable and efficient.
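For reference, here is a hand-written sketch (not taken from the book's files) of what a minimal triangle with texture coordinates looks like in an .obj file; note the vt lines, and that each face corner is indexed as vertex/texcoord/normal:

```
# Hypothetical triangle with UVs: v = positions, vt = U,V coordinates,
# vn = normal; f indexes each corner as vertex/texcoord/normal.
v 0.0 0.0 0.0
v 1.0 0.0 0.0
v 0.0 1.0 0.0
vt 0.0 0.0
vt 1.0 0.0
vt 0.0 1.0
vn 0.0 0.0 1.0
f 1/1/1 2/2/1 3/3/1
```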
Not all OBJ file exporters export proper texture maps, and frequently .obj files you find online may or may not have UVs set. You can use Blender to fix the unwrapping of your model. We have several good Blender books to give you a head start with it. You can also use your favorite CAD modeling program, such as Max, Maya, Lightwave, Houdini, and so on. This is important, so I'll mention it again in an info box.

If you already use a different polygon modeler or CAD package, you don't have to learn Blender; your program will undoubtedly work fine, and you can skim this section. If you don't want to learn Blender anyway, you can download all of the files that we construct from the GitHub link. You'll need some of the image files if you do work through the examples. Files for this article are at: http://bit.ly/VR_Chap7.

To summarize, we learned the basics of polygon modeling with Blender, got to know the importance of polygon budgets, how to export those models, and details of the OBJ/MTL file formats. To know more about how to make virtual worlds look real, do check out the book Getting Started with React VR.

Top 7 modern Virtual Reality hardware systems
Types of Augmented Reality targets
Unity plugins for augmented reality application development


Asking if good developers can be great entrepreneurs is like asking if moms can excel at work and motherhood

Aarthi Kumaraswamy
05 Jun 2018
13 min read
You heard me right. Asking 'can developers succeed at entrepreneurship?' is like asking whether women should choose between work and having children. 'Can you have a successful career and be a good mom?' is a question that many well-meaning acquaintances still ask me. You see, I am a new mom (not a very new one, my son is 2.5 years old now). I'm also the managing editor of this site since its inception last year, when I rejoined work after a maternity break. In some ways, the Packt Hub feels like running a startup too. :)

Now how did we even get to this question? It all started with the results of this year's Skill Up survey. Every year we conduct a survey amongst developers to know the pulse of the industry and to understand their needs, aspirations, and apprehensions better, so that we can help them skill up better. One of the questions we asked them this year was: 'Where do you see yourself in the next 5 years?' To this, an overwhelming one third responded stating they want to start a business. Surveys conducted by leading consultancies, time and again, show that only 2 or 3 in 10 startups survive. Naturally, a question that kept cropping up in our editorial discussions after going through the above response was: can developers also succeed as entrepreneurs?

To answer this, first let's go back to the question: can you have a successful career and be a good mom?

The short answer is: yes, it is possible to be both, but it will be hard work.

The long answer is: this path is not for everyone. As a working mom, you need to work twice as hard, befriend uncertainty, and nurture a steady support system that you trust, both at work and at home, to truly flourish. At times, when you see your peers move ahead in their careers or watch stay-at-home moms with their kids, you might feel envy, inadequacy, or other unsavory emotions in between. You need superhuman levels of mental, emotional, and physical stamina to be the best version of yourself, to move past such times with grace, and to truly appreciate the big picture: you have a job you love and a kid who loves you.

But what has my experience as a working mom got to do with developers who want to start their own business? You'd be surprised by the similarities. Starting a business is, in many ways, like having a baby. There is a long incubation period, then the painful launch into the world, followed by sleepless nights of watching over the business so it doesn't choke on itself while you catch a nap. And you are doing this for the first time too, even if you have people giving you advice. Then there are the sustenance issues to look at, and getting the right people in place to help the startup learn to turn over, sit up, stand up, crawl, and then finally run and jump around, till it makes you go mad with joy and apprehension at the same time thinking about what's next in store: How do I scale my business? Does my business even need me?

To me, being a successful developer, writer, editor, or any other professional, for that matter, is about being good at what you do (writing code, writing stories, spotting raw talent and bringing out the best in others, and so on) and enjoying doing it immensely. It requires discipline, skill, expertise, and will.

To see if the similarities continue, let's try rewriting my statement for a developer turned entrepreneur: can you be a good developer and a great entrepreneur? This path is not for everyone.
As a developer-turned-entrepreneur, you need to work twice as hard as your friends who have full-time jobs, making it through the day wearing many hats and putting out workplace fires that have nothing to do with your product development. You need a steady support system both at work and at home to truly flourish. At times, when you see others move ahead in their careers, or listen to entrepreneurs who have sold their businesses to larger companies or just got VC funding, you might feel envy, selfishness, inadequacy, or any other emotion in between. You need superhuman levels of maturity to move past such times, and to truly appreciate the big picture: you built something incredible, and now you are changing the world, even if it is just one customer at a time.

Now that we sail in the same boat, here are the five things I learned over the last year as a working mom that I hope you, as a developer-entrepreneur, will find useful. I'd love to hear your take on them in the comments below.

#1. Become a productivity master. Compartmentalize your roles and responsibilities into chunks spread across the day. Ruthlessly edit out clutter from your life.

Your life changed forever when your child (business) was born. What worked for you as a developer may not work as an entrepreneur. The sooner you accept this, the faster you will see the differences and adapt accordingly. For example, before my son was born, I used to work quite late into the night and wake up late in the morning. I also used to binge-watch TV shows on weekends. Now I wake up as early as 4.30 AM so that I can finish the household chores for the day and get my son ready for playschool by 9 AM. I don't think about work at all during this time. I must accommodate my son during my lunch break and have a super short lunch myself, but apart from that, I am solely focused on the work at hand during office hours and don't think about anything else. My day ends at 11 PM. I am now choosier about what I watch, as I have much less time for leisure; on weekends, I instead prefer to 'do' things, like learning something new, bonding with my son, or even catching up on sleep. This spring-cleaning and compartmentalization took a while to become habit, but it is worth the effort. Truth be told, I still occasionally fall back to binging, like I am doing right now with this article, writing it at 12 AM on a Saturday morning because I'm in a flow.

As a developer-entrepreneur, you might be tempted to spend most of your day doing what you love, that is, developing or creating something, because it is familiar territory and you excel at it. But doing so at the cost of your business will cost you your business, sooner than you think. Resist the urge, and instead organize your day and week so that you spend adequate time on the key aspects of your business, including product development. Make a note of everything you do for a few days, then decide what's not worth doing and what you could do instead in its place. Have room only for the things that matter to you, which enrich your business goals and quality of life.

#2. Don't aim to create the best; aim for good enough. Ship the minimum viable product (MVP).

All new parents want the best for their kids, from the diapers they use to the food they eat and the toys they play with.
They can get carried away buying stuff that they think their child needs, only to end up with a storeroom full of unused baby items. What I've realized is that babies actually need very little stuff. They just need basic food, regular baths, clean diapers (you could even get away without those, if you are up for the challenge!), sleep, play, and lots of love (read: mommy and daddy time, not gifts).

This is also true for developers who've just started up. They want to build a unique product that the world has never seen before, and they want to ship the perfect version with great features and excellent customer reviews. But the truth is, your first product is, in fact, a prototype built on your assumptions about what your customer wants. For all you know, you may be wrong not just about your customer's needs but even about who your customer is. This is why a proper market study is key to product development. Even then, the goal should be to identify the key features that will make your product viable. Ship your MVP, seek quick feedback, iterate, and build a better product in the next version. This way, you haven't unnecessarily sunk upfront costs, you're ahead of your competitors, and you are better at estimating customer needs as well. Simplicity is the highest form of sophistication: you need to find just the right elements to keep in your product and your business model and, as Michelangelo would put it, "chip away the rest."

#3. There is no shortcut to success. Don't compromise on quality or your values.

The advice to ship a good enough product is not permission to cut corners. From their very first day, babies watch and learn from us. They are keen observers and great mimics. They will feel, say, and do what we feel, say, and do, even if they don't understand any of the words or gestures. I think they do understand emotions; one more reason for us to be better role models for our children.

The same is true in a startup. The culture mimics the leader, and it quickly sets in across the organization. As a developer, you may have worked long hours, even into the night, to get that piece of code working, and felt a huge rush from the accomplishment. But as an entrepreneur, remember that you are being watched and emulated by those who work for you. You are what they want to be when they 'grow up'. Do you want a crash-and-burn culture at your startup? This is why it is crucial that you clarify your business's goals, purpose, and values to everyone in your company, even if it has just one other person working there. It is even more important that you practice what you preach. Success is an outcome of right actions taken via the right habits cultivated.

#4. You can't do everything yourself, and you can't be everywhere. You need to trust others to do their jobs and appreciate them for a job well done. This also means you hire the right people first.

It takes a village to raise a child, they say. And it is very true, especially in families where both parents work. It would've been impossible for me to give my 100% at work if I had to keep checking on how my son is doing with his grandparents, or if I refused the help my husband offers in sharing household chores. Just because you are an exceptional developer, you can't keep micromanaging product development at your startup.
As a founder, you have a larger duty towards your entire business and your customers. While it is good to check how your product is developing, your pure developer days are behind you. Find people better than you at this job, and surround yourself with people who are good at what they do and who share your values, for every key function of your startup. Products succeed because of developers; businesses succeed because their leader knew when to get out of the way and when to step in.

#5. Customer first, product next, ego last.

This has been the toughest lesson so far. It looks deceptively simple in theory but is hard to practice in everyday life. As developers and engineers, we are primed to find solutions. We also see things in binaries: success and failure, right and wrong, good and bad. This is a great quality to possess when building original products. However, it is also the Achilles heel of the developer-turned-entrepreneur. In business, things are seldom black and white. Putting the product first can be detrimental to knowing your customers. For example, you build a great product, but the customer is not using it as you intended. Do you train your customers to correct their ways, or do you unearth the underlying triggers that drive that customer behavior and alter your product's vision? The answer is, 'it depends'. Your job as an entrepreneur is to enable your team to find the factors that influence your decision on the subject, not to find the answer yourself or to decide based solely on your own experience.

You need to listen more than you talk to truly put your customer first. To do that, you need to acknowledge that you don't know everything. Put your ego last. Make it easy for customers to share their views with you directly. Acknowledge that your product or service exists to serve your customers' needs, and that it must evolve with your customers.

Find yourself a good mentor or two whom you respect and want to emulate. They will be your sounding board and the light at the end of the tunnel during your darkest hours (you will have many of those, I guarantee). Be grateful for your support network of family, friends, and colleagues, and make sure to let them know how much you appreciate them for playing their part in your success. If you have the right partner to start the business with, jump in with both feet. Most tech startups have a higher likelihood of succeeding when they have more than one founder, probably because the demands of tech and business are better balanced on more than one pair of shoulders.

How we frame our questions is a big part of the problem. Reframing them makes us see different perspectives, thereby changing our mindsets. Instead of asking 'can working moms have it all?', what if we asked 'what can organizations do to help working moms achieve work-life balance?' or 'how do women cope with competing demands at work and home?' Instead of asking 'can developers be great entrepreneurs?', better questions are 'what can developers do to start a successful business?' and 'what can we learn from those who have built successful companies?' Keep an open mind; the best ideas may come from the unlikeliest sources!

1 in 3 developers wants to be an entrepreneur. What does it take to make a successful transition?
Developers think managers don't know enough about technology. And that's hurting business.
96% of developers believe developing soft skills is important

How to navigate files in a Vue app using the Dropbox API

Pravin Dhandre
05 Jun 2018
13 min read
Dropbox API is a set of HTTP endpoints that help apps integrate with Dropbox easily. It allows developers to work with files present in Dropbox and provides access to advanced functionality like full-text search, thumbnails, and sharing. In this article we will discuss the Dropbox API functionalities listed below to navigate and query your files and folders:

Loading and querying the Dropbox API
Listing the directories and files from your Dropbox account
Adding a loading state to your app
Using Vue animations

To get started you will need a Dropbox account; the steps are detailed in this article. If you don't have an account, sign up and add a few dummy files and folders. The content of your Dropbox doesn't matter, but having folders to navigate through will help with understanding the code. This article is an excerpt from a book written by Mike Street titled Vue.js 2.x by Example. This book puts Vue.js into a real-world context, guiding you through example projects to help you build Vue.js applications from scratch.

Getting started—loading the libraries

Create a new HTML page for your app to run in. Create the HTML structure required for a web page and include your app view wrapper. It's called #app here, but call it whatever you want; just remember to update the JavaScript. As our app code is going to get quite chunky, make a separate JavaScript file and include it at the bottom of the document. You will also need to include Vue and the Dropbox API SDK. You can either reference the remote files or download local copies of the library files; download local copies for both speed and compatibility reasons. Include your three JavaScript files at the bottom of your HTML file, then create your app.js and initialize a new Vue instance, using the el property to mount the instance onto the ID in your view.

Creating a Dropbox app and initializing the SDK

Before we interact with the Vue instance, we need to connect to the Dropbox API through the SDK. This is done via an API key generated by Dropbox to keep track of what is connecting to your account; to get one, Dropbox requires you to create a custom Dropbox app. Head to the Dropbox developers area and select Create your app. Choose Dropbox API and select either a restricted folder or full access. This depends on your needs, but for testing, choose Full Dropbox. Give your app a name and click the button Create app. Next, generate an access token for your app: when viewing the app details page, click the Generate button under Generated access token. This will give you a long string of numbers and letters; copy and paste that into your editor and store it as a variable at the top of your JavaScript. In this article the API key will be referred to as XXXX. Now that we have our API key, we can access the files and folders in our Dropbox. Initialize the API by passing your accessToken variable to the accessToken property of the Dropbox constructor. We now have access to Dropbox via the dbx variable, and we can verify the connection is working by listing the contents of the root path. This code uses JavaScript promises, which are a way of adding actions to code without requiring callback functions. Take note of the path variable: it lets us pass in a folder path to list the files and folders within that directory. A sketch of this setup is shown below.
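The original code snippets are not reproduced in this excerpt, so the following is a minimal sketch of the setup described above. The element ID, the local script paths, and the XXXX placeholder token are illustrative assumptions:

```html
<!-- index.html: the app wrapper plus the three scripts at the end of the body -->
<div id="app">
  <h1>Dropbox</h1>
</div>
<script src="js/dropbox.js"></script>
<script src="js/vue.js"></script>
<script src="js/app.js"></script>
```

```javascript
// app.js
const accessToken = 'XXXX'; // your generated access token

const dbx = new Dropbox({ accessToken: accessToken });

// List the root of the Dropbox; an empty string means the root path
dbx.filesListFolder({ path: '' })
  .then(response => console.log(response.entries))
  .catch(error => console.error(error));

const app = new Vue({
  el: '#app'
});
```

Depending on the SDK version you bundle, the constructor may live under a namespace (for example, Dropbox.Dropbox in newer releases), so check the version you download.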
For example, if you had a folder called images in your Dropbox, you could change the parameter value to /images and the returned list would contain the files and folders within that directory. Open your JavaScript console and check the output; you should get an array containing several objects, one for each file or folder in the root of your Dropbox.

Displaying your data and using Vue to get it

Now that we can retrieve our data using the Dropbox API, it's time to fetch it within our Vue instance and display it in our view. This app is going to be built entirely using components, so we can take advantage of compartmentalized data and methods. It also means the code is modular and shareable, should you want to integrate it into other apps. We are also going to take advantage of the native Vue created() function; we'll cover when it gets triggered in a bit.

Create the component

First off, create your custom HTML element, <dropbox-viewer>, in your view, and create a <script> template block at the bottom of the page for our HTML layout. Then initialize your component in your app.js file, pointing it to the template ID. Viewing the app in the browser should show the heading from the template. The next step is to integrate the Dropbox API into the component.

Retrieve the Dropbox data

Create a new method called dropbox. In it, move the code that calls the Dropbox class and returns the instance. This gives us access to the Dropbox API through the component by calling this.dropbox(). We are also going to integrate our API key into the component: create a data function that returns an object containing your access token, and update the dropbox method to use the local version of the key. We now need to add the ability for the component to get the directory list. For this, we are going to create another method that takes a single parameter, the path. This will give us the ability later to request the structure of a different path or folder if required. Use the code provided earlier, changing the dbx variable to this.dropbox(), and update the filesListFolder call to accept the path parameter passed in, rather than a fixed value. Running this app in the browser will show the Dropbox heading, but won't retrieve any folders because the methods have not been called yet.

The Vue life cycle hooks

This is where the created() function comes in. The created() function gets called once the Vue instance has initialized the data and methods, but has yet to mount the instance on the HTML component. There are several other functions available at various points in the life cycle; more about these can be read at Alligator.io. The life cycle runs from creation through mounting, updating, and destruction, with hooks such as created, mounted, updated, and destroyed available at each stage (the original article includes a diagram of this here). Using the created() function gives us access to the methods and data while being able to start our retrieval process as Vue is mounting the app. The time between these various stages is split-second, but every moment counts when it comes to performance and creating a quick app. There is no point waiting for the app to be fully mounted before processing data if we can start the task early. Create the created() function on your component and call the getFolderStructure method, passing in an empty string for the path to get the root of your Dropbox. Running the app now in your browser will output the folder list to your console, which should give the same result as before. We now need to display our list of files in the view. To do this, we are going to create an empty array in our component and populate it with the result of our Dropbox query, as in the sketch below.
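Here is a rough sketch of the component at this stage, including the empty structure array we are about to populate. The template ID and markup are assumptions:

```html
<div id="app">
  <dropbox-viewer></dropbox-viewer>
</div>

<script type="text/x-template" id="dropbox-viewer-template">
  <h1>Dropbox</h1>
</script>
```

```javascript
Vue.component('dropbox-viewer', {
  template: '#dropbox-viewer-template',

  data() {
    return {
      accessToken: 'XXXX',
      structure: [] // will hold the entries returned by Dropbox
    };
  },

  methods: {
    // Returns a Dropbox API instance built from the component's token
    dropbox() {
      return new Dropbox({ accessToken: this.accessToken });
    },

    // Requests the listing for the given path ('' = root)
    getFolderStructure(path) {
      this.dropbox().filesListFolder({ path: path })
        .then(response => {
          console.log(response.entries); // keep for inspecting the fields
          this.structure = response.entries;
        })
        .catch(error => console.error(error));
    }
  },

  created() {
    // Fire as early as possible: data and methods are ready here,
    // even though the component has not been mounted yet
    this.getFolderStructure('');
  }
});

const app = new Vue({ el: '#app' });
```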
This has the advantage of giving Vue a variable to loop through in the view, even before it has any content.

Displaying the Dropbox data

Create a new property in your data object titled structure, and assign it an empty array. In the response function of the folder retrieval, assign response.entries to this.structure. Leave the console.log in place, as we will need to inspect the entries to work out what to output in our template. We can now update our view to display the folders and files from your Dropbox. As the structure array is available in our view, create a <ul> with a repeatable <li> looping through the structure. As we are now adding a second element, Vue requires templates to have a single containing element, so wrap your heading and list in a <div>. Viewing the app in the browser will show a number of empty bullet points, one for each entry in the array shown in the JavaScript console. To work out what fields and properties you can display, expand the array in the JavaScript console and then expand each object. You should notice that each object has a collection of similar properties and a few that vary between folders and files.

The first property, .tag, helps us identify whether the item is a file or a folder. Both types then have the following properties in common:

id: A unique identifier to Dropbox
name: The name of the file or folder, irrespective of where the item is
path_display: The full path of the item, with the case matching that of the files and folders
path_lower: Same as path_display, but all lowercase

Items with a .tag of file also contain several more fields for us to display:

client_modified: The date when the file was added to Dropbox.
content_hash: A hash of the file, used for identifying whether it is different from a local or remote copy. More can be read about this on the Dropbox website.
rev: A unique identifier of the version of the file.
server_modified: The last time the file was modified on Dropbox.
size: The size of the file in bytes.

To begin with, we are going to display the name of the item and the size, if present. Update the list item to show these properties.

More file meta information

To make our file and folder view a bit more useful, we can add richer content and metadata to files such as images. These details are made available by enabling the include_media_info option in the Dropbox API. Head back to your getFolderStructure method and add this parameter after path, splitting the arguments over new lines for readability. Inspecting the results from this new API call will reveal the media_info key for videos and images. Expanding this will reveal several more pieces of information about the file, for example, its dimensions. If you want to add these, you will need to check that the media_info object exists before displaying the information. Try updating the path when retrieving the data from Dropbox: for example, if you have a folder called images, change the this.getFolderStructure parameter to /images. If you're not sure what the path is, analyze the data in the JavaScript console and copy the value of the path_lower attribute of a folder. A sketch of the updated method and template follows.
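As a sketch of what the updated call and template might look like; the markup, the guard around media_info, and the exact nesting of the dimensions data are assumptions based on the description above:

```javascript
getFolderStructure(path) {
  this.dropbox().filesListFolder({
    path: path,
    include_media_info: true
  })
  .then(response => {
    console.log(response.entries);
    this.structure = response.entries;
  })
  .catch(error => console.error(error));
}
```

```html
<script type="text/x-template" id="dropbox-viewer-template">
  <div>
    <h1>Dropbox</h1>
    <ul>
      <li v-for="entry in structure">
        <strong>{{ entry.name }}</strong>
        <span v-if="entry.size"> - {{ entry.size }}</span>
        <!-- Only files (and only some of them) carry media_info -->
        <span v-if="entry.media_info">
          {{ entry.media_info.metadata.dimensions.width }}px x
          {{ entry.media_info.metadata.dimensions.height }}px
        </span>
      </li>
    </ul>
  </div>
</script>
```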
Formatting the file sizes

With the file size being output in plain bytes, it can be quite hard for a user to decipher. To combat this, we can add a formatting method to output a more user-friendly file size, for example displaying 1KB instead of 1024. First, create a new key on the data object that contains an array of units called byteSizes. This is what will get appended to the figure, so feel free to make these properties lowercase or full words, for example, megabyte. Next, add a new method, bytesToSize, to your component. This will take one parameter, bytes, and output a formatted string with the unit at the end. We can then utilize this method in our view.

Add loading screen

The last step of this tutorial is to make a loading screen for our app. This will tell the user the app is loading, should the Dropbox API be running slowly (or you have a lot of data to show!). The theory behind this loading screen is fairly basic. We will set a loading variable to true by default, and it will be set to false once the data has loaded. Based on this variable, we will utilize view attributes to show, then hide, an element with the loading text or animation, and reveal the loaded data list. Create a new key in the data object titled isLoading, set to true by default. Within the getFolderStructure method on your component, set the isLoading variable to false; this should happen within the promise, after you have set the structure. We can now utilize this variable in our view to show and hide a loading container. Create a new <div> before the unordered list containing some loading text. Feel free to add a CSS animation or an animated gif, anything to let the user know the app is retrieving data. We now need to show the loading div only while the app is loading, and the list only once the data has loaded. As this is just one change to the DOM, we can use the v-if directive. To give you the freedom of rearranging the HTML, add the attribute to both elements instead of using v-else. To show or hide, we just need to check the status of the isLoading variable: we can prepend an exclamation mark to the check on the list so it only shows when the app is not loading. Our app should now show the loading container once mounted, and then show the list once the app data has been gathered. To recap, the complete component is pulled together in the sketch at the end of this section.

Animating between states

As a nice enhancement for the user, we can add some transitions between components and states. Helpfully, Vue includes some built-in transition effects. Working with CSS, these transitions allow you to easily add fades, swipes, and other CSS animations when DOM elements are being inserted. More information about transitions can be found in the Vue documentation. The first step is to add the Vue custom HTML <transition> element. Wrap both your loading container and your list in separate transition elements, giving each a name attribute with a value of fade. Then add the transition CSS to either the head of your document or a separate style sheet, if you already have one. With the transition element, Vue adds and removes various CSS classes based on the state and timing of the transition. All of these begin with the name passed in via the attribute, appended with the current stage of the transition: for example, fade-enter, fade-enter-active, fade-leave-active, and fade-leave-to. Try the app in your browser; you should notice the loading container fading out and the file list fading in. Although in this basic example the list jumps up once the fading has completed, it should help you understand using transitions in Vue.
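The complete component code referenced above is not included in this excerpt. The following is a hedged reconstruction assembled from the steps in this article; the template markup, loading text, and the 0.5s fade duration are assumptions:

```html
<div id="app">
  <dropbox-viewer></dropbox-viewer>
</div>

<script type="text/x-template" id="dropbox-viewer-template">
  <div>
    <h1>Dropbox</h1>
    <transition name="fade">
      <div v-if="isLoading">Loading...</div>
    </transition>
    <transition name="fade">
      <ul v-if="!isLoading">
        <li v-for="entry in structure">
          <strong>{{ entry.name }}</strong>
          <span v-if="entry.size"> - {{ bytesToSize(entry.size) }}</span>
        </li>
      </ul>
    </transition>
  </div>
</script>
```

```javascript
Vue.component('dropbox-viewer', {
  template: '#dropbox-viewer-template',

  data() {
    return {
      accessToken: 'XXXX',
      structure: [],
      isLoading: true,
      byteSizes: ['Bytes', 'KB', 'MB', 'GB', 'TB']
    };
  },

  methods: {
    dropbox() {
      return new Dropbox({ accessToken: this.accessToken });
    },

    getFolderStructure(path) {
      this.dropbox().filesListFolder({
        path: path,
        include_media_info: true
      })
      .then(response => {
        this.structure = response.entries;
        this.isLoading = false; // reveal the list, hide the loader
      })
      .catch(error => console.error(error));
    },

    // Converts a raw byte count into a friendlier string, e.g. 1024 -> "1 KB"
    bytesToSize(bytes) {
      if (bytes === 0) return '0 Bytes';
      const i = Math.floor(Math.log(bytes) / Math.log(1024));
      return Math.round(bytes / Math.pow(1024, i)) + ' ' + this.byteSizes[i];
    }
  },

  created() {
    this.getFolderStructure('');
  }
});

const app = new Vue({ el: '#app' });
```

```css
.fade-enter-active, .fade-leave-active {
  transition: opacity .5s;
}
.fade-enter, .fade-leave-to {
  opacity: 0;
}
```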
We learned to make a Dropbox viewer that lists the files and folders from a Dropbox account, and we used Vue animations for navigation. Do check out the book Vue.js 2.x by Example to start integrating remote data into your web applications swiftly.

Building a real-time dashboard with Meteor and Vue.js
Building your first Vue.js 2 Web application
Installing and Using Vue.js

How TFLearn makes building TensorFlow models easier

Savia Lobo
04 Jun 2018
7 min read
Today, we will introduce you to TFLearn, and will create layers and models that are directly useful in any model implementation with TensorFlow. TFLearn is a modular library in Python that is built on top of core TensorFlow.

Note: This article is an excerpt taken from the book Mastering TensorFlow 1.x written by Armando Fandango. In this book, you will learn how to build TensorFlow models to work with multilayer perceptrons using Keras, TFLearn, and R.

TIP: TFLearn is different from the TensorFlow Learn package, which is also known as TF Learn (with one space between TF and Learn). TFLearn is available at the following link, and the source code is available on GitHub.

TFLearn can be installed in Python 3 with the following command:

pip3 install tflearn

Note: To install TFLearn in other environments or from source, please refer to the following link: http://tflearn.org/installation/

The simple workflow in TFLearn is as follows:

1. Create an input layer first.
2. Pass the input object to create further layers.
3. Add the output layer.
4. Create the net using an estimator layer such as regression.
5. Create a model from the net created in the previous step.
6. Train the model with the model.fit() method.
7. Use the trained model to predict or evaluate.

Creating the TFLearn Layers

Let us learn how to create the layers of the neural network models in TFLearn:

1. Create an input layer first:

input_layer = tflearn.input_data(shape=[None, num_inputs])

2. Pass the input object to create further layers:

layer1 = tflearn.fully_connected(input_layer, 10, activation='relu')
layer2 = tflearn.fully_connected(layer1, 10, activation='relu')

3. Add the output layer:

output = tflearn.fully_connected(layer2, n_classes, activation='softmax')

4. Create the final net from the estimator layer such as regression:

net = tflearn.regression(output,
                         optimizer='adam',
                         metric=tflearn.metrics.Accuracy(),
                         loss='categorical_crossentropy')

TFLearn provides several classes for layers, described in the following sub-sections.

TFLearn core layers

TFLearn offers the following layers in the tflearn.layers.core module:

input_data: This layer is used to specify the input layer for the neural network.
fully_connected: This layer is used to specify a layer where all the neurons are connected to all the neurons in the previous layer.
dropout: This layer is used to specify the dropout regularization. The input elements are scaled by 1/keep_prob while keeping the expected sum unchanged.
custom_layer: This layer is used to specify a custom function to be applied to the input. This class wraps our custom function and presents the function as a layer.
reshape: This layer reshapes the input into the output of the specified shape.
flatten: This layer converts the input tensor to a 2D tensor.
activation: This layer applies the specified activation function to the input tensor.
single_unit: This layer applies the linear function to the inputs.
highway: This layer implements the fully connected highway function.
one_hot_encoding: This layer converts the numeric labels to their binary vector one-hot encoded representations.
time_distributed: This layer applies the specified function to each time step of the input tensor.
multi_target_data: This layer creates and concatenates multiple placeholders, specifically used when the layers use targets from multiple sources.

A short sketch combining a few of these core layers follows.
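As a quick illustration (not from the book), here is how a few of the core layers above compose; the input width, layer sizes, and keep_prob value are assumed for the example:

```python
import tflearn

# input_data -> fully_connected -> activation -> dropout -> output
net = tflearn.input_data(shape=[None, 20])        # 20 input features (assumed)
net = tflearn.fully_connected(net, 32)            # dense layer with 32 units
net = tflearn.activation(net, activation='relu')  # standalone activation layer
net = tflearn.dropout(net, 0.8)                   # keep_prob of 0.8
net = tflearn.fully_connected(net, 2, activation='softmax')
```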
TFLearn convolutional layers

TFLearn offers the following layers in the tflearn.layers.conv module:

conv_1d: This layer applies 1D convolutions to the input data.
conv_2d: This layer applies 2D convolutions to the input data.
conv_3d: This layer applies 3D convolutions to the input data.
conv_2d_transpose: This layer applies the transpose of conv_2d to the input data.
conv_3d_transpose: This layer applies the transpose of conv_3d to the input data.
atrous_conv_2d: This layer computes a 2-D atrous convolution.
grouped_conv_2d: This layer computes a depth-wise 2-D convolution.
max_pool_1d: This layer computes 1-D max pooling.
max_pool_2d: This layer computes 2-D max pooling.
avg_pool_1d: This layer computes 1-D average pooling.
avg_pool_2d: This layer computes 2-D average pooling.
upsample_2d: This layer applies the row- and column-wise 2-D repeat operation.
upscore_layer: This layer implements the upscore as specified in http://arxiv.org/abs/1411.4038.
global_max_pool: This layer implements the global max pooling operation.
global_avg_pool: This layer implements the global average pooling operation.
residual_block: This layer implements the residual block to create deep residual networks.
residual_bottleneck: This layer implements the residual bottleneck block for deep residual networks.
resnext_block: This layer implements the ResNeXt block.

TFLearn recurrent layers

TFLearn offers the following layers in the tflearn.layers.recurrent module:

simple_rnn: This layer implements the simple recurrent neural network model.
bidirectional_rnn: This layer implements the bi-directional RNN model.
lstm: This layer implements the LSTM model.
gru: This layer implements the GRU model.

TFLearn normalization layers

TFLearn offers the following layers in the tflearn.layers.normalization module:

batch_normalization: This layer normalizes the output of activations of previous layers for each batch.
local_response_normalization: This layer implements local response normalization.
l2_normalization: This layer applies L2 normalization to the input tensors.

TFLearn embedding layers

TFLearn offers only one layer in the tflearn.layers.embedding_ops module:

embedding: This layer implements the embedding function for a sequence of integer IDs or floats.

TFLearn merge layers

TFLearn offers the following layers in the tflearn.layers.merge_ops module:

merge_outputs: This layer merges the list of tensors into a single tensor, generally used to merge output tensors of the same shape.
merge: This layer merges the list of tensors into a single tensor; you can specify the axis along which the merge needs to be done.

TFLearn estimator layers

TFLearn offers only one layer in the tflearn.layers.estimator module:

regression: This layer implements linear or logistic regression.

While creating the regression layer, you can specify the optimizer and the loss and metric functions. TFLearn offers the following optimizer functions as classes in the tflearn.optimizers module:

SGD, RMSprop, Adam, Momentum, AdaGrad, Ftrl, AdaDelta, ProximalAdaGrad, Nesterov

Note: You can create custom optimizers by extending the tflearn.optimizers.Optimizer base class.

TFLearn offers the following metric functions as classes or ops in the tflearn.metrics module:

Accuracy or accuracy_op
Top_k or top_k_op
R2 or r2_op
WeightedR2 or weighted_r2_op
binary_accuracy_op

Note: You can create custom metrics by extending the tflearn.metrics.Metric base class. A sketch wiring an optimizer and a metric into the regression layer follows.
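For instance (a sketch, with the layer sizes and learning-rate values assumed), the optimizer and metric classes above can be passed to the regression layer as objects instead of strings:

```python
import tflearn

net = tflearn.input_data(shape=[None, 784])
net = tflearn.fully_connected(net, 64, activation='relu')
net = tflearn.fully_connected(net, 10, activation='softmax')

# Explicit optimizer object with a decaying learning rate
sgd = tflearn.optimizers.SGD(learning_rate=0.01, lr_decay=0.96, decay_step=100)
# Report top-3 accuracy instead of the default metric
top3 = tflearn.metrics.Top_k(k=3)

net = tflearn.regression(net, optimizer=sgd, metric=top3,
                         loss='categorical_crossentropy')
```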
TFLearn provides the following loss functions, known as objectives, in the tflearn.objectives module:

softmax_categorical_crossentropy
categorical_crossentropy
binary_crossentropy
weighted_crossentropy
mean_square
hinge_loss
roc_auc_score
weak_cross_entropy_2d

While specifying the input, hidden, and output layers, you can specify the activation functions to be applied to the output. TFLearn provides the following activation functions in the tflearn.activations module:

linear, tanh, sigmoid, softmax, softplus, softsign, relu, relu6, leaky_relu, prelu, elu, crelu, selu

Creating the TFLearn Model

Create the model from the net created in the previous step (step 4 in the creating the TFLearn layers section):

model = tflearn.DNN(net)

Types of TFLearn models

TFLearn offers two different classes of model:

DNN (Deep Neural Network) model: This class allows you to create a multilayer perceptron from the network that you have created from the layers.
SequenceGenerator model: This class allows you to create a deep neural network that can generate sequences.

Training the TFLearn Model

After creating the model, train it with the model.fit() method:

model.fit(X_train, Y_train, n_epoch=n_epochs, batch_size=batch_size, show_metric=True, run_id='dense_model')

Using the TFLearn Model

Use the trained model to predict or evaluate:

score = model.evaluate(X_test, Y_test)
print('Test accuracy:', score[0])

The complete code for the TFLearn MNIST classification example is provided in the notebook ch-02_TF_High_Level_Libraries. The output from the TFLearn MNIST example is as follows:

Training Step: 5499 | total loss: 0.42119 | time: 1.817s
| Adam | epoch: 010 | loss: 0.42119 - acc: 0.8860 -- iter: 54900/55000
Training Step: 5500 | total loss: 0.40881 | time: 1.820s
| Adam | epoch: 010 | loss: 0.40881 - acc: 0.8854 -- iter: 55000/55000
--
Test accuracy: 0.9029

Note: You can get more information about TFLearn from the following link: http://tflearn.org/.
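Since the notebook itself is not reproduced here, the following sketch consolidates the snippets above into one runnable script; the epoch count and batch size are assumptions chosen to line up with the output shown:

```python
import tflearn
import tflearn.datasets.mnist as mnist

# Load MNIST with one-hot encoded labels
X_train, Y_train, X_test, Y_test = mnist.load_data(one_hot=True)

# Layers, assembled as in the steps above
input_layer = tflearn.input_data(shape=[None, 784])
layer1 = tflearn.fully_connected(input_layer, 10, activation='relu')
layer2 = tflearn.fully_connected(layer1, 10, activation='relu')
output = tflearn.fully_connected(layer2, 10, activation='softmax')
net = tflearn.regression(output,
                         optimizer='adam',
                         metric=tflearn.metrics.Accuracy(),
                         loss='categorical_crossentropy')

# Create, train, and evaluate the model
model = tflearn.DNN(net)
model.fit(X_train, Y_train, n_epoch=10, batch_size=100,
          show_metric=True, run_id='dense_model')

score = model.evaluate(X_test, Y_test)
print('Test accuracy:', score[0])
```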
To summarize, we got to know about TFLearn and the different TFLearn layers and models. If you found this post useful, do check out the book Mastering TensorFlow 1.x to explore advanced features of TensorFlow 1.x, and gain insight into TensorFlow Core, Keras, TF Estimators, TFLearn, TF Slim, Pretty Tensor, and Sonnet.

TensorFlow.js 0.11.1 releases!
How to Build TensorFlow Models for Mobile and Embedded devices
Distributed TensorFlow: Working with multiple GPUs and servers

Data cleaning is the worst part of data analysis, say data scientists

Amey Varangaonkar
04 Jun 2018
5 min read
The year was 2012. Harvard Business Review had famously declared the role of data scientist the 'sexiest job of the 21st century'. Companies were slowly working with more data than ever before. The real actionable value of data that could be used for commercial purposes was just beginning to be uncovered, and someone who could derive those actionable insights from the data was needed. The demand for data scientists was higher than ever.

Fast forward to 2018: more data has been collected in the last two years than ever before. Data scientists are still in high demand, and the need for insights is higher than ever. There has been one significant change, though: the process of deriving insights has become more complex. If you ask data scientists, the first phase of this process, data cleansing, has become a lot more cumbersome. So much so that it is no myth that data scientists spend almost 80% of their time cleaning and readying data for analysis.

Why data cleaning is a nightmare

In the recently conducted Packt Skill-Up survey, we asked data professionals what the worst part of the data analysis process was, and a staggering 50% responded with data cleaning.

Source: Packt Skill Up Survey

We dived deep into this and tried to understand why so many data science professionals share this dislike of data cleaning, or scrubbing, as many call it. Read the Skill Up report in full. Sign up to our weekly newsletter and download the PDF for free.

There is no consistent data format

Organizations these days work with a lot of data. Some of it is in a structured, readily understandable format. This kind of data is usually quite easy to clean, parse, and analyze. However, some of the data is really messy and cannot be used as-is for analysis. This includes missing data, irregularly formatted data, and irrelevant data that is not worth analyzing at all. There is also the problem of working with unstructured data, which needs to be pre-processed to extract the data worth analyzing. Audio and video files, email messages, presentations, XML documents, and web pages are some classic examples of this.

There's too much data to be cleaned

The volume of data that businesses deal with on a day-to-day basis is on the scale of terabytes or even petabytes. Making sense of all this data, coming from a variety of sources and in different formats, is undoubtedly a huge task. There is a whole host of tools designed to ease this process today, but it remains an incredibly tricky challenge to sift through large volumes of data and prepare it for analysis.

Data cleaning is tricky and time-consuming

Data cleansing can be quite an exhaustive and time-consuming task, especially for data scientists. Cleaning the data requires removing duplicates, removing or replacing missing entries, correcting misfielded values, ensuring consistent formatting, and a host of other tasks that take a considerable amount of time. Once the data is cleaned, it needs to be placed in a secure location. Also, a log of the entire process needs to be kept to ensure the right data goes through the right steps. All of this requires data scientists to create a well-designed data scrubbing framework to avoid the risk of repetition. Much of this is grunt work requiring a lot of manual effort; sadly, there are no tools in the market that can fully automate this process.
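To make the kinds of tasks listed above concrete, here is a minimal pandas sketch; the file names, column names, and thresholds are hypothetical:

```python
import pandas as pd

# Hypothetical raw export with duplicates, missing entries, and messy formatting
df = pd.read_csv('survey_raw.csv')

df = df.drop_duplicates()                               # remove duplicated rows
df['age'] = df['age'].fillna(df['age'].median())        # replace missing entries
df['country'] = df['country'].str.strip().str.title()   # consistent formatting
df = df[df['age'].between(0, 120)]                      # drop invalid values

df.to_csv('survey_clean.csv', index=False)              # store the cleaned copy
```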
Outsourcing the process is expensive

Given that data cleaning is a rather tedious job, many businesses consider outsourcing the task to third-party vendors. While this reduces a lot of time and effort on the company's end, it definitely increases the cost of the overall process. Many small and medium-scale businesses may not be able to afford this, and thus remain heavily reliant on their data scientists to do the job for them.

You can hate it, but you cannot ignore it

It is quite obvious that data scientists need clean, ready-to-analyze data if they are to extract actionable business insights from it. Some data scientists equate data cleaning to donkey work, suggesting there's not a lot of innovation involved in the process. Others believe data cleaning is genuinely important, and pay special attention to it, given that once it is done right, most of the problems in data analysis are solved. It is very difficult to take advantage of the intrinsic value offered by a dataset if it does not adhere to the quality standards set by the business, making data cleaning a crucial component of the data analysis process.

Now that you know why data cleaning is essential, why not dive deeper into the technicalities? Check out our book Practical Data Wrangling for expert tips on turning your noisy data into relevant, insight-ready information using R and Python.

Cleaning Data in PDF Files
30 common data science terms explained
How to create a strong data science project portfolio that lands you a job

Have Microservices killed the monolithic architecture? Maybe not!

Aaron Lazar
04 Jun 2018
6 min read
Microservices have been growing in popularity for the past few years, since 2014 to be precise. Honestly speaking, they weren't that popular until around 2016; take a look at the steep rise in the curve. The outbreak has happened over the past few years, and quite a few factors have contributed to their growth, like the cloud, distributed architectures, and so on.

Source: Google Trends

Microservices allow for a clearer and more refined architecture, with services built to work in isolation, without affecting the resilience and robustness of the application in any way. But does that mean that the monolith is dead and only microservices reign? Let's find out, shall we?

Those of you who participated in this year's survey, I thank you for taking the time out to share such valuable information. For those of you who don't know what the survey is all about, it's something we do every year, where thousands of developers, architects, managers, and admins share their insights with us, and we share our findings with the community. This year's survey was as informative as the last, if not more! We had developers tell us so much about what they're doing, where they see technology heading, and what tools and techniques they use to stay relevant at what they do. So we took the opportunity and asked our respondents a question about the topic under discussion.

Revelations

If you'd asked a developer in 2018 what they thought the response would be, they'd instantly say that a majority would be for microservices.

Source: Packtpub Skill Up Survey 2018

If you guessed that the answer was going to be Yes, give yourself a firm pat on the back! It's great to see that 1,603 people are throwing their hands up in the air and building microservices. On the other hand, it's possible that it's purely their manager's decision (see how this forms a barrier to achieving business goals). Anyway, I was particularly concerned about the remaining 314 people who said 'No' (those who skipped answering, now is your chance to say something in the comments section below!).

Why no Microservices?

I thought I'd analyse the possible reasons why one wouldn't want to use the microservices pattern in their application architecture. It's not as if developers migrate from monoliths to microservices just because everyone else is doing it. Like any other architectural decision, there are several factors that need to be taken into consideration before making the switch. So here's what I thought were some reasons why developers are sticking to monoliths.

#1 One troll vs many elves: Complex times

Imagine you could be attacked by one troll or a hundred house elves. Which situation would you choose to be in, if avoiding both isn't an option? I don't know about you, but I'd choose the troll any day! Keeping the troll's size aside, I'd be better off knowing I had one large enemy in front of me, rather than being surrounded by a hundred miniature ones. The same goes for microservices. More services means more complexity, more issues that could crop up. For developers, more services means they need to run or connect to all of them on their machine. Although there are tools that help solve this problem, you have to admit that it's a task to run all the services together as a whole application. On the other hand, Ops professionals are tasked with monitoring and keeping all these services up and running.
#2 We lack the expertise

Let alone having developer rockstars or admin ninjas (oops, I shouldn't be using those words now; find out why), if your organisation lacks experienced professionals, you've got a serious problem. Suppose an organisation has been having trouble developing and managing a monolith itself; there's no guarantee that it will be able to manage a microservices-based application more effectively. It's a matter of the organisation having enough hands-on skills to perform these tasks. These skills are tough to acquire, and it's not simple for organisations to find the right talent.

#3 Tower of Babel: Communication gaps

In a monolith, communication happens within the application itself, and the network channels exist internally. However, this isn't the case for a microservices architecture, as inter-service communication is necessary to keep everything running in tandem. This results in multiple points of failure, complicating things. To minimise failure, each service has a certain number of retries when trying to establish communication with another. When scaled up, these retries add load on the database, and communication formats have to follow strict rules to keep complexity from creeping back in. It's a vicious circle!

#4 Rebuilding a monolith

When you build an application based on the microservices architecture, you may benefit a great deal from robustness and reliability. However, microservices together form a large, complicated system, which can be managed by orchestration platforms like Kubernetes. Even so, if individual teams are managing their own clusters of services, the orchestration, deployment, and management of such a system will quite likely be a pain.

#5 Burning in dependency hell

Microservices are notorious for inviting developers to build services in various languages and then glue them together. While this is an advantage to a certain extent, it complicates dependency management across the entire application. Moreover, dependencies get even more complicated when new versions of tools don't receive immediate support as they are updated. You and your team can go crazy keeping track of the versions and dependencies that must be managed to keep the application running smoothly.

So while the microservices architecture is hot, it is not always the best option, and teams can actually end up making things worse if they make the change unprepared. Yes, the cloud benefits much more when applications are deployed as services rather than as a monolith, but the renowned (or infamous) 'lift and shift' method still exists and works when needed. Ultimately, if you think past the hype, the monolith is not really dead yet; it is in fact still being deployed and run in several organisations. Finally, I want to stress that it's critical that developers and architects take a well-informed decision, keeping in mind all the above factors, before they choose an architecture. Like they say, 'with great power comes great responsibility'; that's exactly what great architecture is about, rather than just jumping on the bandwagon.

Building Scalable Microservices
Why microservices and DevOps are a match made in heaven
What is a multi layered software architecture?

1 in 3 developers want to be entrepreneurs. What does it take to make a successful transition?

Fatema Patrawala
04 Jun 2018
9 min read
Much of the change we have seen over the last decade has been driven by a certain type of person: part developer, part entrepreneur. From Steve Jobs to Bill Gates, Larry Page to Sergey Brin, Mark Zuckerberg to Elon Musk, the list is long. These people have built their entrepreneurial success on software. In doing so, they've changed the way we live and the way we look at the global tech landscape. Part of the reason software is eating the world is the entrepreneurial spirit of people like these. Silicon Valley is a testament to this! That is why the combination of tech and leadership skills in developers remains the holy grail even today.

From this perspective, we weren't surprised to find that a significant number of developers working today have entrepreneurial ambitions. In this year's Skill Up survey, we asked respondents what they hope to be doing in the next five years. A little more than one third of the respondents said they would like to be working as founders of their own companies.

Source: Packt Skill Up Survey

But being a successful entrepreneur requires a completely different set of skills from those that make a successful developer. It requires a different mindset. True, certain skills you might already possess; others you are going to have to acquire. Beyond skills, realizing the power of purpose and values will be a key factor in building a successful venture. You will need to clearly articulate why you're doing something and what you're doing, so that you attract and keep customers who clearly understand the purpose of your business. Let us see what it takes to transition from a developer to a successful entrepreneur.

Think customer first, then product

Whatever you are thinking right now, think bigger! Developers love making things, and that's a good place to begin. In tech, ideas often originate from technical conversations and problem solving: 'let's do X on platform Z because the Y APIs make it possible'. This approach works sometimes, but it isn't a sustainable business model. That's because customers don't think like that. Their thought process is more like: 'Is X for me?', 'Will X solve my problem?', 'Will X save me time or money?'. Think about the customer first and then begin developing your product. You can't create a product and only then check whether it is in demand; that sounds obvious, but it does happen. Do your research, think about the market, and get to know your customer first. In other words, to be a successful entrepreneur, you need to go one step further: yes, care about your product, but think about it from a long-term perspective. Think big picture: what's going to bring real value to the lives of other people? Search for opportunities to solve problems in other people's lives, adopting a larger, more market-oriented approach. Adding a bit of domain knowledge like market research, product-market fit, and market positioning to your to-do list should be a vital first step in your journey.

Remember, being a good technician is not enough

Most developers are comfortable, and probably excel, at being good technicians. This means you're good at writing beautiful code, producing something tangible, and cranking away on each task, moving one step closer to the project launch date. Great. But it takes more than a technician to run a successful business. It's critical to look ahead into both the near and the long term. What's going to provide the best ROI next year?
Where do I want to be in five years' time? To answer such questions, you need to be organized and focused, and it is critically important to determine your goals and objectives. Simply put, you need to evolve from being a problem solver and great executioner into a visionary and strategic thinker. The developer-entrepreneur is a rare breed of doer-thinker.

Be passionate and focussed

Perseverance is the single most important trait found in successful entrepreneurs. As a developer, you are primed to think logically and come to conclusions, which is a great trait to possess in most cases. However, in entrepreneurship there will be times when you do all the right things and meet the right people, and yet meet with failure. To get your company off the ground, you need to be passionate and excited about what you're working on. This enthusiasm will often be the only thing to carry you through late nights, countless setbacks, and tough situations. You must stay open, even when logic dictates otherwise. Most startups fail either because they read the market wrong or because they didn't stay in the race long enough. You also need to know how to focus closely on the very next step that gets you closer to your ultimate goal. There are so many distracting forces when building a business that focusing on one particular task will not be easy, and this is a skill you will need to master religiously.

Become amazing at networking

It truly isn't just about what you know or what you have developed. Sure, what you know is important, but what's far more important is who you know. There's a reason why certain people can make so much progress in such a brief period: these business networking power players command the room by bringing the right people together. As an entrepreneur, if there's one thing you should focus on, it's becoming a truly skilled business networker. Imagine having an idea for a business that's so wonderful that you can pick up the phone and call four or five people who can help you turn that idea into a reality.

Sharpen your strategic and critical thinking skills

As an entrepreneur, it's essential to possess sharp critical thinking skills. When you think critically, you ask the hard, tactical questions while expanding the lens to see the wider picture. Without thinking critically, it's hard to assess whether the most creative application you have developed really has a life out in the world. You are always going to love what you have created, but it is important to wear multiple hats and think from the users' perspective. Get it reviewed by industry stalwarts in your community to catch the small but important things you could have missed while planning. Alongside critical thinking, it is also important to strategize and plan things out for the smooth functioning of your business.

Find and manage the right people

In reality, businesses are like living creatures, whose organs all need to work in harmony. There is no major organ more critical than another, and a failure in one system can bring down all the others. Developers build things, marketers get customers, and salespeople sell products. The trick is to find and employ the right people for your business if you are to create something sustainable. Your ideas and thinking will need to align with the ideas of the people you work with. Only by learning to leverage employees, vendors, and other resources can you build a scalable and sustainable company.
Learn the art of selling an idea

Every entrepreneur needs to play the role of a salesperson, whether they like it or not. To build a successful business venture, you will need to sell your ideas, products, or services to customers, investors, or employees. You should be ready to work and be there when customers are ready to buy; alternatively, you should also know how to let go and move on when they are not. The ability to convince others that you are going to provide them the maximum product value will help you crack that mission-critical deal.

Be flexible. Create contingency plans

Things rarely go to plan in software development. Project scope creeps, clients' expectations rise, and bugs always seem to mysteriously appear. Even the most seasoned developers can't predict all possible scenarios, so they have to be ready with contingencies. This is also true in the world of business start-ups. Despite the best-laid business plans, entrepreneurs need to be prepared for the unexpected and be able to roll with the punches. You need to be well prepared with options A, B, and C. Running a business is like surfing: you've got to be nimble enough, both in your thinking and in your operations, to ride the waves, high and low.

Always measure performance

'If you can't measure it, you can't improve it.' Peter Drucker's point feels obvious, but it is one worth always bearing in mind. If you can't measure something, you'll never improve at it. Measuring the key functions of your business will help you scale it faster. Otherwise, you risk running your firm blindly, without any navigational path to guide it.

Closing Comments

Becoming an entrepreneur and starting your own business is one of life's most challenging and rewarding journeys. Having said that, for all the perks that the entrepreneurial path offers, it's far from being all roses. Being an entrepreneur means being a warrior. It means being clever, hungry, and often a ruthless competitor. If you are a developer turned entrepreneur, we'd love to hear your take on the topic and your own personal journey. Share your comments below or write to us at hub@packtpub.com.

Developers think managers don't know enough about technology. And that's hurting business.
96% of developers believe developing soft skills is important
Don't call us ninjas or rockstars, say developers

Visualizing BigQuery Data with Tableau

Sugandha Lahoti
04 Jun 2018
8 min read
Tableau is an interactive data visualization tool that can be used to create business intelligence dashboards. Like most business intelligence tools, it can pull and manipulate data from a number of sources. What sets it apart is its dedication to helping users create insightful data visualizations. Tableau's drag-and-drop interface makes it easy for users to explore data via elegant charts, and it includes an in-memory engine to speed up calculations on extremely large data sets. In today's tutorial, we will be using Tableau Desktop for visualizing BigQuery data.

Note: This article is an excerpt from the book Learning Google BigQuery, written by Thirukkumaran Haridass and Eric Brown. This book is a comprehensive guide to mastering Google BigQuery to get intelligent insights from your Big Data.

The following section explains how to use Tableau Desktop Edition to connect to BigQuery and pull data from BigQuery to create visuals:

After opening Tableau Desktop, select Google BigQuery under the Connect To a Server section on the left; then enter your login credentials for BigQuery. At this point, all the tables in your dataset should be displayed on the left. You can drag and drop the table you are interested in to the middle section labeled Drop Tables Here. In this case, we want to query the Google Analytics BigQuery test data, so we will click where it says New Custom SQL and enter the following query in the dialog:

SELECT trafficsource.medium as Medium, COUNT(visitId) as Visits
FROM `google.com:analytics-bigquery.LondonCycleHelmet.ga_sessions_20130910`
GROUP BY Medium

Now we can click on Update Now to view the first 10,000 rows of our data. We can also do some simple transformations on our columns, such as changing string values to dates, among others. At the bottom, click on the tab titled Sheet 1 to enter the worksheet view. Tableau's interface allows users to simply drag and drop dimensions and measures from the left side of the report into the central part to create simple text charts, with a feel much like Excel's pivot chart functionality. This makes Tableau easy to transition to for Excel users. From the Dimensions section in the left-hand-side navigation, drag and drop the Medium dimension into the sheet section. Then drag the Visits measure from the Measures section to the Text sub-section in the Marks section. This will create a simple text chart with data from the original query. On the right, click on the button marked Show Me. This should bring up a screen with icons for each graph type that can be created in Tableau. Tableau helps by shading out graph types that are not available based on the data currently selected in the report. It will also make suggestions based on the data available; in this case, a bar chart has been preselected for us, as our data is a text dimension and a numeric measure. Click on the bar chart. Once clicked, the default sideways bar chart will appear with the data we have selected. Click on the Swap Rows and Columns button in the icon bar at the top of the screen to flip the chart from horizontal to vertical.

Map charts in Tableau

One of Tableau's strengths is its ease of use when creating many different types of charts. This is true when creating maps, especially because maps can be very painful to create using other tools. Here is the way to create a simple map in Tableau using BigQuery public data.
The first few steps are the same as in the preceding example:

After opening Tableau Desktop, select Google BigQuery under the Connect To a Server section on the left; then enter your login credentials for BigQuery. At this point, all the tables in your dataset should be displayed on the left-hand side. Click where it says New Custom SQL and enter the following query in the dialog:

SELECT zipcode, SUM(population) AS population
FROM `bigquery-public-data.census_bureau_usa.population_by_zip_2010`
GROUP BY zipcode
ORDER BY population desc

This data is from the 2010 United States Census. The query returns all zip codes in the USA, sorted from most populous to least populous. At the bottom, click on the tab titled Sheet 1 to enter the worksheet view. Double-click on the zipcode dimension in the dimensions section of the left navigation. Clicking on a dimension of zip codes (or any other recognized location dimension, such as latitude/longitude, country names, state names, and so on) will automatically create a map in Tableau. Next, drag the population measure from the measures section in the left navigation and drop it on the Color tab in the marks section. The map will now show the most populous zip codes shaded darker than the less populous zip codes.

The map chart also includes zoom features to make dealing with large maps easy. In the top-left corner of the map there is a magnifying glass icon holding the map zoom features, and clicking on the arrow at the bottom of this icon opens more of them. The icon with a rectangle and a magnifying glass is the selection tool (the first icon to the right of the arrow when hovering over it). Click on this icon and then on the map to select a section of the map to zoom into. After zooming into the California area of the United States, for example, the map shows which areas of the state are the most populous.

Create a word cloud in Tableau

Word clouds are great visualizations for finding the words most referenced in books, publications, and social media. This section will cover creating a word cloud in Tableau using BigQuery public data. The first few steps are the same as in the preceding examples:

After opening Tableau Desktop, select Google BigQuery under the Connect To a Server section on the left; then enter your login credentials for BigQuery. At this point, all the tables in your dataset should be displayed on the left. Click where it says New Custom SQL and enter the following query in the dialog:

SELECT word, SUM(word_count) word_count
FROM `bigquery-public-data.samples.shakespeare`
GROUP BY word
ORDER BY word_count desc

The dataset is from the works of William Shakespeare. The query returns a list of all the words in his works, along with a count of the times each word appears. At the bottom, click on the tab titled Sheet 1 to enter the worksheet view. In the dimensions section, drag and drop the word dimension onto the Text tab in the marks section. Then, from the measures section, drag and drop the word_count measure onto the Size tab in the marks section. There will now be two tabs used in the marks section. Right-click on the size tab labeled word and select Measure | Count. This will create what is called a tree map. In this example, there are far too many words in the list for the visualization to be useful, so drag and drop the word_count measure from the measures section to the filters section. When prompted with How do you want to filter on word_count, select Sum and click on Next.
Select At Least for your condition and type 2000 in the dialog, then click on OK. This will return only those words that have a word count of at least 2,000 (a SQL-side alternative to this filter is sketched at the end of this article). Use the dropdown in the marks card to select Text. Then drag and drop the word_count measure from the measures section to the Color tab in the marks section. This will color each word based on the count for that word, and you should be left with a color-coded word cloud.

Other charts can now be created as individual worksheet tabs, and tabs can then be combined to make what Tableau calls a dashboard. The process of creating a dashboard here is a bit more cumbersome than creating a dashboard in Google Data Studio, but Tableau offers a great deal more customization for its dashboards. This, coupled with all the other features it offers, makes Tableau a much more attractive option, especially for enterprise users.
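As an aside, you could also push the 2,000-word filter into the custom SQL itself rather than filtering in Tableau. Here is a sketch of the article's query with a HAVING clause added; the threshold mirrors the filter described above:

```sql
SELECT word, SUM(word_count) AS word_count
FROM `bigquery-public-data.samples.shakespeare`
GROUP BY word
HAVING SUM(word_count) >= 2000
ORDER BY word_count DESC
```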
Implementing fuzzy logic to bring AI characters alive in Unity-based 3D games

Kunal Chaudhari
01 Jun 2018
16 min read
Fuzzy logic is a fantastic way to represent the rules of your game in a more nuanced way. Perhaps more so than other concepts, fuzzy logic is a very math-heavy topic. Most of the information can be represented purely by mathematical functions. For the sake of teaching the important concepts as they apply to Unity, most of the math has been simplified and implemented using Unity's built-in features. In this tutorial, we will take a look at the concepts behind fuzzy logic systems and implement them in your AI system. Implementing fuzzy logic will make your game characters more believable and give them real-world attributes. This article is an excerpt from a book written by Ray Barrera, Aung Sithu Kyaw, and Thet Naing Swe titled Unity 2017 Game AI Programming - Third Edition. This book will help you leverage the power of artificial intelligence to program smart entities for your games.

Defining fuzzy logic

The simplest way to define fuzzy logic is by comparison to binary logic. Generally, transition rules are looked at as true or false, 0 or 1 values. Is something visible? Is it at least a certain distance away? Even in instances where multiple values were being evaluated, all of the values had exactly two outcomes; thus, they were binary. In contrast, fuzzy values represent a much richer range of possibilities, where each value is represented as a float rather than an integer. We stop looking at values as 0 or 1, and we start looking at them as 0 to 1.

A common example used to describe fuzzy logic is temperature. Fuzzy logic allows us to make decisions based on non-specific data. I can step outside on a sunny Californian summer's day and ascertain that it is warm, without knowing the temperature precisely. Conversely, if I were to find myself in Alaska during the winter, I would know that it is cold, again, without knowing the exact temperature. These concepts of cold, cool, warm, and hot are fuzzy ones. There is a good amount of ambiguity as to the point at which we go from warm to hot. Fuzzy logic allows us to model these concepts as sets and determine their validity or truth by using a set of rules.

When people make decisions, there are some gray areas. That is to say, it's not always black and white. The same concept applies to agents that rely on fuzzy logic. Say you hadn't eaten in a few hours, and you were starting to feel a little hungry. At which point were you hungry enough to go grab a snack? You could look at the time right after a meal as 0, and 1 would be the point where you approached starvation. The following figure illustrates this point:

When making decisions, there are many factors that determine the ultimate choice. This leads into another aspect of fuzzy logic controllers: they can take into account as much data as necessary. Let's continue to look at our "should I eat?" example. We've only considered one value for making that decision, which is the time since the last time you ate. However, there are other factors that can affect this decision, such as how much energy you're expending and how lazy you are at that particular moment. Or am I the only one to use that as a deciding factor? Either way, you can see how multiple input values can affect the output, which we can think of as the "likeliness to have another meal." Fuzzy logic systems can be very flexible due to their generic nature. You provide input, the fuzzy logic provides an output. What that output means to your game is entirely up to you.
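To put the "should I eat?" example in Unity terms, a fuzzy input is often just a normalized float. Here is a minimal sketch; the class name, field names, and the four-hour starvation window are assumptions for illustration, not part of the book's sample project:

using UnityEngine;

public class HungerSensor : MonoBehaviour
{
    // Hypothetical values: when the agent last ate (Time.time seconds),
    // and how many hours it takes to go from "just ate" to "starving".
    public float lastMealTime = 0f;
    public float hoursUntilStarving = 4f;

    // Returns 0 right after a meal and approaches 1 near starvation.
    public float GetHunger()
    {
        float hoursSinceMeal = (Time.time - lastMealTime) / 3600f;
        return Mathf.Clamp01(hoursSinceMeal / hoursUntilStarving);
    }
}

The important part is the shape of the value: instead of a bool "is hungry", the agent works with a float from 0 to 1 that other systems can weigh against further inputs.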
We've primarily looked at how the inputs would affect a decision, which, in reality, is taking the output and using it in a way the computer, our agent, can understand. However, the output can also be used to determine how much of something to do, how fast something happens, or for how long something happens. For example, imagine your agent is a car in a sci-fi racing game that has a "nitro-boost" ability that lets it expend a resource to go faster. Our 0 to 1 value can represent a normalized amount of time for it to use that boost, or perhaps a normalized amount of fuel to use.

Picking fuzzy systems over binary systems

With most things in game programming, we must evaluate the requirements of our game and the technology and hardware limitations when deciding on the best way to tackle a problem. As you might imagine, there is a performance cost associated with going from a simple yes/no system to a more nuanced fuzzy logic one, which is one of the reasons we may opt out of using it. Of course, being a more complex system doesn't necessarily always mean it's a better one. There will be times when you just want the simplicity and predictability of a binary system because it may fit your game better. While there is some truth to the old adage, "the simpler, the better", one should also take into account the saying, "everything should be made as simple as possible, but not simpler". Though the quote is widely attributed to Albert Einstein, the father of relativity, it's not entirely clear who said it. The important thing to consider is the meaning of the quote itself. You should make your AI as simple as your game needs it to be, but not simpler. Pac-Man's AI works perfectly for the game; it's simple enough. However, AI that simple would be out of place in a modern shooter or strategy game.

Using fuzzy logic

Once you understand the simple concepts behind fuzzy logic, it's easy to start thinking of the many ways in which it can be useful. In reality, it's just another tool in our belt, and each job requires different tools. Fuzzy logic is great at taking some data, evaluating it in a similar way to how a human would (albeit in a much simpler way), and then translating the data back into information that is usable by the system. Fuzzy logic controllers have several real-world use cases. Some are more obvious than others, and while these are by no means one-to-one comparisons to our usage in game AI, they serve to illustrate a point:

- Heating, ventilation, and air conditioning (HVAC) systems: The temperature example when talking about fuzzy logic is not only a good theoretical approach to explaining fuzzy logic, but also a very common real-world example of fuzzy logic controllers in action.
- Automobiles: Modern automobiles come equipped with very sophisticated computerized systems, from the air conditioning system (again), to fuel delivery, to automated braking systems. In fact, putting computers in automobiles has resulted in far more efficient systems than the old binary systems that were sometimes used.
- Your smartphone: Ever notice how your screen dims and brightens depending on how much ambient light there is? Modern smartphone operating systems look at ambient light, the color of the data being displayed, and the current battery life to optimize screen brightness.
- Washing machines: Not my washing machine necessarily, as it's quite old, but most modern washers (from the last 20 years) make some use of fuzzy logic.
Load size, water dirtiness, temperature, and other factors are taken into account from cycle to cycle to optimize water use, energy consumption, and time. If you take a look around your house, there is a good chance you'll find a few interesting uses of fuzzy logic, and I mean besides your computer, of course. While these are neat uses of the concept, they're not particularly exciting or game-related. I'm partial to games involving wizards, magic, and monsters, so let's look at a more relevant example.

Implementing a simple fuzzy logic system

For this example, we're going to use my good friend, Bob, the wizard. Bob lives in an RPG world, and he has some very powerful healing magic at his disposal. Bob has to decide when to cast this magic on himself based on his remaining health points (HPs). In a binary system, Bob's decision-making process might look like this:

if (healthPoints <= 50) {
    CastHealingSpell(me);
}

We see that Bob's health can be in one of two states: above 50, or not. Nothing wrong with that, but let's have a look at what the fuzzy version of this same scenario might look like, starting with determining Bob's health status:

Before the panic sets in upon seeing charts and values that may not quite mean anything to you right away, let's dissect what we're looking at. Our first impulse might be to try to map the probability that Bob will cast a healing spell to how much health he is missing. That would, in simple terms, just be a linear function. Nothing really fuzzy about that; it's a linear relationship, and while it is a step above a binary decision in terms of complexity, it's still not truly fuzzy.

Enter the concept of a membership function. It's key to our system, as it allows us to determine how true a statement is. In this example, we're not simply looking at raw values to determine whether or not Bob should cast his spell; instead, we're breaking it up into logical chunks of information for Bob to use in order to determine what his course of action should be. In this example, we're comparing three statements and evaluating not only how true each one is, but which is the most true:

- Bob is in a critical condition
- Bob is hurt
- Bob is healthy

If you're into official terminology, we call this determining the degree of membership to a set. Once we have this information, our agent can determine what to do with it next. At a glance, you'll notice it's possible for two statements to be true at a time. Bob can be in a critical condition and hurt. He can also be somewhat hurt and a little bit healthy. You're free to pick the thresholds for each, but, in this example, let's evaluate these statements as per the preceding graph. The vertical value represents the degree of truth of a statement as a normalized float (0 to 1):

- At 0 percent health, we can see that the critical statement evaluates to 1. It is absolutely true that Bob is critical when his health is gone.
- At 40 percent health, Bob is hurt, and that is the truest statement.
- At 100 percent health, the truest statement is that Bob is healthy.

Anything outside of these absolutely true statements is squarely in fuzzy territory. For example, let's say Bob's health is at 65 percent. In that same chart, we can visualize it like this: the vertical line drawn through the chart at 65 represents Bob's health. As we can see, it intersects both sets, which means that Bob is a little bit hurt, but he's also kind of healthy. At a glance, we can tell, however, that the vertical line intercepts the Hurt set at a higher point in the graph.
We can take this to mean that Bob is more hurt than he is healthy. To be specific, Bob is 37.5 percent hurt, 12.5 percent healthy, and 0 percent critical.

Let's take a look at this in code; open up our FuzzySample scene in Unity. The hierarchy will look like this: the important game object to look at is Fuzzy Example. This contains the logic that we'll be looking at. In addition to that, we have our Canvas containing all of the labels and the input field and button that make this example work. Lastly, there's the Unity-generated EventSystem and Main Camera, which we can disregard. There isn't anything special going on with the setup for the scene, but it's a good idea to become familiar with it, and you are encouraged to poke around and tweak it to your heart's content after we've looked at why everything is there and what it all does.

With the Fuzzy Example game object selected, the inspector will look similar to the following image. Our sample implementation is not necessarily something you'll take and implement in your game as it is, but it is meant to illustrate the previous points in a clear manner.

We use Unity's AnimationCurve for each different set. It's a quick and easy way to visualize the very same lines in our earlier graph. Unfortunately, there is no straightforward way to plot all the lines in the same graph, so we use a separate AnimationCurve for each set. In the preceding screenshot, they are labeled Critical, Hurt, and Healthy. The neat thing about these curves is that they come with a built-in method to evaluate them at a given point (t). For us, t does not represent time, but rather the amount of health Bob has. As in the preceding graph, the Unity example looks at an HP range of 0 to 100.

These curves also provide a simple user interface for editing the values. You can simply click on the curve in the inspector, which opens up the curve editing window. You can add points, move points, change tangents, and so on, as shown in the following screenshot:

Unity's curve editor window

Our example focuses on triangle-shaped sets; that is, linear graphs for each set. You are by no means restricted to this shape, though it is the most common. You could use a bell curve or a trapezoid, for that matter. To keep things simple, we'll stick to the triangle. You can learn more about Unity's AnimationCurve editor at http://docs.unity3d.com/ScriptReference/AnimationCurve.html.

The rest of the fields are just references to the different UI elements used in code that we'll be looking at later in this chapter. The names of these variables are fairly self-explanatory, however, so there isn't much guesswork to be done here. Next, we can take a look at how the scene is set up. If you play the scene, the game view will look something similar to the following screenshot:

A simple UI to demonstrate fuzzy values

We can see that we have three distinct groups, representing each question from the "Bob, the wizard" example. How healthy is Bob, how hurt is Bob, and how critical is Bob? For each set, upon evaluation, the value that starts off as 0 true will dynamically adjust to represent the actual degree of membership. There is an input box in which you can type a percentage of health to use for the test. No fancy controls are in place for this, so be sure to enter a value from 0 to 100. For the sake of consistency, let's enter a value of 65 into the box and then press the Evaluate! button. This will run some code, look at the curves, and yield the exact same results we saw in our graph earlier.
While this shouldn't come as a surprise (the math is what it is, after all), there are few things more important in game programming than testing your assumptions, and sure enough, we've tested and verified our earlier statement. After running the test by hitting the Evaluate! button, the game scene will look similar to the following screenshot:

This is how Bob is doing at 65 percent health

Again, the values turn out to be 0.125 (or 12.5 percent) healthy and 0.375 (or 37.5 percent) hurt. At this point, we're still not doing anything with this data, but let's take a look at the code that's handling everything:

using UnityEngine;
using UnityEngine.UI;
using System.Collections;

public class FuzzySample1 : MonoBehaviour {
    private const string labelText = "{0} true";

    public AnimationCurve critical;
    public AnimationCurve hurt;
    public AnimationCurve healthy;

    public InputField healthInput;

    public Text healthyLabel;
    public Text hurtLabel;
    public Text criticalLabel;

    private float criticalValue = 0f;
    private float hurtValue = 0f;
    private float healthyValue = 0f;

We start off by declaring some variables. The labelText is simply a constant we use to plug into our label. We replace {0} with the real value. Next, we declare the three AnimationCurve variables that we mentioned earlier. Making these public or otherwise accessible from the inspector is key to being able to edit them visually (though it is possible to construct curves by code), which is the whole point of using them. The following four variables are just references to UI elements that we saw earlier in the screenshot of our inspector, and the last three variables are the actual float values that our curves will evaluate into:

    private void Start () {
        SetLabels();
    }

    /*
     * Evaluates all the curves and returns float values
     */
    public void EvaluateStatements() {
        if (string.IsNullOrEmpty(healthInput.text)) {
            return;
        }
        float inputValue = float.Parse(healthInput.text);

        healthyValue = healthy.Evaluate(inputValue);
        hurtValue = hurt.Evaluate(inputValue);
        criticalValue = critical.Evaluate(inputValue);

        SetLabels();
    }

The Start() method doesn't require much explanation. We simply update our labels here so that they initialize to something other than the default text. The EvaluateStatements() method is much more interesting. We first do some simple null checking for our input string. We don't want to try and parse an empty string, so we return out of the function if it is empty. As mentioned earlier, there is no check in place to validate that you've input a numerical value, so be sure not to accidentally input a non-numerical value or you'll get an error. For each of the AnimationCurve variables, we call the Evaluate(float t) method, where we replace t with the parsed value we get from the input field. In the example we ran, that value would be 65. Then, we update our labels once again to display the values we got. The code looks similar to this:

    /*
     * Updates the GUI with the evaluated values based
     * on the health percentage entered by the
     * user.
     */
    private void SetLabels() {
        healthyLabel.text = string.Format(labelText, healthyValue);
        hurtLabel.text = string.Format(labelText, hurtValue);
        criticalLabel.text = string.Format(labelText, criticalValue);
    }
}

We simply take each label and replace the text with a formatted version of our labelText constant that replaces the {0} with the real value.
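As a next step, the agent needs to act on these values. One simple approach (our addition, not part of the book's sample scene) is to "defuzzify" by picking the most-true statement. Here is a minimal sketch of a method that could be added to the FuzzySample1 class above and called at the end of EvaluateStatements():

    // A sketch: pick the statement with the highest degree of membership
    // and act on it. The log messages stand in for real game actions.
    private void DecideAction() {
        if (criticalValue >= hurtValue && criticalValue >= healthyValue) {
            Debug.Log("Bob casts his healing spell immediately.");
        } else if (hurtValue >= healthyValue) {
            Debug.Log("Bob considers healing soon.");
        } else {
            Debug.Log("Bob is mostly healthy and carries on.");
        }
    }

Taking the maximum membership is only one of several defuzzification strategies; it is crude but easy to reason about, which makes it a good starting point before moving on to weighted approaches.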
To summarize, we learned how fuzzy logic is used in the real world, and how it can help illustrate vague concepts in a way binary systems cannot. We also learned to implement our own fuzzy logic controllers using the concepts of membership functions, degrees of membership, and fuzzy sets. If you enjoyed this excerpt, check out the book Unity 2017 Game AI Programming - Third Edition, to build exciting and richer games by mastering advanced Artificial Intelligence concepts in Unity.

Unity Machine Learning Agents: Transforming Games with Artificial Intelligence
Put your game face on! Unity 2018.1 is now available
How to create non-player Characters (NPC) with Unity 2018
Developers think managers don’t know enough about technology. And that’s hurting business.

Fatema Patrawala
01 Jun 2018
7 min read
It's not hard to find jokes online about management not getting software. There has long been a perception that those making key business decisions don't actually understand the technology and software that is at the foundation of just about every organization's operations. Now, research has confirmed that the management and engineering divide is real. In this year's Skill Up survey, which we ran among 8,000 developers, we found that more than 60% of developers believe they know more about technology than their manager.

Source: Packtpub Skill Up Survey 2018

Developer perceptions on the topic aren't simply a question of ego; they're symptomatic of barriers to business success. 42% of the respondents listed management's lack of technical knowledge as a barrier to success. It also appears as one of the top three organizational barriers to achieving business goals.

Source: Packtpub Skill Up Survey 2018

To dissect the technical challenges faced by organizations, we also asked respondents to pick the top technical barriers to success. As can be seen from the graph below, a lot of the barriers directly or indirectly relate to management's understanding of technology. Take, for example, management's decisions to continue with legacy systems, to invest in or divest from certain projects, or its choice of training programs and vendors.

Source: Packtpub Skill Up Survey 2018

Management tends to weigh decisions by the magnitude of investment against returns in the immediate or medium term. Also, unless there is hard evidence of performance benefits or cost savings, management is generally wary of approving new projects or spending more on existing ones. This approach is generally robust and has saved businesses precious dollars by curbing pet projects, unruly experiments, and research. However, with technology, things aren't always so straightforward. One day some tool is the talk of the town (think Adobe Flash) and everyone seems to be learning it or buying it; then, a few months or a couple of years down the line, it has gone completely off the radar. Conversely, something that didn't exist yesterday, or sat in some obscure research lab (think self-driving tech, gene editing, robotics, and so on), is now changing the rules of the game, and businesses whose leadership teams have their ears to the ground topple everyone else, including time-tested veterans.

Early adopters and early jumpers make the most of tech trends. This requires those in a position to make decisions within organizations to be aware of the changing tech landscape, to the extent that they can predict what's going to replace the current reigning tech and in what timeframe. It requires that management is aware of what's happening in adjacent industries, or even in seemingly unrelated ones. Who knew Unity (a game platform), Nvidia (a chipmaker), and Google (a search engine) would enter the auto industry (all thanks to self-driving tech)? While these are the high-level factors, let us look at each one in detail.

Why do developers believe there is a management knowledge gap?

A few reasons justify this response:

Rapid pace of technology change: The rapid rate of technology change is significantly impacting IT strategy. Not only are there plenty of emerging technology trends, from AI to cloud, they're all coming at the same time, and even affecting each other.
It's clear that keeping up with the rate of digital advancement (for example, automation, harnessing big data, emerging technologies, and cyber security) will pose a significant challenge for leaders and senior management, adding a whole new layer of complexity as they try to stay ahead of the competition and innovate.

Balancing strategic priorities while complying with changing regulations: Another major challenge for senior management is to balance strategic priorities with the regulatory demands of the industry. In 2018, GDPR has been setting a new benchmark for the protection of consumer data rights by making organisations more accountable. Governed by GDPR, organisations and senior management will now be responsible for guarding every piece of information connected to an individual. In order to be GDPR compliant, management will begin introducing the correct security protocols in their business processes. This will include encryption, two-factor authentication, and key management strategies to avoid severe legal, financial, and reputational consequences. To make the right decisions, they will need to be technically competent enough to understand the strengths and limitations of the tools and techniques involved in the compliance process.

Finding the right IT talent: Identifying the right talent with the skill sets you need is a big challenge for senior management. They are constantly trying to find and hire IT talent, such as skilled data scientists and app developers, to accommodate and capitalize on emerging trends in cloud and the API economy. The team has to take care to bring in the right people and let them create magic with their development skills. Alongside this, they also need to reinvent how they manage, attract, retain, motivate, and compensate these folks. Responses to this Quora question highlight how difficult a lengthy recruitment cycle can be for managers. And the worst feeling is when, after all that effort, the candidate declines the offer for a more lucrative one.

So much promising technology, so little time: Time is tight in business and tech. Keeping pace with how quickly innovative and promising technologies crop up is easier said than done. There are so many interesting technologies out there, and there's so little time to implement them fast enough. Before anyone can choose a technology that might work for the company, a new product appears on the horizon. Once you see something you like, there's always something else popping up. Making all the parts of a project work together for an outstanding customer experience takes time, and so does implementing these technologies. When juggling all of these moving parts, managers are always looking for technologies and ways to implement great things faster. That's a major reason why companies have a CTO, a VP of engineering, and a CEO functioning separately at their own levels and in their own departments.

Murphy's law of unforeseen IT problems: One of the biggest problems when you're working in tech is Murphy's Law. This is the law that states "Anything that can go wrong, will -- at the worst possible moment." It doesn't matter how hard we have worked, how strong the plan is, or how many times things are tested. Once you get into the project, if something has to go wrong, it will. There are times we face IT problems that we don't see coming. It doesn't matter how much you try to plan -- stuff happens.
When management doesn't properly understand technology, it's often hard for them to appreciate how problems arise and how long it can take to solve them. That puts pressure on engineers and developers, which can make managing projects even harder.

Overcoming perfectionism with an agile mindset: Senior management often wants things done yesterday, and they want it done perfectly. Of course, this is impossible. While Agile can help improve efficiency in the development process, perfectionism is anathema to Agile. It's about delivering quickly and consistently, not building something perfect and then deploying it. Getting management to understand this is a challenge for engineers; good management teams will understand Agile and what the trade-offs are. At the forefront of everyone's mind should be what the customer needs and what is going to benefit the business.

We conclude with a Dilbert comic, on a lighter note.

Source

With purpose, process, and changing technologies, managers need to change the way they function and manage. People don't leave companies, they leave bad managers, and the same could be said of technical workers: they don't leave bad companies, they leave non-technical managers who make bad technical decisions.

Don't call us ninjas or rockstars, say developers
96% of developers believe developing soft skills is important
Building RESTful web services with Kotlin

Natasha Mathur
01 Jun 2018
9 min read
Kotlin has been eating up the Java world. It has already become a hit in the Android ecosystem, which was dominated by Java, and has been welcomed with open arms. Kotlin is not limited to Android development and can be used to develop server-side and client-side web applications as well. Kotlin is 100% compatible with the JVM, so you can use any existing frameworks, such as Spring Boot, Vert.x, or JSF, for writing Java applications. In this tutorial, we will learn how to implement RESTful web services using Kotlin. This article is an excerpt from the book 'Kotlin Programming Cookbook', written by Aanand Shekhar Roy and Rashi Karanpuria.

Setting up dependencies for building RESTful services

In this recipe, we will lay the foundation for developing the RESTful service. We will see how to set up dependencies and run our first Spring Boot web application. Spring Boot provides great support for Kotlin, which makes it easy to work with Kotlin. So let's get started. We will be using IntelliJ IDEA and the Gradle build system. If you don't have them, you can get IntelliJ IDEA from https://www.jetbrains.com/idea/.

How to do it…

Let's follow the given steps to set up the dependencies for building RESTful services:

1. First, we will create a new project in IntelliJ IDEA. We will be using the Gradle build system for maintaining dependencies, so create a Gradle project.
2. When you have created the project, just add the following lines to your build.gradle file. These lines contain the Spring Boot dependencies that we will need to develop the web app:

buildscript {
    ext.kotlin_version = '1.1.60' // Required for Kotlin integration
    ext.spring_boot_version = '1.5.4.RELEASE'
    repositories {
        jcenter()
    }
    dependencies {
        classpath "org.jetbrains.kotlin:kotlin-gradle-plugin:$kotlin_version" // Required for Kotlin integration
        classpath "org.jetbrains.kotlin:kotlin-allopen:$kotlin_version" // See https://kotlinlang.org/docs/reference/compiler-plugins.html#kotlin-spring-compiler-plugin
        classpath "org.springframework.boot:spring-boot-gradle-plugin:$spring_boot_version"
    }
}

apply plugin: 'kotlin' // Required for Kotlin integration
apply plugin: "kotlin-spring" // See https://kotlinlang.org/docs/reference/compiler-plugins.html#kotlin-spring-compiler-plugin
apply plugin: 'org.springframework.boot'

jar {
    baseName = 'gs-rest-service'
    version = '0.1.0'
}

sourceSets {
    main.java.srcDirs += 'src/main/kotlin'
}

repositories {
    jcenter()
}

dependencies {
    compile "org.jetbrains.kotlin:kotlin-stdlib:$kotlin_version" // Required for Kotlin integration
    compile 'org.springframework.boot:spring-boot-starter-web'
    testCompile('org.springframework.boot:spring-boot-starter-test')
}

3. Let's now create an App.kt file in the following directory hierarchy:

It is important to keep the App.kt file in a package (we've used the college package). Otherwise, you will get an error that says the following:

** WARNING ** : Your ApplicationContext is unlikely to start due to a `@ComponentScan` of the default package.

The reason for this error is that if you don't include a package declaration, it considers it a "default package," which is discouraged and avoided.

4. Now, let's try to run the App.kt class.
We will use the following code to test that it's running:

@SpringBootApplication
open class App {
}

fun main(args: Array<String>) {
    SpringApplication.run(App::class.java, *args)
}

Now run the project; if everything goes well, you will see output with the following line at the end:

Started AppKt in 5.875 seconds (JVM running for 6.445)

We now have our application running on our embedded Tomcat server. If you go to http://localhost:8080, you will see an error as follows: the preceding error is a 404 error, and the reason for it is that we haven't told our application to do anything when a user is on the / path.

Creating a REST controller

In the previous recipe, we learned how to set up dependencies for creating RESTful services. Finally, we launched our backend on the http://localhost:8080 endpoint, but got a 404 error as our application wasn't configured to handle requests at that path (/). We will start from that point and learn how to create a REST controller. Let's get started! We will be using IntelliJ IDEA for coding purposes. For setting up of the environment, refer to the previous recipe. You can also find the source in the repository at https://gitlab.com/aanandshekharroy/kotlin-webservices.

How to do it…

In this recipe, we will create a REST controller that will fetch us information about students in a college. We will be using an in-memory database using a list to keep things simple:

1. Let's first create a Student class having name and roll number properties:

package college

class Student() {
    lateinit var roll_number: String
    lateinit var name: String

    constructor(
        roll_number: String,
        name: String): this() {
        this.roll_number = roll_number
        this.name = name
    }
}

2. Next, we will create the StudentDatabase endpoint, which will act as a database for the application:

@Component
class StudentDatabase {
    private val students = mutableListOf<Student>()
}

Note that we have annotated the StudentDatabase class with @Component, which means its lifecycle will be controlled by Spring (because we want it to act as a database for our application).

3. We also need a @PostConstruct annotation, because it's an in-memory database that is destroyed when the application closes, so we would like to have a filled database whenever the application launches. We will create an init method, which will add a few items into the "database" at startup time:

@PostConstruct
private fun init() {
    students.add(Student("2013001", "Aanand Shekhar Roy"))
    students.add(Student("2013165", "Rashi Karanpuria"))
}

4. Now, we will create a few other methods that will help us deal with our database:

getStudents: Gets the list of students present in our database:

fun getStudents() = students

addStudent: This method will add a student to our database:

fun addStudent(student: Student): Boolean {
    students.add(student)
    return true
}

5. Now let's put this database to use. We will be creating a REST controllerler that will handle the requests. We will create a StudentController and annotate it with @RestController. Using @RestController is simple, and it's the preferred method for creating MVC RESTful web services. Once created, we need to provide our database using Spring dependency injection, for which we will need the @Autowired annotation. Here's how our StudentController looks:

@RestController
class StudentController {
    @Autowired
    private lateinit var database: StudentDatabase
}

6. Now we will set our response to the / path. We will show the list of students in our database. For that, we will simply create a method that lists out students.
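Before wiring up the request mapping, a quick aside on the Student class from step 1 (our suggestion, not the book's code): in idiomatic Kotlin, a simple value holder like this is usually written as a data class, which removes the lateinit properties and the secondary constructor, and generates equals(), hashCode(), and toString() for free:

package college

// A sketch of an idiomatic alternative to the Student class above.
data class Student(
    val roll_number: String,
    val name: String
)

One caveat: for Jackson to deserialize JSON into a constructor-based class like this (as the POST endpoint later in this recipe does), you would typically also need the jackson-module-kotlin dependency, so treat this as an optional refactor rather than a drop-in replacement.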
Returning to the controller: we will need to annotate the method with @RequestMapping and provide parameters such as the path and the request method (GET, POST, and such):

@RequestMapping("", method = arrayOf(RequestMethod.GET))
fun students() = database.getStudents()

This is what our controller looks like now. It is a simple REST controller:

package college

import org.springframework.beans.factory.annotation.Autowired
import org.springframework.web.bind.annotation.RequestMapping
import org.springframework.web.bind.annotation.RequestMethod
import org.springframework.web.bind.annotation.RestController

@RestController
class StudentController {
    @Autowired
    private lateinit var database: StudentDatabase

    @RequestMapping("", method = arrayOf(RequestMethod.GET))
    fun students() = database.getStudents()
}

Now when you restart the server and go to http://localhost:8080, you will see the response as follows: as you can see, Spring is intelligent enough to provide the response in the JSON format, which makes it easy to design APIs.

Now let's try to create another endpoint that will fetch a student's details from a roll number:

@GetMapping("/student/{roll_number}")
fun studentWithRollNumber(
    @PathVariable("roll_number") roll_number: String) = database.getStudentWithRollNumber(roll_number)

Note that this assumes a getStudentWithRollNumber helper on StudentDatabase, which the excerpt doesn't show; something along the lines of fun getStudentWithRollNumber(roll_number: String) = students.firstOrNull { it.roll_number == roll_number } would do.

Now, if you try the http://localhost:8080/student/2013001 endpoint, you will see the given output:

{"roll_number":"2013001","name":"Aanand Shekhar Roy"}

Next, we will try to add a student to the database. We will be doing it via the POST method:

@RequestMapping("/add", method = arrayOf(RequestMethod.POST))
fun addStudent(@RequestBody student: Student) =
    if (database.addStudent(student)) student
    else throw Exception("Something went wrong")

There's more…

So far, our server has been dependent on the IDE. We would definitely want to make it independent of an IDE. Thanks to Gradle, it is very easy to create a runnable JAR, just with the following:

./gradlew clean bootRepackage

The preceding command is platform independent and uses the Gradle build system to build the application. Now, you just need to type the mentioned command to run it:

java -jar build/libs/gs-rest-service-0.1.0.jar

You can then see the following output as before:

Started AppKt in 4.858 seconds (JVM running for 5.548)

This means your server is running successfully.
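With the server running, you can exercise the endpoints from another terminal. Here is a quick sketch using curl (the roll number is one of the sample records added in init(); the JSON payload in the last call is just an illustrative value):

curl http://localhost:8080/
curl http://localhost:8080/student/2013001
curl -X POST -H "Content-Type: application/json" \
     -d '{"roll_number":"2013999","name":"Test Student"}' \
     http://localhost:8080/add

The first call returns the full student list as JSON, the second fetches a single record by roll number, and the third posts a new student to the in-memory database.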
Creating the Application class for Spring Boot

The SpringApplication class is used to bootstrap our application. We've used it in the previous recipes; we will see how to create the Application class for Spring Boot in this recipe. We will be using IntelliJ IDEA for coding purposes. To set up the environment, read the previous recipes, especially the Setting up dependencies for building RESTful services recipe.

How to do it…

If you've used Spring Boot before, you must be familiar with using @Configuration, @EnableAutoConfiguration, and @ComponentScan in your main class. These were used so frequently that Spring Boot provides a convenient @SpringBootApplication alternative. Spring Boot looks for the public static main method, and we will use a top-level function outside the Application class. If you noted, while setting up the dependencies, we used the kotlin-spring plugin; hence, we don't need to make the Application class open. Here's an example of the Spring Boot application:

package college

import org.springframework.boot.SpringApplication
import org.springframework.boot.autoconfigure.SpringBootApplication

@SpringBootApplication
class Application

fun main(args: Array<String>) {
    SpringApplication.run(Application::class.java, *args)
}

The Spring Boot application executes the static run() method, which takes two parameters and starts an autoconfigured Tomcat web server when the Spring application is started. When everything is set, you can start the application by executing the following command:

./gradlew bootRun

If everything goes well, you will see the following output in the console, along with the last message, Started AppKt in xxx seconds. This means that your application is up and running. In order to run it as an independent server, you need to create a JAR and then execute it as follows:

./gradlew clean bootRepackage

Now, to run it, you just need to type the following command:

java -jar build/libs/gs-rest-service-0.1.0.jar

We learned how to set up dependencies for building RESTful services, create a REST controller, and create the Application class for Spring Boot. If you are interested in learning more about Kotlin, then be sure to check out the 'Kotlin Programming Cookbook'.

Build your first Android app with Kotlin
5 reasons to choose Kotlin over Java
Getting started with Kotlin programming
Forget C and Java. Learn Kotlin: the next universal programming language
Network programming 101 with GAWK (GNU AWK)

Pavan Ramchandani
31 May 2018
12 min read
In today's tutorial, we will learn about the networking aspects of GAWK, for example, working with TCP/IP on both the client side and the server side. We will also explore HTTP services to help you get going with networking in AWK. This tutorial is an excerpt from a book written by Shiwang Kalkhanda, titled Learning AWK Programming.

The AWK programming language was developed as a pattern-matching language for text manipulation; however, GAWK has advanced features, such as file-like handling of network connections. We can perform simple TCP/IP connection handling in GAWK with the help of special filenames. GAWK extends the two-way I/O mechanism used with the |& operator to simple networking using these special filenames, which hide the complex details of socket programming from the programmer. The special filename for network communication is made up of multiple fields, all of which are mandatory. The following is the syntax for creating a filename for network communication:

/net-type/protocol/local-port/remote-host/remote-port

Each field is separated from the next with a forward slash. Specifying all of the fields is mandatory. If any field is not valid for a protocol, or you want the system to pick a default value for that field, it is set to 0. The following list illustrates the meaning of the different fields used in creating the file for network communication:

- net-type: Its value is inet4 for IPv4, inet6 for IPv6, or inet to use the system default (which is generally IPv4).
- protocol: It is either tcp or udp for a TCP or UDP IP connection. It is advised you use the TCP protocol for networking. UDP is used when low overhead is a priority.
- local-port: Its value decides which port on the local machine is used for communication with the remote system. On the client side, its value is generally set to 0 to indicate any free port to be picked up by the system itself. On the server side, its value is other than 0 because the service is provided on a specific, publicly known port number or service name, such as http, smtp, and so on.
- remote-host: It is the remote hostname that is to be at the other end of the connection. For the server side, its value is set to 0 to indicate that the server is open to all other hosts for connection. For the client side, its value is fixed to one remote host and hence, it is always different from 0. This name can be represented either through symbols, such as www.google.com, or numbers, such as 123.45.67.89.
- remote-port: It is the port on which the remote machine will communicate across the network. For clients, its value is other than 0, to indicate to which port they are connecting on the remote machine. For servers, its value is the port on which they want the connection from the client to be established. We can use a service name here, such as ftp or http, or a port number, such as 80, 21, and so on.

TCP client and server (/inet/tcp)

TCP guarantees that data is received at the other end and in the same order as it was transmitted, so always use TCP. In the following example, we will create a TCP server (sender) to send the current date and time of the server to the client. The server uses the strftime() function with the coprocess operator to send the timestamp to any client connecting on port 8080. The remote host and remote port could be any client, so both values are kept as 0.
The server connection is closed by passing the special filename to the close() function, as follows:

$ vi tcpserver.awk

#TCP-Server
BEGIN {
  print strftime() |& "/inet/tcp/8080/0/0"
  close("/inet/tcp/8080/0/0")
}

Now, open one Terminal and run this program before running the client program, as follows:

$ awk -f tcpserver.awk

Next, we create the TCP client (receiver) to receive the data sent by the server. Here, we first create the client connection and read the received data with getline, using the coprocess operator. The local-port value is set to 0 to be automatically chosen by the system, the remote-host is set to localhost, and the remote-port is set to the TCP server's port, 8080. After that, the received message is printed using the print $0 command, and finally, the client connection is closed using the close() function, as follows:

$ vi tcpclient.awk

#TCP-client
BEGIN {
  "/inet/tcp/0/localhost/8080" |& getline
  print $0
  close("/inet/tcp/0/localhost/8080")
}

Now, execute the tcpclient program in another Terminal, as follows:

$ awk -f tcpclient.awk

The output of the previous code is as follows:

Fri Feb 9 09:42:22 IST 2018

UDP client and server (/inet/udp)

The server and client programs that use the UDP protocol for communication are almost identical to their TCP counterparts, with the only difference being that the protocol is changed from tcp to udp. So, the UDP server and UDP client programs can be written as follows:

$ vi udpserver.awk

#UDP-Server
BEGIN {
  print strftime() |& "/inet/udp/8080/0/0"
  "/inet/udp/8080/0/0" |& getline
  print $0
  close("/inet/udp/8080/0/0")
}

$ awk -f udpserver.awk

Here, one addition has been made to the client program: in the client, we send the message "hello from client!" to the server. So when we execute these programs, on the Terminal where the udpclient.awk program is run we get the remote system's date and time, and on the Terminal where the udpserver.awk program is run we get the hello message from the client:

$ vi udpclient.awk

#UDP-client
BEGIN {
  print "hello from client!" |& "/inet/udp/0/localhost/8080"
  "/inet/udp/0/localhost/8080" |& getline
  print $0
  close("/inet/udp/0/localhost/8080")
}

$ awk -f udpclient.awk

GAWK can be used to open direct sockets only. Currently, there is no way to access services available over an SSL connection, such as https, smtps, pop3s, imaps, and so on.
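Before moving on to HTTP, here is a minimal sketch of a two-way TCP exchange built from the same pieces, in case you want the server to read from the client as well. This is our illustration, not the book's code; the special filenames follow the examples above, and the message text is purely an example:

$ vi echoserver.awk

#Echo-Server: reads one line from a client and sends it back
BEGIN {
  srv = "/inet/tcp/8080/0/0"
  srv |& getline msg
  print "echo: " msg |& srv
  close(srv)
}

$ vi echoclient.awk

#Echo-client: sends one line, then prints the server's reply
BEGIN {
  clt = "/inet/tcp/0/localhost/8080"
  print "ping" |& clt
  clt |& getline reply
  print reply
  close(clt)
}

Run echoserver.awk in one Terminal and echoclient.awk in another; the client should print back "echo: ping". The pattern mirrors the UDP example above, only with reads and writes on both ends of the TCP connection.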
Reading a web page using HttpService

To read a web page, we use the Hypertext Transfer Protocol (HTTP) service, which runs on port 80. First, we redefine the record separators RS and ORS, because HTTP requires CR-LF to separate lines. The program requests the IP address 35.164.82.168 (www.grymoire.com), a static website, and makes a GET request for the web page http://35.164.82.168/Unix/donate.html. GET is the HTTP method that tells the web server to transmit the web page donate.html. The output is read with the getline function, using the coprocess operator, and printed on the screen line by line using the while loop. Finally, we close the HTTP service connection. The following is the program to retrieve the web page:

$ vi view_webpage.awk

BEGIN {
  RS = ORS = "\r\n"
  http = "/inet/tcp/0/35.164.82.168/80"
  print "GET http://35.164.82.168/Unix/donate.html" |& http
  while ((http |& getline) > 0)
    print $0
  close(http)
}

$ awk -f view_webpage.awk

Upon executing the program, it fills the screen with the source code of the page, as follows:

<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.0 Transitional//EN">
<HTML lang="en-US">
<HEAD>
<TITLE> Welcome to The UNIX Grymoire!</TITLE>
<meta name="keywords" content="grymoire, donate, unix, tutorials, sed, awk">
<META NAME="Description" CONTENT="Please donate to the Unix Grymoire" >
<meta http-equiv="Content-Type" content="text/html; charset=utf-8">
<link href="myCSS.css" rel="stylesheet" type="text/css">
<!-- Place this tag in your head or just before your close body tag -->
<script type="text/javascript" src="https://apis.google.com/js/plusone.js"></script>
<link rel="canonical" href="http://www.grymoire.com/Unix/donate.html">
<link href="myCSS.css" rel="stylesheet" type="text/css">
........
........

Profiling in GAWK

Profiling of code is done for code optimization. In GAWK, we can do profiling by supplying the --profile option when running the GAWK program. On execution with that option, GAWK creates a file with the name awkprof.out. Since GAWK is performing profiling of the code, program execution is up to 45% slower than the speed at which GAWK normally executes.

Let's understand profiling by looking at an example. In the following example, we create a program that has four functions: two arithmetic functions, one function that prints an array, and one function that calls all of them. Our program also contains two BEGIN and two END statements: first a BEGIN and an END statement, then a pattern-action rule, then the second BEGIN and END statements, as follows:

$ vi codeprof.awk

func z_array(){
  arr[30] = "volvo"
  arr[10] = "bmw"
  arr[20] = "audi"
  arr[50] = "toyota"
  arr["car"] = "ferrari"
  n = asort(arr)
  print "Array begins...!"
  print "====================="
  for ( v in arr )
    print v, arr[v]
  print "Array Ends...!"
  print "====================="
}

function mul(num1, num2){
  result = num1 * num2
  printf ("Multiplication of %d * %d : %d\n", num1, num2, result)
}

function all(){
  add(30,10)
  mul(5,6)
  z_array()
}

BEGIN {
  print "First BEGIN statement"
  print "====================="
}

END {
  print "First END statement "
  print "====================="
}

/maruti/{ print $0 }

BEGIN {
  print "Second BEGIN statement"
  print "====================="
  all()
}

END {
  print "Second END statement"
  print "====================="
}

function add(num1, num2){
  result = num1 + num2
  printf ("Addition of %d + %d : %d\n", num1, num2, result)
}

$ awk --profile -f codeprof.awk cars.dat

The output of the previous code is as follows:

First BEGIN statement
=====================
Second BEGIN statement
=====================
Addition of 30 + 10 : 40
Multiplication of 5 * 6 : 30
Array begins...!
=====================
1 audi
2 bmw
3 ferrari
4 toyota
5 volvo
Array Ends...!
=====================
maruti swift 2007 50000 5
maruti dezire 2009 3100 6
maruti swift 2009 4100 5
maruti esteem 1997 98000 1
First END statement
=====================
Second END statement
=====================

Execution of the previous program also creates a file with the name awkprof.out.
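As a quick aside (our illustration, not from the book's example), you can watch the profiler work on a tiny one-liner; the counts written to awkprof.out show how many times each statement ran:

$ gawk --profile 'BEGIN { for (i = 1; i <= 3; i++) print i }'
$ cat awkprof.out

Since the program has only a BEGIN rule, gawk runs it without waiting for input, prints 1 through 3, and writes the annotated profile on exit. This is a convenient way to get a feel for the output format before profiling a larger program.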
If we want to create this profile file with a custom name, we can specify the filename as an argument to the --profile option, as follows:

$ awk --profile=codeprof.prof -f codeprof.awk cars.dat

Now, upon execution of the preceding code, we get a new file with the name codeprof.prof. Let's try to understand the contents of the file codeprof.prof created by the profiler:

# gawk profile, created Fri Feb 9 11:01:41 2018

# BEGIN rule(s)

BEGIN {
 1  print "First BEGIN statement"
 1  print "====================="
}

BEGIN {
 1  print "Second BEGIN statement"
 1  print "====================="
 1  all()
}

# Rule(s)

12  /maruti/ { # 4
 4   print $0
}

# END rule(s)

END {
 1  print "First END statement "
 1  print "====================="
}

END {
 1  print "Second END statement"
 1  print "====================="
}

# Functions, listed alphabetically

 1  function add(num1, num2)
{
 1  result = num1 + num2
 1  printf "Addition of %d + %d : %d\n", num1, num2, result
}

 1  function all()
{
 1  add(30, 10)
 1  mul(5, 6)
 1  z_array()
}

 1  function mul(num1, num2)
{
 1  result = num1 * num2
 1  printf "Multiplication of %d * %d : %d\n", num1, num2, result
}

 1  function z_array()
{
 1  arr[30] = "volvo"
 1  arr[10] = "bmw"
 1  arr[20] = "audi"
 1  arr[50] = "toyota"
 1  arr["car"] = "ferrari"
 1  n = asort(arr)
 1  print "Array begins...!"
 1  print "====================="
 5  for (v in arr) {
 5    print v, arr[v]
    }
 1  print "Array Ends...!"
 1  print "====================="
}

This profiling example shows the various basic features of profiling in GAWK. They are as follows:

- Reading the file from top to bottom shows the order in which the various rules are executed. First, the BEGIN rules are listed, followed by the BEGINFILE rule, if any. Then pattern-action rules are listed. Thereafter, ENDFILE rules and END rules are printed. Finally, functions are listed in alphabetical order. Multiple BEGIN and END rules retain their places as separate identities. The same is also true for the BEGINFILE and ENDFILE rules.
- The pattern-action rules have two counts. The first number, to the left of the rule, tells how many times the rule's pattern was tested against input records. The second number, to the right of the rule's opening left brace, with a comment, shows how many times the rule's action was executed when the rule evaluated to true. The difference between the two indicates how many times the rule's pattern evaluated to false.
- If there is an if-else statement, the number shows how many times the condition was tested. At the right of the opening left brace for its body is a count showing how many times the condition was true. The count for the else statement tells how many times the test failed.
- The count at the beginning of a loop header (a for or while loop) shows how many times the loop's conditional expression was executed.
- In user-defined functions, the count before the function keyword tells how many times the function was called. The counts next to the statements in the body show how many times those statements were executed.
- The layout of each block uses C-style tabs for code alignment. Braces are used to mark the opening and closing of a code block, similar to C style.
- Parentheses are used as per the precedence rules and the structure of the program, but only when needed. printf or print statement arguments are enclosed in parentheses only if the statement is followed by redirection.
- GAWK also gives leading comments before rules, such as before BEGIN and END rules, BEGINFILE and ENDFILE rules, and pattern-action rules, and before functions.

GAWK provides a standard representation of the program in its profiled version. GAWK also accepts another option, --pretty-print. The following is an example of pretty-printing an AWK program:

$ awk --pretty-print -f codeprof.awk cars.dat

When GAWK is called with pretty-print, the program generates awkprof.out, but this time without any execution counts in the output. Pretty-print output also preserves any original comments given in the program, while the --profile option omits the original program's comments. The file created on execution of the program with the --pretty-print option is as follows:

# gawk profile, created Fri Feb 9 11:04:19 2018

# BEGIN rule(s)

BEGIN {
  print "First BEGIN statement"
  print "====================="
}

BEGIN {
  print "Second BEGIN statement"
  print "====================="
  all()
}

# Rule(s)

/maruti/ {
  print $0
}

# END rule(s)

END {
  print "First END statement "
  print "====================="
}

END {
  print "Second END statement"
  print "====================="
}

# Functions, listed alphabetically

function add(num1, num2)
{
  result = num1 + num2
  printf "Addition of %d + %d : %d\n", num1, num2, result
}

function all()
{
  add(30, 10)
  mul(5, 6)
  z_array()
}

function mul(num1, num2)
{
  result = num1 * num2
  printf "Multiplication of %d * %d : %d\n", num1, num2, result
}

function z_array()
{
  arr[30] = "volvo"
  arr[10] = "bmw"
  arr[20] = "audi"
  arr[50] = "toyota"
  arr["car"] = "ferrari"
  n = asort(arr)
  print "Array begins...!"
  print "====================="
  for (v in arr) {
    print v, arr[v]
  }
  print "Array Ends...!"
  print "====================="
}

To summarize, we looked at the basics of network programming in GAWK and at its code-profiling features. Do check out the book Learning AWK Programming to know more about the intricacies of AWK programming for text processing.

20 ways to describe programming in 5 words
What is Mob Programming?
A really basic guide to batch file programming

Richard Gall
31 May 2018
3 min read
Batch file programming is a way of making a computer do things simply by creating, yes, you guessed it, a batch file. It's a way of doing things you might ordinarily do in the command prompt, but it automates some tasks, which means you don't have to write so much code. If it sounds straightforward, that's because it is, generally. Which is why it's worth learning...

Batch file programming is a good place to start learning how computers work

Of course, if you already know your way around batch files, I'm sure you'll agree it's a good way for someone relatively experienced in software to get to know their machine a little better. If you know someone that you think would get a lot from learning batch file programming, share this short guide with them!

Why would I write a batch script?

There are a number of reasons you might write batch scripts. It's particularly useful for resolving network issues, installing a number of programs on different machines, and even organizing files and folders on your computer. Imagine you have a recurring issue; with a batch file you can solve it quickly and easily, wherever you are, without having to write copious lines of code in the command line. Or maybe your desktop simply looks like a mess; with a little knowledge of batch file programming you can clean things up without too much effort.

How to write a batch file

Clearly, batch file programming can make your life a lot easier. Let's take a look at the key steps to begin writing batch scripts.

Step 1: Open your text editor

Batch file programming is really about writing commands, so you'll need your text editor open to begin. Notepad, WordPad, it doesn't matter!

Step 2: Begin writing code

As we've already seen, batch file programming is really about writing commands for your computer. The code is essentially the same as what you would write in the command prompt. Here are a few batch file commands you might want to know to get started:

- ipconfig: presents network information such as your IP and MAC address.
- start "" [website]: opens a specified website in your browser.
- rem: used if you want to make a comment or remark in your code (that is, for documentation purposes).
- pause: as you'd expect, pauses the script so it can be read before it continues.
- echo: displays text in the command prompt.
- %%a: refers to every file in a given folder.
- if: a conditional command.

The list of batch file commands is pretty long. There are plenty of other resources with an exhaustive list of commands you can use, but a good place to begin is this page on Wikipedia.

Step 3: Save your batch file

Once you've written your commands in the text editor, you'll then need to save your document as a batch file. Title it, and suffix it with the .bat extension. You'll also need to make sure Save as type is set to 'All files'.

That's basically it when it comes to batch file programming. Of course, there are some complex things you can do, but once you know the basics, getting into the code is where you can start to experiment; a minimal example sketch follows at the end of this article.

Read next
Jupyter and Python scripting
Python Scripting Essentials
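To make the basics concrete, here is a minimal sketch of a batch file that strings together the commands listed above; the website URL is just a placeholder, not a recommendation:

@echo off
rem A tiny demo batch file: show network info, list files, open a site.
echo Gathering network information...
ipconfig
rem List every file in the current folder.
for %%a in (*) do echo Found file: %%a
rem Open a website in the default browser (placeholder URL).
start "" https://example.com
pause

Save this as demo.bat (remembering the 'All files' setting above) and double-click it, or run it from the command prompt, to see each command in action.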