How-To Tutorials

Abstract terrain shader in Duality

Lőrinc Serfőző
06 Jan 2017
5 min read
This post guides you through the creation process of abstract-looking terrain shaders in the Duality 2D game engine. The basics of the engine are not presented here, but if you are familiar with game technology, it should not be too difficult to follow along. If something does not make sense at first, take a look at the official documentation on GitHub. Alternatively, there are two tutorials with more of an introductory flair. In addition, the concepts described here can be easily adapted to other game engines and frameworks as well.

Required tools

Duality can be downloaded from the official site. A C# compiler and a text editor are also needed. Visual Studio 2013 or higher is recommended, but other IDEs, like MonoDevelop, also work.

Creating the required resources

Open up a new project in Dualitor! First, we have to create several new resources. The following list describes the required resources. Create and name them accordingly.

  • VertexShader encapsulates a GLSL vertex shader. We need this because the vertex coordinates should be converted to world space in order to achieve the desired terrain effect. More on that later.
  • FragmentShader encapsulates a GLSL fragment shader, the 'creative' part of our processing.
  • ShaderProgram binds a VertexShader and a FragmentShader together.
  • DrawTechnique provides attributes (such as blending mode, etc.) to the ShaderProgram to be able to send it to the GPU.
  • Material establishes the connection between a DrawTechnique and one or several textures and other numerical data. It can be attached to the GameObject's renderers in the scene.

The vertex shader

Let's start with implementing the vertex shader. Unlike most game engines, Duality handles some of the vertex transformations on the CPU, in order to achieve a parallax scaling effect. Thus, the vertex array passed to the GPU is already scaled. However, we do not need that precalculation for our terrain shader, so this transformation has to be undone in the vertex shader. Double click the VertexShader resource to open it in an external text editor. It should contain the following:

    void main()
    {
        gl_Position = ftransform();
        gl_TexCoord[0] = gl_MultiTexCoord0;
        gl_FrontColor = gl_Color;
    }

To perform the inverse transformation, the camera data should be passed to the shader. This is done automatically by Duality via pre-configured uniform variables: CameraFocusDist, CameraParallax and CameraPosition. The result worldPosition is passed to the fragment shader via a varying variable.

    // vertex shader
    varying vec3 worldPosition;

    uniform float CameraFocusDist;
    uniform bool CameraParallax;
    uniform vec3 CameraPosition;

    vec3 reverseParallaxTransform()
    {
        // Duality uses software pre-transformation of vertices.
        // gl_Vertex is already in parallax (scaled) view space when arriving here.
        vec4 vertex = gl_Vertex;

        // Reverse-engineer the scale that was previously applied to the vertex
        float scale = 1.0;
        if (CameraParallax)
        {
            scale = CameraFocusDist / vertex.z;
        }
        else
        {
            // default focus dist is 500
            scale = CameraFocusDist / 500.0;
        }

        return vec3(vertex.xyz + vec3(CameraPosition.xy, 0)) / scale;
    }

    void main()
    {
        gl_Position = ftransform();
        gl_TexCoord[0] = gl_MultiTexCoord0;
        gl_FrontColor = gl_Color;
        worldPosition = reverseParallaxTransform();
    }

The fragment shader

Next, implement the fragment shader. Various effects can be achieved using textures and mathematical functions creatively. Here a simple method is presented: the well-known XOR texture generation.
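To get a feel for what that pattern looks like before writing any GLSL, here is a small illustrative sketch that is not part of the original post; it prints a few rows of the XOR pattern in JavaScript, while the fragment shader below computes the same per-pixel value on the GPU:

    // Illustrative only: prints a tiny XOR grid so you can see the nested,
    // self-similar structure the fragment shader turns into brightness values.
    var repeat = 8; // the shader below uses 256
    for (var y = 0; y < repeat; y++) {
        var row = '';
        for (var x = 0; x < repeat; x++) {
            row += (x ^ y) + ' '; // the shader divides this by repeat to get a brightness
        }
        console.log(row);
    }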
It is based on calculating the binary exclusive or product of the integer world coordinates (operator ^ in GLSL). To control its parameters, two uniform variables, scale and repeat, are introduced in addition to the varying one from the vertex shader. A texture named mainTex is also used to alpha-mask the product.

    // fragment shader
    varying vec3 worldPosition;

    uniform float scale;
    uniform int repeat;
    uniform sampler2D mainTex;

    void main()
    {
        vec4 texSample = texture2D(mainTex, gl_TexCoord[0].st);

        int x = int(worldPosition.x * scale) % repeat;
        int y = int(worldPosition.y * scale) % repeat;
        vec3 color = gl_Color.rgb * float(x ^ y) / float(repeat);

        gl_FragColor = vec4(color, 1.0) * texSample.a;
    }

Assign the VertexShader and FragmentShader resources to the ShaderProgram resource, and that to the DrawTechnique resource. The latter, as mentioned, determines the blending mode. This time it has to be set to Mask in order to make the alpha masking work. The DrawTechnique should be assigned to the Material resource. The material is used to control the custom uniform parameters. The following values yield correct results:

  • MainColor: Anything but black or white, for testing purposes.
  • mainTex: A texture with an alpha mask. For example, this rounded block shape.
  • scale: 1.0.
  • repeat: 256.

Populating the scene

Create a SpriteRenderer in the scene, and assign the new material to it. Because we used world coordinates in the fragment shader, the texture stays fixed relative to the world, and the alpha mask functions as a “window” to it. The effect can be perceived by repositioning the sprite in the game world. Duplicate the sprite GameObject several times and move them around. When they intersect, the texture should be perfectly continuous.

You may notice that the texture behaves incorrectly while moving the camera in the Scene Editor view. The reason is that in that view mode, the camera is different from the one the shader calculates against. For inspecting the final look, use the Game View.

Summary

This technique can be used to quickly build continuous-looking terrains using a small number of alpha masks in your top-down or sidescroller game projects. Of course, the fragment shader could be extended with additional logic and textures. Experimenting with them often yields usable results. I hope you enjoyed this post. In case you have any questions, feel free to post them below, or on the Duality forums.

About the author

Lőrinc Serfőző is a software engineer at Graphisoft, the company behind the BIM solution ArchiCAD. He is studying mechatronics engineering at the Budapest University of Technology and Economics. It’s an interdisciplinary field between the more traditional mechanical engineering, electrical engineering, and informatics, and Lőrinc has quickly grown a passion toward software development. He is a supporter of open source software and contributes to the C# and OpenGL-based Duality game engine, creating free plugins and tools for users.

Adding Life to your Chatbot

Ben James
05 Jan 2017
5 min read
In the previous post we looked at getting your new bot off the ground with the SuperScript package. Today, we'll take this a step further and write your own personal assistant to find music videos, complete with its own voice using IVONA's text-to-speech platform.

Giving your bot a voice

To get started with IVONA, visit here and go to Speech Cloud > Sign Up. After a quick sign-up process, you'll be pointed to your dashboard, where you'll be able to get the API key needed to use their services. Go ahead and do so, and ensure you download the key file, as we'll need it later.

We'll need to add a couple more packages to your SuperScript bot to integrate IVONA into it, so run the following:

    npm install --save ivona-node
    npm install --save play-sound

ivona-node is a library for easily interfacing with the IVONA API without having to set things like custom headers yourself, while play-sound will let you play sound directly from your terminal, so you can hear what your bot says without having to locate the mp3 file and play it yourself!

Now we need to write some code to get these two things working together. Open up src/server.js in your SuperScript directory, and at the top, add:

    import Ivona from 'ivona-node';
    import Player from 'play-sound';
    import fs from 'fs';

We'll need fs to be able to write the voice files to our system. Now, find your IVONA access and secret keys, and set up a new IVONA instance by adding the following:

    const ivona = new Ivona({
      accessKey: 'YOUR_ACCESS_KEY',
      secretKey: 'YOUR_SECRET_KEY',
    });

We also need to create an instance of the player:

    const player = Player();

Great! We can double-check that we can access the IVONA servers by asking for a full list of voices that IVONA provides:

    ivona.listVoices()
      .on('complete', (voices) => {
        console.log(voices);
      });

These are available to sample on the IVONA home page, so if you haven't already, go and check it out. And find one you like!

Now it's time for the magic to happen. Inside the bot.reply callback, we need to ask IVONA to turn our bot response into a speech before outputting it in our terminal. We can do that in just a few lines:

    bot.reply(..., (err, reply) => {
      // ... Other code to output text to the terminal

      const stream = fs.createWriteStream('text.mp3');
      ivona.createVoice(reply.string, {
        body: {
          voice: {
            name: 'Justin',
            language: 'en-US',
            gender: 'Male',
          },
        },
      }).pipe(stream);

      stream.on('finish', () => {
        player.play('text.mp3', (err) => {
          if (err) {
            console.error(err);
          }
        });
      });
    });

Run your bot again by running npm run start, and watch the magic unfurl as your bot speaks to you!

Getting your bot to do your bidding

Now that your bot has a human-like voice, it's time to get it to do something useful for you. After all, you are its master. We're going to write a simple script to find music videos for you. So let's open up chat/main.ss and add an additional trigger:

    + find a music video for (*) by (*)
    - Okay, here's your music video for <cap1> by <cap2>. ^findMusicVideo(<cap1>, <cap2>)

Here, whenever we ask the bot for a music video, we just go off to our function findMusicVideo that finds a relevant video on YouTube. We'll write that SuperScript plugin now. First, we'll need to install the request library to make HTTP requests to YouTube:

    npm install --save request

You'll also need to get a Google API key to search YouTube and get back some results in JSON form. To do this, you can go to here and follow the instructions to get a new key for the 'YouTube Data API'.
Then, inside plugins/musicVideo.js, we can write:

    import request from 'request';

    const YOUTUBE_API_BASE = 'https://www.googleapis.com/youtube/v3/search';
    const GOOGLE_API_KEY = 'YOUR_KEY_HERE';

    const findMusicVideo = function findMusicVideo(song, artist, callback) {
      request({
        url: YOUTUBE_API_BASE,
        qs: {
          part: 'snippet',
          key: GOOGLE_API_KEY,
          q: `${song} ${artist}`,
        },
      }, (error, response, body) => {
        if (!error && response.statusCode === 200) {
          try {
            const parsedJSON = JSON.parse(body);
            if (parsedJSON.items[0]) {
              return callback(`https://youtu.be/${parsedJSON.items[0].id.videoId}`);
            }
          } catch (err) {
            console.error(err);
          }
          return callback('');
        }
        return callback('');
      });
    };

All we're doing here is making a request to the YouTube API for the relevant song and artist. We then take the first one that YouTube found, and stick it in a nice link to give back to the user. Now, parse and run your bot again, and you'll see that not only does your bot talk to you with a voice, but now you can ask it to find a YouTube video for you.

About the author

Ben is currently the technical director at To Play For, creating games, interactive stories and narratives using artificial intelligence. Follow him at @ToPlayFor.

Learning Basic PowerCLI Concepts

Packt
05 Jan 2017
7 min read
In this article, by Robert van den Nieuwendijk, author of the book Learning PowerCLI - Second Edition, you will learn some basic PowerShell and PowerCLI concepts. Knowing these concepts will make it easier for you to learn the advanced topics. We will cover the Get-Command, Get-Help, and Get-Member cmdlets in this article.

Using the Get-Command, Get-Help, and Get-Member cmdlets

There are some PowerShell cmdlets that everyone should know. Knowing these cmdlets will help you discover other cmdlets, their functions, parameters, and returned objects.

Using Get-Command

The first cmdlet that you should know is Get-Command. This cmdlet returns all the commands that are installed on your computer. The Get-Command cmdlet has the following syntax:

    Get-Command [[-ArgumentList] <Object[]>] [-All] [-ListImported]
        [-Module <String[]>] [-Noun <String[]>] [-ParameterName <String[]>]
        [-ParameterType <PSTypeName[]>] [-Syntax] [-TotalCount <Int32>]
        [-Verb <String[]>] [<CommonParameters>]

    Get-Command [[-Name] <String[]>] [[-ArgumentList] <Object[]>] [-All]
        [-CommandType <CommandTypes>] [-ListImported] [-Module <String[]>]
        [-ParameterName <String[]>] [-ParameterType <PSTypeName[]>]
        [-Syntax] [-TotalCount <Int32>] [<CommonParameters>]

The first parameter set is named CmdletSet, and the second parameter set is named AllCommandSet. If you type the following command, you will get a list of commands installed on your computer, including cmdlets, aliases, functions, workflows, filters, scripts, and applications:

    PowerCLI C:> Get-Command

You can also specify the name of a specific cmdlet to get information about that cmdlet, as shown in the following command:

    PowerCLI C:> Get-Command -Name Get-VM

This will return the following information about the Get-VM cmdlet:

    CommandType     Name        ModuleName
    -----------     ----        ----------
    Cmdlet          Get-VM      VMware.VimAutomation.Core

You see that the command returns the command type and the name of the module that contains the Get-VM cmdlet. CommandType, Name, and ModuleName are the properties that the Get-VM cmdlet returns by default. You will get more properties if you pipe the output to the Format-List cmdlet. The following screenshot will show you the output of the Get-Command -Name Get-VM | Format-List * command:

You can use the Get-Command cmdlet to search for cmdlets. For example, to search for the cmdlets that are used for vSphere hosts, type the following command:

    PowerCLI C:> Get-Command -Name *VMHost*

If you are searching for the cmdlets to work with networks, use the following command:

    PowerCLI C:> Get-Command -Name *network*

Using Get-VICommand

PowerCLI has a Get-VICommand cmdlet that is similar to the Get-Command cmdlet. The Get-VICommand cmdlet is actually a function that creates a filter on the Get-Command output, and it returns only PowerCLI commands. Type the following command to list all the PowerCLI commands:

    PowerCLI C:> Get-VICommand

The Get-VICommand cmdlet has only one parameter, -Name. So, you can also type, for example, the following command to get information only about the Get-VM cmdlet:

    PowerCLI C:> Get-VICommand -Name Get-VM

Using Get-Help

To discover more information about cmdlets, you can use the Get-Help cmdlet. For example:

    PowerCLI C:> Get-Help Get-VM

This will display the following information about the Get-VM cmdlet:

The Get-Help cmdlet has some parameters that you can use to get more information. The -Examples parameter shows examples of the cmdlet.
The -Detailed parameter adds parameter descriptions and examples to the basic help display. The -Full parameter displays all the information available about the cmdlet. And the -Online parameter retrieves online help information available about the cmdlet and displays it in a web browser. Since PowerShell V3, there is a new Get-Help parameter -ShowWindow. This displays the output of Get-Help in a new window. The Get-Help -ShowWindow command opens the following screenshot:

Using Get-PowerCLIHelp

The PowerCLI Get-PowerCLIHelp cmdlet opens a separate help window for PowerCLI cmdlets, PowerCLI objects, and articles. This is a very useful tool if you want to browse through the PowerCLI cmdlets or PowerCLI objects. The following screenshot shows the window opened by the Get-PowerCLIHelp cmdlet:

Using Get-PowerCLICommunity

If you have a question about PowerCLI and you cannot find the answer in this article, use the Get-PowerCLICommunity cmdlet to open the VMware vSphere PowerCLI section of the VMware VMTN Communities. You can log in to the VMware VMTN Communities using the same My VMware account that you used to download PowerCLI. First, search the community for an answer to your question. If you still cannot find the answer, go to the Discussions tab and ask your question by clicking on the Start a Discussion button, as shown later. You might receive an answer to your question in a few minutes.

Using Get-Member

In PowerCLI, you work with objects. Even a string is an object. An object contains properties and methods, which are called members in PowerShell. To see which members an object contains, you can use the Get-Member cmdlet. To see the members of a string, type the following command:

    PowerCLI C:> "Learning PowerCLI" | Get-Member

Pipe an instance of a PowerCLI object to Get-Member to retrieve the members of that PowerCLI object. For example, to see the members of a virtual machine object, you can use the following command:

    PowerCLI C:> Get-VM | Get-Member

    TypeName: VMware.VimAutomation.ViCore.Impl.V1.Inventory.VirtualMachineImpl

    Name                     MemberType Definition
    ----                     ---------- ----------
    ConvertToVersion         Method     T VersionedObjectInterop.Conver...
    Equals                   Method     bool Equals(System.Object obj)
    GetConnectionParameters  Method     VMware.VimAutomation.ViCore.Int...
    GetHashCode              Method     int GetHashCode()
    GetType                  Method     type GetType()
    IsConvertableTo          Method     bool VersionedObjectInterop.IsC...
    LockUpdates              Method     void ExtensionData.LockUpdates()
    ObtainExportLease        Method     VMware.Vim.ManagedObjectReferen...
    ToString                 Method     string ToString()
    UnlockUpdates            Method     void ExtensionData.UnlockUpdates()
    CDDrives                 Property   VMware.VimAutomation.ViCore.Typ...
    Client                   Property   VMware.VimAutomation.ViCore.Int...
    CustomFields             Property   System.Collections.Generic.IDic...
    DatastoreIdList          Property   string[] DatastoreIdList {get;}
    Description              Property   string Description {get;}
    DrsAutomationLevel       Property   System.Nullable[VMware.VimAutom...
    ExtensionData            Property   System.Object ExtensionData {get;}
    FloppyDrives             Property   VMware.VimAutomation.ViCore.Typ...
    Folder                   Property   VMware.VimAutomation.ViCore.Typ...
    FolderId                 Property   string FolderId {get;}
    Guest                    Property   VMware.VimAutomation.ViCore.Typ...
    GuestId                  Property   string GuestId {get;}
    HAIsolationResponse      Property   System.Nullable[VMware.VimAutom...
    HardDisks                Property   VMware.VimAutomation.ViCore.Typ...
    HARestartPriority        Property   System.Nullable[VMware.VimAutom...
    Host                     Property   VMware.VimAutomation.ViCore.Typ...
    HostId                   Property   string HostId {get;}
    Id                       Property   string Id {get;}
    MemoryGB                 Property   decimal MemoryGB {get;}
    MemoryMB                 Property   decimal MemoryMB {get;}
    Name                     Property   string Name {get;}
    NetworkAdapters          Property   VMware.VimAutomation.ViCore.Typ...
    Notes                    Property   string Notes {get;}
    NumCpu                   Property   int NumCpu {get;}
    PersistentId             Property   string PersistentId {get;}
    PowerState               Property   VMware.VimAutomation.ViCore.Typ...
    ProvisionedSpaceGB       Property   decimal ProvisionedSpaceGB {get;}
    ResourcePool             Property   VMware.VimAutomation.ViCore.Typ...
    ResourcePoolId           Property   string ResourcePoolId {get;}
    Uid                      Property   string Uid {get;}
    UsbDevices               Property   VMware.VimAutomation.ViCore.Typ...
    UsedSpaceGB              Property   decimal UsedSpaceGB {get;}
    VApp                     Property   VMware.VimAutomation.ViCore.Typ...
    Version                  Property   VMware.VimAutomation.ViCore.Typ...
    VMHost                   Property   VMware.VimAutomation.ViCore.Typ...
    VMHostId                 Property   string VMHostId {get;}
    VMResourceConfiguration  Property   VMware.VimAutomation.ViCore.Typ...
    VMSwapfilePolicy         Property   System.Nullable[VMware.VimAutom...

The command returns the full type name of the VirtualMachineImpl object and all its methods and properties. Remember that the properties are objects themselves. You can also use Get-Member to get the members of the properties. For example, the following command line will give you the members of the VMGuestImpl object:

    PowerCLI C:> $VM = Get-VM -Name vCenter
    PowerCLI C:> $VM.Guest | Get-Member

Summary

In this article, you looked at the Get-Help, Get-Command, and Get-Member cmdlets.

Test-Driven Development

Packt
05 Jan 2017
19 min read
This article by Md. Ziaul Haq, the author of the book Angular 2 Test-Driven Development, introduces you to the fundamentals of test-driven development with AngularJS, including:

  • An overview of test-driven development (TDD)
  • The TDD life cycle: test first, make it run, and make it better
  • Common testing techniques

Angular2 is at the forefront of client-side JavaScript testing. Every Angular2 tutorial includes an accompanying test, and even test modules are a part of the core AngularJS package. The Angular2 team is focused on making testing fundamental to web development.

An overview of TDD

Test-driven development (TDD) is an evolutionary approach to development, where you write a test before you write just enough production code to fulfill that test, and then refactor. The following section will explore the fundamentals of TDD and how they are applied by a tailor.

Fundamentals of TDD

Get the idea of what to write in your code before you start writing it. This may sound cliched, but this is essentially what TDD gives you. TDD begins by defining expectations, then makes you meet the expectations, and finally, forces you to refine the changes after the expectations are met. Some of the clear benefits that can be gained by practicing TDD are as follows:

  • No change is small: Small changes can cause a huge number of breaking issues in the entire project. Only practicing TDD can help out, as after any change the test suite will catch the breaking points and save the project and the developers.
  • Specifically identify the tasks: A test suite provides a clear vision of the tasks and lays out the workflow step by step in order to be successful. Setting up the tests first allows you to focus on only the components that have been defined in the tests.
  • Confidence in refactoring: Refactoring involves moving, fixing, and changing a project. Tests protect the core logic from refactoring by ensuring that the logic behaves independently of the code structure.
  • Upfront investment, benefits in the future: Initially, it looks like testing takes extra time, but it actually pays off later: when the project becomes bigger, it gives you the confidence to extend a feature, as simply running the tests will surface any breaking issues.
  • QA resources might be limited: In most cases, there are limitations on QA resources, as it always takes extra time for everything to be checked manually by the QA team; writing test cases and running them successfully will definitely save some QA time.
  • Documentation: Tests define the expectations that a particular object or function must meet. An expectation acts as a contract and can be used to see how a method should or can be used. This makes the code readable and easier to understand.

Measuring the success with different eyes

TDD is not just a software development practice. The fundamental principles are shared by other craftsmen as well. One of these craftsmen is a tailor, whose success depends on precise measurements and careful planning.
Breaking down the steps

Here are the high-level steps a tailor takes to make a suit:

  • Test first: Determining the measurements for the suit
      • Having the customer determine the style and material they want for their suit
      • Measuring the customer's arms, shoulders, torso, waist, and legs
  • Making the cuts: Measuring the fabric and cutting it
      • Selecting the fabric based on the desired style
      • Measuring the fabric based on the customer's waist and legs
      • Cutting the fabric based on the measurements
  • Refactoring: Comparing the resulting product to the expected style, reviewing, and making changes
      • Comparing the cut and look to the customer's desired style
      • Making adjustments to meet the desired style
  • Repeating:
      • Test first: Determining the measurements for the pants
      • Making the cuts: Measuring the fabric and making the cuts
      • Refactor: Making changes based on the reviews

The preceding steps are an example of a TDD approach. The measurements must be taken before the tailor can start cutting up the raw material. Imagine, for a moment, that the tailor didn't use a test-driven approach and didn't use a measuring tape (testing tool). It would be ridiculous if the tailor started cutting before measuring. As a developer, do you "cut before measuring"? Would you trust a tailor without a measuring tape? How would you feel about a developer who doesn't test?

Measure twice, cut once

The tailor always starts with measurements. What would happen if the tailor made cuts before measuring? What would happen if the fabric was cut too short? How much extra time would go into the tailoring? Measure twice, cut once.

Software developers can choose from an endless number of approaches to use before starting development. One common approach is to work off a specification. A documented approach may help in defining what needs to be built; however, without tangible criteria for how to meet a specification, the actual application that gets developed may be completely different from the specification. With a TDD approach (test first, make it run, and make it better), every stage of the process verifies that the result meets the specification. Think about how a tailor continues to use a measuring tape to verify the suit throughout the process.

TDD embodies a test-first methodology. TDD gives developers the ability to start with a clear goal and write code that will directly meet a specification. Develop like a professional and follow the practices that will help you write quality software.

Practical TDD with JavaScript

Let's dive into practical TDD in the context of JavaScript. This walkthrough will take you through the process of adding multiplication functionality to a calculator. Just keep the TDD life cycle, as follows, in mind:

  • Test first
  • Make it run
  • Make it better

Point out the development to-do list

A development to-do list helps to organize and focus on specific tasks. It also provides a surface to note down ideas during the development process, which could become single features later on. Let's add the first feature to the development to-do list: add multiplication functionality, for example, 3 * 3 = 9. This item describes what needs to be done. It also provides a clear example of how to verify multiplication: 3 * 3 = 9.

Setting up the test suite

To set up the test, let's create the initial calculator in a file called calculator.js, initialized as an object as follows:

    var calculator = {};

The test will be run through a web browser as a simple HTML page.
So, for that, let's create an HTML page that imports calculator.js to test it, and save the page as testRunner.html. To run the test, open the testRunner.html file in your web browser. The testRunner.html file will look as follows:

    <!DOCTYPE html>
    <html>
      <head>
        <title>Test Runner</title>
      </head>
      <body>
        <script src="calculator.js"></script>
      </body>
    </html>

The test suite is ready for the project, and the development to-do list for the feature is ready as well. The next step is to dive into the TDD life cycle based on the feature list, one item at a time.

Test first

Though it would be easy to write the multiplication function straight away, as it's a pretty simple feature, as part of practicing TDD it's time to follow the TDD life cycle. The first phase of the life cycle is to write a test based on the development to-do list. Here are the steps for the first test:

  1. Open calculator.js.
  2. Create a new function to test multiplying 3 * 3:

    function multipleTest1() {
        // Test
        var result = calculator.multiply(3, 3);

        // Assert result is expected
        if (result === 9) {
            console.log('Test Passed');
        } else {
            console.log('Test Failed');
        }
    };

The test calls a multiply function, which still needs to be defined. It then asserts that the results are as expected, by displaying a pass or fail message. Keep in mind that in TDD, you are looking at the use of the method and explicitly writing how it should be used. This allows you to define the interface through a use case, as opposed to only looking at the limited scope of the function being developed. The next step in the TDD life cycle is focused on making the test run.

Make it run

In this step, we will run the test, just as the tailor did with the suit. The measurements were taken during the test step, and now the application can be molded to fit the measurements. The following are the steps to run the test:

  1. Open testRunner.html in a web browser.
  2. Open the JavaScript developer console in the browser.

The test will throw an error, which will be visible in the browser's developer console, as shown in the following screenshot:

The thrown error is about an undefined function, which is expected, as the calculator application calls a function that hasn't been created yet: calculator.multiply. In TDD, the focus is on adding the easiest change to get a test to pass. There is no need to actually implement the multiplication logic. This may seem unintuitive. The point is that once a passing test exists, it should always pass. When a method contains fairly complex logic, it is easier to run a passing test against it to ensure that it meets the expectations.

What is the easiest change that can be made to make the test pass? By returning the expected value of 9, the test should pass. Although this won't add the actual multiplication logic, it will confirm the application wiring. In addition, after you have passed the test, making future changes will be easy, as you simply have to keep the test passing!

Now, add the multiply function and have it return the required value of 9, as illustrated:

    var calculator = {
        multiply : function() {
            return 9;
        }
    };

Now, let's refresh the page to rerun the test and look at the JavaScript console. The result should be as shown in the following screenshot:

Yes! No more errors; there's a message showing that the test has passed. Now that there is a passing test, the next step will be to remove the hardcoded value in the multiply function.
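As a quick aside that is not part of the original walkthrough, one way to see why the hardcoded return value cannot survive is to add a second, hypothetical test with different operands; it fails while multiply() still returns 9, which motivates the real implementation in the next step:

    // Hypothetical extra test (not in the original article): a second case
    // with different operands exposes the hardcoded return value of 9.
    function multipleTest2() {
        var result = calculator.multiply(2, 5);

        if (result === 10) {
            console.log('Test Passed');
        } else {
            console.log('Test Failed'); // fails while multiply() still returns 9
        }
    };

    multipleTest2();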
Make it better

The refactoring step needs to remove the hardcoded return value of the multiply function, which we added as the easiest solution to pass the test, and add the required logic to get the expected result. The required logic is as follows:

    var calculator = {
        multiply : function(amount1, amount2) {
            return amount1 * amount2;
        }
    };

Now, let's refresh the browser to rerun the tests; they will pass just as before. Excellent! Now the multiply function is complete. The full code of the calculator.js file for the calculator object with its test will look as follows:

    var calculator = {
        multiply : function(amount1, amount2) {
            return amount1 * amount2;
        }
    };

    function multipleTest1() {
        // Test
        var result = calculator.multiply(3, 3);

        // Assert result is expected
        if (result === 9) {
            console.log('Test Passed');
        } else {
            console.log('Test Failed');
        }
    };

    multipleTest1();

Mechanism of testing

To be a proper TDD-following developer, it is important to understand some fundamental mechanisms of testing, and techniques and approaches to testing. In this section, we will walk you through a couple of examples of the techniques and mechanisms of testing that will be leveraged in this article. This will mostly include the following points:

  • Testing doubles with Jasmine spies
  • Refactoring the existing tests
  • Building patterns

In addition, here are the additional terms that will be used:

  • Function under test: This is the function being tested. It is also referred to as system under test, object under test, and so on.
  • The 3 A's (Arrange, Act, and Assert): This is a technique used to set up tests, first described by Bill Wake (http://xp123.com/articles/3a-arrange-act-assert/).

Testing with a framework

We have already seen a quick and simple way to perform tests on the calculator application, where we set up the test for the multiply method. But in real life, the application will be more complex and far larger, and the earlier technique will be too cumbersome to manage and perform. In that case, it will be much handier and easier to use a testing framework. A testing framework provides methods and structures for testing. This includes a standard structure to create and run tests, the ability to create assertions/expectations, the ability to use test doubles, and more. The following example code is not exactly how it runs with the Jasmine test/spec runner; it is just about the idea of how doubles work, or how these doubles return the expected result.

Testing doubles with Jasmine spies

A test double is an object that acts as, and is used in place of, another object. Jasmine has a test double function that is known as spies. A Jasmine spy is used with the spyOn() method. Take a look at the following testableObject object that needs to be tested. Using a test double, you can determine the number of times testableFunction gets called. The following is an example of a test double:

    var testableObject = {
        testableFunction : function() { }
    };

    jasmine.spyOn(testableObject, 'testableFunction');

    testableObject.testableFunction();
    testableObject.testableFunction();
    testableObject.testableFunction();

    console.log(testableObject.testableFunction.count);

The preceding code creates a test double using a Jasmine spy (jasmine.spyOn). The test double is then used to determine the number of times testableFunction gets called.
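For reference, here is a rough sketch that is not from the original text, showing how the same call-count check reads inside an actual Jasmine spec; it assumes Jasmine 2.x, where spyOn is a global function and call tracking lives on the spy's calls property:

    // A minimal Jasmine spec, assuming Jasmine 2.x is loaded by the spec runner.
    describe('testableObject', function() {
        it('tracks how many times testableFunction is called', function() {
            var testableObject = {
                testableFunction : function() { }
            };

            spyOn(testableObject, 'testableFunction');

            testableObject.testableFunction();
            testableObject.testableFunction();
            testableObject.testableFunction();

            expect(testableObject.testableFunction.calls.count()).toBe(3);
        });
    });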
The following are some of the features that a Jasmine test double offers:

  • The count of calls on a function
  • The ability to specify a return value (stub a return value)
  • The ability to pass a call to the underlying function (pass through)

Stubbing return value

The great thing about using a test double is that the underlying code of a method does not have to be called. With a test double, you can specify exactly what a method should return for a given test. Consider the following example of an object and a function, where the function returns a string:

    var testableObject = {
        testableFunction : function() { return 'stub me'; }
    };

The preceding object (testableObject) has a function (testableFunction) that needs to be stubbed. So, to stub a single return value, chain the and.returnValue method and pass the expected value as a parameter. Here is how to chain the spy to stub a single return value:

    jasmine.spyOn(testableObject, 'testableFunction')
        .and
        .returnValue('stubbed value');

Now, when testableObject.testableFunction is called, the stubbed value will be returned. Consider the following example of the preceding single stubbed value:

    var testableObject = {
        testableFunction : function() { return 'stub me'; }
    };

    // Before the return value is stubbed
    console.log(testableObject.testableFunction());
    // displays 'stub me'

    jasmine.spyOn(testableObject, 'testableFunction')
        .and
        .returnValue('stubbed value');

    // After the return value is stubbed
    console.log(testableObject.testableFunction());
    // displays 'stubbed value'

Similarly, we can stub multiple return values. To do so, chain the and.returnValues method with the expected values as parameters, separated by commas. Here is how to chain the spy to stub multiple return values, one by one:

    jasmine.spyOn(testableObject, 'testableFunction')
        .and
        .returnValues('first stubbed value', 'second stubbed value', 'third stubbed value');

So, for every call of testableObject.testableFunction, it will return the stubbed values in order until it reaches the end of the return value list. Consider the given example of the preceding multiple stubbed values:

    jasmine.spyOn(testableObject, 'testableFunction')
        .and
        .returnValues('first stubbed value', 'second stubbed value', 'third stubbed value');

    // After the return values are stubbed
    console.log(testableObject.testableFunction());
    // displays 'first stubbed value'
    console.log(testableObject.testableFunction());
    // displays 'second stubbed value'
    console.log(testableObject.testableFunction());
    // displays 'third stubbed value'
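One feature from the list above, passing a call through to the underlying function, is not demonstrated in the article; as a rough sketch in real Jasmine 2.x syntax, it looks something like this:

    // Sketch only, assuming Jasmine 2.x: and.callThrough() lets the spy record
    // calls while still delegating to the real implementation.
    var testableObject = {
        testableFunction : function() { return 'real value'; }
    };

    spyOn(testableObject, 'testableFunction').and.callThrough();

    console.log(testableObject.testableFunction());
    // displays 'real value' (the underlying function still runs)
    console.log(testableObject.testableFunction.calls.count());
    // displays 1 (the spy still records the call)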
Testing arguments

A test double provides insights into how a method is used in an application. As an example, a test might want to assert what arguments a method was called with, or the number of times a method was called. Here is an example function:

    var testableObject = {
        testableFunction : function(arg1, arg2) {}
    };

The following are the steps to test the arguments with which the preceding function is called:

  1. Create a spy so that the arguments called can be captured:

    jasmine.spyOn(testableObject, 'testableFunction');

  2. Then, to access the arguments, do the following:

    // Get the arguments for the first call of the function
    var callArgs = testableObject.testableFunction.call.argsFor(0);

    console.log(callArgs);
    // displays ['param1', 'param2']

Here is how the arguments can be displayed using console.log:

    var testableObject = {
        testableFunction : function(arg1, arg2) {}
    };

    // Create the spy
    jasmine.spyOn(testableObject, 'testableFunction');

    // Call the method with specific arguments
    testableObject.testableFunction('param1', 'param2');

    // Get the arguments for the first call of the function
    var callArgs = testableObject.testableFunction.call.argsFor(0);

    console.log(callArgs);
    // displays ['param1', 'param2']

Refactoring

Refactoring is the act of restructuring, rewriting, renaming, and removing code in order to improve the design, readability, maintainability, and overall aesthetics of a piece of code. The TDD life cycle step of "making it better" is primarily concerned with refactoring. This section will walk you through a refactoring example. Take a look at the following example of a function that needs to be refactored:

    var abc = function(z) {
        var x = false;
        if (z > 10) return true;
        return x;
    }

This function works fine and does not contain any syntactical or logical issues. The problem is that the function is difficult to read and understand. Refactoring this function will improve the naming, structure, and definition. The exercise will remove the masquerading complexity and reveal the function's true meaning and intention. Here are the steps:

  1. Rename the function and variable names to be more meaningful, that is, rename x and z so that they make sense, as shown:

    var isTenOrGreater = function(value) {
        var falseValue = false;
        if (value > 10) return true;
        return falseValue;
    }

    Now, the function can easily be read and the naming makes sense.

  2. Remove unnecessary complexity. In this case, the if conditional statement can be removed completely, as follows:

    var isTenOrGreater = function(value) {
        return value > 10;
    };

  3. Reflect on the result. At this point, the refactoring is complete, and the function's purpose should jump out at you. The next question that should be asked is, "why does this method exist in the first place?".

This example only provided a brief walk-through of the steps that can be taken to identify issues in code and how to improve them.

Building with a builder

These days, using design patterns is almost common practice, and we follow design patterns to make life easier. For the same reason, the builder pattern will be followed here. The builder pattern uses a builder object to create another object. Imagine an object with 10 properties. How will test data be created for every property? Will the object have to be recreated in every test? A builder object defines an object to be reused across multiple tests. The following code snippet provides an example of the use of this pattern. This example will use the builder object in the validate method:

    var book = {
        id : null,
        author : null,
        dateTime : null
    };

The book object has three properties: id, author, and dateTime. From a testing perspective, you would want the ability to create a valid object, that is, one that has all the fields defined.
You may also want to create an invalid object with missing properties, or you may want to set certain values in the object to test the validation logic, that is, whether dateTime is an actual date. Here are the steps to create a builder for the book object:

  1. Create a builder function, as shown:

    var bookBuilder = function() {};

  2. Create a valid object within the builder, as follows:

    var bookBuilder = function() {
        var _resultBook = {
            id: 1,
            author: 'Any Author',
            dateTime: new Date()
        };
    }

  3. Create a function to return the built object, as given:

    var bookBuilder = function() {
        var _resultBook = {
            id: 1,
            author: "Any Author",
            dateTime: new Date()
        };
        this.build = function() {
            return _resultBook;
        }
    }

  4. As illustrated, create another function to set the _resultBook author field:

    var bookBuilder = function() {
        var _resultBook = {
            id: 1,
            author: 'Any Author',
            dateTime: new Date()
        };
        this.build = function() {
            return _resultBook;
        };
        this.setAuthor = function(author) {
            _resultBook.author = author;
        };
    };

  5. Make the function fluent, as follows, so that calls can be chained:

    this.setAuthor = function(author) {
        _resultBook.author = author;
        return this;
    };

  6. A setter function will also be created for dateTime, as shown:

    this.setDateTime = function(dateTime) {
        _resultBook.dateTime = dateTime;
        return this;
    };

  7. Now, bookBuilder can be used to create a new book, as follows:

    var bookBuilder = new bookBuilder();

    var builtBook = bookBuilder.setAuthor('Ziaul Haq')
        .setDateTime(new Date())
        .build();

    console.log(builtBook.author); // Ziaul Haq

The preceding builder can now be used throughout your tests to create a single consistent object. Here is the complete builder for your reference:

    var bookBuilder = function() {
        var _resultBook = {
            id: 1,
            author: 'Any Author',
            dateTime: new Date()
        };

        this.build = function() {
            return _resultBook;
        };

        this.setAuthor = function(author) {
            _resultBook.author = author;
            return this;
        };

        this.setDateTime = function(dateTime) {
            _resultBook.dateTime = dateTime;
            return this;
        };
    };

Let's create the validate method to validate the book object created by the builder:

    var validate = function(builtBookToValidate) {
        if (!builtBookToValidate.author) {
            return false;
        }
        if (!builtBookToValidate.dateTime) {
            return false;
        }
        return true;
    };

At first, let's create a valid book object with the builder by passing all the required information; if this passes the validate method, it should show a valid message:

    var validBuilder = new bookBuilder().setAuthor('Ziaul Haq')
        .setDateTime(new Date())
        .build();

    // Validate the object with the validate() method
    if (validate(validBuilder)) {
        console.log('Valid Book created');
    }

In the same way, let's create an invalid book object via the builder by passing a null value for some required information. By passing the object to the validate method, it should show a message explaining why it's invalid:

    var invalidBuilder = new bookBuilder().setAuthor(null).build();

    if (!validate(invalidBuilder)) {
        console.log('Invalid Book created as author is null');
    }

    var invalidBuilder = new bookBuilder().setDateTime(null).build();

    if (!validate(invalidBuilder)) {
        console.log('Invalid Book created as dateTime is null');
    }

Self-test questions

Q1. A test double is another name for a duplicate test.
    True / False
Q2. TDD stands for test-driven development.
    True / False
Q3. The purpose of refactoring is to improve code quality.
    True / False
Q4. A test object builder consolidates the creation of objects for testing.
    True / False
Q5. The 3 A's are a sports team.
    True / False

Summary

This article provided an introduction to TDD. It discussed the TDD life cycle (test first, make it run, and make it better) and showed how the same steps are used by a tailor. Finally, it looked over some of the testing techniques such as test doubles, refactoring, and building patterns. Although TDD is a huge topic, this article is solely focused on the TDD principles and practices to be used with AngularJS.

Data Types – Foundational Structures

Packt
05 Jan 2017
17 min read
This article by William Smith, author of the book Everyday Data Structures, reviews the most common and most important fundamental data types from the 10,000-foot view. Calling data types foundational structures may seem like a bit of a misnomer, but not when you consider that developers use data types to build their classes and collections. So, before we dive into examining proper data structures, it's a good idea to quickly review data types, as these are the foundation of what comes next. In this article, we will briefly explain the following topics:

  • Numeric data types
  • Casting, narrowing, and widening
  • 32-bit and 64-bit architecture concerns
  • Boolean data types
  • Logic operations
  • Order of operations
  • Nesting operations
  • Short-circuiting
  • String data types
  • Mutability of strings

Numeric data types

A detailed description of all the numeric data types in each of these four languages, namely C#, Java, Objective-C, and Swift, could easily encompass a book of its own. The simplest way to evaluate these types is based on the underlying size of the data, using examples from each language as a framework for the discussion.

When you are developing applications for multiple mobile platforms, you should be aware that the languages you use could share a data type identifier or keyword, but under the hood, those identifiers may not be equal in value. Likewise, the same data type in one language may have a different identifier in another. For example, examine the case of the 16 bit unsigned integer, sometimes referred to as an unsigned short. Well, it's called an unsigned short in Objective-C. In C#, we are talking about a ushort, while Swift calls it a UInt16. Java, on the other hand, uses a char for this data type. Each of these data types represents a 16 bit unsigned integer; they just use different names. This may seem like a small point, but if you are developing apps for multiple devices using each platform's native language, for the sake of consistency, you will need to be aware of these differences. Otherwise, you may risk introducing platform-specific bugs that are extremely difficult to detect and diagnose.

Integer types

The integer data types are defined as representing whole numbers and can be either signed (negative, zero, or positive values) or unsigned (zero or positive values). Each language uses its own identifiers and keywords for the integer types, so it is easiest to think in terms of memory length. For our purpose, we will only discuss the integer types representing 8, 16, 32, and 64 bit memory objects.

8 bit data types, or bytes as they are more commonly referred to, are the smallest data types that we will examine. If you have brushed up on your binary math, you will know that an 8 bit memory block can represent 2^8, or 256, values. Signed bytes can range in values from -128 to 127, or -(2^7) to (2^7) - 1. Unsigned bytes can range in values from 0 to 255, or 0 to (2^8) - 1.

A 16 bit data type is often referred to as a short, although that is not always the case. These types can represent 2^16, or 65,536, values. Signed shorts can range in values from -32,768 to 32,767, or -(2^15) to (2^15) - 1. Unsigned shorts can range in values from 0 to 65,535, or 0 to (2^16) - 1.

A 32 bit data type is most commonly identified as an int, although it is sometimes identified as a long. Integer types can represent 2^32, or 4,294,967,296, values. Signed ints can range in values from -2,147,483,648 to 2,147,483,647, or -(2^31) to (2^31) - 1.
Unsigned ints can range in values from 0 to 4,294,967,295, or 0 to (2^32) - 1.

Finally, a 64 bit data type is most commonly identified as a long, although Objective-C identifies it as a long long. Long types can represent 2^64, or 18,446,744,073,709,551,616, values. Signed longs can range in values from -9,223,372,036,854,775,808 to 9,223,372,036,854,775,807, or -(2^63) to (2^63) - 1. Unsigned longs can range in values from 0 to 18,446,744,073,709,551,615, or 0 to (2^64) - 1.

Note that these values happen to be consistent across the four languages we will work with, but some languages will introduce slight variations. It is always a good idea to become familiar with the details of a language's numeric identifiers. This is especially true if you expect to be working with cases that involve the identifier's extreme values.

Single precision float

Single precision floating point numbers, or floats as they are more commonly referred to, are 32 bit floating point containers that allow for storing values with much greater precision than the integer types, typically 6 or 7 significant digits. Many languages use the float keyword or identifier for single precision float values, and that is the case for each of the four languages we are discussing. You should be aware that floating point values are subject to rounding errors because they cannot represent base-10 numbers exactly. The arithmetic of floating point types is a fairly complex topic, the details of which will not be pertinent to the majority of developers on any given day. However, it is still a good practice to familiarize yourself with the particulars of the underlying science as well as the implementation in each language.

Double precision float

Double precision floating point numbers, or doubles as they are more commonly referred to, are 64 bit floating point values that allow for storing values with much greater precision than the integer types, typically to 15 significant digits. Many languages use the double identifier for double precision float values, and that is also the case for each of the four languages: C#, Objective-C, Java, and Swift.

In most circumstances, it will not matter whether you choose float over double, unless memory space is a concern, in which case you will want to choose float whenever possible. Many argue that float is more performant than double under most conditions, and generally speaking, this is the case. However, there are other conditions where double will be more performant than float. The reality is that the efficiency of each type is going to vary from case to case, based on a number of criteria that are too numerous to detail in the context of this discussion. Therefore, if your particular application requires truly peak efficiency, you should research the requirements and environmental factors carefully and decide what is best for your situation. Otherwise, just use whichever container will get the job done and move on.

Currency

Due to the inherent inaccuracy found in floating point arithmetic, grounded in the fact that they are based on binary arithmetic, floats and doubles cannot accurately represent the base-10 multiples we use for currency. Representing currency as a float or double may seem like a good idea at first, as the software will round off the tiny errors in your arithmetic. However, as you begin to perform more and more complex arithmetic operations on these inexact results, your precision errors will begin to add up and result in serious inaccuracies and bugs that can be very difficult to track down. This makes float and double data types insufficient for working with currency, where perfect accuracy for multiples of 10 is essential.
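To make that inaccuracy concrete, here is a tiny illustration that is not from the original text; it is written in JavaScript, whose numbers are IEEE 754 doubles, the same representation used by double in C#, Java, Objective-C, and Swift:

    // Classic base-10 round-off: 0.1 and 0.2 have no exact binary representation.
    console.log(0.1 + 0.2);          // 0.30000000000000004
    console.log(0.1 + 0.2 === 0.3);  // false

    // Summing many small amounts drifts further from the exact decimal total.
    var total = 0;
    for (var i = 0; i < 10; i++) {
        total += 0.1;
    }
    console.log(total);              // 0.9999999999999999, not 1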
Typecasting

In the realm of computer science, type conversion or typecasting means converting an instance of one object or data type into another. This can be done through either implicit conversion, sometimes called coercion, or explicit conversion, otherwise known as casting. To fully appreciate casting, we also need to understand the difference between static and dynamic languages.

Statically versus dynamically typed languages

A statically typed language will perform its type checking at compile time. This means that when you try to build your solution, the compiler will verify and enforce each of the constraints that apply to the types in your application. If they are not enforced, you will receive an error and the application will not build. C#, Java, and Swift are all statically typed languages.

Dynamically typed languages, on the other hand, do most or all of their type checking at run time. This means that the application could build just fine, but experience a problem while it is actually running if the developer wasn't careful in how he wrote the code. Objective-C is a dynamically typed language because it uses a mixture of statically typed objects and dynamically typed objects. The Objective-C classes NSNumber and NSDecimalNumber are both examples of dynamically typed objects. Consider the following code example in Objective-C:

    double myDouble = @"chicken";
    NSNumber *myNumber = @"salad";

The compiler will throw an error on the first line, stating Initializing 'double' with an expression of incompatible type 'NSString *'. That's because double is a plain C object, and it is statically typed. The compiler knows what to do with this statically typed object before we even get to the build, so your build will fail.

However, the compiler will only throw a warning on the second line, stating Incompatible pointer types initializing 'NSNumber *' with an expression of type 'NSString *'. That's because NSNumber is an Objective-C class, and it is dynamically typed. The compiler is smart enough to catch your mistake, but it will allow the build to succeed (unless you have instructed the compiler to treat warnings as errors in your build settings).

Although the forthcoming crash at runtime is obvious in the previous example, there are cases where your app will function perfectly fine despite the warnings. However, no matter what type of language you are working with, it is always a good idea to consistently clean up your code warnings before moving on to new code. This helps keep your code clean and avoids any bugs that can be difficult to diagnose. On those rare occasions where it is not prudent to address the warning immediately, you should clearly document your code and explain the source of the warning so that other developers will understand your reasoning. As a last resort, you can take advantage of macros or pre-processor (pre-compiler) directives that can suppress warnings on a line by line basis.

Implicit and explicit casting

Implicit casting does not require any special syntax in your source code. This makes implicit casting somewhat convenient. However, since implicit casts do not define their types manually, the compiler cannot always determine which constraints apply to the conversion, and therefore will not be able to check these constraints until runtime. This makes the implicit cast also somewhat dangerous.
Consider the following code example in C#:

    double x = "54";

This is an implicit conversion because you have not told the compiler how to treat the string value. In this case, the conversion will fail when you try to build the application, and the compiler will throw an error for this line, stating Cannot implicitly convert type 'string' to 'double'. Now, consider the explicitly cast version of this example:

    double x = double.Parse("42");
    Console.WriteLine("40 + 2 = {0}", x);

    /* Output
    40 + 2 = 42
    */

This conversion is explicit and therefore type safe, assuming that the string value is parsable.

Widening and narrowing

When casting between two types, an important consideration is whether the result of the change is within the range of the target data type. If your source data type supports more bytes than your target data type, the cast is considered to be a narrowing conversion. Narrowing conversions are either casts that cannot be proven to always succeed or casts that are known to possibly lose information. For example, casting from a float to an integer will result in loss of information (precision in this case), as the result will be rounded off to the nearest whole number. In most statically typed languages, narrowing casts cannot be performed implicitly. Here is an example, borrowing from the C# single precision example:

    //C#
    piFloat = piDouble;

In this example, the compiler will throw an error, stating Cannot implicitly convert type 'double' to 'float'. An explicit conversion exists (are you missing a cast?). The compiler sees this as a narrowing conversion and treats the loss of precision as an error. The error message itself is helpful and suggests an explicit cast as a potential solution for our problem:

    //C#
    piFloat = (float)piDouble;

We have now explicitly cast the double value piDouble to a float, and the compiler no longer concerns itself with loss of precision.

If your source data type supports fewer bytes than your target data type, the cast is considered to be a widening conversion. Widening conversions will preserve the source object's value, but may change its representation in some way. Most statically typed languages will permit implicit widening casts. Let's borrow again from our previous C# example:

    //C#
    piDouble = piFloat;

In this example, the compiler is completely satisfied with the implicit conversion and the app will build. Let's expand the example further:

    //C#
    piDouble = (double)piFloat;

This explicit cast improves readability, but does not change the nature of the statement in any way. The compiler also finds this format to be completely acceptable, even if it is somewhat more verbose. Beyond improved readability, explicit casting when widening adds nothing to your application. Therefore, whether you want to use explicit casting when widening is a matter of personal preference.

Boolean data type

Boolean data types are intended to symbolize binary values, usually denoted by 1 and 0, true and false, or even YES and NO. Boolean types are used to represent truth logic, which is based on Boolean algebra. This is just a way of saying that Boolean values are used in conditional statements, such as if or while, to evaluate logic or repeat an execution conditionally.

Equality operations include any operations that compare the value of any two entities. The equality operators are:

  • == implies equal to
  • != implies not equal to

Relational operations include any operations that test a relation between two entities.
Boolean data type
Boolean data types are intended to symbolize binary values, usually denoted by 1 and 0, true and false, or even YES and NO. Boolean types are used to represent truth logic, which is based on Boolean algebra. This is just a way of saying that Boolean values are used in conditional statements, such as if or while, to evaluate logic or repeat an execution conditionally.
Equality operations include any operations that compare the value of any two entities. The equality operators are:
== implies equal to
!= implies not equal to
Relational operations include any operations that test a relation between two entities. The relational operators are:
> implies greater than
>= implies greater than or equal to
< implies less than
<= implies less than or equal to
Logic operations include any operations in your program that evaluate and manipulate Boolean values. There are three primary logic operators, namely AND, OR, and NOT. Another, slightly less commonly used operator is the exclusive or, or XOR, operator. All Boolean functions and statements can be built with these four basic operators.
The AND operator is the most exclusive comparator. Given two Boolean variables A and B, AND will return true if and only if both A and B are true. Boolean variables are often visualized using tools called truth tables. Consider the following truth table for the AND operator:
A  B  A ^ B
0  0  0
0  1  0
1  0  0
1  1  1
This table demonstrates the AND operator. When evaluating a conditional statement, 0 is considered to be false, while any other value is considered to be true. Only when the value of both A and B is true is the resulting comparison of A ^ B also true.
The OR operator is the inclusive operator. Given two Boolean variables A and B, OR will return true if either A or B is true, including the case when both A and B are true. Consider the following truth table for the OR operator:
A  B  A v B
0  0  0
0  1  1
1  0  1
1  1  1
Next, the NOT operator: NOT A is true when A is false, and false when A is true. Consider the following truth table for the NOT operator:
A  !A
0  1
1  0
Finally, the XOR operator is true when either A or B is true, but not both. Another way to say it is that XOR is true when A and B are different. There are many occasions where it is useful to evaluate an expression in this manner, so most computer architectures include it. Consider the following truth table for XOR:
A  B  A xor B
0  0  0
0  1  1
1  0  1
1  1  0
Operator precedence
Just as with arithmetic, comparison and Boolean operations have operator precedence. This means the architecture will give a higher precedence to one operator over another. Generally speaking, the Boolean order of operations for all languages is as follows:
Parentheses
Relational operators
Equality operators
Bitwise operators (not discussed)
NOT
AND
OR
XOR
Ternary operator
Assignment operators
It is extremely important to understand operator precedence when working with Boolean values, because misjudging how the architecture will evaluate complex logical operations will introduce bugs in your code that are very difficult to sort out. When in doubt, remember that, as in arithmetic, parentheses take the highest precedence, and anything defined within them will be evaluated first.
Short-circuiting
As you recall, AND only returns true when both of the operands are true, and OR returns true as soon as one operand is true. These characteristics sometimes make it possible to determine the outcome of an expression by evaluating only one of the operands. When your application stops evaluating an expression as soon as its overall outcome has been determined, it is called short-circuiting. There are three main reasons why you would want to use short-circuiting in your code. First, short-circuiting can improve your application's performance by limiting the number of operations your code must perform. Second, when later operands could potentially generate errors based on the value of a previous operand, short-circuiting can halt execution before the higher-risk operand is reached. Finally, short-circuiting can improve the readability of your code and reduce its complexity by eliminating the need for nested logical statements.
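As a concrete illustration of the last two points, here is a short C# sketch (not from the original text; the scores list is purely illustrative) showing how the && operator short-circuits past a potentially dangerous operand:
using System;
using System.Collections.Generic;

class ShortCircuitDemo
{
    static void Main()
    {
        List<int> scores = null;

        // scores.Count would throw a NullReferenceException if it were evaluated here.
        // Because && stops as soon as its left operand is false, the risky right-hand
        // operand is never reached when scores is null.
        if (scores != null && scores.Count > 0)
        {
            Console.WriteLine("First score: {0}", scores[0]);
        }
        else
        {
            Console.WriteLine("No scores to report.");
        }
    }
}
Notice that the single combined condition also replaces what would otherwise be two nested if statements, which is the readability benefit mentioned above.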
Strings
String data types are simply objects whose value is text. Under the hood, strings contain a sequential collection of read-only char objects. This read-only nature of a string object makes strings immutable, which means the objects cannot be changed once they have been created in memory. It is important to understand that changing any immutable object, not just a string, means your program is actually creating a new object in memory and discarding the old one. This is a more intensive operation than simply changing the value of an address in memory and requires more processing. Merging two strings together is called concatenation, and this is an even more costly procedure, as you are disposing of two objects before creating a new one. If you find that you are editing your string values frequently, or frequently concatenating strings together, be aware that your program is not as efficient as it could be.
Strings are strictly immutable in C#, Java, and Objective-C. It is interesting to note that the Swift documentation refers to strings as mutable. However, the behavior is similar to Java, in that, when a string is modified, it gets copied on assignment to another object. Therefore, although the documentation says otherwise, strings are effectively immutable in Swift as well.
Summary
In this article, you learned about the basic data types available to a programmer in each of the four most common mobile development languages. Numeric and floating point data type characteristics and operations are as much dependent on the underlying architecture as on the specifications of the language. You also learned about casting objects from one type to another, and how the type of cast is defined as either a widening cast or a narrowing cast depending on the size of the source and target data types in the conversion. Next, we discussed Boolean types and how they are used in comparators to affect program flow and execution. In this, we discussed the order of precedence of operators and nested operations. You also learned how to use short-circuiting to improve your code's performance. Finally, we examined the String data type and what it means to work with immutable objects.
Resources for Article:
Further resources on this subject:
Why Bother? – Basic [article]
Introducing Algorithm Design Paradigms [article]
Algorithm Analysis [article]
Game objective

Packt
04 Jan 2017
5 min read
In this article by Alan Thorn, author of the book Mastering Unity 5.x, we will look at the game objective and at asset preparation. Every game (except for experimental and experiential games) needs an objective for the player; something they must strive to do, not just within specific levels, but across the game overall. This objective is important not just for the player (to make the game fun), but also for the developer, for deciding how challenge, diversity and interest can be added to the mix. Before starting development, have a clearly stated and identified objective in mind. Challenges are introduced primarily as obstacles to the objective, and bonuses are 'things' that facilitate the objective; that make it possible and easier to achieve. For Dead Keys, the primary objective is to survive and reach the level end. Zombies threaten that objective by attacking and damaging the player, and bonuses exist along the way to make things more interesting.
I highly recommend using project management and team collaboration tools to chart, document and time-track tasks within your project. And you can do this for free too. Some online tools for this include Trello (https://trello.com), Bitrix 24 (https://www.bitrix24.com), BaseCamp (https://basecamp.com), FreedCamp (https://freedcamp.com), UnFuddle (https://unfuddle.com), BitBucket (https://bitbucket.org), Microsoft Visual Studio Team Services (https://www.visualstudio.com/en-us/products/visual-studio-team-services-vs.aspx), and Concord Contract Management (http://www.concordnow.com).
Asset preparation
When you've reached a clear decision on the initial concept and design, you're ready to prototype! This means building a Unity project demonstrating the core mechanic and game rules in action, as a playable sample. After this, you typically refine the design more, and repeat prototyping until arriving at an artefact you want to pursue. From here, the art team must produce assets (meshes and textures) based on concept art, the game design, and photographic references. When producing meshes and textures for Unity, some important guidelines should be followed to achieve optimal graphical performance in-game. This is about structuring and building assets in a smart way, so they export cleanly and easily from their originating software, and can then be imported with minimal fuss, performing as well as they can at run-time. Let's see some of these guidelines for meshes and textures.
Meshes - work only with good topology
Good mesh topology consists of all polygons in the model having only three or four sides (not more). Additionally, edge loops should flow in an ordered, regular way along the contours of the model, defining its shape and form.
Clean Topology
Unity automatically converts, on import, any NGons (polygons with more than four sides) into triangles, if the mesh has any. But it's better to build meshes without NGons, as opposed to relying on Unity's automated methods. Not only does this cultivate good habits at the modelling phase, but it avoids any automatic and unpredictable retopology of the mesh, which affects how it's shaded and animated.
Meshes - minimize polygon count
Every polygon in a mesh entails a rendering performance hit, insofar as a GPU needs time to process and render each polygon. Consequently, it's sensible to minimize the number of polygons in a mesh, even though modern graphics hardware is adept at working with many polygons.
It's good practice to minimize polygons where possible and to the degree that it doesn't detract from your central artistic vision and style. High-Poly Meshes! (Try reducing polygons where possible) There are many techniques available for reducing polygon counts. Most 3D applications (like 3DS Max, Maya and Blender) offer automated tools that decimate polygons in a mesh while retaining its basic shape and outline. However, these methods frequently make a mess of topology; leaving you with faces and edge loops leading in all directions. Even so, this can still be useful for reducing polygons in static meshes (Meshes that never animate), like statues or houses or chairs. However, it's typically bad for animated meshes where topology is especially important. Reducing Mesh Polygons with Automated Methods can produce messy topology! If you want to know the total vertex and face count of a mesh, you can use your 3D Software statistics. Blender, Maya, 3DS Max, and most 3D software, let you see vertex and face counts of selected meshes directly from the viewport. However, this information should only be considered a rough guide! This is because, after importing a mesh into Unity, the vertex count frequently turns out higher than expected! There are many reasons for this, explained in more depth online, here: http://docs.unity3d.com/Manual/OptimizingGraphicsPerformance.html In short, use the Unity Vertex Count as the final word on the actual Vertex Count of your mesh. To view the vertex-count for an imported mesh in Unity, click the right-arrow on the mesh thumbnail in the Project Panel. This shows the Internal Mesh asset. Select this asset, and then view the Vertex Count from the Preview Pane in the Object Inspector. Viewing the Vertex and Face Count for meshes in Unity Summary In this article, we've learned about what are game objectives and about asset preparation.
Introduction to Deep Learning

Packt
04 Jan 2017
19 min read
In this article by Dipayan Dev, the author of the book Deep Learning with Hadoop, we will see a brief introduction to concept of the deep learning and deep feed-forward networks. "By far the greatest danger of Artificial Intelligence is that people conclude too early that they understand it."                                                                                                                  - Eliezer Yudkowsky Ever thought, why it is often difficult to beat the computer in chess, even by the best players of the game? How Facebook is able to recognize your face among hundreds of millions photos? How your mobile phone can recognize your voice, and redirects the call to the correct person selecting from hundreds of contacts listed? The primary goal of this book is to deal with many of those queries, and to provide detailed solutions to the readers. This book can be used for a wide range of reasons by a variety of readers, however, we wrote the book with two main target audiences in mind. One of the primary target audiences are the undergraduate or graduate university students learning about deep learning and Artificial Intelligence; the second group of readers belongs to the software engineers who already have a knowledge of Big Data, deep learning, and statistical modeling, but want to rapidly gain the knowledge of how deep learning can be used for Big Data and vice versa. This article will mainly try to set the foundation of the readers by providing the basic concepts, terminologies, characteristics, and the major challenges of deep learning. The article will also put forward the classification of different deep network algorithms, which have been widely used by researchers in the last decade. Following are the main topics that this article will cover: Get started with deep learning Deep learning: A revolution in Artificial Intelligence Motivations for deep learning Classification of deep learning networks Ever since the dawn of civilization, people have always dreamt of building some artificial machines or robots which can behave and work exactly like human beings. From the Greek mythological characters to the ancient Hindu epics, there are numerous such examples, which clearly suggest people's interest and inclination towards creating and having an artificial life. During the initial computer generations, people had always wondered if the computer could ever become as intelligent as a human being! Going forward, even in medical science too, the need of automated machines became indispensable and almost unavoidable. With this need and constant research in the same field, Artificial Intelligence (AI) has turned out to be a flourishing technology with its various applications in several domains, such as image processing, video processing, and many other diagnosis tools in medical science too. Although there are many problems that are resolved by AI systems on a daily basis, nobody knows the specific rules for how an AI system is programmed! Few of the intuitive problems are as follows: Google search, which does a really good job of understanding what you type or speak As mentioned earlier, Facebook too, is somewhat good at recognizing your face, and hence, understanding your interests Moreover, with the integration of various other fields, for example, probability, linear algebra, statistics, machine learning, deep learning, and so on, AI has already gained a huge amount of popularity in the research field over the course of time. 
One of the key reasons for the early success of AI could be that it basically dealt with fundamental problems for which the computer did not require a vast amount of knowledge. For example, in 1997, IBM's Deep Blue chess-playing system was able to defeat the world champion Garry Kasparov [1]. Although this kind of achievement was substantial at the time, chess is bound by only a limited number of rules, so it was definitely not a burdensome task to train the computer on just those rules! Training a system with a fixed and limited number of rules is termed hard-coded knowledge of the computer. Many Artificial Intelligence projects have pursued this hard-coding of knowledge about various aspects of the world in traditional formal languages. As time progressed, this hard-coded knowledge did not seem to work for systems dealing with huge amounts of data. Moreover, the number of rules that the data were following also kept changing frequently. Therefore, most of the projects following that concept failed to live up to expectations.
The setbacks faced by this hard-coded knowledge implied that those artificially intelligent systems needed some way of generalizing patterns and rules from the supplied raw data, without the need for external spoon-feeding. The proficiency of a system to do so is termed machine learning. There are various successful machine learning implementations which we use in our daily life. A few of the most common and important implementations are as follows:
Spam detection: Given an e-mail in your inbox, the model can detect whether to put that e-mail in the spam folder or in the inbox folder. A common naive Bayes model can distinguish between such e-mails.
Credit card fraud detection: A model that can detect whether a number of transactions performed in a specific time interval were made by the original customer or not.
One of the most popular machine learning models, given by Mor-Yosef et al. [1990], used logistic regression to recommend whether a caesarean delivery was needed for the patient or not! There are many such models which have been implemented with the help of machine learning techniques.
The figure shows an example of different types of representation. Let's say we want to train the machine to detect some empty spaces in between the jelly beans. In the image on the right side, we have sparse jelly beans, and it would be easier for the AI system to determine the empty parts. However, in the image on the left side, we have extremely compact jelly beans, and hence, it will be an extremely difficult task for the machine to find the empty spaces. Images sourced from the USC-SIPI image database.
Although logistic regression can learn and decide based on the features given, it cannot influence or modify the way features are defined. For example, if that model was provided with a cesarean patient's report instead of the brain tumor patient's report, it would surely fail to predict the outcome, as the given features would never match with the trained data. This dependency of the machine learning systems on the representation of the data is not really unknown to us! In fact, most of our computer theory performs better based on how the data is represented. For example, the quality of database is considered based on the schema design. The execution of any database query, even on a thousand of million lines of data, becomes extremely fast if the schema is indexed properly. Therefore, the dependency of data representation of the AI systems should not surprise us. There are many such daily life examples too, where the representation of the data decides our efficiency. To locate a person from among 20 people is obviously easier than to locate the same from a crowd of 500 people. A visual representation of two different types of data representation in shown in preceding figure. Therefore, if the AI systems are fed with the appropriate featured data, even the hardest problems could be resolved. However, collecting and feeding the desired data in the correct way to the system has been a serious impediment for the computer programmer. There can be numerous real-time scenarios, where extracting the features could be a cumbersome task. Therefore, the way the data are represented decides the prime factors in the intelligence of the system. Finding cats from among a group of humans and cats could be extremely complicated if the features are not appropriate. We know that cats have tails; therefore, we might like to detect the presence of tails as a prominent feature. However, given the different tail shapes and sizes, it is often difficult to describe exactly how a tail will look like in terms of pixel values! Moreover, tails could sometimes be confused with the hands of humans. Also, overlapping of some objects could omit the presence of a cat's tail, making the image even more complicated. From all the above discussions, it can really be concluded that the success of AI systems depends, mainly, on how the data is represented. Also, various representations can ensnare and cache the different explanatory factors of all the disparities behind the data. Representation learning is one of the most popular and widely practiced learning approaches used to cope with these specific problems. Learning the representations of the next layer from the existing representation of data can be defined as representation learning. Ideally, all representation learning algorithms have this advantage of learning representations, which capture the underlying factors, a subset that might be applicable for each particular sub-task. A simple illustration is given in the following figure: The figure illustrates of representation learning. The middle layers are able to discover the explanatory factors (hidden layers, in blue rectangular boxes). Some of the factors explain each task's target, whereas some explain the inputs. However, while dealing with extracting some high-level data and features from a huge amount of raw data, which requires some sort of human-level understanding, has shown its limitations. There can be many such following examples: Differentiating the cry of two similar age babies. 
Identifying the image of a cat's eye in both day and night times. This becomes clumsy, because a cat's eyes glow at night unlike during daytime. In all these preceding edge cases, representation learning does not appear to behave exceptionally, and shows deterrent behavior. Deep learning, a sub-field of machine learning can rectify this major problem of representation learning by building multiple levels of representations or learning a hierarchy of features from a series of other simple representations and features [2] [8]. The figure shows how a deep learning system can represent the human image through identifying various combinations such as corners, contours, which can be defined in terms of edges. The preceding figure shows an illustration of a deep learning model. It is generally a cumbersome task for the computer to decode the meaning of raw unstructured input data, as represented by this image, as a collection of different pixel values. A mapping function, which will convert the group of pixels to identify the image, is, ideally, difficult to achieve. Also, to directly train the computer for these kinds of mapping looks almost insuperable. For these types of tasks, deep learning resolves the difficulty by creating a series of subset of mappings to reach the desired output. Each subset of mapping corresponds to a different set of layer of the model. The input contains the variables that one can observe, and hence, represented in the visible layers. From the given input, we can incrementally extract the abstract features of the data. As these values are not available or visible in the given data, these layers are termed as hidden layers. In the image, from the first layer of data, the edges can easily be identified just by a comparative study of the neighboring pixels. The second hidden layer can distinguish the corners and contours from the first layer's description of the edges. From this second hidden layer, which describes the corners and contours, the third hidden layer can identify the different parts of the specific objects. Ultimately, the different objects present in the image can be distinctly detected from the third layer. Image reprinted with permission from Ian Goodfellow, Yoshua Bengio, and Aaron Courville, Deep Learning, published by The MIT Press. Deep learning started its journey exclusively since 2006, Hinton et al. in 2006[2]; also Bengio et al. in 2007[3] initially focused on the MNIST digit classification problem. In the last few years, deep learning has seen major transitions from digits to object recognition in natural images. One of the major breakthroughs was achieved by Krizhevsky et al. in 2012 [4] using the ImageNet dataset 4. The scope of this book is mainly limited to deep learning, so before diving into it directly, the necessary definitions of deep learning should be provided. Many researchers have defined deep learning in many ways, and hence, in the last 10 years, it has gone through many explanations too! Following are few of the widely accepted definitions: As noted by GitHub, deep learning is a new area of machine learning research, which has been introduced with the objective of moving machine learning closer to one of its original goals: Artificial Intelligence. Deep learning is about learning multiple levels of representation and abstraction, which help to make sense of data such as images, sound, and text. 
As recently updated by Wikipedia, deep learning is a branch of machine learning based on a set of algorithms that attempt to model high-level abstractions in the data by using a deep graph with multiple processing layers, composed of multiple linear and non-linear transformations. As the definitions suggest, deep learning can also be considered as a special type of machine learning. Deep learning has achieved immense popularity in the field of data science with its ability to learn complex representation from various simple features. To have an in-depth grip on deep learning, we have listed out a few terminologies. The next topic of this article will help the readers lay a foundation of deep learning by providing various terminologies and important networks used for deep learning. Getting started with deep learning To understand the journey of deep learning in this book, one must know all the terminologies and basic concepts of machine learning. However, if the reader has already got enough insight into machine learning and related terms, they should feel free to ignore this section and jump to the next topic of this article. The readers who are enthusiastic about data science, and want to learn machine learning thoroughly, can follow Machine Learning by Tom M. Mitchell (1997) [5] and Machine Learning: a Probabilistic Perspective (2012) [6]. Image shows the scattered data points of social network analysis. Image sourced from Wikipedia. Neural networks do not perform miracles. But if used sensibly, they can produce some amazing results. Deep feed-forward networks Neural networks can be recurrent as well as feed-forward. Feed-forward networks do not have any loop associated in their graph, and are arranged in a set of layers. A network with many layers is said to be a deep network. In simple words, any neural network with two or more layers (hidden) is defined as deep feed-forward network or feed-forward neural network. Figure 4 shows a generic representation of a deep feed-forward neural network. Deep feed-forward network works on the principle that with an increase in depth, the network can also execute more sequential instructions. Instructions in sequence can offer great power, as these instructions can point to the earlier instruction. The aim of a feed-forward network is to generalize some function f. For example, classifier y=f/(x) maps from input x to category y. A deep feed-forward network modified the mapping, y=f(x; α), and learns the value of the parameter α, which gives the most appropriate value of the function. The following figure shows a simple representation of the deep-forward network to provide the architectural difference with the traditional neural network. Deep neural network is feed-forward network with many hidden layers: Datasets are considered to be the building blocks of a learning process. A dataset can be defined as a collection of interrelated sets of data, which is comprised of separate entities, but which can be used as a single entity depending on the use-case. The individual data elements of a dataset are called data points. The preceding figure gives the visual representation of the following data points: Unlabeled data: This part of data consists of human-generated objects, which can be easily obtained from the surroundings. Some of the examples are X-rays, log file data, news articles, speech, videos, tweets, and so on. Labelled data: Labelled data are normalized data from a set of unlabeled data. 
These types of data are usually well formatted, classified, tagged, and easily understandable by human beings for further processing. From the top-level understanding, the machine learning techniques can be classified as supervised and unsupervised learning based on how their learning process is carried out. Unsupervised learning In unsupervised learning algorithms, there is no desired output from the given input datasets. The system learns meaningful properties and features from its experience during the analysis of the dataset. During deep learning, the system generally tries to learn from the whole probability distribution of the data points. There are various types of unsupervised learning algorithms too, which perform clustering, which means separating the data points among clusters of similar types of data. However, with this type of learning, there is no feedback based on the final output, that is, there won't be any teacher to correct you! Figure 6 shows a basic overview of unsupervised clustering. A real life example of an unsupervised clustering algorithm is Google News. When we open a topic under Google News, it shows us a number of hyper-links redirecting to several pages. Each of those topics can be considered as a cluster of hyper-links that point to independent links. Supervised learning In supervised learning, unlike unsupervised learning, there is an expected output associated with every step of the experience. The system is given a dataset, and it already knows what the desired output will look like, along with the correct relationship between the input and output of every associated layer. This type of learning is often used for classification problems. A visual representation is given in Figure 7. Real-life examples of supervised learning are face detection, face recognition, and so on. Although supervised and unsupervised learning look like different identities, they are often connected to each other by various means. Hence, that fine line between these two learnings is often hazy to the student fraternity. The preceding statement can be formulated with the following mathematical expression: The general product rule of probability states that for an n number of datasets n ε ℝk, the joint distribution can be given fragmented as follows: The distribution signifies that the appeared unsupervised problem can be resolved by k number of supervised problems. Apart from this, the conditional probability of p (k | n), which is a supervised problem, can be solved using unsupervised learning algorithms to experience the joint distribution of p (n, k): Although these two types are not completely separate identities, they often help to classify the machine learning and deep learning algorithms based on the operations performed. Generally speaking, cluster formation, identifying the density of a population based on similarity, and so on are termed as unsupervised learning, whereas, structured formatted output, regression, classification, and so on are recognized as supervised learning. Semi-supervised learning As the name suggests, in this type of learning, both labelled and unlabeled data are used during the training. It's a class of supervised learning, which uses a vast amount of unlabeled data during training. For example, semi-supervised learning is used in Deep belief network (explained network), a type of deep network, where some layers learn the structure of the data (unsupervised), whereas one layer learns how to classify the data (supervised learning). 
In semi-supervised learning, unlabeled data from p (n) and labelled data from p (n, k) are used to predict the probability of k, given the probability of n, or p (k | n): Figure shows the impact of a large amount of unlabeled data during the semi-supervised learning technique. Art the top, it shows the decision boundary that the model puts after distinguishing the white and black circle. The figure at the bottom displays another decision boundary, which the model embraces. In that dataset, in addition to two different categories of circles, a collection of unlabeled data (grey circle) is also annexed. This type of training can be viewed as creating the cluster, and then marking those with the labelled data, which moves the decision boundary away from the high-density data region. Figure obtained from Wikipedia. Deep learning networks are all about representation of data. Therefore, semi-supervised learning is, generally, about learning a representation, whose objective function is given by the following: l = f (n) The objective of the equation is to determine the representation-based cluster. The preceding figure depicts the illustration of a semi-supervised learning. Readers can refer to Chapelle et al.'s book [7] to know more about semi-supervised learning methods. So, as we have already got a foundation of what Artificial Intelligence, machine learning, representation learning are, we can move our entire focus to elaborate on deep learning with further description. From the previously mentioned definition of deep learning, two major characteristics of deep learning can be pointed out as follows: A way of experiencing unsupervised and supervised learning of the feature representation through successive knowledge from subsequent abstract layers A model comprising of multiple abstract stages of non-linear information processing Summary In this article, we have explained most of these concepts in detail, and have also classified the various algorithms of deep learning.
Setting Up the Environment

Packt
04 Jan 2017
14 min read
In this article by Sohail Salehi, the author of the book Angular 2 Services, we start from the two fundamental questions you can ask when a new development tool is announced or launched: how different is the new tool from other competitor tools, and how enhanced is it when compared to its own previous versions? If we are going to invest our time in learning a new framework, common sense says we need to make sure we get a good return on our investment. There are so many good articles out there about the pros and cons of each framework. To me, choosing Angular 2 boils down to three aspects:
The foundation: Angular is introduced and supported by Google and targeted at "evergreen" modern browsers. This means we, as developers, don't need to look out for hacky solutions with each browser upgrade anymore. The browser will always be updated to the latest version available, letting Angular worry about the new changes and leaving us out of it. This way we can focus more on our development tasks.
The community: Think about the community as an asset. The bigger the community, the wider the range of solutions to a particular problem. Looking at the statistics, the Angular community is still way ahead of the others, and the good news is that this community is leaning towards being more involved and contributing more on all levels.
The solution: If you look at the previous JS frameworks, you will see most of them focusing on solving a problem for the browser first, and then for mobile devices. The argument for that could be simple: JS wasn't meant to be a language for mobile development. But things have changed to a great extent over recent years, and people now use mobile devices more than before. I personally believe a complex native mobile application – which is implemented in Java or C – is more performant compared to its equivalent implemented in JS. But the thing here is that not every mobile application needs to be complex. So business owners have started asking questions like: why do I need a machine-gun to kill a fly?
With that question in mind, Angular 2 chose a different approach. It solves the performance challenges faced by mobile devices first. In other words, if your Angular 2 application is fast enough in mobile environments, then it is lightning fast inside the "evergreen" browsers. So that is what we are going to do in this article. First we are going to learn about Angular 2 and the main problem it is going to solve. Then we talk a little bit about JavaScript history and the differences between Angular 2 and AngularJS 1. Introducing "The Sherlock Project" is next, and finally we install the tools and libraries we need to implement this project.
Introducing Angular 2
The previous JS frameworks we've used already have a fluid and easy workflow towards building web applications rapidly. But what we as developers are struggling with is the technical debt. In simple words, we could quickly build a web application with an impressive UI. But as the product kept growing and the change requests started kicking in, we had to deal with all the maintenance nightmares, which force a long list of maintenance tasks and heavy costs onto the business. Basically, the framework that used to be an amazing asset turned into a hairy liability (or technical debt, if you like). One of the major "revamps" in Angular 2 is the removal of a lot of modules, resulting in a lighter and faster core.
For example, if you are coming from an Angular 1.x background and don't see $scope or $log in the new version, don't panic, they are still available to you via other means, But there is no need to add overhead to the loading time if we are not going to use all modules. So taking the modules out of the core results in a better performance. So to answer the question, one of the main issues Angular 2 addresses is the performance issues. This is done through a lot of structural changes. There is no backward compatibility We don't have backward compatibility. If you have some Angular projects implemented with the previous version (v1.x), depending on the complexity of the project, I wouldn't recommend migrating to the new version. Most likely, you will end up hammering a lot of changes into your migrated Angular 2 project and at the end you will realize it was more cost effective if you would just create a new project based on Angular 2 from scratch. Please keep in mind, the previous versions of AngularJS and Angular 2 share just a name, but they have huge differences in their nature and that is the price we pay for a better performance. Previous knowledge of AngularJS 1.x is not necessary You might wondering if you need to know AngularJS 1.x before diving into Angular 2. Absolutely not. To be more specific, it might be even better if you didn't have any experience using Angular at all. This is because your mind wouldn't be preoccupied with obsolete rules and syntaxes. For example, we will see a lot of annotations in Angular 2 which might make you feel uncomfortable if you come from a Angular 1 background. Also, there are different types of dependency injections and child injectors which are totally new concepts that have been introduced in Angular 2. Moreover there are new features for templating and data-binding which help to improve loading time by asynchronous processing. The relationship between ECMAScript, AtScript and TypeScript The current edition of ECMAScript (ES5) is the one which is widely accepted among all well known browsers. You can think of it as the traditional JavaScript. Whatever code is written in ES5 can be executed directly in the browsers. The problem is most of modern JavaScript frameworks contain features which require more than the traditional JavaScript capabilities. That is why ES6 was introduced. With this edition – and any future ECMAScript editions – we will be able to empower JavaScript with the features we need. Now, the challenge is running the new code in the current browsers. Most browsers, nowadays recognize standard JavaScript codes only. So we need a mediator to transform ES6 to ES5. That mediator is called a transpiler and the technical term for transformations is transpiling. There are many good transpilers out there and you are free to choose whatever you feel comfortable with. Apart from TypeScript, you might want to consider Babel (babeljs.io) as your main transpiler. Google originally planned to use AtScript to implement Angular 2, but later they joined forces with Microsoft and introduced TypeScript as the official transpiler for Angular 2. The following figure summarizes the relationship between various editions of ECMAScript, AtScript and TypeScript. For more details about JavaScript, ECMAScript and how they evolved during the past decade visit the following link: https://en.wikipedia.org/wiki/ECMAScript Setting up tools and getting started! It is important to get the foundation right before installing anything else. 
Depending on your operating system, install Node.js and its package manager- npm. . You can find a detailed installation manual on Node.js official website. https://nodejs.org/en/ Make sure both Node.js and npm are installed globally (they are accessible system wide) and have the right permissions. At the time of writing npm comes with Node.js out of the box. But in case their policy changes in the future, you can always download the npm and follow the installation process from the following link. https://npmjs.com The next stop would be the IDE. Feel free to choose anything that you are comfortable with. Even a simple text editor will do. I am going to use WebStorm because of its embedded TypeScript syntax support and Angular 2 features which speeds up development process. Moreover it is light weight enough to handle the project we are about to develop. You can download it from here: https://jetbrains.com/webstorm/download We are going to use simple objects and arrays as a place holder for the data. But at some stage we will need to persist the data in our application. That means we need a database. We will use the Google's Firebase Realtime database for our needs. It is fast, it doesn't need to download or setup anything locally and more over it responds instantly to your requests and aligns perfectly with Angular's two-way data-binding. For now just leave the database as it is. You don't need to create any connections or database objects. Setting up the seed project The final requirement to get started would be an Angular 2 seed project to shape the initial structure of our project. If you look at the public source code repositories you can find several versions of these seeds. But I prefer the official one for two reasons: Custom made seeds usually come with a personal twist depending on the developers taste. Although sometimes it might be a great help, but since we are going to build everything from scratch and learn the fundamental concepts, they are not favorable to our project. The official seeds are usually minimal. They are very slim and don't contain overwhelming amount of 3rd party packages and environmental configurations. Speaking about packages, you might be wondering what happened to the other JavaScript packages we needed for this application. We didn't install anything else other than Node and NPM. The next section will answer this question. Setting up an Angular 2 project in WebStorm Assuming you have installed WebStorm, fire the IDE and and checkout a new project from a git repository. Now set the repository URL to: https://github.com/angular/angular2-seed.git and save the project in a folder called “the sherlock project” as shown in the figure below: Hit the Clone button and open the project in WebStorm. Next, click on the package.json file and observe the dependencies. As you see, this is a very lean seed with minimal configurations. It contains the Angular 2 plus required modules to get the project up and running. The first thing we need to do is install all required dependencies defined inside the package.json file. Right click on the file and select the “run npm install” option. Installing the packages for the first time will take a while. In the mean time, explore the devDependencies section of package.json in the editor. 
As you see, we have all the required bells and whistles to run the project including TypeScript, web server and so on to start the development: "devDependencies": { "@types/core-js": "^0.9.32", "@types/node": "^6.0.38", "angular2-template-loader": "^0.4.0", "awesome-typescript-loader": "^1.1.1", "css-loader": "^0.23.1", "raw-loader": "^0.5.1", "to-string-loader": "^1.1.4", "typescript": "^2.0.2", "webpack": "^1.12.9", "webpack-dev-server": "^1.14.0", "webpack-merge": "^0.8.4" }, We also have some nifty scripts defined inside package.json that automate useful processes. For example, to start a webserver and see the seed in action we can simply execute following command in a terminal: $ npm run server or we can right click on package.json file and select “Show npm Scripts”. This will open another side tab in WebStorm and shows all available scripts inside the current file. Basically all npm related commands (which you can run from command-line) are located inside the package.json file under the scripts key. That means if you have a special need, you can create your own script and add it there. You can also modify the current ones. Double click on start script and it will run the web server and loads the seed application on port 3000. That means if you visit http://localhost:3000 you will see the seed application in your browser: If you are wondering where the port number comes from, look into package.json file and examine the server key under the scripts section:    "server": "webpack-dev-server --inline --colors --progress --display-error-details --display-cached --port 3000 --content-base src", There is one more thing before we move on to the next topic. If you open any .ts file, WebStorm will ask you if you want it to transpile the code to JavaScript. If you say No once and it will never show up again. We don't need WebStorm to transpile for us because the start script is already contains a transpiler which takes care of all transformations for us. Front-end developers versus back-end developers Recently, I had an interesting conversation with a couple of colleagues of mine which worth sharing here in this article. One of them is an avid front-end developer and the other is a seasoned back-end developer. You guessed what I'm going to talk about: The debate between back-end/front-end developers and who is the better half. We have seen these kind of debates between back-end and front-end people in development communities long enough. But the interesting thing which – in my opinion – will show up more often in the next few months (years) is a fading border between the ends (front-end/back-end). It feels like the reason that some traditional front-end developers are holding up their guard against new changes in Angular 2, is not just because the syntax has changed thus causing a steep learning curve, but mostly because they now have to deal with concepts which have existed natively in back-end development for many years. Hence, the reason that back-end developers are becoming more open to the changes introduced in Angular 2 is mostly because these changes seem natural to them. Annotations or child dependency injections for example is not a big deal to back-enders, as much as it bothers the front-enders. I won't be surprised to see a noticeable shift in both camps in the years to come. 
Probably we will see more back-enders who are willing to use Angular as a good candidate for some – if not all – of their back-end projects, and probably we will see more front-enders taking Object-Oriented concepts and best practices more seriously. Given that JavaScript was originally a functional scripting language, they will probably try to catch up with the other camp as fast as they can. There is no comparison here, and I am not saying which camp has an advantage over the other one. My point is, before modern front-end frameworks, JavaScript was open to being used in quick and dirty inline scripts to solve problems quickly. While this is a very convenient approach, it causes serious problems when you want to scale a web application. Imagine the time and effort you might have to spend finding all of that dependent code and refactoring it to reflect the required changes. When it comes to scalability, we need a full separation between layers, and that requires developers to move away from traditional JavaScript and embrace more OOP best practices in their day-to-day development tasks. That is what has been practiced in all modern front-end frameworks, and Angular 2 takes it to the next level by completely restructuring the model-view-* concept and opening doors to future features which will eventually be a native part of any web browser.
Introducing "The Sherlock Project"
During the course of this journey we are going to dive into all new Angular 2 concepts by implementing a semi-AI project called "The Sherlock Project". This project will basically be about collecting facts, evaluating and ranking them, and making a decision about how truthful they are. To achieve this goal, we will implement a couple of services and inject them into the project, wherever they are needed. We will discuss one aspect of the project and will focus on one related service. At the end, all services will come together to act as one big project.
Summary
This article covered a brief introduction to Angular 2. We saw where Angular 2 comes from, what problems it is going to solve and how we can benefit from it. We talked about the project that we will be creating with Angular 2. We also saw which other tools are needed for our project and how to install and configure them. Finally, we cloned an Angular 2 seed project, installed all its dependencies and started a web server to examine the application output in a browser.
Resources for Article:
Further resources on this subject:
Introduction to JavaScript [article]
Third Party Libraries [article]
API with MongoDB and Node.js [article]
Deploying First Server

Packt
04 Jan 2017
16 min read
In this article by Kirill Shirinkin, the author of the book Getting Started with Terraform, we will learn how exactly Terraform works and how to use it. In this article we will learn a bit about Terraform's history, install the tool on our workstation, prepare a working environment, and run Terraform for the first time. After having everything ready for work, we will figure out what a Terraform provider is, and then we will take a quick tour of what AWS and EC2 are. With this knowledge in place, we will first create an EC2 instance by hand (just to understand the pain that Terraform will eliminate) and then we will do exactly the same with the help of a Terraform template. That will allow us to study the nature of the Terraform state file.
History of Terraform
Terraform was first released in July 2014 by a company called Hashicorp. That is the same company that brought us tools like Vagrant, Packer, Vault and some others. Being the fifth tool in the Hashicorp stack, it was focused on describing the complete infrastructure as code.
… From physical servers to containers to SaaS products, Terraform is able to create and compose all the components necessary to run any service or application. With Terraform, you describe your complete infrastructure as code, even as it spans multiple service providers. Your servers may come from AWS, your DNS may come from CloudFlare, and your database may come from Heroku. Terraform will build all these resources across all these providers in parallel. Terraform codifies knowledge about your infrastructure unlike any other tool before, and provides the workflow and tooling for safely changing and updating infrastructure.
- https://www.hashicorp.com/blog/terraform.html
Terraform is an open source tool released under the Mozilla Public License, version 2.0. The code is stored (as for all other tools by Hashicorp) on GitHub, and anyone can contribute to its development. As part of its Atlas product, Hashicorp also offers hosted Terraform Enterprise services, which solve some of the problems that the open source version doesn't. These include a central facility to run Terraform from, access control policies, remote state file storage, notifications, built-in GitHub integration and more.
Despite support for over 40 various providers, the main focus of the Hashicorp developers is on Amazon Web Services, Google Cloud and Microsoft Azure. All other providers are developed and supported by the community, meaning that if you are not using the main three, then you might have to contribute to the codebase yourself.
The code of Terraform is written in the Go programming language and is released as a single binary for all major operating systems. Windows, Mac OS X, FreeBSD, OpenBSD, Solaris and any Linux are supported in both 32-bit and 64-bit versions. Terraform is still a relatively new piece of tech, being just a bit over two years old. It changes a lot over time and gets new features with every release. After learning these facts, let's finally proceed to installing Terraform and setting up our workplace.
Preparing the work environment
In this article we will focus on using Terraform in a Linux environment. The general usage of the tool should be the same on all platforms, though some advanced topics and practices may differ slightly. As mentioned in the previous section, Terraform is distributed as a single binary, packaged inside a Zip archive. Unfortunately, Hashicorp does not provide native packages for operating systems. That means the first step is to install unzip.
Depending on your package manager, this could be done by running sudo yum install unzip or sudo apt-get install unzip, or it might even be installed already. In any case, after making sure that you can un-archive Zip files, proceed to downloading Terraform from the official website: https://www.terraform.io/downloads.html. Unzip it to any convenient folder. Make sure that this folder is available in your PATH environment variable. The full installation command sequence could look like this:
$> curl -O https://releases.hashicorp.com/terraform/0.7.2/terraform_0.7.2_linux_amd64.zip
$> sudo unzip terraform_0.7.2_linux_amd64.zip -d /usr/local/bin/
That will extract the Terraform binary to /usr/local/bin, which is already available in PATH on Linux systems. Finally, let's verify our installation:
$> terraform -v
Terraform v0.7.2
We have a working Terraform installation now. We are ready to write our first template. First, create an empty directory named packt-terraform and enter it:
$> mkdir packt-terraform && cd packt-terraform
When you run terraform commands, Terraform looks for files with the .tf extension in the directory you run it from. Be careful: Terraform will load all files with the .tf extension if you run it without arguments. Let's create our very first, not very useful yet, template:
$> touch template.tf
To apply a template, you need to run the terraform apply command. What does this applying mean? In Terraform, when you run apply, it will read your templates and try to create the infrastructure exactly as it's defined in them. For now, let's just apply our empty template:
$> terraform apply
Apply complete! Resources: 0 added, 0 changed, 0 destroyed.
After each run is finished, you get the number of resources that you've added, changed and destroyed. In this case, it did nothing, as we just have an empty file instead of a real template. To make Terraform do something useful, we first need to configure our provider, and even before that we need to find out what a provider is.
The many Terraform providers
A provider is something you use to configure access to the service you create resources for. For example, if you want to create AWS resources, you need to configure the AWS provider, which specifies the credentials needed to access the APIs of the many AWS services. At the moment of writing, Terraform has 43 providers. This impressive list includes not only major cloud providers like AWS and Google Cloud, but also smaller services, like Fastly, a Content Delivery Network (CDN) provider.
Not every provider requires explicit configuration. Some of them do not even deal with external services; instead, they provide resources for local entities. For example, you could use the TLS provider to generate keys and certificates. Still, most providers deal with one or another external API and require configuration. In this article we will be using the AWS provider. Before we configure it, let's have a short introduction to AWS. If you are already familiar with this platform, feel free to skip the next section and proceed directly to Configuring AWS Provider.
Short introduction to AWS
Amazon Web Services is a cloud offering from Amazon, an online retail giant. Back in the early 2000s, Amazon invested money into an automated platform which would provide services for things like networking, storage and computation to developers. Developers then no longer needed to manage the underlying infrastructure; instead, they would use the provided services via APIs to provision virtual machines, storage buckets and so on.
The platform, initially built to power Amazon itself, was opened for public usage in 2006. The first two services, Simple Storage Service (S3) and Elastic Compute Cloud (EC2), were released, and anyone could pay to use them. Fast forward 10 years: AWS now has over 70 different services, covering practically everything a modern infrastructure needs. There are services for virtual networking, queue processing, transactional emails, storage, DNS, relational databases, and many, many others. Businesses like Netflix have moved away from in-house hardware completely and are instead building a new type of infrastructure on top of cloud resources, getting significant benefits in terms of flexibility and cost savings, and focusing on their product rather than on scaling and maturing their own data center. With such an impressive list of services, it becomes increasingly hard to juggle all the involved components via the AWS Management Console, the in-browser interface for working with AWS. Of course, AWS provides APIs for every service it has, but once again, their number and intersections can be very high, and they only grow as you keep relying on the cloud. This leads to a situation where you end up either with intense ClickOps practices or with scripting everything you can. These problems make AWS a perfect candidate for exploring Terraform, as we can fully understand the pain caused by direct usage of its services. Of course, AWS is not free to use, but luckily it has provided a Free Tier for a long time now. The Free Tier allows you to use lots of (but not all) services for free, with certain limitations. For example, you can use a single EC2 instance for 750 hours a month for 12 months for free, as long as it has the t2.micro type. EC2 instances are, simply put, virtual servers. You pay for them per hour of usage, and you can choose from a pre-defined list of types, which are just different combinations of characteristics: some are optimized for high memory usage, others were created for processor-heavy tasks. Let's create a brand new AWS account for our Terraform learning goals, as follows: Open https://aws.amazon.com/free/ and click Create a free account. Follow the on-screen instructions to complete registration. Please note that in order to use the Free Tier you have to provide your credit card details, but you won't be charged unless you exceed your free usage limit. Using Elastic Compute Cloud Creating an instance through the Management Console Just to get a feel of the AWS Management Console, and to fully understand how much Terraform simplifies working with AWS, let's create a single EC2 instance manually. Log in to the console and choose EC2 from the list of services. Click on Launch Instance. Choose AWS Marketplace from the left sidebar, type Centos in the search box, and click the Select button for the first search result. On each of the next pages, just click Next until you reach the end of the process. As you can see, creating a single virtual server on EC2 is not really a fast process. You have to choose an AMI, pick an instance type, configure network details and permissions, select or generate an SSH key, properly tag it, pick the right security groups, and add storage. Imagine if your day consisted only of manual tasks like this; what a boring job it would be. An AMI is the source image an instance is created from. You can create your own AMIs, use the ones provided by AWS, or select one from the community at AWS Marketplace. A Security Group (SG) is like a firewall. You can attach multiple SGs to an instance and define inbound and outbound rules.
It allows you to configure access not only for IP ranges, but also for other security groups. And, of course, we looked at only a single service, EC2. As you already know, there are over 70 of them, each with its own interface to click through. Let's now take a look at how to achieve the same with the AWS CLI. Creating an instance with the AWS CLI AWS provides a CLI, written in Python, to interact with its APIs. You can follow the installation instructions from the official guide to get started: https://aws.amazon.com/cli/ Perhaps the most important part of setting up the AWS CLI is configuring access keys. We will also need these keys for Terraform. To get them, click on your username in the top-right part of the AWS Management Console, click on Security Credentials, and then download your keys from the Access Keys (Access Key ID and Secret Access Key) menu. Warning: using root account access keys is considered a bad practice when working with AWS. You should use IAM users and per-user keys. For our needs here, root keys are okay, but as soon as you move production systems to AWS, consider using IAM and reduce root account usage to a minimum. Once the AWS CLI is installed, run the aws configure command. It will prompt you for your access keys and region. Once you are finished, you can use it to talk to the AWS API. Creating an EC2 instance looks like this: $> aws ec2 run-instances --image-id ami-xxxxxxxx --count 1 --instance-type t2.micro --key-name MyKeyPair --security-groups my-sg While already much better than doing it from the Management Console, it is still a long command to execute, and it only covers creation. For tracking whether the instance is still there, updating it, and destroying it, you need to construct similarly long command-line calls. Let's finally do it properly: with Terraform. Configuring the AWS provider Before using Terraform to create an instance, we need to configure the AWS provider. This is the first piece of code we will write in our template. Templates are written in a special language called HashiCorp Configuration Language (HCL): https://github.com/hashicorp/hcl. You can also write your templates in JSON, but that is recommended only if the template itself is generated by a machine. There are several ways to configure credentials: Static credentials With this method, you simply hard-code your access keys right inside your template. It looks like this: provider "aws" { access_key = "xxxxxxxxxxxxx" secret_key = "xxxxxxxxxxxxx" region = "us-east-1" } Though the simplest, this is also the least flexible and least secure option. You don't want to hand your credentials around the team like this. Rather, each team member should use his or her own keys. Environment variables If not specified in the template, Terraform will try to read the configuration from the environment variables AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY. You can also set your region with the AWS_DEFAULT_REGION variable. In this case, the complete configuration goes down to: provider "aws" {} Credentials file If Terraform doesn't find keys in the template or environment variables, it will try to fetch them from a credentials file, which is typically stored in ~/.aws/credentials. If you previously installed and configured the AWS CLI, then you already have a credentials file generated for you. If you did not, you can add it yourself, with content like this: [default] aws_access_key_id = xxxxxxxxxxxxx aws_secret_access_key = xxxxxxxxxxxxx You should always avoid setting credentials directly in the template. A sketch of the environment-variable approach is shown below.
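The following is a minimal sketch of how a shell session using the environment-variable method might look. The key values are placeholders, not real credentials, and the region is only an example:

$> export AWS_ACCESS_KEY_ID="AKIAXXXXXXXXXXXXXXXX"
$> export AWS_SECRET_ACCESS_KEY="xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"
$> export AWS_DEFAULT_REGION="eu-central-1"

With these variables exported, the provider block in template.tf can stay as short as provider "aws" {}, and Terraform picks the credentials up from the environment at run time.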
It's up to you whether you use environment variables or the credentials file. Whichever method you pick, let's add the following configuration to template.tf: provider "aws" { region = "eu-central-1" } Running terraform apply still won't do anything, because we did not specify any resources we want our infrastructure to have. Let's do that now. Creating an EC2 instance with Terraform Resources are the components of your infrastructure. A resource can be something as complex as a complete virtual server, or something as simple as a DNS record. Each resource belongs to a provider, and the type of the resource is prefixed with the provider name. The configuration of a resource then takes this form: resource "provider-name_resource-type" "resource-name" { parameter_name = parameter_value } There are three types of things you can configure inside a resource block: resource-specific parameters, meta-parameters, and provisioners. For now, let's focus on resource-specific parameters. They are unique to each resource type. We will create an EC2 instance, which is done with the aws_instance resource. To create an instance, we need to set at least two parameters: ami and instance_type. Some parameters are required while others are optional; ami and instance_type are required. You can always check the complete list of available parameters in the docs, on the page dedicated to the particular resource. For example, to get the list and description of all aws_instance resource parameters, check out https://www.terraform.io/docs/providers/aws/r/instance.html. We'll be using the official CentOS 7 AMI. As we configured the AWS region to eu-central-1, we need the AMI ID for that region (ami-378f925b is used in the template below). We will use the t2.micro instance type, as it is the cheapest one and is available as part of the Free Tier offering. Update the template to look like this: # Provider configuration provider "aws" { region = "eu-central-1" } # Resource configuration resource "aws_instance" "hello-instance" { ami = "ami-378f925b" instance_type = "t2.micro" tags { Name = "hello-instance" } } You might also need to specify the subnet_id parameter if you don't have a default VPC; in that case, you will need to create a VPC and a subnet first, which you can do yourself now. As you noticed, HCL allows commenting your code using a hash sign in front of the text you want commented. There is another thing to look at: the tags parameter. Terraform is not limited to simple string values. You can also have numbers, boolean values (true, false), lists (["elem1", "elem2", "elem3"]), and maps. The tags parameter is a map of tags for the instance. Let's apply this template! $> terraform apply aws_instance.hello-instance: Creating... ami: "" => "ami-378f925b" < ……………………. > instance_type: "" => "t2.micro" key_name: "" => "<computed>" < ……………………. > tags.%: "" => "1" tags.Name: "" => "hello-instance" tenancy: "" => "<computed>" vpc_security_group_ids.#: "" => "<computed>" aws_instance.hello-instance: Still creating... (10s elapsed) aws_instance.hello-instance: Still creating... (20s elapsed) aws_instance.hello-instance: Still creating... (30s elapsed) aws_instance.hello-instance: Creation complete Apply complete! Resources: 1 added, 0 changed, 0 destroyed. The state of your infrastructure has been saved to the path below. This state is required to modify and destroy your infrastructure, so keep it safe. To inspect the complete state use the terraform show command. State path: terraform.tfstate Wow, that's a lot of output for a simple command creating a single instance.
Some parts of the output above were replaced with dots wrapped in angle brackets, so don't be surprised when you see even more parameter values when you actually run the command. Before digging into the output, let's first verify in the AWS Management Console that the instance was really created. With just 12 lines of code and a single Terraform command invocation, we got our EC2 instance running. So far, though, the result is not that different from using the AWS CLI: we only created a resource. What is of more interest is how we update and destroy this instance using the same template, and to understand how Terraform does that we need to learn what the state file is. Summary In this article, we learned how to update our server using the same template and, finally, how to destroy it. We now have a solid knowledge of Terraform basics and are ready to start templating existing infrastructure. Resources for Article: Further resources on this subject: Provision IaaS with Terraform [article] Start Treating your Infrastructure as Code [article] OpenStack Networking in a Nutshell [article]
article-image-bug-tracking

Bug Tracking

Packt
04 Jan 2017
11 min read
In this article by Eduardo Freitas, the author of the book Building Bots with Node.js, we will learn about Internet Relay Chat (IRC). IRC enables us to communicate in real time in the form of text. The chat runs over the TCP protocol in a client-server model. IRC supports group messaging, in the form of channels, as well as private messages. (For more resources related to this topic, see here.) IRC is organized into many networks with different audiences. Since IRC follows a client-server model, users need IRC clients to connect to IRC servers. IRC client software comes as packaged software as well as web-based clients, and some browsers even provide IRC clients as add-ons. Users can install a client on their systems and use it to connect to IRC servers or networks. While connecting to these IRC servers, users have to provide a unique nick (nickname) and either choose an existing channel for communication or start a new channel while connecting. In this article, we are going to develop one such IRC bot for bug tracking purposes. The bug tracking bot will provide information about bugs as well as details about a particular bug, all seamlessly within IRC channels themselves. It's going to be a one-window operation for a team when it comes to knowing about their bugs or defects. Great! IRC client and server As mentioned in the introduction, to initiate IRC communication we need an IRC client and a server or network to which our client will connect. We will be using the freenode network for our client to connect to. Freenode is the largest free and open source software focused IRC network. IRC web-based client I will be using the web-based IRC client at https://webchat.freenode.net/. After opening the URL, you will see a connection screen. As mentioned earlier, while connecting we need to provide a Nickname: and Channels:. I have provided Madan as the Nickname: and #BugsChannel under Channels:. In IRC, channels are always identified with #, so I added # to my bugs channel. This is the new channel that we will be starting for communication. All the developers or team members can similarly provide their nicknames and this channel name to join for communication. Now let's confirm Humanity: by selecting I'm not a robot and clicking the Connect button. Once connected, our IRC client is connected to the freenode network. You can also see the username on the right-hand side as @Madan within #BugsChannel. Whoever joins this channel using this channel name and network will be shown on the right-hand side. Later, we will ask our bot to join this channel on the same network and see how it appears within the channel. IRC bots An IRC bot is a program that connects to IRC as one of the clients and appears as one of the users in IRC channels. IRC bots are used to provide IRC services or to host chat-based custom implementations that help teams collaborate efficiently. Creating our first IRC bot using IRC and Node.js Let's start by creating a folder on our local drive from the command prompt, in order to store our bot program: mkdir ircbot cd ircbot Assuming we have Node.js and NPM installed, let's create and initialize our package.json, which will store our bot's dependencies and definitions: npm init Once you go through the npm init options (which are very easy to follow), you'll see the result in your project folder: your package.json file. A hedged example of what it might contain at this point is shown below.
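The following is only a sketch of a typical package.json produced by npm init; the exact contents depend on the answers you give to the prompts, and the description and author fields here are placeholders, not values from the book:

{
  "name": "ircbot",
  "version": "1.0.0",
  "description": "An IRC bot that reports bugs to a channel",
  "main": "app.js",
  "scripts": {
    "test": "echo \"Error: no test specified\" && exit 1"
  },
  "author": "",
  "license": "ISC"
}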
Let's install the irc package from NPM. It can be found at https://www.npmjs.com/package/irc. In order to install it, run this npm command: npm install --save irc You should then see something similar to this. Having done this, the next thing to do is to update your package.json in order to include the "engines" attribute. Open the package.json file with a text editor and update it as follows: "engines": { "node": ">=5.6.0" } Your package.json should then look like this. Let's create our app.js file, which will be the entry point to our bot, as mentioned while setting up our node package. Our app.js should look like this: var irc = require('irc'); var client = new irc.Client('irc.freenode.net', 'BugTrackerIRCBot', { autoConnect: false }); client.connect(5, function(serverReply) { console.log("Connected!\n", serverReply); client.join('#BugsChannel', function(input) { console.log("Joined #BugsChannel"); client.say('#BugsChannel', "Hi, there. I am an IRC Bot which tracks bugs or defects for your team.\n I can help you using the following commands.\n BUGREPORT \n BUG # <BUG. NO>"); }); }); Now let's run our Node.js program and first see how the console looks. If everything works well, the console should show our bot as connected to the required network and joined to the channel. Now if you look at our channel #BugsChannel in our web client, you should see that our bot has joined and has also sent a welcome message. If everything looks as described, our bot program has executed successfully. Our bot BugTrackerIRCBot has joined the channel #BugsChannel and sent an introduction message to everyone on the channel. If you look at the right side of the screen under usernames, we can see BugTrackerIRCBot below @Madan. Code understanding of our basic bot After seeing how our bot looks in the IRC client, let's look at the basic code implementation in app.js. We used the irc library with the following line: var irc = require('irc'); Using the irc library, we instantiated a client to connect to one of the IRC networks using the following code snippet: var client = new irc.Client('irc.freenode.net', 'BugTrackerIRCBot', { autoConnect: false }); Here we connected to the network irc.freenode.net and provided the nickname BugTrackerIRCBot. This name was chosen because I would like my bot to track and report bugs in future. Now we ask the client to connect and join a specific channel using the following code snippet: client.connect(5, function(serverReply) { console.log("Connected!\n", serverReply); client.join('#BugsChannel', function(input) { console.log("Joined #BugsChannel"); client.say('#BugsChannel', "Hi, there. I am an IRC Bot which tracks bugs or defects for your team.\n I can help you using the following commands.\n BUGREPORT \n BUG # <BUG. NO>"); }); }); In the preceding code snippet, once the client is connected, we get a reply from the server, which we log to the console. Once successfully connected, we ask the bot to join a channel using the following line: client.join('#BugsChannel', function(input) { Remember, #BugsChannel is what we joined from the web client at the start. Now, using client.join(), I am asking my bot to join the same channel. Once the bot has joined, it says a welcome message in the same channel using the client.say() function. Hopefully this has given you a basic understanding of our bot and its code implementation.
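Before moving on, here is a minimal sketch of how the bot could react to the two commands it advertises. The message listener shown (client.addListener('message', ...)) is the standard events interface of the irc package; the reply texts and the commented-out lookup helper are hypothetical placeholders, not code from the book:

// A sketch only: register this after creating the client in app.js.
client.addListener('message', function (from, to, text) {
  // Only react to messages posted in our bugs channel.
  if (to !== '#BugsChannel') {
    return;
  }
  if (text === 'BUGREPORT') {
    client.say(to, from + ': fetching the list of open bugs...');
  } else if (text.indexOf('BUG # ') === 0) {
    var bugNumber = text.replace('BUG # ', '').trim();
    client.say(to, from + ': looking up details for bug ' + bugNumber + '...');
    // lookupBug(bugNumber); // hypothetical helper, later backed by our bug store
  }
});

The replies here are just acknowledgements; in the enhanced bot they would be replaced with real data coming from the bug storage described next.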
Next, we will enhance our bot so that our teams can have an effective communication experience while chatting. Enhancing our BugTrackerIRCBot Having built a very basic IRC bot, let's enhance our BugTrackerIRCBot. As developers, we always want to know how our programs or systems are functioning. Typically, our testing teams test a system or program and log their bugs or defects into a bug tracking system. Developers can later look at those bugs and address them as part of the development life cycle. During this journey, developers collaborate and communicate over messaging platforms like IRC, and we would like to provide a unique experience there by leveraging IRC bots. So here is exactly what we are doing: we are creating a channel for communication that all the team members and our bot will join. In this channel, bugs will be reported and communicated based on developers' requests. If developers need additional information about a bug, the chat bot can also help them by providing a URL from the bug tracking system. Awesome! Before going into details, let me summarize how we are going to do this in the following steps: Enhance our basic bot program for a more conversational experience Set up a bug tracking system or bug storage where bugs will be stored and tracked for developers We just mentioned a bug storage system. Here, I would like to introduce DocumentDB, a NoSQL, JSON-based cloud storage system. What is DocumentDB? I have already explained NoSQL databases. DocumentDB is one such NoSQL database, where data is stored in JSON documents; it is offered by the Microsoft Azure platform. Details of DocumentDB can be found at https://azure.microsoft.com/en-in/services/documentdb/ Setting up DocumentDB for our BugTrackerIRCBot Assuming you already have a Microsoft Azure subscription, follow these steps to configure DocumentDB for your bot. Create an account ID for DocumentDB Let's create a new account called botdb from the Azure portal. Select the NoSQL API of type DocumentDB. Select the appropriate subscription and resources; I am using existing resources for this account, but you can also create a new dedicated resource for it. Once you enter all the required information, hit the Create button at the bottom to create the new DocumentDB account. The newly created account botdb will then appear in the account list. Create a collection and database Select the botdb account from the list of accounts shown. This will show various menu options such as Properties, Settings, Collections, and so on. Under this account, we need to create a collection to store bug data. To create a new collection, click on the Add Collection option. A form will be shown on the right side of the screen; enter the details as follows: we are creating a new collection called Bugs along with a new database named BugDB. Once this database is created, we can add other bug-related collections to the same database in future, using the Use existing option on the same screen. Once you have entered all the relevant data, click OK to create the database and the collection. The COLLECTION ID and DATABASE shown will be used when enhancing our bot.
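To give an idea of how the enhanced bot could read from this collection, here is a minimal sketch using the documentdb NPM package (the Node.js SDK for DocumentDB). The endpoint URI, the key placeholder, and the findBugById helper are assumptions for illustration only; they are not values or code from the book:

// npm install --save documentdb
var DocumentClient = require('documentdb').DocumentClient;

// Hypothetical endpoint and key placeholders; use the URI and primary key from your botdb account.
var dbClient = new DocumentClient('https://botdb.documents.azure.com:443/', {
  masterKey: '<primary-key-from-the-azure-portal>'
});

// 'dbs/BugDB/colls/Bugs' matches the database and collection created above.
function findBugById(bugId, callback) {
  var querySpec = {
    query: 'SELECT * FROM Bugs b WHERE b.id = @id',
    parameters: [{ name: '@id', value: bugId }]
  };
  dbClient.queryDocuments('dbs/BugDB/colls/Bugs', querySpec).toArray(function (err, results) {
    if (err) { return callback(err); }
    callback(null, results[0]); // undefined when no bug matches
  });
}

A helper like this could back the BUG # command sketched earlier, with the bot formatting the returned title, status, and url into a channel message.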
Create data for our BugTrackerIRCBot Now we have BugDB with a Bugs collection, which will hold all the data for our bugs. Let's add some data to the collection. To add data, use the Document Explorer menu option. This opens a screen showing the list of databases and collections created so far. Select our database BugDB and the collection Bugs from the available list. To create a JSON document for our Bugs collection, click on the Create option. This opens a New Document screen where you can enter JSON-based data. We will store id, status, title, description, priority, assignedto, and url attributes for each bug document stored in the Bugs collection. To save the JSON document in our collection, click the Save button. This way we can create sample records in the Bugs collection, which will later be wired up in our Node.js program. Summary Every development team needs bug tracking and reporting tools. There are typical needs of bug reporting and bug assignment, and in critical projects these needs become critical for project timelines as well. This article showed how we can provide a seamless experience to developers while they communicate with peers within a channel. To summarize, we saw how to use DocumentDB from Microsoft Azure: we created a new collection along with a new database to store bug data, and added some sample JSON documents to the Bugs collection. In today's world of collaboration, development teams that use such integrations and automations will be more efficient and effective in delivering quality products. Resources for Article: Further resources on this subject: Talking to Bot using Browser [article] Asynchronous Control Flow Patterns with ES2015 and beyond [article] Basic Website using Node.js and MySQL database [article]
article-image-hyper-v-architecture-and-components

Hyper-V Architecture and Components

Packt
04 Jan 2017
15 min read
In this article by Charbel Nemnom and Patrick Lownds, the authors of the book Windows Server 2016 Hyper-V Cookbook, Second Edition, we will look at the Hyper-V architecture, its most important components, and the differences between Windows Server 2016 Hyper-V, Nano Server, Hyper-V Server, Hyper-V Client, and VMware. Virtualization is not a feature or technology that everyone decided to adopt overnight; it is actually quite old. Some computers in the mid-60s already used virtualization, such as the IBM M44/44X, which could run multiple VMs using hardware and software abstraction. It is known as the first virtualization system and the origin of the term virtual machine. Although Hyper-V is only in its fifth version, Microsoft's virtualization technology is very mature. Everything started in 1988 with a company named Connectix, which had innovative products such as Connectix Virtual PC and Virtual Server, an x86 software emulation for Mac, Windows, and OS/2. In 2003, Microsoft acquired Connectix and a year later released Microsoft Virtual PC and Microsoft Virtual Server 2005. After many architectural improvements during the project Viridian, Microsoft released Hyper-V in 2008, the second version in 2009 (Windows Server 2008 R2), the third version in 2012 (Windows Server 2012), the fourth version in 2013 (Windows Server 2012 R2), and the current, fifth version in 2016 (Windows Server 2016). Over the past years, Microsoft has proven that Hyper-V is a strong and competitive solution for server virtualization, providing scalability, a flexible infrastructure, high availability, and resiliency. To better understand the different virtualization models, and how VMs are created and managed by Hyper-V, it is very important to know its core, architecture, and components. By doing so, you will understand how it works, be able to compare it with other solutions, and troubleshoot problems more easily. Microsoft has long told customers that Azure datacenters are powered by Microsoft Hyper-V, and the forthcoming Azure Stack will actually allow us to run Azure in our own datacenters on top of Windows Server 2016 Hyper-V as well. For more information about Azure Stack, please refer to the following link: https://azure.microsoft.com/en-us/overview/azure-stack/ Microsoft Hyper-V has proven over the years to be a very scalable platform for virtualizing any workload without exception. This appendix includes well-explained topics covering the most important Hyper-V architecture components, compared with other versions. (For more resources related to this topic, see here.) Understanding Hypervisors The Virtual Machine Manager (VMM), also known as the Hypervisor, is the software responsible for running multiple VMs on a single system. It is also responsible for the creation, preservation, division, system access, and management of the VMs running on the Hypervisor layer. These are the types of Hypervisors: VMM Type 2 VMM Hybrid VMM Type 1 VMM Type 2 This type runs the Hypervisor on top of an OS: the hardware sits at the bottom, then the OS, and then the Hypervisor running on top. Microsoft Virtual PC and VMware Workstation are examples of software that uses VMM Type 2. VMs pass hardware requests to the Hypervisor, then to the host OS, and finally to the hardware. That leads to performance and management limitations imposed by the host OS.
Type 2 is common for test environments, where VMs have modest hardware requirements and run alongside software applications installed in the host OS. VMM Hybrid When using the VMM Hybrid type, the Hypervisor runs at the same level as the OS. As both the Hypervisor and the OS share the same access to the hardware with the same priority, this is not as fast and safe as it could be. This is the type used by the Hyper-V predecessor, Microsoft Virtual Server 2005. VMM Type 1 VMM Type 1 has the Hypervisor running in a tiny software layer between the hardware and the partitions, managing and orchestrating hardware access. The host OS, known as the Parent Partition, runs on the same level as the Child Partitions, known as VMs. Due to the privileged access the Hypervisor has to the hardware, it provides more security, performance, and control over the partitions. This is the type used by Hyper-V since its first release. Hyper-V architecture Knowing how Hyper-V works and how its architecture is constructed will make it easier to understand its concepts and operations. The following sections explore the most important components of Hyper-V. Windows before Hyper-V Before we dive into the Hyper-V architecture details, it is easier to understand what happens after Hyper-V is installed by first looking at Windows without Hyper-V. In a normal Windows installation, instruction access is divided into four privilege levels in the processor, called rings. The most privileged level is Ring 0, with direct access to the hardware; this is where the Windows kernel sits. Ring 3 hosts the user level, where most common applications run with the least privileged access. Windows after Hyper-V When Hyper-V is installed, it needs a higher privilege than Ring 0 and must have dedicated access to the hardware. This is possible thanks to processor capabilities created by Intel and AMD, called Intel VT and AMD-V respectively, which allow the creation of a fifth ring called Ring -1. Hyper-V uses this ring to place its Hypervisor, giving it a higher privilege than Ring 0 and control over all access to the physical components. The OS architecture undergoes several changes after Hyper-V installation. Right after the first boot, the operating system boot loader (winload.exe) checks which processor is being used and loads the Hypervisor image at Ring -1 (using the Hvix64.exe file for Intel processors and Hvax64.exe for AMD processors). Then Windows Server is started, running on top of the Hypervisor alongside every VM. After the Hyper-V installation, Windows Server has the same privilege level as a VM and is responsible for managing VMs using several components. Differences between Windows Server 2016 Hyper-V, Nano Server, Hyper-V Server, Hyper-V Client, and VMware There are four different versions of Hyper-V: the role that is installed on Windows Server 2016 (Core or Full Server), the role that can be installed on a Nano Server, the free version called Hyper-V Server, and the Hyper-V that comes with Windows 10, called Hyper-V Client. The following sections explain the differences between all the versions and give a comparison between Hyper-V and its competitor, VMware. Windows Server 2016 Hyper-V Hyper-V is one of the most fascinating and most improved roles in Windows Server 2016.
Its fifth version goes beyond virtualization and helps us deliver the correct infrastructure to host your cloud environment. Hyper-V can be installed as a role in both the Windows Server Standard and Datacenter editions. In Windows Server 2012 and 2012 R2, the only difference between the editions was licensing: the Standard edition licensed two Windows Server guest OSes, whereas the Datacenter edition licensed an unlimited number. In Windows Server 2016, however, there are significant changes between the two editions, as shown in the following table:

Resource | Windows Server 2016 Datacenter edition | Windows Server 2016 Standard edition
Core functionality of Windows Server | Yes | Yes
OSes/Hyper-V Containers | Unlimited | 2
Windows Server Containers | Unlimited | Unlimited
Nano Server | Yes | Yes
Storage features for the software-defined datacenter, including Storage Spaces Direct and Storage Replica | Yes | N/A
Shielded VMs | Yes | N/A
Networking stack for the software-defined datacenter | Yes | N/A
Licensing model | Core + CAL | Core + CAL

As you can see in the preceding table, the Datacenter edition is designed for highly virtualized private and hybrid cloud environments, while the Standard edition is for low-density or non-virtualized (physical) environments. In Windows Server 2016, Microsoft is also changing the licensing model from per-processor to per-core licensing for the Standard and Datacenter editions. The following points will guide you in licensing the Windows Server 2016 Standard and Datacenter editions: All physical cores in the server must be licensed. In other words, servers are licensed based on the number of processor cores in the physical server. You need a minimum of 16 core licenses for each server. You need a minimum of 8 core licenses for each physical processor. The core licenses are sold in packs of two. Eight 2-core packs are the minimum required to license each physical server. The 2-core pack for each edition is one-eighth the price of a 2-processor license for the corresponding Windows Server 2012 R2 edition. The Standard edition provides rights for up to two OSEs or Hyper-V containers when all physical cores in the server are licensed. For every two additional VMs, all the cores in the server have to be licensed again. The price of 16 core licenses of Windows Server 2016 Datacenter and Standard edition is the same as the price of the 2-processor license of the corresponding Windows Server 2012 R2 edition. Existing customers with servers under a Software Assurance agreement will receive core grants as required, with documentation. As a quick worked example, a two-processor server with 12 cores per processor has 24 physical cores, so it needs 24 core licenses, that is, twelve 2-core packs. The original book also illustrates the new licensing model based on the number of 2-core pack licenses in a table (legend: gray cells represent licensing costs, white cells represent where additional licensing is required; the Windows Server 2016 Standard edition may need additional licensing). Nano Server Nano Server is a new headless, 64-bit-only installation option that installs "just enough OS", resulting in a dramatically smaller footprint, more uptime, and a smaller attack surface. Users can choose to add server roles as needed, including the Hyper-V, Scale-Out File Server, DNS Server, and IIS server roles. Users can also choose to install features, including container support, Defender, clustering, Desired State Configuration (DSC), and shielded VM support.
Nano Server is available in Windows Server 2016 for: physical machines, virtual machines, Hyper-V containers, and Windows Server containers. It supports the following inbox optional roles and features: Hyper-V, including container and shielded VM support; Datacenter Bridging; Defender; DNS Server; Desired State Configuration; Clustering; IIS; Network Performance Diagnostics Service (NPDS); System Center Virtual Machine Manager and System Center Operations Manager; Secure Startup; and Scale-Out File Server, including Storage Replica, MPIO, iSCSI initiator, and Data Deduplication. The Windows Server 2016 Hyper-V role can be installed on a Nano Server; this is a key Nano Server role, shrinking the OS footprint and minimizing the reboots required when Hyper-V is used to run virtualization hosts. Nano Server can be clustered, including in Hyper-V failover clusters. Hyper-V works the same on Nano Server as it does in Windows Server 2016, including all features, aside from a few caveats: All management must be performed remotely, using another Windows Server 2016 computer. Remote management consoles such as Hyper-V Manager, Failover Cluster Manager, PowerShell remoting, and management tools like System Center Virtual Machine Manager, as well as the new Azure web-based Server Management Tool (SMT), can all be used to manage a Nano Server environment. RemoteFX is not available. Microsoft Hyper-V Server 2016 Hyper-V Server 2016, the free virtualization solution from Microsoft, has all the features included in Windows Server 2016 Hyper-V. The only difference is that Microsoft Hyper-V Server does not include VM licenses or a graphical interface. Management can be done remotely using PowerShell, or with Hyper-V Manager from another Windows Server 2016 or Windows 10 machine. All the other Hyper-V features and limits of Windows Server 2016, including Failover Cluster, Shared Nothing Live Migration, RemoteFX, Discrete Device Assignment, and Hyper-V Replica, are included in the free Hyper-V version. Hyper-V Client In Windows 8, Microsoft introduced the first Hyper-V Client version; it is now in its third version with Windows 10. Users can have the same experience as with Windows Server 2016 Hyper-V on their desktops or tablets, making their test and development virtualization scenarios much easier. Hyper-V Client in Windows 10 goes beyond virtualization alone and helps Windows developers use containers by bringing Hyper-V Containers natively into Windows 10. This will further empower developers to build cloud applications benefiting from native container capabilities right in Windows. Since Hyper-V Containers utilize their own instance of the Windows kernel, the container is truly a server container all the way down to the kernel. Plus, with the flexibility of Windows container runtimes (Windows Server Containers or Hyper-V Containers), containers built on Windows 10 can be run on Windows Server 2016 as either Windows Server Containers or Hyper-V Containers. Because Windows 10 only supports Hyper-V containers, the Hyper-V feature must also be enabled. Hyper-V Client is present only in the Windows 10 Pro and Enterprise versions and requires the same CPU feature as Windows Server 2016, namely Second Level Address Translation (SLAT). Although Hyper-V Client is very similar to the server version, there are some components that are only present in Windows Server 2016 Hyper-V.
Here is a list of components you will find only in the server version: Hyper-V Replica, RemoteFX capability to virtualize GPUs, Discrete Device Assignment (DDA), Live Migration and Shared Nothing Live Migration, ReFS Accelerated VHDX Operations, SR-IOV networks, Remote Direct Memory Access (RDMA) and Switch Embedded Teaming (SET), Virtual Fibre Channel, Network Virtualization, Failover Clustering, Shielded VMs, and VM Monitoring. Even with these limitations, Hyper-V Client has very interesting features such as Storage Migration, VHDX, VMs running on SMB 3.1 file shares, PowerShell integration, Hyper-V Manager, the Hyper-V Extensible Switch, Quality of Service, Production Checkpoints, the same VM hardware limits as Windows Server 2016 Hyper-V, Dynamic Memory, Runtime Memory Resize, Nested Virtualization, DHCP Guard, Port Mirroring, NIC Device Naming, and much more. Windows Server 2016 Hyper-V versus VMware vSphere 6.0 VMware is the main competitor of Hyper-V, and the current version 6.0 offers VMware vSphere as a free standalone hypervisor, plus the vSphere Standard, Enterprise, and Enterprise Plus editions.
The following table compares the features of Windows Server 2012 R2 and Windows Server 2016 Hyper-V with VMware vSphere 6.0 and VMware vSphere 6.0 Enterprise Plus:

Feature | Windows Server 2012 R2 | Windows Server 2016 | VMware vSphere 6.0 | VMware vSphere 6.0 Enterprise Plus
Logical Processors | 320 | 512 | 480 | 480
Physical Memory | 4TB | 24TB | 6TB | 6TB/12TB
Virtual CPUs per Host | 2,048 | 2,048 | 4,096 | 4,096
Virtual CPUs per VM | 64 | 240 | 8 | 128
Memory per VM | 1TB | 12TB | 4TB | 4TB
Active VMs per Host | 1,024 | 1,024 | 1,024 | 1,024
Guest NUMA | Yes | Yes | Yes | Yes
Maximum Nodes | 64 | 64 | N/A | 64
Maximum VMs per Cluster | 8,000 | 8,000 | N/A | 8,000
VM Live Migration | Yes | Yes | No | Yes
VM Live Migration with Compression | Yes | Yes | N/A | No
VM Live Migration using RDMA | Yes | Yes | N/A | No
1GB Simultaneous Live Migrations | Unlimited | Unlimited | N/A | 4
10GB Simultaneous Live Migrations | Unlimited | Unlimited | N/A | 8
Live Storage Migration | Yes | Yes | No | Yes
Shared Nothing Live Migration | Yes | Yes | No | Yes
Cluster Rolling Upgrades | Yes | Yes | N/A | Yes
VM Replica Hot/Add Virtual Disk | Yes | Yes | Yes | Yes
Native 4-KB Disk Support | Yes | Yes | No | No
Maximum Virtual Disk Size | 64TB | 64TB | 2TB | 62TB
Maximum Pass-Through Disk Size | 256TB or more | 256TB or more | 64TB | 64TB
Extensible Network Switch | Yes | Yes | No | Third-party vendors
Network Virtualization | Yes | Yes | No | Requires vCloud Networking and Security
IPsec Task Offload | Yes | Yes | No | No
SR-IOV | Yes | Yes | N/A | Yes
Virtual NICs per VM | 12 | 12 | 10 | 10
VM NIC Device Naming | No | Yes | N/A | No
Guest OS Application Monitoring | Yes | Yes | No | No
Guest Clustering with Live Migration | Yes | Yes | N/A | No
Guest Clustering with Dynamic Memory | Yes | Yes | N/A | No
Shielded VMs | No | Yes | N/A | No

Summary In this article, we covered the Hyper-V architecture along with the most important components in Hyper-V, and the differences between Windows Server 2016 Hyper-V, Nano Server, Hyper-V Server, Hyper-V Client, and VMware. Resources for Article: Further resources on this subject: Storage Practices and Migration to Hyper-V 2016 [article] Proxmox VE Fundamentals [article] Designing and Building a vRealize Automation 6.2 Infrastructure [article]
article-image-testing-and-quality-control

Testing and Quality Control

Packt
04 Jan 2017
19 min read
In this article by Pablo Solar Vilariño and Carlos Pérez Sánchez, the authors of the book PHP Microservices, we will cover the following topics: (For more resources related to this topic, see here.) Test-driven development Behavior-driven development Acceptance test-driven development Tools Test-driven development Test-Driven Development (TDD) is part of the Agile philosophy, and it appears to solve a common developer problem: as an application evolves and grows, the code starts to decay; developers fix problems just to make it run, and every new line can introduce a new bug or even break other functions. Test-driven development is a learning technique that helps the developer learn about the domain problem of the application they are going to build, in an iterative, incremental, and constructivist way: Iterative because the technique always repeats the same process to deliver value Incremental because each iteration adds more unit tests to the suite Constructivist because everything we develop can be tested straight away during the process, giving us immediate feedback Also, once we finish developing each unit test or iteration, we can forget about it, because it will be kept from then on throughout the entire development process, helping us remember the domain problem through the unit test; this is a good approach for forgetful developers. It is very important to understand that TDD includes four things: analysis, design, development, and testing. In other words, doing TDD means understanding the domain problem, analyzing it correctly, designing the application well, developing well, and testing it. To be clear: TDD is not just about writing unit tests; it is the whole process of software development. TDD matches projects based on microservices perfectly, because using microservices in a large project means dividing it into small microservices or functionalities, which is like a group of little projects connected by a communication channel. The project size is independent of using TDD, because with this technique you divide each functionality into small examples, and for that it does not matter whether the project is big or small, even less so when the project is divided into microservices. Also, microservices are still better suited than a monolithic project, because the functionalities for the unit tests are organized in microservices, which helps developers know where to begin using TDD. How to do TDD? Doing TDD is not difficult; we just need to follow some steps and repeat them, improving our code and checking that we did not break anything. TDD involves the following steps: Write the unit test: It needs to be the simplest and clearest test possible, and once written it has to fail; this is mandatory. If it does not fail, there is something we are not doing properly. Run the tests: If the test fails, this is the moment to develop the minimum code needed to pass the test, just what is necessary, without coding additional things. Once you have developed the minimum code to pass the test, run the test again (step two); if it passes, go to the next step, and if not, fix it and run the test again. Improve the test: If you think it is possible to improve the code you wrote, do it and run the tests again (step two). If you think it is perfect, write a new unit test (step one). A minimal PHPUnit sketch of this red-green loop is shown below.
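The following is only a sketch of one red-green iteration using PHPUnit (assuming PHPUnit 5.4 or newer for the namespaced TestCase class); the Calculator class and its add() method are hypothetical examples, not code from the book:

<?php
// tests/CalculatorTest.php -- written first, so it fails (red) until Calculator exists.
use PHPUnit\Framework\TestCase;

class CalculatorTest extends TestCase
{
    public function testAddReturnsTheSumOfTwoIntegers()
    {
        $calculator = new Calculator();
        $this->assertSame(5, $calculator->add(2, 3));
    }
}

// src/Calculator.php -- the minimum code needed to turn the test green.
class Calculator
{
    public function add($a, $b)
    {
        return $a + $b;
    }
}

In a real project the test and the class live in separate files, and the refactor step would then clean up the implementation while the test keeps guarding the behavior.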
To do TDD, it is necessary to write the tests before implementing the function; if the tests are written after the implementation has started, it is not TDD, it is just testing. If we start implementing the application without tests and only create unit tests during or after the process, we are doing classic testing and missing the benefits of TDD. Developing functions without prior tests means that the abstract idea of the domain problem in your mind can be wrong, or it may be clear at the start but change during the development process, or the concepts can get mixed up. If we write the tests afterwards, we are only checking whether all the ideas in our mind were correct once the implementation is finished, so we will probably have to change some methods or even whole functionalities after spending time coding. Obviously, testing is always better than not testing, but doing TDD is still better than just classic testing. Why should I use TDD? TDD is the answer to questions such as: Where shall I begin? How can I do it? How can I write code that can be modified without breaking anything? How can I know what I have to implement? The goal is not to write many unit tests without purpose but to design them properly, following the requirements. In TDD, we do not think about implementing functions; we think about good examples of functions related to the domain problem, in order to remove the ambiguity created by the domain problem. In other words, by doing TDD we should reproduce a specific function or use case in as many examples as needed to describe that function or task without ambiguity or misinterpretation. TDD can be the best way to document your application. Using other software development methodologies, we start by thinking about what the architecture is going to be, which pattern is going to be used, how the communication between microservices is going to work, and so on, but what happens if, once we have all this planned, we realize it is not necessary? How much time will pass until we realize that? How much effort and money will we have spent? TDD defines the architecture of our application by creating little examples in many iterations until we realize what the architecture actually is; the examples slowly show us the steps to follow in order to discover the best structures, patterns, or tools to use, avoiding the waste of resources during the first stages of our application. This does not mean that we work without any architecture at all; obviously, we have to know whether our application is going to be a website or a mobile app and use a proper framework, and we have to know what the interoperability in the application will be. In our case it will be an application based on microservices, and that gives us enough support to start creating the first unit tests. The architectures that we remove are the architectures on top of the architecture; in other words, the usual up-front guidelines for developing an application. TDD will produce an architecture without ambiguity from unit testing. TDD is not a cure-all; in other words, it does not give the same results to a senior developer as to a junior developer, but it is useful for the entire team.
Let's look at some advantages of using TDD: Code reuse: Every functionality is created with only the code necessary to pass the tests in the second stage (green), which lets you see whether other functions use the same code structure or parts of a specific function, helping you reuse the code you wrote previously. Teamwork is easier: It allows you to be confident about your team colleagues' work. Some architects or senior developers do not trust developers with little experience and need to check their code before committing changes, creating a bottleneck at that point; TDD helps build trust in less experienced developers. Increases communication between team colleagues: Communication is more fluent, and the team shares its knowledge about the project as reflected in the unit tests. Avoids overdesigning the application in the first stages: As we said before, doing TDD gives you an overview of the application little by little, avoiding the creation of useless structures or patterns in your project that you may throw away in later stages. Unit tests are the best documentation: The best way to get a good overview of a specific functionality is to read its unit test; it will help you understand how it works better than human words. Allows discovering more use cases in the design stage: With every test you have to create, you understand better how the functionality should work and all the possible states the functionality can have. Increases the feeling of a job well done: With every commit of your code, you will have the feeling that it was done properly, because the rest of the unit tests pass without errors, so you will not be worried about breaking other functionalities. Increases the software quality: During the refactoring step, we spend our effort on making the code more efficient and maintainable, checking that the whole project still works properly after the changes. TDD algorithm The technical concepts and steps to follow in the TDD algorithm are easy and clear, and the proper way to apply it improves with practice. There are only three steps, called red, green, and refactor: Red – writing the unit tests It is possible to write a test even when the code is not written yet; you just need to think about whether it is possible to write a specification before implementing it. So, in this first step you should consider that the unit test you start writing is not exactly a unit test, but an example or specification of the functionality. In TDD, this first example or specification is not immovable; in other words, the unit test can be modified in the future. Before starting to write the first unit test, it is necessary to think about how the Software Under Test (SUT) is going to be: how the SUT code is going to look and how we would check that it works the way we want it to. The way TDD works drives us to design first whatever is most comfortable and clear, as long as it fits the requirements. Green – make the code work Once the example is written, we have to code the minimum needed to make it pass the test; in other words, to set the unit test to green. It does not matter if the code is ugly and not optimized; that will be our task in the next step and in later iterations. In this step, the important thing is to write only the code necessary for the requirements, without any unnecessary extras. That does not mean writing without thinking about the functionality, but thinking about it so that it stays efficient.
It looks easy, but the first time you will realize that you write extra code. If you concentrate on this step, new questions will appear about the SUT behavior with different inputs, but you should be strong and avoid writing extra code related to other functionalities. Instead of coding them, take notes so you can turn them into functionalities in the next iterations. Refactor – eliminate redundancy Refactoring is not the same as rewriting code. You should be able to change the design without changing the behavior. In this step, you should remove duplication from your code and check whether the code matches the principles of good practice, thinking about the efficiency, clarity, and future maintainability of the code. This part depends on the experience of each developer. The key to good refactoring is doing it in small steps. To refactor a functionality, the best way is to change a small part and then execute all the available tests; if they pass, continue with another small part, until you are happy with the obtained result. Behavior-driven development Behavior-Driven Development (BDD) is a process that extends the TDD technique and mixes it with other design ideas and business analysis input provided to the developers, in order to improve software development. In BDD, we test the scenarios and the classes' behavior in order to meet those scenarios, which can be composed of many classes. It is very useful to use a DSL so that the customer, project owner, business analyst, and developers share a common language; the goal is to have a ubiquitous language. What is BDD? As we said before, BDD is an Agile technique based on TDD and ATDD that promotes collaboration across the entire project team. The goal of BDD is that the entire team understands what the customer wants, and that the customer knows what the rest of the team understood from their specifications. Most of the time, when a project starts, the developers don't have the same point of view as the customer, and during the development process the customer realizes that maybe they did not explain the requirement well, or the developers did not understand it properly, so extra time is spent changing the code to meet the customer's needs. So, BDD means writing test cases in human language, using rules, or in a ubiquitous language, so that the customer and developers can understand them. It also defines a DSL for the tests. How does it work? It is necessary to define the features as user stories (we will explain what these are in the ATDD section of this article) and their acceptance criteria. Once the user story is defined, we have to focus on the possible scenarios, which describe the project behavior for a concrete user or situation using the DSL. The steps are: Given [context], When [event occurs], Then [outcome]. To sum up, the scenarios defined for a user story give us the acceptance criteria to check whether the feature is done; a short sketch of such a scenario is shown below.
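As an illustration only, here is how one such scenario might read in Gherkin-style syntax (the format used by BDD tools such as Behat in the PHP world); the login feature and its wording are hypothetical, not an example from the book:

Feature: User login
  In order to access my account
  As a registered user
  I want to log in with my credentials

  Scenario: Logging in with valid credentials
    Given a registered user "alice" with password "secret"
    When she submits the login form with those credentials
    Then she should see her account dashboard

Because the scenario is written in the team's ubiquitous language, the customer can validate it directly, while the developers can automate it as an executable specification.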
In ATDD, we start the implementation of our project in a way different from the traditional methodologies. The business requirements written in human language are replaced by executables agreed upon by some team members and also the customer. It is not about replacing the whole documentation, but only a part of the requirements. The advantages of using ATDD are the following: Real examples and a common language for the entire team to understand the domain It allows identifying the domain rules properly It is possible to know if a user story is finished in each iteration The workflow works from the first steps The development does not start until the tests are defined and accepted by the team ATDD algorithm The algorithm of ATDD is like that of TDD but reaches more people than only the developers; in other words, doing ATDD, the tests of each story are written in a meeting that includes the project owners, developers, and QA technicians because the entire team must understand what is necessary to do and why it is necessary, so they can see if it is what the code should do. The ATDD cycle is depicted in the following diagram: Discuss The starting point of the ATDD algorithm is the discussion. In this first step, the business has a meeting with the customer to clarify how the application should work, and the analyst should create the user stories from that conversation. Also, they should be able to explain the conditions of satisfaction of every user story in order to be translated into examples. By the end of the meeting, the examples should be clear and concise, so we can get a list of examples of user stories in order to cover all the needs of the customer, reviewed and understood for him. Also, the entire team will have a project overview in order to understand the business value of the user story, and in case the user story is too big, it could be divided into little user stories, getting the first one for the first iteration of this process. Distill High-level acceptance tests are written by the customer and the development team. In this step, the writing of the test cases that we got from the examples in the discussion step begins, and the entire team can take part in the discussion and help clarify the information or specify the real needs of that. The tests should cover all the examples that were discovered in the discussion step, and extra tests could be added during this process bit by bit till we understand the functionality better. At the end of this step, we will obtain the necessary tests written in human language, so the entire team (including the customer) can understand what they are going to do in the next step. These tests can be used like a documentation. Develop In this step, the development of acceptance test cases is begun by the development team and the project owner. The methodology to follow in this step is the same as TDD, the developers should create a test and watch it fail (Red) and then develop the minimum amount of lines to pass (Green). Once the acceptance tests are green, this should be verified and tested to be ready to be delivered. During this process, the developers may find new scenarios that need to be added into the tests or even if it needs a large amount of work, it could be pushed to the user story. At the end of this step, we will have software that passes the acceptance tests and maybe more comprehensive tests. 
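As a hypothetical illustration of the Develop step (again in Python rather than the PHP used later in this article), an acceptance test is written against a user story's condition of satisfaction and exercises the feature as a whole, while the unit tests from the TDD cycle cover the individual pieces. The user story, the register_user() function, and the in-memory repository below are invented for the example.

import unittest

# User story (hypothetical): "As a visitor, I want to register with my email
# so that I can log in later."
# Condition of satisfaction: a registered email can be found again,
# and registering the same email twice is rejected.

class UserRepository:
    def __init__(self):
        self._emails = set()

    def add(self, email):
        self._emails.add(email)

    def exists(self, email):
        return email in self._emails

def register_user(repository, email):
    if repository.exists(email):
        raise ValueError("email already registered")
    repository.add(email)

class RegistrationAcceptanceTest(unittest.TestCase):
    def test_registered_email_can_be_found(self):
        repo = UserRepository()
        register_user(repo, "ada@example.com")
        self.assertTrue(repo.exists("ada@example.com"))

    def test_duplicate_registration_is_rejected(self):
        repo = UserRepository()
        register_user(repo, "ada@example.com")
        with self.assertRaises(ValueError):
            register_user(repo, "ada@example.com")

if __name__ == '__main__':
    unittest.main()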
Demo

The created functionality is shown by running the acceptance test cases and by manually exploring the features of the new functionality. After the demonstration, the team discusses whether the user story was done properly and whether it meets the product owner's needs, and decides if it can continue with the next story.

Tools

Now that you know more about TDD and BDD, it is time to look at a few tools you can use in your development workflow. There are a lot of tools available, but we will only explain the most used ones.

Composer

Composer is a PHP tool used to manage software dependencies. You only need to declare the libraries your project needs and Composer will manage them, installing and updating them when necessary. This tool has only a few requirements: if you have PHP 5.3.2+, you are ready to go. In the case of a missing requirement, Composer will warn you. You could install this dependency manager on your development machine, but since we are using Docker, we are going to install it directly on our PHP-FPM containers. Installing Composer in Docker is very easy; you only need to add the following rule to the Dockerfile:

RUN curl -sS https://getcomposer.org/installer | php -- --install-dir=/usr/bin/ --filename=composer

PHPUnit

Another tool we need for our project is PHPUnit, a unit test framework. As before, we will be adding this tool to our PHP-FPM containers to keep our development machine clean. If you are wondering why we are not installing anything on our development machine except for Docker, the answer is clear: having everything in the containers helps you avoid conflicts with other projects and gives you the flexibility of changing versions without being too worried. Add the following RUN command to your PHP-FPM Dockerfile, and you will have the latest PHPUnit version installed and ready to use:

RUN curl -sSL https://phar.phpunit.de/phpunit.phar -o /usr/bin/phpunit && chmod +x /usr/bin/phpunit

Now that we have all our requirements, it is time to install our PHP framework and start doing some TDD stuff. Later, we will continue updating our Docker environment with new tools. We chose Lumen for our example; please feel free to adapt all the examples to your favorite framework. Our source code will live inside our containers, but at this point of development we do not want immutable containers. We want every change we make to our code to be available instantaneously in our containers, so we will be using a container as a storage volume. To create a container with our source and use it as a storage volume, we only need to edit our docker-compose.yml and create one source container per microservice, as follows:

source_battle:
    image: nginx:stable
    volumes:
        - ../source/battle:/var/www/html
    command: "true"

The preceding piece of code defines a container named source_battle, and it stores our battle source (located at ../source/battle relative to the docker-compose.yml path). Once we have our source container available, we can edit each one of our services and assign a volume. For instance, we can add the following lines to our microservice_battle_fpm and microservice_battle_nginx container descriptions:

volumes_from:
    - source_battle

Our battle source will be available in our source container in the path /var/www/html, and the remaining step to install Lumen is a simple Composer execution.
First, you need to be sure that your infrastructure is up, with a simple command, as follows:

$ docker-compose up

The preceding command spins up our containers and outputs the log to the standard IO. Now that we are sure that everything is up and running, we need to enter our PHP-FPM containers and install Lumen. If you need to know the names assigned to each of your containers, you can do a $ docker ps and copy the container name. As an example, we are going to enter the battle PHP-FPM container with the following command:

$ docker exec -it docker_microservice_battle_fpm_1 /bin/bash

The preceding command opens an interactive shell in your container, so you can do anything you want; let's install Lumen with a single command:

# cd /var/www/html && composer create-project --prefer-dist laravel/lumen .

Repeat the preceding commands for each of your microservices. Now you have everything ready to start writing unit tests and coding your application.

Summary

In this article, you learned about test-driven development, behavior-driven development, acceptance test-driven development, and PHPUnit.

Resources for Article: Further resources on this subject: Running Simpletest and PHPUnit [Article] Understanding PHP basics [Article] The Multi-Table Query Generator using phpMyAdmin and MySQL [Article]

article-image-neural-network
Packt
04 Jan 2017
11 min read
Save for later

What is an Artificial Neural Network?

In this article by Prateek Joshi, author of book Artificial Intelligence with Python, we are going to learn about artificial neural networks. We will start with an introduction to artificial neural networks and the installation of the relevant library. We will discuss perceptron and how to build a classifier based on that. We will learn about single layer neural networks and multilayer neural networks. (For more resources related to this topic, see here.) Introduction to artificial neural networks One of the fundamental premises of Artificial Intelligence is to build machines that can perform tasks that require human intelligence. The human brain is amazing at learning new things. Why not use the model of the human brain to build a machine? An artificial neural network is a model designed to simulate the learning process of the human brain. Artificial neural networks are designed such that they can identify the underlying patterns in data and learn from them. They can be used for various tasks such as classification, regression, segmentation, and so on. We need to convert any given data into the numerical form before feeding it into the neural network. For example, we deal with many different types of data including visual, textual, time-series, and so on. We need to figure out how to represent problems in a way that can be understood by artificial neural networks. Building a neural network The human learning process is hierarchical. We have various stages in our brain’s neural network and each stage corresponds to a different granularity. Some stages learn simple things and some stages learn more complex things. Let’s consider an example of visually recognizing an object. When we look at a box, the first stage identifies simple things like corners and edges. The next stage identifies the generic shape and the stage after that identifies what kind of object it is. This process differs for different tasks, but you get the idea! By building this hierarchy, our human brain quickly separates the concepts and identifies the given object. To simulate the learning process of the human brain, an artificial neural network is built using layers of neurons. These neurons are inspired from the biological neurons we discussed in the previous paragraph. Each layer in an artificial neural network is a set of independent neurons. Each neuron in a layer is connected to neurons in the adjacent layer. Training a neural network If we are dealing with N-dimensional input data, then the input layer will consist of N neurons. If we have M distinct classes in our training data, then the output layer will consist of M neurons. The layers between the input and output layers are called hidden layers. A simple neural network will consist of a couple of layers and a deep neural network will consist of many layers. Consider the case where we want to use a neural network to classify the given data. The first step is to collect the appropriate training data and label it. Each neuron acts as a simple function and the neural network trains itself until the error goes below a certain value. The error is basically the difference between the predicted output and the actual output. Based on how big the error is, the neural network adjusts itself and retrains until it gets closer to the solution. You can learn more about neural networks here: http://pages.cs.wisc.edu/~bolo/shipyard/neural/local.html. We will be using a library called NeuroLab . You can find more about it here: https://pythonhosted.org/neurolab. 
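Before installing NeuroLab, the training idea just described — a neuron producing an output and the network adjusting itself based on the error between the predicted and the actual output — can be sketched in a few lines of plain NumPy. The example below is hypothetical and not part of this article's code: a single neuron trained with the classic perceptron update rule on an AND-gate dataset.

import numpy as np

# Tiny training set: the logical AND function.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y = np.array([0, 0, 0, 1])

weights = np.zeros(2)
bias = 0.0
learning_rate = 0.1

for epoch in range(10):
    for inputs, target in zip(X, y):
        # Weighted sum plus bias, passed through a step activation.
        predicted = 1 if np.dot(weights, inputs) + bias > 0 else 0
        # The error between actual and predicted output drives the adjustment.
        error = target - predicted
        weights += learning_rate * error * inputs
        bias += learning_rate * error

print('Learned weights:', weights, 'bias:', bias)
for inputs in X:
    print(inputs, '->', 1 if np.dot(weights, inputs) + bias > 0 else 0)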
You can install NeuroLab by running the following command on your Terminal:

$ pip3 install neurolab

Once you have installed it, you can proceed to the next section.

Building a perceptron based classifier

Perceptron is the building block of an artificial neural network. It is a single neuron that takes inputs, performs computation on them, and then produces an output. It uses a simple linear function to make the decision. Let's say we are dealing with an N-dimensional input datapoint. A perceptron computes the weighted summation of those N numbers and then adds a constant to produce the output. The constant is called the bias of the neuron. It is remarkable to note that these simple perceptrons are used to design very complex deep neural networks. Let's see how to build a perceptron based classifier using NeuroLab. Create a new Python file and import the following packages:

import numpy as np
import matplotlib.pyplot as plt
import neurolab as nl

Load the input data from the text file data_perceptron.txt provided to you. Each line contains space-separated numbers where the first two numbers are the features and the last number is the label:

# Load input data
text = np.loadtxt('data_perceptron.txt')

Separate the text into datapoints and labels:

# Separate datapoints and labels
data = text[:, :2]
labels = text[:, 2].reshape((text.shape[0], 1))

Plot the datapoints:

# Plot input data
plt.figure()
plt.scatter(data[:,0], data[:,1])
plt.xlabel('Dimension 1')
plt.ylabel('Dimension 2')
plt.title('Input data')

Define the maximum and minimum values that each dimension can take:

# Define minimum and maximum values for each dimension
dim1_min, dim1_max, dim2_min, dim2_max = 0, 1, 0, 1

Since the data is separated into two classes, we just need one bit to represent the output. So the output layer will contain a single neuron.

# Number of neurons in the output layer
num_output = labels.shape[1]

We have a dataset where the datapoints are 2-dimensional. Let's define a perceptron with 2 input neurons, where we assign one neuron for each dimension.

# Define a perceptron with 2 input neurons (because we
# have 2 dimensions in the input data)
dim1 = [dim1_min, dim1_max]
dim2 = [dim2_min, dim2_max]
perceptron = nl.net.newp([dim1, dim2], num_output)

Train the perceptron with the training data:

# Train the perceptron using the data
error_progress = perceptron.train(data, labels, epochs=100, show=20, lr=0.03)

Plot the training progress using the error metric:

# Plot the training progress
plt.figure()
plt.plot(error_progress)
plt.xlabel('Number of epochs')
plt.ylabel('Training error')
plt.title('Training error progress')
plt.grid()
plt.show()

The full code is given in the file perceptron_classifier.py. If you run the code, you will get two output figures: the first figure indicates the input datapoints, and the second represents the training progress using the error metric. As we can observe from that figure, the error goes down to 0 at the end of the fourth epoch.

Constructing a single layer neural network

A perceptron is a good start, but it cannot do much. The next step is to have a set of neurons act as a unit to see what we can achieve. Let's create a single-layer neural network that consists of independent neurons acting on input data to produce the output. Create a new Python file and import the following packages:

import numpy as np
import matplotlib.pyplot as plt
import neurolab as nl

Load the input data from the file data_simple_nn.txt provided to you. Each line in this file contains 4 numbers.
The first two numbers form the datapoint and the last two numbers are the labels. Why do we need to assign two numbers for the labels? Because we have 4 distinct classes in our dataset, so we need two bits to represent them.

# Load input data
text = np.loadtxt('data_simple_nn.txt')

Separate the data into datapoints and labels:

# Separate it into datapoints and labels
data = text[:, 0:2]
labels = text[:, 2:]

Plot the input data:

# Plot input data
plt.figure()
plt.scatter(data[:,0], data[:,1])
plt.xlabel('Dimension 1')
plt.ylabel('Dimension 2')
plt.title('Input data')

Extract the minimum and maximum values for each dimension (we don't need to hardcode them like we did in the previous section):

# Minimum and maximum values for each dimension
dim1_min, dim1_max = data[:,0].min(), data[:,0].max()
dim2_min, dim2_max = data[:,1].min(), data[:,1].max()

Define the number of neurons in the output layer:

# Define the number of neurons in the output layer
num_output = labels.shape[1]

Define a single layer neural network using the above parameters:

# Define a single-layer neural network
dim1 = [dim1_min, dim1_max]
dim2 = [dim2_min, dim2_max]
nn = nl.net.newp([dim1, dim2], num_output)

Train the neural network using the training data:

# Train the neural network
error_progress = nn.train(data, labels, epochs=100, show=20, lr=0.03)

Plot the training progress:

# Plot the training progress
plt.figure()
plt.plot(error_progress)
plt.xlabel('Number of epochs')
plt.ylabel('Training error')
plt.title('Training error progress')
plt.grid()
plt.show()

Define some sample test datapoints and run the network on those points:

# Run the classifier on test datapoints
print('\nTest results:')
data_test = [[0.4, 4.3], [4.4, 0.6], [4.7, 8.1]]
for item in data_test:
    print(item, '-->', nn.sim([item])[0])

The full code is given in the file simple_neural_network.py. If you run the code, you will get two figures: the first figure represents the input datapoints and the second shows the training progress. You will also see the test results printed on your Terminal. If you locate those test datapoints on a 2D graph, you can visually verify that the predicted outputs are correct.

Constructing a multilayer neural network

In order to enable higher accuracy, we need to give more freedom to the neural network. This means that a neural network needs more than one layer to extract the underlying patterns in the training data. Let's create a multilayer neural network to achieve that. Create a new Python file and import the following packages:

import numpy as np
import matplotlib.pyplot as plt
import neurolab as nl

In the previous two sections, we saw how to use a neural network as a classifier. In this section, we will see how to use a multilayer neural network as a regressor. Generate some sample datapoints based on the equation y = 3x^2 + 5 and then normalize the points:

# Generate some training data
min_val = -15
max_val = 15
num_points = 130
x = np.linspace(min_val, max_val, num_points)
y = 3 * np.square(x) + 5
y /= np.linalg.norm(y)

Reshape the above variables to create a training dataset:

# Create data and labels
data = x.reshape(num_points, 1)
labels = y.reshape(num_points, 1)

Plot the input data:

# Plot input data
plt.figure()
plt.scatter(data, labels)
plt.xlabel('Dimension 1')
plt.ylabel('Dimension 2')
plt.title('Input data')

Define a multilayer neural network with 2 hidden layers. You are free to design a neural network any way you want. For this case, let's have 10 neurons in the first layer and 6 neurons in the second layer.
Our task is to predict the value, so the output layer will contain a single neuron.

# Define a multilayer neural network with 2 hidden layers;
# First hidden layer consists of 10 neurons
# Second hidden layer consists of 6 neurons
# Output layer consists of 1 neuron
nn = nl.net.newff([[min_val, max_val]], [10, 6, 1])

Set the training algorithm to gradient descent:

# Set the training algorithm to gradient descent
nn.trainf = nl.train.train_gd

Train the neural network using the training data that was generated:

# Train the neural network
error_progress = nn.train(data, labels, epochs=2000, show=100, goal=0.01)

Run the neural network on the training datapoints:

# Run the neural network on training datapoints
output = nn.sim(data)
y_pred = output.reshape(num_points)

Plot the training progress:

# Plot training error
plt.figure()
plt.plot(error_progress)
plt.xlabel('Number of epochs')
plt.ylabel('Error')
plt.title('Training error progress')

Plot the predicted output:

# Plot the output
x_dense = np.linspace(min_val, max_val, num_points * 2)
y_dense_pred = nn.sim(x_dense.reshape(x_dense.size,1)).reshape(x_dense.size)
plt.figure()
plt.plot(x_dense, y_dense_pred, '-', x, y, '.', x, y_pred, 'p')
plt.title('Actual vs predicted')
plt.show()

The full code is given in the file multilayer_neural_network.py. If you run the code, you will get three figures: the first shows the input data, the second shows the training progress, and the third shows the predicted output overlaid on top of the input data. The predicted output seems to follow the general trend. If you continue to train the network and reduce the error, you will see that the predicted output matches the input curve even more accurately. You will also see the training progress printed on your Terminal.

Summary

In this article, we learnt more about artificial neural networks. We discussed how to build and train neural networks. We also talked about the perceptron and built a classifier based on it. We also learnt about single layer neural networks as well as multilayer neural networks.

Resources for Article: Further resources on this subject: Training and Visualizing a neural network with R [article] Implementing Artificial Neural Networks with TensorFlow [article] How to do Machine Learning with Python [article]
article-image-tensorflow
Packt
04 Jan 2017
17 min read
Save for later

TensorFlow

In this article by Nicholas McClure, the author of the book TensorFlow Machine Learning Cookbook, we will cover basic recipes in order to understand how TensorFlow works and how to access data for this book and additional resources: How TensorFlow works Declaring tensors Using placeholders and variables Working with matrices Declaring operations (For more resources related to this topic, see here.) Introduction Google's TensorFlow engine has a unique way of solving problems. This unique way allows us to solve machine learning problems very efficiently. We will cover the basic steps to understand how TensorFlow operates. This understanding is essential in understanding recipes for the rest of this book. How TensorFlow works At first, computation in TensorFlow may seem needlessly complicated. But there is a reason for it: because of how TensorFlow treats computation, developing more complicated algorithms is relatively easy. This recipe will talk you through the pseudo code of how a TensorFlow algorithm usually works. Getting ready Currently, TensorFlow is only supported on Mac and Linux distributions. Using TensorFlow on Windows requires the usage of a virtual machine. Throughout this book we will only concern ourselves with the Python library wrapper of TensorFlow. This book will use Python 3.4+ (https://www.python.org) and TensorFlow 0.7 (https://www.tensorflow.org). While TensorFlow can run on the CPU, it runs faster if it runs on the GPU, and it is supported on graphics cards with NVidia Compute Capability 3.0+. To run on a GPU, you will also need to download and install the NVidia Cuda Toolkit (https://developer.nvidia.com/cuda-downloads). Some of the recipes will rely on a current installation of the Python packages Scipy, Numpy, and Scikit-Learn as well. How to do it… Here we will introduce the general flow of TensorFlow algorithms. Most recipes will follow this outline: Import or generate data: All of our machine-learning algorithms will depend on data. In this book we will either generate data or use an outside source of data. Sometimes it is better to rely on generated data because we will want to know the expected outcome. Transform and normalize data: The data is usually not in the correct dimension or type that our TensorFlow algorithms expect. We will have to transform our data before we can use it. Most algorithms also expect normalized data and we will do this here as well. TensorFlow has built in functions that can normalize the data for you as follows: data = tf.nn.batch_norm_with_global_normalization(...) Set algorithm parameters: Our algorithms usually have a set of parameters that we hold constant throughout the procedure. For example, this can be the number of iterations, the learning rate, or other fixed parameters of our choosing. It is considered good form to initialize these together so the reader or user can easily find them, as follows: learning_rate = 0.01 iterations = 1000 Initialize variables and placeholders: TensorFlow depends on us telling it what it can and cannot modify. TensorFlow will modify the variables during optimization to minimize a loss function. To accomplish this, we feed in data through placeholders. We need to initialize both of these, variables and placeholders with size and type, so that TensorFlow knows what to expect. 
See the following code:

a_var = tf.constant(42)
x_input = tf.placeholder(tf.float32, [None, input_size])
y_input = tf.placeholder(tf.float32, [None, num_classes])

Define the model structure: After we have the data, and have initialized our variables and placeholders, we have to define the model. This is done by building a computational graph. We tell TensorFlow what operations must be done on the variables and placeholders to arrive at our model predictions:

y_pred = tf.add(tf.mul(x_input, weight_matrix), b_matrix)

Declare the loss function: After defining the model, we must be able to evaluate the output. This is where we declare the loss function. The loss function is very important as it tells us how far off our predictions are from the actual values:

loss = tf.reduce_mean(tf.square(y_actual - y_pred))

Initialize and train the model: Now that we have everything in place, we need to create an instance of our graph, feed in the data through the placeholders, and let TensorFlow change the variables to better predict our training data. Here is one way to initialize the computational graph:

with tf.Session(graph=graph) as session:
    ...
    session.run(...)
    ...

Note that we can also initiate our graph with:

session = tf.Session(graph=graph)
session.run(...)

(Optional) Evaluate the model: Once we have built and trained the model, we should evaluate the model by looking at how well it does with new data through some specified criteria. (Optional) Predict new outcomes: It is also important to know how to make predictions on new, unseen data. We can do this with all of our models once we have them trained.

How it works…

In TensorFlow, we have to set up the data, variables, placeholders, and model before we tell the program to train and change the variables to improve the predictions. TensorFlow accomplishes this through the computational graph. We tell it to minimize a loss function and TensorFlow does this by modifying the variables in the model. TensorFlow knows how to modify the variables because it keeps track of the computations in the model and automatically computes the gradients for every variable. Because of this, we can see how easy it can be to make changes and try different data sources.

See also

A great place to start is the official Python API TensorFlow documentation: https://www.tensorflow.org/versions/r0.7/api_docs/python/index.html There are also tutorials available: https://www.tensorflow.org/versions/r0.7/tutorials/index.html

Declaring tensors

Getting ready

Tensors are the data structure that TensorFlow operates on in the computational graph. We can declare these tensors as variables or feed them in as placeholders. First we must know how to create tensors. When we create a tensor and declare it to be a variable, TensorFlow creates several graph structures in our computation graph. It is also important to point out that just by creating a tensor, TensorFlow is not adding anything to the computational graph. TensorFlow does this only after creating a variable out of the tensor. See the next section on variables and placeholders for more information.

How to do it…

Here we will cover the main ways to create tensors in TensorFlow. Fixed tensors: Creating a zero-filled tensor. Use the following:

zero_tsr = tf.zeros([row_dim, col_dim])

Creating a one-filled tensor. Use the following:

ones_tsr = tf.ones([row_dim, col_dim])

Creating a constant-filled tensor.
Use the following:

filled_tsr = tf.fill([row_dim, col_dim], 42)

Creating a tensor out of an existing constant. Use the following:

constant_tsr = tf.constant([1,2,3])

Note that the tf.constant() function can be used to broadcast a value into an array, mimicking the behavior of tf.fill(), by writing tf.constant(42, [row_dim, col_dim]).

Tensors of similar shape: We can also initialize variables based on the shape of other tensors, as follows:

zeros_similar = tf.zeros_like(constant_tsr)
ones_similar = tf.ones_like(constant_tsr)

Note that since these tensors depend on prior tensors, we must initialize them in order. Attempting to initialize all the tensors at once will result in an error.

Sequence tensors: TensorFlow allows us to specify tensors that contain defined intervals. The following functions behave very similarly to the range() outputs and NumPy's linspace() outputs. See the following function:

linear_tsr = tf.linspace(start=0.0, stop=1.0, num=3)

The resulting tensor is the sequence [0.0, 0.5, 1.0]. Note that this function includes the specified stop value. See the following function:

integer_seq_tsr = tf.range(start=6, limit=15, delta=3)

The result is the sequence [6, 9, 12]. Note that this function does not include the limit value.

Random tensors: The following generated random numbers are from a uniform distribution:

randunif_tsr = tf.random_uniform([row_dim, col_dim], minval=0, maxval=1)

Note that this random uniform distribution draws from the interval that includes the minval but not the maxval (minval <= x < maxval). To get a tensor with random draws from a normal distribution, use the following:

randnorm_tsr = tf.random_normal([row_dim, col_dim], mean=0.0, stddev=1.0)

There are also times when we wish to generate normal random values that are assured within certain bounds. The truncated_normal() function always picks normal values within two standard deviations of the specified mean. See the following:

truncnorm_tsr = tf.truncated_normal([row_dim, col_dim], mean=0.0, stddev=1.0)

We might also be interested in randomizing entries of arrays. To accomplish this, there are two functions that help us: random_shuffle() and random_crop(). See the following:

shuffled_output = tf.random_shuffle(input_tensor)
cropped_output = tf.random_crop(input_tensor, crop_size)

Later on in this book, we will be interested in randomly cropping an image of size (height, width, 3) where there are three color channels. To fix a dimension in the cropped_output, you must give it the maximum size in that dimension:

cropped_image = tf.random_crop(my_image, [height/2, width/2, 3])

How it works…

Once we have decided on how to create the tensors, we may also create the corresponding variables by wrapping the tensor in the Variable() function, as follows (more on this in the next section):

my_var = tf.Variable(tf.zeros([row_dim, col_dim]))

There's more…

We are not limited to the built-in functions; we can convert any NumPy array, Python list, or constant to a tensor using the function convert_to_tensor(). Note that this function also accepts tensors as an input, in case we wish to generalize a computation inside a function.

Using placeholders and variables

Getting ready

One of the most important distinctions to make with data is whether it is a placeholder or a variable. Variables are the parameters of the algorithm and TensorFlow keeps track of how to change these to optimize the algorithm.
Placeholders are objects that allow you to feed in data of a specific type and shape or that depend on the results of the computational graph, like the expected outcome of a computation. How to do it… The main way to create a variable is by using the Variable() function, which takes a tensor as an input and outputs a variable. This is the declaration and we still need to initialize the variable. Initializing is what puts the variable with the corresponding methods on the computational graph. Here is an example of creating and initializing a variable: my_var = tf.Variable(tf.zeros([2,3])) sess = tf.Session() initialize_op = tf.initialize_all_variables() sess.run(initialize_op) To see whatthe computational graph looks like after creating and initializing a variable, see the next part in this section, How it works…, Figure 1. Placeholders are just holding the position for data to be fed into the graph. Placeholders get data from a feed_dict argument in the session. To put a placeholder in the graph, we must perform at least one operation on the placeholder. We initialize the graph, declare x to be a placeholder, and define y as the identity operation on x, which just returns x. We then create data to feed into the x placeholder and run the identity operation. It is worth noting that Tensorflow will not return a self-referenced placeholder in the feed dictionary. The code is shown below and the resulting graph is in the next section, How it works…: sess = tf.Session() x = tf.placeholder(tf.float32, shape=[2,2]) y = tf.identity(x) x_vals = np.random.rand(2,2) sess.run(y, feed_dict={x: x_vals}) # Note that sess.run(x, feed_dict={x: x_vals}) will result in a self-referencing error. How it works… The computational graph of initializing a variable as a tensor of zeros is seen in Figure 1to follow: Figure 1: Variable Figure 1: Here we can see what the computational graph looks like in detail with just one variable, initialized to all zeros. The grey shaded region is a very detailed view of the operations and constants involved. The main computational graph with less detail is the smaller graph outside of the grey region in the upper right. For more details on creating and visualizing graphs. Similarly, the computational graph of feeding a numpy array into a placeholder can be seen to follow, in Figure 2: Figure 2: Computational graph of an initialized placeholder Figure 2: Here is the computational graph of a placeholder initialized. The grey shaded region is a very detailed view of the operations and constants involved. The main computational graph with less detail is the smaller graph outside of the grey region in the upper right. There's more… During the run of the computational graph, we have to tell TensorFlow when to initialize the variables we have created. While each variable has an initializer method, the most common way to do this is with the helper function initialize_all_variables(). This function creates an operation in the graph that initializes all the variables we have created, as follows: initializer_op = tf.initialize_all_variables() But if we want to initialize a variable based on the results of initializing another variable, we have to initialize variables in the order we want, as follows: sess = tf.Session() first_var = tf.Variable(tf.zeros([2,3])) sess.run(first_var.initializer) second_var = tf.Variable(tf.zeros_like(first_var)) # Depends on first_var sess.run(second_var.initializer) Working with matrices Getting ready Many algorithms depend on matrix operations. 
TensorFlow gives us easy-to-use operations to perform such matrix calculations. For all of the following examples, we can create a graph session by running the following code: import tensorflow as tf sess = tf.Session() How to do it… Creating matrices: We can create two-dimensional matrices from numpy arrays or nested lists, as we described in the earlier section on tensors. We can also use the tensor creation functions and specify a two-dimensional shape for functions like zeros(), ones(), truncated_normal(), and so on: Tensorflow also allows us to create a diagonal matrix from a one dimensional array or list with the function diag(), as follows: identity_matrix = tf.diag([1.0, 1.0, 1.0]) # Identity matrix A = tf.truncated_normal([2, 3]) # 2x3 random normal matrix B = tf.fill([2,3], 5.0) # 2x3 constant matrix of 5's C = tf.random_uniform([3,2]) # 3x2 random uniform matrix D = tf.convert_to_tensor(np.array([[1., 2., 3.],[-3., -7., -1.],[0., 5., -2.]])) print(sess.run(identity_matrix)) [[ 1. 0. 0.] [ 0. 1. 0.] [ 0. 0. 1.]] print(sess.run(A)) [[ 0.96751703 0.11397751 -0.3438891 ] [-0.10132604 -0.8432678 0.29810596]] print(sess.run(B)) [[ 5. 5. 5.] [ 5. 5. 5.]] print(sess.run(C)) [[ 0.33184157 0.08907614] [ 0.53189191 0.67605299] [ 0.95889051 0.67061249]] print(sess.run(D)) [[ 1. 2. 3.] [-3. -7. -1.] [ 0. 5. -2.]] Note that if we were to run sess.run(C) again, we would reinitialize the random variables and end up with different random values. Addition and subtraction uses the following function: print(sess.run(A+B)) [[ 4.61596632 5.39771316 4.4325695 ] [ 3.26702736 5.14477345 4.98265553]] print(sess.run(B-B)) [[ 0. 0. 0.] [ 0. 0. 0.]] Multiplication print(sess.run(tf.matmul(B, identity_matrix))) [[ 5. 5. 5.] [ 5. 5. 5.]] Also, the function matmul() has arguments that specify whether or not to transpose the arguments before multiplication or whether each matrix is sparse. Transpose the arguments as follows: print(sess.run(tf.transpose(C))) [[ 0.67124544 0.26766731 0.99068872] [ 0.25006068 0.86560275 0.58411312]] Again, it is worth mentioning the reinitializing that gives us different values than before. Determinant, use the following: print(sess.run(tf.matrix_determinant(D))) -38.0 Inverse: print(sess.run(tf.matrix_inverse(D))) [[-0.5 -0.5 -0.5 ] [ 0.15789474 0.05263158 0.21052632] [ 0.39473684 0.13157895 0.02631579]] Note that the inverse method is based on the Cholesky decomposition if the matrix is symmetric positive definite or the LU decomposition otherwise. Decompositions: Cholesky decomposition, use the following: print(sess.run(tf.cholesky(identity_matrix))) [[ 1. 0. 1.] [ 0. 1. 0.] [ 0. 0. 1.]] Eigenvalues and Eigenvectors, use the following code: print(sess.run(tf.self_adjoint_eig(D)) [[-10.65907521 -0.22750691 2.88658212] [ 0.21749542 0.63250104 -0.74339638] [ 0.84526515 0.2587998 0.46749277] [ -0.4880805 0.73004459 0.47834331]] Note that the function self_adjoint_eig() outputs the eigen values in the first row and the subsequent vectors in the remaining vectors. In mathematics, this is called the eigen decomposition of a matrix. How it works… TensorFlow provides all the tools for us to get started with numerical computations and add such computations to our graphs. This notation might seem quite heavy for simple matrix operations. Remember that we are adding these operations to the graph and telling TensorFlow what tensors to run through those operations. 
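Before moving on, here is a small worked example that is not one of the book's recipes: the operations shown in this section — matmul(), transpose(), matrix_inverse(), and convert_to_tensor() — are already enough to solve an ordinary least-squares line fit through the normal equations, (X^T X)^(-1) X^T y. The data below is randomly generated purely for illustration, and the code follows the same TF 0.x-style session API used throughout this article:

import numpy as np
import tensorflow as tf

sess = tf.Session()

# Fit y = a*x + b to noisy data using the normal equations.
x_vals = np.linspace(0.0, 10.0, 25)
y_vals = 2.0 * x_vals + 1.0 + np.random.normal(0.0, 0.5, 25)

# Design matrix with a column of ones for the intercept term.
X = np.column_stack((x_vals, np.ones(25)))
y = y_vals.reshape(25, 1)

X_tsr = tf.convert_to_tensor(X, dtype=tf.float32)
y_tsr = tf.convert_to_tensor(y, dtype=tf.float32)

XtX = tf.matmul(tf.transpose(X_tsr), X_tsr)
Xty = tf.matmul(tf.transpose(X_tsr), y_tsr)
solution = tf.matmul(tf.matrix_inverse(XtX), Xty)

print(sess.run(solution))  # roughly [[2.0], [1.0]]: slope and intercept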
Declaring operations Getting ready Besides the standard arithmetic operations, TensorFlow provides us more operations that we should be aware of and how to use them before proceeding. Again, we can create a graph session by running the following code: import tensorflow as tf sess = tf.Session() How to do it… TensorFlow has the standard operations on tensors, add(), sub(), mul(), and div(). Note that all of these operations in this section will evaluate the inputs element-wise unless specified otherwise. TensorFlow provides some variations of div() and relevant functions. It is worth mentioning that div() returns the same type as the inputs. This means it really returns the floor of the division (akin to Python 2) if the inputs are integers. To return the Python 3 version, which casts integers into floats before dividing and always returns a float, TensorFlow provides the function truediv()shown as follows: print(sess.run(tf.div(3,4))) 0 print(sess.run(tf.truediv(3,4))) 0.75 If we have floats and want integer division, we can use the function floordiv(). Note that this will still return a float, but rounded down to the nearest integer. The function is shown as follows: print(sess.run(tf.floordiv(3.0,4.0))) 0.0 Another important function is mod(). This function returns the remainder after division.It is shown as follows: print(sess.run(tf.mod(22.0, 5.0))) 2.0 The cross product between two tensors is achieved by the cross() function. Remember that the cross product is only defined for two 3-dimensional vectors, so it only accepts two 3-dimensional tensors. The function is shown as follows: print(sess.run(tf.cross([1., 0., 0.], [0., 1., 0.]))) [ 0. 0. 1.0] Here is a compact list of the more common math functions. All of these functions operate element-wise: abs() Absolute value of one input tensor ceil() Ceiling function of one input tensor cos() Cosine function of one input tensor exp() Base e exponential of one input tensor floor() Floor function of one input tensor inv() Multiplicative inverse (1/x) of one input tensor log() Natural logarithm of one input tensor maximum() Element-wise max of two tensors minimum() Element-wise min of two tensors neg() Negative of one input tensor pow() The first tensor raised to the second tensor element-wise round() Rounds one input tensor rsqrt() One over the square root of one tensor sign() Returns -1, 0, or 1, depending on the sign of the tensor sin() Sine function of one input tensor sqrt() Square root of one input tensor square() Square of one input tensor Specialty mathematical functions: There are some special math functions that get used in machine learning that are worth mentioning and TensorFlow has built in functions for them. Again, these functions operate element-wise, unless specified otherwise: digamma() Psi function, the derivative of the lgamma() function erf() Gaussian error function, element-wise, of one tensor erfc() Complimentary error function of one tensor igamma() Lower regularized incomplete gamma function igammac() Upper regularized incomplete gamma function lbeta() Natural logarithm of the absolute value of the beta function lgamma() Natural logarithm of the absolute value of the gamma function squared_difference() Computes the square of the differences between two tensors How it works… It is important to know what functions are available to us to add to our computational graphs. Mostly we will be concerned with the preceding functions. 
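For instance, the element-wise functions in the preceding list compose naturally into functions that are not built in. The short sketch below is not from the book; it builds a logistic sigmoid, 1 / (1 + exp(-x)), out of div(), add(), exp(), and neg() alone, reusing the same TF 0.x-style session setup as the rest of this recipe:

import tensorflow as tf

sess = tf.Session()  # same session setup as in the recipe above

def sigmoid_from_primitives(x):
    # sigmoid(x) = 1 / (1 + exp(-x)), composed only from element-wise ops
    return tf.div(1.0, tf.add(1.0, tf.exp(tf.neg(x))))

print(sess.run(sigmoid_from_primitives(tf.constant(0.0))))  # 0.5
print(sess.run(sigmoid_from_primitives(tf.constant(2.0))))  # approximately 0.88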
We can also generate many different custom functions as compositions of the preceding, as follows: # Tangent function (tan(pi/4)=1) print(sess.run(tf.div(tf.sin(3.1416/4.), tf.cos(3.1416/4.)))) 1.0 There's more… If we wish to add other operations to our graphs that are not listed here, we must create our own from the preceding functions. Here is an example of an operation not listed above that we can add to our graph: # Define a custom polynomial function def custom_polynomial(value): # Return 3 * x^2 - x + 10 return(tf.sub(3 * tf.square(value), value) + 10) print(sess.run(custom_polynomial(11))) 362 Summary Thus in this article we have implemented some introductory recipes that will help us to learn the basics of TensorFlow. Resources for Article: Further resources on this subject: Data Clustering [article] The TensorFlow Toolbox [article] Implementing Artificial Neural Networks with TensorFlow [article]

article-image-web-development-react-and-bootstrap
Packt
04 Jan 2017
18 min read
Save for later

Web Development with React and Bootstrap

In this article by Harmeet Singh and Mehul Bhat, the authors of the book Learning Web Development with React and Bootstrap, we are going to see how we can build responsive web applications with the help of Bootstrap and ReactJS. (For more resources related to this topic, see here.) There are many different ways to build modern web applications with JavaScript and CSS, including a lot of different tool choices, and a lot of new theory to learn. This book introduces you to ReactJS and Bootstrap which you will likely come across as you learn about modern web app development. They both are used for building fast and scalable user interfaces. React is famously known as a view (V) in MVC when we talk about defining M and C we need to look somewhere else or we can use other frameworks like Redux and Flux to handle remote data. The best way to learn code is to write code, so we're going to jump right in. To show you just how easy it is to get up and running with Bootstrap and ReactJS, we're going to cover theory and will see how we can make super simple applications as well as integrate with other applications. ReactJS React (sometimes styled React.js or ReactJS) is an open-source JavaScript library which provides a view for data rendered as HTML. Components have been used typically to render React views which contain additional components specified as custom HTML tags. React gives you a trivial virtual DOM, powerful views without templates, unidirectional data flow and explicit mutation. It is very methodical in updating the HTML document when data changes; and a clean separation between components on a modern single-page application. As your app comes into existence and develops, it's advantageous to ensure that your components are used in the right manner and the React app consists of reusable components, which makes code reuse, testing, and separation of concerns easy. React is not only V in MVC, it has stateful components, it handles mapping from input to state changes, and it renders components. In this sense, it does everything that an MVC would do. Let's look at React's component life cycle and it's different levels: Bootstrap Bootstrap is an open source frontend framework maintained by Twitter for developing responsive websites and web applications. It includes HTML, CSS, and JavaScript code to build user interface components. It's a faster and an easier way to develop powerful mobile-first user interface. Bootstrap grid system allows us to create responsive 12 column grids, layout, and components. It includes predefined classes for easy layout options (fixed width and full width). Bootstrap have pre-styled dozen reusable components and custom jQuery plugins like button, alerts, dropdown, modal, tooltip tab, pagination, carousal, badges, icons and many more. Bootstrap package includes the compiled and minified version of CSS and JS for our app we just need CSS bootstrap.min.css and fonts folder. This style sheet will provide you the look and feel of all components, responsive layout structure for our application. In the previous version Bootstrap included icons as image but in version 3 they have replaced icons as fonts. We can also customize the Bootstrap CSS stylesheet as per the component featured in our application. React-Bootstrap The React-Bootstrap JavaScript framework is similar to Bootstrap rebuilt for React. It's a complete reimplementation of the Bootstrap frontend reusable components in React. 
React-Bootstrap has no dependency on any other framework, such as Bootstrap.js or jQuery. This means that if you are using React-Bootstrap, you don't need to include jQuery in your project as a dependency. Using React-Bootstrap, we can be sure that there won't be external JavaScript calls to render a component that might be incompatible with the React DOM render. However, you can still achieve the same functionality and the same look and feel as Twitter Bootstrap, but with much cleaner code.

Benefits of React-Bootstrap

Compared to Twitter Bootstrap, we can import only the required code/component.
It saves a bit of typing and avoids bugs by compressing the Bootstrap.
It reduces typing efforts and, more importantly, conflicts by compressing the Bootstrap.
We don't need to think about the different approaches taken by Bootstrap versus React.
It is easy to use.
It encapsulates in elements.
It uses JSX syntax.
It avoids React rendering of the virtual DOM.
It is easy to detect DOM changes and update the DOM without any conflict.
It doesn't have any dependency on other libraries, such as jQuery.

Bootstrap grid system

Bootstrap is based on a 12-column grid system which includes a powerful responsive structure and a mobile-first fluid grid system that allows us to scaffold our web app with very few elements. In Bootstrap, we have a predefined series of classes to compose rows and columns, so before we start we need to include a <div> tag with the container class to wrap our rows and columns. Otherwise, the framework won't respond as expected, because Bootstrap has written CSS which depends on it. The following snippet shows the HTML structure of the container class <div> tag:

<div class="container"></div>

This will center your web app on the page as well as control the rows and columns so they work as expected responsively. There are four class prefixes which help to define the behaviour of the columns. All the classes are related to different device screen sizes and react in familiar ways. The following information, from http://getbootstrap.com, defines the variations between all four classes:

Extra small devices, Phones (<768px): the grid is horizontal at all times; container width is none (auto); class prefix is .col-xs-; column width is auto.
Small devices, Tablets (≥768px): the grid is collapsed to start and horizontal above the breakpoint; container width is 750px; class prefix is .col-sm-; column width is ~62px.
Medium devices, Desktops (≥992px): the grid is collapsed to start and horizontal above the breakpoint; container width is 970px; class prefix is .col-md-; column width is ~81px.
Large devices, Desktops (≥1200px): the grid is collapsed to start and horizontal above the breakpoint; container width is 1170px; class prefix is .col-lg-; column width is ~97px.
For all four classes: there are 12 columns, the gutter width is 30px (15px on each side of a column), and the grid is nestable and supports offsets and column ordering.

React components

React is basically based on a modular build, with encapsulated components that manage their own state, so it will efficiently update and render your components when data changes. In React, component logic is written in JavaScript instead of templates, so you can easily pass rich data through your app and manage the state outside the DOM. Using the render() method, we render a component in React that takes input data and returns what you want to display. It can either take HTML tags (strings) or React components (classes).
Let's take a quick look at examples of both: var myReactElement = <div className="hello" />; ReactDOM.render(myReactElement, document.getElementById('example')); In this example, we are passing HTML as a string into the render method which we have used before creating the <Navbar>: var ReactComponent = React.createClass({/*...*/}); var myReactElement = <ReactComponent someProperty={true} />; ReactDOM.render(myReactElement, document.getElementById('example')); In the preceding example, we are rendering the component, just to create a local variable that starts with an uppercase convention. Using the upper versus lowercase convention in React's JSX will distinguish between local component classes and HTML tags. So, we can create our React elements or components in two ways: either we can use Plain JavaScript with React.createElement or React's JSX. React.createElement() Using JSX in React is completely optional for creating the react app. As we know, we can create elements with React.createElement which take three arguments: a tag name or component, a properties object, and a variable number of child elements, which is optional: var profile = React.createElement('li',{className:'list-group-item'}, 'Profile'); var profileImageLink = React.createElement('a',{className:'center-block text-center',href:'#'},'Image'); var profileImageWrapper = React.createElement('li',{className:'list-group-item'}, profileImageLink); var sidebar = React.createElement('ul', { className: 'list-group' }, profile, profileImageWrapper); ReactDOM.render(sidebar, document.getElementById('sidebar')); In the preceding example, we have used React.createElement to generate a ulli structure. React already has built-in factories for common DOM HTML tags. JSX in React JSX is extension of JavaScript syntax and if you observe the syntax or structure of JSX, you will find it similar to XML coding. JSX is doing preprocessor footstep which adds XML syntax to JavaScript. Though, you can certainly use React without JSX but JSX makes react a lot more neat and elegant. Similar like XML, JSX tags are having tag name, attributes, and children and in that if an attribute value is enclosed in quotes that value becomes a string. The way XML is working with balanced opening and closing tags, JSX works similarly and it also helps to understand and read huge amount of structures easily than JavaScript functions and objects. Advantages of using JSX in React Take a look at the following points: JSX is very simple to understand and think about than JavaScript functions Mark-up of JSX would be more familiar to non-programmers Using JSX, your markup becomes more semantic, organized, and significant JSX – acquaintance or understanding In the development region, user interface developer, user experience designer, and quality assurance people are not much familiar with any programming language but JSX makes their life easy by providing easy syntax structure which is visually similar to HTML structure. JSX shows a path to indicate and see through your mind's eye, the structure in a solid and concise way. JSX – semantics/structured syntax Till now, we have seen how JSX syntax is easy to understand and visualize, behind this there is big reason of having semantic syntax structure. JSX with pleasure converts your JavaScript code into more standard way, which gives clarity to set your semantic syntax and significance component. 
With the help of JSX syntax you can declare structure of your custom component with information the way you do in HTML syntax and that will do all magic to transform your syntax to JavaScript functions. ReactDOM namespace helps us to use all HTML elements with the help of ReactJS, isn't this an amazing feature! It is. Moreover, the good part is, you can write your own named components with help of ReactDOM namespace. Please check out below HTML simple mark-up and how JSX component helps you to have semantic markup. <div className="divider"> <h2>Questions</h2><hr /> </div> As you can see in the preceding example, we have wrapped <h2>Questions</h2><hr /> with <div> tag which has classNamedivider so, in React composite component, you can create similar structure and it is as easy as you do your HTML coding with semantic syntax: <Divider> Questions </Divider> Composite component As we know that, you can create your custom component with JSX markup and JSX syntax will transform your component to JavaScript syntax component. Namespace components It's another feature request which is available in React JSX. We know that JSX is just an extension of JavaScript syntax and it also provides ability to use namespace so, React is also using JSX namespace pattern rather than XML namespacing. By using, standard JavaScript syntax approach which is object property access, this feature is useful for assigning component directly as <Namespace.Component/> instead of assigning variables to access components which are stored in an object. JSXTransformer JSXTransformer is another tool to compile JSX in the browser. While reading a code, browser will read attribute type="text/jsx" in your mentioned <script> tag and it will only transform those scripts which has mentioned type attribute and then it will execute your script or written function in that file. The code will be executed in same manner the way React-tools executes on the server. JSXTransformer is deprecating in current version of React, but you can find the current version on any provided CDNs and Bower. As per my opinion, it would be great to use Babel REPL tool to compile JavaScript. It has already adopted by React and broader JavaScript community. Attribute expressions If you can see above example of show/Hide we have used attribute expression for show the message panel and hide it. In react, there is a bit change in writing an attribute value, in JavaScript expression we write attribute in quotes ("") but we have to provide pair of curly braces ({}). var showhideToggle = this.state.collapse ? (<MessagePanel>):null/>; Boolean attributes As in Boolean attribute, there are two values, either it can be true or false and if we neglect its value in JSX while declaring attribute, it by default takes value as true. If we want to have attribute value false then we have to use an attribute expression. This scenario can come regularly when we use HTML form elements, for example disabled attribute, required attribute, checked attribute, and readOnly attribute. In Bootstrap example: aria-haspopup="true"aria-expanded="true" // example of writing disabled attribute in JSX <input type="button" disabled />; <input type="button" disabled={true} />; JavaScript expressions As seen in the preceding example, you can embed JavaScript expressions in JSX using syntax that will be accustomed to any handlebars user, for example style = { displayStyle } allocates the value of the JavaScript variable displayStyle to the element's style attribute. 
Styles Same as the expression, you can set styles by assigning an ordinary JavaScript object to the style attribute. How interesting, if someone tells you, not to write CSS syntax but you can write JavaScript code to achieve the same, no extra efforts. Isn't it superb stuff! Yes, it is. Events There is a set of event handlers that you can bind in a way that should look much acquainted to anybody who knows HTML. Generally, as per our practice we set properties on to the object which is anti-pattern in JSX attribute standard. var component = <Component />; component.props.foo = x; // bad component.props.bar = y; // also bad As shown in the preceding example, you can see the anti-pattern and it's not the best practice. If you don't know about properties of JSX attributes then propTypes won't be set and it will throw errors which would be difficult for you to trace. Props is very sensitive part of attribute so, you should not change it, as each props is having predefined method and you should use it as it is meant for, like we use other JavaScript methods or HTML tags. This doesn't mean that it is impossible to change Props, it is possible but it is against standard defined by React. Even in React, it will throw error. Spread attributes Let's check out JSX feature—spread attributes: var props = {}; props.foo = x; props.bar = y; var component = <Component {...props} />; As you see in above example, your properties which you have declared have become part of your component's props as well. Reusability of attributes is also possible here and you can also map it with other attributes. But you have to be very careful in ordering your attributes while you declare it, as it will override the previous declared attribute with lastly declared one. Props and state React components translate your raw data into Rich HTML, the props and state together build with that raw data to keep your UI consistent. Ok, let's identify what exactly it is: Props and state are both plain JS objects. It triggers with a render update. React manage the component state by calling setState (data, callback). This method will merge data into this.state, and re-renders the component to keep our UI up to date. For example, the state of the drop-down menu (visible or hidden). React component props - short for "properties" that don't change over time. For example, drop-down menu items. Sometimes components only take some data with this .props method and render it, which makes your component stateless. Using props and state together helps you to make an interactive app. Component life cycle methods In React each component has its own specific callback function. These callback's functions play an important role when we are thinking about DOM manipulation or integrating other plugins in React (jQuery). Let's look at some commonly used methods in the lifecycle of a component: getInitialState(): This method will help you to get the initial state of a component. componentDidMount: This method is called automatically when a component is rendered or mounted for the first time in DOM. Integrate JavaScript frameworks, we'll use this method to perform operations like setTimeout or setInterval, or send AJAX requests. componentWillReceiveProps: This method will be used to receive a new props. componentWillUnmount: This method is invoked before component is unmounted from DOM. Cleanup the DOM memory elements which are mounted in componentDidMount method. componentWillUpdate: This method invoked before updating a new props and state. 
Component life cycle methods
In React, each component has its own specific callback functions. These callbacks play an important role when we think about DOM manipulation or integrating other plugins (such as jQuery) into React. Let's look at some commonly used methods in the lifecycle of a component:

- getInitialState(): This method defines the initial state of a component.
- componentDidMount: This method is called automatically when a component is rendered, or mounted, in the DOM for the first time. When integrating other JavaScript frameworks, we use this method to perform operations like setTimeout or setInterval, or to send AJAX requests.
- componentWillReceiveProps: This method is called when a component receives new props.
- componentWillUnmount: This method is invoked before a component is unmounted from the DOM. Clean up here whatever was set up in componentDidMount.
- componentWillUpdate: This method is invoked before the component is updated with new props and state.
- componentDidUpdate: This method is invoked immediately after the component's update has been flushed to the DOM.

What is Redux?
As we know, in single page applications (SPAs) we have to deal with state and time, and it is difficult to keep a grip on state as it changes over time. This is where Redux helps: in a JavaScript application, Redux manages two kinds of state, data state and UI state, and it has become a standard option for SPAs. Bear in mind that Redux can be used with AngularJS, jQuery, or React. So what does Redux mean? In short, Redux is a helping hand for managing state while developing JavaScript applications. As we have seen in our previous examples, data flows in one direction only, from the parent level to the child level; this is known as "unidirectional data flow". React has the same flow direction from data to components, which makes direct communication between two React components difficult, and a shared Redux store fills that gap.

Redux's architecture benefits
Compared to other frameworks, it has several advantages:

- It helps you avoid unintended side effects.
- Binding is not needed, because components cannot interact directly.
- State is managed globally, so there is less possibility of mismanagement.
- Side effects that would otherwise be difficult to manage can be handled through middleware.

React Top Level API
When we talk about the React API, it is the starting step into the React library. Different ways of loading React expose it differently: using a React script tag makes the top-level APIs available on the React global, using ES6 with npm allows us to write import React from 'react', and using ES5 with npm allows us to write var React = require('react'). So there are multiple ways to initialize React with different features.

Mount/Unmount component
It is always recommended to have a custom wrapper API around mounting. Suppose we have a single root, or several roots that will be deleted at some point; with a wrapper you will not lose track of them. Facebook has a similar setup that automatically calls unmountComponentAtNode. I also suggest not calling ReactDOM.render() all over the codebase; the ideal way is to write it once, or use it through a library, so that the application manages mounting and unmounting in one place. A custom wrapper also lets you keep configuration such as internationalization, routers, and user data in one place, instead of painfully setting it up in different places every time.

React integration with other APIs
React integration is nothing but converting a web component to a React component by using JSX, Redux, and other React methods; a sketch of wrapping a jQuery plugin this way follows.
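As an illustration of both the lifecycle methods and this integration pattern, here is a minimal sketch that wraps a jQuery tooltip in a React component. It assumes Bootstrap 3's jQuery tooltip plugin is loaded on the page; the Tooltip component name and the trigger ref are illustrative, not part of any library:

// Hypothetical wrapper: a jQuery (Bootstrap 3) tooltip managed by React lifecycle methods
var Tooltip = React.createClass({
  componentDidMount: function () {
    // The DOM node exists now, so the jQuery plugin can attach to it
    $(this.refs.trigger).tooltip({ title: this.props.text });
  },
  componentWillUnmount: function () {
    // Clean up what componentDidMount created, before the node is removed
    $(this.refs.trigger).tooltip('destroy');
  },
  render: function () {
    return (
      <span ref="trigger" className="tooltip-trigger">
        {this.props.children}
      </span>
    );
  }
});

Usage would look like <Tooltip text="Deletes the item"><button>Delete</button></Tooltip>. The plugin touches the DOM only between mount and unmount, so React and jQuery never fight over the same nodes.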
I would like to share some best practices to follow for a high-quality result.

Things to remember while creating an application with React
Take a look at the following points:

- Before you start working with React, always remember that it is just a view library, not an MVC framework.
- Keep components short. Small components are easier to understand, unit test, and maintain in the long run, and they keep your classes and modules manageable.
- React 0.14 introduced functional components, components written as plain functions of props, which are recommended and help you split up your components.
- To avoid a painful journey in a React-based app, don't use too much state.
- As I said earlier, React is only a view library, so for managing the data behind the rendering I recommend Redux rather than other Flux implementations.
- If you want more type safety, always use propTypes, which also help catch bugs early and act as documentation.
- I recommend the shallow rendering method for testing React components; it lets you render a single component without touching its child components.
- When dealing with large React applications, always use Webpack, npm, ES6, JSX, and Babel to build your application.
- If you want to dive deeper into a React application and its elements, you can use the Redux dev tools.

Summary
To begin with, we saw just how easy it is to get ReactJS and Bootstrap installed by including their JavaScript files and a style sheet. With Bootstrap, we worked towards a responsive grid system for different mobile devices and applied the fundamental styles to HTML elements with the inclusion of a few classes and divs. We also saw the framework's mobile-first responsive design in action without cluttering up our markup with unnecessary classes or elements.

Resources for Article:
Further resources on this subject:
- Getting Started with React and Bootstrap [article]
- Getting Started with ASP.NET Core and Bootstrap 4 [article]
- Frontend development with Bootstrap 4 [article]