
How-To Tutorials


Delphi Cookbook

Packt
07 Jul 2016
6 min read
In this article by Daniele Teti, author of the book Delphi Cookbook - Second Edition, we will look at multithreading. Multithreading can be your biggest problem if you do not handle it with care. One of the fathers of the Delphi compiler used to say:

"New programmers are drawn to multithreading like moths to flame, with similar results." – Danny Thorpe

In this chapter, we will discuss some of the main techniques for handling single or multiple background threads. We'll talk about shared resource synchronization and about thread-safe queues and events. The last three recipes cover the Parallel Programming Library introduced in Delphi XE7, and I hope that you will love it as much as I do. Multithreaded programming is a huge topic, so after reading this chapter you will not be a master of it, but you will be able to approach multithreaded programming with confidence and will have the basics needed to move on to more specific material when (and if) you require it.

Talking with the main thread using a thread-safe queue

Using a background thread and working with its private data is not difficult, but safely bringing information retrieved or elaborated by the thread back to the main thread to show it to the user (as you know, only the main thread can handle the GUI, in VCL as well as in FireMonkey) can be a daunting task. An even more complex task is establishing generic communication between two or more background threads. In this recipe, you'll see how a background thread can talk to the main thread in a safe manner using the TThreadedQueue<T> class. The same concepts apply to communication between two or more background threads.

Getting ready

Let's talk about a scenario. You have to show data generated from some sort of device or subsystem: a serial port, a USB device, a query polling the database, or a TCP socket. You cannot simply wait for data using a TTimer because this would freeze your GUI during the wait, and the wait can be long. You have tried it, but your interface became sluggish… you need another solution! The Delphi RTL contains a very useful class called TThreadedQueue<T> that is, as the name suggests, a parametric queue (a FIFO data structure) that can be safely used from different threads. How to use it? In programming there is rarely a single solution valid for all situations, but the following approach is very popular, and it is the one used in the recipe code (feel free to change it if necessary):

Create the queue within the main form.
Create a thread and inject the form's queue into it.
In the thread's Execute method, append all generated data to the queue.
In the main form, use a timer or some other mechanism to periodically read from the queue and display the data on the form.

How to do it…

Open the recipe project called ThreadingQueueSample.dproj. This project contains the main form with all the GUI-related code and another unit with the thread code. The FormCreate event creates the shared queue with the following parameters, which influence the behavior of the queue:

QueueDepth = 100: This is the maximum queue size. If the queue reaches this limit, all push operations are blocked for at most PushTimeout, after which the Push call fails with a timeout.
PushTimeout = 1000: This is the timeout in milliseconds that affects the thread, which in this recipe is the producer of a producer/consumer pattern.
PopTimeout = 1: This is the timeout in milliseconds that affects the timer when the queue is empty. This timeout must be very short because the pop call is blocking in nature, and you are on the main thread, which should never be blocked for long.

The button labeled Start Thread creates a TReaderThread instance, passing the already created queue to its constructor (this is a particular type of dependency injection called constructor injection). The thread declaration is really simple:

    type
      TReaderThread = class(TThread)
      private
        FQueue: TThreadedQueue<Byte>;
      protected
        procedure Execute; override;
      public
        constructor Create(AQueue: TThreadedQueue<Byte>);
      end;

The Execute method simply appends randomly generated data to the queue. Note that the Terminated property must be checked often so the application can terminate the thread and wait a reasonable time for its actual termination. In the following example, the termination is checked at least every 700 ms or so (as long as the queue is not full):

    procedure TReaderThread.Execute;
    begin
      while not Terminated do
      begin
        TThread.Sleep(200 + Trunc(Random(500))); // e.g. reading from an actual device
        FQueue.PushItem(Random(256));
      end;
    end;

So far, you've filled the queue. Now you have to read from the queue and do something useful with the data. This is the job of a timer. The following is the code of the timer event on the main form:

    procedure TMainForm.Timer1Timer(Sender: TObject);
    var
      Value: Byte;
    begin
      while FQueue.PopItem(Value) = TWaitResult.wrSignaled do
      begin
        ListBox1.Items.Add(Format('[%3.3d]', [Value]));
      end;
      ListBox1.ItemIndex := ListBox1.Count - 1;
    end;

That's it! Run the application and watch the data coming from the thread appear on the main form. The following is a screenshot:

The main form showing data generated by the background thread

There's more…

TThreadedQueue<T> is very powerful and can also be used to communicate between two or more background threads in a producer/consumer scheme. You can use multiple producers, multiple consumers, or both. The following screenshot shows a popular scheme used when data is generated faster than it is handled; in this case, you can usually gain speed on the processing side by using multiple consumers.

Single producer, multiple consumers

Summary

In this article we had a look at how to talk to the main thread using a thread-safe queue.

Further resources on this subject:
Exploring the Usages of Delphi [article]
Adding Graphics to the Map [article]
Application Performance [article]


Animating Elements

Packt
05 Jul 2016
17 min read
In this article by Alex Libby, author of the book Mastering PostCSS for Web Design, you will study animating elements. A question: if you had the choice of three websites (one static, one with badly done animation, and one enhanced with subtle use of animation), which would you choose? My hope is that the answer is number three: animation can really make a website stand out if done well, or fail miserably if done badly! So far, our content has been relatively static, save for the use of media queries. It's time, though, to take a look at how PostCSS can help make animating content a little easier. We'll begin with a quick recap on the basics of animation before exploring the route away from pure jQuery animation, through to SASS, and finally across to PostCSS. We will cover a number of topics throughout this article, including:

A recap on the use of jQuery to animate content
Switching to CSS-based animation
Exploring the use of prebuilt libraries, such as Animate.css

Let's make a start!

Revisiting basic animations

Animation is quickly becoming king in web development; more and more websites are using animations to help bring life and keep content fresh. Done correctly, they add an extra layer of experience for the end user; done badly, the website will soon lose more custom than water through a sieve! Throughout the course of the article, we'll look at making the change from writing standard animation through to using processors such as SASS and, finally, switching to PostCSS. I can't promise that we'll be creating complex JavaScript-based demos, such as the Caaaat animation (http://roxik.com/cat/ - try resizing the window!), but we will see that using PostCSS is really easy when creating animations for the browser. To kick off our journey, we'll start with a quick look at traditional animation. How many times have you had to use .animate() in jQuery over the years? Thankfully, we now have the power of CSS3 to help with simple animations, but there was a time when we had to animate content using jQuery. As a quick reminder, try running animate.html from the T34 - Basic animation using jQuery animate() folder. It's not going to set the world on fire, but it is a nice reminder of times gone by, when we didn't know any better. If we take a look at a profile of this animation from within a DOM inspector in a browser such as Firefox, it will look something like this screenshot:

While the numbers aren't critical, the key points here are the two dotted green lines and the fact that the results show a high degree of inconsistent activity. This is a good indicator that activity is erratic, with a low frame count, resulting in animations that are jumpy and less than 100% smooth. The great thing, though, is that there are options available to help provide smoother animations; we'll take a brief look at some of them before making the change to PostCSS. For now, let's make that first step away from jQuery, beginning with a look at the options available for reducing dependency on .animate() or jQuery.

Moving away from jQuery

Animating content can be a contentious subject, particularly if jQuery or JavaScript is used. If we were to take a straw poll of 100 people and ask which they used, it is very likely that we would get mixed answers!
A key answer of "it depends" is likely to feature at or near the top of the list of responses; many will argue that animating content should be done using CSS, while others will affirm that JavaScript-based solutions still have value. Leaving aside this, shall we say, lively debate: if we're looking to move away from using jQuery, and in particular .animate(), then we have some options available to us:

Upgrade your version of jQuery! Yes, this might sound at odds with the theme of this article, but the most recent versions of jQuery introduced the use of requestAnimationFrame, which improved performance, particularly on mobile devices.

A quick and dirty route is to use the jQuery Animate Enhanced plugin, available from http://playground.benbarnett.net/jquery-animate-enhanced/ - although a little old, it still serves a useful purpose. It will (where possible) convert .animate() calls into CSS3 equivalents; it isn't able to convert all of them, so any that are not converted will remain as .animate() calls.

Using the same principle, we can even take advantage of the JavaScript animation library GSAP. The Greensock team have made a plugin available (from https://greensock.com/jquery-gsap-plugin) that replaces jQuery.animate() with their own GSAP library. The latter is reputed to be 20 times faster than standard jQuery!

With a little effort, we can rework our existing code. In place of using .animate(), we can add the equivalent CSS3 style(s) into our stylesheet and replace existing calls to .animate() with either .removeClass() or .addClass(), as appropriate.

We can switch to using libraries such as Transit (http://ricostacruz.com/jquery.transit/). It still requires the use of jQuery, but gives better performance than the standard .animate() command.

Another alternative is Velocity JS by Julian Shapiro, available from http://julian.com/research/velocity/; this has the benefit of not having jQuery as a dependency. There is even talk of incorporating all or part of the library into jQuery as a replacement for .animate(). For more details, check out the issue log at https://github.com/jquery/jquery/issues/2053.

Many people automatically assume that CSS animations are faster than JavaScript (or even jQuery). After all, we don't need to call an external library (jQuery); we can use styles that are already baked into the browser, right? The truth is not as straightforward as that. In short, the right choice will depend on your requirements and the limits of each method. For example, CSS animations are great for simple state changes, but if sequencing is required, then you may have to resort to the JavaScript route. The key, however, is less in the method used and more in how many frames per second are displayed on the screen. Most people cannot distinguish above 60 fps, which produces a very smooth experience. Anything less than around 25 fps will produce blur and occasionally appear jerky – it's up to us to select the best method available that produces the most effective solution. To see the difference in frame rate, take a look at https://frames-per-second.appspot.com/ - the animations on this page can be controlled; it's easy to see why 60 fps produces a superior experience! So, which route should we take? Well, over the next few pages, we'll take a brief look at each of these options. In a nutshell, they are all methods that either improve how animations run or allow us to remove the dependency on .animate(), which we know is not very efficient!
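As a brief aside, and not part of the book's demo files: the following minimal TypeScript sketch shows the kind of requestAnimationFrame-driven loop that the first option above relies on. The browser schedules each step to coincide with a repaint, which is why this approach tends to hold a steadier frame rate than timer-based .animate() calls; the element id, distance, and duration are illustrative assumptions.

    // Move an element a fixed distance to the right over a given duration,
    // letting the browser schedule each frame via requestAnimationFrame.
    function slideRight(el: HTMLElement, distance: number, duration: number): void {
      const start = performance.now();

      function step(now: number): void {
        // Progress runs from 0 to 1 across the requested duration.
        const progress = Math.min((now - start) / duration, 1);
        el.style.transform = `translateX(${distance * progress}px)`;
        if (progress < 1) {
          requestAnimationFrame(step);
        }
      }

      requestAnimationFrame(step);
    }

    // Hypothetical usage, assuming an element with the id "square-small":
    const square = document.getElementById('square-small');
    if (square) {
      slideRight(square, 280, 600);
    }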
True, some of these alternatives still use jQuery, but the key here is that your existing code could be using any or a mix of these methods. All of the demos over the next few pages were run at the same time as a YouTube video; this was to help simulate a little load and get a more realistic comparison. Running animations under load means less graphics processing power is available, which results in a lower FPS count. Let's kick off with a look at our first option: the Transit JS library.

Animating content with Transit.js

In an ideal world, any project we build will have as few dependencies as possible; this applies equally to JavaScript or jQuery-based content as to CSS styling. To help with reducing dependencies, we can use libraries such as Transit or Velocity to construct our animations. The key here is to make use of the animations that these libraries create as a basis for applying styles that we can then manipulate using .addClass() or .removeClass(). To see what I mean, let's explore this concept with a simple demo:

We'll start by opening up a copy of animate.html. To make it easier, we need to change the reference to square-small from a class to a selector:

    <div id="square-small"></div>

Next, go ahead and add in a reference to the Transit library immediately before the closing </head> tag:

    <script src="js/jquery.transit.min.js"></script>

The Transit library uses a slightly different syntax, so go ahead and update the call to .animate() as indicated:

    smallsquare.transition({x: 280}, 'slow');

Save the file and then try previewing the results in a browser. If all is well, we should see no material change in the demo, but the animation will be significantly smoother: the frame count is higher, at 44.28 fps, with fewer dips.

Let's compare this with the profile screenshot taken for Revisiting basic animations earlier in this article. Notice anything? Profiling browser activity can be complex, but there are only two things we need to concern ourselves with here: the fps value and the state of the green line. The fps value, or frames per second, is over three times higher, and for a large part the green line is more consistent, with fewer, shorter-lived dips. This means that we have smoother, more consistent performance; at approximately 44 fps, the average frame rate is significantly better than with standard jQuery. But we're still using jQuery! There is a difference, though: libraries such as Transit or Velocity convert animations where possible to CSS3 equivalents. If we take a peek under the covers, we can see this in the flesh. We can use this to our advantage by removing the need for .animate() and simply using .addClass() or .removeClass(). If you would like to compare our simple animation when using Transit or Velocity, there are examples available in the code download, as demos T35A and T35B, respectively. To take it to the next step, we can use the Velocity library to create a version of our demo using plain JavaScript. We'll see how as part of the next demo. Beware though: this isn't an excuse to still use JavaScript; as we'll see, there is little difference in the frame count!

Animating with plain JavaScript

Many developers are used to working with jQuery. After all, it makes it a cinch to reference just about any element on a page! Sometimes, though, it is preferable to work in native JavaScript; this could be for speed.
If we only need to support newer browsers (such as IE11 or Edge, and recent versions of Chrome or Firefox), then adding jQuery as a dependency isn't always necessary. The beauty of libraries such as Transit (or Velocity) is that we don't always have to use jQuery to achieve the same effect; as we'll see shortly, removing jQuery can help improve matters! Let's put this to the test and adapt our earlier demo to work without jQuery:

We'll start by extracting a copy of the T35B folder from the code download bundle. Save this to the root of our project area.

Next, we need to edit a copy of animate.html within this folder. Go ahead and remove the link to jQuery and then remove the link to velocity.ui.min.js; we should be left with this in the <head> of our file:

    <link rel="stylesheet" type="text/css" href="css/style.css">
    <script src="js/velocity.min.js"></script>
    </head>

A little further down, alter the <script> block as shown:

    <script>
      var smallsquare = document.getElementById('square-small');
      var animbutton = document.getElementById('animation-button');
      animbutton.addEventListener("click", function() {
        Velocity(document.getElementById('square-small'), {left: 280}, {duration: 'slow'});
      });
    </script>

Save the file and then preview the results in a browser. If we monitor the performance of our demo using a DOM Inspector, we can see a similar frame rate being recorded in our demo.

With jQuery no longer in the picture as a dependency, we can clearly see that the frame rate is improved. The downside, though, is that support is reduced for some browsers, such as IE8 or 9. This may not be an issue for your website; both Microsoft and the jQuery Core Team have announced changes to drop support for IE8-10 and IE8 respectively, which will help encourage users to upgrade to newer browsers. It has to be said, though, that while using CSS3 is preferable for speed and for keeping our pages as lightweight as possible, Velocity does provide a raft of extra opportunities that may be of use to your projects. The key is to consider carefully whether you really need them or whether CSS3 will suffice and allow you to use PostCSS.

Switching classes using jQuery

At this point, one question comes to mind: what about using class-based animation? By this, I mean dropping any dependency on external animation libraries and switching to plain jQuery with either the .addClass() or .removeClass() method. In theory, it sounds like a great idea: we can remove the need for .animate() and simply swap classes as needed, right? Well, it's an improvement, but it is still slower than using a combination of pure JavaScript and switching classes. It all boils down to a trade-off between the ease of using jQuery to reference elements and the speed of pure JavaScript:

1. We'll start by opening a copy of animate.html from the previous exercise. First, go ahead and replace the call to VelocityJS with this line within the <head> of our document:

    <script src="js/jquery.min.js"></script>

2. Next, remove the code between the <script> tags and replace it with this:

    var smallsquare = $('.rectangle').find('.square-small');
    $('#animation-button').on("click", function() {
      smallsquare.addClass("move");
      smallsquare.one('transitionend', function(e) {
        $('.rectangle').find('.square-small').removeClass("move");
      });
    });

3. Save the file. (For comparison, a vanilla JavaScript version of the same class switch is sketched below.)
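Purely for comparison, and not part of the book's exercise files: the same class switch can be written without jQuery in a few lines of plain TypeScript/JavaScript, which is the "pure JavaScript plus class switching" combination mentioned above. It assumes the same markup and a .move class that defines a CSS transition, as described earlier.

    // Vanilla equivalent of the jQuery snippet in step 2: add the "move"
    // class on click, then remove it again once the CSS transition ends.
    const smallSquare = document.querySelector<HTMLElement>('.rectangle .square-small');
    const button = document.getElementById('animation-button');

    if (smallSquare && button) {
      button.addEventListener('click', () => {
        smallSquare.classList.add('move');
        smallSquare.addEventListener(
          'transitionend',
          () => smallSquare.classList.remove('move'),
          { once: true }  // mirrors jQuery's .one()
        );
      });
    }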
If we preview the results in a browser, we should see no apparent change in how the demo appears, but the transition is marginally more performant than using a combination of jQuery and Transit. The real change in our code, though, will be apparent if we take a peek under the covers using a DOM Inspector: instead of using .animate(), we are using CSS3 animation styles to move our square-small <div>. Most browsers will accept the use of transition and transform, but it is worth running our code through a process such as Autoprefixer to ensure we apply the right vendor prefixes to our code. The beauty of using CSS3 here is that while it might not suit large, complex animations, we can at least begin to incorporate external stylesheets, such as Animate.css, or even use a preprocessor such as SASS to create our styles. It's an easy change to make, so without further ado, and as the next step on our journey to using PostCSS, let's take a look at this in more detail.

If you would like to create custom keyframe-based animations, then take a look at http://cssanimate.com/, which provides a GUI-based interface for designing them and will pipe out the appropriate code when requested!

Making use of prebuilt libraries

Up to this point, all of our animations have had one thing in common: they are individually created and stored within the same stylesheet as the other styles for each project. This works perfectly well, but we can do better. After all, it's possible that we may well create animations that others have already built! Over time, we may also build up a series of animations that can form the basis of a library to be reused in future projects. A number of developers have already done this. One example of note is the Animate.css library created by Dan Eden. Let's run through a quick demo of how it works as a precursor to working with it in PostCSS. The images used in this demo are referenced directly from the LoremPixem website as placeholder images. Let's make a start:

We'll start by extracting a copy of the T37 folder from the code download bundle. Save the folder to our project area.

Next, open a new file and add the following code:

    body { background: #eee; }

    #gallery {
      width: 745px;
      height: 500px;
      margin-left: auto;
      margin-right: auto;
    }

    #gallery img {
      border: 0.25rem solid #fff;
      margin: 20px;
      box-shadow: 0.25rem 0.25rem 0.3125rem #999;
      float: left;
    }

    .animated {
      animation-duration: 1s;
      animation-fill-mode: both;
    }

    .animated:hover {
      animation-duration: 1s;
      animation-fill-mode: both;
    }

Save this as style.css in the css subfolder within the T37 folder. Go ahead and preview the results in a browser. If all is well, then we should see something akin to the screenshot below. If we run the demo, we should see images run through different types of animation; there is nothing special or complicated here. The question, though, is: how does it all fit in with PostCSS? Well, there's a good reason for this. Some developers will have used Animate.css in the past and will be familiar with how it works; we will also be using the postcss-animation plugin later in Updating code to use PostCSS, which is based on the Animate.css stylesheet library. For those of you who are not familiar with the stylesheet library, let's quickly run through how it works within the context of our demo.

Dissecting the code of our demo

The effects used in our demo are quite striking.
Indeed, one might be forgiven for thinking that they required a lot of complex JavaScript! This, however, could not be further from the truth. The Animate.css file contains a number of animations based on @keyframes, similar to this:

    @keyframes bounce {
      0%, 20%, 50%, 80%, 100% {transform: translateY(0);}
      40% {transform: translateY(-1.875rem);}
      60% {transform: translateY(-0.9375rem);}
    }

We pull in the animations using the usual call to the library within the <head> section of our code. We can then call any animation by name from within our code:

    <div id="gallery">
      <a href="#"><img class="animated bounce" src="http://lorempixum.com/200/200/city/1" alt="" /></a>
      ...
    </div>
    </body>

You will notice the addition of the .animated class in our code. This controls the duration and timing of the animation, which are set according to which animation name has been added to the code. The downside of not using JavaScript (or jQuery, for that matter) is that the animation will only run once when the demo is loaded; we can set it to run continuously by adding the .infinite class to the element being animated (this is part of the Animate library). We can fake a click option in CSS, but it is an experimental hack that is not supported across all browsers. To affect any form of control, we really need to use JavaScript (or even jQuery)! If you are interested in details of the hack, then take a look at this response on Stack Overflow at http://stackoverflow.com/questions/13630229/can-i-have-an-onclick-effect-in-css/32721572#32721572. Okay! Onward we go. We've covered the basic use of prebuilt libraries such as Animate. It's time to step up a gear and make the transition to PostCSS.

Summary

In this article, we recapped the use of jQuery to animate content, looked at switching to CSS-based animation, and, finally, saw briefly how to make use of prebuilt libraries.

Further resources on this subject:
Responsive Web Design with HTML5 and CSS3 - Second Edition [article]
Professional CSS3 [article]
Instant LESS CSS Preprocessor How-to [article]


Getting Started with VR Programming

Jake Rheude
04 Jul 2016
8 min read
This guide will go through some simple programming for VR apps using the Google VR SDK (software development kit) and the Unity3D game engine. It assumes that you already have a mobile device capable of running Google VR apps with a Google Cardboard, as well as a computer able to run Unity3D.

Getting Started

First and foremost, download the latest version of Unity3D from their website. Out of the four options, select "Personal" since it costs nothing to the user. Then download and run the installer. The installation process is straightforward. However, you must make sure that you select the "Android Build Support" component if you are planning on using an Android device, or "iOS Build Support" for an iOS device. If you are unsure at this point, just select both, as neither of them requires a lot of space.

Now that you have Unity3D installed, the next step is to set it up for the Google VR SDK, which can be found here. After agreeing to the terms and conditions, you will be given a link to download the repository directly. After downloading and extracting the ZIP file, you will notice that it contains a Unity Package file. Double-click on the file, and Unity will automatically load up. You will then see a window similar to the pop up below on your screen. Click the "NEW" button on the top right corner to begin your first Google VR project. Give it any project name other than the default "New Unity Project" name. For this guide, I have chosen "VR Programming Tutorial" as the project name.

As soon as your new project loads up, so will the Google VR SDK Unity Package. The relevant files should all be selected by default, so simply click the "Import" button on the bottom right corner to include the SDK in your project.

In your project's "Assets" folder, there should be a folder named "GoogleVR". This is where all the necessary components are located in order to begin working with the SDK.

From the "Assets" folder, go into "GoogleVR" -> "DemoScenes" -> "HeadSetDemo". Double-click on the Unity icon that is named "DemoScene". You should see something similar to this upon opening the scene file. This is where you can preview the scene before playing it to get an idea of how the game objects will be laid out in the environment. So let's try that by clicking on the "Play" button. The scene will start out from the user's perspective, which is the main camera.

There is a slight difference in how the left eye and right eye cameras display the environment. This is called distortion correction, and it is intentionally designed that way in order to match the display to the Google Cardboard eye lenses. You may be wondering why you are unable to look around with your mouse. This design is also intentional, to allow the developer to hover the mouse pointer in and out of the game window without disrupting the scene while it is playing. In order to look around in the environment, hold down the Ctrl key, and then the Alt key, to enable head movement. Make sure to press the keys in this order; otherwise you will only be rotating the display along the Z-axis. You might also be wondering where the interactive menu on the floor canvas has gone. The menu is still there; it just does not appear in VR mode. Notice that the dot in the center of the display turns into a halo when you move it over the hovering cube. This happens whenever the dot is placed over a game object in the environment that is interactive.
So even if the menu is not visible, you are still able to select the menu items. If you happen to click on the "VR Mode" button, the left eye and right eye cameras will simply go away and the main camera will be the only camera that displays the world space. VR Mode can be enabled or disabled by clicking on the "VR Mode Enabled" checkbox in the project's inspector. Simply select "GvrMain" in the DemoScene hierarchy to have the inspector display its information.

How the scene is displayed when VR mode is disabled.

Note that, as of the current implementation of Google VR, it is impossible to add UI components into the world space. This is due to the stereoscopic functionality of Google VR and the mathematics involved in calculating the distance of the game objects from the left eye and right eye cameras relative to the world environment. However, it is possible to add non-interactive UI elements (such as a player HUD) as a child 3D element with the main camera as its parent. If you wish to create interactive UI components, they must be created strictly as game objects in the world space. This also implies that the interactive UI components must be selected by the user from a fixed position in the world space, as they would find it difficult to make their selections otherwise. Now that we have gone over the basics of the Google VR SDK, let's move on to some programming.

Applying Attributes to Game Objects

When creating an interactive medium of any kind (in this case a VR app), some of the most basic functions can end up being more complicated than they initially seem. We will demonstrate that by incorporating what seems to be simple user movement. In the same DemoScene scene, we will add four more cubes to the environment. For the sake of cleanliness, first we will remove the existing cube, as it would be an obstruction for our new cubes. To delete a game object from a scene, simply right-click it in the hierarchy and select "Delete". Now that we have removed the existing cube, add a new one by clicking "Create" in the hierarchy, then selecting "3D Object" and "Cube".

Move the cube about 4-5 units along the X or Z axis away from the origin. You can do so by clicking and dragging the red or blue arrow. Now that we have added our cube, the next step is to add a script to the player's perspective object. For this project, we can use the "GvrMain" game object to incorporate the player's movement. In the inspector tab, click on the "Add Component" button, select "New Script" and create a new script titled "MoveToCube".

Once the script has been created, click on the cogwheel icon and select "Edit Script". Copy and paste the code below into MoveToCube.cs.

Next, add an Event Trigger component to your cube. Create a new script titled "CubeSelect". Then select the cogwheel icon and select "Edit Script" to open the script in the script editor. Copy and paste the code below into your CubeSelect.cs script.

Click on the "Add New Event Type" button. Select "PointerClick". Click the + icon to add a method to refer to. In the left box, select the "Cube" game object. For the method, select "CubeSelect" and then click on "GetCubePosition". Finally, select "GvrMain" as the target game object for the method.

When you are finished adding the necessary components, copy and paste the cube in the project hierarchy tab three times in order to get four cubes. They will seem as if they did not appear in the scene, only because they are overlapping each other.
Change the positions of each cube so that they are separated from each other along the X and Z axes. Once completed, the scene should look something similar to this:

Now you can run the VR app and see for yourself that we have now incorporated player movement in this basic implementation.

Tips and General Advice

Many developers recommend that you do not incorporate any acceleration and/or deceleration to the main camera. Doing so will cause nausea to the users and thus give them a negative experience with your VR application. Keep your VR app relatively simple! The user only has two modes of input: head tracking and the Cardboard trigger button. Trying to force functionality with multiple gestures (i.e. looking straight down and/or up) will not be intuitive to the user and will more than likely cause frustration.

About the Author

Jake Rheude is the Director of Business Development for Red Stag Fulfillment, a US-based e-commerce fulfillment provider focused primarily on serving ecommerce businesses shipping heavy, large, or valuable products to customers all around the world. Red Stag is so confident in its fulfillment software combined with their warehouse operations, that for any error, inaccuracy, or late shipment, not only will they reimburse you for that order, but they'll write you a check for $50.


Data Science with R

Packt
04 Jul 2016
16 min read
In this article by Matthias Templ, author of the book Simulation for Data Science with R, we will cover:

What is meant by data science
A short overview of what R is
The essential tools for a data scientist in R

Data science

Looking at the job market, there is no doubt that the industry needs experts in data science. But what is data science, and what is the difference from statistics or computational statistics? Statistics is computing with data. In computational statistics, methods and corresponding software are developed in a highly data-dependent manner using modern computational tools. Computational statistics has a huge intersection with data science. Data science is the applied part of computational statistics plus data management, including the storage of data, databases, and data security issues. The term data science is used when your work is driven by data, with a less strong component of method and algorithm development than computational statistics, but with a lot of pure computer science topics related to storing, retrieving, and handling data sets. It is the marriage of computer science and computational statistics. As an example to show the difference, take the broad area of visualization: a data scientist is also interested in purely process-related visualizations (airflows in an engine, for example), while in computational statistics, methods for visualization of data and statistical results are only touched upon. Data science is the management of the entire modelling process, from data collection to automated reporting and presentation of the results. Storing and managing data, data pre-processing (editing, imputation), data analysis, and modelling are included in this process. Data scientists use statistics and data-oriented computer science tools to solve the problems they face.

R

R has become an essential tool for statistics and data science (Godfrey 2013). As soon as data scientists have to analyze data, R might be the first choice. The open source programming language and software environment R is currently one of the most widely used and popular software tools for statistics and data analysis. It is available at the Comprehensive R Archive Network (CRAN) as free software under the terms of the Free Software Foundation's GNU General Public License (GPL), in source code and binary form. The R Core Team defines R as an environment: R is an integrated suite of software facilities for data manipulation, calculation, and graphical display. Base R includes:

A suite of operators for calculations on arrays, mostly written in C and integrated in R
A comprehensive, coherent, and integrated collection of methods for data analysis
Graphical facilities for data analysis and display, either on-screen or in hard copy
A well-developed, simple, and effective programming language that includes conditional statements, loops, user-defined recursive functions, and input and output facilities
A flexible object-oriented system facilitating code reuse
High performance computing with interfaces to compiled code and facilities for parallel and grid computing
The ability to be extended with (add-on) packages
An environment that allows communication with many other software tools

Each R package provides structured standard documentation including code application examples. Further documents (so-called vignettes) potentially show more applications of the packages and illustrate dependencies between the implemented functions and methods.
R is not only used extensively in the academic world; companies in the area of social media (Google, Facebook, Twitter, and Mozilla Corporation), the banking world (Bank of America, ANZ Bank, Simple), food and pharmaceutical areas (FDA, Merck, and Pfizer), finance (Lloyd, London, and Thomas Cook), technology companies (Microsoft), car construction and logistics companies (Ford, John Deere, and Uber), newspapers (The New York Times and New Scientist), and companies in many other areas use R in a professional context (see also Gentlemen 2009 and Tippmann 2015). International and national organizations nowadays widely use R in their statistical offices (Todorov and Templ 2012; Templ and Todorov 2016). R can be extended with add-on packages, and some of those extensions are especially useful for data scientists, as discussed in the following section.

Tools for data scientists in R

Data scientists typically like:

Flexibility in reading and writing data, including connections to databases
Easy-to-use, flexible, and powerful data manipulation features
Working with modern statistical methodology
High-performance computing tools, including interfaces to foreign languages and parallel computing
Versatile presentation capabilities for generating tables and graphics, which can readily be used in text processing systems such as LaTeX or Microsoft Word
Creating dynamical reports
Building web-based applications
An economical solution

The tools presented in the following sections relate to these topics and help data scientists in their daily work.

Use a smart environment for R

Would you prefer to have one environment that includes modern tools for scientific computing, programming and management of data and files, versioning, output generation, support for a project philosophy, code completion, highlighting, markup languages and interfaces to other software, and automated connections to servers? Currently two software products support this concept. The first one is Eclipse with the extension StatET, or the modified Eclipse IDE from Open Analytics called Architect. The second is a very popular IDE for R called RStudio, which also includes the named features and additionally integrates the package shiny (RStudio, Inc. 2014) for web-based development and the integration of R and rmarkdown (Allaire et al. 2015). It provides a modern scientific computing environment, well designed and easy to use, and, most importantly, distributed under the GPL License.

Use of R as a mediator

Data exchange between statistical systems, database systems, or output formats is often required. In this respect, R offers very flexible import and export interfaces, either through its base installation or, mostly, through add-on packages available from CRAN or GitHub. For example, the package xml2 (Wickham 2015a) allows you to read XML files. For importing delimited files, fixed-width files, and web log files, it is worth mentioning the package readr (Wickham and Francois 2015a) or data.table (Dowle et al. 2015) (function fread), which are supposed to be faster than the available functions in base R. The package XLConnect (Mirai Solutions GmbH 2015) can be used to read and write Microsoft Excel files, including formulas, graphics, and so on. The readxl package (Wickham 2015b) is faster for data import but does not provide export features.
The foreign package (R Core Team 2015) and a newer, promising package called haven (Wickham and Miller 2015) allow you to read file formats from various commercial statistical software. A connection to all major database systems is easily established with specialized packages. Note that the RODBC package (Ripley and Lapsley 2015) is slow but general, while other specialized packages exist for particular databases.

Efficient data manipulation as the daily job

Data manipulation in general, and certainly with large data, is best done with the dplyr package (Wickham and Francois 2015b) or the data.table package (Dowle et al. 2015). The computational speed of both packages is much higher than the data manipulation features of base R, and data.table is slightly faster than dplyr, using keys and fast binary-search-based methods for performance improvements. In the author's view, the syntax of dplyr is much easier for beginners to learn than the base R data manipulation features, and it is possible to write the dplyr syntax using data pipelines, provided internally by the package magrittr (Bache and Wickham 2014). Let's take an example to see the logical concept. We want to compute a new variable ES2 as the square of EngineSize from the data set Cars93. For each group, we want to compute the minimum of the new variable. In addition, the results should be sorted in descending order:

    data(Cars93, package = "MASS")
    library("dplyr")
    Cars93 %>%
      mutate(ES2 = EngineSize^2) %>%
      group_by(Type) %>%
      summarize(min.ES2 = min(ES2)) %>%
      arrange(desc(min.ES2))

    ## Source: local data frame [6 x 2]
    ##
    ##      Type min.ES2
    ## 1   Large   10.89
    ## 2     Van    5.76
    ## 3 Compact    4.00
    ## 4 Midsize    4.00
    ## 5  Sporty    1.69
    ## 6   Small    1.00

The code is somewhat self-explanatory, while data manipulation in base R and data.table needs more expertise in syntax writing. In the case of large data files that exceed available RAM, interfaces to (relational) database management systems are available; see the CRAN task view on high-performance computing, which also includes information about parallel computing. Regarding data manipulation, the excellent packages stringr and stringi for string operations and lubridate for date-time handling should also be mentioned.

The requirement of efficient data preprocessing

A data scientist typically spends a major amount of time not only on data management issues but also on fixing data quality problems. It is out of the scope of this book to mention all the tools for each data preprocessing topic, so as an example we concentrate on one particular topic: the handling of missing values. The VIM package (Templ, Alfons, and Filzmoser 2011; Kowarik and Templ 2016) can be used for visual inspection and imputation of data. It is possible to visualize missing values using suitable plot methods and to analyze the structure of missing values in microdata using univariate, bivariate, multiple, and multivariate plots. The information on missing values from specified variables is highlighted in selected variables. VIM can also evaluate imputations visually. Moreover, the VIMGUI package (Schopfhauser et al., 2014) provides a point-and-click graphical user interface (GUI). One such plot, a parallel coordinate plot for missing values, is shown in the following graph. It highlights the values of certain chemical elements; in red, those observations are marked that have a missing value in the chemical element Bi.
It is easy to see missing-at-random situations with such plots, as well as to detect any structure in the missing pattern. Note that this data is compositional and thus transformed using a log-ratio transformation from the package robCompositions (Templ, Hron, and Filzmoser 2011):

    library("VIM")
    data(chorizonDL, package = "VIM")  ## for missing values
    x <- chorizonDL[, c(15, 101:110)]
    library("robCompositions")
    x <- cenLR(x)$x.clr
    parcoordMiss(x,
        plotvars = 2:11, interactive = FALSE)
    legend("top", col = c("skyblue", "red"), lwd = c(1,1),
        legend = c("observed in Bi", "missing in Bi"))

To impute missing values, not only k-nearest neighbor and hot-deck methods are included, but also robust statistical methods implemented in an EM algorithm, for example in the function irmi. The implemented methods can deal with a mixture of continuous, semi-continuous, binary, categorical, and count variables:

    any(is.na(x))
    ## [1] TRUE
    ximputed <- irmi(x)
    ## Time difference of 0.01330566 secs
    any(is.na(ximputed))
    ## [1] FALSE

Visualization as a must

While in former times results were presented mostly in tables and data was analyzed by its values on screen, nowadays visualization of data and results has become very important. Data scientists often make heavy use of visualizations to analyze data, and also for reporting and presenting results; it is already a no-go not to make use of visualizations. R features not only its traditional graphics system but also an implementation of the grammar of graphics book (Wilkinson 2005) in the form of the ggplot2 package (Wickham 2009). Why should a data scientist make use of ggplot2? Because it is a very flexible, customizable, consistent, and systematic approach to generating graphics. It allows you to define your own themes (for example, corporate designs in companies) and supports the user with legends and an optimal plot layout. In ggplot2, the parts of a plot are defined independently. We do not go into details and refer to Wickham (2009), but here is a simple example to show the user-friendliness of the implementation:

    library("ggplot2")
    ggplot(Cars93, aes(x = Horsepower, y = MPG.city)) + geom_point() + facet_wrap(~Cylinders)

Here, we mapped Horsepower to the x variable and MPG.city to the y variable, and used Cylinders for faceting. We used geom_point to tell ggplot2 to produce scatterplots.

Reporting and web applications

Every analysis and report should be reproducible, especially when a data scientist does the job. Everything from the past should be computable at any time thereafter. Additionally, a task for a data scientist is to organize and manage text, code, data, and graphics. The use of dynamical reporting tools raises the quality of outcomes and reduces the workload. In R, the knitr package provides functionality for creating reproducible reports. It links code and text elements: the code is executed and the results are embedded in the text. Different output formats are possible, such as PDF, HTML, or Word. The structuring can be done most simply using rmarkdown (Allaire et al., 2015). Markdown is a markup language with many features, including headings of different sizes, text formatting, lists, links, HTML, JavaScript, LaTeX equations, tables, and citations. The aim is to generate documents from plain text. Corporate designs and styles can be managed through CSS stylesheets. For data scientists, it is highly recommended to use these tools in their daily work. We already mentioned the automated generation of HTML pages from plain text with rmarkdown.
The shiny package (RStudio Inc. 2014) allows you to build web-based applications. A website generated with shiny changes instantly as users modify inputs, and you can stay within the R environment to build shiny user interfaces. Interactivity can be integrated using JavaScript, and there is built-in support for animation and sliders. Following is a very simple example that includes a slider and presents a scatterplot with highlighting of the outliers found. We do not go into detail on the code, which should only prove that it is just as simple to make a web application with shiny:

    library("shiny")
    library("robustbase")

    ## Define server code
    server <- function(input, output) {
      output$scatterplot <- renderPlot({
        x <- c(rnorm(input$obs - 10), rnorm(10, 5)); y <- x + rnorm(input$obs)
        df <- data.frame("x" = x, "y" = y)
        df$out <- ifelse(covMcd(df)$mah > qchisq(0.975, 1), "outlier", "non-outlier")
        ggplot(df, aes(x = x, y = y, colour = out)) + geom_point()
      })
    }

    ## Define UI
    ui <- fluidPage(
      sidebarLayout(
        sidebarPanel(
          sliderInput("obs", "No. of obs.", min = 10, max = 500, value = 100, step = 10)
        ),
        mainPanel(plotOutput("scatterplot"))
      )
    )

    ## Shiny app object
    shinyApp(ui = ui, server = server)

Building R packages

First, RStudio and the package devtools (Wickham and Chang 2016) make life easy when building packages. RStudio has a lot of facilities for package building, and its integrated package devtools includes features for checking, building, and documenting a package efficiently, including roxygen2 (Wickham, Danenberg, and Eugster) for automated documentation of packages. When the code of a package is updated, load_all('pathToPackage') simulates a restart of R, a new installation of the package, and the loading of the newly built package. Note that there are many other functions available for testing, documenting, and checking. Secondly, build a package whenever you have written more than two functions and whenever you deal with more than one data set. If you use it only for yourself, you may be lazy about documenting the functions to save time. Packages allow you to share code easily, to load all functions and data with one line of code, to have the documentation integrated, and to support consistency checks and additional integrated unit tests. Advice for beginners: read the manual Writing R Extensions, and use all the features that are provided by RStudio and devtools.

Summary

In this article, we discussed essential tools for data scientists in R. This covers methods for data pre-processing and data manipulation, and tools for reporting, reproducible work, visualization, R packaging, and writing web applications. A data scientist should learn to use the presented tools and deepen their knowledge of the proposed methods and software tools. Having learnt these lessons, a data scientist is well prepared to face the challenges in data analysis, data analytics, data science, and data problems in practice.

References

Allaire, J.J., J. Cheng, Y. Xie, J. McPherson, W. Chang, J. Allen, H. Wickham, and H. Hyndman. 2015. rmarkdown: Dynamic Documents for R. http://CRAN.R-project.org/package=rmarkdown.

Bache, S.M., and H. Wickham. 2014. magrittr: A Forward-Pipe Operator for R. https://CRAN.R-project.org/package=magrittr.

Dowle, M., A. Srinivasan, T. Short, S. Lianoglou, R. Saporta, and E. Antonyan. 2015. data.table: Extension of Data.frame. https://CRAN.R-project.org/package=data.table.

Gentlemen, R. 2009. "Data Analysts Captivated by R's Power." New York Times. http://www.nytimes.com/2009/01/07/technology/business-computing/07program.html.
Godfrey, A.J.R. 2013. "Statistical Analysis from a Blind Person's Perspective." The R Journal 5 (1): 73–80.

Kowarik, A., and M. Templ. 2016. "Imputation with the R Package VIM." Journal of Statistical Software.

Mirai Solutions GmbH. 2015. XLConnect: Excel Connector for R. http://CRAN.R-project.org/package=XLConnect.

R Core Team. 2015. foreign: Read Data Stored by Minitab, S, SAS, SPSS, Stata, Systat, Weka, dBase, …. http://CRAN.R-project.org/package=foreign.

Ripley, B., and M. Lapsley. 2015. RODBC: ODBC Database Access. http://CRAN.R-project.org/package=RODBC.

RStudio Inc. 2014. shiny: Web Application Framework for R. http://CRAN.R-project.org/package=shiny.

Schopfhauser, D., M. Templ, A. Alfons, A. Kowarik, and B. Prantner. 2014. VIMGUI: Visualization and Imputation of Missing Values. http://CRAN.R-project.org/package=VIMGUI.

Templ, M., A. Alfons, and P. Filzmoser. 2011. "Exploring Incomplete Data Using Visualization Techniques." Advances in Data Analysis and Classification 6 (1): 29–47.

Templ, M., and V. Todorov. 2016. "The Software Environment R for Official Statistics and Survey Methodology." Austrian Journal of Statistics 45 (1): 97–124.

Templ, M., K. Hron, and P. Filzmoser. 2011. robCompositions: An R Package for Robust Statistical Analysis of Compositional Data. John Wiley & Sons.

Tippmann, S. 2015. "Programming Tools: Adventures with R." Nature, 109–10. doi:10.1038/517109a.

Todorov, V., and M. Templ. 2012. R in the Statistical Office: Part II. Working paper 1/2012. United Nations Industrial Development.

Wickham, H. 2009. ggplot2: Elegant Graphics for Data Analysis. Springer-Verlag New York. http://had.co.nz/ggplot2/book.

Wickham, H. 2015a. xml2: Parse XML. http://CRAN.R-project.org/package=xml2.

Wickham, H. 2015b. readxl: Read Excel Files. http://CRAN.R-project.org/package=readxl.

Wickham, H., and W. Chang. 2016. devtools: Tools to Make Developing R Packages Easier. https://CRAN.R-project.org/package=devtools.

Wickham, H., and R. Francois. 2015a. readr: Read Tabular Data. http://CRAN.R-project.org/package=readr.

Wickham, H., and R. Francois. 2015b. dplyr: A Grammar of Data Manipulation. https://CRAN.R-project.org/package=dplyr.

Wickham, H., and E. Miller. 2015. haven: Import SPSS, Stata and SAS Files. http://CRAN.R-project.org/package=haven.

Wickham, H., P. Danenberg, and M. Eugster. roxygen2: In-Source Documentation for R. https://github.com/klutometis/roxygen.

Wilkinson, L. 2005. The Grammar of Graphics (Statistics and Computing). Secaucus, NJ, USA: Springer-Verlag New York, Inc.

Further resources on this subject:
Adding Media to Our Site [article]
Data Tables and DataTables Plugin in jQuery 1.3 with PHP [article]
JavaScript Execution with Selenium [article]


Voice Interaction and Android Marshmallow

Raka Mahesa
30 Jun 2016
6 min read
"Jarvis, play some music." You might imagine that to be a quote from some Iron Man stories (and hey, that might be an actual quote), but if you replace the "Jarvis" part with "OK Google," you'll get an actual line that you can speak to your Android phone right now that will open a music player and play a song. Go ahead and try it out yourself. Just make sure you're on your phone's home screen when you do it. This feature is called Voice Action, and it was actually introduced years ago in 2010, though back then it only worked on certain apps. However, Voice Action only accepts a single-line voice command, unlike Jarvis who usually engages in a conversation with its master. For example, if you ask Jarvis to play music, it will probably reply by asking what music you want to play. Fortunately, this type of conversation will no longer be limited to movies or comic books, because with Android Marshmallow, Google has introduced an API for that: the Voice Interaction API. As the name implies, the Voice Interaction API enables you to add voice-based interaction to its app. When implemented properly, the user will be able to command his/her phone to do a particular task without any touch interaction just by having a conversation with the phone. Pretty similar to Jarvis, isn't it? So, let's try it out! One thing to note before beginning: the Voice Interaction API can only be activated if the app is launched using Voice Action. This means that if the app is opened from the launcher via touch, the API will return a null object and cannot be used on that instance. So let’s cover a bit of Voice Action first before we delve further into using the Voice Interaction API. Requirements To use the Voice Interaction API, you need: Android Studio v1.0 or above Android 6.0 (API 23) SDK A device with Android Marshmallow installed (optional) Voice Action Let's start by creating a new project with a blank activity. You won’t use the app interface and you can use the terminal logging to check what app does, so it's fine to have an activity with no user interface here. Okay, you now have the activity. Let’s give the user the ability to launch it using a voice command. Let's pick a voice command for our app—such as a simple "take a picture" command? This can be achieved by simply adding intent filters to the activity. Add these lines to your app manifest file and put them below the original intent filter of your app activity. <intent-filter> <action android_name="android.media.action.STILL_IMAGE_CAMERA" /> <category android_name="android.intent.category.DEFAULT" /> <category android_name="android.intent.category.VOICE" /> </intent-filter> These lines will notify the operating system that your activity should be triggered when a certain voice command is spoken. The action "android.media.action.STILL_IMAGE_CAMERA" is associated with the "take a picture" command, so to activate the app using a different command, you need to specify a different action. Check out this list if you want to find out what other commands are supported. And that's all you need to do to implement Voice Action for your app. Build the app and run it on your phone. So when you say "OK Google, take a picture", your activity will show up. Voice Interaction All right, let's move on to Voice Interaction. When the activity is created, before you start the voice interaction part, you must always check whether the activity was started from Voice Action and whether the VoiceInteractor service is available. 
To do that, call the isVoiceInteraction() function and check the returned value, as in the sketch above. If it returns true, the service is available for you to use.

Let's say you want your app to first ask the user which side he/she is on, and then change the app background color accordingly. If the user chooses the dark side, the color will be black, but if the user chooses the light side, the app color will be white. Sounds like a simple and fun app, doesn't it?

So first, let's define what options are available for users to choose. You can do this by creating an instance of VoiceInteractor.PickOptionRequest.Option for each available choice. Note that you can associate more than one word with a single option, as can be seen in the following code.

VoiceInteractor.PickOptionRequest.Option option1 = new VoiceInteractor.PickOptionRequest.Option("Light", 0);
option1.addSynonym("White");
option1.addSynonym("Jedi");

VoiceInteractor.PickOptionRequest.Option option2 = new VoiceInteractor.PickOptionRequest.Option("Dark", 1);
option2.addSynonym("Black");
option2.addSynonym("Sith");

The next step is to define a voice interaction request and tell the VoiceInteractor service to execute that request. For this app, use the PickOptionRequest for the request object. You can check out other request types on this page.

VoiceInteractor.PickOptionRequest.Option[] options = new VoiceInteractor.PickOptionRequest.Option[] { option1, option2 };
VoiceInteractor.Prompt prompt = new VoiceInteractor.Prompt("Which side are you on");

getVoiceInteractor().submitRequest(new VoiceInteractor.PickOptionRequest(prompt, options, null) {
    //Handle each option here
});

Finally, determine what to do based on the choice picked by the user. This time, we'll simply check the index of the selected option and change the app background color based on that (we won't delve into how to change the app background color here; let's leave it for another occasion).

@Override
public void onPickOptionResult(boolean finished, Option[] selections, Bundle result) {
    if (finished && selections.length == 1) {
        if (selections[0].getIndex() == 0)
            changeBackgroundToWhite();
        else if (selections[0].getIndex() == 1)
            changeBackgroundToBlack();
    }
}

@Override
public void onCancel() {
    closeActivity();
}

And that's it! When you run your app on your phone, it should ask which side you're on if you launch it using Voice Action. You've only learned the basics here, but this should be enough to add a little voice interactivity to your app. And if you ever want to create a Jarvis version, you just need to add "sir" to every question your app asks.

About the author

Raka Mahesa is a game developer at Chocoarts who is interested in digital technology in general. Outside of work hours, he likes to work on his own projects, with Corridoom VR being his latest released game. Raka also tweets regularly as @legacy99.

Angular 2 Components: Making app development easier

Mary Gualtieri
30 Jun 2016
5 min read
When Angular 2 was announced in October 2014, the JavaScript community went into a frenzy. What was going to happen to the beloved Angular 1, which so many developers loved? Could change be a good thing? Change only means improvement, so we should welcome Angular 2 with open arms and embrace it.

One of the biggest changes from Angular 1 to Angular 2 was the purpose of a component. In Angular 2, components are the main way to build and specify logic on a page; in other words, components are where you define a specific responsibility of your application. In Angular 1, this was achieved through directives, controllers, and scope. With this change, Angular 2 offers better functionality and actually makes it easier to build applications. Angular 2 components also ensure that code from one section of the application will not interfere with other sections of the application.

To build a component, let's first look at what is needed to create it:

An association with DOM/host elements.
Well-defined input and output properties that form a public API.
A template that describes how the component is rendered on the HTML page.
Configuration of the dependency injection.

Input and output properties

Input and output properties are considered the public API of a component, allowing you to access the backend data. Data flows into a component through the input property and flows out of the component through the output property. Together, the input and output properties represent a component in your application.

Template

A template is needed in order to render a component on a page. In order to render the template, you must have a list of the directives that can be used in the template.

Host elements

In order for a component to be rendered in the DOM, the component must associate itself with a DOM or host element. A component can interact with host elements by listening to their events, updating their properties, and invoking methods on them.

Dependency injection

Dependency injection is when a component depends on a service. You request this service through a constructor, and the framework provides it for you. This is significant because you can depend on interfaces instead of concrete types, which enables testability and more control. Dependency injection is created and configured in directive and component decorators.

Bootstrap

In Angular, you have to bootstrap in order to initialize your application, either automatically or manually. In Angular 1, to automatically bootstrap your app, you added ng-app to your HTML file; to manually bootstrap it, you would call angular.bootstrap(document, ['myApp']);. In Angular 2, you bootstrap by simply calling bootstrap() with your root component. It's important to remember that bootstrapping in Angular is completely different from Twitter Bootstrap.

Directives

Directives are essentially components without a template. The purpose behind a directive is to allow components to interact with one another. Another way to think of a directive is as a component without a template. You still have the option to write a directive with a decorator.

Selectors

Selectors are very easy to understand. Use a selector in order for Angular to identify the component; the selector is used to call the component in the HTML file. For example, if your selector is called app, you can use <app></app> to call the component in the HTML file.

Let's build a simple component!
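To give a concrete picture before the walkthrough, here is a minimal sketch of a root component written against the Angular 2 beta TypeScript API; the AppComponent class name and the inline template are illustrative choices rather than anything prescribed by the steps below:

// app.ts: a minimal Angular 2 (beta) root component
import {Component} from 'angular2/core';
import {bootstrap} from 'angular2/platform/browser';

// The selector tells Angular which tag in the HTML file renders this component.
@Component({
  selector: 'myapp',
  template: '<h1>Hello from {{name}}</h1>'
})
export class AppComponent {
  name = 'Angular 2';
}

// Bootstrap the root component so Angular takes over the <myapp></myapp> element.
bootstrap(AppComponent);

With this sketch in mind, the same pieces appear one by one in the steps that follow.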
Let's walk through the steps required to actually build a component using Angular 2:

Step 1: Add a component decorator.
Step 2: Add a selector. In your HTML file, use <myapp></myapp> to call the template.
Step 3: Add a template.
Step 4: Add a class to represent the component.
Step 5: Bootstrap the component class.
Step 6: Finally, import both the bootstrap and Component files.

This is a root component. In Angular, you have what is called a component tree; everything comes back to the component tree. The question that you must ask yourself is: what does a component look like if it is not a root component? Perform the following steps for this:

Step 1: Add an import component. For every component that you create, it is important to add import {Component} from "angular2/core";
Step 2: Add a selector and a template.
Step 3: Export the class that represents the component.
Step 4: Switch to the root component, and then import the component file, adding the relative path (./todo) to the file.
Step 5: Add an array of directives to the root component in order to be able to use the component.

Let's review. In order to make a component, you must associate host elements, have well-defined input and output properties, have a template, and configure dependency injection. This is all achieved through the use of selectors, directives, and a template. Angular 2 is still in the beta stages, but once it arrives, it will be a game changer for the development world.

About the author

Mary Gualtieri is a full stack web developer and web designer who enjoys all aspects of the web and creating a pleasant user experience. Web development, specifically frontend development, is an interest of hers because it challenges her to think outside of the box and solve problems, all while constantly learning. She can be found on GitHub as MaryGualtieri.

Development Tricks with Unreal Engine 4

Packt
22 Jun 2016
39 min read
In this article by Benjamin Carnall, the author of Unreal Engine 4 by Example, we will look at some development tricks with Unreal Engine 4. (For more resources related to this topic, see here.)

Creating the C++ world objects

With the character constructed, we can now start to build the level. We are going to create a block out for the lanes that we will be using for the level. We can then use this block out to construct a mesh that we can reference in code. Before we get into the level creation, we should ensure that the functionality we implemented for the character works as intended.

With the BountyDashMap open, navigate to the C++ classes folder of the content browser. Here, you will be able to see the BountyDashCharacter. Drag and drop the character into the game level onto the platform. Then, search for TargetPoint in the Modes panel. Drag and drop three of these target points into the game level, and you will be presented with the following:

Now, press the Play button to enter the PIE (Play in Editor) mode. The character will be automatically possessed and used for input. Also, ensure that when you press A or D, the character moves to the next available target point.

Now that we have the base of the character implemented, we should start to build the level. We require three lanes for the player to run down and obstacles for the player to dodge. For now, we must focus on the lanes that the player will be running on.

Let's start by blocking out how the lanes will appear in the level. Drag a BSP box brush into the game world. You can find the Box brush in the Modes panel under the BSP section, under the name Box. Place the box at world location (0.0f, 0.0f, -100.0f). This will place the box in the center of the world. Now, change the X property of the box under the Brush Settings section of the Details panel to 10000. We require this lane to be so long so that later on, we can hide the end using fog without obscuring the objects that the player will need to dodge.

Next, we need to click and drag two more copies of this box. You can do this by holding Alt while moving an object via the transform widget. Position one box copy at world location (0.0f, -230.0f, -100.0f) and the next at (0.0f, 230.0f, -100.0f). The last thing we need to do to finish blocking out the level is to place the target points in the center of each lane. You will be presented with this when you are done:

Converting BSP brushes into a static mesh

The next thing we need to do is convert the lane brushes we made into one mesh, so that we can reference it within our code base. Select all of the boxes in the scene; you can do this by holding Ctrl while selecting the box brushes in the editor. With all of the brushes selected, address the Details panel. Ensure that the transformation of your selection is positioned in the middle of these three brushes. If it is not, you can either reselect the brushes in a different order, or you can group the brushes by pressing Ctrl + G while the boxes are selected. This is important, as the position of the transform widget shows what the origin of the generated mesh will be.

With the group of boxes selected, address the Brush Settings section in the Details panel; there is a small white expansion arrow at the bottom of the section, so click on this now. You will then be presented with a Create Static Mesh button; press this now. Name this mesh Floor_Mesh_BountyDash, and save it under Geometry/Meshes/ in the content folder.
Smoke and Mirrors with C++ objects

We are going to create the illusion of movement within our level. You may have noticed that we have not included any facilities in our character to move forward in the game world. This is because our character will never advance past his X position at 0. Instead, we are going to be moving the world toward and past him. This way, we can create very easy spawning and processing logic for the obstacles and game world, without having to worry about continuously spawning objects that the player can move past further and further down the X axis.

We require some of the level assets to move through the world, so we can establish the illusion of movement for the character. One of these moving objects will be the floor. This requires some logic that will reposition floor meshes as they reach a certain depth behind the character. We will be creating a swap chain of sorts that will work with three meshes. The meshes will be positioned in a contiguous line. As the meshes move underneath and behind the player, we need to move any mesh that is far enough behind the player's back to the front of the swap chain. The effect is a never-ending chain of floor meshes constantly flowing underneath the player. The following diagram may help to understand the concept:

Obstacles and coin pickups will follow a similar logic. However, they will simply be destroyed upon reaching the kill point shown in the preceding diagram.

Modifying the BountyDashGameMode

Before we start to create code classes that will feature in our world, we are going to modify the BountyDashGameMode that was generated when the project was created. The game mode is going to be responsible for all of the game state variables and rules. Later on, we are going to use the game mode to determine how the player respawns when the game is lost.

BountyDashGameMode Class Definition

The game mode is going to be fairly simple; we are going to add a few member variables that will hold the current state of the game, such as game speed, game level, and the number of coins needed to increase the game speed. Navigate to BountyDashGameMode.h and add the following code:

UCLASS(minimalapi)
class ABountyDashGameMode : public AGameMode
{
    GENERATED_BODY()

    UPROPERTY()
    float gameSpeed;

    UPROPERTY()
    int32 gameLevel;

As you can see, we have two private member variables called gameSpeed and gameLevel. These are private, as we wish no other object to be able to modify the contents of these values. You will also note that the class has been specified with minimalapi. This specifier effectively informs the engine that other code modules will not need information from this object outside of the class type. This means you will be able to cast this class type, but functions cannot be called within other modules. This is specified as a way to optimize compile times, as no module outside of this project API will require interactions with our game mode.

Next, we declare the public functions and members that we will be using within our game mode. Add the following code to the ABountyDashGameMode class definition:

public:
    ABountyDashGameMode();

    void CharScoreUp(unsigned int charScore);

    UFUNCTION()
    float GetInvGameSpeed();

    UFUNCTION()
    float GetGameSpeed();

    UFUNCTION()
    int32 GetGameLevel();

The function called CharScoreUp() takes in the player's current score (held by the player) and changes game values based on this score. This means we are able to make the game more difficult as the player scores more points.
The next three functions are simply the accessor methods that we can use to get the private data of this class in other objects. Next, we need to declare our protected members that we have exposed to be EditAnywhere, so we may adjust these from the editor for testing purposes:

protected:
    UPROPERTY(EditAnywhere, BlueprintReadOnly)
    int32 numCoinsForSpeedIncrease;

    UPROPERTY(EditAnywhere, BlueprintReadWrite)
    float gameSpeedIncrease;
};

The numCoinsForSpeedIncrease variable will determine how many coins it takes to increase the speed of the game, and the gameSpeedIncrease value will determine how much faster the objects move when the numCoinsForSpeedIncrease value has been met.

BountyDashGameMode Function Definitions

Let's begin adding some definitions to the BountyDashGameMode functions. They will be very simple at this point. Let's start by providing some default values for our member variables within the constructor and by assigning the class that is to be used for our default pawn. Add the definition for the ABountyDashGameMode constructor:

ABountyDashGameMode::ABountyDashGameMode()
{
    // set default pawn class to our ABountyDashCharacter
    DefaultPawnClass = ABountyDashCharacter::StaticClass();

    numCoinsForSpeedIncrease = 5;
    gameSpeed = 10.0f;
    gameSpeedIncrease = 5.0f;
    gameLevel = 1;
}

Here, we are setting the default pawn class; we are doing this by calling StaticClass() on the ABountyDashCharacter. As we have just referenced the ABountyDashCharacter type, ensure that #include "BountyDashCharacter.h" is added to the BountyDashGameMode.cpp include list. The StaticClass() function is provided by default for all objects, and it returns the class type information of the object as a UClass*.

We then establish some default values for member variables. The player will have to pick up five coins to increase the level. The game speed is set to 10.0f (10 m/s), and the game will speed up by 5.0f (5 m/s) every time the coin quota is reached. Next, let's add a definition for the CharScoreUp() function:

void ABountyDashGameMode::CharScoreUp(unsigned int charScore)
{
    if (charScore != 0 &&
        charScore % numCoinsForSpeedIncrease == 0)
    {
        gameSpeed += gameSpeedIncrease;
        gameLevel++;
    }
}

This function is quite self-explanatory. The character's current score is passed into the function. We then check whether the character's score is not currently 0, and we check whether the remainder of our character score is 0 after being divided by the number of coins needed for a speed increase; that is, if it divides evenly, the quota has been reached. We then increase the game speed by the gameSpeedIncrease value and then increment the level.

The last thing we need to add is the accessor methods described previously. They do not require much explanation, short of the GetInvGameSpeed() function. This function will be used by objects that wish to be pushed down the X axis at the game speed:

float ABountyDashGameMode::GetInvGameSpeed()
{
    return -gameSpeed;
}

float ABountyDashGameMode::GetGameSpeed()
{
    return gameSpeed;
}

int32 ABountyDashGameMode::GetGameLevel()
{
    return gameLevel;
}

Getting our game mode via Template functions

The ABountyDashGameMode now contains information and functionality that will be required by most of the BountyDash objects we create going forward. We need to create a lightweight method to retrieve our custom game mode, ensuring that the type information is preserved.
We can do this by creating a template function that will take in a world context and return the correct game mode handle. Traditionally, we could just use a direct cast to ABountyDashGameMode; however, this would require including BountyDashGameMode.h in BountyDash.h. As not all of our objects will require knowledge of the game mode, this is wasteful. Navigate to the BountyDash.h file now. You will be presented with the following:

#pragma once

#include "Engine.h"

What currently exists in the file is very simple: #pragma once has again been used to ensure that the compiler only builds and includes the file once. Then Engine.h has been included, so every other object in BOUNTYDASH_API (they include BountyDash.h by default) has access to the functions within Engine.h. This is a good place to include utility functions that you wish all objects to have access to. In this file, include the following lines of code:

template<typename T>
T* GetCustomGameMode(UWorld* worldContext)
{
    return Cast<T>(worldContext->GetAuthGameMode());
}

This code, simply put, is a template function that takes in a game world handle. It gets the game mode from this context via the GetAuthGameMode() function, and then casts this game mode to the template type provided to the function. We must cast to the template type as GetAuthGameMode() simply returns an AGameMode handle. Now, with this in place, let's begin coding our never-ending floor.

Coding the floor

The construction of the floor will be quite simple in essence, as we only need a few variables and a tick function to achieve the functionality we need. Use the class wizard to create a class named Floor that inherits from AActor. We will start by modifying the class definition found in Floor.h. Navigate to this file now.

Floor Class Definition

The class definition for the floor is very basic. All we need is a Tick function and some accessor methods, so we may provide some information about the floor to other objects. I have also removed the BeginPlay function provided by default by the class wizard, as it is not needed. The following is the AFloor class definition in its entirety; replace what is present in Floor.h with this now (keeping the #include list intact):

UCLASS()
class BOUNTYDASH_API AFloor : public AActor
{
    GENERATED_BODY()

public:
    // Sets default values for this actor's properties
    AFloor();

    // Called every frame
    virtual void Tick(float DeltaSeconds) override;

    float GetKillPoint();
    float GetSpawnPoint();

protected:
    UPROPERTY(EditAnywhere)
    TArray<USceneComponent*> FloorMeshScenes;

    UPROPERTY(EditAnywhere)
    TArray<UStaticMeshComponent*> FloorMeshes;

    UPROPERTY(EditAnywhere)
    UBoxComponent* CollisionBox;

    int32 NumRepeatingMesh;
    float KillPoint;
    float SpawnPoint;
};

We have three UPROPERTY declared members, the first two being TArrays that will hold handles to the USceneComponent and UStaticMeshComponent objects that will make up the floor. We require the TArray of scene components, as the USceneComponent objects provide us with a world transform that we can apply translations to, so we may update the position of the generated floor mesh pieces. The last UPROPERTY is a collision box that will be used for the actual player collisions to prevent the player from falling through the moving floor. The reason we are using a BoxComponent instead of the meshes for collision is that we do not want the player to translate with the moving meshes.
Due to surface friction simulation, having the character collide with any of the moving meshes would cause the player to move with the mesh. The last three members are protected and do not require any UPROPERTY specification. We are simply going to use the two float values, KillPoint and SpawnPoint, to save calculations output from the constructor, so we may use them in the Tick() function. The integer value called NumRepeatingMesh will be used to determine how many meshes we will have in the chain.

Floor Function Definitions

As always, we will start with the constructor of the floor. We will be performing the bulk of our calculations for this object here. We will be creating the USceneComponents and UStaticMeshComponents we are going to use to make up our moving floor. With dynamic programming in mind, we should establish the construction algorithm so that we can create any number of meshes in the moving line. Also, as we will be getting the speed of the floor's movement from the game mode, ensure that #include "BountyDashGameMode.h" is included in Floor.cpp.

The AFloor::AFloor() constructor

Start by adding the following lines to the AFloor constructor called AFloor::AFloor(), which is found in Floor.cpp:

RootComponent = CreateDefaultSubobject<USceneComponent>(TEXT("Root"));

ConstructorHelpers::FObjectFinder<UStaticMesh> myMesh(TEXT(
"/Game/Barrel_Hopper/Geometry/Floor_Mesh_BountyDash.Floor_Mesh_BountyDash"));

ConstructorHelpers::FObjectFinder<UMaterial> myMaterial(TEXT(
"/Game/StarterContent/Materials/M_Concrete_Tiles.M_Concrete_Tiles"));

To start with, we are simply using FObjectFinders to find the assets that we require for the mesh. For the myMesh finder, ensure that you pass the reference location of the static floor mesh that we created earlier. We also created a scene component to be used as the root component for the floor object. Next, we are going to check the success of the mesh acquisition and then establish some variables for the mesh placement logic:

if (myMesh.Succeeded())
{
    NumRepeatingMesh = 3;

    FBoxSphereBounds myBounds = myMesh.Object->GetBounds();
    float XBounds = myBounds.BoxExtent.X * 2;
    float ScenePos = ((XBounds * (NumRepeatingMesh - 1)) / 2.0f) * -1;

    KillPoint = ScenePos - (XBounds * 0.5f);
    SpawnPoint = (ScenePos * -1) + (XBounds * 0.5f);

Note that we have just opened an if statement without closing the scope; from time to time, I will split the segments of the code within a scope across multiple pages. If you are ever lost as to the current scope that we are working in, then look for a comment such as // <-- Closing if(myMesh.Succeeded()) or a similarly named comment later on.

Firstly, we initialize the NumRepeatingMesh value with 3. We are using a variable here instead of a hard-coded value so that we may update the number of meshes in the chain without having to refactor the remaining code base. We then get the bounds of the mesh object using the function called GetBounds() on the mesh asset that we just retrieved. This returns the FBoxSphereBounds structure, which will provide you with all of the bounding information of a static mesh asset. We then use the X component of the member called BoxExtent to initialize XBounds. BoxExtent is a vector that holds the extent of the bounding box of this mesh. We save the X component of this vector, so we can use it for the mesh chain placement logic. We have doubled this value, as the BoxExtent vector only represents the extent of the box from the origin to one corner of the mesh.
Meaning, if we want the total bounds of the mesh, we must double any of the BoxExtent components. Next, we calculate the initial scene position of the first USceneComponent we will be attaching a mesh to, and store it in ScenePos. We can determine this position by getting the total length of all of the meshes in the chain (XBounds * (NumRepeatingMesh - 1)) and then halving the resulting value, which gives us the distance the first scene component will be from the origin along the X axis. We also multiply this value by -1 to make it negative, as we wish to start our mesh chain behind the character (at the X position 0).

We then use ScenePos to specify KillPoint, which represents the point in space that floor mesh pieces should get to before swapping back to the start of the chain. For the purposes of the swap chain, whenever a scene component is half a mesh piece length behind the position of the first scene component in the chain, it should be moved to the other side of the chain.

With all of our variables in place, we can now iterate through the number of meshes we desire (3) and create the appropriate components. Add the following code to the scope of the if statement that we just opened:

for (int i = 0; i < NumRepeatingMesh; ++i)
{
    // Initialize Scene
    FString SceneName = "Scene" + FString::FromInt(i);
    FName SceneID = FName(*SceneName);
    USceneComponent* thisScene = CreateDefaultSubobject<USceneComponent>(SceneID);
    check(thisScene);

    thisScene->AttachTo(RootComponent);
    thisScene->SetRelativeLocation(FVector(ScenePos, 0.0f, 0.0f));
    ScenePos += XBounds;

    FloorMeshScenes.Add(thisScene);

Firstly, we create a name for the scene component by appending Scene with the iteration value that we are up to. We then convert this appended FString to an FName and provide it to the CreateDefaultSubobject template function. With the resultant USceneComponent handle, we call AttachTo() to bind it to the root component. Then, we set the RelativeLocation of the USceneComponent. Here, we are passing in the ScenePos value that we calculated earlier as the X component of the new relative location. The relative location of this component will always be based off of the position of the root SceneComponent that we created earlier. With the USceneComponent appropriately placed, we increment the ScenePos value by the XBounds value. This will ensure that subsequent USceneComponents created in this loop will be placed an entire mesh length away from the previous one, forming a contiguous chain of meshes attached to scene components. Lastly, we add this new USceneComponent to FloorMeshScenes, so we may later perform translations on the components.

Next, we will construct the mesh components; add the following code to the loop:

    // Initialize Mesh
    FString MeshName = "Mesh" + FString::FromInt(i);
    UStaticMeshComponent* thisMesh = CreateDefaultSubobject<UStaticMeshComponent>(FName(*MeshName));
    check(thisMesh);

    thisMesh->AttachTo(FloorMeshScenes[i]);
    thisMesh->SetRelativeLocation(FVector(0.0f, 0.0f, 0.0f));
    thisMesh->SetCollisionProfileName(TEXT("OverlapAllDynamic"));

    if (myMaterial.Succeeded())
    {
        thisMesh->SetStaticMesh(myMesh.Object);
        thisMesh->SetMaterial(0, myMaterial.Object);
    }

    FloorMeshes.Add(thisMesh);
} // <-- Closing for(int i = 0; i < NumRepeatingMesh; ++i)

As you can see, we have performed a similar name creation process for the UStaticMeshComponents as we did for the USceneComponents. The preceding construction process was quite simple.
We attach the mesh to the scene component so that the mesh will follow any translation that we apply to the parent USceneComponent. We then ensure that the mesh's origin will be centered around the USceneComponent by setting its relative location to (0.0f, 0.0f, 0.0f). We then ensure that the meshes do not collide with anything in the game world; we do so with the SetCollisionProfileName() function. If you remember, when we used this function earlier, we provided a profile name that we wished the object to take its collision properties from. In our case, we wish this mesh to overlap all dynamic objects, thus we pass OverlapAllDynamic. Without this line of code, the character may collide with the moving floor meshes, and this will drag the player along at the same speed, thus breaking the illusion of motion we are trying to create. Lastly, we assign the static mesh object and material we obtained earlier with the FObjectFinders. We ensure that we add this new mesh object to the FloorMeshes array in case we need it later. We also close the loop scope that we created earlier.

The next thing we are going to do is create the collision box that will be used for character collisions. With the box set to collide with everything and the meshes set to overlap everything, we will be able to collide with the stationary box while the meshes whip past under our feet. The following code will create a box collider:

    CollisionBox = CreateDefaultSubobject<UBoxComponent>(TEXT("CollisionBox"));
    check(CollisionBox);

    CollisionBox->AttachTo(RootComponent);
    CollisionBox->SetBoxExtent(FVector(SpawnPoint, myBounds.BoxExtent.Y, myBounds.BoxExtent.Z));
    CollisionBox->SetCollisionProfileName(TEXT("BlockAllDynamic"));

} // <-- Closing if(myMesh.Succeeded())

As you can see, we initialize the UBoxComponent as we always initialize components. We then attach the box to the root component, as we do not wish it to move. We also set the box extent to be the length of the entire swap chain by setting the SpawnPoint value as the X bounds of the collider. We set the collision profile to BlockAllDynamic. This means it will block any dynamic actor, such as our character. Note that we have also closed the scope of the if statement opened earlier.

With the constructor definition finished, we might as well define the accessor methods for SpawnPoint and KillPoint before we move on to the Tick() function:

float AFloor::GetKillPoint()
{
    return KillPoint;
}

float AFloor::GetSpawnPoint()
{
    return SpawnPoint;
}

AFloor::Tick()

Now, it is time to write the function that will move the meshes and ensure that they move back to the start of the chain when they reach KillPoint. Add the following code to the Tick() function found in Floor.cpp:

for (auto Scene : FloorMeshScenes)
{
    Scene->AddLocalOffset(FVector(GetCustomGameMode<ABountyDashGameMode>(GetWorld())->GetInvGameSpeed(), 0.0f, 0.0f));

    if (Scene->GetComponentTransform().GetLocation().X <= KillPoint)
    {
        Scene->SetRelativeLocation(FVector(SpawnPoint, 0.0f, 0.0f));
    }
}

Here we use a C++11 range-based for loop. This means that for each element inside FloorMeshScenes, it will populate the Scene handle of type auto with a pointer to whatever type is contained by FloorMeshScenes; in this case, it is USceneComponent*. For every scene component contained within FloorMeshScenes, we add a local offset each frame. The amount we offset each frame is dependent on the current game speed. We are getting the game speed from the game mode via the template function that we wrote earlier.
As you can see, we have specified the template function to be of the ABountyDashGameMode type, thus we will have access to the BountyDash game mode functionality. We have done this so that the floor will move faster under the player's feet as the speed of the game increases. The next thing we do is check the X value of the scene component's location. If this value is equal to or less than the value stored in KillPoint, we reposition the scene component back to the spawn point. As we attached the meshes to the USceneComponents earlier, the meshes will also translate with the scene components. Lastly, ensure that you have added #include "BountyDashGameMode.h" to the .cpp include list.

Placing the Floor in the level!

We are done making the floor! Compile the code and return to the level editor. We can now place this new floor object in the level! Delete the static mesh that would have replaced our earlier box brushes, and drag and drop the Floor object into the scene. The floor object can be found under the C++ classes folder of the content browser. Select the floor in the level, and ensure that its location is set to (0.0f, 0.0f, -100.0f). This will place the floor just below the player's feet around the origin. Also, ensure that the ATargetPoints that we placed earlier are in the right positions above the lanes. With all this in place, you should be able to press Play and observe the floor moving underneath the player indefinitely. You will see something similar to this:

You will notice that as you move between the lanes by pressing A and D, the player maintains the X position of the target points but nicely travels to the center of each lane.

Creating the obstacles

The next step for this project is to create the obstacles that will come flying at the player. These obstacles are going to be very simple and contain only a few members and functions. These obstacles are used only to serve as a blockade for the player, and all of the collision with the obstacles will be handled by the player itself. Use the class wizard to create a new class named Obstacle, and inherit this object from AActor. Once the class has been generated, modify the class definition found in Obstacle.h so that it appears as follows:

UCLASS(BlueprintType)
class BOUNTYDASH_API AObstacle : public AActor
{
    GENERATED_BODY()

    float KillPoint;

public:
    // Sets default values for this actor's properties
    AObstacle();

    // Called when the game starts or when spawned
    virtual void BeginPlay() override;

    // Called every frame
    virtual void Tick(float DeltaSeconds) override;

    void SetKillPoint(float point);
    float GetKillPoint();

protected:
    UFUNCTION()
    virtual void MyOnActorOverlap(AActor* otherActor);

    UFUNCTION()
    virtual void MyOnActorEndOverlap(AActor* otherActor);

public:
    UPROPERTY(EditAnywhere, BlueprintReadWrite)
    USphereComponent* Collider;

    UPROPERTY(EditAnywhere, BlueprintReadWrite)
    UStaticMeshComponent* Mesh;
};

You will notice that the class has been declared with the BlueprintType specifier! This object is simple enough to justify extension into blueprint, as there is no new learning to be found within this simple object, and we can use blueprint for convenience. For this class, we have added a private member called KillPoint that will be used to determine when AObstacle should destroy itself. We have also added the accessor methods for this private member.
You will notice that we have added the MyOnActorOverlap and MyOnActorEndOverlap functions, which we will bind to the appropriate delegates, so we can provide a custom collision response for this object. The definitions of these functions are not too complicated either. Ensure that you have included #include "BountyDashGameMode.h" in Obstacle.cpp. Then, we can begin filling out our function definitions; the following is the code we will use for the constructor:

AObstacle::AObstacle()
{
    PrimaryActorTick.bCanEverTick = true;

    Collider = CreateDefaultSubobject<USphereComponent>(TEXT("Collider"));
    check(Collider);

    RootComponent = Collider;
    Collider->SetCollisionProfileName("OverlapAllDynamic");

    Mesh = CreateDefaultSubobject<UStaticMeshComponent>(TEXT("Mesh"));
    check(Mesh);
    Mesh->AttachTo(Collider);
    Mesh->SetCollisionResponseToAllChannels(ECR_Ignore);
    KillPoint = -20000.0f;

    OnActorBeginOverlap.AddDynamic(this, &AObstacle::MyOnActorOverlap);
    OnActorEndOverlap.AddDynamic(this, &AObstacle::MyOnActorEndOverlap);
}

The only thing of note within this constructor is that, again, we set the mesh of this object to ignore the collision response on all channels; this means that the mesh will not affect collision in any way. We have also initialized KillPoint with a default value of -20000.0f. Following that, we bind the custom MyOnActorOverlap and MyOnActorEndOverlap functions to the appropriate delegates.

The Tick() function of this object is responsible for translating the obstacle during play. Add the following code to the Tick function of AObstacle:

void AObstacle::Tick(float DeltaTime)
{
    Super::Tick(DeltaTime);
    float gameSpeed = GetCustomGameMode<ABountyDashGameMode>(GetWorld())->GetInvGameSpeed();

    AddActorLocalOffset(FVector(gameSpeed, 0.0f, 0.0f));

    if (GetActorLocation().X < KillPoint)
    {
        Destroy();
    }
}

As you can see, the tick function will add an offset to AObstacle each frame along the X axis via the AddActorLocalOffset function. The value of the offset is determined by the game speed set in the game mode; again, we are using the template function that we created earlier to get the game mode and call GetInvGameSpeed(). AObstacle is also responsible for its own destruction; upon reaching a maximum bound defined by KillPoint, the AObstacle will destroy itself. The last thing we need to add is the function definitions for the overlap functions and the KillPoint accessors:

void AObstacle::MyOnActorOverlap(AActor* otherActor)
{
}

void AObstacle::MyOnActorEndOverlap(AActor* otherActor)
{
}

void AObstacle::SetKillPoint(float point)
{
    KillPoint = point;
}

float AObstacle::GetKillPoint()
{
    return KillPoint;
}

Now, let's abstract this class into blueprint. Compile the code and go back to the game editor. Within the content folder, create a new blueprint object that inherits from the Obstacle class that we just made, and name it RockObstacleBP. Within this blueprint, we need to make some adjustments. Select the Collider component that we created, and expand the Shape section in the Details panel. Change the Sphere Radius property to 100.0f. Next, select the Mesh component and expand the Static Mesh section. From the provided drop-down menu, choose the SM_Rock mesh.
Next, expand the Transform section of the Mesh component's Details panel and match these values:

You should end up with an object that looks similar to this:

Spawning Actors from C++

Despite the obstacles being fairly easy to implement from a C++ standpoint, the complication usually comes from the spawning system that we will be using to create these objects in the game. We will leverage a system similar to the player's movement by basing the spawn locations off of the ATargetPoints that are already in the scene. We can then randomly select a spawn target when we require a new object to spawn. Open the class wizard now, and create a class that inherits from Actor and call it ObstacleSpawner. We inherit from AActor because, even though this object does not have a physical presence in the scene, we still require ObstacleSpawner to tick.

The first issue we are going to encounter is that our current target points give us a good indication of the Y position for our spawns, but the X position is centered around the origin. This is undesirable for the obstacle spawn point, as we would like to spawn these objects a fair distance away from the player, so we can do two things: one, obscure the popping of spawning the objects via fog, and two, present the player with enough obstacle information so that they may dodge them at high speeds. This means we are going to require some information from our floor object; we can use the KillPoint and SpawnPoint members of the floor to determine the spawn location and kill location of the obstacles.

Obstacle Spawner Class definition

This will be another fairly simple object. It will require a BeginPlay function, so we may find the floor and all the target points that we require for spawning. We also require a Tick function so that we may process spawning logic on a per-frame basis. Thankfully, both of these are provided by default by the class wizard. We have created a protected SpawnObstacle() function, so we may group that functionality together. We are also going to require a few UPROPERTY declared members that can be edited from the level editor. We need a list of obstacle types to spawn; we can then randomly select one of the types each time we spawn an obstacle. We also require the spawn targets (though we will be populating these upon beginning play). Finally, we will need a spawn time that we can set for the interval between obstacles spawning. To accommodate all of this, navigate to ObstacleSpawner.h now and modify the class definition to match the following:

UCLASS()
class BOUNTYDASH_API AObstacleSpawner : public AActor
{
    GENERATED_BODY()

public:
    // Sets default values for this actor's properties
    AObstacleSpawner();

    // Called when the game starts or when spawned
    virtual void BeginPlay() override;

    // Called every frame
    virtual void Tick(float DeltaSeconds) override;

protected:
    void SpawnObstacle();

public:
    UPROPERTY(EditAnywhere, BlueprintReadWrite)
    TArray<TSubclassOf<class AObstacle>> ObstaclesToSpawn;

    UPROPERTY()
    TArray<class ATargetPoint*> SpawnTargets;

    UPROPERTY(EditAnywhere, BlueprintReadWrite)
    float SpawnTimer;

    UPROPERTY()
    USceneComponent* Scene;

private:
    float KillPoint;
    float SpawnPoint;
    float TimeSinceLastSpawn;
};

I have again used TArrays for our containers of obstacle objects and spawn targets. As you can see, the obstacle list is of type TArray<TSubclassOf<class AObstacle>>.
This means that the objects in the TArray will be class types that inherit from AObstacle. This is very useful, as not only will we be able to use these array elements for spawn information, but the engine will also filter our search when we are adding object types to this array from the editor. With these class types, we will be able to spawn objects that inherit from AObstacle (including blueprints) when required. We have also included a scene component, so we can arbitrarily place the AObstacleSpawner somewhere in the level, and we have two private members that will hold the kill and spawn points of the objects. The last element is a float timer that will be used to gauge how much time has passed since the last obstacle spawn.

Obstacle Spawner function definitions

Okay, now we can create the body of the AObstacleSpawner object. Before we do so, ensure that the include list in ObstacleSpawner.cpp is as follows:

#include "BountyDash.h"
#include "BountyDashGameMode.h"
#include "Engine/TargetPoint.h"
#include "Floor.h"
#include "Obstacle.h"
#include "ObstacleSpawner.h"

Following this, we have a very simple constructor that establishes the root scene component:

// Sets default values
AObstacleSpawner::AObstacleSpawner()
{
    // Set this actor to call Tick() every frame. You can turn this off to improve performance if you don't need it.
    PrimaryActorTick.bCanEverTick = true;

    Scene = CreateDefaultSubobject<USceneComponent>(TEXT("Root"));
    check(Scene);
    RootComponent = Scene;

    SpawnTimer = 1.5f;
}

Following the constructor, we have BeginPlay(). Inside this function, we are going to do a few things. Firstly, we are simply performing the same in-level object retrieval that we executed in ABountyDashCharacter to get the location of the ATargetPoints. However, this object also requires information from the floor object in the level. We are going to get the floor object the same way we did the ATargetPoints, by utilizing TActorIterators. We will then get the required kill and spawn point information. We will also set TimeSinceLastSpawn to SpawnTimer so that we begin spawning objects instantaneously:

// Called when the game starts or when spawned
void AObstacleSpawner::BeginPlay()
{
    Super::BeginPlay();

    for (TActorIterator<ATargetPoint> TargetIter(GetWorld()); TargetIter; ++TargetIter)
    {
        SpawnTargets.Add(*TargetIter);
    }

    for (TActorIterator<AFloor> FloorIter(GetWorld()); FloorIter; ++FloorIter)
    {
        if (FloorIter->GetWorld() == GetWorld())
        {
            KillPoint = FloorIter->GetKillPoint();
            SpawnPoint = FloorIter->GetSpawnPoint();
        }
    }
    TimeSinceLastSpawn = SpawnTimer;
}

The next function we will look at in detail is Tick(), which is responsible for the bulk of the AObstacleSpawner functionality. Within this function, we need to check whether we require a new object to be spawned, based off of the amount of time that has passed since we last spawned an object. Add the following code to AObstacleSpawner::Tick() underneath Super::Tick():

TimeSinceLastSpawn += DeltaTime;

float trueSpawnTime = SpawnTimer / (float)GetCustomGameMode<ABountyDashGameMode>(GetWorld())->GetGameLevel();

if (TimeSinceLastSpawn > trueSpawnTime)
{
    TimeSinceLastSpawn = 0.0f;
    SpawnObstacle();
}

Here, we are accumulating the delta time in TimeSinceLastSpawn, so we may gauge how much real time has passed since the last obstacle was spawned. We then calculate the trueSpawnTime of the AObstacleSpawner.
This is based off of the base SpawnTimer, which is divided by the current game level retrieved from the game mode via the GetCustomGameMode() template function. This means that as the game level increases and the obstacles begin to move faster, the obstacle spawner will also spawn objects at a faster rate. If the accumulated TimeSinceLastSpawn is greater than the calculated trueSpawnTime, we need to call SpawnObstacle() and reset the TimeSinceLastSpawn timer to 0.0f.

Getting information from components in C++

Now, we need to write the spawn function. This spawn function is going to have to retrieve some information from the components of the object that is being spawned. As we have allowed our AObstacle class to be extended into blueprint, we have also exposed the object to a level of versatility that we must compensate for in the codebase. With the ability to customize the mesh and the bounds of the sphere collider that makes up any given obstacle, we must be sure to spawn the obstacle in the right place regardless of size! To do this, we are going to need to obtain information from the components contained within the spawned AObstacle class. This can be done via GetComponentByClass(). It takes the UClass* of the component you wish to retrieve, and it will return a handle to the component if it has been found. We can then cast this handle to the appropriate type and retrieve the information that we require! Let's begin detailing the spawn function; add the following code to ObstacleSpawner.cpp:

void AObstacleSpawner::SpawnObstacle()
{
    if (SpawnTargets.Num() > 0 && ObstaclesToSpawn.Num() > 0)
    {
        short Spawner = FMath::Rand() % SpawnTargets.Num();
        short Obstical = FMath::Rand() % ObstaclesToSpawn.Num();
        float CapsuleOffset = 0.0f;

Here, we ensure that both of the arrays have been populated with at least one valid member. We then generate the random lookup integers that we will use to access the SpawnTargets and ObstaclesToSpawn arrays. This means that every time we spawn an object, both the lane spawned in and the type of the object will be randomized. We do this by generating a random value with FMath::Rand(), and then we find the remainder of this number divided by the number of elements in the corresponding array. The result will be a random number that lies between zero and the number of objects in either array minus one, which is perfect for our needs. Continue by adding the following code:

        FActorSpawnParameters SpawnInfo;

        FTransform myTrans = SpawnTargets[Spawner]->GetTransform();
        myTrans.SetLocation(FVector(SpawnPoint, myTrans.GetLocation().Y, myTrans.GetLocation().Z));

Here, we are using a struct called FActorSpawnParameters. The default values of this struct are fine for our purposes. We will soon be passing this struct to a function in our world context. After this, we create a transform that we will be providing to the world context as well. The transform of the spawner will suffice, apart from the X component of the location. We need to adjust this so that the X value of the spawn transform matches the spawn point that we retrieved from the floor. We do this by setting the X component of the spawn transform's location to be the SpawnPoint value that we received earlier, and we make sure that the other components of the location vector are the Y and Z components of the current location.

The next thing we must do is actually spawn the object! We are going to utilize a template function called SpawnActor() that can be called from the UWorld* handle returned by GetWorld().
This function will spawn an object of a specified type in the game world at a specified location. The type of the object is determined by passing the UClass* handle that holds the object type we wish to spawn. The transform and spawn parameters of the object are also determined by the corresponding input parameters of SpawnActor(). The template type of the function will dictate the type of object that is spawned and the handle that is returned from the function. In our case, we require AObstacle to be spawned. Add the following code to the SpawnObstacle function:

        AObstacle* newObs = GetWorld()->SpawnActor<AObstacle>(ObstaclesToSpawn[Obstical], myTrans, SpawnInfo);

        if (newObs)
        {
            newObs->SetKillPoint(KillPoint);

As you can see, we are using SpawnActor() with a template type of AObstacle. We use the random lookup integer we generated before to retrieve the class type from the ObstaclesToSpawn array. We also provide the transform and spawn parameters we created earlier to SpawnActor(). If the new AObstacle was created successfully, we save the return of this function into an AObstacle handle that we will use to set the kill point of the obstacle via SetKillPoint().

We must now adjust the height of this object. The object will more than likely spawn in the ground in its current state. We need to get access to the sphere component of the obstacle so that we may get the radius of the sphere and adjust the position of the obstacle so that it sits above the ground. We can use the sphere as a reliable reference, as it is the root component of the obstacle; thus we can move the obstacle entirely out of the ground if we assume the base of the sphere will line up with the base of the mesh. Add the following code to the SpawnObstacle() function:

            USphereComponent* obsSphere = Cast<USphereComponent>(newObs->GetComponentByClass(USphereComponent::StaticClass()));

            if (obsSphere)
            {
                newObs->AddActorLocalOffset(FVector(0.0f, 0.0f, obsSphere->GetUnscaledSphereRadius()));
            }
        } // <-- Closing if(newObs)
    } // <-- Closing if(SpawnTargets.Num() > 0 && ObstaclesToSpawn.Num() > 0)
}

Here, we are getting the sphere component out of the newObs handle that we obtained from SpawnActor() via GetComponentByClass(), which was mentioned previously. We pass the class type of USphereComponent to the function via the static function called StaticClass(). This will return a valid handle if newObs does indeed contain a USphereComponent (which we know it does). We then cast the result of this function to USphereComponent*, and save it in the obsSphere handle. We ensure that this handle is valid; if it is, we can then offset the actor that we just spawned on the Z axis by the unscaled radius of the sphere component. This will result in all the spawned obstacles being in line with the top of the floor!

Ensuring the obstacle spawner works

Okay, now is the time to bring the obstacle spawner into the scene. Be sure to compile the code, then navigate to the C++ classes folder of the content browser. From here, drag and drop ObstacleSpawner into the scene. Select the new ObstacleSpawner via the World Outliner, and address the Details panel. You will see the exposed members under the ObstacleSpawner section, like so:

Now, to add the RockObstacleBP that we made earlier to the ObstaclesToSpawn array, press the small white plus next to the property in the Details panel; this will add an element to the TArray that you will then be able to customize. Select the drop-down menu that currently says None. Within this menu, search for RockObstacleBP and select it.
If you wish to create and add more obstacle types to this array, feel free to do so. We do not need to add any members to the Spawn Targets property, as this will happen automatically. Now, press Play and behold a legion of moving rocks.

Summary

This article gave an overview of various development tricks associated with Unreal Engine 4.

Adding Media to Our Site

Packt
21 Jun 2016
19 min read
In this article by Neeraj Kumar et al., authors of the book Drupal 8 Development Beginner's Guide, Second Edition, we learn that a text-only site is not going to hold the interest of visitors; a site needs some pizzazz and some spice! One way to add some pizzazz to your site is by adding some multimedia content, such as images, video, audio, and so on. But we don't just want to add a few images here and there; in fact, we want an immersive and compelling multimedia experience that is easy to manage, configure, and extend. The File entity (https://drupal.org/project/file_entity) module for Drupal 8 will enable us to manage files very easily.

In this article, we will discover how to integrate the File entity module to add images to our d8dev site, and will explore compelling ways to present images to users. This will include taking a look at the integration of a lightbox-type UI element for displaying the File-entity-module-managed images, and learning how we can create custom image styles through the UI and code. The following topics will be covered in this article:

The File entity module for Drupal 8
Adding a Recipe image field to your content types
Code example: image styles for Drupal 8
Displaying recipe images in a lightbox popup
Working with Drupal issue queues

(For more resources related to this topic, see here.)

Introduction to the File entity module

As per the module page at https://www.drupal.org/project/file_entity:

File entity provides interfaces for managing files. It also extends the core file entity, allowing files to be fieldable, grouped into types, viewed (using display modes) and formatted using field formatters. File entity integrates with a number of modules, exposing files to Views, Entity API, Token and more.

In our case, we need this module to easily edit image properties such as the Title text and Alt text. These properties will then be used as captions when the images are displayed in the Colorbox popup.

Working with dev versions of modules

There are times when you come across a module that introduces some major new features and is fairly stable, but not quite ready for use on a live/production website, and is therefore available only as a dev version. This is a perfect opportunity to provide a valuable contribution to the Drupal community. Just by installing and using a dev version of a module (in your local development environment, of course), you are providing valuable testing for the module maintainers. Of course, you should enter an issue in the project's issue queue if you discover any bugs or would like to request any additional features. Also, using a dev version of a module presents you with the opportunity to take on some custom Drupal development. However, it is important to remember that a module is released as a dev version for a reason, and it is most likely not stable enough to be deployed on a public-facing site.

Our use of the File entity module in this article is a good example of working with the dev version of a module. One thing to note: Drush will download both official and dev module releases. But at this point in time, there is no official port of the File entity module for Drupal 8, so we will use the unofficial one, which lives on GitHub (https://github.com/drupal-media/file_entity). In the next step, we will be downloading the dev release from GitHub.
Time for action – installing a dev version of the File entity module In Drupal, we use Drush to download and enable any module/theme, but there is no official port yet for the File entity module in Drupal, so we can use the unofficial one, which lives on GitHub at https://github.com/drupal-media/file_entity: Open the Terminal (Mac OS X) or Command Prompt (Windows) application, and go to the root directory of your d8dev site. Go inside the modules folder and download the File entity module from GitHub. We use the git command to download this module: $ git clone https://github.com/drupal-media/file_entity. Another way is to download a .zip file from https://github.com/drupal-media/file_entity and extract it in the modules folder: Next, on the Extend page (admin/modules), enable the File entity module. What just happened? We enabled the File entity module, and learned how to download and install a module from GitHub. A new recipe for our site In this article, we are going to create a new recipe: Thai Basil Chicken. If you would like to have more real content to use as an example, feel free to try the recipe out! Name: Thai Basil Chicken Description: A spicy, flavorful version of one of my favorite Thai dishes RecipeYield: Four servings PrepTime: 25 minutes CookTime: 20 minutes Ingredients: One pound boneless chicken breasts Two tablespoons of olive oil Four garlic cloves, minced Three tablespoons of soy sauce Two tablespoons of fish sauce Two large sweet onions, sliced Five cloves of garlic One yellow bell pepper One green bell pepper Four to eight Thai peppers (depending on the level of hotness you want) One-third cup of dark brown sugar dissolved in one cup of hot water One cup of fresh basil leaves Two cups of Jasmine rice Instructions: Prepare the Jasmine rice according to the directions. Heat the olive oil in a large frying pan over medium heat for two minutes. Add the chicken to the pan and then pour on soy sauce. Cook the chicken until there is no visible pinkness—approximately 8 to 10 minutes. Reduce heat to medium low. Add the garlic and fish sauce, and simmer for 3 minutes. Next, add the Thai chilies, onion, and bell pepper and stir to combine. Simmer for 2 minutes. Add the brown sugar and water mixture. Stir to mix, and then cover. Simmer for 5 minutes. Uncover, add basil, and stir to combine. Serve over rice. Time for action – adding a Recipe images field to our Recipe content type We will use the manage fields administrative page to add a Media field to our d8dev Recipe content type: Open up the d8dev site in your favorite browser, click on the Structure link in the Admin toolbar, and then click on the Content types link. Next, on the Content types administrative page, click on the Manage fields link for your Recipe content type: Now, on the Manage fields administrative page, click on the Add field link. On the next screen, select Image from the Add a new field dropdown and Label as Recipe images. Click on the Save field settings button. Next, on the Field settings page, select Unlimited as the allowed number of values. Click on the Save field settings button. On the next screen, leave all settings as they are and click on the Save settings button. Next, on the Manage form display page, select widget Editable file for the Recipe images field and click on the Save button. Now, on the Manage display page, for the Recipe images field, select Hidden as the label. Click on the settings icon. Then select Medium (220*220) as the image style, and click on the Update button. 
At the bottom, click on the Save button: Let's add some Recipe images to a recipe. Click on the Content link in the menu bar, and then click on Add content and Recipe. On the next screen, fill in the title as Thai Basil Chicken and other fields respectively as mentioned in the preceding recipe details. Now, scroll down to the new Recipe images field that you have added. Click on the Add a new file button or drag and drop images that you want to upload. Then click on the Save and Publish button: Reload your Thai Basil Chicken recipe page, and you should see something similar to the following: All the images are stacked on top of each other. So, we will add the following CSS just under the style for field--name-field-recipe-images and field--type-recipe-images in the /modules/d8dev/styles/d8dev.css file, to lay out the Recipe images in more of a grid: .node .field--type-recipe-images { float: none !important; } .field--name-field-recipe-images .field__item { display: inline-flex; padding: 6px; } Now we will load this d8dev.css file to apply this grid style. In Drupal 8, loading a CSS file has a process: Save the CSS to a file. Define a library, which can contain the CSS file. Attach the library to a render array in a hook. So, we have already saved a CSS file called d8dev.css under the styles folder; now we will define a library. To define one or more (asset) libraries, add a *.libraries.yml file to your theme or module folder. Our module is named d8dev, so the filename should be d8dev.libraries.yml. Each library in the file is an entry detailing CSS, like this: d8dev: version: 1.x css: theme: styles/d8dev.css: {} Now, we define the hook_page_attachments() function to load the CSS file. Add the following code inside the d8dev.module file. Use this hook when you want to conditionally add attachments to a page: /** * Implements hook_page_attachments(). */ function d8dev_page_attachments(array &$attachments) { $attachments['#attached']['library'][] = 'd8dev/d8dev'; } Now, we will need to clear the cache for our d8dev site by going to Configuration, clicking on the Performance link, and then clicking on the Clear all caches button. Reload your Thai Basil Chicken recipe page, and you should see something similar to the following: What just happened? We added and configured a media-based field for our Recipe content type. We updated the d8dev module with custom CSS code to lay out the Recipe images in more of a grid format. We also looked at how to attach a CSS file through a module. Creating a custom image style Before we configure the Colorbox feature, we are going to create a custom image style to use later in the Colorbox content preview settings. Image styles for Drupal 8 are part of the core Image module. The core image module provides three default image styles—thumbnail, medium, and large—as seen in the following Image style configuration page: Now, we are going to add a fourth, custom image style, an image style that will resize our images somewhere between the 100 x 75 thumbnail style and the 220 x 165 medium style. We will walk through the process of creating an image style through the Image style administrative page, and also walk through the process of programmatically creating an image style. 
Time for action – adding a custom image style through the image style administrative page First, we will use the Image style administrative page (admin/config/media/image-styles) to create a custom image style: Open the d8dev site in your favorite browser, click on the Configuration link in the Admin toolbar, and click on the Image styles link under the Media section. Once the Image styles administrative page has loaded, click on the Add style link. Next, enter small for the Image style name of your custom image style, and click on the Create new style button: Now, we will add the one and only effect for our custom image style by selecting Scale from the EFFECT options and then clicking on the Add button. On the Add Scale effect page, enter 160 for the width and 120 for the height. Leave the Allow Upscaling checkbox unchecked, and click on the Add effect button: Finally, just click on the Update style button on the Edit small style administrative page, and we are done. We now have a new custom small image style that we will be able to use to resize images for our site: What just happened? We learned how easy it is to add a custom image style with the administrative UI. Now, we are going to see how to add a custom image style by writing some code. The advantage of having code-based custom image styles is that they allow us to utilize a source code repository, such as Git, to manage and deploy our custom image styles between different environments. For example, it would allow us to use Git to promote image styles from our development environment to a live production website. Otherwise, the manual configuration that we just did would have to be repeated for every environment. Time for action – creating a programmatic custom image style Now, we will see how we can add a custom image style with code: The first thing we need to do is delete the small image style that we just created. So, open your d8dev site in your favorite browser, click on the Configuration link in the Admin toolbar, and then click on the Image styles link under the Media section. Once the Image styles administrative page has loaded, click on the delete link for the small image style that we just added. Next, on the Optionally select a style before deleting small page, leave the default value for the Replacement style select list as No replacement, just delete, and click on the Delete button: In Drupal 8, image styles have been converted from an array to an object that extends ConfigEntity. All image styles provided by modules need to be defined as YAML configuration files in the config/install folder of each module. Suppose our module is located at modules/d8dev. Create a file called modules/d8dev/config/install/image.style.small.yml with the following content: uuid: b97a0bd7-4833-4d4a-ae05-5d4da0503041 langcode: en status: true dependencies: { } name: small label: small effects: c76016aa-3c8b-495a-9e31-4923f1e4be54: uuid: c76016aa-3c8b-495a-9e31-4923f1e4be54 id: image_scale weight: 1 data: width: 160 height: 120 upscale: false We need to use a UUID generator to assign unique IDs to image style effects. Do not copy/paste UUIDs from other pieces of code or from other image styles! The name of our custom style is small, and it is provided as both the name and the label. For each effect that we want to add to our image style, we will specify the effect we want to use as the name key, and then pass in values as the settings for the effect. 
In the case of the image_scale effect that we are using here, we pass in the width, height, and upscale settings. Finally, the value for the weight key allows us to specify the order the effects should be processed in, and although it is not very useful when there is only one effect, it becomes important when there are multiple effects. Now, we will need to uninstall and reinstall our d8dev module by going to the Extend page. On the next screen click on the Uninstall tab, check the d8dev checkbox and click on the Uninstall button. Now, click on the List tab, check d8dev, and click on the Install button. Then, go back to the Image styles administrative page and you will see our programmatically created small image style. What just happened? We created a custom image style with some custom code. We then configured our Recipe content type to use our custom image style for images added to the Recipe images field. Integrating the Colorbox and File entity modules The File entity module provides interfaces for managing files. For images, we will be able to edit Title text, Alt text, and Filenames easily. However, the images are taking up quite a bit of room. Let's create a pop-up lightbox gallery and show images in a popup. When someone clicks on an image, a lightbox will pop up and allow the user to cycle through larger versions of all associated images. Time for action – installing the Colorbox module Before we can display Recipe images in a Colorbox, we need to download and enable the module: Open the Mac OS X Terminal or Windows Command Prompt, and change to the d8dev directory. Next, use Drush to download and enable the current dev release of the Colorbox module (http://drupal.org/project/colorbox): $ drush dl colorbox-8.x-1.x-dev Project colorbox (8.x-1.x-dev) downloaded to /var/www/html/d8dev/modules/colorbox. [success] $ drush en colorbox The following extensions will be enabled: colorbox Do you really want to continue? (y/n): y colorbox was enabled successfully. [ok] The Colorbox module depends on the Colorbox jQuery plugin available at https://github.com/jackmoore/colorbox. The Colorbox module includes a Drush task that will download the required jQuery plugin into the /libraries directory: $ drush colorbox-plugin Colorbox plugin has been installed in libraries [success] Next, we will look into the Colorbox display formatter. Click on the Structure link in the Admin toolbar, then click on the Content types link, and finally click on the manage display link for your Recipe content type under the Operations dropdown: Next, click on the FORMAT select list for the Recipe images field, and you will see an option for Colorbox. Select Colorbox and you will see the settings change. Then, click on the settings icon: Now, you will see the settings for Colorbox. Select Content image style as small and Content image style for first image as small in the dropdown, and use the default settings for other options. Click on the Update button and next on the Save button at the bottom: Reload our Thai Basil Chicken recipe page, and you should see something similar to the following (with the new image style, small): Now, click on any image and then you will see the image loaded in the Colorbox popup: We have learned more about images for Colorbox, but Colorbox also supports videos. Another way to add some spice to our site is by adding videos. So there are several modules available to work with Colorbox for videos. 
The Video Embed Field module creates a simple field type that allows you to embed videos from YouTube and Vimeo and show their thumbnail previews simply by entering the video's URL. So you can try this module to add some pizzazz to your site! What just happened? We installed the Colorbox module and enabled it for the Recipe images field on our custom Recipe content type. Now, we can easily add images to our d8dev content with the Colorbox pop-up feature. Working with Drupal issue queues Drupal has its own issue queue for working with a team of developers around the world. If you need help with a specific project, whether it is core, a module, or a theme, you should go to the issue queue, where the maintainers, users, and followers of the module/theme communicate. The issue page provides a filter option, where you can search for specific issues based on Project, Assigned, Submitted by, Followers, Status, Priority, Category, and so on. We can find issues at https://www.drupal.org/project/issues/colorbox. Here, replace colorbox with the specific module name. For more information, see https://www.drupal.org/issue-queue. In our case, we have one issue with the Colorbox module. Captions are working for the Automatic and Content title properties, but are not working for the Alt text and Title text properties. To check this issue, go to Structure | Content types and click on Manage display. On the next screen, click on the settings icon for the Recipe images field. Now select the Caption option as Title text or Alt text and click on the Update button. Finally, click on the Save button. Reload the Thai Basil Chicken recipe page, and click on any image. Then it opens in a popup, but we cannot see any captions. Make sure you have the Title text and Alt text properties updated for the Recipe images field for the Thai Basil Chicken recipe. Time for action – creating an issue for the Colorbox module Now, before we go and try to figure out how to fix this functionality for the Colorbox module, let's create an issue: On https://www.drupal.org/project/issues/colorbox, click on the Create a new issue link: On the next screen we will see a form. We will fill in all the required fields: Title, Category as Bug report, Version as 8.x-1.x-dev, Component as Code, and the Issue summary field. Once I submitted my form, an issue was created at https://www.drupal.org/node/2645160. You should see an issue on Drupal (https://www.drupal.org/) like this: Next, the maintainers of the Colorbox module will look into this issue and reply accordingly. Actually, @frjo replied saying "I have never used that module but if someone who does sends in a patch I will take a look at it." He is a contributor to this module, so we will wait for some time and will see if someone can fix this issue by giving a patch or replying with useful comments. If someone provides a patch, then we have to apply it to the Colorbox module. Information on applying patches is available on Drupal at https://www.drupal.org/patch/apply. What just happened? We explored the Colorbox module's issue queue and created an issue in it. We also looked at what the required fields are and how to fill them in to create an issue for a Drupal module. Summary In this article, we looked at a way to use our d8dev site with multimedia, creating image styles using some custom code, and learned some new ways of interacting with the Drupal developer community. We also worked with the Colorbox module to add images to our d8dev content with the Colorbox pop-up feature. 
Lastly, we looked at how to work with custom CSS files from a custom module. Resources for Article: Further resources on this subject: Installing Drupal 8 [article] Drupal 7 Social Networking: Managing Users and Profiles [article] Drupal 8 and Configuration Management [article]

Communication and Network Security

Packt
21 Jun 2016
7 min read
In this article by M. L. Srinivasan, the author of the book CISSP in 21 Days, Second Edition, we cover the communication and network security domain, which deals with the security of voice and data communications through Local area, Wide area, and Remote access networking. Candidates are expected to have knowledge in the areas of secure communications; securing networks; threats, vulnerabilities, attacks, and countermeasures to communication networks; and protocols that are used in remote access. (For more resources related to this topic, see here.) Observe the following diagram. This represents the seven layers of the OSI model. This article covers protocols and security in the fourth layer, which is the Transport layer: Transport layer protocols and security The Transport layer does two things. One is to pack the data given out by applications into a format that is suitable for transport over the network, and the other is to unpack the data received from the network into a format suitable for applications. In this layer, some of the important protocols are Transmission Control Protocol (TCP), User Datagram Protocol (UDP), Stream Control Transmission Protocol (SCTP), Datagram Congestion Control Protocol (DCCP), and Fiber Channel Protocol (FCP). The process of packaging the data packets received from the applications is called encapsulation, and the output of such a process is called a datagram. Similarly, the process of unpacking the datagram received from the network is called decapsulation. When moving from the seventh layer down to the fourth one, the fourth layer's header is placed on the data and it becomes a datagram. When the datagram is encapsulated with the third layer's header, it becomes a packet; the encapsulated packet becomes a frame, which is put on the wire as bits. The following section describes some of the important protocols in this layer along with security concerns and countermeasures. Transmission Control Protocol (TCP) It is a core Internet protocol that provides reliable delivery mechanisms over the Internet. TCP is a connection-oriented protocol. A protocol that guarantees the delivery of datagrams (packets) to the destination application by way of a suitable mechanism (for example, a three-way handshake SYN, SYN-ACK, and ACK in TCP) is called a connection-oriented protocol. The reliability of the datagram delivery of such a protocol is high due to the acknowledgment by the receiver. This protocol has two functions. The primary function of TCP is the transmission of datagrams between applications, and the secondary one is to provide the controls that are necessary for ensuring reliable transmissions. Applications where delivery needs to be assured, such as e-mail, the World Wide Web (WWW), file transfer, and so on, use TCP for transmission. Threats, vulnerabilities, attacks, and countermeasures One of the common threats to TCP is a service disruption. A common vulnerability is half-open connections exhausting the server resources. Denial of Service attacks such as TCP SYN attacks, as well as connection hijacking attacks such as IP spoofing, are possible. A half-open connection is a vulnerability in the TCP implementation. TCP uses a three-way handshake to establish or terminate connections. Refer to the following diagram: In a three-way handshake, the client (workstation) first sends a request to the server (for example www.SomeWebsite.com). This is called a SYN request. 
The server acknowledges the request by sending a SYN-ACK, and in the process, it creates a buffer for this connection. The client does a final acknowledgement with an ACK. TCP requires this setup, since the protocol needs to ensure the reliability of the packet delivery. If the client does not send the final ACK, then the connection is called half open. Since the server has created a buffer for that connection, a certain amount of memory or server resource is consumed. If thousands of such half-open connections are created maliciously, then the server resources may be completely consumed, resulting in a Denial-of-Service to legitimate requests. TCP SYN attacks are technically establishing thousands of half-open connections to consume the server resources. There are two actions that an attacker might do. One is that the attacker or malicious software will send thousands of SYN requests to the server and withhold the ACK. This is called SYN flooding. Depending on the capacity of the network bandwidth and the server resources, in a span of time, all the resources will be consumed, resulting in a Denial-of-Service. If the source IP was blocked by some means, then the attacker or the malicious software would try to spoof the source IP addresses to continue the attack. This is called SYN spoofing. SYN attacks such as SYN flooding and SYN spoofing can be controlled using SYN cookies with cryptographic hash functions. In this method, the server does not create the connection at the SYN-ACK stage. The server creates a cookie with the computed hash of the source IP address, source port, destination IP, destination port, and some random values based on the algorithm, and sends it in the SYN-ACK. When the server receives an ACK, it checks the details and creates the connection. A cookie is a piece of information, usually in the form of a text file, sent by the server to a client. Cookies are generally stored by the browser on the client computer's disk, and they are used for purposes such as authentication, session tracking, and management. User Datagram Protocol (UDP) UDP is a connectionless protocol and is similar to TCP. However, UDP does not provide the delivery guarantee of data packets. A protocol that does not guarantee the delivery of datagrams (packets) to the destination is called a connectionless protocol. In other words, the final acknowledgment is not mandatory in UDP. UDP uses one-way communication. The delivery speed of datagrams over UDP is high. UDP is predominantly used where the loss of intermittent packets is acceptable, such as video or audio streaming. Threats, vulnerabilities, attacks, and countermeasures Service disruptions are common threats, and validation weaknesses facilitate such threats. UDP flood attacks cause service disruptions, and controlling UDP packet size acts as a countermeasure to such attacks. Internet Control Message Protocol (ICMP) ICMP is used to discover service availability in network devices, servers, and so on. ICMP expects response messages from devices or systems to confirm the service availability. Threats, vulnerabilities, attacks, and countermeasures Service disruptions are common threats. Validation weaknesses facilitate such threats. ICMP flood attacks, such as the ping of death, cause service disruptions, and controlling ICMP packet size acts as a countermeasure to such attacks. Pinging is a process of sending the Internet Control Message Protocol (ICMP) ECHO_REQUEST message to servers or hosts to check whether they are up and running. 
In this process, the server or host on the network responds to a ping request, and such a response is called an echo. A ping of death refers to sending malformed or oversized ICMP packets to the server to crash the system. Other protocols in the Transport layer Stream Control Transmission Protocol (SCTP): This is a connection-oriented protocol similar to TCP, but it provides facilities such as multi-streaming and multi-homing for better performance and redundancy. It is used in UNIX-like operating systems. Datagram Congestion Control Protocol (DCCP): As the name implies, this is a Transport layer protocol that is used for congestion control. Applications here include Internet telephony and video/audio streaming over the network. Fiber Channel Protocol (FCP): This protocol is used in high-speed networking. One of the prominent applications here is Storage Area Network (SAN). Storage Area Network (SAN) is a network architecture used to attach remote storage devices, such as tape drives and disk arrays, to the local server. This facilitates using storage devices as if they are local devices. Summary This article covers protocols and security in the Transport layer, which is the fourth layer. Resources for Article: Further resources on this subject: The GNS3 orchestra [article] CISSP: Vulnerability and Penetration Testing for Access Control [article] CISSP: Security Measures for Access Control [article]

Creating Multitenant Applications in Azure

Packt
21 Jun 2016
18 min read
This article, written by Roberto Freato and Marco Parenzan, is from the book Mastering Cloud Development using Microsoft Azure by Packt Publishing, and it teaches us how to create multitenant applications in Azure. This book guides you through the many efficient ways of mastering cloud services and using Microsoft Azure and its services to their maximum capacity. (For more resources related to this topic, see here.) A tenant is a private space for a user or a group of users in an application. A typical way to identify a tenant is by its domain name. If multiple users share a domain name, we say that these users live inside the same tenant. If a group of users use a different reserved domain name, they live in a reserved tenant. From this, we can infer that different names are used to identify different tenants. Different domain names can imply different app instances, but we cannot say the same about deployed resources. Multitenancy is one of the founding principles of Cloud Computing. Developers need to reach economies of scale, which allow every cloud user to scale as needed without paying for overprovisioned resources or suffering from underprovisioned resources. To do this, cloud infrastructure needs to be oversized for a single user and sized for a pool of potential users that share the same group of resources during a certain period of time. Multitenancy is a pattern. Legacy on-premises applications usually tend to be single-tenant apps, shared between users, because of the lack of specific DevOps tasks. Provisioning an app for every user can be a costly operation. Cloud environments invite reserving a single tenant for each user (or group of users) to enforce better security policies and to customize tenants for specific users because all DevOps tasks can be automated via management APIs. The cloud invites reserving resource instances for a tenant and deploying a group of tenants on the same resources. In general, this is a new way of handling app deployment. We will now take a look at how to develop an app in this way. Scenario CloudMakers.xyz, a cloud-based development company, decided to develop a personal accountant web application—MyAccountant. Professionals or small companies can register themselves on this app as a single customer and record all of their invoices on it. A single customer represents the tenant; different companies use different tenants. Every tenant needs its own private data to enforce data security, so we will reserve a dedicated database for a single tenant. Access to a single database is not an intensive task because invoice registration will generally occur once daily. Every tenant will have its own domain name to enforce company identity. A new tenant can be created from the company portal application, where new customers register themselves, specifying the tenant name. For sample purposes, and without the objective of creating production-quality styling, we use the default ASP.NET MVC templates to style and build up the apps, and focus on tenant topics. Creating the tenant app A tenant app is an invoice recording application. To brand the tenant, we record the tenant name in the app settings inside the web.config file: <add key="TenantName" value="{put_your_tenant_name}" /> To keep this simple, we "brand" the application by displaying the stored tenant name in the main layout file where the application name is displayed. The application content is represented by an Invoices page where we record data with a CRUD process. 
The entry for the Invoices page is in the Navigation bar: <ul class="nav navbar-nav"> <li>@Html.ActionLink("Home", "Index", "Home")</li> <li>@Html.ActionLink("Invoices", "Index", "Invoices")</li> <!-- other code omitted --> First, we need to define a model for the application in the Models folder. As we need to store data in an Azure SQL database, we can use Entity Framework to create the model from code (Code First). First, we use the following code: public class InvoicesModel : DbContext { public InvoicesModel() : base("name=InvoicesModel") { } public virtual DbSet<Invoice> Invoices { get; set; } } As we can see, data will be accessed by a SQL database that is referenced by a connectionString in the web.config file: <add name="InvoicesModel" connectionString="data source=(LocalDb)\MSSQLLocalDB;initial catalog=Tenant.Web.Models.InvoicesModel;integrated security=True;MultipleActiveResultSets=True; App=EntityFramework" providerName="System.Data.SqlClient" /> This model class is just for demo purposes: public class Invoice { public int InvoiceId { get; set; } public int Number { get; set; } public DateTime Date { get; set; } public string Customer { get; set; } public decimal Amount { get; set; } public DateTime DueDate { get; set; } } After this, we compile the project to check that we have not made any mistakes. We can now scaffold this model into an MVC controller so that we can have a simple but working app skeleton. Creating the portal app We now need to create the portal app starting from the MVC default template. Its registration workflow is useful as the basis for our tenant registration. In particular, we utilize user registration as the tenant registration. The main information we acquire is the tenant name, which triggers tenant deployment. We need to make two changes to the UI. First, in the RegisterViewModel defined under the Models folder, we add a TenantName property to the AccountViewModels.cs file: public class RegisterViewModel { [Required] [Display(Name = "Tenant Name")] public string TenantName { get; set; } [Required] [EmailAddress] [Display(Name = "Email")] public string Email { get; set; } // other code omitted } In the Register.cshtml view page under the Views\Account folder, we add an input box: @using (Html.BeginForm("Register", "Account", FormMethod.Post, new { @class = "form-horizontal", role = "form" })) { @Html.AntiForgeryToken() <h4>Create a new account.</h4> <hr /> @Html.ValidationSummary("", new { @class = "text-danger" }) <div class="form-group"> @Html.LabelFor(m => m.TenantName, new { @class = "col-md-2 control-label" }) <div class="col-md-10"> @Html.TextBoxFor(m => m.TenantName, new { @class = "form-control" }) </div> </div> <div class="form-group"> @Html.LabelFor(m => m.Email, new { @class = "col-md-2 control-label" }) <div class="col-md-10"> @Html.TextBoxFor(m => m.Email, new { @class = "form-control" }) </div> </div> <!-- other code omitted --> } The portal application is also a great place for tenant owners to manage their own tenants, configuring them or handling subscription-related tasks with the supplier company. Deploying the portal application Before tenant deployment, we need to deploy the portal itself. MyAccountant is a complex solution made up of multiple Azure services, which need to be deployed together. First, we need to create an Azure Resource Group to collect all the services: As we discussed earlier, all data from different tenants, including the portal itself, needs to be contained inside distinct Azure SQL databases. 
Every user will have their own DB as a personal service, which they don't use frequently. It can be a waste of money assigning a reserved quantity of Database Transaction Units (DTUs) to a single database. We can instead invest in a pool of DTUs that is shared among all SQL database instances. We begin by creating an SQL Server service from the portal: We need to create a pool of DTUs, which are shared among databases, and configure the pricing tier, which defines the maximum resource allocation per DB: The first database that we need to manually deploy is the portal database, where users will register as tenants. From the MyAccountantPool blade, we can create a new database that will be immediately associated to the pool: From the database blade, we read the connection string: We use this connection string to configure the portal app in web.config: <connectionStrings> <add name="DefaultConnection" connectionString="Server=tcp:{portal_db}.database.windows.net,1433;Initial Catalog=Portal;Persist Security Info=False;User ID={your_username};Password={your_password}; Pooling=False;MultipleActiveResultSets=False;Encrypt=True; TrustServerCertificate=False;Connection Timeout=30;" providerName="System.Data.SqlClient" /> </connectionStrings> We need to create a shared resource for the Web. In this case, we need to create an App Service Plan where we'll host the portal and tenant apps. The initial size is not a problem because we can decide to scale up or scale out the solution at any time (in this case, only when the application is able to scale out—we don't handle this scenario here). Then, we need to create the portal web app that will be associated with the service plan that we just created: The portal can be deployed from Visual Studio to the Azure subscription by right-clicking on the project root in Solution Explorer and selecting Microsoft Azure Web App from Publish. After deployment, the portal is up and running: Deploy the tenant app After tenant registration from the portal, we need to deploy the tenant itself, which is made up of the following: The app itself, which is the artifact that has to be deployed A web app that runs the app, hosted on the already defined web app plan The Azure SQL database that contains data inside the elastic pool The connection string that connects the database to the web app in the web.config file It's a complex activity because it involves many different resources and different kinds of tasks, from deployment to configuration. For this purpose, we have the Azure Resource Group project in Visual Studio, where we can configure web app deployment and configuration via Azure Resource Manager templates. This project will be called Tenant.Deploy, and we choose a blank template to do this. In the azuredeploy.json file, we can type a template such as https://github.com/marcoparenzan/CreateMultitenantAppsInAzure/blob/master/Tenant.Deploy/Templates/azuredeploy.json. This template is quite complex. Remember that in the SQL connection string, the username and password should be provided inside the template. We need to reference the Tenant.Web project from the deployment project because we need to deploy tenant artifacts (the project bits). To support deployment, we need to create an Azure Storage Account back in the Azure portal: To understand how it works, we can manually run a deployment directly from Visual Studio by right-clicking on the deployment project in Solution Explorer and selecting Deploy. 
When we deploy a "sample" tenant, the first dialog will appear. You can connect to the Azure subscription, selecting an existing resource group or creating a new one, and the template that describes the deployment composition. The template requires the following parameters from the Edit Parameters window: The tenant name. The artifact location and SAS token, which are automatically added, having selected the Azure Storage account in the previous dialog. Now, via the included Deploy-AzureResourceGroup.ps1 PowerShell file, Azure resources are deployed. The artifact is copied with the AzCopy.exe command to the Azure storage in the Tenant.Web container as a package.zip file, and the resource manager starts allocating resources. We can see that the tenant is deployed in the following screenshot: Automating the tenant deployment process Now, in order to complete our solution, we need to invoke this deployment process from the portal application during the registration process, in the ASP.NET MVC controllers. For the purpose of this article, we will just invoke the execution without defining a production-quality deployment process. We can use the following checklist before proceeding: We already have an Azure Resource Manager template that deploys the tenant app customized for the user. Deployment is made with a PowerShell script in the Visual Studio deployment project. A newly registered user for our application does not have an Azure account; we, as the service publisher, need to offer a dedicated Azure account with our credentials to deploy the new tenants. Azure offers many different ways to interact with an Azure subscription: The classic portal (https://manage.windowsazure.com) The new portal (https://portal.azure.com) The resource portal (https://resources.azure.com) The Azure REST API (https://msdn.microsoft.com/en-us/library/azure/mt420159.aspx) The Azure .NET SDK (https://github.com/Azure/azure-sdk-for-net) and other platforms The Azure CLI, an open source CLI (https://github.com/Azure/azure-xplat-cli) PowerShell (https://github.com/Azure/azure-powershell) For our needs, this means integrating one of these options into our application. We can make these considerations: We need to reuse the same ARM template that we defined. We can reuse our PowerShell experience, but we can also use our experience as .NET, REST, or other platform developers. Authentication is the real discriminator in our solution: the user is not an Azure subscription user and we don't want to make a constraint on this. Interacting with the Azure REST API, which is the API on which every other solution depends, requires that all invocations be authenticated against the Azure Active Directory of the subscription tenant. We already mentioned that the user is not a subscription-authenticated user. Therefore, we need unattended authentication to our Azure subscription using a dedicated user for this purpose, encapsulated into a component that is executed by the ASP.NET MVC application in a secure manner to make the tenant deployment. The only environment that offers an out-of-the-box solution for our needs (so that we need to write less code) is the Azure Automation Service. Before proceeding, we create a dedicated user for this purpose. This way, for security reasons, we can disable this specific user at any time. You should take note of two things: Never use the credentials that you used to register your Azure subscription in a production environment! For automation implementation, you need an Azure AD tenant user, so you cannot use Microsoft accounts (Live or Hotmail). 
To create the user, we need to go to the classic portal, as Azure Active Directory has no equivalent management UI in the new portal. We need to select the tenant directory, that is, the one in the new portal that is visible in the upper right corner. From the classic portal, go to Azure Active Directory and select the tenant. Click on Add User and type in a new username: Then, go to Administrator Management in the Settings tab of the portal because we need to define the user as a co-administrator in the subscription that we need to use for deployment. Now, with the temporary password, we need to log in manually to https://portal.azure.com/ (open the browser in private mode) with these credentials because we need to change the password, as it is generated as "expired". We are now ready to proceed. Back in the new portal, we select a new Azure Automation account: The first thing that we need to do inside the account is create a credential asset to store the newly-created AAD credentials and use them inside PowerShell scripts to log on to Azure: We can now create a runbook, which is an automation task that can be expressed in different ways, such as Graphical or PowerShell. We choose the second one: As we can edit it directly from the portal, we can write a PowerShell script for our purposes. This is an adaptation of the one that we used in the deployment project inside Visual Studio. The difference is that it is runnable inside an Azure Automation runbook, and it uses the artifacts that are already deployed in the Azure Storage account that we created before. Before proceeding, we need two IDs from our subscription: The subscription ID The tenant ID These two parameters can be discovered with PowerShell because we can perform Login-AzureRmAccount. Run it through the command line and copy them from the output: The following code is not production quality (it needs some optimization) but is fine for demo purposes: param ( $WebhookData, $TenantName ) # If runbook was called from Webhook, WebhookData will not be null. 
if ($WebhookData -ne $null) { $Body = ConvertFrom-Json -InputObject $WebhookData.RequestBody $TenantName = $Body.TenantName } # Authenticate to Azure resources retrieving the credential asset $Credentials = Get-AutomationPSCredential -Name "myaccountant" $subscriptionId = '{your subscriptionId}' $tenantId = '{your tenantId}' Login-AzureRmAccount -Credential $Credentials -SubscriptionId $subscriptionId -TenantId $tenantId $artifactsLocation = 'https://myaccountant.blob.core.windows.net/myaccountant-stageartifacts' $ResourceGroupName = 'MyAccountant' # generate a temporary StorageSasToken (in a SecureString form) to give the ARM template access to the template artifacts $StorageAccountName = 'myaccountant' $StorageContainer = 'myaccountant-stageartifacts' $StorageAccountKey = (Get-AzureRmStorageAccountKey -ResourceGroupName $ResourceGroupName -Name $StorageAccountName).Key1 $StorageAccountContext = (Get-AzureRmStorageAccount -ResourceGroupName $ResourceGroupName -Name $StorageAccountName).Context $StorageSasToken = New-AzureStorageContainerSASToken -Container $StorageContainer -Context $StorageAccountContext -Permission r -ExpiryTime (Get-Date).AddHours(4) $SecureStorageSasToken = ConvertTo-SecureString $StorageSasToken -AsPlainText -Force # prepare parameters for the template $ParameterObject = New-Object -TypeName Hashtable $ParameterObject['TenantName'] = $TenantName $ParameterObject['_artifactsLocation'] = $artifactsLocation $ParameterObject['_artifactsLocationSasToken'] = $SecureStorageSasToken $deploymentName = 'MyAccountant' + '-' + $TenantName + '-' + ((Get-Date).ToUniversalTime()).ToString('MMdd-HHmm') $templateLocation = $artifactsLocation + '/Tenant.Deploy/Templates/azuredeploy.json' + $StorageSasToken # execute New-AzureRmResourceGroupDeployment -Name $deploymentName ` -ResourceGroupName $ResourceGroupName ` -TemplateFile $templateLocation ` @ParameterObject ` -Force -Verbose The script is executable in the Test pane, but for production purposes, it needs to be deployed with the Publish button. Now, we need to execute this runbook from outside Azure Automation, in the ASP.NET MVC portal that we already created. We can use Webhooks for this purpose. Webhooks are user-defined HTTP callbacks that are usually triggered by some event. In our case, this is new tenant registration. As they use HTTP, they can be integrated into web services without adding new infrastructure. Runbooks can be directly exposed as Webhooks, which provide an HTTP endpoint natively without the need for us to provide one ourselves. We need to remember some things: Webhooks are public, with a shared secret in the URL, so they are "secure" only if we don't share them. As a shared secret, the URL expires, so we need to handle Webhook updates in the service lifecycle. If more users are needed, more Webhooks are needed, as the URL is the only way to recognize who invoked it (again, don't share Webhooks). Copy the URL at this stage, as it is not possible to recover it later; otherwise, the Webhook needs to be deleted and a new one generated. Write it directly in the portal's web.config app settings: <add key="DeployNewTenantWebHook" value="https://s2events.azure-automation.net/webhooks?token={your_token}"/> We can set some default parameters if needed, then we can create it. 
To invoke the Webhook, we use System.Net.HttpClient to create a POST request, placing a JSON object containing TenantName in the body: var requestBody = new { TenantName = model.TenantName }; var httpClient = new HttpClient(); var responseMessage = await httpClient.PostAsync( ConfigurationManager.AppSettings["DeployNewTenantWebHook"], new StringContent(JsonConvert.SerializeObject(requestBody)) ); This code is used to customize the registration process in AccountController: public async Task<ActionResult> Register(RegisterViewModel model) { if (ModelState.IsValid) { var user = new ApplicationUser { UserName = model.Email, Email = model.Email }; var result = await UserManager.CreateAsync(user, model.Password); if (result.Succeeded) { await SignInManager.SignInAsync(user, isPersistent:false, rememberBrowser:false); // handle webhook invocation here return RedirectToAction("Index", "Home"); } AddErrors(result); } // If we got this far, something failed; redisplay the form return View(model); } The responseMessage is again a JSON object that contains a JobId that we can use to programmatically access the executed job. Conclusion There are a lot of things that can be done with the set of topics that we covered in this article. These are a few of them: We can write better .NET code for multitenant apps. We can authenticate users with the Azure Active Directory service. We can leverage deployment tasks with Azure Service Bus messaging. We can create more interaction and feedback during tenant deployment. We can learn how to customize ARM templates to deploy other Azure services, such as DocumentDB, Azure Storage, and Azure Search. We can use more PowerShell for Azure management tasks. Summary Azure can change the way we write our solutions, giving us a set of new patterns and powerful services to develop with. In particular, we learned how to think about multitenant apps to ensure confidentiality for the users. We looked at deploying ASP.NET web apps in app services and providing computing resources with App Service Plans. We looked at how to deploy SQL in Azure SQL databases and provide computing resources with an elastic pool. We declared a deployment script with an Azure Resource Manager template using Visual Studio cloud deployment projects, and automated ARM PowerShell script execution with Azure Automation and runbooks. The content we looked at in the earlier section will be content for future articles. Code can be found on GitHub at https://github.com/marcoparenzan/CreateMultitenantAppsInAzure. Have fun! Resources for Article: Further resources on this subject: Introduction to Microsoft Azure Cloud Services [article] Microsoft Azure – Developing Web API for Mobile Apps [article] Security in Microsoft Azure [article]

Five common questions for .NET/Java developers learning JavaScript and Node.js

Packt
20 Jun 2016
19 min read
In this article by Harry Cummings, author of the book Learning Node.js for .NET Developers, we look at Node.js from a .NET and Java perspective. For those with web development experience in .NET or Java, perhaps who've written some browser-based JavaScript in the past, it might not be obvious why anyone would want to take JavaScript beyond the browser and treat it as a general-purpose programming language. However, this is exactly what Node.js does. What's more, Node.js has been around for long enough now to have matured as a platform, and has sustained its impressive growth in popularity well beyond any period that could be attributed to initial hype over a new technology. In this introductory article, we'll look at why Node.js is a compelling technology worth learning more about, and address some of the common barriers and sources of confusion that developers encounter when learning Node.js and JavaScript. (For more resources related to this topic, see here.) Why use Node.js? The execution model of Node.js follows that of JavaScript in the browser. This might not be an obvious choice for server-side development. In fact, these two use cases do have something important in common. User interface code is naturally event-driven (for example, binding event handlers to button clicks). Node.js makes this a virtue by applying an event-driven approach to server-side programming. Stated formally, Node.js has a single-threaded, non-blocking, event-driven execution model. We'll define each of these terms. Non-blocking Put simply, Node.js recognizes that many programmes spend most of their time waiting for other things to happen. For example, slow I/O operations such as disk access and network requests. Node.js addresses this by making these operations non-blocking. This means that program execution can continue while they happen. This non-blocking approach is also called asynchronous programming. Of course, other platforms support this too (for example, C#'s async/await keywords and the Task Parallel Library). However, it is baked into Node.js in a way that makes it simple and natural to use. Asynchronous API methods are all called in the same way: they all take a callback function to be invoked ("called back") when the execution completes. This function is invoked with an optional error parameter and the result of the operation. The consistency of calling non-blocking (asynchronous) API methods in Node.js carries through to its third-party libraries. This consistency makes it easy to build applications that are asynchronous throughout. Other JavaScript libraries, such as bluebird (http://bluebirdjs.com/docs/getting-started.html), allow callback-based APIs to be adapted to other asynchronous patterns. As an alternative to callbacks, you may choose to use Promises (similar to Tasks in .NET or Futures in Java) or coroutines (similar to async methods in C#) within your own codebase. This allows you to streamline your code while retaining the benefits of consistent asynchronous APIs in Node.js and its third-party libraries. Event-driven The event-driven nature of Node.js describes how operations are scheduled. In typical procedural programming environments, a program has some entry point that executes a set of commands until completion or enters a loop to perform work on each iteration. Node.js has a built-in event loop, which isn't directly exposed to the developer. It is the job of the event loop to decide which piece of code to execute next. Typically, this will be some callback function that is ready to run in response to some other event.
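To make the error-first callback convention and the event loop more concrete, here is a minimal sketch (not taken from the book; the file name recipes.json and the log messages are illustrative assumptions):

const fs = require('fs');

// The read is non-blocking: Node.js kicks it off and carries on
fs.readFile('recipes.json', 'utf8', (err, data) => {
  if (err) {
    // First callback parameter carries any error from the operation
    console.error('Failed to read file:', err.message);
    return;
  }
  // Second parameter carries the result of the operation
  console.log('File length:', data.length);
});

// This line runs immediately; the callback above runs later,
// when the event loop picks it up after the read completes
console.log('Reading file...');

The event loop will invoke the callback once the filesystem operation has finished, alongside any other callbacks that have become ready to run.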
For example, a filesystem operation may have completed, a timeout may have expired, or a new network request may have arrived. This built-in event loop simplifies asynchronous programming by providing a consistent approach and avoiding the need for applications to manage their own scheduling. Single-threaded The single-threaded nature of Node.js simply means that there is only one thread of execution in each process. Also, each piece of code is guaranteed to run to completion without being interrupted by other operations. This greatly simplifies development and makes programmes easier to reason about. It removes the possibility of a range of concurrency issues. For example, it is not necessary to synchronize/lock access to shared in-process state as it is in Java or .NET. A process can't deadlock itself or create race conditions within its own code. Single-threaded programming is only feasible if the thread never gets blocked waiting for long-running work to complete. Thus, this simplified programming model is made possible by the non-blocking nature of Node.js. Writing web applications The flagship use case for Node.js is in building websites and web APIs. These are inherently event-driven as most or all processing takes place in response to HTTP requests. Also, many websites do little computational heavy-lifting of their own. They tend to perform a lot of I/O operations, for example: streaming requests from the client; talking to a database locally or over the network; pulling in data from remote APIs over the network; and reading files from disk to send back to the client. These factors make I/O operations a likely bottleneck for web applications. The non-blocking programming model of Node.js allows web applications to make the most of a single thread. As soon as any of these I/O operations starts, the thread is immediately free to pick up and start processing another request. Processing of each request continues via asynchronous callbacks when I/O operations complete. The processing thread is only kicking off and linking together these operations, never waiting for them to complete. This allows Node.js to handle a much higher rate of requests per thread than other runtime environments. How does Node.js scale? So, Node.js can handle many requests per thread, but what happens when we reach the limit of what one thread can handle? The answer is, of course, to use more threads! You can achieve this by starting multiple Node.js processes, typically one for each web server CPU core. Note that this is still quite different to most Java or .NET web applications. These typically use a pool of threads much larger than the number of cores, because threads are expected to spend much of their time being blocked. The built-in Node.js cluster module makes it straightforward to spawn multiple Node.js processes (see the sketch that follows this list). Tools such as PM2 (http://pm2.keymetrics.io/) and libraries such as throng (https://github.com/hunterloftis/throng) make it even easier to do so. This approach gives us the best of all worlds: Using multiple threads makes the most of our available CPU power. By having a single thread per core, we also save overheads from the operating system context-switching between threads. Since the processes are independent and don't share state directly, we retain the benefits of the single-threaded programming model discussed above. By using long-running application processes (as with .NET or Java), we avoid the overhead of a process-per-request (as in PHP).
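The following is a minimal sketch of the cluster module in use; the trivial HTTP server used as the worker logic is an illustrative assumption rather than code from the article:

const cluster = require('cluster');
const http = require('http');
const os = require('os');

if (cluster.isMaster) {
  // Fork one worker process per CPU core
  os.cpus().forEach(() => cluster.fork());
  // Replace any worker that dies, keeping one process per core
  cluster.on('exit', (worker) => {
    console.log('Worker ' + worker.process.pid + ' exited; starting a new one');
    cluster.fork();
  });
} else {
  // Each worker runs its own single-threaded event loop
  http.createServer((req, res) => {
    res.end('Handled by worker ' + process.pid + '\n');
  }).listen(3000);
}

Tools such as PM2 and throng wrap up essentially this pattern so that you don't have to write the master/worker plumbing yourself.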
Do I really have to use JavaScript? A lot of web developers new to Node.js will already have some experience of client-side JavaScript. This experience may not have been positive and might put you off using JavaScript elsewhere. You do not have to use JavaScript to work with Node.js. TypeScript (http://www.typescriptlang.org/) and other compile-to-JavaScript languages exist as alternatives. However, I do recommend learning Node.js with JavaScript first. It will give you a clearer understanding of Node.js and simplify your tool chain. Once you have a project or two under your belt, you'll be better placed to understand the pros and cons of other languages. In the meantime, you might be pleasantly surprised by the JavaScript development experience in Node.js. There are three broad categories of prior JavaScript development experience that can lead to people having a negative impression of it. These are as follows: Experience from the late 90s and early 00s, prior to MV* frameworks like Angular/Knockout/Backbone/Ember, maybe even prior to jQuery. This is the pioneer phase of client-side web development. More recent experience within the much more mature JavaScript ecosystem, perhaps as a full-stack developer writing server-side and client-side code. The complexity of some frameworks (such as the MV* frameworks listed earlier), or the sheer amount of choice in general, can be overwhelming. Limited experience with JavaScript itself, but exposure to some of its most unusual characteristics. This may lead to a jarring sensation as a result of encountering the language in surprising or unintuitive ways. We'll address groups of people affected by each type of experience in turn. But note that individuals might identify with more than one of these groups. I'm happy to admit that I've been a member of all three in the past. The web pioneers These developers have been burned by working with client-side JavaScript in the past. The browser is sometimes described as a hostile environment for code to execute in. A single execution context shared by all code allows for some particularly nasty gotchas. For example, third-party code on the same page can create and modify global objects. Node.js solves some of these issues on a fundamental level, and mitigates others where this isn't possible. It's JavaScript, so it's still the case that everything is mutable. But the Node.js module system narrows the global scope, so libraries are less likely to step on each other's toes (a short sketch of the module system follows at the end of this section). The conventions that Node.js establishes also make third-party libraries much more consistent. This makes the environment less hostile and more predictable. The web pioneers will also have had to cope with the APIs available to JavaScript in the browser. Although these have improved over time as browsers and standards have matured, the earlier days of web development were more like the Wild West. Quirks and inconsistencies in fundamental APIs caused a lot of hard work and frustration. The rise of jQuery is a testament to the difficulty of working with the Document Object Model of old. The continued popularity of jQuery indicates that people still prefer to avoid working with these APIs directly. Node.js addresses these issues quite thoroughly: First of all, by taking JavaScript out of the browser, the DOM and other APIs simply go away as they are no longer relevant. The new APIs that Node.js introduces are small, focused, and consistent. You no longer need to contend with inconsistencies between browsers. Everything you write will execute in the same JavaScript engine (V8).
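Here is a minimal sketch of how the module system narrows scope; the file names and the greet function are illustrative assumptions, not examples from the book:

// greeter.js - everything here is private to this file unless exported
const prefix = 'Hello';             // not visible to other modules

function greet(name) {
  return prefix + ', ' + name + '!';
}

module.exports = { greet };         // only what we export is shared

// app.js - consumers see only the exported API, not the module's internals
const greeter = require('./greeter');
console.log(greeter.greet('world')); // Prints "Hello, world!"
// console.log(prefix);              // ReferenceError: prefix is not defined

Because each file gets its own scope, two libraries can both use a variable called prefix (or anything else) without clobbering each other, unlike scripts sharing a single global scope in the browser.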
The overwhelmed full-stack developers

Many of the frontend JavaScript frameworks provide a lot of power, but come with a great deal of complexity. For example, AngularJS has a steep learning curve, is quite opinionated about application structure, and has quite a few gotchas or things you just need to know. JavaScript itself is actually a language with a very small surface area. This provides a blank canvas for Node.js to provide a small number of consistent APIs (as described in the previous section). Although there's still plenty to learn in total, you can focus on just the things you need without getting tripped up by areas you're not yet familiar with.

It's still true that there's a lot of choice and that this can be bewildering. For example, there are many competing test frameworks for JavaScript. The trend towards smaller, more composable packages in the Node.js ecosystem—while generally a good thing—can mean more research, more decisions, and fewer batteries-included frameworks that do everything out of the box. On balance though, this makes it easier to move at your own pace and understand everything that you're pulling into your application.

The JavaScript dabblers

It's easy to have a poor impression of JavaScript if you've only worked with it occasionally and never as the primary (or even secondary) language on a project. JavaScript doesn't do itself any favors here, with a few glaring gotchas that most people will encounter. For example, the fundamentally broken == equality operator and other symptoms of type coercion. Although these make a poor first impression, they aren't really indicative of the experience of working with JavaScript more regularly. As mentioned in the previous section, JavaScript itself is actually a very small language. Its simplicity limits the number of gotchas there can be. While there are a few things you "just need to know", it's a short list. This compares well against languages that offer a constant stream of nasty surprises (for example, PHP's notoriously inconsistent built-in functions). What's more, successive ECMAScript standards have done a lot to clean up the JavaScript language. With Node.js, you get to take advantage of this, as all your code will run on the V8 engine, which implements the latest ES2015 standard.

The other big reason that JavaScript can be jarring is more a matter of context than an inherent flaw. It looks superficially similar to other languages with a C-like syntax, like Java and C#. The similarity to Java was intentional when JavaScript was created, but it's unfortunate. JavaScript's programming model is quite different to other object-oriented languages like Java or C#. This can be confusing or frustrating when its syntax suggests that it may work in roughly the same way. This is especially true of object-oriented programming in JavaScript, as we'll discuss shortly. Once you've understood the fundamentals of JavaScript though, it's very easy to work productively with it.

Working with JavaScript

I'm not going to argue that JavaScript is the perfect language. But I do think many of the factors that lead to people having a bad impression of JavaScript are not down to the language itself. Importantly, many factors simply don't apply when you take JavaScript out of the browser environment. What's more, JavaScript has some really great extrinsic properties. These are things that aren't visible in the code, but have an effect on what it's like to work with the language.
For example, JavaScript's interpreted nature makes it easy to set up automated tests to run continuously and provide near-instant feedback on your code changes.

How does inheritance work in JavaScript?

When introducing object-oriented programming, we usually talk about classes and inheritance. Java, C# and numerous other languages take a very similar approach to these concepts. JavaScript is quite unusual in that it supports object-oriented programming without classes. It does this by applying the concept of inheritance directly to objects. Anything that is not one of JavaScript's built-in primitives (strings, numbers, null, and so on) is an object. Functions are just a special type of object that can be invoked with arguments. Arrays are a special type of object with list-like behavior. All objects (including these two special types) can have properties, which are just names with a value. You can think of JavaScript objects as a dictionary with string keys and object values.

Programming without classes

Let's say you have a chart with a very large number of data points. These points may be represented by objects that have some common behavior. In C# or Java, you might create a Point class. In JavaScript, you could implement points like this:

function createPoint(x, y) {
    return {
        x: x,
        y: y,
        isAboveDiagonal: function() {
            return this.y > this.x;
        }
    };
}

var myPoint = createPoint(1, 2);
console.log(myPoint.isAboveDiagonal()); // Prints "true"

The createPoint function returns a new object each time it is called (the object is defined using JavaScript's object-literal notation, which is the basis for JSON). One problem with this approach is that the function assigned to the isAboveDiagonal property is redefined for each point on the graph, thus taking up more space in memory. You can address this using prototypal inheritance. Although JavaScript doesn't have classes, objects can inherit from other objects. Each object has a prototype. If the interpreter attempts to access a property on an object and that property doesn't exist, it will look for a property with the same name on the object's prototype instead. If the property doesn't exist there, it will check the prototype's prototype, and so on. The prototype chain will end with the built-in Object.prototype. You can implement point objects using a prototype as follows:

var pointPrototype = {
    isAboveDiagonal: function() {
        return this.y > this.x;
    }
};

function createPoint(x, y) {
    var newPoint = Object.create(pointPrototype);
    newPoint.x = x;
    newPoint.y = y;
    return newPoint;
}

var myPoint = createPoint(1, 2);
console.log(myPoint.isAboveDiagonal()); // Prints "true"

The Object.create method creates a new object with a specified prototype. The isAboveDiagonal method now only exists once in memory on the pointPrototype object. When the code tries to call isAboveDiagonal on an individual point object, it is not present, but it is found on the prototype instead. Note that the preceding example tells us something important about the behavior of the this keyword in JavaScript. It actually refers to the object that the current function was called on, rather than the object it was defined on.
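As a small illustration of that last point about this (a sketch of my own, not taken from the text above), the same function on the prototype produces different results depending on the object it is called on:

var pointPrototype = {
    isAboveDiagonal: function() {
        return this.y > this.x;
    }
};

var first = Object.create(pointPrototype);
first.x = 2;
first.y = 1;

var second = Object.create(pointPrototype);
second.x = 1;
second.y = 2;

// The function exists only once on the prototype,
// but 'this' is bound to whichever object the call was made on
console.log(first.isAboveDiagonal());  // Prints "false"
console.log(second.isAboveDiagonal()); // Prints "true"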
Creating objects with the 'new' keyword You can rewrite the previous code example in a more compact form using the new operator: function Point(x, y) {     this.x = x;     this.y = y; }   Point.prototype.isAboveDiagonal = function() {     return this.y > this.x; }   var myPoint = new Point(1, 2); By convention, functions have a property named prototype, which defaults to an empty object. Using the new operator with the Point function creates an object that inherits from Point.prototype and applies the Point function to the newly created object. Programming with classes Although JavaScript doesn't fundamentally have classes, ES2015 introduces a new class keyword. This makes it possible to implement shared behavior and inheritance in a way that may be more familiar compared to other object-oriented languages. The equivalent of the previous code example would look like the following: class Point {     constructor(x, y) {         this.x = x;         this.y = y;     }         isAboveDiagonal() {         return this.y > this.x;     } }   var myPoint = new Point(1, 2); Note that this really is equivalent to the previous example. The class keyword is just syntactic sugar for setting up the prototype-based inheritance already discussed. Once you know how to define objects and classes, you can start to structure the rest of your application. How do I structure Node.js applications? In C# and Java, the static structure of an application is defined by namespaces or packages (respectively) and static types. An application's run-time structure (that is, the set of objects created in memory) is typically bootstrapped using a dependency injection (DI) container. Examples of DI containers include NInject, Autofac and Unity in .NET, or Spring, Guice and Dagger in Java. These frameworks provide features like declarative configuration and autowiring of dependencies. Since JavaScript is a dynamic, interpreted language, it has no inherent static application structure. Indeed, in the browser, all the scripts loaded into a page run one after the other in the same global context. The Node.js module system allows you to structure your application into files and directories and provides a mechanism for importing functionality from one file into another. There are DI containers available for JavaScript, but they are less commonly used. It is more common to pass around dependencies explicitly. The Node.js module system and JavaScript's dynamic typing makes this approach more natural. You don't need to add a lot of fields and constructors/properties to set up dependencies. You can just wrap modules in an initialization function that takes dependencies as parameters. The following very simple example illustrates the Node.js module system, and shows how to inject dependencies via a factory function: We add the following code under /src/greeter.js: module.exports = function(writer) {     return {         greet: function() { writer.write('Hello World!'); }     } }; We add the following code under /src/main.js: var consoleWriter = {     write: function(string) { console.log(string); } }; var greeter = require('./greeter.js')(consoleWriter); greeter.greet(); In the Node.js module system, each file establishes a new module with its own global scope. Within this scope, Node.js provides the module object for the current module to export its functionality, and the require function for importing other modules. 
If you run the previous example (using node main.js), the Node.js runtime will load the greeter module as a result of the main module's call to the require function. The greeter module assigns a value to the exports property of the module object. This becomes the return value of the require call back in the main module. In this case, the greeter module exports a single object, which is a factory function that takes a dependency. Summary In this article, we have: Understood the Node.js programming model and its use in web applications Described how Node.js web applications can scale Discussed the suitability of JavaScript as a programming language Illustrated how object-oriented programming works in JavaScript Seen how dependency injection works with the Node.js module system Hopefully this article has given you some insight into why Node.js is a compelling technology, and made you better prepared to learn more about writing server-side applications with JavaScript and Node.js. Resources for Article: Further resources on this subject: Web Components [article] Implementing a Log-in screen using Ext JS [article] Arrays [article]

Getting Started with TensorFlow: an API Primer

Sam Abrahams
19 Jun 2016
8 min read
TensorFlow has picked up a lot of steam over the past couple of months, and there's been more and more interest in learning how to use the library. I've seen tons of tutorials out there that just slap together TensorFlow code, roughly describe what some of the lines do, and call it a day. Conversely, I've seen really dense tutorials that mix together universal machine learning concepts and TensorFlow's API. There is value in both of these sorts of examples, but I find them either a little too sparse or too confusing respectively. In this post, I plan to focus solely on information related to the TensorFlow API, and not touch on general machine learning concepts (aside from describing computational graphs). Additionally, I will link directly to relevant portions of the TensorFlow API for further reading. While this post isn't going to be a proper tutorial, my hope is that focusing on the core components and workflows of the TensorFlow API will make working with other resources more accessible and comprehensible. As a final note, I'll be referring to the Python API and not the C++ API in this post. Definitions Let's start off with a glossary of key words you're going to see when using TensorFlow. Tensor: An n-dimensional matrix. For most practical purposes, you can think of them the same way you would a two-dimensional matrix for matrix algebra. In TensorFlow, the return value of any mathematical operation is a tensor. See here for more about TensorFlow Tensor objects. Graph: The computational graph that is defined by the user. It's constructed of nodes and edges, representing computations and connections between those computations respectively. For a quick primer on computation graphs and how they work in backpropagation, check out Chris Olah's post here. A TensorFlow user can define more than one Graph object and run them separately. Additionally, it is possible to define a large graph and run only smaller portions of it. See here for more information about TensorFlow Graphs. Op, Operation (Ops, Operations): Any sort of computation on tensors. Operations (or Ops) can take in zero or more TensorFlow Tensor objects, and output zero or more Tensor objects as a result of the computation. Ops are used all throughout TensorFlow, from doing simple addition to matrix multiplication to initializing TensorFlow variables. Operations run only when they are passed to the Session object, which I'll discuss below. For the most part, nodes and operations are interchangable concepts. In this guide, I'll try to use the term Operation or Op when referring to TensorFlow-specific operations and node when referring to general computation graph terminology. Here's the API reference for the Operation class. Node: A computation in the graph that takes as input zero or more tensors and outputs zero or more tensors. A node does not have to interact with any other nodes, and thus does not have to have any edges connected to it. Visually, these are usually depicted as ellipses or boxes. Edge: The directed connection between two nodes. In TensorFlow, each edge can be seen as one or more tensors, and usually represents the output of one node becoming the input of the next node. Visually, these are usually depicted as lines or arrows. Device: A CPU or GPU. In TensorFlow, computations can occur across many different CPUs and GPUs, and it must keep track of these devices in order to coordinate work properly. 
The Typical TensorFlow Coding Workflow Writing a working TensorFlow model boils down to two steps: Build the Graph using a series of Operations, placeholders, and Variables. Run the Graph with training data repeatedly using the Session (you'll want to test the model while training to make sure it's learning properly). Sounds simple enough, and once you get a hang of it, it really is! We talked about Ops in the section above, but now I want to put special emphasis on placeholders, Variables, and the Session. They are fairly easy to grasp, but getting these core fundamentals solidified will give context to learning the rest of the TensorFlow API. Placeholders A Placeholder is a node in the graph that must be fed data via the feed_dict parameter in Session.run (see below). In general, these are used to specify input data and label data. Basically, you use placeholders to tell TensorFlow, "Hey TF, the data here is going to change each time you run the graph, but it will always be a tensor of size [N] and data-type [D]. Use that information to make sure that my matrix/tensor calculations are set up properly." TensorFlow needs to have that information in order to compile the program, as it has to guarantee that you don't accidentally try to multiply a 5x5 matrix with an 8x8 matrix (amongst other things). Placeholders are easy to define. Just make a variable that is assigned to the result of tensorflow.placeholder(): import tensorflow as tf # Create a Placeholder of size 100x400 that will contain 32-bit floating point numbers my_placeholder = tf.placeholder(tf.float32, shape=(100, 400)) Read more about Placeholder objects here. Note: We are required to feed data to the placeholder when we run our graph. We'll cover this in the Session section below. Variables Variables are objects that contain tensor information but persist across multiple calls to Session.run(). That is, they contain information that can be altered during the run of a graph, and then that altered state can be accessed the next time the graph is run. Variables are used to hold the weights and biases of a machine learning model while it trains, and their final values are what define the trained model. Defining and using Variables is mostly straightforward. Define a Variable with tensorflow.Variable() and update its information with the assign() method: import tensorflow as tf # Create a variable with the value 0 and the name of 'my_variable' my_var = tf.Variable(0, name='my_variable') # Increment the variable by one my_var.assign(my_var + 1) One catch with Variable objects is that you can't run Ops with them until you initialize them within the Session object. This is usually done with the Operation returned from tf.initialize_all_variables(), as I'll describe in the next section. Variable API reference The official how-to for Variable objects The Session Finally, let's talk about running the Session. The TensorFlow Session object is in charge of keeping track of all Variables, coordinating computation across devices, and generally doing anything that involves running the graph. You generally start a Session by calling tensorflow.Session(), and either directly assign the value of that statement to a handle or use a with ... as statement. The most important method in the Session object is run(), which takes in as input fetches, a list of Operations and Tensors that the user wishes to calculate; and feed_dict, which is an optional dictionary mapping Tensors (often Placeholders) to values that should override that Tensor. 
This is how you specify which values you want returned from your computation as well as the input values for training. Here is a toy example that uses a placeholder, a Variable, and the Session to showcase their basic use: import tensorflow as tf # Create a placeholder for inputting floating point data later a = tf.placeholder(tf.float32) # Make a base Variable object with the starting value of 0 start = tf.Variable(0.0) # Create a node that is the value of incrementing the 'start' Variable by the value of 'a' y = start.assign(start + a) # Open up a TensorFlow Session and assign it to the handle 'sess' sess = tf.Session() # Important: initialize the Variable, or else we won't be able to run our computation init = tf.initialize_all_variables() # 'init' is an Op: must be run by sess sess.run(init) # Now the Variable is initialized! # Get the value of 'y', feeding in different values for 'a', and print the result # Because we are using a Variable, the value should change each time print(sess.run(y, feed_dict={a:1})) # Prints 1.0 print(sess.run(y, feed_dict={a:0.5})) # Prints 1.5 print(sess.run(y, feed_dict={a:2.2})) # Prints 3.7 # Close the Session sess.close() Check out the documentation for TensorFlow's Session object here. Finishing Up Alright! This primer should give you a head start on understanding more of the resources out there for TensorFlow. The less you have to think about how TensorFlow works, the more time you can spend working out how to set up the best neural network you can! Good luck, and happy flowing! About the author Sam Abrahams is a freelance data engineer and animator in Los Angeles, CA, USA. He specializes in real-world applications of machine learning and is a contributor to TensorFlow. Sam runs a small tech blog, Memdump, and is an active member of the local hacker scene in West LA.

A capability model for microservices

Packt
17 Jun 2016
19 min read
In this article by Rajesh RV, the author of Spring Microservices, you will learn about the concepts of microservices. More than sticking to definitions, it is better to understand microservices by examining some common characteristics of microservices that are seen across many successful microservices implementations. Spring Boot is an ideal framework to implement microservices. In this article, we will examine how to implement microservices using Spring Boot with an example use case. Beyond services, we will have to be aware of the challenges around microservices implementation. This article will also talk about some of the common challenges around microservices. A successful microservices implementation has to have some set of common capabilities. In this article, we will establish a microservices capability model that can be used as a technology-neutral framework to implement large-scale microservices.

What are microservices?

Microservices is an architecture style used by many organizations today as a game changer to achieve a high degree of agility, speed of delivery, and scale. Microservices give us a way to develop more physically separated modular applications. Microservices are not invented. Many organizations, such as Netflix, Amazon, and eBay, successfully used the divide-and-conquer technique to functionally partition their monolithic applications into smaller atomic units, each of which performs a single function. These organizations solved a number of prevailing issues they experienced with their monolithic applications. Following the success of these organizations, many other organizations started adopting this as a common pattern to refactor their monolithic applications. Later, evangelists termed this pattern microservices architecture. Microservices originated from the idea of Hexagonal Architecture coined by Alistair Cockburn. Hexagonal Architecture is also known as the Ports and Adapters pattern. Microservices is an architectural style or an approach to building IT systems as a set of business capabilities that are autonomous, self-contained, and loosely coupled.

The preceding diagram depicts a traditional N-tier application architecture having a presentation layer, business layer, and database layer. Modules A, B, and C represent three different business capabilities. The layers in the diagram represent a separation of architecture concerns. Each layer holds all three business capabilities pertaining to that layer. The presentation layer has the web components of all three modules, the business layer has the business components of all the three modules, and the database layer hosts tables of all the three modules. In most cases, layers can be physically distributed, whereas modules within a layer are hardwired. Let's now examine a microservices-based architecture, as follows:

As we can note in the diagram, the boundaries are inverted in the microservices architecture. Each vertical slice represents a microservice. Each microservice has its own presentation layer, business layer, and database layer. Microservices are aligned toward business capabilities. By doing so, changes to one microservice do not impact others. There is no standard for communication or transport mechanisms for microservices. In general, microservices communicate with each other using widely adopted lightweight protocols such as HTTP and REST or messaging protocols such as JMS or AMQP. In specific cases, one might choose more optimized communication protocols such as Thrift, ZeroMQ, Protocol Buffers, or Avro.
As microservices are more aligned to the business capabilities and have independently manageable lifecycles, they are the ideal choice for enterprises embarking on DevOps and cloud. DevOps and cloud are two other facets of microservices. Microservices are self-contained, independently deployable, and autonomous services that take full responsibility for a business capability and its execution. They bundle all dependencies, including library dependencies and execution environments such as web servers and containers or virtual machines that abstract physical resources. These self-contained services assume single responsibility and are well enclosed within a bounded context.

Microservices – The honeycomb analogy

The honeycomb is an ideal analogy to represent the evolutionary microservices architecture. In the real world, bees build a honeycomb by aligning hexagonal wax cells. They start small, using different materials to build the cells. Construction is based on what is available at the time of building. Repetitive cells form a pattern and result in a strong fabric structure. Each cell in the honeycomb is independent but also integrated with other cells. By adding new cells, the honeycomb grows organically into a big solid structure. The content inside each cell is abstracted and is not visible outside. Damage to one cell does not damage other cells, and bees can reconstruct these cells without impacting the overall honeycomb.

Characteristics of microservices

The microservices definition discussed at the beginning of this article is arbitrary. Evangelists and practitioners have strong but sometimes different opinions on microservices. There is no single, concrete, and universally accepted definition for microservices. However, all successful microservices implementations exhibit a number of common characteristics. Some of these characteristics are explained as follows:

Since microservices are more or less similar to a flavor of SOA, many of the service characteristics of SOA are applicable to microservices as well. In the microservices world, services are first-class citizens. Microservices expose service endpoints as APIs and abstract all their realization details. The APIs could be synchronous or asynchronous. HTTP/REST is the popular choice for APIs. As microservices are autonomous and abstract everything behind service APIs, it is possible to have different architectures for different microservices. The internal implementation logic, architecture, and technologies, including programming language, database, quality of service mechanisms, and so on, are completely hidden behind the service API. Well-designed microservices are aligned to a single business capability, so they perform only one function. As a result, one of the common characteristics we see in most of the implementations is microservices with smaller footprints. Most of the microservices implementations are automated to the maximum extent possible, from development to production. Most large-scale microservices implementations have a supporting ecosystem in place. The ecosystem's capabilities include DevOps processes, centralized log management, service registry, API gateways, extensive monitoring, service routing and flow control mechanisms, and so on. Successful microservices implementations encapsulate logic and data within the service. This results in two unconventional situations: distributed data and logic, and decentralized governance.
A microservice example

The Customer Profile microservice example explained here demonstrates the implementation of a microservice and the interaction between different microservices. In this example, two microservices, Customer Profile and Customer Notification, will be developed. As shown in the diagram, the Customer Profile microservice exposes methods to create, read, update, and delete a customer and a registration service to register a customer. The registration process applies a certain business logic, saves the customer profile, and sends a message to the CustomerNotification microservice. The CustomerNotification microservice accepts the message sent by the registration service and sends an e-mail message to the customer using an SMTP server. Asynchronous messaging is used to integrate CustomerProfile with the CustomerNotification service. The customer microservices class domain model diagram is as shown here:

Implementing this Customer Profile microservice is not a big deal. The Spring framework, together with Spring Boot, provides all the necessary capabilities to implement this microservice without much hassle. The key is CustomerController in the diagram, which exposes the REST endpoint for our microservice. It is also possible to use HATEOAS to explore the repository's REST services directly using the @RepositoryRestResource annotation. The following code sample shows the Spring Boot main class called Application and the REST endpoint definition for the registration of a new customer:

@SpringBootApplication
public class Application {
    public static void main(String[] args) {
        SpringApplication.run(Application.class, args);
    }
}

@RestController
class CustomerController {
    //other code here
    @RequestMapping(path = "/register", method = RequestMethod.POST)
    Customer register(@RequestBody Customer customer) {
        return customerComponent.register(customer);
    }
}

CustomerController invokes a component class, CustomerComponent. The component class/bean handles all the business logic. CustomerRepository is a Spring Data JPA repository defined to handle the persistence of the Customer entity. The whole application will then be deployed as a Spring Boot application by building a standalone jar rather than using the conventional war file. Spring Boot encapsulates the server runtime along with the fat jar it produces. By default, it is an instance of the Tomcat server. CustomerComponent, in addition to calling the CustomerRepository class, sends a message to the RabbitMQ queue, where the CustomerNotification component is listening. This can be easily achieved in Spring using the RabbitMessagingTemplate class as shown in the following Sender implementation:

@Component
class CustomerComponent {
    //other code here

    Customer register(Customer customer) {
        customerRepository.save(customer);
        sender.send(customer.getEmail());
        return customer;
    }
}

@Component
@Lazy
class Sender {
    RabbitMessagingTemplate template;

    @Autowired
    Sender(RabbitMessagingTemplate template) {
        this.template = template;
    }

    @Bean
    Queue queue() {
        return new Queue("CustomerQ", false);
    }

    public void send(String message) {
        template.convertAndSend("CustomerQ", message);
    }
}

The receiver on the other side consumes the message using RabbitListener and sends out an e-mail using the JavaMailSender component.
Execute the following code:

@Component
class Receiver {
    @Autowired
    private JavaMailSender javaMailService;

    @Bean
    Queue queue() {
        return new Queue("CustomerQ", false);
    }

    @RabbitListener(queues = "CustomerQ")
    public void processMessage(String email) {
        System.out.println(email);
        SimpleMailMessage mailMessage = new SimpleMailMessage();
        mailMessage.setTo(email);
        mailMessage.setSubject("Registration");
        mailMessage.setText("Successfully Registered");
        javaMailService.send(mailMessage);
    }
}

In this case, CustomerNotification is our second Spring Boot microservice. In this case, instead of the REST endpoint, it only exposes a message listener endpoint.

Microservices challenges

In the previous section, you learned about the right design decisions to be made and the trade-offs to be applied. In this section, we will review some of the challenges with microservices. Take a look at the following list:

Data islands: Microservices abstract their own local transactional store, which is used for their own transactional purposes. The type of store and the data structure will be optimized for the services offered by the microservice. This can lead to data islands and, hence, challenges around aggregating data from different transactional stores to derive meaningful information.

Logging and monitoring: Log files are a good piece of information for analysis and debugging. As each microservice is deployed independently, they emit separate logs, maybe to a local disk. This will result in fragmented logs. When we scale services across multiple machines, each service instance would produce separate log files. This makes it extremely difficult to debug and understand the behavior of the services through log mining.

Dependency management: Dependency management is one of the key issues in large microservices deployments. How do we ensure the chattiness between services is manageable? How do we identify and reduce the impact of a change? How do we know whether all the dependent services are up and running? How will the service behave if one of the dependent services is not available?

Organization's culture: One of the biggest challenges in microservices implementation is the organization's culture. An organization following waterfall development or heavyweight release management processes with infrequent release cycles is a challenge for microservices development. Insufficient automation is also a challenge for microservices deployments.

Governance challenges: Microservices impose decentralized governance, and this is quite in contrast to the traditional SOA governance. Organizations may find it hard to cope with this change, and this could negatively impact microservices development. How can we know who is consuming a service? How do we ensure service reuse? How do we define which services are available in the organization? How do we ensure that the enterprise policies are enforced?

Operation overheads: Microservices deployments generally increase the number of deployable units and virtual machines (or containers). This adds significant management overheads and cost of operations. With a single application, a dedicated number of containers or virtual machines in an on-premises data center may not make much sense unless the business benefit is high. With many microservices, the number of Configurable Items (CIs) is too high, and the number of servers in which these CIs are deployed might also be unpredictable.
This makes it extremely difficult to manage data in a traditional Configuration Management Database (CMDB).

Testing microservices: Microservices also pose a challenge for the testability of services. In order to achieve full service functionality, one service may rely on another service, and this, in turn, may rely on another service, either synchronously or asynchronously. The issue is how we test an end-to-end service to evaluate its behavior. Dependent services may or may not be available at the time of testing.

Infrastructure provisioning: As briefly touched upon under operation overheads, manual deployment can severely challenge microservices rollouts. If a deployment has manual elements, the deployer or operational administrators should know the running topology, manually reroute traffic, and then deploy the application one by one until all the services are upgraded. With many server instances running, this could lead to significant operational overheads. Moreover, the chance of error is high in this manual approach.

Beyond just services – The microservices capability model

Microservices are not as simple as the Customer Profile implementation we discussed earlier. This is specifically true when deploying hundreds or thousands of services. In many cases, an improper microservices implementation may lead to a number of challenges, as mentioned before. Any successful Internet-scale microservices deployment requires a number of additional surrounding capabilities. The following diagram depicts the microservices capability model:

The capability model is broadly classified into four areas, as follows:

Core capabilities, which are part of the microservices themselves
Supporting capabilities, which are software solutions supporting core microservice implementations
Infrastructure capabilities, which are infrastructure-level expectations for a successful microservices implementation
Governance capabilities, which are more of process, people, and reference information

Core capabilities

The core capabilities are explained here:

Service listeners (HTTP/Messaging): If microservices are enabled for HTTP-based service endpoints, then the HTTP listener will be embedded within the microservices, thereby eliminating the need for any external application server. The HTTP listener will be started at the time of the application startup. If the microservice is based on asynchronous communication, then instead of an HTTP listener, a message listener will be started. Optionally, other protocols could also be considered. There may not be any listeners if the microservice is a scheduled service. Spring Boot and Spring Cloud Streams provide this capability.

Storage capability: Microservices have storage mechanisms to store state or transactional data pertaining to the business capability. This is optional, depending on the capabilities that are implemented. The storage could be either a physical storage (RDBMS, such as MySQL, and NoSQL, such as Hadoop, Cassandra, Neo4J, Elasticsearch, and so on), or it could be an in-memory store (a cache, such as Ehcache, or data grids, such as Hazelcast, Infinispan, and so on).

Business capability definition: This is the core of microservices, in which the business logic is implemented. This could be implemented in any applicable language, such as Java, Scala, Clojure, Erlang, and so on. All required business logic to fulfil the function is embedded within the microservices itself.
Event sourcing: Microservices send out state changes to the external world without really worrying about the targeted consumers of these events. They could be consumed by other microservices, supporting services such as audit by replication, external applications, and so on. This will allow other microservices and applications to respond to state changes.

Service endpoints and communication protocols: This defines the APIs for external consumers to consume. These could be synchronous endpoints or asynchronous endpoints. Synchronous endpoints could be based on REST/JSON or other protocols such as Avro, Thrift, protocol buffers, and so on. Asynchronous endpoints will be through Spring Cloud Streams backed by RabbitMQ or any other messaging server, or other messaging-style implementations, such as ZeroMQ.

The API gateway: The API gateway provides a level of indirection by either proxying service endpoints or composing multiple service endpoints. The API gateway is also useful for policy enforcement. It may also provide real-time load balancing capabilities. There are many API gateways available in the market. Spring Cloud Zuul, Mashery, Apigee, and 3scale are some examples of API gateway providers.

User interfaces: Generally, user interfaces are also part of microservices for users to interact with the business capabilities realized by the microservices. These could be implemented in any technology and are channel and device agnostic.

Infrastructure capabilities

Certain infrastructure capabilities are required for a successful deployment and to manage large-scale microservices. When deploying microservices at scale, not having proper infrastructure capabilities can be challenging and can lead to failures.

Cloud: Microservices implementation is difficult in a traditional data center environment with a long lead time to provision infrastructure. Even a large amount of infrastructure dedicated per microservice may not be very cost effective. Managing them internally in a data center may increase the cost of ownership and of operations. A cloud-like infrastructure is better for microservices deployment.

Containers or virtual machines: Managing large physical machines is not cost effective, and they are also hard to manage. With physical machines, it is also hard to handle automatic fault tolerance. Virtualization is adopted by many organizations because of its ability to provide an optimal use of physical resources, and it provides resource isolation. It also reduces the overheads in managing large physical infrastructure components. Containers are the next generation of virtual machines. VMware, Citrix, and so on provide virtual machine technologies. Docker, Drawbridge, Rocket, and LXD are some containerizing technologies.

Cluster control and provisioning: Once we have a large number of containers or virtual machines, it is hard to manage and maintain them automatically. Cluster control tools provide a uniform operating environment on top of the containers and share the available capacity across multiple services. Apache Mesos and Kubernetes are examples of cluster control systems.

Application lifecycle management: Application lifecycle management tools help to invoke applications when a new container is launched or kill the application when the container shuts down. Application lifecycle management allows us to script application deployments and releases. It automatically detects failure scenarios and responds to them, thereby ensuring the availability of the application.
This works in conjunction with the cluster control software. Marathon partially addresses this capability.

Supporting capabilities

Supporting capabilities are not directly linked to microservices, but they are essential for large-scale microservices development.

Software-defined load balancer: The load balancer should be smart enough to understand the changes in deployment topology and respond accordingly. This moves away from the traditional approach of configuring static IP addresses, domain aliases, or cluster addresses in the load balancer. When new servers are added to the environment, it should automatically detect this and include them in the logical cluster by avoiding any manual interactions. Similarly, if a service instance is unavailable, it should take it out of the load balancer. A combination of Ribbon, Eureka, and Zuul provides this capability in Spring Cloud Netflix.

Central log management: As explored earlier in this article, a capability is required to centralize all the logs emitted by service instances with correlation IDs. This helps in debugging, identifying performance bottlenecks, and predictive analysis. The result of this could feed back into the lifecycle manager to take corrective actions.

Service registry: A service registry provides a runtime environment for services to automatically publish their availability at runtime. A registry will be a good source of information to understand the service topology at any point. Eureka from Spring Cloud, ZooKeeper, and etcd are some of the service registry tools available.

Security service: The distributed microservices ecosystem requires a central server to manage service security. This includes service authentication and token services. OAuth2-based services are widely used for microservices security. Spring Security and Spring Security OAuth are good candidates to build this capability.

Service configuration: All service configurations should be externalized, as discussed in the Twelve-Factor application principles. A central service for all configurations could be a good choice. The Spring Cloud Config server and Archaius are out-of-the-box configuration servers.

Testing tools (Anti-Fragile, RUM, and so on): Netflix uses Simian Army for antifragile testing. Mature services need consistent challenges to see the reliability of the services and how good the fallback mechanisms are. Simian Army components create various error scenarios to explore the behavior of the system under failure scenarios.

Monitoring and dashboards: Microservices also require a strong monitoring mechanism. This monitoring is not just at the infrastructure level but also at the service level. Spring Cloud Netflix Turbine, the Hystrix dashboard, and others provide service-level information. End-to-end monitoring tools, such as AppDynamics, New Relic, and Dynatrace, and other tools such as statsd, Sensu, and Spigo, could add value in microservices monitoring.

Dependency and CI management: We also need tools to discover runtime topologies, to find service dependencies, and to manage configurable items (CIs). A graph-based CMDB is a more obvious choice to manage these scenarios.

Data lakes: As discussed earlier in this article, we need a mechanism to combine data stored in different microservices and perform near real-time analytics. Data lakes are a good choice to achieve this. Data ingestion tools such as Spring Cloud Data Flow, Flume, and Kafka are used to consume data. HDFS, Cassandra, and others are used to store data.
Reliable messaging: If the communication is asynchronous, we may need a reliable messaging infrastructure service, such as RabbitMQ or any other reliable messaging service. Cloud messaging or messaging as a service is a popular choice for Internet-scale message-based service endpoints.

Process and governance capabilities

The last piece of the puzzle is the set of process and governance capabilities required for microservices, which are:

DevOps: The key to a successful implementation is to adopt DevOps. DevOps complements microservices development by supporting agile development, high-velocity delivery, automation, and better change management.

DevOps tools: DevOps tools for agile development, continuous integration, continuous delivery, and continuous deployment are essential for a successful delivery of microservices. A lot of emphasis is required on automated, functional, and real user testing as well as synthetic, integration, release, and performance testing.

Microservices repository: A microservices repository is where the versioned binaries of microservices are placed. These could be a simple Nexus repository or container repositories such as the Docker registry.

Microservice documentation: It is important to have all microservices properly documented. Swagger or API Blueprint are helpful in achieving good microservices documentation.

Reference architecture and libraries: Reference architecture provides a blueprint at the organization level to ensure that services are developed according to certain standards and guidelines in a consistent manner. Many of these could then be translated to a number of reusable libraries that enforce service development philosophies.

Summary

In this article, you learned the concepts and characteristics of microservices. We took a customer profile microservice as an example to understand the concept of microservices better. We also examined some of the common challenges in large-scale microservice implementation. Finally, we established a microservices capability model in this article that can be used to deliver successful Internet-scale microservices.

Web Components

Packt
17 Jun 2016
12 min read
In this article by Arshak Khachatryan, the author of Getting Started with Polymer, we will discuss web components. Currently, web technologies are growing rapidly. Though most websites use these technologies nowadays, we come across many with a bad, unresponsive UI design and awful performance. The only reason we should think about a responsive website is that users are now moving to the mobile web. 55% of the web users use mobile phones because they are faster and more comfortable. This is why we need to provide mobile content in the simplest way possible. Everything is moving to minimalism, even the Web. The new web standards are changing rapidly too. In this article, we will cover one of these new technologies, web components, and what they do. We will discuss the following specifications of web components in this article: Templates Shadow DOM (For more resources related to this topic, see here.) Templates In this section, we will discuss what we can do with templates. However, let's answer a few questions before this. What are templates, and why should we use them? Templates are basically fragments of HTML, but let's call these fragments as the "zombie" fragments of HTML as they are neither alive nor dead. What is meant by "neither alive nor dead"? Let me explain this with a real-life example. Once, when I was working on the ucraft.me project (it's a website built with a lot of cool stuff in it), we faced some rather new challenges with the templates. We had a lot of form elements, but we didn't know where to save the form elements content. We didn't want to load the DOM of each form element, but what could we do? As always, we did some magic; we created a lot of div elements with the form elements and hid it with CSS. But the CSS display: none property did not render the element, but it loaded the element. This was also a problem because there were a lot of form element templates, and it affected the performance of the website. I recommended to my team that they work with templates. Templates can contain HTML content, but they do not load the element nor render. We call template elements "dead elements" because they do not load the content until you get their content with JavaScript. Let's move ahead, and let me show you some examples of how you can create templates and do some stuff with its content. Imagine that you are working on a big project where you need to load some dynamic content without AJAX. If I had a task such as this, I would create a PHP file and get its content by calling the jQuery .load() function. However, now, you can save your content inside of the <template> element and get the content without any jQuery and AJAX but with a single line of JavaScript code. Let's create a template. In index.html, we have <template> and some content we want to get in the future, as shown in the following code block: <template class="superman"> <div> <img src="assets/img/superman.png" class="animated_superman" /> </div> </template> The time has now come for JavaScript! Execute the following code: <script> // selecting the template element with querySelector() var tmpl = document.querySelector('.superman'); //getting the <template> content var content = tmpl.content; // making some changes in the content content.querySelector('.animated_superman').width = 200; // appending the template to the body document.body.appendChild(content); </script> So, that's it! Cool, right? The content will load only after you append the content to the document. 
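One detail worth noting (an addition of mine rather than something from this article): appending tmpl.content directly moves the nodes out of the template, so it can only be stamped once. If you need to reuse the same template several times, a common approach is to clone its content with document.importNode:

var tmpl = document.querySelector('.superman');

// Clone the template content (deep clone) instead of moving it,
// so the same template can be stamped out repeatedly
for (var i = 0; i < 3; i++) {
    var clone = document.importNode(tmpl.content, true);
    document.body.appendChild(clone);
}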
So, do you realize that templates are a part of the future web? If you are using Chrome Canary, just turn on the flags of experimental web platform features and enable HTML imports and experimental JavaScript. There are four ways to use templates, which are: Add templates with hidden elements in the document and just copy and paste the data when you need it, as follows: <div hidden data-template="superman"> <div> <p>SuperMan Head</p> <img src="assets/img/superman.png" class="animated_superman" /> </div> </div> However, the problem is that a browser will load all the content. It means that the browser will load but not render images, video, audio, and so on. Get the content of the template as a string (by requesting with AJAX or from <script type="x-template">). However, we might have some problems in working with the string. This can be dangerous for XSS attacks; we just need to pay some more attention to this: <script data-template="batman" type="x-template"> <div> <p>Batman Head this time!</p> <img src="assets/img/superman.png" class="animated_superman" /> </div> </div> Compiled templates such as Hogan.js (http://twitter.github.io/hogan.js/) work with strings. So, they have the same flaw as the patterns of the second type. Templates do not have these disadvantages. We will work with DOM and not with the strings. We will then decide when to run the code. In conclusion: The <template> tag is not intended to replace the system of standardization. There are no tricky iteration operators or data bindings. Its main feature is to be able to insert "live" content along with scripts. Lastly, it does not require any libraries. Shadow DOM The Shadow DOM specification is a separate standard. A part of it is used for standard DOM elements, but it is also used to create with web components. In this section, you will learn what Shadow DOM is and how to use it. Shadow DOM is an internal DOM element that is separated from an external document. It can store your ID, styles, and so on. Most importantly, Shadow DOM is not visible outside of its scope without the use of special techniques. Hence, there are no conflicts with the external world; it's like an iframe. Inside the browser The Shadow DOM concept has been used for a long time inside browsers themselves. When the browser shows complex controls, such as a <input type = "range"> slider or a <input type = "date"> calendar within itself, it constructs them out of the most ordinary styled <div>, <span>, and other elements. They are invisible at the first glance, but they can be easily seen if the checkbox in Chrome DevTools is set to display Shadow DOM: In the preceding code, #shadow-root is the Shadow DOM. Getting items from the Shadow DOM can only be done using special JavaScript calls or selectors. They are not children but a more powerful separation of content from the parent. In the preceding Shadow DOM, you can see a useful pseudo attribute. It is nonstandard and is present for solely historical reasons. It can be styled via CSS with the help of subelements—for example, let's change the form input dates to red via the following code: <style> input::-webkit-datetime-edit { background: red; } </style> <input type="date" /> Once again, make a note of the pseudo custom attribute. Speaking chronologically, in the beginning, the browsers started to experiment with encapsulated DOM structure inside their scopes, then Shadow DOM appeared which allowed developers to do the same. Now, let's work with the Shadow DOM from JavaScript or the standard Shadow DOM. 
Creating a Shadow DOM

A Shadow DOM can be created for any element with the elem.createShadowRoot() call, as shown by the following code:

<div id="container">You know why?</div>
<script>
  var root = container.createShadowRoot();
  root.innerHTML = "Because I'm Batman!";
</script>

If you run this example, you will see that the contents of the #container element disappeared somewhere, and it only shows "Because I'm Batman!". This is because the element has a Shadow DOM and ignores the previous content of the element. Because of the creation of the Shadow DOM, instead of the content, the browser has shown only the Shadow DOM. If you wish, you can show the element's ordinary content inside this Shadow DOM. To do this, you need to specify where it should appear. This is done through an "insertion point", which is declared using the <content> tag; here's an example:

<div id="container">You know why?</div>
<script>
  var root = container.createShadowRoot();
  root.innerHTML = '<h1><content></content></h1><p>Winter is coming!</p>';
</script>

Now, you will see "You know why?" in the title followed by "Winter is coming!". Here's a Shadow DOM example in Chrome DevTools:

The following are some important details about the Shadow DOM:

The <content> tag affects only the display, and it does not move the nodes physically. As you can see in the preceding picture, the node "You know why?" remained inside the div#container. It can even be obtained using container.firstChild.

Inside the <content> tag, we have the content of the element itself. In this example, the string "You know why?".

With the select attribute of the <content> element, you can specify a particular selector for the content you want to transfer; for example, <content select="p"></content> will transfer only paragraphs.

Inside the Shadow DOM, you can use the <content> tag multiple times with different values of select, thus indicating where to place which part of the original content. However, it is impossible to duplicate nodes. If a node is already shown in one <content> tag, then it will be skipped in the next one. For example, if there is a <content select="h3.title"> tag and then <content select="h3">, the first <content> will show the headers <h3> with the class title, while the second will show all the others, except for the ones already shown.

In the preceding example from DevTools, the <content></content> tag is empty. If we add some content inside the <content> tag, it will be shown as a fallback when no nodes match the selector. Check out the following code:

<div id="container">
  <h3>Once upon a time, in Westeros</h3>
  <strong>Ruled a king by name Joffrey and he's dead!</strong>
</div>
<script>
  var root = container.createShadowRoot();
  root.innerHTML = '<content select="h3"></content> <content select=".hero"> Jon Snow </content> <content></content>';
</script>

When you run the JS code, you will see the following:

The first <content select="h3"> tag will display the title.

The second <content select=".hero"> tag would show the hero name, but as there is no element with this selector, it shows its default content, Jon Snow, instead.

The third <content> tag displays the rest of the original content of the element, without the header <h3>, which has already been shown.

Once again, note that <content> does not physically move nodes in the DOM; it only changes where they are displayed.

Root shadowRoot

After the creation of a root, the internal DOM tree will be available as container.shadowRoot.
It is a special object that supports the basic CSS query methods and is described in detail in the ShadowRoot specification. You need to go through container.shadowRoot whenever you want to work with content in the Shadow DOM. You can create a new Shadow DOM tree and read it back from JavaScript; here's an example:

<div id="container">Polycasts</div>
<script>
  // create a new Shadow DOM tree for the element
  var root = container.createShadowRoot();
  root.innerHTML = '<h1><content></content></h1>' +
    '<strong>Hey googlers! Let\'s code today.</strong>';
</script>
<script>
  // read data back from the Shadow DOM of the element
  var root = container.shadowRoot;
  // Hey googlers! Let's code today.
  document.write('<br/><em>container: ' + root.querySelector('strong').innerHTML + '</em>');
  // empty, because the physical nodes are not moved into <content>
  document.write('<br/><em>content: ' + root.querySelector('content').innerHTML + '</em>');
</script>

To finish up, Shadow DOM is a tool to create a separate DOM tree inside an element, which is not visible from outside without using special techniques:

A lot of browser components with complex structures already have a Shadow DOM.

You can create a Shadow DOM inside any element by calling elem.createShadowRoot(). Afterwards, it is available as elem.shadowRoot, and you can work with the Shadow DOM through it. Note that the shadowRoot property does not expose the built-in Shadow DOM of native browser controls.

Once the Shadow DOM appears in an element, the element's own content is hidden; only the Shadow DOM is rendered.

The <content> element moves the contents of the original element into the Shadow DOM only visually. However, they remain in the same place in the DOM structure.

Detailed specifications are given at http://w3c.github.io/webcomponents/spec/shadow/.

Summary
Using web components, you can easily create your web application by splitting it into parts/components.

Resources for Article:
Further resources on this subject:
Handling the DOM in Dart [article]
Manipulation of DOM Objects using Firebug [article]
jQuery 1.4 DOM Manipulation Methods for Style Properties and Class Attributes [article]
How to use SQLite with Ionic to store data?

Oli Huggins
13 Jun 2016
10 min read
Hybrid mobile apps face the challenging task of being as performant as native apps, but I always tell other developers that it depends not on the technology but on how we code. The Ionic Framework is a popular hybrid app development library, which uses optimal design patterns to create awe-inspiring mobile experiences. We cannot simply reuse web design patterns to create hybrid mobile apps. Storing data locally on the device is one such capability, and it can make or break the performance of your app.

In a web app, we may use localStorage to store data, but mobile apps need to store much more data and access it quickly. localStorage is synchronous, so data access through it is slow. Also, web developers who have experience of coding in a backend language such as C#, PHP, or Java would find it more convenient to access data using SQL queries than through an object-based DB.

SQLite is a lightweight embedded relational DBMS used in web browsers and web views for hybrid mobile apps. Access to it through the plugin is similar to the HTML5 WebSQL API and is asynchronous in nature, so it does not block the DOM or any other JS code. Ionic apps can leverage this tool using an open source Cordova plugin by Chris Brody (@brodybits). We can use this plugin directly or use it with the ngCordova library by the Ionic team, which abstracts Cordova plugin calls into AngularJS-based services.

In this blog post, we will create an Ionic app with Trackers that let you record any piece of information at any point in time. We can later use this data to analyze the information and draw it on charts. We will be using the 'cordova-sqlite-ext' plugin and the ngCordova library.

We will start by creating a new Ionic app with a blank starter template using the Ionic CLI command:

$ ionic start sqlite-sample blank

We should also add the appropriate platforms for which we want to build our app. The command to add a specific platform is:

$ ionic platform add <platform_name>

Since we will be using ngCordova to manage the SQLite plugin from the Ionic app, we now have to install ngCordova in our app. Run the following bower command to download the ngCordova dependencies to the local bower 'lib' folder:

bower install ngCordova

We need to inject the JS file using a script tag in our index.html:

<script src="lib/ngCordova/dist/ng-cordova.js"></script>

Also, we need to include the ngCordova module as a dependency in our app.js main module declaration:

angular.module('starter', ['ionic', 'ngCordova'])

Now, we need to add the Cordova plugin for SQLite using the CLI command:

cordova plugin add https://github.com/litehelpers/Cordova-sqlite-storage.git

Since we will only be using the $cordovaSQLite service of ngCordova to access this plugin from our Ionic app, we do not need to inject any other plugin.

We will have the following two views in our Ionic app:

Trackers list: This list shows all the trackers we add to the DB
Tracker details: This view shows the list of data entries we make for a specific tracker

We need to create the routes by registering the states for the two views we want to create.
We need to add the following config block code for our 'starter' module in the app.js file:

.config(function($stateProvider, $urlRouterProvider) {
  $urlRouterProvider.otherwise('/');
  $stateProvider.state('home', {
    url: '/',
    controller: 'TrackersListCtrl',
    templateUrl: 'js/trackers-list/template.html'
  });
  $stateProvider.state('tracker', {
    url: '/tracker/:id',
    controller: 'TrackerDetailsCtrl',
    templateUrl: 'js/tracker-details/template.html'
  });
});

Both views have similar functionality but display different entities. Our first view will display a list of trackers from the SQLite DB table and also provide features to add a new tracker or delete an existing one. Create a new folder named trackers-list where we can store our controller and template for the view. We will also abstract our SQLite DB access code into an Ionic factory. We will implement the following methods:

initDB: This will initialize or create the table for this entity if it does not exist
getAllTrackers: This will get all the tracker rows from the created table
addNewTracker: This is a method to insert a new row for a new tracker into the table
deleteTracker: This is a method to delete a specific tracker using its ID
getTracker: This will get a specific tracker from the cached list using an ID, to display it anywhere

We will be injecting the $cordovaSQLite service into our factory to interact with our SQLite DB. We can open an existing DB or create a new DB using the command $cordovaSQLite.openDB("myApp.db"). We have to store the object reference returned from this method call, so we will keep it in a module-level variable called db. We have to pass this object reference to all our future $cordovaSQLite service calls.

$cordovaSQLite has a handful of methods that provide different features:

openDB: This is a method to establish a connection to an existing DB or create a new DB
execute: This is a method to execute a single SQL query
insertCollection: This is a method to insert values in bulk
nestedExecute: This is a method to run nested queries
deleteDB: This is a method to delete a particular DB

We will see the usage of openDB and execute in this post. In our factory, we will create a standard method, runQuery, to adhere to the DRY (Don't Repeat Yourself) principle. The code for the runQuery function is as follows:

function runQuery(query, dataParams, successCb, errorCb) {
  $ionicPlatform.ready(function() {
    $cordovaSQLite.execute(db, query, dataParams).then(function(res) {
      successCb(res);
    }, function(err) {
      errorCb(err);
    });
  }.bind(this));
}

In the preceding code, we pass the query as a string, dataParams (the dynamic query parameters) as an array, and successCb/errorCb as callback functions. We should always ensure that any Cordova plugin code is called only after the Cordova ready event has fired, which the $ionicPlatform.ready() method guarantees. We then call the execute method of the $cordovaSQLite service, passing the db object reference, query, and dataParams as arguments. The method returns a promise to which we register callbacks using the .then method. We pass the results or the error back through the success callback or error callback respectively.

Now, we will write the code for each of the methods to initialize the DB, insert a new row, fetch all rows, and delete a row.
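For orientation, here is a hedged sketch of the factory shell into which the methods described below can be plugged. The factory name TrackersDB and the placeholder method bodies are assumptions made for illustration; the original post only shows the individual methods:

angular.module('starter')
  .factory('TrackersDB', function($q, $ionicPlatform, $cordovaSQLite) {
    var db; // module-level reference returned by $cordovaSQLite.openDB

    // Shared helper, as shown above.
    function runQuery(query, dataParams, successCb, errorCb) {
      $ionicPlatform.ready(function() {
        $cordovaSQLite.execute(db, query, dataParams).then(function(res) {
          successCb(res);
        }, function(err) {
          errorCb(err);
        });
      });
    }

    // The real implementations follow in the next sections; these
    // placeholders only illustrate the shape of the factory.
    function initDB() { /* see "initDB Method" below */ }
    function getAllTrackers() { /* see "getAllTrackers Method" below */ }
    function addNewTracker(name) { /* see "addNewTracker Method" below */ }
    function deleteTracker(id) { /* see "deleteTracker Method" below */ }
    function getTracker(id) { /* returns a tracker from the cached list */ }

    return {
      initDB: initDB,
      getAllTrackers: getAllTrackers,
      addNewTracker: addNewTracker,
      deleteTracker: deleteTracker,
      getTracker: getTracker
    };
  });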
initDB Method:

function initDB() {
  db = $cordovaSQLite.openDB("myapp.db");
  var query = "CREATE TABLE IF NOT EXISTS trackers_list (id integer primary key autoincrement, name string)";
  runQuery(query, [], function(res) {
    console.log("table created");
  }, function(err) {
    console.log(err);
  });
}

In the preceding code, the openDB method is used to establish a connection with an existing DB or create a new one. Then, we run the query to create a new table called trackers_list if it does not already exist. We define an id column as an auto-incrementing integer primary key and a name column of type string.

addNewTracker Method:

function addNewTracker(name) {
  var deferred = $q.defer();
  var query = "INSERT INTO trackers_list (name) VALUES (?)";
  runQuery(query, [name], function(response) {
    // Success callback
    console.log(response);
    deferred.resolve(response);
  }, function(error) {
    // Error callback
    console.log(error);
    deferred.reject(error);
  });
  return deferred.promise;
}

In the preceding code, we take name as an argument, which is passed into the insert query. We write the insert query to add a new row to the trackers_list table, where the ID will be auto-generated. We pass dynamic query parameters using the ? character in our query string, which will be replaced by the elements of the dataParams array passed as the second argument to the runQuery method. We also use the $q library to return a promise from our factory methods so that controllers can manage the asynchronous calls.

getAllTrackers Method:

This method is the same as the addNewTracker method, only without the name parameter, and it uses the following query:

var query = "SELECT * from trackers_list";

This method will return a promise, which when resolved will give the response from the $cordovaSQLite method. The response object has the following structure:

{
  insertId: <specific_id>,
  rows: { item: function, length: <total_no_of_rows> },
  rowsAffected: 0
}

The response object has an insertId property representing the new ID generated for the row, a rowsAffected property giving the number of rows affected by the query, and a rows object with an item method, to which we can pass the index of a row to retrieve it.

We will write the following code in the controller to convert the response.rows object into an iterable array of rows to be displayed using the ng-repeat directive:

for (var i = 0; i < response.rows.length; i++) {
  $scope.trackersList.push({
    id: response.rows.item(i).id,
    name: response.rows.item(i).name
  });
}

The code in the template to display the list of trackers is as follows:

<ion-item ui-sref="tracker({id:tracker.id})" class="item-icon-right"
          ng-repeat="tracker in trackersList track by $index">
  {{tracker.name}}
  <ion-delete-button class="ion-minus-circled"
                     ng-click="deleteTracker($index, tracker.id)">
  </ion-delete-button>
  <i class="icon ion-chevron-right"></i>
</ion-item>

deleteTracker Method:

function deleteTracker(id) {
  var deferred = $q.defer();
  var query = "DELETE FROM trackers_list WHERE id = ?";
  runQuery(query, [id], function(response) {
    … [Same code as the addNewTracker method]
}

The deleteTracker method has the same code as the addNewTracker method; the only changes are the query and the argument passed. We pass id as the argument to be used in the WHERE clause of the delete query, so the row with that specific ID is deleted.
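To tie everything together, here is a hedged sketch of how TrackersListCtrl (registered in the routes earlier) could consume such a factory. The factory name TrackersDB and the refresh logic are assumptions for illustration and are not code from the original post:

angular.module('starter')
  .controller('TrackersListCtrl', function($scope, TrackersDB) {
    $scope.trackersList = [];

    // Re-reads all trackers and rebuilds the array bound to ng-repeat.
    function refresh() {
      TrackersDB.getAllTrackers().then(function(response) {
        $scope.trackersList = [];
        for (var i = 0; i < response.rows.length; i++) {
          $scope.trackersList.push({
            id: response.rows.item(i).id,
            name: response.rows.item(i).name
          });
        }
      });
    }

    // In a real app you would wait for the table to be created
    // before querying; this sketch keeps it simple.
    TrackersDB.initDB();
    refresh();

    $scope.addTracker = function(name) {
      TrackersDB.addNewTracker(name).then(refresh);
    };

    // Matches ng-click="deleteTracker($index, tracker.id)" in the template.
    $scope.deleteTracker = function(index, id) {
      TrackersDB.deleteTracker(id).then(function() {
        $scope.trackersList.splice(index, 1);
      });
    };
  });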
You can implement your own version of this app or even use this sample code for any other use case. The tracker details view is implemented in the same way, storing data in a tracker_entries table that has a tracker_id foreign key. The same ID is used in the SELECT query to fetch the entries for a specific tracker on its detail view. The complete, working code for the app developed during this tutorial is available on GitHub.

About the author
Rahat Khanna is a techno nerd experienced in developing web and mobile apps for many international MNCs and start-ups. He completed his Bachelor of Technology with a specialization in computer science and engineering. During the last 7 years, he has worked for a multinational IT services company and also ran his own entrepreneurial venture in his early twenties. He has worked on projects ranging from static HTML websites to scalable web applications and engaging mobile apps. Along with his current job as a senior UI developer at Flipkart, a billion-dollar e-commerce firm, he blogs on the latest technology frameworks on sites such as www.airpair.com, appsonmob.com, and so on, and delivers talks at community events. He has been helping individual developers and start-ups with their Ionic projects to deliver amazing mobile apps.