How-To Tutorials


Chaos Engineering: managing complexity by breaking things

Richard Gall
20 Apr 2018
7 min read
Chaos Engineering is based on a fundamental assertion about software infrastructure today: that it is inherently chaotic. Or, to be more specific, it is chaotic because it is complex. Whereas software infrastructure used to be centralized, owned and licensed by large enterprise vendors, today much of the software that comprises infrastructure is open source. This is where we get back to chaos - because software infrastructure is made up of many different parts, the way those parts interact can be unpredictable. Chaos Engineering is an attempt to acknowledge that fact and develop software accordingly.

Who invented Chaos Engineering?

Chaos Engineering began at Netflix. That makes sense when you consider the complexity of the Netflix technology stack and the way the company has scaled over the last five years or so. Netflix built a number of tools to help adopt this chaos-first approach, the most prominent being Chaos Monkey. First launched in 2011 and open-sourced in 2012, Chaos Monkey is a tool that randomly selects instances in production and pulls them down; a little bit like monkeys pulling off your windscreen wipers in a safari park.

However, Chaos Monkey became part of a wider suite of tools - called the Simian Army - that Netflix built to cause chaos in different parts of its infrastructure. Here are the other two components used to simulate chaos:

Chaos Gorilla causes big trouble by pulling down an entire AWS availability zone.
Latency Monkey delays communication, essentially simulating poor network performance.

From that point Chaos Engineering grew. A number of large Silicon Valley organizations have adopted similar approaches. For example, Facebook's Project Storm simulates data center failures on a huge scale, while Uber uses a tool called uDestroy. Slack has recently spoken in detail about the importance of stress testing its software too; the company is looking to build an engineering team dedicated to Chaos Engineering to improve Slack's reliability.

One of the most interesting figures in Chaos Engineering is Kolton Andrus. Andrus used to work at Amazon and Google, but today he is the CEO and founder of Gremlin, a startup that "helps engineers build resilient systems". Essentially, Andrus helped to develop the concept of Chaos Engineering while he was working at Netflix. Gremlin is his vehicle for making it accessible to others.

Chaos Engineering in practice

Now the conceptual stuff is out of the way, here's how Chaos Engineering works. It's actually quite straightforward: Chaos Engineering simulates all sorts of unpredictable situations and scenarios in order to see how the system responds. It's effectively a form of stress testing. As we've seen, over the past few years companies have built their own tools to allow them to stress test their infrastructure. Gremlin, however, is taking the approach of offering this as a service. Its product, described as 'resiliency-as-a-service', is a whole library of 'attacks' that can replicate different types of outages within a system. These are what it calls 'chaos experiments', which allow you to 'identify weak points in your system and fix them before they become a problem'. In this sense, Chaos Engineering is a bit like taking the principles of penetration testing and applying them to software testing more broadly. By simulating everything that could possibly go wrong, it allows you to make much better optimization decisions.
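To make that experiment loop concrete, here is a minimal, illustrative sketch in Kotlin of the pattern described above: define a steady state, inject a failure against a randomly chosen target, then check whether the steady state still holds. Everything in it (the Instance type, the fleet, the steady-state rule) is invented for the example; real tools such as Chaos Monkey or Gremlin's agents run attacks against live infrastructure rather than in-memory objects.

// A toy "chaos experiment": hypothesis, failure injection, verification.
data class Instance(val id: String, var running: Boolean = true)

// The steady-state definition for this toy system: at least two replicas serving traffic.
fun steadyState(instances: List<Instance>): Boolean =
    instances.count { it.running } >= 2

// Inject a failure: pick a random running instance and "terminate" it.
fun killRandomInstance(instances: List<Instance>) {
    val victim = instances.filter { it.running }.random()
    println("Chaos: terminating ${victim.id}")
    victim.running = false // in reality, an API call that stops a VM or container
}

fun main() {
    val fleet = listOf(Instance("i-001"), Instance("i-002"), Instance("i-003"))
    check(steadyState(fleet)) { "System is not in steady state; aborting experiment" }
    killRandomInstance(fleet)
    // The hypothesis: the system still meets its steady-state definition after the failure.
    println(if (steadyState(fleet)) "Hypothesis held" else "Weak point found: investigate")
}

The value is in the final check: if the hypothesis fails, you have found a weakness before your users do.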
The principles of Chaos Engineering are documented here. This is effectively its 'manifesto'. There's a lot in there worth reading, but here are the five principles that any sort of testing or experimentation should aspire to:

Base your testing hypothesis on steady state behavior. Consider your infrastructure holistically; making individual parts work is important, but it is not the priority.
Simulate a variety of real-world events. This could be hardware or software failures, or simply external changes like spikes in traffic. What's important is that they're all unpredictable.
Test in production. Your tests should be authentic.
Automate! Testing can be laborious and require a lot of manual work. Make use of automation tools to run lots of different tests without taking up too much of your time.
Don't cause unnecessary pain. While it's important that your stress tests are authentic, the impact must be contained and minimized by the engineer.

Why Chaos Engineering now?

Chaos Engineering isn't particularly new. As you've seen, Netflix has been doing it since 2011. But it does feel more urgent and relevant today. That's because the complexity of the software infrastructure behind many of the biggest Silicon Valley companies is now mainstream. It's normal. Cloud isn't an exotic buzzword any more - it's a reality (a reality that often has failures). Microservices are common - they're a commonsense way of building better applications and websites.

Alongside this increased complexity, there is also a growing awareness of how much software outages can cost businesses. In a white paper, Gremlin makes a big deal out of how much money is lost due to outages. It cites BA's system failure in summer 2017, which left passengers stranded all over the world and was estimated to have cost BA $135 million. It also refers to the Amazon S3 outage in March 2017, which is believed to have cost Amazon's customers $150 million.

So - outages cost money. Yes, it's marketing spiel from Gremlin, but it's also true. It doesn't take a genius to work out that if your eCommerce site is down for an hour, you're going to lose a lot of money. Because software performance is so tied up with business performance, it feels incredibly fragile. That's why Chaos Engineering is perhaps more important and popular than ever. It's a way of countering that fragility.

The key challenges of Chaos Engineering

Chaos Engineering poses many challenges to software engineering teams. First and foremost, it requires a big cultural change. If you're intent on breaking everything, there are no rules about how things should work or what you're trying to build. Instead you're looking for the best way to build software that performs for the user.

More practically, Chaos Engineering isn't that easy to do in a cost-effective manner. Everything Gremlin details in its white paper is very much true - outages do cost a hell of a lot. But creative destruction and experimentation can feel like an expensive route through software projects. It's not hard to see how it might appear self-indulgent, especially to a company or organization where software isn't properly understood. And more to the point, how often do businesses actually do the smart thing when they're building software? Long-term projects are always difficult. So much software evolves pragmatically - often for the worse. Adding in an extra layer of experimentation and detailed testing is a weird mix of the bacchanalian and the hyper-organized, something that many organizations just couldn't process or properly understand.
Chaos Engineering and the future of software development

Chaos Engineering certainly looks like the future of software development. The only question is whether services like those provided by Gremlin will take off. To understand the true value of stress testing your infrastructure, you need at least a modicum of awareness of the complexity of that infrastructure. Indeed, you probably need to have a conversation about which services and dependencies are most business critical - or rather, which ones most impact the user. That's something this TechCrunch piece addresses:

"Testing can... be very political. Finding the points of failure in a system might force deep conversations about a particular software architecture and its robustness in the face of tough situations. A particular company might be deeply invested in a specific technical roadmap (e.g. microservices) that chaos engineering tests show is not as resilient to failures as originally predicted."

This means there is going to be a question mark over the extent to which Chaos Engineering ever really enters the mainstream. How many businesses want to have these conversations? It's not just about the inclination - it's also about the time and money. Chaos Engineering is an innovative approach that really calls people's bluff when they talk about innovation. It asks difficult questions about how and why you innovate: do you do new things because you think you should? Is this new thing going to be good for the business? And how well will it work for users? Of course these questions are vital when you're building software. But they rarely make building software easier.

How to Secure and Deploy an Android App

Sugandha Lahoti
20 Apr 2018
17 min read
In this article, we will be covering two extremely important Android-related topics: Android application security and Android application deployment. We will kick off the post by discussing Android application security.

Securing an Android application

It should come as no surprise that security is an important consideration when building software. Besides the security measures put in place in the Android operating system, it is important that developers pay extra attention to ensure that their applications meet the expected security standards. In this section, a number of important security considerations and best practices will be broken down for your understanding. Following these best practices will make your applications less vulnerable to malicious programs that may be installed on a client device.

Data storage

All things being equal, the privacy of data saved by an application to a device is the most common security concern in developing an Android application. Some simple rules can be followed to make your application data more secure.

Securing your data when using internal storage

As we saw in the previous chapter, internal storage is a good way to save private data on a device. Every Android application has a corresponding internal storage directory in which private files can be created and written to. These files are private to the creating application, and as such cannot be accessed by other applications on the client device. As a rule of thumb, if data should only be accessible by your application and it is possible to store it in internal storage, do so. Feel free to refer to the previous chapter for a refresher on how to use internal storage.

Securing your data when using external storage

External storage files are not private to applications and, as such, can easily be accessed by other applications on the same client device. As a result, you should consider encrypting application data before storing it in external storage. There are a number of libraries and packages that can be used to encrypt data prior to saving it to external storage; Facebook's Conceal (http://facebook.github.io/conceal/) library is a good option for external-storage data encryption. As another rule of thumb, do not store sensitive data in external storage at all, because external storage files can be manipulated freely. Validation should also be performed on any input retrieved from external storage, since data stored there is untrustworthy by nature.

Using content providers

Content providers can either prevent or enable external access to your application data. Use the android:exported attribute when registering your content provider in the manifest file to specify whether external access to the content provider should be permitted. Set android:exported to true if you wish the content provider to be exported, otherwise set the attribute to false. In addition to this, content provider query methods—for example, query(), update(), and delete()—should be used with parameterized selection arguments to prevent SQL injection (a code injection technique that involves the execution of malicious SQL statements in an entry field by an attacker).
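As a brief sketch of the two recommendations above, here is what writing a private file to internal storage and querying a content provider with selection arguments (rather than concatenated strings) might look like in Kotlin. The file name, provider URI, and column names are placeholders invented for illustration.

import android.content.Context
import android.net.Uri

// Write sensitive data to the app's private internal storage directory.
fun savePrivateNote(context: Context, note: String) {
    context.openFileOutput("private_note.txt", Context.MODE_PRIVATE).use { stream ->
        stream.write(note.toByteArray())
    }
}

// Query a content provider using selection arguments; the user-supplied value is never
// concatenated into the selection string, which is what prevents SQL injection.
fun printItemsByOwner(context: Context, owner: String) {
    val uri = Uri.parse("content://com.example.provider/items") // hypothetical authority
    context.contentResolver.query(
        uri,
        arrayOf("_id", "name"), // projection
        "owner = ?",            // selection with a placeholder
        arrayOf(owner),         // selection arguments supplied separately
        null                    // sort order
    )?.use { cursor ->
        while (cursor.moveToNext()) {
            println(cursor.getString(cursor.getColumnIndexOrThrow("name")))
        }
    }
}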
Networking security: best practices for your Android app development

There are a number of best practices that should be followed when performing network transactions via an Android application. These best practices can be split into different categories; we shall cover Internet Protocol (IP) networking and telephony networking best practices in this section.

IP networking

When communicating with a remote computer via IP, it is important to ensure that your application makes use of HTTPS wherever possible (that is, wherever the server supports it). One major reason for doing this is that devices often connect to insecure networks, such as public wireless connections. HTTPS ensures encrypted communication between clients and servers, regardless of the network they are connected to. In Java, an HttpsURLConnection can be used for secure data transfer over a network. It is important to note that data received via an insecure network connection should not be trusted.
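As a rough illustration (the endpoint below is made up, and on Android this work belongs off the main thread), a simple HTTPS request with HttpsURLConnection could be sketched like this:

import java.net.URL
import javax.net.ssl.HttpsURLConnection

// Fetch a resource over HTTPS; the URL is a placeholder used purely for illustration.
fun fetchProfileJson(): String {
    val connection = URL("https://api.example.com/profile").openConnection() as HttpsURLConnection
    connection.connectTimeout = 10_000
    connection.readTimeout = 10_000
    return try {
        connection.inputStream.bufferedReader().use { it.readText() }
    } finally {
        connection.disconnect()
    }
}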
Telephony networking

In instances where data needs to be transferred freely between a server and client applications, Firebase Cloud Messaging (FCM)—along with IP networking—should be utilized instead of other means, such as the Short Messaging Service (SMS) protocol. FCM is a multi-platform messaging solution that facilitates the seamless and reliable transfer of messages between applications. SMS is not a good candidate for transferring data messages, because:

It is not encrypted
It is not strongly authenticated
Messages sent via SMS are subject to spoofing
SMS messages are subject to interception

Input validation

The validation of user input is extremely important in order to avoid security risks. One such risk, as explained in the Using content providers section, is SQL injection. The malicious injection of SQL script can be prevented by the use of parameterized queries and the extensive sanitization of inputs used in raw SQL queries. In addition to this, inputs retrieved from external storage must be appropriately validated because external storage is not a trusted data source.

Working with user credentials

The risk of phishing can be alleviated by reducing how often an application requires user credential input. Instead of constantly requesting user credentials, consider using an authorization token. Eliminate the need for storing usernames and passwords on the device; instead, make use of a refreshable authorization token.

Code obfuscation

Before publishing an Android application, it is imperative to utilize a code obfuscation tool, such as ProGuard, to prevent individuals from getting unhindered access to your source code through means such as decompilation. ProGuard comes prepackaged with the Android SDK, so no dependency inclusion is required. It is automatically included in the build process if you specify your build type as release. You can find out more about ProGuard here: https://www.guardsquare.com/en/proguard.
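If your module uses the Kotlin Gradle DSL, the release configuration that switches this shrinking and obfuscation on typically looks something like the sketch below. This assumes a module-level build.gradle.kts with the Android application plugin applied; recent Android Gradle Plugin versions perform the shrinking with R8, which reads the same ProGuard rule files.

// Module-level build.gradle.kts (sketch): enable code shrinking and obfuscation for release builds.
android {
    buildTypes {
        getByName("release") {
            isMinifyEnabled = true
            proguardFiles(
                getDefaultProguardFile("proguard-android-optimize.txt"),
                "proguard-rules.pro"
            )
        }
    }
}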
Securing broadcast receivers

By default, a broadcast receiver component is exported and as a result can be invoked by other applications on the same device. You can control which applications can access your app's broadcast receiver by applying security permissions to it. Permissions can be set for broadcast receivers in an application's manifest file with the <receiver> element.

Securing dynamically loaded code

In scenarios in which the dynamic loading of code by your application is necessary, you must ensure that the code being loaded comes from a trusted source. In addition to this, you must reduce the risk of code tampering at all costs. Loading and executing code that has been tampered with is a huge security threat. When code is being loaded from a remote server, ensure it is transferred over a secure, encrypted connection. Keep in mind that dynamically loaded code runs with the same security permissions as your application (the permissions you defined in your application's manifest file).

Securing services

Unlike broadcast receivers, services are not exported by the Android system by default. A service is only exported by default when an intent filter is added to its declaration in the manifest file. The android:exported attribute should be used to ensure services are exported only when you want them to be. Set android:exported to true when you want a service to be exported and false otherwise.

Deploying your Android application

So far, we have taken an in-depth look at the Android system, application development in Android, and some other important topics, such as Android application security. It is time for us to cover our final topic for this article pertaining to the Android ecosystem—launching and publishing an Android application. You may be wondering at this juncture what the words launch and publish mean. A launch is an activity that involves the introduction of a new product to the public (end users). Publishing an Android application is simply the act of making an Android application available to users. Various activities and processes must be carried out to ensure the successful launch of an Android application. There are 15 of these activities in all. They are:

Understanding the Android developer program policies
Preparing your Android developer account
Localization planning
Planning for simultaneous release
Testing against the quality guidelines
Building a release-ready APK
Planning your application's Play Store listing
Uploading your application package to the alpha or beta channel
Device compatibility definition
Pre-launch report assessment
Pricing and application distribution setup
Distribution option selection
In-app products and subscriptions setup
Determining your application's content rating
Publishing your application

Wow! That's a long list. Don't fret if you don't understand everything on the list. Let's look at each item in more detail.

Understanding the Android developer program policies

There is a set of developer program policies created for the sole purpose of making sure that the Play Store remains a trusted source of software for its users. Consequences exist for the violation of these policies. As a result, it is important that you peruse and fully understand these developer policies—their purposes and consequences—before continuing with the process of launching your application.

Preparing your Android developer account

You will need an Android developer account to launch your application on the Play Store. Ensure that you set one up by signing up for a developer account and confirming the accuracy of your account details. If you ever need to sell products on an Android application of yours, you will also need to set up a merchant account.

Localization planning

Sometimes, for the purpose of localization, you may have more than one copy of your application, with each localized to a different language. When this is the case, you will need to plan for localization early on and follow the recommended localization checklist for Android developers. You can view this checklist here: https://developer.android.com/distribute/best-practices/launch/localization-checklist.html.

Planning for simultaneous release

You may want to launch a product on multiple platforms. This has a number of advantages, such as increasing the potential market size of your product, reducing the barrier of access to your product, and maximizing the number of potential installations of your application. Releasing on numerous platforms simultaneously is generally a good idea. If you wish to do this with any product of yours, ensure you plan for it well in advance. In cases where it is not possible to launch an application on multiple platforms at once, provide a means by which interested potential users can submit their contact details so that you can get in touch with them once your product is available on their platform of choice.

Testing against the quality guidelines

Quality guidelines provide testing templates that you can use to confirm that your application meets the fundamental functional and non-functional requirements expected by Android users. Ensure that you run your applications through these quality guides before launch. You can access them here: https://developer.android.com/develop/quality-guidelines/index.html.

Building a release-ready application package (APK)

A release-ready APK is an Android application that has been packaged with optimizations and then built and signed with a release key. Building a release-ready APK is an important step in the launch of an Android application, so pay extra attention to it.

Planning your application's Play Store listing

This step involves the collation of all resources necessary for your product's Play Store listing. These resources include, but are not limited to, your application's logo, screenshots, descriptions, promotional graphics, and videos, if any. Ensure you include a link to your application's privacy policy along with your application's Play Store listing. It is also important to localize your application's product listing to all languages that your application supports.

Uploading your application package to the alpha or beta channel

As testing is an efficient and battle-tested way of detecting defects in software and improving software quality, it is a good idea to upload your application package to the alpha and beta channels to facilitate alpha and beta testing of your product. Alpha testing and beta testing are both types of acceptance testing.

Device compatibility definition

This step involves the declaration of the Android versions and screen sizes that your application was developed to work on. It is important to be as accurate as possible here, as declaring inaccurate Android versions and screen sizes will invariably lead to users experiencing problems with your application.

Pre-launch report assessment

Pre-launch reports are used to identify issues found during the automatic testing of your application on various Android devices. Pre-launch reports will be delivered to you, if you opt in to them, when you upload an application package to an alpha or beta channel.

Pricing and application distribution setup

First, determine how you want to monetize your application. After determining this, set up your application as either a free install or a paid download. Once you have set the desired pricing of your application, select the countries you wish to distribute your application to.
Distribution option selection This step involves the selection of devices and platforms—for example, Android TV and Android Wear—that you wish to distribute your app on. After doing this, the Google Play team will be able to review your application. If your application is approved after its review, Google Play will make it more discoverable. In-app products and subscriptions setup If you wish to sell products within your application, you will need to set up your in-app products and subscriptions. Here, you will specify the countries that you can sell into and take care of various monetary-related issues, such as tax considerations. In this step, you will also set up your merchant account. Determining your application's content rating It is necessary that you provide an accurate rating for the application you are publishing to the Play Store. This step is mandated by the Android Developer Program Policies for good reason. It aids the appropriate age group you are targeting to discover your application. Publishing your application Once you have catered for the necessary steps prior to this, you are ready to publish your application to the production channel of the Play Store. Firstly, you will need to roll out a release. A release allows you to upload the APK files of your application and roll out your application to a specific track. At the end of the release procedure, you can publish your application by clicking Confirm rollout. So, that was all we need to know to publish a new application on the Play Store. In most cases, you will not need to follow all these steps in a linear manner, you will just need to follow a subset of the steps—more specifically, those pertaining to the type of application you wish to publish. Releasing your Android app Having signed your  application, you can proceed with completing the required application details toward the goal of releasing your app. Firstly, you need to create a suitable store listing for the application. Open the application  in the Google Play Console and navigate to the store-listing page (this can be done by selecting Store Listing on the side navigation bar). You will need to fill out all the required information in the store listing page before we proceed further. This information includes product details, such as a title, short description, full description, as well as graphic assets and categorization information—including the application type, category and content rating, contact details, and privacy policy. The Google Play Console store listing page is shown in the following screenshot: Once the store listing information has been filled in, the next thing to fill in is the pricing and distribution information. Select Pricing & distribution on the left navigation bar to open up its preference selection page. For the sake of this demonstration, we set the pricing of this app to FREE. We also selected five random countries to distribute this application to. These countries are Nigeria, India, the United States of America, the United Kingdom, and Australia: Besides selecting the type of pricing and the available countries for product distribution, you will need to provide additional preference information. The necessary information to be provided includes device category information, user program information, and consent information. It is now time to add our signed APK to our Google Play Console app. Navigate to App releases | MANAGE BETA | EDIT RELEASE. 
In the page that is presented to you, you may be asked whether you want to opt into Google play app signing: For the sake of this example, select OPT-OUT. Once OPT-OUT is selected, you will be able to choose your APK file for upload from your computer's file system. Select your APK for upload by clicking BROWSE FILES, as shown in the following screenshot: After selecting an appropriate APK, it will be uploaded to the Google Play Console. Once the upload is done, the play console will automatically add a suggested release name for your beta release. This release name is based on the version name of the uploaded APK. Modify the release name if you are not comfortable with the suggestion. Next, add a suitable release note in the text field provided. Once you are satisfied with the data you have input, save and continue by clicking the Review button at the bottom of the web page. After reviewing the beta release, you can roll it out if you have added beta testers to your app. Rolling out a beta release is not our focus, so let's divert back to our main goal: publishing the Messenger app. Having uploaded an APK for your application, you can now complete the mandatory content rating questionnaire. Click the Content rating navigation item on the sidebar and follow the instructions to do this. Once the questionnaire is complete, appropriate ratings for your application will be generated: With the completion of the content rating questionnaire, the application is ready to be published to production. Applications that are published to production are made available to all users on the Google Play Store. On the play console, navigate to App releases | Manage Production | Create releases. When prompted to upload an APK, click the ADD APK FROM LIBRARY button to the right of the screen and select the APK we previously uploaded (the APK with a version name of 1.0) and complete the necessary release details similar to how we did when creating a beta release. Click the review button at the bottom of the page once you are ready to proceed. You will be given a brief release summary in the page that follows: Go through the information presented in the summary carefully. Start the roll out to production once you have asserted that you are satisfied with the information presented to you in the summary. Once you start the roll out to production, you will be prompted to confirm your understanding that your app will become available to users of the Play Store: Click Confirm once you are ready for the app to go live on the Play Store. Congratulations! You have now published your first application to the Google Play Store! In this article, we learned how to secure and publish Android applications to the Google Play Store. We identified security threats to Android applications and fully explained ways to alleviate them, we also noted best practices to follow when developing applications for the Android ecosystem.  Finally, we took a deep dive into the process of application publication to the Play Store covering all the necessary steps for the successful publication of an Android application. You enjoyed an excerpt from the book, Kotlin Programming By Example, written by Iyanu Adelekan. This book will take on Android development with Kotlin, from building a classic game Tetris to a messenger app, a level up in terms of complexity. Build your first Android app with Kotlin Creating a custom layout implementation for your Android app  

Getting started with Kotlin programming

Sugandha Lahoti
19 Apr 2018
14 min read
Learning a programming language is a daunting experience for many people and not one that most individuals generally choose to undertake. Regardless of the problem domain that you may wish to build solutions for, be it application development, networking, or distributed systems, Kotlin programming is a good choice for the development of systems to achieve the required solutions. In other words, a developer can't go wrong with learning Kotlin.  In this article, you will learn the following: The fundamentals of the Kotlin programming language The installation of Kotlin Compiling and running Kotlin programs Working with an IDE Kotlin is a strongly-typed, object-oriented language that runs on the Java Virtual Machine (JVM) and can be used to develop applications in numerous problem domains. In addition to running on the JVM, Kotlin can be compiled to JavaScript, and as such, is an equally strong choice for developing client-side web applications. Kotlin can also be compiled directly into native binaries that run on systems without a virtual machine via Kotlin/Native. The Kotlin programming language was primarily developed by JetBrains – a company based in Saint Petersburg, Russia. The developers at JetBrains are the current maintainers of the language. Kotlin was named after Kotlin island – an island near Saint Petersburg. Kotlin was designed for use in developing industrial-strength software in many domains but has seen the majority of its users come from the Android ecosystem. At the time of writing this post, Kotlin is one of the three languages that have been declared by Google as an official language for Android. Kotlin is syntactically similar to Java. As a matter of fact, it was designed to be a better alternative to Java. As a consequence, there are numerous significant advantages to using Kotlin instead of Java in software development.  Getting started with Kotlin In order to develop the Kotlin program, you will first need to install the Java Runtime Environment (JRE) on your computer. The JRE can be downloaded prepackaged along with a Java Development Kit (JDK). For the sake of this installation, we will be using the JDK. The easiest way to install a JDK on a computer is to utilize one of the JDK installers made available by Oracle (the owners of Java). There are different installers available for all major operating systems. Releases of the JDK can be downloaded from http://www.oracle.com/technetwork/java/javase/downloads/index.html: Clicking on the JDK download button takes you to a web page where you can download the appropriate JDK for your operating system and CPU architecture. Download a JDK suitable for your computer and continue to the next section: JDK installation In order to install the JDK on your computer, check out the necessary installation information from the following sections, based on your operating system. Installation on Windows The JDK can be installed on Windows in four easy steps: Double-click the downloaded installation file to launch the JDK installer. Click the Next button in the welcome window. This action will lead you to a window where you can select the components you want to install. Leave the selection at the default and click Next. The following window prompts the selection of the destination folder for the installation. For now, leave this folder as the default (also take note of the location of this folder, as you will need it in a later step). Click Next. Follow the instructions in the upcoming windows and click Next when necessary. 
You may be asked for your administrator's password, enter it when necessary. Java will be installed on your computer. After the JDK installation has concluded, you will need to set the JAVA_HOME environment variable on your computer. To do this: Open your Control Panel. Select Edit environment variable. In the window that has opened, click the New button. You will be prompted to add a new environment variable. Input JAVA_HOME as the variable name and enter the installation path of the JDK as the variable value. Click OK once to add the environment variable. Installation on macOS In order to install the JDK on macOS, perform the following steps: Download your desired JDK .dmg file. Locate the downloaded .dmg file and double-click it. A finder window containing the JDK package icon is opened. Double-click this icon to launch the installer. Click Continue on the introduction window. Click Install on the installation window that appears. Enter the administrator login and password when required and click Install Software. The JDK will be installed and a confirmation window displayed. Installation on Linux Installation of the JDK on Linux is easy and straightforward using apt-get: Update the package index of your computer. From your terminal, run: sudo apt-get update Check whether Java is already installed by running the following: java -version You'll know Java is installed if the version information for a Java install on your system is printed. If no version is currently installed, run: sudo apt-get install default-jdk That's it! The JDK will be installed on your computer. Compiling Kotlin programs Now that we have the JDK set up and ready for action, we need to install a means to actually compile and run our Kotlin programs. Kotlin programs can be either compiled directly with the Kotlin command-line compiler or built and run with the Integrated Development Environment (IDE). Working with the command-line compiler The command-line compiler can be installed via Homebrew, SDKMAN!, and MacPorts. Another option for setting up the command-line compiler is by manual installation. Installing the command-line compiler on macOS The Kotlin command-line compiler can be installed on macOS in various ways. The two most common methods for its installation on macOS are via Homebrew and MacPorts. Homebrew Homebrew is a package manager for the macOS systems. It is used extensively for the installation of packages required for building software projects. To install Homebrew, locate your macOS terminal and run: /usr/bin/ruby -e "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/master/install)" You will have to wait a few seconds for the download and installation of Homebrew. After installation, check to see whether Homebrew is working properly by running the following command in your terminal: brew -v If the current version of Homebrew installed on your computer is printed out in the terminal, Homebrew has been successfully installed on your computer. After properly installing Homebrew, locate your terminal and execute the following command: brew install kotlin Wait for the installation to finish, after which you are ready to compile Kotlin programs with the command-line compiler. MacPorts Similar to HomeBrew, MacPorts is a package manager for macOS. Installing MacPorts is easy. It can be installed on a system by: Installing Xcode and the Xcode command-line tools. Agreeing to the Xcode license. This can be done in the terminal by running xcodebuild -license. 
Installing the required version of MacPorts. MacPorts releases can be downloaded from https://www.macports.org/install.php.

Once downloaded, locate your terminal and run port install kotlin as the superuser:

sudo port install kotlin

Installing the command-line compiler on Linux

Linux users can easily install the command-line compiler for Kotlin with SDKMAN!, which can be used to install packages on Unix-based systems such as Linux and its various distributions, for example, Fedora and Solaris. SDKMAN! can be installed in three easy steps:

Download the software onto your system with curl. Locate your terminal and run:

curl -s "https://get.sdkman.io" | bash

After you run the preceding command, a set of instructions will come up in your terminal. Follow these instructions to complete the installation. Upon completing the instructions, run:

source "$HOME/.sdkman/bin/sdkman-init.sh"

Run the following:

sdk version

If the version number of the SDKMAN! you just installed is printed in your terminal window, the installation was successful. Now that we have SDKMAN! successfully installed on our system, we can install the command-line compiler by running:

sdk install kotlin

Installing the command-line compiler on Windows

In order to use the Kotlin command-line compiler on Windows:

Download a GitHub release of the software from https://github.com/JetBrains/kotlin/releases/tag/v1.2.30
Locate and unzip the downloaded file
Open the bin folder inside the extracted kotlinc folder
Start the command prompt from that folder path

You can now make use of the Kotlin compiler from your command line.

Running your first Kotlin program

Now that we have our command-line compiler set up, let's try it out with a simple Kotlin program. Navigate to your home directory and create a new file named Hello.kt. All Kotlin files have a .kt extension appended to the end of the filename. Open the file you just created in a text editor of your choosing and input the following:

// The following program prints Hello world to the standard system output.
fun main(args: Array<String>) {
    println("Hello world!")
}

Save the changes made to the program file. After the changes have been saved, open your terminal window and input the following command:

kotlinc Hello.kt -include-runtime -d hello.jar

The preceding command compiles your program into an executable, hello.jar. The -include-runtime flag specifies that you want the compiled JAR to be self-contained: by adding this flag to the command, the Kotlin runtime library is included in your JAR. The -d flag specifies the name of the compiler's output, which in this case is hello.jar. Now that we have compiled our first Kotlin program, we need to run it—after all, there's no fun in writing programs if they can't be run later on. Open your terminal, if it's not already open, and navigate to the directory where the JAR was saved (in this case, the home directory). To run the compiled JAR, perform the following:

java -jar hello.jar

After running the preceding command, you should see Hello world! printed on your display. Congratulations, you have just written your first Kotlin program!

Writing scripts with Kotlin

As previously stated, Kotlin can be used to write scripts. Scripts are programs that are written for specific runtime environments for the common purpose of automating the execution of tasks. In Kotlin, scripts have the .kts file extension appended to the file name. Writing a Kotlin script is similar to writing a Kotlin program.
In fact, a script written in Kotlin is exactly like a regular Kotlin program! The only significant difference between a Kotlin script and regular Kotlin program is the absence of a main function. Create a file in a directory of your choosing and name it NumberSum.kts. Open the file and input the following program: val x: Int = 1 val y: Int = 2 val z: Int = x + y println(z) As you've most likely guessed, the preceding script will print the sum of 1 and 2 to the standard system output. Save the changes to the file and run the script: kotlinc -script NumberSum.kts A significant thing to take note of is that a Kotlin script does not need to be compiled. Using the REPL REPL is an acronym that stands for Read–Eval–Print Loop. An REPL is an interactive shell environment in which programs can be executed with immediate results given. The interactive shell environment can be invoked by running the kotlinc command without any arguments. The Kotlin REPL can be started by running kotlinc in your terminal. If the REPL is successfully started, a welcome message will be printed in your terminal followed by >>> on the next line, alerting us that the REPL is awaiting input. Now you can type in code within the terminal, as you would in any text editor, and get immediate feedback from the REPL. This is demonstrated in the following screenshot: In the preceding screenshot, the 1 and 2 integers are assigned to x and y, respectively. The sum of x and y is stored in a new z variable and the value held by z is printed to the display with the print() function. Working with an IDE Writing programs with the command line has its uses, but in most cases, it is better to use software built specifically for the purpose of empowering developers to write programs. This is especially true in cases where a large project is being worked on. An IDE is a computer application that hosts a collection of tools and utilities for computer programmers for software development. There are a number of IDEs that can be used for Kotlin development. Out of these IDEs, the one with the most comprehensive set of features for the purpose of developing Kotlin applications is IntelliJ IDEA. As IntelliJ IDEA is built by the creators of Kotlin, there are numerous advantages in using it over other IDEs, such as an unparalleled feature set of tools for writing Kotlin programs, as well as timely updates that cater to the newest advancements and additions to the Kotlin programming language. Installing IntelliJ IDEA IntelliJ IDEA can be downloaded for Windows, macOS, and Linux directly from JetBrains' website: https://www.jetbrains.com/idea/download. On the web page, you are presented with two available editions for download: a paid Ultimate edition and a free Community edition. The Community edition is sufficient if you wish to run the programs in this chapter. Select the edition you wish to download: Once the download is complete, double-click on the downloaded file and install it on your operating system as you would any program. Setting up a Kotlin project with IntelliJ The process of setting up a Kotlin project with IntelliJ is straightforward: Start the IntelliJ IDE application. Click Create New Project. Select Java from the available project options on the left-hand side of the newly opened window. Add Kotlin/JVM as an additional library to the project. Pick a project SDK from the drop-down list in the window. Click Next. Select a template if you wish to use one, then continue to the next screen. 
Provide a project name in the input field provided. Name the project HelloWorld for now. Set a project location in the input field. Click Finish. Your project will be created and you will be presented with the IDE window: To the left of the window, you will immediately see the project view. This view shows the logical structure of your project files. Two folders are present. These are: .idea: This contains IntelliJ's project-specific settings files. src: This is the source folder of your project. You will place your program files in this folder. Now that the project is set up, we will write a simple program. Add a file named hello.kt to the source folder (right-click the src folder, select New | Kotlin File/Class, and name the file hello). Copy and paste the following code into the file: fun main(args: Array<String>) { println("Hello world!") } To run the program, click the Kotlin logo adjacent to the main function and select Run HelloKt: The project will be built and run, after which, Hello world! will be printed to the standard system output. Advantages of Kotlin As previously discussed, Kotlin was designed to be a better Java, and as such, there are a number of advantages to using Kotlin over Java: Null safety: One common occurrence in Java programs is the throwing of NullPointerException. Kotlin alleviates this issue by providing a null-safe type system. Presence of extension functions: Functions can easily be added to classes defined in program files to extend their functionality in various ways. This can be done with extension functions in Kotlin. Singletons: It is easy to implement the singleton pattern in Kotlin programs. The implementation of a singleton in Java takes considerably more effort than when it is done with Kotlin. Data classes: When writing programs, it is a common scenario to have to create a class for the sole purpose of holding data in variables. This often leads to the writing of many lines of code for such a mundane task. Data classes in Kotlin make it extremely easy to create such classes that hold data with a single line of code. Function types: Unlike Java, Kotlin has function types. This enables functions to accept other functions as parameters and the definition of functions that return functions. To summarize, we introduced Kotlin and explored the fundamentals. In the process, we learned how to install, write and run Kotlin scripts on a computer and how to use the REPL and IDE. This tutorial is an excerpt from the book, Kotlin Programming By Example, written by Iyanu Adelekan. This book will help you enhance your Kotlin programming skills by building real-world applications. Build your first Android app with Kotlin How to convert Java code into Kotlin  

How to build a basic server side chatbot using Go

Sunith Shetty
19 Apr 2018
20 min read
It's common nowadays to see chatbots (also known as agents) service the needs of website users for a wide variety of purposes, from deciding what shoes to purchase to providing tips on what stocks would look good on a client's portfolio. In a real-world scenario, this functionality would be an attractive proposition for both product sales and technical support usage scenarios. For instance, if a user has a particular question on a product listed on the website, they can freely browse through the website and have a live conversation with the agent. In today’s tutorial, we will examine the functionality required to implement the live chat feature on the server side chatbot. Let’s look at how to implement a live chat feature on various product related pages. In order to have the chat box present in all sections of the website, we will need to place the chat box div container right below the primary content div container in the web page layout template (layouts/webpage_layout.tmpl): <!doctype html> <html> {{ template "partials/header_partial" . }} <div id="primaryContent" class="pageContent"> {{ template "pagecontent" . }} </div> <div id="chatboxContainer" class="containerPulse"> </div> {{ template "partials/footer_partial" . }} </html> The chat box will be implemented as a partial template in the chatbox_partial.tmpl source file in the shared/templates/partials folder: <div id="chatbox"> <div id="chatboxHeaderBar" class="chatboxHeader"> <div id="chatboxTitle" class="chatboxHeaderTitle"><span>Chat with {{.AgentName}}</span></div> <div id="chatboxCloseControl">X</div> </div> <div class="chatboxAgentInfo"> <div class="chatboxAgentThumbnail"><img src="{{.AgentThumbImagePath}}" height="81px"></div> <div class="chatboxAgentName">{{.AgentName}}</div> <div class="chatboxAgentTitle">{{.AgentTitle}}</div> </div> <div id="chatboxConversationContainer"> </div> <div id="chatboxMsgInputContainer"> <input type="text" id="chatboxInputField" placeholder="Type your message here..."> </input> </div> <div class="chatboxFooter"> <a href="http://www.isomorphicgo.org" target="_blank">Powered by Isomorphic Go</a> </div> </div> This is the HTML markup required to implement the wireframe design of the live chat box. Note that the input textfield having the id "chatboxInputField". This is the input field where the user will be able to type their message. Each message created, both the one that the user writes, as well as the one that the bot writes, will use the livechatmsg_partial.tmpl template: <div class="chatboxMessage"> <div class="chatSenderName">{{.Name}}</div> <div class="chatSenderMsg">{{.Message}}</div> </div> Each message is inside its own div container that has two div containers (shown in bold) housing the name of the sender of the message and the message itself. There are no buttons necessary in the live chat feature, since we will be adding an event listener to listen for the press of the Enter key to submit the user's message to the server over the WebSocket connection. Now that we've implemented the HTML markup that will be used to render the chat box, let's examine the functionality required to implement the live chat feature on the server side. Live chat's server-side functionality When the live chat feature is activated, we will create a persistent, WebSocket connection, between the web client and the web server. The Gorilla Web Toolkit provides an excellent implementation of the WebSocket protocol in their websocket package. 
To fetch the websocket package, you may issue the following command: $ go get github.com/gorilla/websocket The Gorilla web toolkit also provides a useful example web chat application. Rather than reinventing the wheel, we will repurpose Gorilla's example web chat application to fulfill the live chat feature. The source files needed from the web chat example have been copied over to the chat folder. There are three major changes we need to make to realize the live chat feature using the example chat application provided by Gorilla: Replies from the chatbot (the agent) should be targeted to a specific user, and not be sent out to every connected user We need to create the functionality to allow the chatbot to send a message back to the user We need to implement the front-end portion of the chat application in Go Let's consider each of these three points in more detail. First, Gorilla's web chat example is a free-for-all chat room. Any user can come in, type a message, and all other users connected to the chat server will be able to see the message. A major requirement for the live chat feature is that each conversation between the chatbot and the human should be exclusive. Replies from the agent must be targeted to a specific user, and not to all connected users. Second, the example web chat application from the Gorilla web toolkit doesn't send any messages back to the user. This is where the custom chatbot comes into the picture. The agent will communicate directly with the user over the established WebSocket connection. Third, the front-end portion of the example web chat application is implemented as a HTML document containing inline CSS and JavaScript. As you may have guessed already, we will implement the front-end portion for the live chat feature in Go, and the code will reside in the client/chat folder. Now that we have established our plan of action to implement the live chat feature using the Gorilla web chat example as a foundation to start from, let's begin the implementation. The modified web chat application that we will create contains two main types: Hub and Client. The hub type The chat hub is responsible for maintaining a list of client connections and directing the chatbot to broadcast a message to the relevant client. For example, if Alice asked the question "What is Isomorphic Go?", the answer from the chatbot should go to Alice and not to Bob (who may not have even asked a question yet). Here's what the Hub struct looks like: type Hub struct {  chatbot bot.Bot  clients map[*Client]bool  broadcastmsg chan *ClientMessage register chan *Client  unregister chan *Client } The chatbot is a chat bot (agent) that implements the Bot interface. This is the brain that will answer the questions received from clients. The clients map is used to register clients. The key-value pair stored in the map consists of the key, a pointer to a Client instance, and the value consists of a Boolean value set to true to indicate that the client is connected. Clients communicate with the hub over the broadcastmsg, register, and unregister channels. The register channel registers a client with the hub. The unregister channel, unregisters a client with the hub. The client sends the message entered by the user over the broadcastmsg channel, a channel of type ClientMessage. 
Here's the ClientMessage struct that we have introduced: type ClientMessage struct {  client *Client  message []byte } To fulfill the first major change we laid out previously, that is, the exclusivity of the conversation between the agent and the user, we use the ClientMessage struct to store, both the pointer to the Client instance that sent the user's message along with the user's message itself (a byte slice). The constructor function, NewHub, takes in chatbot that implements the Bot interface and returns a new Hub instance: func NewHub(chatbot bot.Bot) *Hub {  return &Hub{    chatbot: chatbot,    broadcastmsg: make(chan *ClientMessage), register: make(chan    *Client), unregister:        make(chan *Client),    clients: make(map[*Client]bool),  } } We implement an exported getter method, ChatBot, so that the chatbot can be accessed from the Hub object: func (h *Hub) ChatBot() bot.Bot {  return h.chatbot } This action will be significant when we implement a Rest API endpoint to send the bot's details (its name, title, and avatar image) to the client. The SendMessage method is responsible for broadcasting a message to a particular client: func (h *Hub) SendMessage(client *Client, message []byte) {  client.send <- message } The method takes in a pointer to Client, and the message, which is a byte slice, that should be sent to that particular client. The message will be sent over the client's send channel. The Run method is called to start the chat hub: func (h *Hub) Run() { for { select { case client := <-h.register: h.clients[client] = true greeting := h.chatbot.Greeting() h.SendMessage(client, []byte(greeting)) case client := <-h.unregister: if _, ok := h.clients[client]; ok { delete(h.clients, client) close(client.send) } case clientmsg := <-h.broadcastmsg: client := clientmsg.client reply := h.chatbot.Reply(string(clientmsg.message)) h.SendMessage(client, []byte(reply)) } } } We use the select statement inside the for loop to wait on multiple client operations. In the case that a pointer to a Client comes in over the hub's register channel, the hub will register the new client by adding the client pointer (as the key) to the clients map and set a value of true for it. We will fetch a greeting message to return to the client by calling the Greeting method on chatbot. Once we get the greeting (a string value), we call the SendMessage method passing in the client and the greeting converted to a byte slice. In the case that a pointer to a Client comes in over the hub's unregister channel, the hub will remove the entry in map for the given client and close the client's send channel, which signifies that the client won't be sending any more messages to the server. In the case that a pointer to a ClientMessage comes in over the hub's broadcastmsg channel, the hub will pass the client's message (as a string value) to the Reply method of the chatbot object. Once we get reply (a string value) from the agent, we call the SendMessage method passing in the client and the reply converted to a byte slice. The client type The Client type acts as a broker between Hub and the websocket connection. Here's what the Client struct looks like: type Client struct {  hub *Hub  conn *websocket.Conn send chan []byte } Each Client value contains a pointer to Hub, a pointer to a websocket connection, and a buffered channel, send, meant for outbound messages. 
The readPump method is responsible for relaying inbound messages coming in over the websocket connection to the hub:

func (c *Client) readPump() {
  defer func() {
    c.hub.unregister <- c
    c.conn.Close()
  }()
  c.conn.SetReadLimit(maxMessageSize)
  c.conn.SetReadDeadline(time.Now().Add(pongWait))
  c.conn.SetPongHandler(func(string) error { c.conn.SetReadDeadline(time.Now().Add(pongWait)); return nil })
  for {
    _, message, err := c.conn.ReadMessage()
    if err != nil {
      if websocket.IsUnexpectedCloseError(err, websocket.CloseGoingAway) {
        log.Printf("error: %v", err)
      }
      break
    }
    message = bytes.TrimSpace(bytes.Replace(message, newline, space, -1))
    // c.hub.broadcast <- message
    clientmsg := &ClientMessage{client: c, message: message}
    c.hub.broadcastmsg <- clientmsg
  }
}

We had to make a slight change to this function to fulfill the requirements of the live chat feature. In the Gorilla web chat example, the message alone was relayed over to the Hub. Since we are directing chatbot responses back to the client that sent them, we need to send the hub not only the message, but also the client that sent it. We do so by creating a ClientMessage struct:

type ClientMessage struct {
  client *Client
  message []byte
}

The ClientMessage struct contains fields to hold both the pointer to the client and the message, a byte slice. Going back to the readPump function in the client.go source file, the following two lines (which replace the commented-out broadcast line in the original example) are instrumental in letting the Hub know which client sent the message:

clientmsg := &ClientMessage{client: c, message: message}
c.hub.broadcastmsg <- clientmsg

The writePump method is responsible for relaying outbound messages from the client's send channel over the websocket connection:

func (c *Client) writePump() {
  ticker := time.NewTicker(pingPeriod)
  defer func() {
    ticker.Stop()
    c.conn.Close()
  }()
  for {
    select {
    case message, ok := <-c.send:
      c.conn.SetWriteDeadline(time.Now().Add(writeWait))
      if !ok {
        // The hub closed the channel.
        c.conn.WriteMessage(websocket.CloseMessage, []byte{})
        return
      }
      w, err := c.conn.NextWriter(websocket.TextMessage)
      if err != nil {
        return
      }
      w.Write(message)
      // Add queued chat messages to the current websocket message.
      n := len(c.send)
      for i := 0; i < n; i++ {
        w.Write(newline)
        w.Write(<-c.send)
      }
      if err := w.Close(); err != nil {
        return
      }
    case <-ticker.C:
      c.conn.SetWriteDeadline(time.Now().Add(writeWait))
      if err := c.conn.WriteMessage(websocket.PingMessage, []byte{}); err != nil {
        return
      }
    }
  }
}

The ServeWs function is meant to be registered as an HTTP handler by the web application:

func ServeWs(hub *Hub) http.Handler {
  return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
    conn, err := upgrader.Upgrade(w, r, nil)
    if err != nil {
      log.Println(err)
      return
    }
    client := &Client{hub: hub, conn: conn, send: make(chan []byte, 256)}
    client.hub.register <- client
    go client.writePump()
    client.readPump()
  })
}

This function performs two important tasks: it upgrades the normal HTTP connection to a websocket connection, and it registers the client with the hub. Now that we've set up the code for our web chat server, it's time to activate it in our web application.
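Before wiring the hub into the application, it can be handy to exercise the WebSocket plumbing end to end with a throwaway command-line client. The following is a minimal smoke test and is not part of the book's code; the ws://localhost:8080/ws address is an assumption based on the /ws route registered in the next section, so adjust the host and port to match wherever your igweb server is listening.

package main

import (
    "fmt"
    "log"

    "github.com/gorilla/websocket"
)

func main() {
    // Assumed endpoint; change the host/port to match your server.
    conn, _, err := websocket.DefaultDialer.Dial("ws://localhost:8080/ws", nil)
    if err != nil {
        log.Fatal("dial:", err)
    }
    defer conn.Close()

    // The hub sends a greeting as soon as the client registers.
    _, greeting, err := conn.ReadMessage()
    if err != nil {
        log.Fatal("read greeting:", err)
    }
    fmt.Println("greeting:", string(greeting))

    // Ask a question and print the agent's reply.
    if err := conn.WriteMessage(websocket.TextMessage, []byte("What is Isomorphic Go?")); err != nil {
        log.Fatal("write:", err)
    }
    _, reply, err := conn.ReadMessage()
    if err != nil {
        log.Fatal("read reply:", err)
    }
    fmt.Println("reply:", string(reply))
}

Because every Client gets its own send channel, the reply printed here arrives only on this connection, which is exactly the exclusivity requirement described at the start of this section.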
Activating the chat server

In the igweb.go source file, we have included a function called startChatHub, which is responsible for starting the Hub:

func startChatHub(hub *chat.Hub) {
  go hub.Run()
}

We add the following code in the main function to create a new chatbot, associate it with the Hub, and start the Hub:

chatbot := bot.NewAgentCase()
hub := chat.NewHub(chatbot)
startChatHub(hub)

When we call the registerRoutes function to register all the routes for the server-side web application, note that we also pass in the hub value to the function:

r := mux.NewRouter()
registerRoutes(&env, r, hub)

In the registerRoutes function, we need the hub to register the route handler for the REST API endpoint that returns the agent's information:

r.Handle("/restapi/get-agent-info", endpoints.GetAgentInfoEndpoint(env, hub.ChatBot()))

The hub is also used to register the route handler for the WebSocket route, /ws. We register the ServeWs handler function, passing in the hub:

r.Handle("/ws", chat.ServeWs(hub))

Now that we have everything in place to activate the chat server, it's time to focus on the star of the live chat feature: the chat agent.

The agent's brain

The chat bot type that we will use for the live chat feature, AgentCase, will implement the following Bot interface:

type Bot interface {
  Greeting() string
  Reply(string) string
  Name() string
  Title() string
  ThumbnailPath() string
  SetName(string)
  SetTitle(string)
  SetThumbnailPath(string)
}

The Greeting method will be used to send an initial greeting to the user, enticing them to interact with the chat bot. The Reply method accepts a question (a string) and returns a reply (also a string) for the given question. The rest of the methods are implemented purely for psychological reasons, to give humans the illusion that they are communicating with someone rather than something. The Name method is a getter method that returns the chat bot's name. The Title method is a getter method that returns the chat bot's title. The ThumbnailPath method is a getter method that returns the path to the chat bot's avatar image. Each of the getter methods has a corresponding setter method: SetName, SetTitle, and SetThumbnailPath.

By defining the Bot interface, we are clearly stating the expectations of a chat bot. This allows us to make the chat bot solution extensible in the future. For example, the intelligence that Case exhibits may be too rudimentary and limiting. In the near future, we may want to implement a bot named Molly, whose intelligence may be implemented using a more powerful algorithm. As long as the Molly chat bot implements the Bot interface, the new chat bot can be easily plugged into our web application. In fact, from the perspective of the server-side web application, it would be just a one-line code change: instead of instantiating an AgentCase instance, we would instantiate an AgentMolly instance instead. Besides the difference in intelligence, the new chat bot, Molly, would come with its own name, title, and avatar image, so humans would be able to differentiate it from Case.

Here's the AgentCase struct:

type AgentCase struct {
  Bot
  name string
  title string
  thumbnailPath string
  knowledgeBase map[string]string
  knowledgeCorpus []string
  sampleQuestions []string
}

We have embedded the Bot interface in the struct definition, indicating that the AgentCase type will implement the Bot interface. The field name is for the name of the agent. The field title is for the title of the agent.
The field thumbnailPath is used to specify the path to the chat bot's avatar image.

The knowledgeBase field is a map of type map[string]string. This is essentially the agent's brain. Keys in the map are the common terms found in a particular question. Values in the map are the answers to the question. The knowledgeCorpus field, a string slice, is a knowledge corpus of the terms that may exist in questions that the bot will be asked. We use the keys of the knowledgeBase map to construct the knowledgeCorpus. A corpus is a collection of text that is used to conduct linguistic analysis. In our case, we will conduct the linguistic analysis based on the question (the query) that the human user provides to the bot. The sampleQuestions field, also a string slice, will contain a list of sample questions that the user may ask the chat bot. The chat bot will provide the user with a sample question when it greets them, to entice the human user into a conversation. It is understood that the human user is free to paraphrase the sample question or ask an entirely different question, depending on their preference.

The initializeIntelligence method is used to initialize Case's brain:

func (a *AgentCase) initializeIntelligence() {

  a.knowledgeBase = map[string]string{
    "isomorphic go isomorphic go web applications": "Isomorphic Go is the methodology to create isomorphic web applications using the Go (Golang) programming language. An isomorphic web application, is a web application, that contains code which can run, on both the web client and the web server.",
    "kick recompile code restart web server instance instant kickstart lightweight mechanism": "Kick is a lightweight mechanism to provide an instant kickstart to a Go web server instance, upon the modification of a Go source file within a particular project directory (including any subdirectories). An instant kickstart consists of a recompilation of the Go code and a restart of the web server instance. Kick comes with the ability to take both the go and gopherjs commands into consideration when performing the instant kickstart. This makes it a really handy tool for isomorphic golang projects.",
    "starter code starter kit": "The isogoapp, is a basic, barebones web app, intended to be used as a starting point for developing an Isomorphic Go application. Here's the link to the github page: https://github.com/isomorphicgo/isogoapp",
    "lack intelligence idiot stupid dumb dummy don't know anything": "Please don't question my intelligence, it's artificial after all!",
    "find talk topic presentation lecture subject": "Watch the Isomorphic Go talk by Kamesh Balasubramanian at GopherCon India: https://youtu.be/zrsuxZEoTcs",
    "benefits of the technology significance of the technology importance of the technology": "Here are some benefits of Isomorphic Go: Unlike JavaScript, Go provides type safety, allowing us to find and eliminate many bugs at compile time itself. Eliminates mental context-shifts between back-end and front-end coding. Page loading prompts are not necessary.",
    "perform routing web app register routes define routes": "You can implement client-side routing in your web application using the isokit Router preventing the dreaded full page reload.",
    "render templates perform template rendering": "Use template sets, a set of project templates that are persisted in memory and are available on both the server-side and the client-side",
    "cogs reusable components react-like react": "Cogs are reuseable components in an Isomorphic Go web application.",
  }

  a.knowledgeCorpus = make([]string, 1)
  for k, _ := range a.knowledgeBase {
    a.knowledgeCorpus = append(a.knowledgeCorpus, k)
  }

  a.sampleQuestions = []string{"What is isomorphic go?", "What are the benefits of this technology?", "Does isomorphic go offer anything react-like?", "How can I recompile code instantly?", "How can I perform routing in my web app?", "Where can I get starter code?", "Where can I find a talk on this topic?"}

}

There are three important tasks that occur within this method: first, we set Case's knowledge base; second, we set Case's knowledge corpus; and third, we set the sample questions, which Case will utilize when greeting the human user.

The first task we must take care of is to set Case's knowledge base. This consists of setting the knowledgeBase property of the AgentCase instance. As mentioned earlier, the keys in the map refer to terms found in the question, and the values in the map are the answers to the question. For example, the "isomorphic go isomorphic go web applications" key could service the following questions:

What is Isomorphic Go?
What can you tell me about Isomorphic Go?

Due to the large amount of text contained within the map literal declaration for the knowledgeBase map, I encourage you to view the source file, agentcase.go, on a computer.

The second task we must take care of is to set Case's corpus, the collection of text used for linguistic analysis against the user's question. The corpus is constructed from the keys of the knowledgeBase map. We set the knowledgeCorpus field of the AgentCase instance to a newly created string slice using the built-in make function. Using a for loop, we iterate through all the entries in the knowledgeBase map and append each key to the knowledgeCorpus slice.

The third and last task we must take care of is to set the sample questions that Case will present to the human user. We simply populate the sampleQuestions property of the AgentCase instance, using a slice literal to declare all the sample questions.

Here are the getter and setter methods of the AgentCase type:

func (a *AgentCase) Name() string {
  return a.name
}

func (a *AgentCase) Title() string {
  return a.title
}

func (a *AgentCase) ThumbnailPath() string {
  return a.thumbnailPath
}

func (a *AgentCase) SetName(name string) {
  a.name = name
}

func (a *AgentCase) SetTitle(title string) {
  a.title = title
}

func (a *AgentCase) SetThumbnailPath(thumbnailPath string) {
  a.thumbnailPath = thumbnailPath
}

These methods are used to get and set the name, title, and thumbnailPath fields of the AgentCase object.
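The excerpt does not show how AgentCase implements the Greeting and Reply methods themselves. The following is a minimal, hypothetical sketch of what they could look like, using a naive term-overlap match against the knowledge corpus; it is not the book's actual implementation, and the package name, scoring logic, and greeting text are all assumptions made purely for illustration.

package bot

import (
    "math/rand"
    "strings"
)

// Greeting suggests one of the sample questions to start the conversation.
// (Hypothetical sketch; the book's actual greeting logic may differ.)
func (a *AgentCase) Greeting() string {
    suggestion := ""
    if len(a.sampleQuestions) > 0 {
        suggestion = a.sampleQuestions[rand.Intn(len(a.sampleQuestions))]
    }
    return "Hi, I'm " + a.name + ". Try asking me: " + suggestion
}

// Reply picks the knowledge base entry whose key terms overlap most
// with the user's question and returns the corresponding answer.
// (Hypothetical scoring; assumed for illustration only.)
func (a *AgentCase) Reply(question string) string {
    q := strings.ToLower(question)
    bestKey, bestScore := "", 0
    for _, key := range a.knowledgeCorpus {
        score := 0
        for _, term := range strings.Fields(key) {
            if strings.Contains(q, term) {
                score++
            }
        }
        if score > bestScore {
            bestKey, bestScore = key, score
        }
    }
    if bestScore == 0 {
        return "I'm not sure about that yet. Try one of my sample questions."
    }
    return a.knowledgeBase[bestKey]
}

Whatever the matching strategy, the interface-driven design pays off here: a smarter bot (the hypothetical AgentMolly mentioned earlier) only needs to implement the same Bot interface, and swapping it in comes down to the single line in igweb.go where the agent is instantiated.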
Here's the constructor function used to create a new AgentCase instance:

func NewAgentCase() *AgentCase {
  agentCase := &AgentCase{name: "Case", title: "Resident Isomorphic Gopher Agent", thumbnailPath: "/static/images/chat/Case.png"}
  agentCase.initializeIntelligence()
  return agentCase
}

We declare and initialize the agentCase variable with a new AgentCase instance, setting the fields for name, title, and thumbnailPath. We then call the initializeIntelligence method to initialize Case's brain. Finally, we return the newly created and initialized AgentCase instance.

To summarize, we introduced you to the websocket package from the Gorilla toolkit project. We learned how to establish a persistent connection between the web server and the web client to create a server-side chatbot using WebSocket functionality.

You read an excerpt from a book written by Kamesh Balasubramanian titled Isomorphic Go. In this book, you will learn how to build and deploy Isomorphic Go web applications.

Top 4 chatbot development frameworks for developers
How to create a conversational assistant or chatbot using Python
Build a generative chatbot using recurrent neural networks (LSTM RNNs)
How to get started with Azure Stream Analytics and 7 reasons to choose it

Sugandha Lahoti
19 Apr 2018
11 min read
In this article, we will introduce Azure Stream Analytics and show how to configure it. We will then look at some of the key advantages of the Stream Analytics platform, including how it enhances developer productivity and reduces the Total Cost of Ownership (TCO) of building and maintaining a scalable streaming solution, among other factors.

What is Azure Stream Analytics and how does it work?

Microsoft Azure Stream Analytics falls into the category of PaaS services, where customers don't need to manage the underlying infrastructure. However, they are still responsible for the application they build on top of the PaaS service and, more importantly, the customer data. Azure Stream Analytics is a fully managed, serverless PaaS service built for real-time analytics computations on streaming data. The service can consume from a multitude of sources. Azure will take care of the hosting, scaling, and management of the underlying hardware and software ecosystem.

The following are some examples of different use cases for Azure Stream Analytics. When we design a solution that involves streaming data, in almost every case Azure Stream Analytics will be part of a larger solution that the customer is trying to deploy. This can be real-time dashboarding for monitoring purposes, real-time monitoring of IT infrastructure equipment, preventive maintenance (auto manufacturing, vending machines, and so on), or fraud detection. This means that the streaming solution needs to provide out-of-the-box integration with a whole plethora of services that can help build a solution relatively quickly.

Let's review a usage pattern for Azure Stream Analytics using a canonical model: devices and applications that generate data connect, either directly or through cloud gateways, to your stream ingest sources. Azure Stream Analytics can pick up the data from these ingest sources, augment it with reference data, run the necessary analytics, gather insights, and push them downstream for action. You can trigger business processes, write the data to a database, or directly view the anomalies on a dashboard.

A number of streaming ingest technologies are used in this canonical pattern; let's review them in the following section:

Event Hub: A global-scale event ingestion system, where one can publish events from millions of sensors and applications. It guarantees that as soon as an event comes in, a subscriber can pick that event up within a few milliseconds. You can have one or more subscribers as well, depending on your business requirements. Typical use cases for an Event Hub are real-time financial fraud detection and social media sentiment analytics.

IoT Hub: IoT Hub is very similar to Event Hub but takes the concept a lot further, in that you can take bidirectional actions. It will not only ingest data from sensors in real time but can also send commands back to them. It also enables you to do things like device management. Security, a primary need for IoT, is a fundamental aspect built into it.

Azure Blob: Azure Blob is massively scalable object storage for unstructured data, accessible through HTTP or HTTPS. Blob storage can expose data publicly to the world or store application data privately.

Reference Data: This is auxiliary data that is either static or changes slowly.
Reference data can be used to enrich incoming data to perform correlations and lookups.

On the ingress side, with a few clicks, you can connect to Event Hub, IoT Hub, or Blob storage. The streaming data can be enriched with reference data held in the Blob store. Data from the ingress process will be consumed by the Azure Stream Analytics service; we can call machine learning (ML) for event scoring in real time. The data can be egressed to live dashboards in Power BI, or pushed back to Event Hub, from where dashboards and reports can pick it up.

The following is a summary of the ingress, egress, and archiving options:

Ingress choices: Event Hub, IoT Hub, Blob storage
Egress choices for live dashboards: Power BI, Event Hub
Egress choices for driving workflows: Event Hubs, Service Bus
Archiving and post-analysis: Blob storage, Document DB, Data Lake, SQL Server, Table storage, Azure Functions

One key point to note is the number of customers who push data from Stream Analytics processing (the egress point) to Event Hub and then add an Azure-hosted website as their own custom dashboard. One can also drive workflows by pushing the events to Azure Service Bus and Power BI. For example, a customer can build an IoT support solution that detects an anomaly in connected appliances and pushes the result into Azure Service Bus. A worker role can run as a daemon to pull the messages and create support tickets using the Dynamics CRM API, and the tickets can then be archived for post-analysis with Power BI. This solution eliminates the need for the customer to log a ticket; the system will automatically do it based on predefined anomaly thresholds. This is just one sample of a real-time connected solution. There are a number of use cases that don't even involve real-time alerts. You can also use it to aggregate and filter data, store it in Blob storage, Azure Data Lake (ADL), Document DB, or SQL, and then run U-SQL Azure Data Lake Analytics (ADLA) or HDInsight, or even call ML models for things like predictive maintenance.

Configuring Azure Stream Analytics

Azure Stream Analytics (ASA) is a fully managed, cost-effective real-time event processing engine. Stream Analytics makes it easy to set up real-time analytic computations on data streaming from devices, sensors, websites, social media, applications, infrastructure systems, and more. The service can be hosted with a few clicks in the Azure portal; users can author a Stream Analytics job specifying the input source of the streaming data, the output sink for the results of the job, and a data transformation expressed in a SQL-like language. The jobs can be monitored, and you can adjust the scale/speed of the job in the Azure portal to go from a few kilobytes to a gigabyte or more of events processed per second.

Let's review how to configure Azure Stream Analytics step by step:

1. Log in to the Azure portal using your Azure credentials, click on New, and search for Stream Analytics job.
2. Click on Create to create an Azure Stream Analytics instance.
3. Provide a Job Name and Resource group name for the Azure Stream Analytics job deployment.
4. After a few minutes, the deployment will be complete.
5. Review the deployment details, including the audit trail of the creation.
6. Scale the job up and down using a simple UI.
7. Use the built-in Query interface to run queries.
8. Run queries using a SQL-like interface, with the ability to accept late-arriving events through simple GUI-based configuration.

Key advantages of Azure Stream Analytics

Let's quickly review how traditional streaming solutions are built. The core deployment starts with procuring and setting up the basic infrastructure necessary to host the streaming solution. Once this is done, we can then build the ingress and egress solution on top of the deployed infrastructure. Once the core infrastructure is built, custom tools are used to build business intelligence (BI) or machine-learning integration. After the system goes into production, scaling during runtime needs to be taken care of by capturing telemetry and building and configuring hardware and software resources as necessary. As business needs ramp up, so does the monitoring and troubleshooting effort.

Security

Azure Stream Analytics provides a number of built-in security mechanisms in areas such as authentication, authorization, auditing, segmentation, and data protection. Let's quickly review them:

Authentication support: Authentication support in Azure Stream Analytics is handled at the portal level. Users should have a valid subscription ID and password to access the Azure Stream Analytics job.

Authorization: Authorization is the process during login where users provide their credentials (for example, user account name and password, smart card and PIN, Secure ID and PIN, and so on) to prove their Microsoft identity so that they can retrieve their access token from the authentication server. Authorization is supported by Azure Stream Analytics; only authenticated/authorized users can access the Azure Stream Analytics job.

Support for encryption: Data-at-rest is protected using client-side encryption and TDE.

Support for key management: Key management is supported through the ingress and egress points.

Programmer productivity

One of the key features of Azure Stream Analytics is developer productivity, and it is driven largely by the query language, which is based on SQL constructs. It provides a wide array of functions for analytics on streaming data, all the way from simple data manipulation functions, date and time functions, and temporal functions to mathematical, string, and scaling functions, and much more. It provides two features natively out of the box, which we review in the following sections:

Declarative SQL constructs
Built-in temporal semantics

Declarative SQL constructs

A simple-to-use UI is provided, and queries can be constructed using that user interface.
The following is the feature set of the declarative SQL constructs:

Filters (Where)
Projections (Select)
Time-window and property-based aggregates (Group By)
Time-shifted joins (specifying time bounds within which the joining events must occur)
All combinations thereof

The following is a summary of the different constructs used to manipulate streaming data:

Data manipulation: SELECT, FROM, WHERE, GROUP BY, HAVING, CASE WHEN THEN ELSE, INNER/LEFT OUTER JOIN, UNION, CROSS/OUTER APPLY, CAST, INTO, ORDER BY ASC, DESC
Date and time functions: DateName, DatePart, Day, Month, Year, DateDiff, DateTimeFromParts, DateAdd
Temporal functions: Lag, IsFirst, Last, CollectTop
Aggregate functions: SUM, COUNT, AVG, MIN, MAX, STDEV, STDEVP, VAR, VARP, TopOne
Mathematical functions: ABS, CEILING, EXP, FLOOR, POWER, SIGN, SQUARE, SQRT
String functions: Len, Concat, CharIndex, Substring, Lower, Upper, PatIndex
Scaling extensions: WITH, PARTITION BY, OVER
Geospatial: CreatePoint, CreatePolygon, CreateLineString, ST_DISTANCE, ST_WITHIN, ST_OVERLAPS, ST_INTERSECTS

Built-in temporal semantics

Azure Stream Analytics provides prebuilt temporal semantics to query time-based information and merge streams with multiple timelines. Here is a list of the temporal semantics:

Application or ingest timestamp
Windowing functions
Policies for event ordering
Policies to manage latencies between ingress sources
Manage streams with multiple timelines
Join multiple streams of temporal windows
Join streaming data with data-at-rest

Lowest total cost of ownership

Azure Stream Analytics is a fully managed PaaS service on Azure. There are no upfront costs or costs involved in setting up compute clusters and complex hardware wiring like there would be with an on-premises solution. It's a simple job service where there is no cluster provisioning, and customers pay for what they use. A key consideration is variable workloads. With Azure Stream Analytics, you do not need to design your system for peak throughput and can add more compute footprint as you go. If you have scenarios where data comes in spurts, you do not want to design a system for peak usage and leave it underutilized at other times. Let's say you are building a traffic monitoring solution; naturally, there is the expectation that peaks will show up during morning and evening rush hours. However, you would not want to design your system or investments to cater to these extremes. The cloud elasticity that Azure offers is a perfect fit here. Azure Stream Analytics also offers fast recovery through checkpointing and at-least-once event delivery.

Mission-critical, enterprise-grade scalability and availability

Azure Stream Analytics is available across multiple worldwide data centers and sovereign clouds. Azure Stream Analytics promises 99.9% (three nines) availability, which is financially guaranteed, with built-in auto recovery so that you will never lose data. The good thing is customers do not need to write a single line of code to achieve this. The bottom line is that enterprise readiness is built into the platform. Here is a summary of the enterprise-ready features:

Distributed scale-out architecture
Ingests millions of events per second
Accommodates variable loads
Easily adds incremental resources to scale
Available across multiple data centres and sovereign clouds

Global compliance

In addition, Azure Stream Analytics is compliant with many industry and government certifications. It is already HIPAA-compliant out of the box and suitable for hosting healthcare applications.
That's how customers can scale up their businesses confidently. Here is a summary of global compliance:

ISO 27001
ISO 27018
SOC 1 Type 2
SOC 2 Type 2
SOC 3 Type 2
HIPAA/HITECH
PCI DSS Level 1
European Union Model Clauses
China GB 18030

Thus we reviewed Azure Stream Analytics and understood its key advantages. These advantages included:

Ease in terms of developer productivity
Ease of development and how it reduces the total cost of ownership
Global compliance certifications
The value of a PaaS-based streaming solution for hosting mission-critical applications, and security

This post is taken from the book Stream Analytics with Microsoft Azure, written by Anindita Basak, Krishna Venkataraman, Ryan Murphy, and Manpreet Singh. This book will help you understand Azure Stream Analytics so that you can develop efficient analytics solutions that can work with any type of data.

Say hello to Streaming Analytics
How to build a live interactive visual dashboard in Power BI with Azure Stream
Performing Vehicle Telemetry job analysis with Azure Stream Analytics tools
Performing Vehicle Telemetry job analysis with Azure Stream Analytics tools

Sugandha Lahoti
18 Apr 2018
8 min read
This tutorial is a step-by-step blueprint for a Vehicle Telemetry job analysis on Azure using the Stream Analytics tools for Visual Studio. For connected car and real-time predictive Vehicle Telemetry analysis, there's a necessity to specify opportunities for new solutions. These opportunities include:

How a car could be shipped globally with the required smart hardware to connect to the internet within the next few years
How the embedded connections could define Vehicle Telemetry predictive health status, so automotive companies will be able to collect data on the performance of cars
How to send interactive updates and patches to a car's instrumentation remotely
How to avoid car equipment damage with precautionary measures and prior notification

All of these require an intelligent vehicle health telemetry analysis, which you can implement using Azure Stream Analytics.

Stream Analytics tools for Visual Studio

The Stream Analytics tools for Visual Studio help prepare, build, and deploy real-time events on Azure. Optionally, the tools enable you to monitor the streaming job using local sample data as test job input, as well as real-time monitoring, job metrics, a diagram view, and so on. This tool provides a complete development setup for the implementation and deployment of real-world Azure Stream Analytics jobs using Visual Studio.

Developing a Stream Analytics job using Visual Studio

After installing the Stream Analytics tools, a new Stream Analytics job can be created in Visual Studio:

1. Get started in the Visual Studio IDE from File | New Project. Under Templates, select Stream Analytics and choose Azure Stream Analytics Application.
2. Next, provide the job name, project, and solution location. Under the Solution menu, you may also select options such as Add to solution or Create new instance, apart from Create new solution, from the available drop-down menu during Visual Studio Stream Analytics job creation.
3. Once the ASA job is created, in the Solution Explorer, the job topology folder structure can be viewed as Inputs (job input), Outputs (job output), JobConfig.json, Script.asaql (the Stream Analytics query file), Azure Functions (optional), and so on.
4. Next, provide the job topology data input and output event source settings by selecting Input.json and Output.json from the Inputs and Outputs directories, respectively.
5. For a Vehicle Telemetry Predictive Analysis demo using an Azure Stream Analytics job, we need two different job data streams. One should be of the Stream type, for an illimitable sequence of real-time events processed through Azure Event Hub, along with the Hub policy name, policy key, event serialization format, and so on.

Defining a Stream Analytics query for Vehicle Telemetry job analysis using Stream Analytics tools

To assign the streaming analytics query definition, select the Script.asaql file from the ASA project, specifying the joining operation between the data and reference stream inputs, along with supplying the analyzed output to Blob storage as configured in the job properties.

Query to define Vehicle Telemetry (Connected Car) engine health status and pollution index over cities
As noted at the start of this tutorial, embedded connectivity lets automotive companies collect data on the performance of cars, send interactive updates and patches to a car's instrumentation remotely, and avoid car equipment damage with precautionary measures and prior notification, all through intelligent vehicle health telemetry analysis using Azure Stream Analytics. The solution architecture of the Connected Car Vehicle Telemetry Analysis case study used in this demo is built around Azure Stream Analytics for real-time predictive analysis.

Testing Stream Analytics queries locally or in the cloud

The Azure Stream Analytics tools in Visual Studio offer the flexibility to execute queries either locally or directly in the cloud:

1. In the Script.asaql file, provide the respective query of your streaming job and test it against local input stream/reference data before processing it in Azure.
2. To run the Stream Analytics job query locally, right-click on the ASA project in the VS Solution Explorer and choose Add Local Input.
3. Define the local input for each Event Hub data stream and Blob storage data, and execute the job query locally before publishing it in Azure.
4. After adding each local input test data, you can test the Stream Analytics job query locally in the VS editor by clicking on the Run Locally button in the top left corner of the VS IDE.

Connected car scenarios served by this kind of query include:

Vehicle diagnostics
Usage-based insurance
Engine emission control
Engine performance remapping
Eco-driving
Roadside assistance calls
Fleet management

So, specify the following schema when designing a connected car streaming job query with Stream Analytics, using parameters such as vehicle index number (VIN), model, outside temperature, engine speed, fuel meter, tire pressure, and brake status, by defining an INNER JOIN between the Event Hub data stream and the Blob storage reference stream containing the vehicle model information:

Select input.vin, BlobSource.Model, input.timestamp, input.outsideTemperature, input.engineTemperature, input.speed, input.fuel, input.engineoil, input.tirepressure, input.odometer, input.city, input.accelerator_pedal_position, input.parking_brake_status, input.headlamp_status, input.brake_pedal_status, input.transmission_gear_position, input.ignition_status, input.windshield_wiper_status, input.abs
into output
from input
join BlobSource on input.vin = BlobSource.VIN

The query could be further customized for complex event processing analysis by using windowing concepts such as the Tumbling window function, which assigns equal-length, non-overlapping series of events in streams with a fixed time slice. The following Vehicle Telemetry analytics query specifies a smart car health index parameter over complex streams in a specified two-second timestamp interval, in the form of a fixed-length series of events:

select BlobSource.Model, input.city, count(vin) as cars, avg(input.engineTemperature) as engineTemperature, avg(input.speed) as Speed, avg(input.fuel) as Fuel, avg(input.engineoil) as EngineOil, avg(input.tirepressure) as TirePressure, avg(input.odometer) as Odometer
into EventHubOut
from input
join BlobSource on input.vin = BlobSource.VIN
group by BlobSource.model, input.city, TumblingWindow(second,2)
5. The query can be executed locally or submitted to Azure. While running the job locally, a Command Prompt will appear asserting the local Stream Analytics job's running status, along with the output data folder location.
6. If run locally, the job output folder will contain two files in the project disk location, within the ASALocalRun directory, named with the current date timestamp. The two output files are in .csv and .json formats, respectively.

Now, if you submit the job to Azure from the Stream Analytics project in Visual Studio, it offers a beautiful job dashboard, providing an interactive job diagram view, a job metrics graph, and errors (if any). The Vehicle Telemetry Predictive Health Analytics job dashboard in Visual Studio provides a job diagram with real-time insights into events, with the display refreshed at a minimum rate of every 30 minutes.

The Stream Analytics job metrics graph provides interactive insights into input and output events, out-of-order events, late events, runtime errors, and data conversion errors related to the job, as appropriate.

For Connected Car Predictive Vehicle Telemetry Analytics, you may configure the data input streams to be processed as complex events by using a definite timestamp interval in a non-overlapping mode, such as a Tumbling window over a two-second time slice. The output sink should be configured as a Service Bus Event Hub with a data partitioning unit of 32 and a maximum message retention period of 7 days. The output events processed into the Event Hub job sink can also be archived in Azure Blob storage for long-term, infrequent access.

The Azure Service Bus Event Hub job output metrics dashboard can be configured for vehicle telemetry analysis in the same way. On the left side of the job dashboard, the Job Summary provides a comprehensive view of job parameters such as job status, creation time, job output start time, start mode, last output timestamp, the output error handling mechanism provided for quick reference logs, late event arrival tolerance windows, and so on. The job can be stopped, started, deleted, or even refreshed by selecting icons from the top left menu of the job view dashboard in VS.

Optionally, a complete clone of the Stream Analytics project can also be generated by clicking on the Generate Project icon in the top menu of the job dashboard.

This article is an excerpt from the book, Stream Analytics with Microsoft Azure, written by Anindita Basak, Krishna Venkataraman, Ryan Murphy, and Manpreet Singh. This book provides lessons on real-time data processing for quick insights using Azure Stream Analytics.

Say hello to Streaming Analytics
How to get started with Azure Stream Analytics and 7 reasons to choose it
How to boost R codes using C++ and Fortran

Pravin Dhandre
18 Apr 2018
16 min read
Sometimes, R code just isn't fast enough. Maybe you've used profiling to figure out where your bottlenecks are, and you've done everything you can think of within R, but your code still isn't fast enough. In those cases, a useful alternative can be to delegate some parts of the implementation to Fortran or C++. This is an advanced technique that can often prove to be quite useful if you know how to program in such languages. In today's tutorial, we will explore techniques to boost R code and calculations using efficient languages like Fortran and C++.

Delegating code to other languages can address bottlenecks such as the following:

Loops that can't be easily vectorized due to iteration dependencies
Processes that involve calling functions millions of times
Inefficient but necessary data structures that are slow in R

Delegating code to other languages can provide great performance benefits, but it also incurs the cost of being more explicit and careful with the types of objects that are being moved around. In R, you can get away with simple things such as being imprecise about a number being an integer or a real. In these other languages, you can't; every object must have a precise type, and it remains fixed for the entire execution.

Boost R codes using an old-school approach with Fortran

We will start with an old-school approach using Fortran. If you are not familiar with it, Fortran is the oldest programming language still in use today. It was designed to perform lots of calculations very efficiently and with very few resources. There are a lot of numerical libraries developed with it, and many high-performance systems nowadays still use it, either directly or indirectly.

Here's our implementation, named sma_fortran(). The syntax may throw you off if you're not used to working with Fortran code, but it's simple enough to understand. First, note that to define a function, technically known as a subroutine in Fortran, we use the subroutine keyword before the name of the function. As our previous implementations do, it receives the period and the data (we use the dataa name, with an extra a at the end, because Fortran has a reserved keyword, data, which we shouldn't use in this case), and we will assume that the data is already filtered for the correct symbol at this point.

Next, note that we are sending new arguments that we did not send before, namely smas and n. Fortran is a peculiar language in the sense that it does not return values; it uses side effects instead. This means that instead of expecting something back from a call to a Fortran subroutine, we should expect that subroutine to change one of the objects that was passed to it, and we should treat that as our return value. In this case, smas fulfills that role; initially, it will be sent as an array of undefined real values, and the objective is to modify its contents with the appropriate SMA values. Finally, n represents the number of elements in the data we send. Classic Fortran doesn't have a way to determine the size of an array being passed to it, and it needs us to specify the size manually; that's why we need to send n. In reality, there are ways to work around this, but since this is not a tutorial about Fortran, we will keep the code as simple as possible.

Next, note that we need to declare the type of the objects we're dealing with, as well as their size in case they are arrays.
We proceed to declare pos (which takes the place of position in our previous implementation, because Fortran imposes a limit on the length of each line, which we don't want to violate), n, endd (again, end is a keyword in Fortran, so we use the name endd instead), and period as integers. We also declare dataa(n), smas(n), and sma as reals, because they will contain decimal parts. Note that we specify the size of the array with the (n) part in the first two objects.

Once we have declared everything we will use, we proceed with our logic. We first create a for loop, which is done with the do keyword in Fortran, followed by a unique identifier (these are normally named with multiples of tens or hundreds), the variable name that will be used to iterate, and the values that it will take (endd and 1 to n in this case, respectively). Within the for loop, we assign pos to be equal to endd and sma to be equal to 0, just as we did in some of our previous implementations. Next, we create a while loop with the do…while keyword combination, and we provide the condition that should be checked to decide when to break out of it. Note that Fortran uses a very different syntax for the comparison operators. Specifically, the .lt. operator stands for less-than, while the .ge. operator stands for greater-than-or-equal-to. If any of the two conditions specified is not met, then we will exit the while loop.

Having said that, the rest of the code should be self-explanatory. The only other uncommon syntax property is that the code is indented to the sixth position. This indentation has meaning within Fortran, and it should be kept as it is. Also, the number IDs provided in the first columns of the code should match the corresponding looping mechanisms, and they should be kept toward the left of the logic code.

For a good introduction to Fortran, you may take a look at Stanford's Fortran 77 Tutorial (https://web.stanford.edu/class/me200c/tutorial_77/). You should know that there are various Fortran versions, and the 77 version is one of the oldest ones. However, it's also one of the better supported ones:

      subroutine sma_fortran(period, dataa, smas, n)
      integer pos, n, endd, period
      real dataa(n), smas(n), sma
      do 10 endd = 1, n
          pos = endd
          sma = 0.0
          do 20 while ((endd - pos .lt. period) .and. (pos .ge. 1))
              sma = sma + dataa(pos)
              pos = pos - 1
   20     end do
          if (endd - pos .eq. period) then
              sma = sma / period
          else
              sma = 0
          end if
          smas(endd) = sma
   10 continue
      end

Once your code is finished, you need to compile it before it can be executed within R. Compilation is the process of translating code into machine-level instructions. You have two options when compiling Fortran code: you can either do it manually outside of R or you can do it within R. The second one is recommended, since you can take advantage of R's tools for doing so. However, we show both of them. The first one can be achieved with the following command:

$ gfortran -c sma-delegated-fortran.f -o sma-delegated-fortran.so

This command should be executed in a Bash terminal (which can be found in Linux or Mac operating systems). We must ensure that we have the gfortran compiler installed, which was probably installed when R was. Then, we call it, telling it to compile (using the -c option) the sma-delegated-fortran.f file (which contains the Fortran code we showed before) and provide an output file (with the -o option) named sma-delegated-fortran.so. Our objective is to get this .so file, which is what we need within R to execute the Fortran code.
The way to compile within R, which is the recommended way, is to use the following line:

system("R CMD SHLIB sma-delegated-fortran.f")

It basically tells R to execute the command that produces a shared library derived from the sma-delegated-fortran.f file. Note that the system() function simply sends the string it receives to a terminal in the operating system, which means that you could have used that same command in the Bash terminal used to compile the code manually.

To load the shared library into R's memory, we use the dyn.load() function, providing the location of the .so file we want to use, and to actually call the shared library that contains the Fortran implementation, we use the .Fortran() function. This function requires type checking and coercion to be explicitly performed by the user before calling it.

To provide a similar signature to the one provided by the previous functions, we will create a function named sma_delegated_fortran(), which receives the period, symbol, and data parameters as we did before, also filters the data as we did earlier, calculates the length of the data and puts it in n, and uses the .Fortran() function to call the sma_fortran() subroutine, providing the appropriate parameters. Note that we're wrapping the parameters in functions that coerce the types of these objects as required by our Fortran code. The results list created by the .Fortran() function contains the period, dataa, smas, and n objects, corresponding to the parameters sent to the subroutine, with the contents left in them after the subroutine was executed. As we mentioned earlier, we are interested in the contents of the smas object, since they contain the values we're looking for. That's why we send only that part back after converting it to a numeric type within R.

The transformations you see before sending objects to Fortran and after getting them back are something that you need to be very careful with. For example, if instead of using single(n) and as.single(data), we use double(n) and as.double(data), our Fortran implementation will not work. This is something that can be ignored within R, but it can't be ignored in the case of Fortran:

system("R CMD SHLIB sma-delegated-fortran.f")
dyn.load("sma-delegated-fortran.so")

sma_delegated_fortran <- function(period, symbol, data) {
    data <- data[which(data$symbol == symbol), "price_usd"]
    n <- length(data)
    results <- .Fortran(
        "sma_fortran",
        period = as.integer(period),
        dataa = as.single(data),
        smas = single(n),
        n = as.integer(n)
    )
    return(as.numeric(results$smas))
}

Just as we did earlier, we benchmark and test for correctness:

performance <- microbenchmark(
    sma_12 <- sma_delegated_fortran(period, symboo, data),
    unit = "us"
)

all(sma_1$sma - sma_12 <= 0.001, na.rm = TRUE)
#> TRUE

summary(performance)$median

In this case, our median time is 148.0335 microseconds, making this the fastest implementation up to this point. Note that it's barely over half the time of the most efficient implementation we were able to come up with using only R.

Boost R codes using a modern approach with C++

Now, we will show you how to use a more modern approach using C++. The aim of this section is to provide just enough information for you to start experimenting with C++ within R on your own. We will only look at a tiny piece of what can be done by interfacing R with C++ through the Rcpp package (available from CRAN), but it should be enough to get you started.
If you have never heard of C++, it's a language used mostly when resource restrictions play an important role and performance optimization is of paramount importance. Some good resources to learn more about C++ are Meyers' books on the topic, a popular one being Effective C++ (Addison-Wesley, 2005), and, specifically for the Rcpp package, Eddelbuettel's Seamless R and C++ Integration with Rcpp (Springer, 2013) is great.

Before we continue, you need to ensure that you have a C++ compiler on your system. On Linux, you should be able to use gcc. On Mac, you should install Xcode from the App Store. On Windows, you should install Rtools. Once you test your compiler and know that it's working, you should be able to follow this section. We'll cover more on how to do this in Appendix, Required Packages.

C++ is more readable than Fortran code because it follows more of the syntax conventions we're used to nowadays. However, just because the example we will use is readable, don't think that C++ in general is an easy language to use; it's not. It's a very low-level language, and using it correctly requires a good amount of knowledge. Having said that, let's begin.

The #include <Rcpp.h> line is used to bring variable and function definitions from R into this file when it's compiled. Literally, the contents of the Rcpp.h file are pasted right where the include statement is. Files ending with the .h extension are called header files, and they are used to provide some common definitions between a code's users and its developers. The using namespace Rcpp line allows you to use shorter names in your function. Instead of having to specify Rcpp::NumericVector, we can simply use NumericVector to define the type of the data object. Doing so in this example may not be too beneficial, but when you start developing more complex C++ code, it will really come in handy.

Next, you will notice the // [[Rcpp::export(sma_delegated_cpp)]] code. This is a tag that marks the function right below it so that R knows that it should import it and make it available within R code. The argument sent to export() is the name of the function that will be accessible within R, and it does not necessarily have to match the name of the function in C++. In this case, sma_delegated_cpp() will be the function we call within R, and it will call the smaDelegated() function within C++:

#include <Rcpp.h>
using namespace Rcpp;

// [[Rcpp::export(sma_delegated_cpp)]]
NumericVector smaDelegated(int period, NumericVector data) {
    int position, n = data.size();
    NumericVector result(n);
    double sma;
    for (int end = 0; end < n; end++) {
        position = end;
        sma = 0;
        while(end - position < period && position >= 0) {
            sma = sma + data[position];
            position = position - 1;
        }
        if (end - position == period) {
            sma = sma / period;
        } else {
            sma = NA_REAL;
        }
        result[end] = sma;
    }
    return result;
}

Next, we will explain the actual smaDelegated() function. Since you have a good idea of what it's doing at this point, we won't explain its logic, only the syntax that is not so obvious. The first thing to note is that the function name has a keyword before it, which is the type of the return value for the function. In this case, it's NumericVector, which is provided in the Rcpp.h file. This is an object designed to interface vectors between R and C++. Other types of vectors provided by Rcpp are IntegerVector, LogicalVector, and CharacterVector. You also have IntegerMatrix, NumericMatrix, LogicalMatrix, and CharacterMatrix available.
Next, you should note that the parameters received by the function also have types associated with them. Specifically, period is an integer (int), and data is a NumericVector, just like the output of the function. In this case, we did not have to pass the output or length objects as we did with Fortran. Since functions in C++ do have output values, C++ also has an easy enough way of computing the length of objects.

The first line in the function declares the variables position and n, and assigns the length of the data to the latter. You may use commas, as we do, to declare various objects of the same type one after another instead of splitting the declarations and assignments into their own lines. We also declare the vector result with length n; note that this notation is similar to Fortran's. Finally, instead of using the real keyword as we do in Fortran, we use the float or double keyword here to denote such numbers. Technically, there's a difference regarding the precision allowed by such keywords, and they are not interchangeable, but we won't worry about that here.

The rest of the function should be clear, except for maybe the sma = NA_REAL assignment. This NA_REAL object is also provided by Rcpp as a way to denote what should be sent to R as an NA. Everything else should look familiar.

Now that our function is ready, we save it in a file called sma-delegated-cpp.cpp and use R's sourceCpp() function to compile it for us and bring it into R. The .cpp extension denotes contents written in the C++ language. Keep in mind that functions brought into R from C++ files cannot be saved in a .Rdata file for a later session. The nature of C++ is to be very dependent on the hardware under which it's compiled, and doing so will probably produce various errors for you. Every time you want to use a C++ function, you should compile it and load it with the sourceCpp() function at the moment of usage:

library(Rcpp)

sourceCpp("./sma-delegated-cpp.cpp")

sma_delegated_cpp <- function(period, symbol, data) {
    data <- as.numeric(data[which(data$symbol == symbol), "price_usd"])
    return(sma_cpp(period, data))
}

If everything worked fine, our function should be usable within R, so we benchmark and test for correctness. I promise this is the last one:

performance <- microbenchmark(
    sma_13 <- sma_delegated_cpp(period, symboo, data),
    unit = "us"
)

all(sma_1$sma - sma_13 <= 0.001, na.rm = TRUE)
#> TRUE

summary(performance)$median
#> [1] 80.6415

This time, our median time was 80.6415 microseconds, which is three orders of magnitude faster than our first implementation. Think about it this way: if you provided an input for sma_delegated_cpp() that took around one hour to execute, sma_slow_1() would take around 1,000 hours, which is roughly 41 days. Isn't that a surprising difference? When you are in situations that take that much execution time, it's definitely worth it to try to make your implementations as optimized as possible.

You may use the cppFunction() function to write your C++ code directly inside an .R file, but you should not do so. Keep that just for testing small pieces of code. Separating your C++ implementation into its own files allows you to use the power of your editor of choice (or IDE) to guide you through the development, as well as perform deeper syntax checks for you.

You read an excerpt from R Programming By Example, authored by Omar Trejo Navarro. This book provides a step-by-step guide to building simple-to-advanced applications through examples in R using modern tools.
Getting Inside a C++ Multithreaded Application
Understanding the Dependencies of a C++ Application
Packaging and publishing an Oracle JET Hybrid mobile application

Vijin Boricha
17 Apr 2018
4 min read
Today, we will learn how to package and publish an Oracle JET mobile application on the Apple App Store and the Android Play Store. We can package and publish Oracle JET-based hybrid mobile applications to the Google Play or Apple App stores, using framework support and third-party tools.

Packaging a mobile application

An Oracle JET hybrid mobile application can be packaged with the help of the Grunt build release commands, as described in the following steps:

1. Issue the Grunt release command with the desired platform as follows:

grunt build:release --platform={ios|android}

2. Code sign the application based on the platform in the buildConfig.json file.

Note: Further details regarding code signing per platform are available at:
Android: https://cordova.apache.org/docs/en/latest/guide/platforms/android/tools.html
iOS: https://cordova.apache.org/docs/en/latest/guide/platforms/ios/tools.html

3. Pass the code sign details and rebuild the application using the following command:

grunt build:release --platform={ios|android} --buildConfig=path/buildConfig.json

4. The application can be tested after the preceding changes using the following serve command:

grunt serve:release --platform=ios|android [--web=true --serverPort=server-port-number --livereloadPort=live-reload-port-number --destination=emulator-name|device] --buildConfig=path/buildConfig.json

Publishing a mobile application

Publishing an Oracle JET hybrid mobile application follows the platform-specific guidelines. Each platform has defined certain standards and procedures for distributing an app on the respective platform.

Publishing on the iOS platform

The steps involved in publishing iOS applications include:

1. Enrolling in the Apple Developer Program to distribute the app.
2. Adding advanced, integrated services based on the application type.
3. Preparing our application with approval and configuration according to iOS standards.
4. Testing our app on numerous devices and application releases.
5. Submitting and releasing the application as a mobile app in the store. Alternatively, we can also distribute the app outside the store.

Please note that the iOS distribution process described here is the latest as of writing this chapter. It may be altered by the iOS team at a later point.

Note: For the latest iOS app distribution procedure, please refer to the official iOS documentation at https://developer.apple.com/library/ios/documentation/IDEs/Conceptual/AppDistributionGuide/Introduction/Introduction.html

Publishing on the Android platform

There are multiple approaches to publishing our application on the Android platform. The following are the steps involved in publishing the app on Android:

1. Preparing the app for release. We need to perform the following activities to prepare an app for release:

Configuring the app for release, including logs and manifests
Testing the release version of the app on multiple devices
Updating the application resources for the release
Preparing any remote applications or services the app interacts with

2. Releasing the app to the market through Google Play. We need to perform the following activities to release an app through Google Play:

Prepare any promotional documentation required for the app
Configure all the default options and prepare the components
Publish the prepared release version of the app to Google Play
3. Alternatively, we can publish the application through email, or through our own website, for users to download and install.
The steps involved in publishing Android applications are shown in the following diagram: Please note that the process for distributing applications on Android presented in the preceding diagram is current as of writing this chapter; it may be altered by the Android team at a later point. Note: For the latest Android app distribution procedure, please refer to the official Android documentation at http://developer.android.com/tools/publishing/publishing_overview.html#publishing-release.
You enjoyed an excerpt from Oracle JET for Developers written by Raja Malleswara Rao Pattamsetti. With this book, you will learn to leverage Oracle JavaScript Extension Toolkit (JET) to develop efficient client-side applications.
Auditing Mobile Applications
Creating and configuring a basic mobile application
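For reference, the code-signing step in the packaging section above relies on a buildConfig.json file. The following is only a minimal sketch of what such a file commonly looks like; the keys follow Cordova's build configuration format, and every value shown here (paths, aliases, team ID, profile ID) is a placeholder you must replace with your own signing details:

{
  "android": {
    "release": {
      "keystore": "path/to/release.keystore",
      "storePassword": "YOUR_STORE_PASSWORD",
      "alias": "release-key-alias",
      "password": "YOUR_KEY_PASSWORD"
    }
  },
  "ios": {
    "release": {
      "codeSignIdentity": "iPhone Distribution",
      "developmentTeam": "YOUR_TEAM_ID",
      "provisioningProfile": "YOUR_PROFILE_UUID",
      "packageType": "app-store"
    }
  }
}

Once the file is in place, it is passed to the grunt build:release and grunt serve:release commands through the --buildConfig option shown above.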

article-image-how-to-build-a-live-interactive-visual-dashboard-in-power-bi-with-azure-stream
Sugandha Lahoti
17 Apr 2018
4 min read
Save for later

How to build a live interactive visual dashboard in Power BI with Azure Stream

Sugandha Lahoti
17 Apr 2018
4 min read
Azure Stream Analytics is a managed complex event processing interactive data engine. As a built-in output connector, it offers the facility of building live interactive intelligent BI charts and graphics using Microsoft's cloud-based Business Intelligent tool called Power BI. In this tutorial we implement a data architecture pipeline by designing a visual dashboard using Microsoft Power BI and Stream Analytics. Prerequisites of building an interactive visual live dashboard in Power BI with Stream Analytics: Azure subscription Power BI Office365 account (the account email ID should be the same for both Azure and Power BI). It can be a work or school account Integrating Power BI as an output job connector for Stream Analytics To start with connecting the Power BI portal as an output of an existing Stream Analytics job, follow the given steps:  First, select Outputs in the Azure portal under JOB TOPOLOGY:  After clicking on Outputs, click on +Add in the top left corner of the job window, as shown in the following screenshot:  After selecting +Add, you will be prompted to enter the New output connectors of the job. Provide details such as job Output name/alias; under Sink, choose Power BI from the drop-down menu.  On choosing Power BI as the streaming job output Sink, it will automatically prompt you to authorize the Power BI work/personal account with Azure. Additionally, you may create a new Power BI account by clicking on Signup. By authorizing, you are granting access to the Stream Analytics output permanently in the Power BI dashboard. You can also revoke the access by changing the password of the Power BI account or deleting the output/job.  Post the successful authorization of the Power BI account with Azure, there will be options to select Group Workspace, which is the Power BI tenant workspace where you may create the particular dataset to configure processed Stream Analytics events. Furthermore, you also need to define the Table Name as data output. Lastly, click on the Create button to integrate the Power BI data connector for real-time data visuals:   Note: If you don't have any custom workspace defined in the Power BI tenant, the default workspace is My Workspace. If you define a dataset and table name that already exists in another Stream Analytics job/output, it will be overwritten. It is also recommended that you just define the dataset and table name under the specific tenant workspace in the job portal and not explicitly create them in Power BI tenants as Stream Analytics automatically creates them once the job starts and output events start to push into the Power BI dashboard.   On starting the Streaming job with output events, the Power BI dataset would appear under the dataset tab following workspace. The  dataset can contain maximum 200,000 rows and supports real-time streaming events and historical BI report visuals as well: Further Power BI dashboard and reports can be implemented using the streaming dataset. 
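To connect the dots between the job and the dashboard: the Stream Analytics query routes events to the Power BI output through its INTO clause. The following is a minimal illustrative sketch; the input alias, output alias, and field names are assumptions standing in for whatever you defined in the portal, not values taken from the excerpt:

SELECT
    vehicleId,
    AVG(speed) AS avgSpeed,
    System.Timestamp AS windowEnd
INTO
    [powerbi-output]   -- the Power BI output alias created in the steps above
FROM
    [telemetry-input] TIMESTAMP BY eventTime
GROUP BY
    vehicleId,
    TumblingWindow(second, 10)

Every window emitted by this query becomes a new row in the streaming dataset and table configured for the Power BI output.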
Alternatively, you may also create tiles in custom dashboards by selecting CUSTOM STREAMING DATA under REAL-TIME DATA, as shown in the following screenshot: After selecting Next, choose the streaming dataset, and then define the visual type, the respective fields, and the Axis or legends: Thus, a complete, interactive, near real-time Power BI visual dashboard can be implemented with analyzed streamed data from Stream Analytics, as shown in the following screenshot from the real-world Connected Car-Vehicle Telemetry analytics dashboard: In this article, we saw a step-by-step implementation of a real-time visual dashboard using Microsoft Power BI with processed data from Azure Stream Analytics as the output data connector. This article is an excerpt from the book, Stream Analytics with Microsoft Azure, written by Anindita Basak, Krishna Venkataraman, Ryan Murphy, and Manpreet Singh. To learn more about designing and managing Stream Analytics jobs using reference data and utilizing a petabyte-scale enterprise data store with Azure Data Lake Store, you may refer to this book. Unlocking the secrets of Microsoft Power BI Ride the third wave of BI with Microsoft Power BI

article-image-cross-validation-r-predictive-models
Pravin Dhandre
17 Apr 2018
8 min read
Save for later

Cross-validation in R for predictive models

Pravin Dhandre
17 Apr 2018
8 min read
In today’s tutorial, we will efficiently train our first predictive model, using cross-validation in R as the basis of our modeling process, and we will build the corresponding confusion matrix. Most of the functionality comes from the excellent caret package, which has many more features than we can explore in this tutorial. Before moving to the training tutorial, let's understand what a confusion matrix is. A confusion matrix is a summary of prediction results on a classification problem. The number of correct and incorrect predictions is summarized with count values and broken down by each class; this is the key to the confusion matrix. The confusion matrix shows the ways in which your classification model is confused when it makes predictions. It gives you insight not only into the errors being made by your classifier but, more importantly, the types of errors that are being made. Training our first predictive model Following best practices, we will use Cross Validation (CV) as the basis of our modeling process. Using CV we can create estimates of how well our model will do with unseen data. CV is powerful, but the downside is that it requires more processing and therefore more time. If you can afford the computational cost, you should definitely take advantage of it in your projects. Going into the mathematics behind CV is outside the scope of this tutorial. If interested, you can find out more information on cross-validation on Wikipedia. The basic idea is that the training data will be split into various parts, and each of these parts will be taken out of the rest of the training data one at a time, keeping all remaining parts together. The parts that are kept together will be used to train the model, while the part that was taken out will be used for testing, and this will be repeated by rotating the parts such that every part is taken out once. This allows you to test the training procedure more thoroughly, before doing the final testing with the testing data. We use the trainControl() function to set our repeated CV mechanism with five splits and two repeats. This object will be passed to our predictive models, created with the caret package, to automatically apply this control mechanism within them:

cv.control <- trainControl(method = "repeatedcv", number = 5, repeats = 2)

Our predictive model pick for this example is Random Forests (RF). We will very briefly explain what RF are, but the interested reader is encouraged to look into James, Witten, Hastie, and Tibshirani's excellent An Introduction to Statistical Learning (Springer, 2013). RF are a non-linear model used to generate predictions. A tree is a structure that provides a clear path from inputs to specific outputs through a branching model. In predictive modeling, trees are used to find limited input-space areas that perform well when providing predictions. RF create many such trees and use a mechanism to aggregate the predictions provided by these trees into a single prediction. They are a very powerful and popular machine learning model. Let's have a look at the random forests example: Random forests aggregate trees. To train our model, we use the train() function, passing a formula that signals R to use MULT_PURCHASES as the dependent variable and everything else (~ .) as the independent variables, which are the token frequencies.
It also specifies the data, the method ("rf" stands for random forests), the control mechanism we just created, and the number of tuning scenarios to use:

model.1 <- train(
  MULT_PURCHASES ~ .,
  data = train.dfm.df,
  method = "rf",
  trControl = cv.control,
  tuneLength = 5
)

Improving speed with parallelization If you actually executed the previous code on your computer before reading this, you may have found that it took a long time to finish (8.41 minutes in our case). As we mentioned earlier, text analysis suffers from very high-dimensional structures, which take a long time to process. Furthermore, repeated CV runs the training several times, which adds to the total time. To cut down on the total execution time, use the doParallel package to allow multi-core computers to do the training in parallel and substantially cut down on time. We proceed to create the train_model() function, which takes the data and the control mechanism as parameters. It then makes a cluster object with the makeCluster() function, with a number of available cores (processors) equal to the number of cores in the computer, detected with the detectCores() function. Note that if you're planning on using your computer to do other tasks while you train your models, you should leave one or two cores free to avoid choking your system (you can then use makeCluster(detectCores() - 2) to accomplish this). After that, we start our time-measuring mechanism, train our model, print the total time, stop the cluster, and return the resulting model:

library(doParallel)  # also loads 'parallel', giving us makeCluster(), detectCores(), and registerDoParallel()

train_model <- function(data, cv.control) {
  cluster <- makeCluster(detectCores())
  registerDoParallel(cluster)
  start.time <- Sys.time()
  model <- train(
    MULT_PURCHASES ~ .,
    data = data,
    method = "rf",
    trControl = cv.control,
    tuneLength = 5
  )
  print(Sys.time() - start.time)
  stopCluster(cluster)
  return(model)
}

Now we can retrain the same model much faster. The time reduction will depend on your computer's available resources. In the case of an 8-core system with 32 GB of memory available, the total time was 3.34 minutes instead of the previous 8.41 minutes, which implies that with parallelization, it only took 39% of the original time. Not bad, right? Let's have a look at how the model is trained:

model.1 <- train_model(train.dfm.df, cv.control)
That's why we want a repeated CV as well as various testing scenarios to make sure that our results are robust: model.1 #> Random Forest #> #> 212 samples #> 2007 predictors #> 2 classes: 'FALSE', 'TRUE' #> #> No pre-processing #> Resampling: Cross-Validated (5 fold, repeated 2 times) #> Summary of sample sizes: 170, 169, 170, 169, 170, 169, ... #> Resampling results across tuning parameters: #> #> mtry Accuracy Kappa #> 2 0.6368771 0.00000000 #> 11 0.6439092 0.03436849 #> 63 0.6462901 0.07827322 #> 356 0.6536545 0.16160573 #> 2006 0.6512735 0.16892126 #> #> Accuracy was used to select the optimal model using the largest value. #> The final value used for the model was mtry = 356. To create a confusion matrix, we can use the confusionMatrix() function and send it the model's predictions first and the real values second. This will not only create the confusion matrix for us, but also compute some useful metrics such as sensitivity and specificity. We won't go deep into what these metrics mean or how to interpret them since that's outside the scope of this tutorial, but we highly encourage the reader to study them using the resources cited in this tutorial: confusionMatrix(model.1$finalModel$predicted, train$MULT_PURCHASES) #> Confusion Matrix and Statistics #> #> Reference #> Prediction FALSE TRUE #> FALSE 18 19 #> TRUE 59 116 #> #> Accuracy : 0.6321 #> 95% CI : (0.5633, 0.6971) #> No Information Rate : 0.6368 #> P-Value [Acc > NIR] : 0.5872 #> #> Kappa : 0.1047 #> Mcnemar's Test P-Value : 1.006e-05 #> #> Sensitivity : 0.23377 #> Specificity : 0.85926 #> Pos Pred Value : 0.48649 #> Neg Pred Value : 0.66286 #> Prevalence : 0.36321 #> Detection Rate : 0.08491 #> Detection Prevalence : 0.17453 #> Balanced Accuracy : 0.54651 #> #> 'Positive' Class : FALSE You read an excerpt from R Programming By Example authored by Omar Trejo Navarro. This book gets you familiar with R’s fundamentals and its advanced features to get you hands-on experience with R’s cutting edge tools for software development. Getting Started with Predictive Analytics Here’s how you can handle the bias variance trade-off in your ML models    
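Once you are happy with the cross-validated results, the natural next step is to score data the model has never seen. The object names below (test.dfm.df and test$MULT_PURCHASES) are assumptions mirroring the training objects used in this excerpt; substitute whatever your own held-out split is called:

# Predict on the held-out document-feature matrix and compare with the true labels
test.predictions <- predict(model.1, newdata = test.dfm.df)
confusionMatrix(test.predictions, test$MULT_PURCHASES)

This gives you an unbiased confusion matrix and accuracy estimate, complementing the resampled estimates reported by train().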
article-image-building-first-vue-js-web-application
Kunal Chaudhari
17 Apr 2018
11 min read
Save for later

Building your first Vue.js 2 Web application

Kunal Chaudhari
17 Apr 2018
11 min read
Vue is a relative newcomer in the JavaScript frontend landscape, but a very serious challenger to the current leading libraries. It is simple, flexible, and very fast, while still providing a lot of features and optional tools that can help you build a modern web app efficiently. In today’s tutorial, we will explore Vue.js library and then we will start creating our first web app. Why another frontend framework? Its creator, Evan You, calls it the progressive framework. Vue is incrementally adoptable, with a core library focused on user interfaces that you can use in existing projects You can make small prototypes all the way up to large and sophisticated web applications Vue is approachable-- beginners can pick up the library easily, and confirmed developers can be productive very quickly Vue roughly follows a Model-View-ViewModel architecture, which means the View (the user interface) and the Model (the data) are separated, with the ViewModel (Vue) being a mediator between the two. It handles the updates automatically and has been already optimized for you. Therefore, you don't have to specify when a part of the View should update because Vue will choose the right way and time to do so. The library also takes inspiration from other similar libraries such as React, Angular, and Polymer. The following is an overview of its core features: A reactive data system that can update your user interface automatically, with a lightweight virtual-DOM engine and minimal optimization efforts, is required Flexible View declaration--artist-friendly HTML templates, JSX (HTML inside JavaScript), or hyperscript render functions (pure JavaScript) Composable user interfaces with maintainable and reusable components Official companion libraries that come with routing, state management, scaffolding, and more advanced features, making Vue a non-opinionated but fully fleshed out frontend framework Vue.js - A trending project Evan You started working on the first prototype of Vue in 2013, while working at Google, using Angular. The initial goal was to have all the cool features of Angular, such as data binding and data-driven DOM, but without the extra concepts that make a framework opinionated and heavy to learn and use. The first public release was published on February 2014 and had immediate success the very first day, with HackerNews frontpage, /r/javascript at the top spot and 10k unique visits on the official website. The first major version 1.0 was reached in October 2015, and by the end of that year, the npm downloads rocketed to 382k ytd, the GitHub repository received 11k stars, the official website had 363k unique visitors, and the popular PHP framework Laravel had picked Vue as its official frontend library instead of React. The second major version, 2.0, was released in September 2016, with a new virtual DOM- based renderer and many new features such as server-side rendering and performance improvements. This is the version we will use in this article. It is now one of the fastest frontend libraries, outperforming even React according to a comparison refined with the React team. At the time of writing this article, Vue was the second most popular frontend library on GitHub with 72k stars, just behind React and ahead of Angular 1. The next evolution of the library on the roadmap includes more integration with Vue-native libraries such as Weex and NativeScript to create native mobile apps with Vue, plus new features and improvements. 
Today, Vue is used by many companies such as Microsoft, Adobe, Alibaba, Baidu, Xiaomi, Expedia, Nintendo, and GitLab. Compatibility requirements Vue doesn't have any dependency and can be used in any ECMAScript 5 minimum-compliant browser. This means that it is not compatible with Internet Explorer 8 or less, because it needs relatively new JavaScript features such as Object.defineProperty, which can't be polyfilled on older browsers. In this article, we are writing code in JavaScript version ES2015 (formerly ES6), so you will need a modern browser to run the examples (such as Edge, Firefox, or Chrome). At some point, we will introduce a compiler called Babel that will help us make our code compatible with older browsers. One-minute setup Without further ado, let's start creating our first Vue app with a very quick setup. Vue is flexible enough to be included in any web page with a simple script tag. Let's create a very simple web page that includes the library, with a simple div element and another script tag:

<html>
<head>
<meta charset="utf-8">
<title>Vue Project Guide setup</title>
</head>
<body>
<!-- Include the library in the page -->
<script src="https://unpkg.com/vue/dist/vue.js"></script>
<!-- Some HTML -->
<div id="root">
<p>Is this a Hello world?</p>
</div>
<!-- Some JavaScript -->
<script>
console.log('Yes! We are using Vue version', Vue.version)
</script>
</body>
</html>

In the browser console, we should have something like this: Yes! We are using Vue version 2.0.3 As you can see in the preceding code, the library exposes a Vue object that contains all the features we need to use it. We are now ready to go. Creating an app For now, we don't have any Vue app running on our web page. The whole library is based on Vue instances, which are the mediators between your View and your data. So, we need to create a new Vue instance to start our app:

// New Vue instance
var app = new Vue({
  // CSS selector of the root DOM element
  el: '#root',
  // Some data
  data () {
    return {
      message: 'Hello Vue.js!',
    }
  },
})

The Vue constructor is called with the new keyword to create a new instance. It has one argument--the option object. It can have multiple attributes (called options). For now, we are using only two of them. With the el option, we tell Vue where to add (or "mount") the instance on our web page using a CSS selector. In the example, our instance will use the <div id="root"> DOM element as its root element. We could also use the $mount method of the Vue instance instead of the el option:

var app = new Vue({
  data () {
    return {
      message: 'Hello Vue.js!',
    }
  },
})
// We add the instance to the page
app.$mount('#root')

Most of the special methods and attributes of a Vue instance start with a dollar character. We will also initialize some data in the data option with a message property that contains a string. Now the Vue app is running, but it doesn't do much, yet. You can add as many Vue apps as you like on a single web page. Just create a new Vue instance for each of them and mount them on different DOM elements. This comes in handy when you want to integrate Vue in an existing project. Vue devtools An official debugger tool for Vue is available on Chrome as an extension called Vue.js devtools. It can help you see how your app is running to help you debug your code. You can download it from the Chrome Web Store (https://chrome.google.com/webstore/search/vue) or from the Firefox addons registry (https://addons.mozilla.org/en-US/firefox/addon/vue-js-devtools/?src=ss).
For the Chrome version, you need to set an additional setting. In the extension settings, enable Allow access to file URLs so that it can detect Vue on a web page opened from your local drive: On your web page, open the Chrome Dev Tools with the F12 shortcut (or Shift + command + c on OS X) and search for the Vue tab (it may be hidden in the More tools... dropdown). Once it is opened, you can see a tree with our Vue instance named Root by convention. If you click on it, the sidebar displays the properties of the instance: You can drag and drop the devtools tab to your liking. Don't hesitate to place it among the first tabs, as it will be hidden in the page where Vue is not in development mode or is not running at all. You can change the name of your instance with the name option: var app = new Vue({ name: 'MyApp', // ...         }) This will help you see where your instance in the devtools is when you will have many more: Templates make your DOM dynamic With Vue, we have several systems at our disposal to write our View. For now, we will start with templates. A template is the easiest way to describe a View because it looks like HTML a lot, but with some extra syntax to make the DOM dynamically update very easily. Displaying text The first template feature we will see is the text interpolation, which is used to display dynamic text inside our web page. The text interpolation syntax is a pair of double curly braces containing a JavaScript expression of any kind. Its result will replace the interpolation when Vue will process the template. Replace the <div id="root"> element with the following: <div id="root"> <p>{{ message }}</p> </div> The template in this example has a <p> element whose content is the result of the message JavaScript expression. It will return the value of the message attribute of our instance. You should now have a new text displayed on your web page--Hello Vue.js!. It doesn't seem like much, but Vue has done a lot of work for us here--we now have the DOM wired with our data. To demonstrate this, open your browser console and change the app.message value and press Enter on the keyboard: app.message = 'Awesome!' The message has changed. This is called data-binding. It means that Vue is able to automatically update the DOM whenever your data changes without requiring anything from your part. The library includes a very powerful and efficient reactivity system that keeps track of all your data and is able to update what's needed when something changes. All of this is very fast indeed. Adding basic interactivity with directives Let's add some interactivity to our otherwise quite static app, for example, a text input that will allow the user to change the message displayed. We can do that in templates with special HTML attributes called directives. All the directives in Vue start with v- and follow the kebab-case syntax. That means you should separate the words with a dash. Remember that HTML attributes are case insensitive (whether they are uppercase or lowercase doesn't matter). The directive we need here is v-model, which will bind the value of our <input> element with our message data property. Add a new <input> element with the v-model="message" attribute inside the template: <div id="root"> <p>{{ message }}</p> <!-- New text input --> <input v-model="message" /> </div> Vue will now update the message property automatically when the input value changes. 
You can play with the content of the input to verify that the text updates as you type and that the value in the devtools changes: There are many more directives available in Vue, and you can even create your own. To summarize, we quickly set up a web page to get started using Vue and wrote a simple app. We created a Vue instance to mount the Vue app on the page and wrote a template to make the DOM dynamic. Inside this template, we used a JavaScript expression to display text, thanks to text interpolations. Finally, we added some interactivity with an input element that we bound to our data with the v-model directive. You read an excerpt from a book written by Guillaume Chau, titled Vue.js 2 Web Development Projects. It's a project-based, practical guide to getting hands-on with Vue.js 2.5 development by building beautiful, functional, and performant web applications. Why has Vue.js become so popular? Building a real-time dashboard with Meteor and Vue.js
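As a small closing sketch of the point made earlier that several independent Vue apps can live on one page, the snippet below mounts two instances on two different elements; the element IDs and messages are invented for the example:

<div id="app-a">{{ message }}</div>
<div id="app-b">{{ message }}</div>
<script>
// Each instance manages only the element it is mounted on
var appA = new Vue({
  el: '#app-a',
  data () {
    return { message: 'I am the first instance' }
  },
})
var appB = new Vue({
  el: '#app-b',
  data () {
    return { message: 'I am the second instance' }
  },
})
</script>

Changing appA.message in the console updates only the first div, which is exactly the behavior that makes Vue easy to drop into an existing project piece by piece.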

article-image-raspberry-pi-family-raspberry-pi-zero-w-wireless
Packt Editorial Staff
16 Apr 2018
12 min read
Save for later

Meet the Coolest Raspberry Pi Family Member: Raspberry Pi Zero W Wireless

Packt Editorial Staff
16 Apr 2018
12 min read
In early 2017, the Raspberry Pi community announced a new board with a wireless extension. It is a highly promising board that allows everyone to connect their devices to the Internet, offering wireless functionality so that anyone can develop their own software and hardware projects without extra cables and components. This board is the new toy of any engineer interested in the Internet of Things, security, automation, and more! Comparing the new board with the Raspberry Pi 3 Model B, we can easily see that it is quite small, with many possibilities for the Internet of Things. But what is a Raspberry Pi Zero W and why do you need it? In today’s post, we will cover the following topics:
Overview of the Raspberry Pi family
Introduction to the new Raspberry Pi Zero W
Distributions
Common issues
Raspberry Pi family As said earlier, the Raspberry Pi Zero W is the new member of the Raspberry Pi family of boards. Over the years, the Raspberry Pi has evolved and become more user friendly, with endless possibilities. Let's have a short look at the rest of the family so we can understand what sets the Pi Zero board apart. Right now, the heavy-duty board is named Raspberry Pi 3 Model B. It is the best solution for projects such as face recognition, video tracking, gaming, or anything else that is in demand: It is the 3rd generation of Raspberry Pi boards after the Raspberry Pi 2 and has the following specs:
A 1.2GHz 64-bit quad-core ARMv8 CPU
802.11n Wireless LAN
Bluetooth 4.1
Bluetooth Low Energy (BLE)
Like the Pi 2, it also has 1GB RAM
4 USB ports
40 GPIO pins
Full HDMI port
Ethernet port
Combined 3.5mm audio jack and composite video
Camera interface (CSI)
Display interface (DSI)
Micro SD card slot (now push-pull rather than push-push)
VideoCore IV 3D graphics core
The next board is the Raspberry Pi Zero, on which the Zero W is based. A small, low-cost, low-power board able to do many things: The specs of this board are as follows:
1GHz, single-core CPU
512MB RAM
Mini-HDMI port
Micro-USB OTG port
Micro-USB power
HAT-compatible 40-pin header
Composite video and reset headers
CSI camera connector (v1.3 only)
At this point, we should not forget to mention that apart from the boards mentioned earlier, there are several other modules and components available, such as the Sense HAT or the Raspberry Pi Touch Display, which work great for advanced projects. The 7″ Touchscreen Monitor for Raspberry Pi gives users the ability to create all-in-one, integrated projects such as tablets, infotainment systems, and embedded projects: The Sense HAT is an add-on board for the Raspberry Pi, made especially for the Astro Pi mission. The Sense HAT has an 8×8 RGB LED matrix, a five-button joystick, and includes the following sensors:
Gyroscope
Accelerometer
Magnetometer
Temperature
Barometric pressure
Humidity
Stay tuned for more new boards and modules at the official website: https://www.raspberrypi.org/ Raspberry Pi Zero W The Raspberry Pi Zero W is a small device that can be connected to an external monitor or TV and, of course, to the internet. The operating system varies, as there are many distros on the official page, and almost every one of them is based on Linux. With the Raspberry Pi Zero W you have the ability to do almost everything, from automation to gaming! It is a small computer that allows you to easily program with the help of the GPIO pins and some other components, such as a camera. Its possibilities are endless!
Specifications If you have bought the Raspberry Pi 3 Model B, you will be familiar with the Cypress CYW43438 wireless chip. It provides 802.11n wireless LAN and Bluetooth 4.0 connectivity. The new Raspberry Pi Zero W is equipped with that wireless chip as well. Following are the specifications of the new board:
Dimensions: 65mm × 30mm × 5mm
SoC: Broadcom BCM2835 chip
ARM11 at 1GHz, single-core CPU
512MB RAM
Storage: MicroSD card
Video and Audio: 1080P HD video and stereo audio via mini-HDMI connector
Power: 5V, supplied via micro USB connector
Wireless: 2.4GHz 802.11n wireless LAN
Bluetooth: Bluetooth Classic 4.1 and Bluetooth Low Energy (BLE)
Output: Micro USB
GPIO: 40-pin GPIO, unpopulated
Notice that all the components are on the top side of the board, so you can easily choose your case without any problems and keep it safe. As far as the antenna is concerned, it is formed by etching away copper on each layer of the PCB. It may not be as visible as it is on other similar boards, but it works great and offers quite a lot of functionality: Also, the product is limited to only one piece per buyer and costs $10. You can buy a full kit with a microSD card, a case, and some more extra components for about $45, or choose the camera full kit, which contains a small camera component, for $55. Camera support Image processing projects such as video tracking or face recognition require a camera. Following, you can see the official camera support of the Raspberry Pi Zero W. The camera can easily be mounted at the side of the board using a cable, like on the Raspberry Pi 3 Model B board: Depending on your distribution, you may need to enable the camera through the command line. More information about the usage of this module will be mentioned in the project. Accessories When building projects with the new board, there are some other gadgets that you might find useful to work with. Following is a list of some crucial components. Notice that if you buy the Raspberry Pi Zero W kit, it includes some of them, so be careful and don't buy them twice:
OTG cable
powerHUB
GPIO header
microSD card and card adapter
HDMI to miniHDMI cable
HDMI to VGA cable
Distributions The official site https://www.raspberrypi.org/downloads/ contains several distributions for downloading. The two basic operating systems that we will analyze next are RASPBIAN and NOOBS. Following, you can see what the desktop environment looks like. Both RASPBIAN and NOOBS allow you to choose from two versions. There is the full version of the operating system and the lite one. Obviously, the lite version does not contain everything that you might use, so if you tend to use your Raspberry Pi with a desktop environment, choose and download the full version. On the other hand, if you tend to just SSH in and do some basic stuff, pick the lite one. It's really up to you, and of course, you can easily download again anything you like and re-write your microSD card: NOOBS distribution Download NOOBS: https://www.raspberrypi.org/downloads/noobs/. The NOOBS distribution is for new users without much knowledge of Linux systems and Raspberry Pi boards. As the official page says, it is really "New Out Of the Box Software". There are also pre-installed NOOBS SD cards that you can purchase from many retailers, such as Pimoroni, Adafruit, and The Pi Hut, and of course, you can download NOOBS and write your own microSD card. If you are having trouble with this distribution, take a look at the following links: Full guide at https://www.raspberrypi.org/learning/software-guide/.
View the video at https://www.raspberrypi.org/help/videos/#noobs-setup. The NOOBS operating system contains Raspbian, and it also provides a variety of other operating systems available to download. RASPBIAN distribution Download RASPBIAN: https://www.raspberrypi.org/downloads/raspbian/. Raspbian is the officially supported operating system. It can be installed through NOOBS or by downloading the image file at the following link and going through the guide on the official website. Image file: https://www.raspberrypi.org/documentation/installation/installing-images/README.md. It comes with plenty of software pre-installed, such as Python, Scratch, Sonic Pi, Java, Mathematica, and more! Furthermore, distributions like Ubuntu MATE, Windows 10 IoT Core, or Weather Station are meant to be installed for more specific projects, like Internet of Things (IoT) or weather stations. To conclude, the right distribution to install actually depends on your project and your expertise in Linux systems administration. The Raspberry Pi Zero W needs a microSD card for hosting any operating system. You are able to write Raspbian, NOOBS, Ubuntu MATE, or any other operating system you like. So, all that you need to do is simply write your operating system to that microSD card. First of all, you have to download the image file from https://www.raspberrypi.org/downloads/, which usually comes as a .zip file. Once downloaded, unzip the zip file; the full image is about 4.5 gigabytes. Depending on your operating system, you have to use different programs:
7-Zip for Windows
The Unarchiver for Mac
Unzip for Linux
Now we are ready to write the image to the microSD card. You can easily write the .img file to the microSD card by following one of the next guides, according to your system. For Linux users, the dd tool is recommended. Before connecting your microSD card with your adapter to your computer, run the following command:
df -h
Now connect your card and run the same command again. You must see some new records. For example, if the new device is called /dev/sdd1, keep in mind that the card is at /dev/sdd (without the 1). The next step is to use the dd command and copy the image to the microSD card. We can do this with the following command:
dd if=<path to your image> of=</dev/***>
Here, if is the input file (the image file, or the distribution) and of is the output file (the microSD card). Again, be careful here and use only /dev/sdd, or whatever is yours, without any numbers. If you are having trouble with that, please use the full manual at the following link: https://www.raspberrypi.org/documentation/installation/installing-images/linux.md. A good tool that could help you out with that job is GParted. If it is not installed on your system, you can easily install it with the following command:
sudo apt-get install gparted
Then run sudo gparted to start the tool. It handles partitions very easily, and you can format, delete, or find information about all your mounted partitions. More information about dd can be found here: https://www.raspberrypi.org/documentation/installation/installing-images/linux.md For Mac OS users, the dd tool is also recommended: https://www.raspberrypi.org/documentation/installation/installing-images/mac.md For Windows users, the Win32DiskImager utility is recommended: https://www.raspberrypi.org/documentation/installation/installing-images/windows.md There are several other ways to write an image file to a microSD card; a concrete dd example follows below.
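To make the Linux dd step concrete, here is a hedged example. The image filename is a placeholder for whichever release you downloaded, and /dev/sdd must be replaced with the device that df -h actually reported for your card; writing to the wrong device will destroy its data:

# Example only -- double-check the output device before running!
sudo dd if=/path/to/2017-04-10-raspbian-jessie.img of=/dev/sdd bs=4M
# Flush write buffers so it is safe to remove the card
sync

The bs=4M option simply copies in larger blocks so the write finishes faster; it does not change the result.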
If you run into any kind of problem when following the guides above, feel free to use any other guide available on the Internet. Now, assuming that everything is OK and the image is ready, you can gently plug the microSD card into your Raspberry Pi Zero W board. Remember that you can always confirm that your download was successful with the SHA-1 code. On Linux systems, you can use sha1sum followed by the file name (the image) to print the SHA-1 code, which must be the same as the one at the end of the official page where you downloaded the image. Common issues Sometimes, working with Raspberry Pi boards can lead to issues. We have all faced some of them and hope to never face them again. The Pi Zero is so minimal that it can be tough to tell whether it is working or not. Since there is no LED on the board, a quick check of whether it is working properly or something went wrong is handy. Debugging steps With the following steps, you will probably find its status:
Take your board, with nothing in any slot or socket. Remove even the microSD card!
Take a normal micro-USB to USB-A data sync cable and connect one side to your computer and the other side to the Pi's USB port (not the PWR_IN).
If the Zero is alive: On Windows, the PC will go ding for the presence of new hardware and you should see BCM2708 Boot in Device Manager. On Linux, you will get an ID 0a5c:2763 Broadcom Corp message from dmesg. Try to run dmesg in a Terminal before you plug in the USB cable and after. You will find a new record there. Output example:
[226314.048026] usb 4-2: new full-speed USB device number 82 using uhci_hcd
[226314.213273] usb 4-2: New USB device found, idVendor=0a5c, idProduct=2763
[226314.213280] usb 4-2: New USB device strings: Mfr=1, Product=2, SerialNumber=0
[226314.213284] usb 4-2: Product: BCM2708 Boot
[226314.213] usb 4-2: Manufacturer: Broadcom
If you see any of the preceding, so far so good; you know the Zero's not dead. microSD card issue Remember that if you boot your Raspberry Pi and nothing works, you may have burned your microSD card wrong. This means that your card may not contain a boot partition as it should, and it is not able to boot the first files. That problem occurs when the distribution is burned to /dev/sdd1 and not to /dev/sdd, as it should be. This is a quite common mistake, and there will be no errors on your monitor. It will just not work! Case protection Raspberry Pi boards are electronics, and we never place electronics on metallic surfaces or near magnetic objects. Doing so will affect the booting operation of the Raspberry Pi, and it will probably not work. So, a tip of advice: spend some extra money on the Raspberry Pi case and protect your board from anything like that. There are many problems and issues when hanging your Raspberry Pi using tacks. To summarize, we introduced the new Raspberry Pi Zero board along with the rest of its family and a brief analysis of some extra components that are must-buys as well. This article is an excerpt from the book Raspberry Pi Zero W Wireless Projects written by Vasilis Tzivaras. The Raspberry Pi has always been the go-to, lightweight ARM-based computer. This book will help you design and build interesting DIY projects using the Raspberry Pi Zero W board. Introduction to Raspberry Pi Zero W Wireless Build your first Raspberry Pi project

article-image-what-is-a-support-vector-machine
Packt Editorial Staff
16 Apr 2018
7 min read
Save for later

What is a support vector machine?

Packt Editorial Staff
16 Apr 2018
7 min read
Support vector machines are machine learning algorithms whereby a model 'learns' to categorize data around a linear classifier. The linear classifier is, quite simply, a line that classifies. It's a line that distinguishes between 2 'types' of data, like positive sentiment and negative language. This gives you control over data, allowing you to easily categorize and manage different data points in a way that's useful too. This tutorial is an extract from Statistics for Machine Learning. But support vector machines do more than linear classification - they are multidimensional algorithms, which is why they're so powerful. Using something called a kernel trick, which we'll look at in more detail later, support vector machines are able to create non-linear boundaries. Essentially they work by constructing a more complex linear classifier, called a hyperplane. Support vector machines work on a range of different types of data, but they are most effective on data sets with very high dimensions relative to the number of observations, for example: Text classification, in which language data has the very high dimensions of word vectors For the quality control of DNA sequencing, by labeling chromatograms correctly Different types of support vector machines Support vector machines are generally classified into three different groups: Maximum margin classifiers Support vector classifiers Support vector machines Let's take a look at them now. Maximum margin classifiers People often use the term maximum margin classifier interchangeably with support vector machines. They're the most common type of support vector machine, but as you'll see, there are some important differences. The maximum margin classifier tackles the problem of what happens when your data isn't quite clear or clean enough to draw a simple line between two sets - it helps you find the best line, or hyperplane, out of a range of options. The objective of the algorithm is to find the furthest distance between the two nearest points in two different categories of data - this is the 'maximum margin', and the hyperplane sits comfortably within it. The hyperplane is defined by this equation: So, this means that any data points that sit directly on the hyperplane have to follow this equation. There are also data points that will, of course, fall on either side of this hyperplane. These should follow these equations: You can represent the maximum margin classifier like this: Constraint 2 ensures that observations will be on the correct side of the hyperplane by taking the product of the coefficients with the x variables and, finally, with a class variable indicator. In the diagram below, you can see that we could draw a number of separate hyperplanes to separate the two classes (blue and red). However, the maximum margin classifier attempts to fit the widest slab (maximizing the margin between the positive and negative hyperplanes) between the two classes. The observations touching both the positive and negative hyperplanes are the support vectors. It's important to note that in non-separable cases, the maximum margin classifier will not have a separating hyperplane - there's no feasible solution. This issue will be solved with support vector classifiers. Support vector classifiers Support vector classifiers are an extended version of maximum margin classifiers. Here, some violations are 'tolerated' for non-separable cases. This means a best fit can be created.
In fact, in real-life scenarios, we hardly find any data with purely separable classes; most classes have a few or more observations in overlapping classes. The mathematical representation of the support vector classifier is as follows, a slight correction to the constraints to accommodate error terms: In constraint 4, the C value is a non-negative tuning parameter to either accommodate more or fewer overall errors in the model. Having a high value of C will lead to a more robust model, whereas a lower value creates the flexible model due to less violation of error terms. In practice, the C value would be a tuning parameter as is usual with all machine learning models. The impact of changing the C value on margins is shown in the two diagrams below. With the high value of C, the model would be more tolerating and also have space for violations (errors) in the left diagram, whereas with the lower value of C, no scope for accepting violations leads to a reduction in margin width. C is a tuning parameter in Support Vector Classifiers: Support vector machines Support vector machines are used when the decision boundary is non-linear. It's useful when it becomes impossible to separate with support vector classifiers. The diagram below explains the non-linearly separable cases for both 1-dimension and 2-dimensions: Clearly, you can't classify using support vector classifiers whatever the cost value is. This is why you would want to then introduce something called the kernel trick. In the diagram below, a polynomial kernel with degree 2 has been applied in transforming the data from 1-dimensional to 2-dimensional data. By doing so, the data becomes linearly separable in higher dimensions. In the left diagram, different classes (red and blue) are plotted on X1 only, whereas after applying degree 2, we now have 2-dimensions, X1 and X21 (the original and a new dimension). The degree of the polynomial kernel is a tuning parameter. You need to tune them with various values to check where higher accuracy might be possible with the model: However, in the 2-dimensional case, the kernel trick is applied as below with the polynomial kernel with degree 2. Observations have been classified successfully using a linear plane after projecting the data into higher dimensions: Different types of kernel functions Kernel functions are the functions that, given the original feature vectors, return the same value as the dot product of its corresponding mapped feature vectors. Kernel functions do not explicitly map the feature vectors to a higher-dimensional space, or calculate the dot product of the mapped vectors. Kernels produce the same value through a different series of operations that can often be computed more efficiently. The main reason for using kernel functions is to eliminate the computational requirement to derive the higher-dimensional vector space from the given basic vector space, so that observations be separated linearly in higher dimensions. Why someone needs to like this is, derived vector space will grow exponentially with the increase in dimensions and it will become almost too difficult to continue computation, even when you have a variable size of 30 or so. The following example shows how the size of the variables grows. Here's an example: When we have two variables such as x and y, with a polynomial degree kernel, it needs to compute x2, y2, and xy dimensions in addition. 
Whereas, if we have three variables x, y, and z, then we need to calculate the x2, y2, z2, xy, yz, xz, and xyz vector spaces. You will have realized by this time that adding just one more dimension creates many more combinations. Hence, care needs to be taken to reduce the computational complexity; this is where kernels do wonders. Kernels are defined more formally in the following equation: Polynomial kernels are often used, especially with degree 2. In fact, the inventor of support vector machines, Vladimir N. Vapnik, used a degree 2 kernel for classifying handwritten digits. Polynomial kernels are given by the following equation: Radial Basis Function kernels (sometimes called Gaussian kernels) are a good first choice for problems requiring nonlinear models. A decision boundary that is a hyperplane in the mapped feature space is similar to a decision boundary that is a hypersphere in the original space. The feature space produced by the Gaussian kernel can have an infinite number of dimensions, a feat that would be impossible otherwise. RBF kernels are represented by the following equation: This is sometimes simplified as the following equation: It is advisable to scale the features when using support vector machines, but it is very important when using the RBF kernel. When the gamma value is small, it gives you a pointed bump in the higher dimensions. A larger value gives you a softer, broader bump. A small gamma will give you low bias and high variance solutions; on the other hand, a high gamma will give you high bias and low variance solutions, and that is how you control the fit of the model using RBF kernels: Learn more about support vector machines Support vector machines as a classification engine [read now] 10 machine learning algorithms every engineer needs to know [read now]
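Since this excerpt discusses C, kernels, and gamma but shows no code, here is a minimal, illustrative scikit-learn sketch in Python. The library choice and the toy dataset are assumptions made for this example only, not something used in the book excerpt:

# A small RBF-kernel SVM on a non-linearly separable toy dataset
from sklearn.datasets import make_moons
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

X, y = make_moons(n_samples=500, noise=0.2, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

# Scaling first matters for the RBF kernel, as noted above.
# C controls how many margin violations are tolerated; gamma controls the width of the kernel's "bump".
model = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0, gamma=0.5))
model.fit(X_train, y_train)
print("Test accuracy:", model.score(X_test, y_test))

Trying a few values of C and gamma (for example, via a grid search) is the practical counterpart of the tuning-parameter discussion above.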
article-image-4-encryption-options-for-your-sql-server
Vijin Boricha
16 Apr 2018
7 min read
Save for later

4 Encryption options for your SQL Server

Vijin Boricha
16 Apr 2018
7 min read
In today’s tutorial, we will learn about cryptographic elements like T-SQL functions, service master key, and more. SQL Server cryptographic elements Encryption is the process of obfuscating data by the use of a key or password. This can make the data useless without the corresponding decryption key or password. Encryption does not solve access control problems. However, it enhances security by limiting data loss even if access controls are bypassed. For example, if the database host computer is misconfigured and a hacker obtains sensitive data, that stolen information might be useless if it is encrypted. SQL Server provides the following building blocks for the encryption; based on them you can implement all supported features, such as backup encryption, Transparent Data Encryption, column encryption and so on. We already know what the symmetric and asymmetric keys are. The basic concept is the same in SQL Server implementation. Later in the chapter you will practice how to create and implement all elements from the Figure 9-3. Let me explain the rest of the items. T-SQL functions SQL Server has built in support for handling encryption elements and features in the forms of T-SQL functions. You don't need any third-party software to do that, as you do with other database platforms. Certificates A public key certificate is a digitally-signed statement that connects the data of a public key to the identity of the person, device, or service that holds the private key. Certificates are issued and signed by a certification authority (CA). You can work with self-signed certificates, but you should be careful here. This can be misused for the large set of network attacks. SQL Server encrypts data with a hierarchical encryption. Each layer encrypts the layer beneath it using certificates, asymmetric keys, and symmetric keys. In a nutshell, the previous image means that any key in a hierarchy is guarded (encrypted) with the key above it. In practice, if you miss just one element from the chain, decryption will be impossible. This is an important security feature, because it is really hard for an attacker to compromise all levels of security. Let me explain the most important elements in the hierarchy. Service Master Key SQL Server has two primary applications for keys: a Service Master Key (SMK) generated on and for a SQL Server instance, and a database master key (DMK) used for a database. The SMK is automatically generated during installation and the first time the SQL Server instance is started. It is used to encrypt the next first key in the chain. The SMK should be backed up and stored in a secure, off-site location. This is an important step, because this is the first key in the hierarchy. Any damage at this level can prevent access to all encrypted data in the layers below. When the SMK is restored, the SQL Server decrypts all the keys and data that have been encrypted with the current SMK, and then encrypts them with the SMK from the backup. Service Master Key can be viewed with the following system catalog view: 1> SELECT name, create_date 2> FROM sys.symmetric_keys 3> GO name create_date ------------------------- ----------------------- ##MS_ServiceMasterKey## 2017-04-17 17:56:20.793 (1 row(s) affected) Here is an example of how you can back up your SMK to the /var/opt/mssql/backup folder. Note: In the case that you don't have /var/opt/mssql/backup folder execute all 5 bash lines. In the case you don't have permissions to /var/opt/mssql/backup folder execute all lines without first one. 
# sudo mkdir /var/opt/mssql/backup # sudo chown mssql /var/opt/mssql/backup/ # sudo chgrp mssql /var/opt/mssql/backup/ # sudo /opt/mssql/bin/mssql-conf set filelocation.defaultbackupdir /var/opt/mssql/backup/ # sudo systemctl restart mssql-server 1> USE master 2> GO Changed database context to 'master'. 1> BACKUP SERVICE MASTER KEY TO FILE = '/var/opt/mssql/backup/smk' 2> ENCRYPTION BY PASSWORD = 'S0m3C00lp4sw00rd' 3> --In the real scenarios your password should be more complicated 4> GO exit The next example is how to restore SMK from the backup location: 1> USE master 2> GO Changed database context to 'master'. 1> RESTORE SERVICE MASTER KEY 2> FROM FILE = '/var/opt/mssql/backup/smk' 3> DECRYPTION BY PASSWORD = 'S0m3C00lp4sw00rd' 4> GO You can examine the contents of your SMK with the ls command or some internal Linux file views, such is in Midnight Commander (MC). Basically there is not much to see, but that is the power of encryption. The SMK is the foundation of the SQL Server encryption hierarchy. You should keep a copy at an offsite location. Database master key The DMK is a symmetric key used to protect the private keys of certificates and asymmetric keys that are present in the database. When it is created, the master key is encrypted by using the AES 256 algorithm and a user-supplied password. To enable the automatic decryption of the master key, a copy of the key is encrypted by using the SMK and stored in both the database (user and in the master database). The copy stored in the master is always updated whenever the master key is changed. The next T-SQL code show how to create DMK in the Sandbox database: 1> CREATE DATABASE Sandbox 2> GO 1> USE Sandbox 2> GO 3> CREATE MASTER KEY 4> ENCRYPTION BY PASSWORD = 'S0m3C00lp4sw00rd' 5> GO Let's check where the DMK is with the sys.sysmmetric_keys system catalog view: 1> SELECT name, algorithm_desc 2> FROM sys.symmetric_keys 3> GO name algorithm_desc -------------------------- --------------- ##MS_DatabaseMasterKey## AES_256 (1 row(s) affected) This default can be changed by using the DROP ENCRYPTION BY SERVICE MASTER KEY option of ALTER MASTER KEY. A master key that is not encrypted by the SMK must be opened by using the OPEN MASTER KEY statement and a password. Now that we know why the DMK is important and how to create one, we will continue with the following DMK operations: ALTER OPEN CLOSE BACKUP RESTORE DROP These operations are important because all other encryption keys, on database-level, are dependent on the DMK. We can easily create a new DMK for Sandbox and re-encrypt the keys below it in the encryption hierarchy, assuming that we have the DMK created in the previous steps: 1> ALTER MASTER KEY REGENERATE 2> WITH ENCRYPTION BY PASSWORD = 'S0m3C00lp4sw00rdforN3wK3y' 3> GO Opening the DMK for use: 1> OPEN MASTER KEY 2> DECRYPTION BY PASSWORD = 'S0m3C00lp4sw00rdforN3wK3y' 3> GO Note: If the DMK was encrypted with the SMK, it will be automatically opened when it is needed for decryption or encryption. In this case, it is not necessary to use the OPEN MASTER KEY statement. 
Closing the DMK after use:
1> CLOSE MASTER KEY
2> GO
Backing up the DMK:
1> USE Sandbox
2> GO
1> OPEN MASTER KEY
2> DECRYPTION BY PASSWORD = 'S0m3C00lp4sw00rdforN3wK3y';
3> BACKUP MASTER KEY TO FILE = '/var/opt/mssql/backup/Sandbox-dmk'
4> ENCRYPTION BY PASSWORD = 'fk58smk@sw0h%as2'
5> GO
Restoring the DMK:
1> USE Sandbox
2> GO
1> RESTORE MASTER KEY
2> FROM FILE = '/var/opt/mssql/backup/Sandbox-dmk'
3> DECRYPTION BY PASSWORD = 'fk58smk@sw0h%as2'
4> ENCRYPTION BY PASSWORD = 'S0m3C00lp4sw00rdforN3wK3y';
5> GO
When the master key is restored, SQL Server decrypts all the keys that are encrypted with the currently active master key, and then encrypts these keys with the restored master key.
Dropping the DMK:
1> USE Sandbox
2> GO
1> DROP MASTER KEY
2> GO
You read an excerpt from the book SQL Server on Linux, written by Jasmin Azemović. From this book, you will learn to configure and administer database solutions on Linux. How SQL Server handles data under the hood SQL Server basics Creating reports using SQL Server 2016 Reporting Services
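As a pointer to the next layer of the hierarchy described earlier (certificates and symmetric keys protected by the DMK), the following is a minimal illustrative sketch rather than something from the excerpt; the object names are invented, and it assumes the Sandbox DMK created above exists and is open:

1> USE Sandbox
2> GO
1> -- Certificate protected by the database master key (names are examples only)
2> CREATE CERTIFICATE SandboxCert
3> WITH SUBJECT = 'Sandbox demo certificate'
4> GO
1> -- Symmetric key protected by that certificate, ready for column-level encryption
2> CREATE SYMMETRIC KEY SandboxSymKey
3> WITH ALGORITHM = AES_256
4> ENCRYPTION BY CERTIFICATE SandboxCert
5> GO

With these in place, functions such as EncryptByKey() and DecryptByKey() can be used on individual columns, which is one of the features built on the key hierarchy discussed in this excerpt.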

Behavior Scripting in C# and JavaScript for game developers

Packt Editorial Staff
16 Apr 2018
16 min read
Game behaviors, such as enemy AI, sequences of events, or the rules of a puzzle, are commonly expressed in a scripting language, often in a simple top-to-bottom recipe form, without using objects or much branching. Behavior scripts are usually associated with an object instance in game code, expressed in an object-oriented language such as C++ or C#, which does the work. In today's post, we will introduce you to new classes and behavior scripts, covering the details of a new C# behavior and a new JavaScript behavior. We will further explore:

Wall attack
Declaring public variables
Assigning scripts to objects
Moving the camera

To take your first steps into programming, we will look at a simple example of the same functionality in both C# and JavaScript, the two main programming languages used by Unity developers. It is also possible to write Boo-based scripts, but these are rarely used except by those with existing experience in the language.

To follow the next steps, you may choose either JavaScript or C#, and then continue with your preferred language. To begin, click on the Create button on the Project panel and choose either JavaScript or C# script, or simply click on the Add Component button on the Main Camera's Inspector panel. Your new script will be placed in the Project panel, named NewBehaviourScript, and will show an icon of a page with either JavaScript or C# written on it. When you select your new script, Unity offers a preview of what is already in the script in the Inspector, along with an Edit button that, when clicked, launches the script in the default script editor, MonoDevelop. You can also launch a script in your script editor at any time by double-clicking on its icon in the Project panel.

New behaviour script or class

New scripts can be thought of as a new class in Unity terms. If you are new to programming, think of a class as a set of actions, properties, and other stored information that can be accessed under the heading of its name. For example, a class called Dog may contain properties such as color, breed, size, or gender, and have actions such as rollOver or fetchStick. These properties can be described as variables, while the actions can be written in functions, also known as methods.

In this example, to refer to the breed variable, a property of the Dog class, we might refer to the class it is in, Dog, and use a period (full stop) to refer to this variable, in the following way:

Dog.breed;

If we want to call a function within the Dog class, we might say, for example, the following:

Dog.fetchStick();

We can also add arguments into functions; these aren't the everyday arguments we have with one another! Think of them as modifying the behavior of a function. For example, with our fetchStick function, we might build in an argument that defines how quickly our dog will fetch the stick. This might be called as follows:

Dog.fetchStick(25);

While these are abstract examples, it often helps to transpose coding into commonplace examples in order to make sense of them. As we continue, think back to this example, or come up with some examples of your own, to help train yourself to understand classes of information and their properties. When you write a script in C# or JavaScript, you are writing a new class or classes with their own properties (variables) and instructions (functions) that you can call into play at the desired moment in your games.
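To make the Dog example above concrete, here is a minimal C# sketch of such a class. The property names mirror the text, while the values and method bodies are purely illustrative and are not part of any Unity API:

// A plain C# class illustrating properties (variables) and methods (functions).
// Values and behaviour here are hypothetical, used only to mirror the Dog example above.
public class Dog
{
    // Properties (variables) describing the dog
    public string color = "brown";
    public string breed = "Labrador";
    public float size = 0.6f;      // height in metres, for example
    public string gender = "male";

    // An action (method) that takes no arguments
    public void rollOver()
    {
        // ... play a roll-over animation, for example
    }

    // An action (method) whose argument modifies its behaviour:
    // speed controls how quickly the stick is fetched
    public void fetchStick(float speed)
    {
        // ... move towards the stick at the given speed
    }
}

In practice you would create an instance and call, for example, myDog.fetchStick(25f); the Dog.fetchStick(25) shorthand used in the text simply illustrates the class-dot-member notation. Note also that C# convention usually capitalizes method names; the lowercase names here are kept only to match the example above.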
What's inside a new C# behaviour

When you begin with a new C# script, Unity gives you the following code to get started:

using UnityEngine;
using System.Collections;

public class NewBehaviourScript : MonoBehaviour {

    // Use this for initialization
    void Start () {

    }

    // Update is called once per frame
    void Update () {

    }
}

This begins with the two necessary references to the Unity engine itself:

using UnityEngine;
using System.Collections;

It goes on to establish the class named after the script. With C#, you are required to name your script to match the class declared inside it. This is why you will see public class NewBehaviourScript : MonoBehaviour { at the beginning of a new C# document, as NewBehaviourScript is the default name that Unity gives to newly generated scripts. If you rename your script in the Project panel when it is created, Unity will rewrite the class name in your C# script.

Code in classes

When writing code in C#, most of your functions, variables, and other scripting elements will be placed within the class of a script. Within, in this context, means that it must occur after the class declaration and before the corresponding closing } at the bottom of the script. So, unless told otherwise, while following the instructions, assume that your code should be placed within the class established in the script. In JavaScript this is less relevant, as the entire script is the class; it is simply not declared explicitly.

Basic functions

Unity as an engine has many of its own functions that can be used to call different features of the game engine, and it includes two important ones when you create a new script in C#. Functions (also known as methods) most often start with the void keyword in C#. This is the function's return type, which is the kind of data a function may result in. As most functions are simply there to carry out instructions rather than return information, you will often see void at the beginning of their declaration, which simply means that no data will be returned. Some basic functions are explained as follows:

Start(): This is called when the scene first launches, so it is often used, as the generated comment suggests, for initialization. For example, you may have a core variable that must be set to 0 when the game scene begins, or perhaps a function that spawns your player character in the correct place at the start of a level.
Update(): This is called in every frame that the game runs, and is crucial for checking the state of various parts of your game during this time, as many different conditions of game objects may change while the game is running.

Variables in C#

To store information in a variable in C#, you will use the following syntax:

typeOfData nameOfVariable = value;

Consider the following example:

int currentScore = 5;

Another example would be:

float currentVelocity = 5.86f;

Note that these examples show numerical data: int means integer, that is, a whole number, and float means floating point, that is, a number with a decimal place, which in C# requires the letter f at the end of the value. This syntax is somewhat different from JavaScript; refer to the Variables in JavaScript section.

What's inside a new JavaScript behaviour?

While fulfilling the same role as a C# file, a new empty JavaScript file shows you less, as the entire script itself is considered to be the class. The empty space in the script is considered to be within the opening and closing of the class, as the class declaration itself is hidden.
You will also note that the lines using UnityEngine; and using System.Collections; are hidden in JavaScript, so in a new JavaScript file you will simply be shown the Update() function:

function Update () {

}

Note that in JavaScript you declare functions differently, using the keyword function before the name. You will also need to declare variables and various other scripted elements with a slightly different syntax. We will look at examples of this as we progress.

Variables in JavaScript

The syntax for variables in JavaScript works as follows, and is always preceded by the prefix var, as shown:

var variableName : TypeOfData = value;

For example:

var currentScore : int = 0;

Another example is:

var currentVelocity : float = 5.86;

As you may have noticed, the float value does not require the letter f after it, as it does in C#. As you compare scripts written in the two languages later on, you will notice that C# often has stricter rules about how scripts are written, especially regarding explicitly stating the types of data being used.

Comments

In both C# and JavaScript in Unity, you can write comments using:

// two forward slash symbols for a single-line comment

Another way of doing this would be:

/* forward slash, star to open a multi-line comment, and at the end of it, star, forward slash to close */

You may write comments in the code to help you remember what each part does as you progress. Remember that because comments are not executed as code, you can write whatever you like, including pieces of code. As long as they are contained within a comment, they will never be treated as working code.

Wall attack

Now let's put some of your new scripting knowledge into action and turn our existing scene into an interactive gameplay prototype. In the Project panel in Unity, rename your newly created script Shooter by selecting it, pressing return (Mac) or F2 (Windows), and typing in the new name. If you are using C#, remember to ensure that the class declaration inside the script matches the new name of the script:

public class Shooter : MonoBehaviour {

As mentioned previously, JavaScript users will not need to do this. To kick-start your knowledge of scripting in Unity, we will write a script to control the camera and allow shooting of a projectile at the wall that we have built. To begin with, we will establish three variables:

bullet: This is a variable of type Rigidbody, as it will hold a reference to a physics-controlled object we will make
power: This is a floating point number we will use to set the power of shooting
moveSpeed: This is another floating point number we will use to define the speed of movement of the camera using the arrow keys

These variables must be public member variables in order for them to display as adjustable settings in the Inspector. You'll see this in action very shortly!

Declaring public variables

Public variables are important to understand, as they allow you to create variables that will be accessible from other scripts, an important part of game development because it allows for simpler inter-object communication. Public variables are also really useful as they appear as settings you can adjust visually in the Inspector once your script is attached to an object. Private variables are the opposite: designed to be accessible only within the scope of the script, class, or function they are defined in, and they do not appear as settings in the Inspector.
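As a quick illustration of that difference, here is a minimal C# sketch; the class and variable names are hypothetical and separate from the Shooter script we are building:

using UnityEngine;

// A hypothetical example class, not part of the Shooter script.
public class VisibilityExample : MonoBehaviour {

    // Public: shown in the Inspector and accessible from other scripts
    public float jumpHeight = 2f;

    // Private: hidden from the Inspector and only usable inside this class
    private int internalCounter = 0;

    void Update () {
        // Both variables are freely usable inside the class
        internalCounter++;
    }
}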
C#

Before we begin, as we will not be using it, remove the Start() function from this script by deleting void Start () { }. To establish the required variables, put the following code snippet into your script after the opening of the class, shown as follows:

using UnityEngine;
using System.Collections;

public class Shooter : MonoBehaviour {

    public Rigidbody bullet;
    public float power = 1500f;
    public float moveSpeed = 2f;

    void Update () {

    }
}

Note that in this example, the default explanatory comments and the Start() function have been removed in order to save space.

JavaScript

In order to establish public member variables in JavaScript, you simply need to ensure that your variables are declared outside of any existing function. This is usually done at the top of the script, so to declare the three variables we need, add the following to the top of your new Shooter script so that it looks like this:

var bullet : Rigidbody;
var power : float = 1500;
var moveSpeed : float = 5;

function Update () {

}

Note that JavaScript (UnityScript) is much less declarative and needs less typing to get started.

Assigning scripts to objects

In order for this script to be used within our game, it must be attached as a component of one of the game objects in the existing scene. Save your script by choosing File | Save from the top menu of your script editor and return to Unity. There are several ways to assign a script to an object in Unity:

Drag it from the Project panel and drop it onto the name of an object in the Hierarchy panel.
Drag it from the Project panel and drop it onto the visual representation of the object in the Scene panel.
Select the object you wish to apply the script to, then drag and drop the script into the empty space at the bottom of the Inspector view for that object.
Select the object you wish to apply the script to, then choose Component | Scripts | and the name of your script from the top menu.

The most common method is the first approach, and it is the most appropriate here, since trying to drag to the camera in the Scene view, for example, would be difficult, as the camera itself doesn't have a tangible surface to drag to. For this reason, drag your new Shooter script from the Project panel and drop it onto the name of Main Camera in the Hierarchy to assign it, and you should see your script appear as a new component, following the existing Audio Listener component. You will also see its three public variables, bullet, power, and moveSpeed, in the Inspector.

You can alternatively work directly in the Inspector: press the Add Component button and search for Shooter by typing in the search box. Note that this only applies if you didn't add the component this way initially; in that case, the Shooter component will already be attached to the camera GameObject.

As you will see, Unity has taken the variable names and given them capital letters, and in the case of our moveSpeed variable, the capital letter in the middle of the phrase signifies the start of a new word, so the Inspector places a space between the two words when showing it as a public variable. You can also see here that the bullet variable is not yet set, but it is expecting an object to be assigned to it that has a Rigidbody attached; this is often referred to as a Rigidbody object. Despite the fact that, in Unity, all objects in the scene can be referred to as game objects, when describing an object as a Rigidbody object in scripting, we will only be able to refer to properties and functions of the Rigidbody class.
This is not a problem, however; it simply makes our script more efficient than referring to the entire GameObject class. For more on this, take a look at the script reference documentation for both classes:

GameObject
Rigidbody

Beware that when adjusting values of public variables in the Inspector, any values you change will simply override those written in the script, rather than replacing them.

Let's continue working on our script and add some interactivity, so return to your script editor now.

Moving the camera

Next, we will make use of the moveSpeed variable combined with keyboard input in order to move the camera and effectively create a primitive aiming of our shot, as we will use the camera as the point to shoot from. As we want to use the arrow keys on the keyboard, we need to be aware of how to address them in the code first. Unity has many inputs that can be viewed and adjusted using the Input Manager; choose Edit | Project Settings | Input.

Two of the default settings for input are Horizontal and Vertical. These rely on an axis-based input that, when holding the Positive Button, builds to a value of 1, and when holding the Negative Button, builds to a value of -1. Releasing either button means that the input's value springs back to 0, as it would if using a sprung analog joystick on a gamepad. As Input is also the name of a class, and all named elements in the Input Manager are axes or buttons, in scripting terms we can simply use:

Input.GetAxis("Horizontal");

This receives the current value of the horizontal keys, that is, a value between -1 and 1, depending upon what the user is pressing. Let's put that into practice in our script now, using local variables to represent our axes. By doing this, we can modify the value of this variable later using multiplication, taking it from a maximum value of 1 to a higher number, allowing us to move the camera faster than 1 unit at a time. This variable is not something that we will ever need to set inside the Inspector, as Unity assigns its value based on our key input. As such, these values can be established as local variables.

Local, private, and public variables

Before we continue, let's take an overview of local, private, and public variables in order to cement your understanding:

Local variables: These are variables established inside a function; they will not be shown in the Inspector, and are only accessible to the function they are in.
Private variables: These are established outside a function, and are therefore accessible to any function within your class. However, they are not visible in the Inspector.
Public variables: These are established outside a function, are accessible to any function in their class and to other scripts, and are also visible for editing in the Inspector.

Local variables and receiving input

The local variables in C# and JavaScript are shown as follows:

C#

Here is the code for C#:

void Update () {
    float h = Input.GetAxis("Horizontal") * Time.deltaTime * moveSpeed;
    float v = Input.GetAxis("Vertical") * Time.deltaTime * moveSpeed;

JavaScript

Here is the code for JavaScript:

function Update () {
    var h : float = Input.GetAxis("Horizontal") * Time.deltaTime * moveSpeed;
    var v : float = Input.GetAxis("Vertical") * Time.deltaTime * moveSpeed;

The variables declared here, h for Horizontal and v for Vertical, could be named anything we like; it is simply quicker to write single letters.
Generally speaking, though, we would normally give these more descriptive names; note that some letters are best avoided as variable names, for example, x, y, and z, because they are conventionally used for coordinate values and are effectively reserved for that purpose. As these axes' values can be anything from -1 to 1, they are likely to be numbers with a decimal place, and as such, we must declare them as floating point variables. They are then multiplied, using the * symbol, by Time.deltaTime, which effectively divides the value by the number of frames per second (deltaTime is the time from one frame to the next, or the time taken since the Update() function last ran). This means that the value adds up to a consistent amount per second, regardless of the framerate. The resulting value is then scaled by multiplying it by the public variable we made earlier, moveSpeed. So although the values of h and v are local variables, we can still affect them by adjusting the public moveSpeed in the Inspector, as it is part of the equation those variables represent. This is a common practice in scripting, as it takes advantage of publicly accessible settings combined with specific values generated by a function.

You read an excerpt from the book Unity 5.x Game Development Essentials, Third Edition, written by Tommaso Lintrami. Unity is the most popular game engine among indie developers, start-ups, and medium to large independent game development companies. This book is a complete exercise in game development covering environments, physics, sound, particles, and much more, to get you up and running with Unity rapidly.

Scripting Strategies
Unity 3.x Scripting-Character Controller versus Rigidbody