
How-To Tutorials


How to dockerize an ASP.NET Core application

Aaron Lazar
27 Apr 2018
5 min read
There are many reasons why you might want to dockerize an ASP.NET Core application, but ultimately it's simply going to make life much easier for you. It's great for isolating components, especially if you're building microservices or planning to deploy your application to the cloud. So, if you want an easier life (possibly), follow this tutorial to learn how to dockerize an ASP.NET Core application.

Get started: Dockerize an ASP.NET Core application

Create a new ASP.NET Core Web Application in Visual Studio 2017 and click OK. On the next screen, select Web Application (Model-View-Controller) or any type you like, while ensuring that ASP.NET Core 2.0 is selected from the drop-down list. Then check the Enable Docker Support checkbox. This will enable the OS drop-down list. Select Windows here and then click on the OK button.

If you see a message asking you to switch to Windows containers, it is because you have probably kept the default container setting for Docker as Linux. If you right-click on the Docker icon in the taskbar, you will see that you have an option to enable Windows containers there too. You can switch to Windows containers from the Docker icon in the taskbar by clicking on the Switch to Windows containers option. Switching to Windows containers may take several minutes to complete, depending on your line speed and the hardware configuration of your PC. If, however, you don't click on this option, Visual Studio will ask you to change to Windows containers when you select Windows as the OS platform. There is a good reason that I am choosing Windows containers as the target OS; this will become clear later on in the chapter when working with Docker Hub and automated builds.

After your ASP.NET Core application is created, you will see the project setup in Solution Explorer. The Docker support that is added to Visual Studio comes not only in the form of the Dockerfile, but also in the form of the Docker configuration information. This information is contained in the global docker-compose.yml file at the solution level.

Clicking on the Dockerfile in Solution Explorer, you will see that it doesn't look complicated at all. Remember, the Dockerfile is the file that creates your image. The image is a read-only template that outlines how to create a Docker container. The Dockerfile, therefore, contains the steps needed to generate the image and run it. The instructions in the Dockerfile create layers in the image. This means that if anything changes in the Dockerfile, only the layers that have changed will be rebuilt when the image is rebuilt. The Dockerfile looks as follows:

FROM microsoft/aspnetcore:2.0-nanoserver-1709 AS base
WORKDIR /app
EXPOSE 80

FROM microsoft/aspnetcore-build:2.0-nanoserver-1709 AS build
WORKDIR /src
COPY *.sln ./
COPY DockerApp/DockerApp.csproj DockerApp/
RUN dotnet restore
COPY . .
WORKDIR /src/DockerApp
RUN dotnet build -c Release -o /app

FROM build AS publish
RUN dotnet publish -c Release -o /app

FROM base AS final
WORKDIR /app
COPY --from=publish /app .
ENTRYPOINT ["dotnet", "DockerApp.dll"]

When you have a look at the menu in Visual Studio 2017, you will notice that the Run button has been changed to Docker. When you click on the Docker button to debug your ASP.NET Core application, you will notice a few things popping up in the Output window. Of particular interest is the IP address at the end.
In my case, it reads Launching http://172.24.12.112 (yours will differ): When the browser is launched, you will see that the ASP.NET Core application is running at the IP address listed previously in the Output window. Your ASP.NET Core application is now running inside of a Windows Docker container: This is great and really easy to get started with. But what do you need to do to Dockerize an ASP.NET Core application that already exists? As it turns out, this isn't as difficult as you may think. How to add Docker support to an existing .NET Core application Imagine that you have an ASP.NET Core application without Docker support. To add Docker support to this existing application, simply add it from the context menu: To add Docker support to an existing ASP.NET Core application, you need to do the following: Right-click on your project in Solution Explorer Click on the Add menu item Click on Docker Support in the fly-out menu: Visual Studio 2017 now asks you what the target OS is going to be. In our case, we are going to target Windows: After clicking on the OK button, Visual Studio 2017 will begin to add the Docker support to your project: It's actually extremely easy to create ASP.NET Core applications that have Docker support baked in, and even easier to add Docker support to existing ASP.NET Core applications. Lastly, if you experience any issues, such as file access issues, ensure that your antivirus software has excluded your Dockerfile from scanning. Also, make sure that you run Visual Studio as Administrator. This tutorial has been taken from C# 7 and .NET Core Blueprints. More Docker tutorials Building Docker images using Dockerfiles How to install Keras on Docker and Cloud ML
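To round off the Docker tutorial above: the generated multi-stage Dockerfile can also be built and run without Visual Studio, straight from the Docker CLI. This is only a sketch — the image and container names are placeholders, it assumes Docker is already switched to Windows containers, and it assumes you run the commands from the solution folder (the build context the Dockerfile expects, since it copies the .sln file).

# Build the image from the solution folder; adjust the Dockerfile path to match your project
docker build -t dockerapp:dev -f DockerApp/Dockerfile .

# Run a container, mapping host port 8080 to the container's port 80
docker run -d -p 8080:80 --name dockerapp-test dockerapp:dev

# On older Windows container hosts you may need the container IP rather than localhost
docker ps
docker inspect dockerapp-test

# Clean up when you are done
docker rm -f dockerapp-test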


How to deploy a Node.js application to the web using Heroku

Sunith Shetty
26 Apr 2018
19 min read
Heroku is a tool that helps you manage cloud hosted web applications. It's a really great service. It makes creating, deploying, and updating apps really easy. Now Heroku, like GitHub, does not require a credit card to sign up and there is a free tier, which we'll use. They have paid plans for just about everything, but we can get away with the free tier for everything we'll do in this section. In this tutorial, you'll learn to deploy your live Node.js app to the Web using Heroku. By the end of this tutorial, you'll have the URL that you can share with anybody to view the application from their browser. Installing Heroku command-line tools To kick things off, we'll open up the browser and go to Heroku's website here. Here we can go ahead and sign up for a new account. Take a quick moment to either log in to your existing one or sign up for a new one. Once logged in, it'll show you the dashboard. Now your dashboard will look something like this: Although there might be a greeting telling you to create a new application, which you can ignore. I have a bunch of apps. You might not have these. That is perfectly fine. The next thing we'll do is install the Heroku command-line tools. This will let us create apps, deploy apps, open apps, and do all sorts of really cool stuff from the Terminal, without having to come into the web app. That will save us time and make development a lot easier. We can grab the download by going to toolbelt.heroku.com. Here we're able to grab the installer for whatever operating system, you happen to be running on. So, let's start the download. It's a really small download so it should happen pretty quickly. Once it's done, we can go ahead and run through the process: This is a simple installer where you just click on Install. There is no need to customize anything. You don't have to enter any specific information about your Heroku account. Let's go ahead and complete the installer. This will give us a new command from the Terminal that we can execute. Before we can do that, we do have to log in locally in the Terminal and that's exactly what we'll do next. Log in to Heroku account locally Now we will start off the Terminal. If you already have it running, you might need to restart it in order for your operating system to recognize the new command. You can test that it got installed properly by running the following command: heroku --help When you run this command, you'll see that it's installing the CLI for the first time and then we'll get all the help information. This will tell us what commands we have access to and exactly how they work: Now we will need to log in to the Heroku account locally. This process is pretty simple. In the preceding code output, we have all of the commands available and one of them happens to be login. We can run heroku login just like this to start the process: heroku login I'll run the login command and now we just use the email and password that we had set up before: I'll type in my email and password. Typing for Password is hidden because it's secure. And when I do that you see Logged in as garyngreig@gmail.com shows up and this is fantastic: Now we're logged in and we're able to successfully communicate between our machine's command line and the Heroku servers. This means we can get started creating and deploying applications. 
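If you want to double-check that the CLI install and login worked before moving on, the whole exchange is just a few commands from the Terminal. A quick sketch (the auth:whoami command is available in current versions of the Heroku CLI; the email it prints will of course be your own):

heroku --help        # confirms the CLI is on your PATH and lists the available commands
heroku login         # prompts for your Heroku email and password
heroku auth:whoami   # prints the email of the account you are currently logged in as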
Getting SSH key to Heroku Now before going ahead, we'll use the clear command to clear the Terminal output and get our SSH key on Heroku, kind of like what we did with GitHub, only this time we can do it via the command line. So it's going to be a lot easier. In order to add our local keys to Heroku, we'll run the heroku keys:add command. This will scan our SSH directory and add the key up: heroku keys:add Here you can see it found a key the id_rsa.pub file: Would you like to upload it to Heroku? Type Yes and hit enter: Now we have our key uploaded. That is all it took. Much easier than it was to configure with GitHub. From here, we can use the heroku keys command to print all the keys currently on our account: heroku keys We could always remove them using heroku keys:remove command followed by the email related to that key. In this case, we'll keep the Heroku key that we have. Next up, we can test our connection using SSH with the v flag and git@heroku.com: ssh -v git@heroku.com This will communicate with the Heroku servers: As shown, we can see it's asking that same question: The authenticity of the host 'heroku.com' can't be established, Are you sure you want to continue connecting? Type Yes. You will see the following output: Now when you run that command, you'll get a lot of cryptic output. What you're looking for is authentication succeeded and then public key in parentheses. If things did not go well, you'll see the permission denied message with public key in parentheses. In this case, the authentication was successful, which means we are good to go. I'll run clear again, clearing the Terminal output. Setting up in the application code for Heroku Now we can turn our attention towards the application code because before we can deploy to Heroku, we will need to make two changes to the code. These are things that Heroku expects your app to have in place in order to run properly because Heroku does a lot of things automatically, which means you have to have some basic stuff set up for Heroku to work. It's not too complex—some really simple changes, a couple one-liners. Changes in the server.js file First up in the server.js file down at the very bottom of the file, we have the port and our app.listen statically coded inside server.js: app.listen(3000, () => { console.log('Server is up on port 3000'); }); We need to make this port dynamic, which means we want to use a variable. We'll be using an environment variable that Heroku is going to set. Heroku will tell your app which port to use because that port will change as you deploy your app, which means that we'll be using that environment variable so we don't have to swap out our code every time we want to deploy. With environment variables, Heroku can set a variable on the operating system. Your Node app can read that variable and it can use it as the port. Now all machines have environment variables. You can actually view the ones on your machine by running the env command on Linux or macOS or the set command on Windows. What you'll get when you do that is a really long list of key-value pairs, and this is all environment variables are: Here, we have a LOGNAME environment variable set to Andrew. I have a HOME environment variable set to my home directory, all sorts of environment variables throughout my operating system. One of these that Heroku is going to set is called PORT, which means we need to go ahead and grab that port variable and use it in server.js instead of 3000. 
Up at the very top of the server.js file, we'll make a constant called port, and this will store the port that we'll use for the app:

const express = require('express');
const hbs = require('hbs');
const fs = require('fs');

const port

Now the first thing we'll do is grab a port from process.env. The process.env is an object that stores all our environment variables as key-value pairs. We're looking for one that Heroku is going to set called PORT:

const port = process.env.PORT;

This is going to work great for Heroku, but when we run the app locally, the PORT environment variable is not going to exist, so we'll set a default using the OR (||) operator in this statement. If process.env.PORT does not exist, we'll set port equal to 3000 instead:

const port = process.env.PORT || 3000;

Now we have an app that's configured to work with Heroku and to still run locally, just like it did before. All we have to do is take the port variable and use that in app.listen instead of 3000. As shown, I'm going to reference port and, inside our message, I'll swap the string out for a template string so I can replace 3000 with the injected port variable, which will change over time:

app.listen(port, () => {
  console.log(`Server is up on port ${port}`);
});

With this in place, we have now fixed the first problem with our app. I'll now run node server.js from the Terminal:

node server.js

We still get the exact same message: Server is up on port 3000, so your app still works locally as expected.

Changes in the package.json file

Next up, we have to specify a script in package.json. Inside package.json, you might have noticed we have a scripts object, and in there we have a test script. This gets set by default for npm. We can create all sorts of scripts inside the scripts object that do whatever we like. A script is nothing more than a command that we run from the Terminal, so we could take this command, node server.js, and turn it into a script instead, and that's exactly what we're going to do. Inside the scripts object, we'll add a new script. The script needs to be called start. This is a very specific, built-in script and we'll set it equal to the command that starts our app. In this case, it will be node server.js:

"start": "node server.js"

This is necessary because when Heroku tries to start our app, it will not run Node with your file name, because it doesn't know what your file is called. Instead, it will run the start script, and the start script will be responsible for doing the proper thing; in this case, booting up that server file. Now we can run our app using that start script from the Terminal with the following command:

npm start

When I do that, we get a little output related to npm and then we get Server is up on port 3000. The big difference is that we are now ready for Heroku. We could also run the test script from the Terminal using npm test:

npm test

Now, we have no tests specified and that is expected.

Making a commit in Heroku

The next step in the process will be to make the commit and then we can finally start getting it up on the Web. First up, git status. When we run git status, we have something a little new: instead of new files, we have modified files, as shown in the code output. We have a modified package.json file and we have a modified server.js file. These are not going to be committed if we were to run a git commit just yet; we still have to use git add. What we'll do is run git add with the dot as the next argument.
Dot is going to add every single thing showing up and get status to the next commit. Now I only recommend using the syntax of everything you have listed in the Changes not staged for commit header. These are the things you actually want to commit, and in our case, that is indeed what we want. If I run git add and then a rerun git status, we can now see what is going to be committed next, under the Changes to be committed header: Here we have our package.json file and the server.js file. Now we can go ahead and make that commit. I'll run a git commit command with the m flag so we can specify our message, and a good message for this commit would be something like Setup start script and heroku Port: git commit -m 'Setup start script and heroku port' Now we can go ahead and run that command, which will make the commit. Now we can go ahead and push that up to GitHub using the git push command, and we can leave off the origin remote because the origin is the default remote. I'll go ahead and run the following command: git push This will push it up to GitHub, and now we are ready to actually create the app, push our code up, and view it over in the browser: Running the Heroku create command The next step in the process will be to run a command called heroku create from the Terminal. heroku create needs to get executed from inside your application: heroku create Just like we run our Git commands, when I run heroku create, a couple things are going to happen: First up, it's going to make a real new application over in the Heroku web app It's also going to add a new remote to your Git repository Now remember we have an origin remote, which points to our GitHub repository. We'll have a Heroku remote, which points to our Heroku Git repository. When we deploy to the Heroku Git repository, Heroku is going to see that. It will take the changes and it will deploy them to the Web. When we run Heroku create, all of that happens: Now we do still have to push up to this URL in order to actually do the deploying process, and we can do that using git push followed by heroku: git push heroku The brand new remote was just added because we ran heroku create. Now pushing it this time around will go through the normal process. You'll then start seeing some logs. These are logs coming back from Heroku letting you know how your app is deploying. It's going through the entire process, showing you what happens along the way. This will take about 10 seconds and at the very end we have a success message—Verifying deploy... done: It also verified that the app was deployed successfully and that did indeed pass. From here we actually have a URL we can visit (https://sleepy-retreat-32096.herokuapp.com/). We can take it, copy it, and paste it in the browser. What I'll do instead is use the following command: heroku open The heroku open will open up the Heroku app in the default browser. When I run this, it will switch over to Chrome and we get our application showing up just as expected: We can switch between pages and everything works just like it did locally. Now we have a URL and this URL was given to us by Heroku. This is the default way Heroku generates app URLs. If you have your own domain registration company, you can go ahead and configure its DNS to point to this application. This will let you use a custom URL for your Heroku app. You'll have to refer to the specific instructions for your domain registrar in order to do that, but it can indeed be done. 
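Collected in one place, the commands used in this deployment workflow look like the following sketch. The commit message is just the one used above, and depending on your Git configuration you may need to spell out the branch, for example git push heroku master.

git status                                      # review what changed
git add .                                       # stage everything listed under "Changes not staged for commit"
git commit -m 'Setup start script and heroku port'
git push                                        # push to the default origin remote on GitHub
heroku create                                   # create the Heroku app and add the heroku Git remote
git push heroku                                 # deploy; Heroku installs modules, builds, and releases the app
heroku open                                     # open the generated app URL in your default browser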
Now that we have this in place, we have successfully deployed our Node applications live to Heroku, and this is just fantastic. In order to do this, all we had to do is make a commit to change our code and push it up to a new Git remote. It could not be easier to deploy our code. You can also manage your application by going back over to the Heroku dashboard. If you give it a refresh, you should see that brand new URL somewhere on the dashboard. Remember mine was sleepy retreat. Yours is going to be something else. If I click on the sleepy retreat, I can view the app page: Here we can do a lot of configuration. We can manage Activity and Access so we can collaborate with others. We have metrics, we have Resources, all sorts of really cool stuff. With this in place, we are now done with our basic deploying section. In the next section, your challenge will be to go through that process again. You'll add some changes to the Node app. You'll commit them, deploy them, and view them live in the Web. We'll get started by creating the local changes. That means I'll register a new URL right here using app.get. We'll create a new page/projects, which is why I have that as the route for my HTTP get handler. Inside the second argument, we can specify our callback function, which will get called with request and response, and like we do for the other routes above, the root route and our about route, we'll be calling response.render to render our template. Inside the render arguments list, we'll provide two. The first one will be the file name. The file doesn't exist, but we can still go ahead and call render. I'll call it projects.hbs, then we can specify the options we want to pass to the template. In this case, we'll set page title, setting it equal to Projects with a capital P. Excellent! Now with this in place, the server file is all done. There are no more changes there. What I'll do is go ahead and go to the views directory, creating a new file called projects.hbs. In here, we'll be able to configure our template. To kick things off, I'm going to copy the template from the about page. Since it's really similar, I'll copy it. Close about, paste it into projects, and I'm just going to change this text to project page text would go here. Then we can save the file and make our last change. The last thing we want to do is update the header. We now have a brand new projects page that lives at /projects. So we'll want to go ahead and add that to the header links list. Right here, I'll create a new paragraph tag and then I'll make an anchor tag. The text for the link will be Projects with a capital P and the href, which is the URL to visit when that link is clicked. We'll set that equal to /projects, just like we did for about, where we set it equal to /about. Now that we have this in place, all our changes are done and we are ready to test things out locally. I'll fire up the app locally using Node with server.js as the file. To start, we're up on localhost 3000. So over in the browser, I can move to the localhost tab, as opposed to the Heroku app tab, and click on Refresh. Right here we have Home, which goes to home, we have About which goes to about, and we have Projects which does indeed go to /projects, rendering the projects page. Project page text would go here. With this in place we're now done locally. We have the changes, we've tested them, now it's time to go ahead and make that commit. That will happen over inside the Terminal. I'll shut down the server and run Git status. 
This will show me all the changes to my repository as of the last commit. I have two modified files, the server file and the header file, and I have my brand new projects file. All of this looks great. I want to add all of this to the next commit, so I can use git add with the . to do just that. Now, before I actually make the commit, I do like to check that the proper things got added by running git status. Right here I can see my changes to be committed are showing up in green. Everything looks great.

Next up, we'll run git commit to actually make the commit. This is going to save all of the changes into the Git repository. A message for this one would be something like "adding a project page". With the commit made, the next thing to do is push it up to GitHub. This will back our code up and let others collaborate on it. I'll use git push to do just that. Remember, we can leave off the origin remote as origin is the default remote, so if you leave off a remote it'll just use that anyway.

With our GitHub repository updated, the last thing to do is deploy to Heroku, and we do that by pushing the Git repository, using git push, to the heroku remote. When we do this, we get our long list of logs as the Heroku server goes through the process of installing our npm modules, building the app, and actually deploying it. Once it's done, we'll get brought back to the Terminal, and then we can open up the URL in the browser. I can copy it from here or run heroku open. Since I already have a tab open with the URL in place, I'll simply give it a refresh. You might notice a little delay as you refresh your app: starting up the app right after a new deploy can take about 10 to 15 seconds, and that will only happen on the first visit. After that, clicking the Refresh button should reload the page instantly. Now we have the projects page and, if I visit it, everything looks awesome. The navbar is working great and the projects page is indeed rendering at /projects.

With this in place, we are now done. We've gone through the process of adding a new feature, testing it locally, making a Git commit, pushing it up to GitHub, and deploying it to Heroku. We now have a workflow for building real-world web applications using Node.js.

This tutorial has been taken from Learning Node.js Development.

More Heroku tutorials: Deploy a Game to Heroku; Managing Heroku from the command line


Creating and deploying an Amazon Redshift cluster

Vijin Boricha
26 Apr 2018
7 min read
Amazon Redshift is one of the database as a service (DBaaS) offerings from AWS that provides a massively scalable data warehouse as a managed service, at significantly lower costs. The data warehouse is based on the open source PostgreSQL database technology. However, not all features offered in PostgreSQL are present in Amazon Redshift. Today, we will learn about Amazon Redshift and perform a few steps to create a fully functioning Amazon Redshift cluster. We will also take a look at some of the essential concepts and terminologies that you ought to keep in mind when working with Amazon Redshift: Clusters: Just like Amazon EMR, Amazon Redshift also relies on the concept of clusters. Clusters here are logical containers containing one or more instances or compute nodes and one leader node that is responsible for the cluster's overall management. Here's a brief look at what each node provides: Leader node: The leader node is a single node present in a cluster that is responsible for orchestrating and executing various database operations, as well as facilitating communication between the database and associate client programs. Compute node: Compute nodes are responsible for executing the code provided by the leader node. Once executed, the compute nodes share the results back to the leader node for aggregation. Amazon Redshift supports two types of compute nodes: dense storage nodes and dense compute nodes. The dense storage nodes provide standard hard disk drives for creating large data warehouses; whereas, the dense compute nodes provide higher performance SSDs. You can start off by using a single node that provides 160 GB of storage and scale up to petabytes by leveraging one or more 16 TB capacity instances as well. Node slices: Each compute node is partitioned into one or more smaller chunks or slices by the leader node, based on the cluster's initial size. Each slice contains a portion of the compute nodes memory, CPU and disk resource, and uses these resources to process certain workloads that are assigned to it. The assignment of workloads is again performed by the leader node. Databases: As mentioned earlier, Amazon Redshift provides a scalable database that you can leverage for a data warehouse, as well as analytical purposes. With each cluster that you spin in Redshift, you can create one or more associated databases with it. The database is based on the open source relational database PostgreSQL (v8.0.2) and thus, can be used in conjunction with other RDBMS tools and functionalities. Applications and clients can communicate with the database using standard PostgreSQL JDBC and ODBC drivers. Here is a representational image of a working data warehouse cluster powered by Amazon Redshift: With this basic information in mind, let's look at some simple and easy to follow steps using which you can set up and get started with your Amazon Redshift cluster. Getting started with Amazon Redshift In this section, we will be looking at a few simple steps to create a fully functioning Amazon Redshift cluster that is up and running in a matter of minutes: First up, we have a few prerequisite steps that need to be completed before we begin with the actual set up of the Redshift cluster. From the AWS Management Console, use the Filter option to filter out IAM. Alternatively, you can also launch the IAM dashboard by selecting this URL: https://console.aws.amazon.com/iam/. Once logged in, we need to create and assign a role that will grant our Redshift cluster read-only access to Amazon S3 buckets. 
This role will come in handy later on in this chapter when we load some sample data on an Amazon S3 bucket and use Amazon Redshift's COPY command to copy the data locally into the Redshift cluster for processing. To create the custom role, select the Roles option from the IAM dashboard's navigation pane. On the Roles page, select the Create role option. This will bring up a simple wizard that we will use to create and associate the required permissions with our role. Select the Redshift option from under the AWS Service group section and opt for the Redshift - Customizable option provided under the Select your use case field. Click Next to proceed with the setup. On the Attach permissions policies page, filter and select the AmazonS3ReadOnlyAccess permission. Once done, select Next: Review. On the final Review page, type in a suitable name for the role and select the Create Role option to complete the process. Make a note of the role's ARN, as we will be requiring it in later steps. Here is a snippet of the role policy for your reference:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "s3:Get*",
        "s3:List*"
      ],
      "Resource": "*"
    }
  ]
}

With the role created, we can now move on to creating the Redshift cluster. To do so, log in to the AWS Management Console and use the Filter option to filter out Amazon Redshift. Alternatively, you can also launch the Redshift dashboard by selecting this URL: https://console.aws.amazon.com/redshift/. Select Launch Cluster to get started with the process. Next, on the CLUSTER DETAILS page, fill in the required information pertaining to your cluster as described in the following list:

Cluster identifier: A suitable name for your new Redshift cluster. Note that this name only supports lowercase strings.
Database name: A suitable name for your Redshift database. You can always create more databases within a single Redshift cluster at a later stage. By default, a database named dev is created if no value is provided.
Database port: The port number on which the database will accept connections. By default, the value is set to 5439; however, you can change this value based on your security requirements.
Master user name: Provide a suitable username for accessing the database.
Master user password: Type in a strong password with at least one uppercase character, one lowercase character, and one numeric value. Confirm the password by retyping it in the Confirm password field.

Once completed, hit Continue to move on to the next step of the wizard. On the NODE CONFIGURATION page, select the appropriate Node type for your cluster, as well as the Cluster type, based on your functional requirements. Since this particular cluster setup is for demonstration purposes, I've opted to select dc2.large as the Node type and a Single Node deployment with 1 compute node. Click Continue to move on to the next page once done. It is important to note here that the cluster you are about to launch will be live and not running in a sandbox-like environment. As a result, you will incur the standard Amazon Redshift usage fees for the cluster until you delete it. You can read more about Redshift's pricing at https://aws.amazon.com/redshift/pricing/. On the ADDITIONAL CONFIGURATION page, you can configure add-on settings, such as enabling encryption, selecting the default VPC for your cluster, whether or not the cluster should have direct internet access, as well as any preferences for a particular Availability Zone out of which the cluster should operate.
Most of these settings do not require any changes at the moment and can be left to their default values. The only changes required on this page is associating the previously created IAM role with the cluster. To do so, from the Available Roles drop-down list, select the custom Redshift role that we created in our prerequisite section. Once completed, click on Continue. Review the settings and changes on the Review page and select the Launch Cluster option when completed. The cluster takes a few minutes to spin up depending on whether or not you have opted for a single instance deployment or multiple instances. Once completed, you should see your cluster listed on the Clusters page, as shown in the following screenshot. Ensure that the status of your cluster is shown as healthy under the DB Health column. You can additionally make a note of the cluster's endpoint as well, for accessing it programmatically: With the cluster all set up, the next thing to do is connect to the same. This Amazon Redshift tutorial has been taken from AWS Administration - The Definitive Guide - Second Edition. Read More Amazon S3 Security access and policies How to run Lambda functions on AWS Greengrass AWS Fargate makes Container infrastructure management a piece of cake
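As a footnote to the Redshift walkthrough above: the same cluster parameters can be scripted with the AWS CLI instead of the console. This is only a sketch — the identifier, username, password, and role ARN are placeholders, it assumes the AWS CLI is configured with suitable credentials, and the same pricing warning applies the moment the cluster is live.

# Launch a single-node dc2.large cluster with the S3 read-only role attached
aws redshift create-cluster \
  --cluster-identifier my-redshift-demo \
  --node-type dc2.large \
  --cluster-type single-node \
  --db-name dev \
  --master-username masteruser \
  --master-user-password 'MyStr0ngPassw0rd' \
  --port 5439 \
  --iam-roles arn:aws:iam::123456789012:role/RedshiftS3ReadOnly

# Poll until the cluster status is "available" and note the endpoint
aws redshift describe-clusters --cluster-identifier my-redshift-demo

# Delete the cluster when you are done to stop incurring charges
aws redshift delete-cluster --cluster-identifier my-redshift-demo --skip-final-cluster-snapshot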


How to run and configure an IoT Gateway

Gebin George
26 Apr 2018
9 min read
What is an IoT gateway? An IoT gateway is the protocol or software that's used to connect Internet of Things servers to the cloud. The IoT Gateway can be run as a standalone application, without any modification. There are different encapsulations of the IoT Gateway already prepared. They are built using the same code, but have different properties and are aimed at different operating systems. In today's tutorial, we will explore how to run and configure the IoT gateway. Since all libraries used are based on .NET Standard, they are portable across platforms and operating systems. The encapsulations are then compiled into .NET Core 2 applications. These are the ones being executed. Since both .NET Standard and .NET Core 2 are portable, the gateway can, therefore, be encapsulated for more operating systems than currently supported. Check out this link for a list of operating systems supported by .NET Core 2. Available encapsulations such as installers or app package bundles are listed in the following table. For each one is listed the start project that can be used if you build the project and want to start or debug the application from the development environment: Platform Executable project Windows console Waher.IoTGateway.Console Windows service Waher.IoTGateway.Svc Universal Windows Platform app Waher.IoTGateway.App The IoT Gateway encapsulations can be downloaded from the GitHub project page: All gateways use the library Waher.IoTGateway, which defines the executing environment of the gateway and interacts with all pluggable modules and services. They also use the Waher.IoTGateway.Resources library, which contains resource files common among all encapsulations. The Waher.IoTGateway library is also available as a NuGet: Running the console version The console version of the IoT Gateway (Waher.IoTGateway.Console) is the simplest encapsulation. It can be run from the command line. It requires some basic configuration to run properly. This configuration can be provided manually (see following sections), or by using the installer. The installer asks the user for some basic information and generates the configuration files necessary to execute the application. The console version is the simplest encapsulation, with a minimum of operating system dependencies. It's the easiest to port to other environments. It's also simple to run from the development environment. When run, it outputs any events directly to the terminal window. If sniffers are enabled, the corresponding communication is also output to the terminal window. This provides a simple means to test and debug encrypted communication: Running the gateway as a Windows service The IoT Gateway can also be run as a Windows service (Waher.IoTGateway.Svc). This requires the application be executed on a Windows operating system. The application is a .NET Core 2 console application that has command-line switches allowing it to be registered and executed in the background as a Windows service. Since it supports a command-line interface, it can be used to run the gateway from the console as well. The following table lists recognized command-line switches: Switch Description -? Shows help information. -console Runs the service as a console application. -install Installs the application as a Window Service in the underlying operating system. -displayname Name Sets a custom display name for the Windows service. The default name if omitted is IoT Gateway Service. -description Desc Sets a custom textual description for the Windows service. 
The default description if omitted is Windows Service hosting the Waher IoT Gateway. -immediate If the service should be started immediately. -localsystem Installed service will run using the Local System account. -localservice Installed service will run using the Local Service account (default). -networkservice Installed service will run using the Network Service account. -start Mode Sets the default starting mode of the Windows service. The default is Disabled. Available options are StartOnBoot, StartOnSystemStart, AutoStart, StartOnDemand, and Disabled. -uninstall Uninstalls the application as a Windows service from the operating system.

Running the gateway as an app

It is possible to run the IoT Gateway as a Universal Windows Platform (UWP) app (Waher.IoTGateway.App). This allows it to be run on Windows phones or embedded devices such as the Raspberry Pi running Windows 10 IoT Core (16299 and later). It can also be used as a template for creating custom apps based on the IoT Gateway.

Configuring the IoT Gateway

All application data files are separated from the executable files. Application data files are files that can potentially be changed by the user. Executable files are files potentially changed by installers. For the Console and Service applications, application data files are stored in the IoT Gateway subfolder of the operating system's Program Data folder, for example C:\ProgramData\IoT Gateway. For the UWP app, a link to the program data folder is provided at the top of the window. The application data folder contains files you might have to configure to get it to work as you want.

Configuring the XMPP interface

All IoT Gateways connect to the XMPP network. This connection is used to provide a secure and interoperable interface to your gateway and its underlying devices. You can also administer the gateway through this XMPP connection. The XMPP connection is defined in different manners, depending on the encapsulation. The app lets the user configure the connection via a dialog window; the credentials are then persisted in the object database. The Console and Service versions of the IoT Gateway let the user define the connection using an xmpp.config file in the application data folder. The following is an example configuration file:

<?xml version='1.0' encoding='utf-8'?>
<SimpleXmppConfiguration xmlns='http://waher.se/Schema/SimpleXmppConfiguration.xsd'>
  <Host>waher.se</Host>
  <Port>5222</Port>
  <Account>USERNAME</Account>
  <Password>PASSWORD</Password>
  <ThingRegistry>waher.se</ThingRegistry>
  <Provisioning>waher.se</Provisioning>
  <Events></Events>
  <Sniffer>true</Sniffer>
  <TrustServer>false</TrustServer>
  <AllowCramMD5>true</AllowCramMD5>
  <AllowDigestMD5>true</AllowDigestMD5>
  <AllowPlain>false</AllowPlain>
  <AllowScramSHA1>true</AllowScramSHA1>
  <AllowEncryption>true</AllowEncryption>
  <RequestRosterOnStartup>true</RequestRosterOnStartup>
</SimpleXmppConfiguration>

The following is a short recap of the elements: Element Type Description Host String Host name of the XMPP broker to use. Port 1-65535 Port number to connect to. Account String Name of XMPP account. Password String Password to use (or password hash). ThingRegistry String Thing registry to use, or empty if not. Provisioning String Provisioning server to use, or empty if not. Events String Event log to use, or empty if not. Sniffer Boolean If network communication is to be sniffed or not. TrustServer Boolean If the XMPP broker is to be trusted. AllowCramMD5 Boolean If the CRAM-MD5 authentication mechanism is allowed.
AllowDigestMD5 Boolean If the DIGEST-MD5 authentication mechanism is allowed. AllowPlain Boolean If the PLAIN authentication mechanism is allowed. AllowScramSHA1 Boolean If the SCRAM-SHA-1 authentication mechanism is allowed. AllowEncryption Boolean If encryption is allowed. RequestRosterOnStartup Boolean If the roster is required, it should be requested on start up. Securing the password Instead of writing the password in clear text in the configuration file, it is recommended that the password hash is used instead of the authentication mechanism supports hashes. When the installer sets up the gateway, it authenticates the credentials during startup and writes the hash value in the file instead. When the hash value is used, the mechanism used to create the hash must be written as well. In the following example, new-line characters are added for readability: <Password type="SCRAM-SHA-1"> rAeAYLvAa6QoP8QWyTGRLgKO/J4= </Password> Setting basic properties of the gateway The basic properties of the IoT Gateway are defined in the Gateway.config file in the program data folder. For example: <?xml version="1.0" encoding="utf-8" ?> <GatewayConfiguration xmlns="http://waher.se/Schema/GatewayConfiguration.xsd"> <Domain>example.com</Domain> <Certificate configFileName="Certificate.config"/> <XmppClient configFileName="xmpp.config"/> <DefaultPage>/Index.md</DefaultPage> <Database folder="Data" defaultCollectionName="Default" blockSize="8192" blocksInCache="10000" blobBlockSize="8192" timeoutMs="10000" encrypted="true"/> <Ports> <Port protocol="HTTP">80</Port> <Port protocol="HTTP">8080</Port> <Port protocol="HTTP">8081</Port> <Port protocol="HTTP">8082</Port> <Port protocol="HTTPS">443</Port> <Port protocol="HTTPS">8088</Port> <Port protocol="XMPP.C2S">5222</Port> <Port protocol="XMPP.S2S">5269</Port> <Port protocol="SOCKS5">1080</Port> </Ports> <FileFolders> <FileFolder webFolder="/Folder1" folderPath="ServerPath1"/> <FileFolder webFolder="/Folder2" folderPath="ServerPath2"/> <FileFolder webFolder="/Folder3" folderPath="ServerPath3"/> </FileFolders> </GatewayConfiguration> Element Type Description Domain String The name of the domain, if any, pointing to the machine running the IoT Gateway. Certificate String The configuration file name specifying details about the certificate to use. XmppClient String The configuration file name specifying details about the XMPP connection. DefaultPage String Relative URL to the page shown if no web page is specified when browsing the IoT Gateway. Database String How the local object database is configured. Typically, these settings do not need to be changed. All you need to know is that you can persist and search for your objects using the static Database defined in Waher.Persistence. Ports Port Which port numbers to use for different protocols supported by the IoT Gateway. FileFolders FileFolder Contains definitions of virtual web folders. Providing a certificate Different protocols (such as HTTPS) require a certificate to allow callers to validate the domain name claim. Such a certificate can be defined by providing a Certificate.config file in the application data folder and then restarting the gateway. If providing such a file, different from the default file, it will be loaded and processed, and then deleted. The information, together with the certificate, will be moved to the relative safety of the object database. 
For example:

<?xml version="1.0" encoding="utf-8" ?>
<CertificateConfiguration xmlns="http://waher.se/Schema/CertificateConfiguration.xsd">
  <FileName>certificate.pfx</FileName>
  <Password>testexamplecom</Password>
</CertificateConfiguration>

FileName (String): Name of certificate file to import.
Password (String): Password needed to access private part of certificate.

This tutorial was taken from Mastering Internet of Things.

Read More: IoT Forensics: Security in an always connected world where things talk; How IoT is going to change tech teams; 5 reasons to choose AWS IoT Core for your next IoT project
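As a footnote to the gateway tutorial above, the Windows service switches documented in the table can be combined on one command line. This is only a sketch: the switches are the ones listed earlier, but the executable name and whether you launch it directly or via the dotnet host depend on which encapsulation you installed and where it lives on disk. Run it from an elevated command prompt in the installation folder.

REM Install the gateway as a Windows service that starts automatically
Waher.IoTGateway.Svc.exe -displayname "IoT Gateway Service" -localservice -start AutoStart -immediate -install

REM Or run the same executable interactively in the foreground for troubleshooting
Waher.IoTGateway.Svc.exe -console

REM Uninstall the service again
Waher.IoTGateway.Svc.exe -uninstall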


Setting up Logistic Regression model using TensorFlow

Packt Editorial Staff
25 Apr 2018
8 min read
TensorFlow is another open source library developed by the Google Brain Team to build numerical computation models using data flow graphs. The core of TensorFlow was developed in C++ with the wrapper in Python. The tensorflow package in R gives you access to the TensorFlow API composed of Python modules to execute computation models. TensorFlow supports both CPU- and GPU-based computations. In this article, we will cover the application of TensorFlow in setting up a logistic regression model. The example will use a similar dataset to that used in the H2O model setup. The tensorflow package in R calls the Python tensorflow API for execution, which is essential to install the tensorflow package in both R and Python to make R work. The following are the dependencies for tensorflow: Python 2.7 / 3.x R (>3.2) devtools package in R for installing TensorFlow from GitHub TensorFlow in Python pip Getting ready The code for this section is created on Linux but can be run on any operating system. To start modeling, load the tensorflow package in the environment. R loads the default TensorFlow environment variable and also the NumPy library from Python in the np variable: library("tensorflow") # Load TensorFlow np <- import("numpy") # Load numpy library How to do it... The data is imported using a standard function from R, as shown in the following code. The data is imported using the csv file and transformed into the matrix format followed by selecting the features used to model as defined in xFeatures and yFeatures. The next step in TensorFlow is to set up a graph to run optimization: # Loading input and test data xFeatures = c("Temperature", "Humidity", "Light", "CO2", "HumidityRatio") yFeatures = "Occupancy" occupancy_train <-as.matrix(read.csv("datatraining.txt",stringsAsFactors = T)) occupancy_test <- as.matrix(read.csv("datatest.txt",stringsAsFactors = T)) # subset features for modeling and transform to numeric values occupancy_train<-apply(occupancy_train[, c(xFeatures, yFeatures)], 2, FUN=as.numeric) occupancy_test<-apply(occupancy_test[, c(xFeatures, yFeatures)], 2, FUN=as.numeric) # Data dimensions nFeatures<-length(xFeatures) nRow<-nrow(occupancy_train) Before setting up the graph, let's reset the graph using the following command: # Reset the graph tf$reset_default_graph() Additionally, let's start an interactive session as it will allow us to execute variables without referring to the session-to-session object: # Starting session as interactive session sess<-tf$InteractiveSession() Define the logistic regression model in TensorFlow: # Setting-up Logistic regression graph x <- tf$constant(unlist(occupancy_train[, xFeatures]), shape=c(nRow, nFeatures), dtype=np$float32) # W <- tf$Variable(tf$random_uniform(shape(nFeatures, 1L))) b <- tf$Variable(tf$zeros(shape(1L))) y <- tf$matmul(x, W) + b The input feature x is defined as a constant as it will be an input to the system. The weight W and bias b are defined as variables that will be optimized during the optimization process. The y is set up as a symbolic representation between x, W, and b. The weight W is set up to initialize random uniform distribution and b is assigned the value zero. 
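For reference, the graph built above is plain logistic regression: the matmul produces logits, and the sigmoid (applied later by the loss and the prediction code) turns them into probabilities. In notation, with n training rows, feature matrix x, weights W, and bias b:

\hat{y} = \sigma(xW + b), \qquad \sigma(z) = \frac{1}{1 + e^{-z}}

The cost minimized in the next step, tf$nn$sigmoid_cross_entropy_with_logits averaged by tf$reduce_mean, is the mean binary cross-entropy:

L(W, b) = -\frac{1}{n} \sum_{i=1}^{n} \left[ y_i \log \hat{y}_i + (1 - y_i) \log (1 - \hat{y}_i) \right]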
The next step is to set up the cost function for logistic regression: # Setting-up cost function and optimizer y_ <- tf$constant(unlist(occupancy_train[, yFeatures]), dtype="float32", shape=c(nRow, 1L)) cross_entropy<-tf$reduce_mean(tf$nn$sigmoid_cross_entropy_with_logits(labels=y_, logits=y, name="cross_entropy")) optimizer <- tf$train$GradientDescentOptimizer(0.15)$minimize(cross_entropy) # Start a session init <- tf$global_variables_initializer() sess$run(init) Execute the gradient descent algorithm for the optimization of weights using cross entropy as the loss function: # Running optimization for (step in 1:5000) {   sess$run(optimizer)   if (step %% 20== 0)     cat(step, "-", sess$run(W), sess$run(b), "==>", sess$run(cross_entropy), "n") } How it works... The performance of the model can be evaluated using AUC: # Performance on Train library(pROC) ypred <- sess$run(tf$nn$sigmoid(tf$matmul(x, W) + b)) roc_obj <- roc(occupancy_train[, yFeatures], as.numeric(ypred)) # Performance on test nRowt<-nrow(occupancy_test) xt <- tf$constant(unlist(occupancy_test[, xFeatures]), shape=c(nRowt, nFeatures), dtype=np$float32) ypredt <- sess$run(tf$nn$sigmoid(tf$matmul(xt, W) + b)) roc_objt <- roc(occupancy_test[, yFeatures], as.numeric(ypredt)). AUC can be visualized using the plot.auc function from the pROC package, as shown in the screenshot following this command. The performance for training and testing (hold-out) is very similar. plot.roc(roc_obj, col = "green", lty=2, lwd=2) plot.roc(roc_objt, add=T, col="red", lty=4, lwd=2) Performance of logistic regression using TensorFlow Visualizing TensorFlow graphs TensorFlow graphs can be visualized using TensorBoard. It is a service that utilizes TensorFlow event files to visualize TensorFlow models as graphs. Graph model visualization in TensorBoard is also used to debug TensorFlow models. Getting ready TensorBoard can be started using the following command in the terminal: $ tensorboard --logdir home/log --port 6006 The following are the major parameters for TensorBoard: --logdir : To map to the directory to load TensorFlow events --debug: To increase log verbosity --host: To define the host to listen to its localhost (0.0.1) by default --port: To define the port to which TensorBoard will serve The preceding command will launch the TensorFlow service on localhost at port 6006, as shown in the following screenshot: TensorBoard The tabs on the TensorBoard capture relevant data generated during graph execution. How to do it... The section covers how to visualize TensorFlow models and output in TernsorBoard. To visualize summaries and graphs, data from TensorFlow can be exported using the FileWriter command from the summary module. A default session graph can be added using the following command: # Create Writer Obj for log log_writer = tf$summary$FileWriter('c:/log', sess$graph) The graph for logistic regression developed using the preceding code is shown in the following screenshot: Visualization of the logistic regression graph in TensorBoard Details about symbol descriptions on TensorBoard can be found at https://www.tensorflow.org/get_started/graph_viz. Similarly, other variable summaries can be added to the TensorBoard using correct summaries, as shown in the following code: # Adding histogram summary to weight and bias variable w_hist = tf$histogram_summary("weights", W) b_hist = tf$histogram_summary("biases", b) Create a cross entropy evaluation for test. 
An example script to generate the cross-entropy cost function for test and train is shown in the following code:

# Set-up cross entropy for test
nRowt <- nrow(occupancy_test)
xt <- tf$constant(unlist(occupancy_test[, xFeatures]), shape=c(nRowt, nFeatures), dtype=np$float32)
ypredt <- tf$nn$sigmoid(tf$matmul(xt, W) + b)
yt_ <- tf$constant(unlist(occupancy_test[, yFeatures]), dtype="float32", shape=c(nRowt, 1L))
cross_entropy_tst <- tf$reduce_mean(tf$nn$sigmoid_cross_entropy_with_logits(labels=yt_, logits=ypredt, name="cross_entropy_tst"))

Add summary variables to be collected:

# Add summary ops to collect data
w_hist = tf$summary$histogram("weights", W)
b_hist = tf$summary$histogram("biases", b)
crossEntropySummary <- tf$summary$scalar("costFunction", cross_entropy)
crossEntropyTstSummary <- tf$summary$scalar("costFunction_test", cross_entropy_tst)

Open the writing object, log_writer. It writes the default graph to the location c:/log:

# Create Writer Obj for log
log_writer = tf$summary$FileWriter('c:/log', sess$graph)

Run the optimization and collect the summaries:

for (step in 1:2500) {
  sess$run(optimizer)
  # Evaluate performance on training and test data every 50 iterations
  if (step %% 50 == 0){
    ### Performance on Train
    ypred <- sess$run(tf$nn$sigmoid(tf$matmul(x, W) + b))
    roc_obj <- roc(occupancy_train[, yFeatures], as.numeric(ypred))
    ### Performance on Test
    ypredt <- sess$run(tf$nn$sigmoid(tf$matmul(xt, W) + b))
    roc_objt <- roc(occupancy_test[, yFeatures], as.numeric(ypredt))
    cat("train AUC: ", auc(roc_obj), " Test AUC: ", auc(roc_objt), "\n")
    # Save summary of Bias and weights
    log_writer$add_summary(sess$run(b_hist), global_step=step)
    log_writer$add_summary(sess$run(w_hist), global_step=step)
    log_writer$add_summary(sess$run(crossEntropySummary), global_step=step)
    log_writer$add_summary(sess$run(crossEntropyTstSummary), global_step=step)
  }
}

Collect all the summaries into a single tensor using the merge_all command from the summary module:

summary = tf$summary$merge_all()

Write the summaries to the log file using the log_writer object:

log_writer = tf$summary$FileWriter('c:/log', sess$graph)
summary_str = sess$run(summary)
log_writer$add_summary(summary_str, step)
log_writer$close()

We have learned how to perform logistic regression using TensorFlow, and we have covered the application of TensorFlow in setting up a logistic regression model.

This article is a book excerpt taken from R Deep Learning Cookbook, co-authored by PKS Prakash & Achyutuni Sri Krishna Rao. This book contains powerful and independent recipes to build deep learning models in different application areas using R libraries.

Read More: Getting started with Linear and logistic regression; Healthcare Analytics: Logistic Regression to Reduce Patient Readmissions; Using Logistic regression to predict market direction in algorithmic trading
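As a footnote to the recipe above: beyond AUC, you can turn the predicted probabilities into class labels and look at raw accuracy and a confusion matrix. A short sketch in R, assuming the ypred, ypredt, occupancy_train, and occupancy_test objects created above are still in the session:

# Threshold the sigmoid outputs at 0.5 to get hard class predictions
train_pred <- as.numeric(ypred > 0.5)
test_pred  <- as.numeric(ypredt > 0.5)

# Overall accuracy on train and test
cat("Train accuracy:", mean(train_pred == occupancy_train[, yFeatures]), "\n")
cat("Test accuracy:",  mean(test_pred  == occupancy_test[, yFeatures]),  "\n")

# Confusion matrix for the test (hold-out) set
table(Predicted = test_pred, Actual = occupancy_test[, yFeatures])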


Building a real-time dashboard with Meteor and Vue.js

Kunal Chaudhari
25 Apr 2018
14 min read
In this article, we will use Vue.js with an entirely different stack--Meteor! We will discover this full-stack JavaScript framework and build a real-time dashboard with Meteor to monitor the production of some products. We will cover the following topics: Installing Meteor and setting up a project Storing data into a Meteor collection with a Meteor method Subscribing to the collection and using the data in our Vue components The app will have a main page with some indicators, such as: It will also have another page with buttons to generate fake measures since we won't have real sensors available. Setting up the project In this first part, we will cover Meteor and get a simple app up and running on this platform. What is Meteor? Meteor is a full-stack JavaScript framework for building web applications. The mains elements of the Meteor stack are as follows: Web client (can use any frontend library, such as React or Vue); it has a client-side database called Minimongo Server based on nodejs; it supports the modern ES2015+ features, including the import/export syntax Real-time database on the server using MongoDB Communication between clients and the server is abstracted; the client-side and server-side databases can be easily synchronized in real-time Optional hybrid mobile app (Android and iOS), built in one command Integrated developer tools, such as a powerful command-line utility and an easy- to-use build tool Meteor-specific packages (but you can also use npm packages) As you can see, JavaScript is used everywhere. Meteor also encourages you to share code between the client and the server. Since Meteor manages the entire stack, it offers very powerful systems that are easy to use. For example, the entire stack is fully reactive and real-time--if a client sends an update to the server, all the other clients will receive the new data and their UI will automatically be up to date. Meteor has its own build system called "IsoBuild" and doesn't use Webpack. It focuses on ease of use (no configuration), but is, as a result, also less flexible. Installing Meteor If you don't have Meteor on your system, you need to open the Installation Guide on the official Meteor website. Follow the instructions there for your OS to install Meteor. When you are done, you can check whether Meteor was correctly installed with the following command: meteor --version The current version of Meteor should be displayed. Creating the project Now that Meteor is installed, let's set up a new project: Let's create our first Meteor project with the meteor create command: meteor create --bare <folder> cd <folder> The --bare argument tells Meteor we want an empty project. By default, Meteor will generate some boilerplate files we don't need, so this keeps us from having to delete them. Then, we need two Meteor-specific packages--one for compiling the Vue components, and one for compiling Stylus inside those components. Install them with the meteor add command: meteor add akryum:vue-component akryum:vue-stylus We will also install the vue and vue-router package from npm: meteor npm i -S vue vue-router Note that we use the meteor npm command instead of just npm. This is to have the same environment as Meteor (nodejs and npm versions). To start our Meteor app in development mode, just run the meteor command: Meteor Meteor should start an HTTP proxy, a MongoDB, and the nodejs server: It also shows the URL where the app is available; however, if you open it right now, it will be blank. 
Our first Vue Meteor app In this section, we will display a simple Vue component in our app: Create a new index.html file inside the project directory and tell Meteor we want div in the page body with the app id: <head> <title>Production Dashboard</title> </head> <body> <div id="app"></div>      </body> This is not a real HTML file. It is a special format where we can inject additional elements to the head or body section of the final HTML page. Here, Meteor will add a title into the head section and the <div> into the body section. Create a new client folder, new components subfolder, and a new App.vue component with a simple template: <!-- client/components/App.vue --> <template> <div id="#app"> <h1>Meteor</h1> </div>   </template> Download (https://github.com/Akryum/packt-vue-project-guide/tree/ master/chapter8-full/client) this stylus file in the client folder and add it to the main App.vue component: <style lang="stylus" src="../style.styl" /> Create a main.js file in the client folder that starts the Vue application inside the Meteor.startup hook: import { Meteor } from 'meteor/meteor' import Vue from 'vue' import App from './components/App.vue' Meteor.startup(() => { new Vue({ el: '#app', ...App, }) }) In a Meteor app, it is recommended that you create the Vue app inside the Meteor.startup hook to ensure that all the Meteor systems are ready before starting the frontend. This code will only be run on the client because it is located in a client folder. You should now have a simple app displayed in your browser. You can also open the Vue devtools and check whether you have the App component present on the page. Routing Let's add some routing to the app; we will have two pages--the dashboard with indicators and a page with buttons to generate fake data: In the client/components folder, create two new components--ProductionGenerator.vue and ProductionDashboard.vue. Next to the main.js file, create the router in a router.js file: import Vue from 'vue' import VueRouter from 'vue-router' import ProductionDashboard from './components/ProductionDashboard.vue' import ProductionGenerator from './components/ProductionGenerator.vue' Vue.use(VueRouter) const routes = [ { path: '/', name: 'dashboard', component: ProductionDashboard }, { path: '/generate', name: 'generate', component: ProductionGenerator }, ] const router = new VueRouter({ mode: 'history', routes, }) export default router    Then, import the router in the main.js file and inject it into the app.    In the App.vue main component, add the navigation menu and the router view: <nav> <router-link :to="{ name: 'dashboard' }" exact>Dashboard </router-link> <router-link :to="{ name: 'generate' }">Measure</router-link> </nav> <router-view /> The basic structure of our app is now done: Production measures The first page we will make is the Measures page, where we will have two buttons: The first one will generate a fake production measure with current date and random value The second one will also generate a measure, but with the error property set to true All these measures will be stored in a collection called "Measures". Meteor collections integration A Meteor collection is a reactive list of objects, similar to a MongoDB collection (in fact, it uses MongoDB under the hood). 
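Before wiring up collections, one quick back-reference to the routing steps above: the instructions say to import the router in the main.js file and inject it into the app, but don't show the resulting file. A minimal sketch of the updated client/main.js could look like this (the only changes from the earlier version are the router import and the router option):

import { Meteor } from 'meteor/meteor'
import Vue from 'vue'
import App from './components/App.vue'
import router from './router'

Meteor.startup(() => {
  new Vue({
    el: '#app',
    router,
    ...App,
  })
})

With the router passed to the root instance, the <router-view /> element in App.vue renders whichever page component matches the current URL.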
We need to use a Vue plugin to integrate the Meteor collections into our Vue app in order to update it automatically: Add the vue-meteor-tracker npm package: meteor npm i -S vue-meteor-tracker    Then, install the library into Vue: import VueMeteorTracker from 'vue-meteor-tracker' Vue.use(VueMeteorTracker)    Restart Meteor with the meteor command. The app is now aware of the Meteor collection and we can use them in our components, as we will do in a moment. Setting up data The next step is setting up the Meteor collection where we will store our measures data Adding a collection We will store our measures into a Measures Meteor collection. Create a new lib folder in the project directory. All the code in this folder will be executed first, both on the client and the server. Create a collections.js file, where we will declare our Measures collection: import { Mongo } from 'meteor/mongo' export const Measures = new Mongo.Collection('measures') Adding a Meteor method A Meteor method is a special function that will be called both on the client and the server. This is very useful for updating collection data and will improve the perceived speed of the app--the client will execute on minimongo without waiting for the server to receive and process it. This technique is called "Optimistic Update" and is very effective when the network quality is poor.  Next to the collections.js file in the lib folder, create a new methods.js file. Then, add a measure.add method that inserts a new measure into the Measures collection: import { Meteor } from 'meteor/meteor' import { Measures } from './collections' Meteor.methods({ 'measure.add' (measure) { Measures.insert({ ...measure, date: new Date(), }) }, }) We can now call this method with the Meteor.call function: Meteor.call('measure.add', someMeasure) The method will be run on both the client (using the client-side database called minimongo) and on the server. That way, the update will be instant for the client. Simulating measures Without further delay, let's build the simple component that will call this measure.add Meteor method: Add two buttons in the template of ProductionGenerator.vue: <template> <div class="production-generator"> <h1>Measure production</h1> <section class="actions"> <button @click="generateMeasure(false)">Generate Measure</button> <button @click="generateMeasure(true)">Generate Error</button> </section> </div> </template> Then, in the component script, create the generateMeasure method that generates some dummy data and then call the measure.add Meteor method: <script> import { Meteor } from 'meteor/meteor' export default { methods: { generateMeasure (error) { const value = Math.round(Math.random() * 100) const measure = { value, error, } Meteor.call('measure.add', measure) }, }, } </script> The component should look like this: If you click on the buttons, nothing visible should happen. Inspecting the data There is an easy way to check whether our code works and to verify that you can add items in the Measures collection. We can connect to the MongoDB database in a single command. In another terminal, run the following command to connect to the app's database: meteor mongo Then, enter this MongoDB query to fetch the documents of the measures collection (the argument used when creating the Measures Meteor collection): db.measures.find({}) If you clicked on the buttons, a list of measure documents should be displayed This means that our Meteor method worked and objects were inserted in our MongoDB database. 
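One optional refinement, not shown in the original example: Meteor.call accepts a callback whose first argument is any error thrown on the server, which is useful for surfacing failed inserts instead of ignoring them. A sketch of generateMeasure with this callback might look like the following (err here is the Meteor callback error, not the boolean error flag stored on the measure itself):

generateMeasure (error) {
  const value = Math.round(Math.random() * 100)
  const measure = { value, error }
  Meteor.call('measure.add', measure, (err) => {
    if (err) {
      // The insert was rejected on the server; report it instead of failing silently
      console.error('measure.add failed:', err.reason || err.message)
    }
  })
},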
Dashboard and reporting Now that our first page is done, we can continue with the real-time dashboard. Progress bars library To display some pretty indicators, let's install another Vue library that allows drawing progress bars along SVG paths; that way, we can have semi-circular bars: Add the vue-progress-path npm package to the project: meteor npm i -S vue-progress-path We need to tell the Vue compiler for Meteor not to process the files in node_modules where the package is installed. Create a new .vueignore file in the project root directory. This file works like a .gitignore: each line is a rule to ignore some paths. If it ends with a slash /, it will ignore only corresponding folders. So, the content of .vueignore should be as follows: node_modules/ Finally, install the vue-progress-path plugin in the client/main.js file: import 'vue-progress-path/dist/vue-progress-path.css' import VueProgress from 'vue-progress-path' Vue.use(VueProgress, { defaultShape: 'semicircle', }) Meteor publication To synchronize data, the client must subscribe to a publication declared on the server. A Meteor publication is a function that returns a Meteor collection query. It can take arguments to filter the data that will be synchronized. For our app, we will only need a simple measures publication that sends all the documents of the Measures collection: This code should only be run on the server. So, create a new server in the project folder and a new publications.js file inside that folder: import { Meteor } from 'meteor/meteor' import { Measures } from '../lib/collections' Meteor.publish('measures', function () { return Measures.find({}) }) This code will only run on the server because it is located in a folder called server. Creating the Dashboard component We are ready to build our ProductionDashboard component. Thanks to the vue- meteor-tracker we installed earlier, we have a new component definition option-- meteor. This is an object that describes the publications that need to be subscribed to and the collection data that needs to be retrieved for that component.    Add the following script section with the meteor definition option: <script> export default { meteor: { // Subscriptions and Collections queries here }, } </script> Inside the meteor option, subscribe to the measures publication with the $subscribe object: meteor: { $subscribe: { 'measures': [], }, }, Retrieve the measures with a query on the Measures Meteor collection inside the meteor option: meteor: { // ... measures () { return Measures.find({}, { sort: { date: -1 }, }) }, }, The second parameter of the find method is an options object very similar to the MongoDB JavaScript API. Here, we are sorting the documents by their date in descending order, thanks to the sort property of the options object. Finally, create the measures data property and initialize it to an empty array. The script of the component should now look like this: <script> import { Measures } from '../../lib/collections' export default { data () { return { measures: [], } }, meteor: { $subscribe: { 'measures': [], }, measures () { return Measures.find({}, { sort: { date: -1 }, }) }, }, } </script> In the browser devtools, you can now check whether the component has retrieved the items from the collection. Indicators We will create a separate component for the dashboard indicators, as follows: In the components folder, create a new ProductionIndicator.vue component. 
Declare a template that displays a progress bar, a title, and additional info text: <template> <div class="production-indicator"> <loading-progress :progress="value" /> <div class="title">{{ title }}</div> <div class="info">{{ info }}</div> </div> </template> Add the value, title, and info props: <script> export default { props: { value: { type: Number, required: true, }, title: String, info: [String, Number], }, } </script> Back in our ProductionDashboard component, let's compute the average of the values and the rate of errors: computed: { length () { return this.measures.length }, average () { if (!this.length) return 0 let total = this.measures.reduce( (total, measure) => total += measure.value, 0 ) return total / this.length }, errorRate () { if (!this.length) return 0 let total = this.measures.reduce( (total, measure) => total += measure.error ? 1 : 0, 0 ) return total / this.length }, }, 5. Add two indicators in the templates - one for the average value and one for the error rate: <template> <div class="production-dashboard"> <h1>Production Dashboard</h1> <section class="indicators"> <ProductionIndicator :value="average / 100" title="Average" :info="Math.round(average)" /> <ProductionIndicator class="danger" :value="errorRate" title="Errors" :info="`${Math.round(errorRate * 100)}%`" /> </section> </div> </template> The indicators should look like this: Listing the measures Finally, we will display a list of the measures below the indicators:  Add a simple list of <div> elements for each measure, displaying the date if it has an error and the value: <section class="list"> <div v-for="item of measures" :key="item._id" > <div class="date">{{ item.date.toLocaleString() }}</div> <div class="error">{{ item.error ? 'Error' : '' }}</div> <div class="value">{{ item.value }}</div> </div> </section> The app should now look as follows, with a navigation toolbar, two indicators, and the measures list: If you open the app in another window and put your windows side by side, you can see the full-stack reactivity of Meteor in action. Open the dashboard in one window and the generator page in the other window. Then, add fake measures and watch the data update on the other window in real time. If you want to learn more about Meteor, check out the official website and the Vue integration repository. To summarize, we created a project using Meteor. We integrated Vue into the app and set up a Meteor reactive collection. Using a Meteor method, we inserted documents into the collection and displayed in real-time the data in a dashboard component. You read an excerpt from a book written by Guillaume Chau, titled Vue.js 2 Web Development Projects. This book will help you build exciting real world web projects from scratch and become proficient with Vue.js Web Development. Read More Building your first Vue.js 2 Web application Why has Vue.js become so popular? Installing and Using Vue.js    
Getting started with building an ARCore application for Android

Sugandha Lahoti
24 Apr 2018
9 min read
Google developed ARCore to be accessible from multiple development platforms (Android [Java], Web [JavaScript], Unreal [C++], and Unity [C#]), thus giving developers plenty of flexibility and options to build applications on various platforms. While each platform has its strengths and weaknesses, all the platforms essentially extend from the native Android SDK that was originally built as Tango. This means that regardless of your choice of platform, you will need to install and be somewhat comfortable working with the Android development tools. In this article, we will focus on setting up the Android development tools and building an ARCore application for Android. The following is a summary of the major topics we will cover in this post: Installing Android Studio Installing ARCore Build and deploy Exploring the code Installing Android Studio Android Studio is a development environment for coding and deploying Android applications. As such, it contains the core set of tools we will need for building and deploying our applications to an Android device. After all, ARCore needs to be installed to a physical device in order to test. Follow the given instructions to install Android Studio for your development environment: Open a browser on your development computer to https://developer.android.com/studio. Click on the green DOWNLOAD ANDROID STUDIO button. Agree to the Terms and Conditions and follow the instructions to download. After the file has finished downloading, run the installer for your system. Follow the instructions on the installation dialog to proceed. If you are installing on Windows, ensure that you set a memorable installation path that you can easily find later, as shown in the following example: Click through the remaining dialogs to complete the installation. When the installation is complete, you will have the option to launch the program. Ensure that the option to launch Android Studio is selected and click on Finish. Android Studio comes embedded with OpenJDK. This means we can omit the steps to installing Java, on Windows at least. If you are doing any serious Android development, again on Windows, then you should go through the steps on your own to install the full Java JDK 1.7 and/or 1.8, especially if you plan to work with older versions of Android. On Windows, we will install everything to C:Android; that way, we can have all the Android tools in one place. If you are using another OS, use a similar well-known path. Now that we have Android Studio installed, we are not quite done. We still need to install the SDK tools that will be essential for building and deployment. Follow the instructions in the next exercise to complete the installation: If you have not installed the Android SDK before, you will be prompted to install the SDK when Android Studio first launches, as shown: Select the SDK components and ensure that you set the installation path to a well-known location, again, as shown in the preceding screenshot. Leave the Welcome to Android Studio dialog open for now. We will come back to it in a later exercise. That completes the installation of Android Studio. In the next section, we will get into installing ARCore. Installing ARCore Of course, in order to work with or build any ARCore applications, we will need to install the SDK for our chosen platform. Follow the given instructions to install the ARCore SDK: We will use Git to pull down the code we need directly from the source. 
You can learn more about Git and how to install it on your platform at https://git-scm.com/book/en/v2/Getting-Started-Installing-Git or use Google to search: getting started installing Git. Ensure that when you install on Windows, you select the defaults and let the installer set the PATH environment variables. Open Command Prompt or Windows shell and navigate to the Android (C:\Android on Windows) installation folder. Enter the following command: git clone https://github.com/google-ar/arcore-android-sdk.git This will download and install the ARCore SDK into a new folder called arcore-android-sdk, as illustrated in the following screenshot: Ensure that you leave the command window open. We will be using it again later. Installing the ARCore service on a device Now, with the ARCore SDK installed on our development environment, we can proceed with installing the ARCore service on our test device. Use the following steps to install the ARCore service on your device: NOTE: this step is only required when working with the Preview SDK of ARCore. When Google ARCore 1.0 is released you will not need to perform this step. Grab your mobile device and enable the developer and debugging options by doing the following: Opening the Settings app Selecting the System Scrolling to the bottom and selecting About phone Scrolling again to the bottom and tapping on Build number seven times Going back to the previous screen and selecting Developer options near the bottom Selecting USB debugging Download the ARCore service APK from https://github.com/google-ar/arcore-android-sdk/releases/download/sdk-preview/arcore-preview.apk to the Android installation folder (C:\Android). Also note that this URL will likely change in the future. Connect your mobile device with a USB cable. If this is your first time connecting, you may have to wait several minutes for drivers to install. You will then be prompted to switch on the device to allow the connection. Select Allow to enable the connection. Go back to your Command Prompt or Windows shell and run the following command: adb install -r -d arcore-preview.apk //ON WINDOWS USE: sdk\platform-tools\adb install -r -d arcore-preview.apk After the command is run, you will see the word Success. This completes the installation of ARCore for the Android platform. In the next section, we will build our first sample ARCore application. Build and deploy Now that we have all the tedious installation stuff out of the way, it's time to build and deploy a sample app to your Android device. Let's begin by jumping back to Android Studio and following the given steps: Select the Open an existing Android Studio project option from the Welcome to Android Studio window. If you accidentally closed Android Studio, just launch it again. Navigate and select the Android\arcore-android-sdk\samples\java_arcore_hello_ar folder, as follows: Click on OK. If this is your first time running this project, you will encounter some dependency errors, such as the one here: In order to resolve the errors, just click on the link at the bottom of the error message. This will open a dialog, and you will be prompted to accept and then download the required dependencies. Keep clicking on the links until you see no more errors. Ensure that your mobile device is connected and then, from the menu, choose Run - Run. This should start the app on your device, but you may still need to resolve some dependency errors. Just remember to click on the links to resolve the errors. This will open a small dialog. Select the app option. 
If you do not see the app option, select Build - Make Project from the menu. Again, resolve any dependency errors by clicking on the links. "Your patience will be rewarded." - Alton Brown Select your device from the next dialog and click on OK. This will launch the app on your device. Ensure that you allow the app to access the device's camera. The following is a screenshot showing the app in action: Great, we have built and deployed our first Android ARCore app together. In the next section, we will take a quick look at the Java source code. Exploring the code Now, let's take a closer look at the main pieces of the app by digging into the source code. Follow the given steps to open the app's code in Android Studio: From the Project window, find and double-click on the HelloArActivity, as shown: After the source is loaded, scroll through the code to the following section: private void showLoadingMessage() { runOnUiThread(new Runnable() { @Override public void run() { mLoadingMessageSnackbar = Snackbar.make( HelloArActivity.this.findViewById(android.R.id.content), "Searching for surfaces...", Snackbar.LENGTH_INDEFINITE); mLoadingMessageSnackbar.getView().setBackgroundColor(0xbf323232); mLoadingMessageSnackbar.show(); } }); } Note the highlighted text—"Searching for surfaces..". Select this text and change it to "Searching for ARCore surfaces..". The showLoadingMessage function is a helper for displaying the loading message. Internally, this function calls runOnUIThread, which in turn creates a new instance of Runnable and then adds an internal run function. We do this to avoid thread blocking on the UI, a major no-no. Inside the run function is where the messaging is set and the message Snackbar is displayed. From the menu, select Run - Run 'app' to start the app on your device. Of course, ensure that your device is connected by USB. Run the app on your device and confirm that the message has changed. Great, now we have a working app with some of our own code. This certainly isn't a leap, but it's helpful to walk before we run. In this article, we started exploring ARCore by building and deploying an AR app for the Android platform. We did this by first installing Android Studio. Then, we installed the ARCore SDK and ARCore service onto our test mobile device. Next, we loaded up the sample ARCore app and patiently installed the various required build and deploy dependencies. After a successful build, we deployed the app to our device and tested. Finally, we tested making a minor code change and then deployed another version of the app. You read an excerpt from the book, Learn ARCore - Fundamentals of Google ARCore, written by Micheal Lanham. This book will help you will create next-generation Augmented Reality and Mixed Reality apps with the latest version of Google ARCore. Read More Google ARCore is pushing immersive computing forward Types of Augmented Reality targets
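As a small follow-on exercise (this sketch is ours and is not copied from the SDK sample): the showLoadingMessage helper edited above has a natural counterpart that dismisses the Snackbar once ARCore has found surfaces. Using only standard Android APIs, such a helper would look roughly like this inside HelloArActivity:

private void hideLoadingMessage() {
    runOnUiThread(new Runnable() {
        @Override
        public void run() {
            // Dismiss the "Searching for ARCore surfaces..." message if it is showing
            if (mLoadingMessageSnackbar != null) {
                mLoadingMessageSnackbar.dismiss();
                mLoadingMessageSnackbar = null;
            }
        }
    });
}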
Building a Web Service with Laravel 5

Kunal Chaudhari
24 Apr 2018
15 min read
A web service is an application that runs on a server and allows a client (such as a browser) to remotely write/retrieve data to/from the server over HTTP. In this article we will be covering the following set of topics: Using Laravel to create a web service Writing database migrations and seed files Creating API endpoints to make data publicly accessible Serving images from Laravel The interface of a web service will be one or more API endpoints, sometimes protected with authentication, that will return data in an XML or JSON payload: Web services are a speciality of Laravel, so it won't be hard to create one for Vuebnb. We'll use routes for our API endpoints and represent the listings with Eloquent models that Laravel will seamlessly synchronize with the database: Laravel also has inbuilt features to add API architectures such as REST, though we won't need this for our simple use case. Mock data The mock listing data is in the file database/data.json. This file includes a JSON-encoded array of 30 objects, with each object representing a different listing. Having built the listing page prototype, you'll no doubt recognize a lot of the same properties on these objects, including the title, address, and description. database/data.json: [ { "id": 1, "title": "Central Downtown Apartment with Amenities", "address": "...", "about": "...", "amenity_wifi": true, "amenity_pets_allowed": true, "amenity_tv": true, "amenity_kitchen": true, "amenity_breakfast": true, "amenity_laptop": true, "price_per_night": "$89", "price_extra_people": "No charge", "price_weekly_discount": "18%", "price_monthly_discount": "50%" }, { "id": 2, ... }, ... ] Each mock listing includes several images of the room as well. Images aren't really part of a web service, but they will be stored in a public folder in our app to be served as needed. Database Our web service will require a database table for storing the mock listing data. To set this up we'll need to create a schema and migration. We'll then create a seeder that will load and parse our mock data file and insert it into the database, ready for use in the app. Migration A migration is a special class that contains a set of actions to run against the database, such as creating or modifying a database table. Migrations ensure your database gets set up identically every time you create a new instance of your app, for example, installing in production or on a teammate's machine. To create a new migration, use the make:migration Artisan CLI command. The argument of the command should be a snake-cased description of what the migration will do: $ php artisan make:migration create_listings_table You'll now see your new migration in the database/migrations directory. You'll notice the filename has a prefixed timestamp, such as 2017_06_20_133317_create_listings_table.php. The timestamp allows Laravel to determine the proper order of the migrations, in case it needs to run more than one at a time. 
Your new migration declares a class that extends Migration. It overrides two methods: up, which is used to add new tables, columns, or indexes to your database; and down, which is used to delete them. We'll implement these methods shortly. 2017_06_20_133317_create_listings_table.php: <?php use Illuminate\Support\Facades\Schema; use Illuminate\Database\Schema\Blueprint; use Illuminate\Database\Migrations\Migration; class CreateListingsTable extends Migration { public function up() { // } public function down() { // } } Schema A schema is a blueprint for the structure of a database. For a relational database such as MySQL, the schema will organize data into tables and columns. In Laravel, schemas are declared by using the Schema facade's create method. We'll now make a schema for a table to hold Vuebnb listings. The columns of the table will match the structure of our mock listing data. Note that we set a default false value for the amenities and allow the prices to have a NULL value. All other columns require a value. The schema will go inside our migration's up method. We'll also fill out the down with a call to Schema::drop. 2017_06_20_133317_create_listings_table.php: public function up() { Schema::create('listings', function (Blueprint $table) { $table->primary('id'); $table->unsignedInteger('id'); $table->string('title'); $table->string('address'); $table->longText('about'); // Amenities $table->boolean('amenity_wifi')->default(false); $table->boolean('amenity_pets_allowed')->default(false); $table->boolean('amenity_tv')->default(false); $table->boolean('amenity_kitchen')->default(false); $table->boolean('amenity_breakfast')->default(false); $table->boolean('amenity_laptop')->default(false); // Prices $table->string('price_per_night')->nullable(); $table->string('price_extra_people')->nullable(); $table->string('price_weekly_discount')->nullable(); $table->string('price_monthly_discount')->nullable(); }); } public function down() { Schema::drop('listings'); } A facade is an object-oriented design pattern for creating a static proxy to an underlying class in the service container. The facade is not meant to provide any new functionality; its only purpose is to provide a more memorable and easily readable way of performing a common action. Think of it as an object-oriented helper function. Execution Now that we've set up our new migration, let's run it with this Artisan command: $ php artisan migrate You should see an output like this in the Terminal: Migrating: 2017_06_20_133317_create_listings_table Migrated:            2017_06_20_133317_create_listings_table To confirm the migration worked, let's use Tinker to show the new table structure. If you've never used Tinker, it's a REPL tool that allows you to interact with a Laravel app on the command line. When you enter a command into Tinker it will be evaluated as if it were a line in your app code. Firstly, open the Tinker shell: $ php artisan tinker Now enter a PHP statement for evaluation. Let's use the DB facade's select method to run an SQL DESCRIBE query to show the table structure: >>>> DB::select('DESCRIBE listings;'); The output is quite verbose so I won't reproduce it here, but you should see an object with all your table details, confirming the migration worked. Seeding mock listings Now that we have a database table for our listings, let's seed it with the mock data. 
To do so we're going to have to do the following:  Load the database/data.json file  Parse the file  Insert the data into the listings table Creating a seeder Laravel includes a seeder class that we can extend called Seeder. Use this Artisan command to implement it: $ php artisan make:seeder ListingsTableSeeder When we run the seeder, any code in the run method is executed. database/ListingsTableSeeder.php: <?php use Illuminate\Database\Seeder; class ListingsTableSeeder extends Seeder { public function run() { // } } Loading the mock data Laravel provides a File facade that allows us to open files from disk as simply as File::get($path). To get the full path to our mock data file we can use the base_path() helper function, which returns the path to the root of our application directory as a string. It's then trivial to convert this JSON file to a PHP array using the built-in json_decode method. Once the data is an array, it can be directly inserted into the database given that the column names of the table are the same as the array keys. database/ListingsTableSeeder.php: public  function  run() { $path  = base_path()  . '/database/data.json'; $file  = File::get($path); $data  = json_decode($file,  true); } Inserting the data In order to insert the data, we'll use the DB facade again. This time we'll call the table method, which returns an instance of Builder. The Builder class is a fluent query builder that allows us to query the database by chaining constraints, for example, DB::table(...)->where(...)->join(...) and so on. Let's use the insert method of the builder, which accepts an array of column names and values. database/seeds/ListingsTableSeeder.php: public  function  run() { $path  = base_path()  . '/database/data.json'; $file  = File::get($path); $data  = json_decode($file,  true); DB::table('listings')->insert($data); } Executing the seeder To execute the seeder we must call it from the DatabaseSeeder.php file, which is in the same directory. database/seeds/DatabaseSeeder.php: <?php use Illuminate\Database\Seeder; class DatabaseSeeder extends Seeder { public function run() { $this->call(ListingsTableSeeder::class); } } With that done, we can use the Artisan CLI to execute the seeder: $ php artisan db:seed You should see the following output in your Terminal: Seeding: ListingsTableSeeder We'll again use Tinker to check our work. There are 30 listings in the mock data, so to confirm the seed was successful, let's check for 30 rows in the database: $ php artisan tinker >>>> DB::table('listings')->count(); # Output: 30 Finally, let's inspect the first row of the table just to be sure its content is what we expect: >>>> DB::table('listings')->get()->first(); Here is the output: => {#732 +"id": 1, +"title": "Central Downtown Apartment with Amenities", +"address": "No. 11, Song-Sho Road, Taipei City, Taiwan 105", +"about": "...", +"amenity_wifi": 1, +"amenity_pets_allowed": 1, +"amenity_tv": 1, +"amenity_kitchen": 1, +"amenity_breakfast": 1, +"amenity_laptop": 1, +"price_per_night": "$89", +"price_extra_people": "No charge", +"price_weekly_discount": "18%", +"price_monthly_discount": "50%" } If yours looks like that you're ready to move on! Listing model We've now successfully created a database table for our listings and seeded it with mock listing data. How do we access this data now from the Laravel app? We saw how the DB facade lets us execute queries on our database directly. But Laravel provides a more powerful way to access data via the Eloquent ORM. 
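One practical aside before moving on to Eloquent (this tweak is our suggestion and is not part of the book's seeder): running php artisan db:seed a second time inserts the 30 rows again. Truncating the table at the top of run() keeps the seeder safe to re-run:

<?php

use Illuminate\Database\Seeder;
use Illuminate\Support\Facades\DB;
use Illuminate\Support\Facades\File;

class ListingsTableSeeder extends Seeder
{
    public function run()
    {
        // Remove rows from any previous seed so re-running doesn't duplicate data
        DB::table('listings')->truncate();

        $path = base_path() . '/database/data.json';
        $file = File::get($path);
        $data = json_decode($file, true);
        DB::table('listings')->insert($data);
    }
}

Alternatively, php artisan migrate:refresh --seed rolls the migrations back, re-runs them, and reseeds the database in one step.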
Eloquent ORM Object-Relational Mapping (ORM) is a technique for converting data between incompatible systems in object-oriented programming languages. Relational databases such as MySQL can only store scalar values such as integers and strings, organized within tables. We want to make use of rich objects in our app, though, so we need a means of robust conversion. Eloquent is the ORM implementation used in Laravel. It uses the active record design pattern, where a model is tied to a single database table, and an instance of the model is tied to a single row. To create a model in Laravel using Eloquent ORM, simply extend the Illuminate\Database\Eloquent\Model class using Artisan: $ php artisan make:model Listing This generates a new file. app/Listing.php: <?php namespace App; use Illuminate\Database\Eloquent\Model; class Listing extends Model { // } How do we tell the ORM what table to map to, and what columns to include? By default, the Model class uses the class name (Listing) in lowercase (listing) as the table name to use. And, by default, it uses all the fields from the table. Now, any time we want to load our listings we can use code such as this, anywhere in our app: <?php // Load all listings $listings = \App\Listing::all(); // Iterate listings, echo the address foreach ($listings as $listing) { echo $listing->address . '\n' ; } /* * Output: * * No. 11, Song-Sho Road, Taipei City, Taiwan 105 * 110, Taiwan, Taipei City, Xinyi District, Section 5, Xinyi Road, 7 * No. 51, Hanzhong Street, Wanhua District, Taipei City, Taiwan 108 * ... */ Casting The data types in a MySQL database don't completely match up to those in PHP. For example, how does an ORM know if a database value of 0 is meant to be the number 0, or the Boolean value of false? An Eloquent model can be given a $casts property to declare the data type of any specific attribute. $casts is an array of key/values where the key is the name of the attribute being cast, and the value is the data type we want to cast to. For the listings table, we will cast the amenities attributes as Booleans. app/Listing.php: <?php namespace App; use Illuminate\Database\Eloquent\Model; class Listing extends Model { protected $casts = [ 'amenity_wifi' => 'boolean', 'amenity_pets_allowed' => 'boolean', 'amenity_tv' => 'boolean', 'amenity_kitchen' => 'boolean', 'amenity_breakfast' => 'boolean', 'amenity_laptop' => 'boolean' ]; } Now these attributes will have the correct type, making our model more robust: echo  gettype($listing->amenity_wifi()); //  boolean Public interface The final piece of our web service is the public interface that will allow a client app to request the listing data. Since the Vuebnb listing page is designed to display one listing at a time, we'll at least need an endpoint to retrieve a single listing. Let's now create a route that will match any incoming GET requests to the URI /api/listing/{listing} where {listing} is an ID. We'll put this in the routes/api.php file, where routes are automatically given the /api/ prefix and have middleware optimized for use in a web service by default. We'll use a closure function to handle the route. The function will have a $listing argument, which we'll type hint as an instance of the Listing class, that is, our model. Laravel's service container will resolve this as an instance with the ID matching {listing}. We can then encode the model as JSON and return it as a response. 
routes/api.php: <?php use App\Listing; Route::get('listing/{listing}', function(Listing $listing) { return $listing->toJson(); }); We can test this works by using the curl command from the Terminal: $ curl http://vuebnb.test/api/listing/1 The response will be the listing with ID 1: Controller We'll be adding more routes to retrieve the listing data as the project progresses. It's a best practice to use a controller class for this functionality to keep a separation of concerns. Let's create one with Artisan CLI: $ php artisan make:controller ListingController We'll then move the functionality from the route into a new method, get_listing_api. app/Http/Controllers/ListingController.php: <?php namespace App\Http\Controllers; use Illuminate\Http\Request; use App\Listing; class ListingController extends Controller { public function get_listing_api(Listing $listing) { return $listing->toJson(); } } For the Route::get method we can pass a string as the second argument instead of a closure function. The string should be in the form [controller]@[method], for example, ListingController@get_listing_web. Laravel will correctly resolve this at runtime. routes/api.php: <?php Route::get('/listing/{listing}', 'ListingController@get_listing_api'); Images As stated at the beginning of the article, each mock listing comes with several images of the room. These images are not in the project code and must be copied from a parallel directory in the code base called images. Copy the contents of this directory into the public/images folder: $ cp -a ../images/. ./public/images Once you've copied these files, public/images will have 30 sub-folders, one for each mock listing. Each of these folders will contain exactly four main images and a thumbnail image: Accessing images Files in the public directory can be directly requested by appending their relative path to the site URL. For example, the default CSS file, public/css/app.css, can be requested at http://vuebnb.test/css/app.css. The advantage of using the public folder, and the reason we've put our images there, is to avoid having to create any logic for accessing them. A frontend app can then directly call the images in an img tag. You may think it's inefficient for our web server to serve images like this, and you'd be right. Let's try to open one of the mock listing images in our browser to test this thesis: http://vuebnb.test/images/1/Image_1.jpg: Image links The payload for each listing in the web service should include links to these new images so a client app knows where to find them. Let's add the image paths to our listing API payload so it looks like this: { "id": 1, "title": "...", "description": "...", ... "image_1": "http://vuebnb.test/app/image/1/Image_1.jpg", "image_2": "http://vuebnb.test/app/image/1/Image_2.jpg", "image_3": "http://vuebnb.test/app/image/1/Image_3.jpg", "image_4": "http://vuebnb.test/app/image/1/Image_4.jpg" } To implement this, we'll use our model's toArray method to make an array representation of the model. We'll then easily be able to add new fields. Each mock listing has exactly four images, numbered 1 to 4, so we can use a for loop and the asset helper to generate fully- qualified URLs to files in the public folder. We finish by creating an instance of the Response class by calling the response helper. We use the json; method and pass in our array of fields, returning the result. 
app/Http/Controllers/ListingController.php: public function get_listing_api(Listing $listing) { $model = $listing->toArray(); for($i = 1; $i <=4; $i++) { $model['image_' . $i] = asset( 'images/' . $listing->id . '/Image_' . $i . '.jpg' ); } return response()->json($model); } The /api/listing/{listing} endpoint is now ready for consumption by a client app. To summarize, we built a web service with Laravel to make the data publicly accessible. This involved setting up a database table using a migration and schema, then seeding the database with mock listing data. We then created a public interface for the web service using routes. You enjoyed an excerpt from a book written by Anthony Gore, titled Full-Stack Vue.js 2 and Laravel 5 which would help you bring the frontend and backend together with Vue, Vuex, and Laravel. Read More Testing RESTful Web Services with Postman How to develop RESTful web services in Spring        
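As a natural next step beyond this excerpt (our own illustration; the route and method names here are hypothetical, not from the book): the same controller pattern extends to an endpoint that returns every listing, which a client could use for an index or search page.

// routes/api.php
Route::get('/listings', 'ListingController@get_all_listings_api');

// app/Http/Controllers/ListingController.php
public function get_all_listings_api()
{
    // Return all listings as a JSON array
    return response()->json(Listing::all());
}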
Perform Advanced Programming with Rust

Packt Editorial Staff
23 Apr 2018
7 min read
Rust is a systems programming language that runs blazingly fast, prevents segfaults, and guarantees thread safety. In today’s tutorial we are focusing on equipping you with recipes to programming with Rust and also help you define expressions, constants, and variable bindings. Let us get started: Defining an expression An expression, in simple words, is a statement in Rust by using which we can create logic and workflows in the program and applications. We will deep dive into understanding expressions and blocks in Rust. Getting ready We will require the Rust compiler and any text editor for coding. How to do it... Follow the ensuing steps: Create a file named expression.rs with the next code snippet. Declare the main function and create the variables x_val, y_val, and z_val: //  main  point  of  execution fn  main()  { //  expression let  x_val  =  5u32; //  y  block let  y_val  =  { let  x_squared  =  x_val  *  x_val; let  x_cube  =  x_squared  *  x_val; //  This  expression  will  be  assigned  to  `y_val` x_cube  +  x_squared  +  x_val }; //  z  block let  z_val  =  { //  The  semicolon  suppresses  this  expression  and  `()`  is assigned  to  `z` 2  *  x_val; }; //  printing  the  final  outcomes println!("x  is  {:?}",  x_val); println!("y  is  {:?}",  y_val); println!("z  is  {:?}",  z_val); } You should get the ensuing output upon running the code. Please refer to the following screenshot: How it works... All the statements that end in a semicolon (;) are expressions. A block is a statement that has a set of statements and variables inside the {} scope. The last statement of a block is the value that will be assigned to the variable. When we close the last statement with a semicolon, it returns () to the variable. In the preceding recipe, the first statement which is a variable named x_val , is assigned to the value 5. Second, y_val is a block that performs certain operations on the variable x_val and a few more variables, which are x_squared and x_cube that contain the squared and cubic values of the variable x_val , respectively. The variables x_squared and x_cube , will be deleted soon after the scope of the block. The block where we declare the z_val variable has a semicolon at the last statement which assigns it to the value of (), suppressing the expression. We print out all the values in the end. We print all the declared variables values in the end. Defining constants Rust provides the ability to assign and maintain constant values across the code in Rust. These values are very useful when we want to maintain a global count, such as a timer-- threshold--for example. Rust provides two const keywords to perform this activity. You will learn how to deliver constant values globally in this recipe. Getting ready We will require the Rust compiler and any text editor for coding. How to do it... Follow these steps: Create a file named constant.rs with the next code snippet. 
Declare the global UPPERLIMIT using constant: // Global variables are declared outside the scope of other functions const UPPERLIMIT: i32 = 12; Create the is_big function by accepting a single integer as input: // function to check if number is big fn is_big(n: i32) -> bool { // Access constant in some function n > UPPERLIMIT } In the main function, call the is_big function and perform the decision-making statement: fn main() { let random_number = 15; // Access constant in the main thread println!("The threshold is {}", UPPERLIMIT); println!("{} is {}", random_number, if is_big(random_number) { "big" } else { "small" }); // Error! Cannot modify a `const`. // UPPERLIMIT = 5; } You should get the following screenshot as output upon running the preceding code: How it works... The workflow of the recipe is fairly simple, where we have a function to check whether an integer is greater than a fixed threshold or not. The UPPERLIMIT variable defines the fixed threshold for the function, which is a constant whose value will not change in the code and is accessible throughout the program. We assigned 15 to random_number and passed it via is_big(integer value); we then get a boolean output, either true or false, as the return type of the function is a bool type. The answer to our situation is false as 15 is not bigger than 12, which is the value UPPERLIMIT was set to as the constant. We performed this condition checking using the if...else statement in Rust. We cannot change the UPPERLIMIT value; when attempted, it will throw an error, which is commented in the code section. Constants declare constant values. They represent a value, not a memory address, and are declared in the form const NAME: type = value; Performing variable bindings Variable binding refers to how a variable in the Rust code is bound to a type. We will cover pattern, mutability, scope, and shadow concepts in this recipe. Getting ready We will require the Rust compiler and any text editor for coding. How to do it... Perform the following step: Create a file named binding.rs and enter a code snippet that includes declaring the main function and different variables: fn main() { // Simplest variable binding let a = 5; // pattern let (b, c) = (1, 2); // type annotation let x_val: i32 = 5; // shadow example let y_val: i32 = 8; { println!("Value assigned when entering the scope : {}", y_val); // Prints "8". let y_val = 12; println!("Value modified within scope : {}", y_val); // Prints "12". } println!("Value which was assigned first : {}", y_val); // Prints "8". let y_val = 42; println!("New value assigned : {}", y_val); // Prints "42". } You should get the following screenshot as output upon running the preceding code: How it works... The let statement is the simplest way to create a binding, where we bind a variable to a value, which is the case with variable a. To create a pattern with the let statement, we assign the pattern values to b and c values in the same pattern. Rust is a statically typed language. This means that we have to specify our types during an assignment, and at compile time, it is checked to see if it is compatible. Rust also has the type inference feature that identifies the variable type automatically at compile time. The variable_name : type is the format we use to explicitly mention the type in Rust. We read the assignment in the following format: x_val is a binding with the type i32 and the value 5. 
Here, we declared x_val as a 32-bit signed integer. However, Rust has many different primitive integer types that begin with i for signed integers and u for unsigned integers, and the possible integer sizes are 8, 16, 32, and 64 bits. Variable bindings have a scope that makes the variable alive only in the scope. Once it goes out of the scope, the resources are freed. A block is a collection of statements enclosed by {}. Function definitions are also blocks! We use a block to illustrate the feature in Rust that allows variable bindings to be shadowed. This means that a later variable binding can be done with the same name, which in our case is y_val. This goes through a series of value changes, as a new binding that is currently in scope overrides the previous binding. Shadowing enables us to rebind a name to a value of a different type. This is the reason why we are able to assign new values to the immutable y_val variable in and out of the block. [box type="shadow" class="" width=""]This article is an extract taken from Rust Cookbook written by Vigneshwer Dhinakaran. You will find more than 80 practical recipes written in Rust that will allow you to use the code samples right away in your existing applications.[/box] Read More 20 ways to describe programming in 5 words Top 5 programming languages for crunching Big Data effectively    
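One extra illustration of the shadowing point above (our example, not taken from the cookbook): because each let creates a brand new binding, a shadowed name can change type entirely, which an ordinary mutable variable cannot do:

fn main() {
    // Start with a string slice
    let input = "42";
    // Shadow `input` with a parsed integer -- the type changes from &str to i32
    let input: i32 = input.parse().expect("not a number");
    // Shadow again with a derived value
    let input = input * 2;
    println!("doubled: {}", input); // Prints "doubled: 84"
}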
How data scientists test hypotheses and probability

Richard Gall
23 Apr 2018
4 min read
Why hypotheses are important in statistical analysis Hypothesis testing allows researchers and statisticians to develop hypotheses which are then assessed to determine the probability or the likelihood of those findings. This statistics tutorial has been taken from Basic Statistics and Data Mining for Data Science. Whenever you wish to make an inference about a population from a sample, you must test a specific hypothesis. It’s common practice to state 2 different hypotheses: Null hypothesis which states that there is no effect Alternative/research hypothesis which states that there is an effect So, the null hypothesis is one which says that there is no difference. For example, you might be looking at the mean income between males and females, but the null hypothesis you are testing is that there is no difference between the 2 groups. The alternative hypothesis, meanwhile, is generally, although not exclusively, the one that researchers are really interested in. In this example, you might hypothesize that the mean income between males and females is different. Read more: How to predict Bitcoin prices from historical and live data. Why probability is important in statistical analysis In statistics, nothing is ever certain because we are always dealing with samples rather than populations. This is why we always have to work in probabilities. The way hypotheses are assessed is by calculating the probability or the likelihood of finding our result. A probability value, which can range from zero to one, corresponding to 0% and 100% in percentages, is essentially a way of measuring the likelihood of a particular event occurring. You can use these values to assess whether the likelihood of any of these differences that you have found are the result of random chance. How do hypotheses and probability interact? It starts getting really interesting once we begin looking at how hypotheses and probability interact. Here’s an example. Suppose you want to know who is going to win the Super Bowl. I ask a fellow statistician, and he tells me that she’s built a predictive model and that he knows which team is going to win. Fine - my next question is how confident he is in that prediction. He says he’s 50% confident - are you going to trust his prediction? Of course you’re not - there are only 2 possible outcomes and 50% is ultimately just random chance. So, say I ask another statistician. He also tells me that he has a prediction and that he has built a predictive model, and he’s 75% confident in the prediction he has made. You’re more likely to trust this prediction - you have a 75% chance of being right and a 25% chance of being wrong. But let’s say you’re feeling cautious - a 25% chance of being wrong is too high. So, you ask another statistician for their prediction. She tells me that she’s also built a predictive model which she has 90% confidence is correct. So, having formally stated our hypotheses we then have to select a criterion for acceptance or rejection of the null hypothesis. With probability tests like the chi-squared test, the t-test, or regression or correlation, you’re testing the likelihood that a statistic of the magnitude that you obtained or greater would have occurred by chance, assuming that the null hypothesis is true. It’s important to remember that you always assess the probability of the null hypothesis as true. You only reject the null hypothesis if you can say that the results would have been extremely unlikely under the conditions set by the null hypothesis. 
In this case, if you can reject the null hypothesis, you have found support for the alternative/research hypothesis. This doesn’t prove the alternative hypothesis, but it does tell you that the null hypothesis is unlikely to be true. The criterion we typically use is whether the significance level sits above or below 0.05 (5%), indicating that a statistic of the size that we obtained, would only be likely to occur on 5% of occasions. By choosing a 5% criterion you are accepting that you will make a mistake in rejecting the null hypothesis 1 in 20 times. Replication and data mining If in traditional statistics we work with hypotheses and probabilities to deal with the fact that we’re always working with a sample rather than a population, in data mining, we can work in a slightly different way - we can use something called replication instead. In a data mining project we might have 2 data sets - a training data set and a testing data set. We build our model on a training set and once we’ve done that, we take the results of that model and then apply it to a testing data set to see if we find similar results.
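To make the hypothesis-testing discussion above concrete, here is a minimal sketch in Python (using numpy and scipy, which the article itself does not reference) of a two-sample t-test on simulated male and female incomes, with the p-value compared against the 5% criterion:

import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Simulated incomes for two groups; the null hypothesis is that the means are equal
male_income = rng.normal(loc=50000, scale=8000, size=200)
female_income = rng.normal(loc=48000, scale=8000, size=200)

t_stat, p_value = stats.ttest_ind(male_income, female_income)

alpha = 0.05  # the 5% criterion discussed above
if p_value < alpha:
    print(f"p = {p_value:.4f}: reject the null hypothesis of equal mean incomes")
else:
    print(f"p = {p_value:.4f}: fail to reject the null hypothesis")

Because the samples are drawn with genuinely different means, the test will usually (though not always) return a p-value below 0.05, illustrating the kind of decision rule described above.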
Data bindings with Knockout.js

Vijin Boricha
23 Apr 2018
7 min read
Today, we will learn about three data binding abilities of Knockout.js. Data bindings are attributes added by the framework for the purpose of data access between elements and view scope. While Observable arrays are efficient in accessing the list of objects with the number of operations on top of the display of the list using the foreach function, Knockout.js has provided three additional data binding abilities: Control-flow bindings Appearance bindings Interactive bindings Let us review these data bindings in detail in the following sections. Control-flow bindings As the name suggests, control-flow bindings help us access the data elements based on a certain condition. The if, if-not, and with are the control-flow bindings available from the Knockout.js. In the following example, we will be using if and with control-flow bindings. We have added a new attribute to the Employee object called age; we are displaying the age value in green only if it is greater than 20. Similarly, we have added another markedEmployee. With control-flow binding, we can limit the scope of access to that specific employee object in the following paragraph. Add the following code snippet to index.html and run the program to see the if and with control-flow bindings working: <!DOCTYPE html> <html> <head> <title>Knockout JS</title> </head> <body> <h1>Welcome to Knockout JS programming</h1> <table border="1" > <tr > <th colspan="2" style="padding:10px;"> <b>Employee Data - Organization : <span style="color:red" data-bind='text: organizationName'> </span> </b> </th> </tr> <tr> <td style="padding:10px;">Employee First Name:</td> <td style="padding:10px;"> <span data-bind='text: empFirstName'></span> </td> </tr> <tr> <td style="padding:10px;">Employee Last Name:</td> <td style="padding:10px;"> <span data-bind='text: empLastName'></span> </td> </tr> </table> <p>Organization Full Name : <span style="color:red" data-bind='text: orgFullName'></span> </p> <!-- Observable Arrays--> <h2>Observable Array Example : </h2> <table border="1"> <thead><tr> <th style="padding:10px;">First Name</th> <th style="padding:10px;">Last Name</th> <th style="padding:10px;">Age</th> </tr></thead> <tbody data-bind='foreach: organization'> <tr> <td style="padding:10px;" data-bind='text: firstName'></td> <td style="padding:10px;" data-bind='text: lastName'></td> <td data-bind="if: age() > 20" style="color: green;padding:10px;"> <span data-bind='text:age'></span> </td> </tr> </tbody> </table> <!-- with control flow bindings --> <p data-bind='with: markedEmployee'> Employee <strong data-bind="text: firstName() + ', ' + lastName()"> </strong> is marked with the age <strong data-bind='text: age'> </strong> </p> <h2>Add New Employee to Observable Array</h2> First Name : <input data-bind="value: newFirstName" /> Last Name : <input data-bind="value: newLastName" /> Age : <input data-bind="value: newEmpAge" /> <button data-bind='click: addEmployee'>Add Employee</button> <!-- JavaScript resources --> <script type='text/javascript' src='js/knockout-3.4.2.js'></script> <script type='text/javascript'> function Employee (firstName, lastName,age) { this.firstName = ko.observable(firstName); this.lastName = ko.observable(lastName); this.age = ko.observable(age); }; this.addEmployee = function() { this.organization.push(new Employee (employeeViewModel.newFirstName(), employeeViewModel.newLastName(), employeeViewModel.newEmpAge())); }; var employeeViewModel = { empFirstName: "Tony", empLastName: "Henry", //Observable organizationName: ko.observable("Sun"), 
newFirstName: ko.observable(""), newLastName: ko.observable(""), newEmpAge: ko.observable(""), //With control flow object markedEmployee: ko.observable(new Employee("Garry", "Parks", "65")), //Observable Arrays organization : ko.observableArray([ new Employee("John", "Kennedy", "24"), new Employee("Peter", "Hennes","18"), new Employee("Richmond", "Smith","54") ]) }; //Computed Observable employeeViewModel.orgFullName = ko.computed(function() { return employeeViewModel.organizationName() + " Limited"; }); ko.applyBindings(employeeViewModel); employeeViewModel.organizationName("Oracle"); </script> </body> </html> Run the preceding program to see the if control-flow acting on the Age field, and the with control-flow showing a marked employee record with age 65: Appearance bindings Appearance bindings deal with displaying the data from binding elements on view components in formats such as text and HTML, and applying styles with the help of a set of six bindings, as follows: Text: <value>—Sets the value to an element. Example: <td data-bind='text: name'></td> HTML: <value>—Sets the HTML value to an element. Example: //JavaScript: function Employee(firstname, lastname, age) { ... this.formattedName = ko.computed(function() { return "<strong>" + this.firstname() + "</strong>"; }, this); } //Html: <span data-bind='html: markedEmployee().formattedName'></span> Visible: <condition>—An element can be shown or hidden based on the condition. Example: <td data-bind='visible: age() > 20' style='color: green'> span data-bind='text:age'> CSS: <object>—An element can be associated with a CSS class. Example: //CSS: .strongEmployee { font-weight: bold; } //HTML: <span data-bind='text: formattedName, css: {strongEmployee}'> </span> Style: <object>—Associates an inline style to the element. Example: <span data-bind='text: age, style: {color: age() > 20 ? "green" :"red"}'> </span> Attr: <object>—Defines an attribute for the element. Example: <p><a data-bind='attr: {href: featuredEmployee().populatelink}'> View Employee</a></p> Interactive bindings Interactive bindings help the user interact with the form elements to be associated with corresponding viewmodel methods or events to be triggered in the pages. Knockout JS supports the following interactive bindings: Click: <method>—An element click invokes a ViewModel method. Example: <button data-bind='click: addEmployee'>Submit</button> Value:<property>—Associates the form element value to the ViewModel attribute. Example: <td>Age: <input data-bind='value: age' /></td> Event: <object>—With an user-initiated event, it invokes a method. Example: <p data-bind='event: {mouseover: showEmployee, mouseout: hideEmployee}'> Age: <input data-bind='value: Age' /> </p> Submit: <method>—With a form submit event, it can invoke a method. Example: <form data-bind="submit: addEmployee"> <!—Employee form fields --> <button type="submit">Submit</button> </form> Enable: <property>—Conditionally enables the form elements. Example: last name field is enabled only after adding first name field. Disable: <property>—Conditionally disables the form elements. Example: last name field is disabled after adding first name: <p>Last Name: <input data-bind='value: lastName, disable: firstName' /> </p> Checked: <property>—Associates a checkbox or radio element to the ViewModel attribute. Example: <p>Gender: <input data-bind='checked:gender' type='checkbox' /></p> Options: <array>—Defines a ViewModel array for the<select> element. 
Example: //Javascript: this.designations = ko.observableArray(['manager', 'administrator']); //Html: Designation: <select data-bind='options: designations'></select> selectedOptions: <array>—Defines the active/selected element from the <select> element. Example: Designation: <select data-bind='options: designations, optionsText:"Select", selectedOptions:defaultDesignation'> </select> hasfocus: <property>—Associates the focus attribute to the element. Example: First Name: <input data-bind='value: firstName, hasfocus: firstNameHasFocus' /> We learned about data binding abilities of Knockout.js. You can know more about external data access and Hybrid Mobile Application Development from the book Oracle JET for Developers. Read More Text and appearance bindings and form field bindings Getting to know KnockoutJS Templates    
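To see several of these bindings working together, here is a small self-contained sketch you can drop into its own HTML file. It assumes the same local js/knockout-3.4.2.js path used in the earlier example; the view model and field names below are made up for illustration and are not part of the Employee example above.

<!DOCTYPE html>
<html>
<head>
  <title>Knockout JS - Binding Sampler</title>
  <style> .strongEmployee { font-weight: bold; } </style>
</head>
<body>
  <h2>Appearance and interactive bindings together</h2>
  <p>
    Name : <input data-bind='value: firstName, hasfocus: nameHasFocus' />
    Age : <input data-bind='value: age' />
    <label><input type='checkbox' data-bind='checked: isManager' /> Manager</label>
    <button data-bind='click: save, enable: firstName'>Save</button>
  </p>
  <!-- visible, css, and style all react to the same observables -->
  <p data-bind='visible: firstName().length > 0'>
    <span data-bind='text: firstName, css: { strongEmployee: isManager }'></span>
    is
    <span data-bind="text: age, style: { color: age() > 20 ? 'green' : 'red' }"></span>
    years old.
  </p>
  <!-- JavaScript resources -->
  <script type='text/javascript' src='js/knockout-3.4.2.js'></script>
  <script type='text/javascript'>
    // Illustrative view model: the observables feed both the appearance
    // and the interactive bindings declared above.
    var samplerViewModel = {
      firstName: ko.observable(''),
      age: ko.observable(25),
      isManager: ko.observable(false),
      nameHasFocus: ko.observable(true),
      save: function () {
        alert('Saving ' + this.firstName() + ' (age ' + this.age() + ')');
      }
    };
    ko.applyBindings(samplerViewModel);
  </script>
</body>
</html>

Typing a name enables the Save button (enable), ticking the Manager checkbox toggles the CSS class (checked and css), and the age value switches the text colour between green and red (style), mirroring the bindings listed above.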


Using R6 classes in R to retrieve live data for markets and wallets

Pravin Dhandre
23 Apr 2018
11 min read
In this tutorial, you will learn to create a simple requester to request external information from an API over the internet. You will also learn to develop exchange and wallet infrastructure using R programming. Creating a simple requester to isolate API calls Now, we will focus on how we actually retrieve live data. This functionality will also be implemented using R6 classes, as the interactions can be complex. First of all, we create a simple Requester class that contains the logic to retrieve data from JSON APIs found elsewhere in the internet and that will be used to get our live cryptocurrency data for wallets and markets. We don't want logic that interacts with external APIs spread all over our classes, so we centralize it here to manage it as more specialized needs come into play later. As you can see, all this object does is offer the public request() method, and all it does is use the formJSON() function from the jsonlite package to call a URL that is being passed to it and send the data it got back to the user. Specifically, it sends it as a dataframe when the data received from the external API can be coerced into dataframe-form. library(jsonlite) Requester <- R6Class( "Requester", public = list( request = function(URL) { return(fromJSON(URL)) } ) ) Developing our exchanges infrastructure Our exchanges have multiple markets inside, and that's the abstraction we will define now. A Market has various private attributes, as we saw before when we defined what data is expected from each file, and that's the same data we see in our constructor. It also offers a data() method to send back a list with the data that should be saved to a database. Finally, it provides setters and getters as required. Note that the setter for the price depends on what units are requested, which can be either usd or btc, to get a market's asset price in terms of US Dollars or Bitcoin, respectively: Market <- R6Class( "Market", public = list( initialize = function(timestamp, name, symbol, rank, price_btc, price_usd) { private$timestamp <- timestamp private$name <- name private$symbol <- symbol private$rank <- rank private$price_btc <- price_btc private$price_usd <- price_usd }, data = function() { return(list( timestamp = private$timestamp, name = private$name, symbol = private$symbol, rank = private$rank, price_btc = private$price_btc, price_usd = private$price_usd )) }, set_timestamp = function(timestamp) { private$timestamp <- timestamp }, get_symbol = function() { return(private$symbol) }, get_rank = function() { return(private$rank) }, get_price = function(base) { if (base == 'btc') { return(private$price_btc) } else if (base == 'usd') { return(private$price_usd) } } ), private = list( timestamp = NULL, name = "", symbol = "", rank = NA, price_btc = NA, price_usd = NA ) ) Now that we have our Market definition, we proceed to create our Exchange definition. This class will receive an exchange name as name and will use the exchange_requester_factory() function to get an instance of the corresponding ExchangeRequester. It also offers an update_markets() method that will be used to retrieve market data with the private markets() method and store it to disk using the timestamp and storage objects being passed to it. Note that instead of passing the timestamp through the arguments for the private markets() method, it's saved as a class attribute and used within the private insert_metadata() method. 
This technique provides cleaner code, since the timestamp does not need to be passed through each function and can be retrieved when necessary. The private markets() method calls the public markets() method in the ExchangeRequester instance saved in the private requester attribute (which was assigned to by the factory) and applies the private insert_metadata() method to update the timestamp for such objects with the one sent to the public update_markets() method call before sending them to be written to the database: source("./requesters/exchange-requester-factory.R", chdir = TRUE) Exchange <- R6Class( "Exchange", public = list( initialize = function(name) { private$requester <- exchange_requester_factory(name) }, update_markets = function(timestamp, storage) { private$timestamp <- unclass(timestamp) storage$write_markets(private$markets()) } ), private = list( requester = NULL, timestamp = NULL, markets = function() { return(lapply(private$requester$markets(), private$insert_metadata)) }, insert_metadata = function(market) { market$set_timestamp(private$timestamp) return(market) } ) ) Now, we need to provide a definition for our ExchangeRequester implementations. As in the case of the Database, this ExchangeRequester will act as an interface definition that will be implemented by the CoinMarketCapRequester. We see that the ExchangeRequester specifies that all exchange requester instances should provide a public markets() method, and that a list is expected from such a method. From context, we know that this list should contain Market instances. Also, each ExchangeRequester implementation will contain a Requester object by default, since it's being created and assigned to the requester private attribute upon class instantiation. Finally, each implementation will also have to provide a create_market() private method and will be able to use the request() private method to communicate to the Requester method request() we defined previously: source("../../../utilities/requester.R") KNOWN_ASSETS = list( "BTC" = "Bitcoin", "LTC" = "Litecoin" ) ExchangeRequester <- R6Class( "ExchangeRequester", public = list( markets = function() list() ), private = list( requester = Requester$new(), create_market = function(resp) NULL, request = function(URL) { return(private$requester$request(URL)) } ) ) Now we proceed to provide an implementation for CoinMarketCapRequester. As you can see, it inherits from ExchangeRequester, and it provides the required method implementations. Specifically, the markets() public method calls the private request() method from ExchangeRequester, which in turn calls the request() method from Requester, as we have seen, to retrieve data from the private URL specified. If you request data from CoinMarketCap's API by opening a web browser and navigating to the URL shown (https:/​/​api.​coinmarketcap.​com/​v1/​ticker), you will get a list of market data. That is the data that will be received in our CoinMarketCapRequester instance in the form of a dataframe, thanks to the Requester object, and will be transformed into numeric data where appropriate using the private clean() method, so that it's later used to create Market instances with the apply() function call, which in turn calls the create_market() private method. Note that the timestamp is set to NULL for all markets created this way because, as you may remember from our Exchange class, it's set before writing it to the database. 
There's no need to send the timestamp information all the way down to the CoinMarketCapRequester, since we can simply write at the Exchange level right before we send the data to the database: source("./exchange-requester.R") source("../market.R") CoinMarketCapRequester <- R6Class( "CoinMarketCapRequester", inherit = ExchangeRequester, public = list( markets = function() { data <- private$clean(private$request(private$URL)) return(apply(data, 1, private$create_market)) } ), private = list( URL = "https://api.coinmarketcap.com/v1/ticker", create_market = function(row) { timestamp <- NULL return(Market$new( timestamp, row[["name"]], row[["symbol"]], row[["rank"]], row[["price_btc"]], row[["price_usd"]] )) }, clean = function(data) { data$price_usd <- as.numeric(data$price_usd) data$price_btc <- as.numeric(data$price_btc) data$rank <- as.numeric(data$rank) return(data) } ) ) Finally, here's the code for our exchange_requester_factory(). As you can see, it's basically the same idea we have used for our other factories, and its purpose is to easily let us add more implementations for our ExchangeRequeseter by simply adding else-if statements in it: source("./coinmarketcap-requester.R") exchange_requester_factory <- function(name) { if (name == "CoinMarketCap") { return(CoinMarketCapRequester$new()) } else { stop("Unknown exchange name") } } Developing our wallets infrastructure Now that we are able to retrieve live price data from exchanges, we turn to our Wallet definition. As you can see, it specifies the type of private attributes we expect for the data that it needs to handle, as well as the public data() method to create the list of data that needs to be saved to a database at some point. It also provides getters for email, symbol, and address, and the public pudate_assets() method, which will be used to get and save assets into the database, just as we did in the case of Exchange. As a matter of fact, the techniques followed are exactly the same, so we won't explain them again: source("./requesters/wallet-requester-factory.R", chdir = TRUE) Wallet <- R6Class( "Wallet", public = list( initialize = function(email, symbol, address, note) { private$requester <- wallet_requester_factory(symbol, address) private$email <- email private$symbol <- symbol private$address <- address private$note <- note }, data = function() { return(list( email = private$email, symbol = private$symbol, address = private$address, note = private$note )) }, get_email = function() { return(as.character(private$email)) }, get_symbol = function() { return(as.character(private$symbol)) }, get_address = function() { return(as.character(private$address)) }, update_assets = function(timestamp, storage) { private$timestamp <- timestamp storage$write_assets(private$assets()) } ), private = list( timestamp = NULL, requester = NULL, email = NULL, symbol = NULL, address = NULL, note = NULL, assets = function() { return (lapply ( private$requester$assets(), private$insert_metadata)) }, insert_metadata = function(asset) { timestamp(asset) <- unclass(private$timestamp) email(asset) <- private$email return(asset) } ) ) Implementing our wallet requesters The WalletRequester will be conceptually similar to the ExchangeRequester. It will be an interface, and will be implemented in our BTCRequester and LTCRequester interfaces. As you can see, it requires a public method called assets() to be implemented and to return a list of Asset instances. 
It also requires a private create_asset() method to be implemented, which should return individual Asset instances, and a private url method that will build the URL required for the API call. It offers a request() private method that will be used by implementations to retrieve data from external APIs: source("../../../utilities/requester.R") WalletRequester <- R6Class( "WalletRequester", public = list( assets = function() list() ), private = list( requester = Requester$new(), create_asset = function() NULL, url = function(address) "", request = function(URL) { return(private$requester$request(URL)) } ) ) The BTCRequester and LTCRequester implementations are shown below for completeness, but will not be explained. If you have followed everything so far, they should be easy to understand: source("./wallet-requester.R") source("../../asset.R") BTCRequester <- R6Class( "BTCRequester", inherit = WalletRequester, public = list( initialize = function(address) { private$address <- address }, assets = function() { total <- as.numeric(private$request(private$url())) if (total > 0) { return(list(private$create_asset(total))) } return(list()) } ), private = list( address = "", url = function(address) { return(paste( "https://chainz.cryptoid.info/btc/api.dws", "?q=getbalance", "&a=", private$address, sep = "" )) }, create_asset = function(total) { return(new( "Asset", email = "", timestamp = "", name = "Bitcoin", symbol = "BTC", total = total, address = private$address )) } ) ) source("./wallet-requester.R") source("../../asset.R") LTCRequester <- R6Class( "LTCRequester", inherit = WalletRequester, public = list( initialize = function(address) { private$address <- address }, assets = function() { total <- as.numeric(private$request(private$url())) if (total > 0) { return(list(private$create_asset(total))) } return(list()) } ), private = list( address = "", url = function(address) { return(paste( "https://chainz.cryptoid.info/ltc/api.dws", "?q=getbalance", "&a=", private$address, sep = "" )) }, create_asset = function(total) { return(new( "Asset", email = "", timestamp = "", name = "Litecoin", symbol = "LTC", total = total, address = private$address )) } ) ) The wallet_requester_factory() works just as the other factories; the only difference is that in this case, we have two possible implementations that can be returned, which can be seen in the if statement. If we decided to add a WalletRequester for another cryptocurrency, such as Ether, we could simply add the corresponding branch here, and it should work fine: source("./btc-requester.R") source("./ltc-requester.R") wallet_requester_factory <- function(symbol, address) { if (symbol == "BTC") { return(BTCRequester$new(address)) } else if (symbol == "LTC") { return(LTCRequester$new(address)) } else { stop("Unknown symbol") } } Hope you enjoyed this interesting tutorial and were able to retrieve live data for your application. To know more, do check out the R Programming By Example and start handling data efficiently with modular, maintainable and expressive codes. Read More Introduction to R Programming Language and Statistical Environment 20 ways to describe programming in 5 words  
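As noted above, supporting another cryptocurrency such as Ether only takes one more WalletRequester implementation and an extra branch in the factory. The sketch below follows the same shape as BTCRequester; the class name, file name, and balance endpoint are assumptions made for illustration (chainz.cryptoid.info does not expose an Ether balance API), so point url() at whichever balance service you actually use:

source("./wallet-requester.R")
source("../../asset.R")

ETHRequester <- R6Class(
    "ETHRequester",
    inherit = WalletRequester,
    public = list(
        initialize = function(address) {
            private$address <- address
        },
        assets = function() {
            total <- as.numeric(private$request(private$url()))
            if (total > 0) {
                return(list(private$create_asset(total)))
            }
            return(list())
        }
    ),
    private = list(
        address = "",
        url = function(address) {
            # Placeholder endpoint: substitute the balance API you actually use.
            return(paste(
                "https://example-eth-api.invalid/balance",
                "?address=", private$address,
                sep = ""
            ))
        },
        create_asset = function(total) {
            return(new(
                "Asset",
                email = "",
                timestamp = "",
                name = "Ether",
                symbol = "ETH",
                total = total,
                address = private$address
            ))
        }
    )
)

source("./btc-requester.R")
source("./ltc-requester.R")
source("./eth-requester.R") # assuming the sketch above is saved under this name

wallet_requester_factory <- function(symbol, address) {
    if (symbol == "BTC") {
        return(BTCRequester$new(address))
    } else if (symbol == "LTC") {
        return(LTCRequester$new(address))
    } else if (symbol == "ETH") {
        return(ETHRequester$new(address))
    } else {
        stop("Unknown symbol")
    }
}

Nothing else needs to change: Wallet already receives its requester from the factory, so a wallet row with the ETH symbol picks up the new implementation automatically.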


This week on Packt Hub – 20 April 2018

Aarthi Kumaraswamy
20 Apr 2018
4 min read
It’s been another busy week on the Packt Hub with a lot of tech news developments, hands on tutorials and insights on the latest and trending technological trends like Vue.js, Kotlin, GDPR and more. There has been plenty of interesting stories too from around the world. Here’s what you might have missed in the last 7 days - Tutorials, insights and new on technology… Featured Interview Selenium and data-driven testing: An interview with Carl Cocchiaro Data-driven testing has become a lot easier thanks to tools like Selenium. That’s good news for everyone in software development. We spoke to Carl Cocchiato about data-driven testing and much more. Carl is the author of Selenium Framework Design in Data-Driven Testing. Tech news Bulletins Cloud and networking news bulletin – Friday 20 April Programming news bulletin – Thursday 19 April Security news bulletin – Wednesday 18 April Web development news bulletin – Tuesday 17 April Data science news bulletin – Monday 16 April Data news in depth MongoDB going relational with 4.0 release [Editor’s Pick] JupyterLab v0.32.0 releases TensorFlow 1.8.0-rc0 releases Development & programming news in depth What’s new in ECMAScript 2018 (ES9)? [Editor’s Pick] Understanding the hype behind Magic Leap’s New Augmented Reality Headsets Scrivito launches serverless JavaScript CMS What’s new in Unreal Engine 4.19? [Editor’s Pick] Leap Motion open sources its $100 augmented reality headset, North Star [Editor’s Pick] Unity 2D & 3D game kits simplify Unity game development for beginners Cloud & networking news in depth What to expect from upcoming Ubuntu 18.04 release [Editor’s Pick] Google announces the largest overhaul of their Cloud Speech-to-Text What’s new in Docker Enterprise Edition 2.0? Couchbase mobile 2.0 is released Other news Splunk Industrial Asset Intelligence (Splunk IAI) targets Industrial IoT marketplace [Editor’s Pick] Tutorials This week we bring data and machine learning folks, tutorials on real-time streaming with Azure stream analytics, creating live visual dashboards in Power BI and using Q learning to build an options trading web app. For web developers, this week is all about Vue.js with a little Go on the side. Android rules mobile development while Kotlin is gaining more traction as a serious programming language. 
Data tutorials How to get started with Azure Stream Analytics and 7 reasons to choose it Performing Vehicle Telemetry job analysis with Azure Stream Analytics tools Building a live interactive visual dashboard in Power BI with Azure Stream [Editor’s Pick] Cross-validation in R for predictive models How to build an options trading web app using Q-learning [Editor’s Pick] Development & programming tutorials Web development tutorials How to build a basic server-side chatbot using Go Building your first Vue.js 2 Web application [Editor’s Pick] How to test node applications using Mocha framework Game and Mobile development tutorials How to Secure and Deploy an Android App Packaging and publishing an Oracle JET Hybrid mobile application Build your first Android app with Kotlin Behavior Scripting in C# and Javascript for game developers [Editor’s Pick] Programming tutorials Getting started with Kotlin programming How to boost R codes using C++ and Fortran Other Tutorials Meet the Coolest Raspberry Pi Family Member: Raspberry Pi Zero W Wireless Build your first Raspberry Pi project This week’s opinions, analysis, and insights Some hot topics in focus this week are Chaos engineering, GDPR, AIOps, the reactive Manifesto, cloud security threats and more. See what we learned from the recently concluded IBM Think and VUECONF.US conferences in this week’s special coverage. Data Insights What is AIOps and why is it going to be important? AI on mobile: How AI is taking over the mobile devices marketspace How machine learning as a service is transforming cloud IBM Think 2018: 6 key takeaways for developers AI-powered Robotics: Autonomous machines in the making What your organization needs to know about GDPR [Editor’s Pick] What we learned from IBM Research’s ‘5 in 5’ predictions presented at Think 2018 [Editor’s Pick] What is a support vector machine? [Editor’s Pick] 4 Encryption options for your SQL Server GDPR is good for everyone: businesses, developers, customers Data science on Windows is a big no [Editor’s Pick] Development & Programming Insights What is the Reactive Manifesto? Vue.js developer special: What we learned from VUECONF.US 2018 [Editor’s Pick] Top 7 modern Virtual Reality hardware systems Cloud & Networking Insights AWS Fargate makes Container infrastructure management a piece of cake Top 5 cloud security threats to look out for in 2018 [Editor’s Pick] Other Insights Chaos Engineering: managing complexity by breaking things [Editor’s Pick] 5 reasons to choose AWS IoT Core for your next IoT project  

How to test node applications using Mocha framework

Sunith Shetty
20 Apr 2018
12 min read
In today’s tutorial, you will learn how to create your very first test case that tests whether your code is working as expected. If we make a function that's supposed to add two numbers together, we can automatically verify it's doing that. And if we have a function that's supposed to fetch a user from the database, we can make sure it's doing that as well. Now to get started in this section, we'll look at the very basics of setting up a testing suite inside a Node.js project. We'll be testing a real-world function. Installing the testing module In order to get started, we will make a directory to store our code for this chapter. We'll make one on the desktop using mkdir and we'll call this directory node-tests: mkdir node-tests Then we'll change directory inside it using cd, so we can go ahead and run npm init. We'll be installing modules and this will require a package.json file: cd node-tests npm init We'll run npm init using the default values for everything, simply hitting enter throughout every single step: Now once that package.json file is generated, we can open up the directory inside Atom. It's on the desktop and it's called node-tests. From here, we're ready to actually define a function we want to test. The goal in this section is to learn how to set up testing for a Node project, so the actual functions we'll be testing are going to be pretty trivial, but it will help illustrate exactly how to set up our tests. Testing a Node project To get started, let's make a fake module. This module will have some functions and we'll test those functions. In the root of the project, we'll create a brand new directory and I'll call this directory utils: We can assume this will store some utility functions, such as adding a number to another number, or stripping out whitespaces from a string, anything kind of hodge-podge that doesn't really belong to any specific location. We'll make a new file in the utils folder called utils.js, and this is a similar pattern to what we did when we created the weather and location directories in our weather app: You're probably wondering why we have a folder and a file with the same name. This will be clear when we start testing. Now before we can write our first test case to make sure something works, we need something to test. I'll make a very basic function that takes two numbers and adds them together. We'll create an adder function as shown in the following code block: module.exports.add = () => { } This arrow function (=>) will take two arguments, a and b, and inside the function, we'll return the value a + b. Nothing too complex here: module.exports.add = () => { return a + b; }; Now since we just have one expression inside our arrow function (=>) and we want to return it, we can actually use the arrow function (=>) expression syntax, which lets us add our expression as shown in the following code, a + b, and it'll be implicitly returned: module.exports.add = (a, b) => a + b; There's no need to explicitly add a return keyword on to the function. Now that we have utils.js ready to go, let's explore testing. We'll be using a framework called Mocha in order to set up our test suite. This will let us configure our individual test cases and also run all of our test files. This will be really important for creating and running tests. The goal here is to make testing simple and we'll use Mocha to do just that. Now that we have a file and a function we actually want to test, let's explore how to create and run a test suite. 
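Before bringing in a test framework, it's worth sanity-checking the module by hand. Here's a quick throwaway sketch; the check.js name is arbitrary, and it assumes the utils/utils.js file created above, run from the project root:

// check.js - throwaway manual check, not part of the test suite
const utils = require('./utils/utils');

console.log(utils.add(33, 11)); // expected output: 44

Running node check.js should print 44. Once that works, we can automate the same verification so we don't have to eyeball the output every time.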
Mocha – the testing framework We'll be doing the testing using the super popular testing framework Mocha, which you can find at mochajs.org. This is a fantastic framework for creating and running test suites. It's super popular and their page has all the information you'd ever want to know about setting it up, configuring it, and all the cool bells and whistles it has included: If you scroll down on this page, you'll be able to see a table of contents: Here you can explore everything Mocha has to offer. We'll be covering most of it in this article, but for anything we don't cover, I do want to make you aware you can always learn about it on this page. Now that we've explored the Mocha documentation page, let's install it and start using it. Inside the Terminal, we'll install Mocha. First up, let's clear the Terminal output. Then we'll install it using the npm install command. When you use npm install, you can also use the shortcut npm i. This has the exact same effect. I'll use npm i with mocha, specifying the version @3.0.0. This is the most recent version of the library as of this filming: npm i mocha@3.0.0 Now we do want to save this into the package.json file. Previously, we've used the save flag, but we'll talk about a new flag, called save-dev. The save-dev flag is will save this package for development purposes only—and that's exactly what Mocha will be for. We don't actually need Mocha to run our app on a service like Heroku. We just need Mocha locally on our machine to test our code. When you use the save-dev flag, it installs the module much the same way: npm i mocha@5.0.0 --save-dev But if you explore package.json, you'll see things are a little different. Inside our package.json file, instead of a dependencies attribute, we have a devDependencies attribute: In there we have Mocha, with the version number as the value. The devDependencies are fantastic because they're not going to be installed on Heroku, but they will be installed locally. This will keep the Heroku boot times really, really quick. It won't need to install modules that it's not going to actually need. We'll be installing both devDependencies and dependencies in most of our projects from here on out. Creating a test file for the add function Now that we have Mocha installed, we can go ahead and create a test file. In the utils folder, we'll make a new file called utils.test.js:   This file will store our test cases. We'll not store our test cases in utils.js. This will be our application code. Instead, we'll make a file called utils.test.js. When we use this test.js extension, we're basically telling our app that this will store our test cases. When Mocha goes through our app looking for tests to run, it should run any file with this Extension. Now we have a test file, the only thing left to do is create a test case. A test case is a function that runs some code, and if things go well, great, the test is considered to have passed. And if things do not go well, the test is considered to have failed. We can create a new test case, using it. It is a function provided by Mocha. We'll be running our project test files through Mocha, so there's no reason to import it or do anything like that. We simply call it just like this: it(); Now it lets us define a new test case and it takes two arguments. These are: The first argument is a string The second argument is a function First up, we'll have a string description of what exactly the test is doing. 
If we're testing that the adder function works, we might have something like: it('should add two numbers'); Notice here that it plays into the sentence. It should read like this, it should add two numbers; describes exactly what the test will verify. This is called behavior-driven development, or BDD, and that's the principles that Mocha was built on. Now that we've set up the test string, the next thing to do is add a function as the second argument: it('should add two numbers', () => { }); Inside this function, we'll add the code that tests that the add function works as expected. This means it will probably call add and check that the value that comes back is the appropriate value given the two numbers passed in. That means we do need to import the util.js file up at the top. We'll create a constant, call utils, setting it equal to the return result from requiring utils. We're using ./ since we will be requiring a local file. It's in the same directory so I can simply type utils without the js extension as shown here: const utils = require('./utils'); it('should add two numbers', () => { }); Now that we have the utils library loaded in, inside the callback we can call it. Let's make a variable to store the return results. We'll call this one results. And we'll set it equal to utils.add passing in two numbers. Let's use something like 33 and 11: const utils = require('./utils'); it('should add two numbers', () => { var res = utils.add(33, 11); }); We would expect it to get 44 back. Now at this point, we do have some code inside of our test suites so we run it. We'll do that by configuring the test script. Currently, the test script simply prints a message to the screen saying that no tests exist. What we'll do instead is call Mocha. As shown in the following code, we'll be calling Mocha, passing in as the one and only argument the actual files we want to test. We can use a globbing pattern to specify multiple files. In this case, we'll be using ** to look in every single directory. We're looking for a file called utils.test.js: "scripts": { "test": "mocha **/utils.test.js" }, Now this is a very specific pattern. It's not going to be particularly useful. Instead, we can swap out the file name with a star as well. Now we're looking for any file on the project that has a file name ending in .test.js: "scripts": { "test": "mocha **/*.test.js" }, And this is exactly what we want. From here, we can run our test suite by saving package.json and moving to the Terminal. We'll use the clear command to clear the Terminal output and then we can run our test script using command shown as follows: npm test When we run this, we'll execute that Mocha command: It'll go off. It'll fetch all of our test files. It'll run all of them and print the results on the screen inside Terminal as shown in the preceding screenshot. Here we can see we have a green checkmark next to our test, should add two numbers. Next, we have a little summary, one passing test, and it happened in 8 milliseconds. It'll go off. It'll fetch all of our test files. It'll run all of them and print the results on the screen inside Terminal as shown in the preceding screenshot. Here we can see we have a green checkmark next to our test, should add two numbers. Next, we have a little summary, one passing test, and it happened in 8 milliseconds. Now in our case, we don't actually assert anything about the number that comes back. It could be 700 and we wouldn't care. The test will always pass. 
To make a test fail what we have to do is throw an error. That means we can throw a new error and we pass into the constructor function whatever message we want to use as the error as shown in the following code block. In this case, I could say something like Value not correct: const utils = require('./utils'); it('should add two numbers', () => { var res = utils.add(33, 11); throw new Error('Value not correct') }); Now with this in place, I can save the test file and rerun things from the Terminal by rerunning npm test, and when we do that now we have 0 tests passing and we have 1 test failing: Next we can see the one test is should add two numbers, and we get our error message, Value not correct. When we throw a new error, the test fails and that's exactly what we want to do for add. Creating the if condition for the test Now, we'll create an if statement for the test. If the response value is not equal to 44, that means we have a problem on our hands and we'll throw an error: const utils = require('./utils'); it('should add two numbers', () => { var res = utils.add(33, 11); if (res != 44){ } }); Inside the if condition, we can throw a new error and we'll use a template string as our message string because I do want to use the value that comes back in the error message. I'll say Expected 44, but got, then I'll inject the actual value, whatever happens to come back: const utils = require('./utils'); it('should add two numbers', () => { var res = utils.add(33, 11); if (res != 44){ throw new Error(`Expected 44, but got ${res}.`); } }); Now in our case, everything will line up great. But what if the add method wasn't working correctly? Let's simulate this by simply tacking on another addition, adding on something like 22 in utils.js: module.exports.add = (a, b) => a + b + 22; I'll save the file, rerun the test suite: Now we get an error message: Expected 44, but got 66. This error message is fantastic. It lets us know that something is going wrong with the test and it even tells us exactly what we got back and what we expected. This will let us go into the add function, look for errors, and hopefully fix them. Creating test cases doesn't need to be something super complex. In this case, we have a simple test case that tests a simple function. To summarize, we looked into basic testing of a node app. We explored the testing framework, Mocha which can be used for creating and running test suites. You read an excerpt from a book written by Andrew Mead, titled Learning Node.js Development. In this book, you will learn how to build, deploy, and test Node apps. Developing Node.js Web Applications How is Node.js Changing Web Development?        
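As a side note, the if/throw pattern used here can also be expressed with Node's built-in assert module. The following sketch is an equivalent version of utils.test.js, not a change the book makes; it assumes the same utils.js module:

// utils/utils.test.js - the same test expressed with Node's core assert module
const assert = require('assert');
const utils = require('./utils');

it('should add two numbers', () => {
  const res = utils.add(33, 11);

  // assert.equal throws an AssertionError when res is not 44,
  // which Mocha reports as a failing test.
  assert.equal(res, 44, `Expected 44, but got ${res}.`);
});

Run npm test again and you get the same passing or failing behaviour as with the hand-rolled error, just with less code in each test case.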


Build your first Raspberry Pi project

Gebin George
20 Apr 2018
7 min read
In today's tutorial, we will build a simple Raspberry Pi 3 project. Since our Raspberry Pi now runs Windows 10 IoT Core, .NET Core applications will run on it, including Universal Windows Platform (UWP) applications. From a blank solution, let's create our first Raspberry Pi application. Choose Add and New Project. In the Visual C# category, select Blank App (Universal Windows). Let's call our project FirstApp. Visual Studio will ask us for target and minimum platform versions. Check the screenshot and make sure the version you select is lower than the version installed on your Raspberry Pi. In our case, the Raspberry Pi runs Build 15063. This is the March 2017 release. So, we accept Build 14393 (July 2016) as the target version and Build 10586 (November 2015) as the minimum version. If you want to target the Windows 10 Fall Creators Update, which supports .NET Core 2, you should select Build 16299 for both. In the Solution Explorer, we should now see the files of our new UWP project: New project Adding NuGet packages We proceed by adding functionality to our app from downloadable packages, or NuGets. From the References node, right-click and select Manage NuGet Packages. First, go to the Updates tab and make sure the packages that you already have are updated. Next, go to the Browse tab, type Firmata in the search box, and press Enter. You should see the Windows-Remote-Arduino package. Make sure to install it in your project. In the same way, search for the Waher.Events package and install it. Aggregating capabilities Since we're going to communicate with our Arduino using a USB serial port, we must make a declaration in the Package.appxmanifest file stating this. If we don't do this, the runtime environment will not allow the app to do it. Since this option is not available in the GUI by default, you need to edit the file using the XML editor. Make sure the serialCommunication device capability is added, as follows: <Capabilities> <Capability Name="internetClient" /> <DeviceCapability Name="serialcommunication"> <Device Id="any"> <Function Type="name:serialPort" /> </Device> </DeviceCapability> </Capabilities> Initializing the application Before we do any communication with the Arduino, we need to initialize the application. We do this by finding the OnLaunched method in the App.xml.cs file. After the Window.Current.Activate() call, we make a call to our Init() method where we set up the application. Window.Current.Activate(); Task.Run((Action)this.Init); We execute our initialization method from the thread pool, instead of the standard thread. This is done by calling Task.Run(), defined in the System.Threading.Tasks namespace. The reason for this is that we want to avoid locking the standard thread. Later, there will be a lot of asynchronous calls made during initialization. To avoid problems, we should execute all these from the thread pool, instead of from the standard thread. We'll make the method asynchronous: private async void Init() { try { Log.Informational("Starting application."); ... } catch (Exception ex) { Log.Emergency(ex); MessageDialog Dialog = new MessageDialog(ex.Message, "Error"); await Dialog.ShowAsync(); } IoT Desktop } The static Log class is available in the Waher.Events namespace, belonging to the NuGet we included earlier. (MessageDialog is available in Windows.UI.Popups, which might be a new namespace if you're not familiar with UWP.) Communicating with the Arduino The Arduino is accessed using Firmata. 
To do that, we use the Windows.Devices.Enumeration, Microsoft.Maker.RemoteWiring, and Microsoft.Maker.Serial namespaces, available in the Windows-Remote-Arduino NuGet. We begin by enumerating all the devices it finds: DeviceInformationCollection Devices = await UsbSerial.listAvailableDevicesAsync(); foreach (DeviceInformationDeviceInfo in Devices) { If our Arduino device is found, we will have to connect to it using USB: if (DeviceInfo.IsEnabled&&DeviceInfo.Name.StartsWith("Arduino")) { Log.Informational("Connecting to " + DeviceInfo.Name); this.arduinoUsb = new UsbSerial(DeviceInfo); this.arduinoUsb.ConnectionEstablished += () => Log.Informational("USB connection established."); Attach a remote device to the USB port class: this.arduino = new RemoteDevice(this.arduinoUsb); We need to initialize our hardware, when the remote device is ready: this.arduino.DeviceReady += () => { Log.Informational("Device ready."); this.arduino.pinMode(13, PinMode.OUTPUT); // Onboard LED. this.arduino.digitalWrite(13, PinState.HIGH); this.arduino.pinMode(8, PinMode.INPUT); // PIR sensor. MainPage.Instance.DigitalPinUpdated(8, this.arduino.digitalRead(8)); this.arduino.pinMode(9, PinMode.OUTPUT); // Relay. this.arduino.digitalWrite(9, 0); // Relay set to 0 this.arduino.pinMode("A0", PinMode.ANALOG); // Light sensor. MainPage.Instance.AnalogPinUpdated("A0", this.arduino.analogRead("A0")); }; Important: the analog input must be set to PinMode.ANALOG, not PinMode.INPUT. The latter is for digital pins. If used for analog pins, the Arduino board and Firmata firmware may become unpredictable. Our inputs are then reported automatically by the Firmata firmware. All we need to do to read the corresponding values is to assign the appropriate event handlers. In our case, we forward the values to our main page, for display: this.arduino.AnalogPinUpdated += (pin, value) => { MainPage.Instance.AnalogPinUpdated(pin, value); }; this.arduino.DigitalPinUpdated += (pin, value) => { MainPage.Instance.DigitalPinUpdated(pin, value); }; Communication is now set up. If you want, you can trap communication errors, by providing event handlers for the ConnectionFailed and ConnectionLost events. All we need to do now is to initiate communication. We do this with a simple call: this.arduinoUsb.begin(57600, SerialConfig.SERIAL_8N1); Testing the app Make sure the Arduino is still connected to your PC via USB. If you run the application now (by pressing F5), it will communicate with the Arduino, and display any values read to the event log. In the GitHub project, I've added a couple of GUI components to our main window, that display the most recently read pin values on it. It also displays any event messages logged. We leave the relay for later chapters. For a more generic example, see the Waher.Service.GPIO project at https://github.com/PeterWaher/IoTGateway/tree/master/Services/Waher.Service.GPIO. This project allows the user to read and control all pins on the Arduino, as well as the GPIO pins available on the Raspberry Pi directly. Deploying the app You are now ready to test the app on the Raspberry Pi. You now need to disconnect the Arduino board from your PC and install it on top of the Raspberry Pi. The power of the Raspberry Pi should be turned off when doing this. Also, make sure the serial cable is connected to one of the USB ports of the Raspberry Pi. 
Begin by switching the target platform, from Local Machine to Remote Machine, and from x86 to ARM: Run on a remote machine with an ARM processor Your Raspberry Pi should appear automatically in the following dialog. You should check the address with the IoT Dashboard used earlier, to make sure you're selecting the correct machine: Select your Raspberry Pi You can now run or debug your app directly on the Raspberry Pi, using your local PC. The first deployment might take a while since the target system needs to be properly prepared. Subsequent deployments will be much faster. Open the Device Portal from the IoT Dashboard, and take a Screenshot, to see the results. You can also go to the Apps Manager in the Device Portal, and configure the app to be started automatically at startup: App running on the Raspberry Pi To summarize, we saw how to practically build a simple application using Raspberry Pi 3 and C#. You read an excerpt from the book, Mastering Internet of Things, written by Peter Waher. This book will help you design and implement scalable IoT solutions with ease. Meet the Coolest Raspberry Pi Family Member: Raspberry Pi Zero W Wireless AI and the Raspberry Pi: Machine Learning and IoT, What’s the Impact?    
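The ConnectionFailed and ConnectionLost events mentioned earlier are worth wiring up before calling begin(), so communication problems land in the same event log as the rest of the app's messages. The following is a minimal sketch; the single string parameter on the handlers is an assumption about the delegate signatures in the Windows-Remote-Arduino package, so verify it against the version you installed:

// Inside Init(), after this.arduinoUsb = new UsbSerial(DeviceInfo);
// Note: the string message parameter is assumed; check the delegate
// signatures of your installed Windows-Remote-Arduino version.
this.arduinoUsb.ConnectionFailed += (message) =>
{
    Log.Error("USB connection failed: " + message);
};

this.arduinoUsb.ConnectionLost += (message) =>
{
    Log.Error("USB connection lost: " + message);
};

Log.Error here assumes the Waher.Events Log class exposes an Error level alongside the Informational and Emergency calls used earlier; if not, Log.Informational works just as well for seeing the messages in the event log.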