
How-To Tutorials


8 built-in Angular Pipes in Angular 4 that you should know

Packt Editorial Staff
30 Apr 2018
13 min read
Angular is a mature technology that introduces a new way to build applications. Think of Angular Pipes as a modernized version of filters: functions, or helpers, used to format values within the template. Pipes in Angular are essentially the extension of what filters were in Angular v1. There are many useful built-in Pipes we can use easily in our templates. In today's tutorial we will learn about the built-in Pipes as well as create our own custom user-defined pipe.

Angular Pipes - overview

Pipes allow us to format values within the view of the templates before they are displayed. For example, in most modern applications, we want to display terms such as today or tomorrow, and not system date formats such as April 13 2017 08:00. Let's look at more real-world scenarios. You want the hint text in the application to always be lowercase? No problem; define and use a lowercase pipe. In a weather app, if you want to show the month name as MAR or APR instead of the full month name, use the DatePipe. Cool, right? You get the point. Pipes help you add your business rules, so you can transform the data before it's actually displayed in the templates.

A good way to relate to Angular Pipes is that they are similar to Angular 1.x filters, but Pipes do a lot more than just filtering. We have used the Angular Router to define route paths, so we have all the Pipes functionalities on one page. You can create them in the same or different apps. Feel free to use your creativity. In Angular 1.x, we had filters; Pipes are the replacement for filters.

Defining a Pipe

The pipe operator is defined with a pipe symbol (|) followed by the name of the pipe:

```
{{ appvalue | pipename }}
```

The following is an example of a simple lowercase pipe:

```
{{ "Sridhar Rao" | lowercase }}
```

In the preceding code, we are transforming the text to lowercase using the lowercase pipe. Now, let's write an example component using the lowercase pipe:

```typescript
@Component({
  selector: 'demo-pipe',
  template: `
    Author name is {{authorName | lowercase}}
  `
})
export class DemoPipeComponent {
  authorName = 'Sridhar Rao';
}
```

Let's analyze the preceding code in detail:

- We are defining a DemoPipeComponent component class.
- We are creating a string variable authorName and assigning it the value 'Sridhar Rao'.
- In the template view, we display authorName, but before we print it in the UI we transform it using the lowercase pipe.

Run the preceding code, and you should see the author name printed in lowercase. Well done! In the preceding example, we used a built-in Pipe. In the next sections, you will learn more about the built-in Pipes and also create a few custom Pipes.

Note that the pipe operator only works in your templates and not inside controllers.

Built-in Pipes

Angular Pipes are a modernized version of Angular 1.x filters. Angular comes with a lot of predefined built-in Pipes. We can use them directly in our views and transform the data on the fly. The following is the list of the Pipes that Angular has built-in support for:

- DatePipe
- DecimalPipe
- CurrencyPipe
- LowercasePipe and UppercasePipe
- JSON Pipe
- SlicePipe
- async Pipe

In the following sections, let's implement and learn more about the various pipes and see them in action.

DatePipe

DatePipe, as the name itself suggests, allows us to format or transform values that are date related. DatePipe can also be used to transform values into different formats based on parameters passed at runtime.
The general syntax is shown in the following code snippet:

```
{{ today | date }}               // prints today's date and time
{{ today | date:'MM-dd-yyyy' }}  // prints only the month, day, and year
{{ today | date:'medium' }}
{{ today | date:'shortTime' }}   // prints the short time format
```

Let's analyze the preceding code snippets in detail:

- As explained in the preceding section, the general syntax is a variable followed by a (|) pipe operator, followed by the name of the pipe.
- We use the date pipe to transform the today variable.
- Also, in the preceding example, you will note that we are passing a few parameters to the pipe operator. We will cover passing parameters to the pipe in the following section.

Now, let's create a complete example of the date pipe component. The following is the code snippet for implementing the DatePipe component:

```typescript
import { Component } from '@angular/core';

@Component({
  template: `
    <h5>Built-In DatePipe</h5>
    <ol>
      <li>
        <strong>DatePipe example expression</strong>
        <p>Today is {{today | date}}
        <p>{{ today | date:'MM-dd-yyyy' }}
        <p>{{ today | date:'medium' }}
        <p>{{ today | date:'shortTime' }}
      </li>
    </ol>
  `,
})
```

Let's analyze the preceding code snippet in detail:

- We are creating a PipeComponent component class.
- We define a today variable.
- In the view, we transform the value of the variable into various expressions based on different parameters.

Run the application, and you should see each expression rendered in its respective date format. We learned about the date pipe in this section. In the following sections, we will continue to learn about and implement other built-in Pipes and also create some custom user-defined pipes.

DecimalPipe

In this section, you will learn about yet another built-in Pipe: DecimalPipe. DecimalPipe allows us to format a number according to locale rules. DecimalPipe can also be used to transform a number into different formats. The general syntax is shown as follows:

```
appExpression | number [:digitInfo]
```

In the preceding code snippet, we use the number pipe, and optionally, we can pass parameters. Let's look at how to create a DecimalPipe component implementing decimal points. The following is an example of the same:

```typescript
import { Component } from '@angular/core';

@Component({
  template: `
    state_tax (.5-5): {{state_tax | number:'.5-5'}}
    state_tax (2.3-3): {{state_tax | number:'2.3-3'}}
  `,
})
export class PipeComponent {
  state_tax: number = 5.1445;
}
```

Let's analyze the preceding code snippet in detail:

- We define a component class, PipeComponent.
- We define a variable, state_tax.
- We then transform state_tax in the view. The first pipe operator tells the expression to print the decimals up to five decimal places. The second pipe operator tells the expression to print the value to three decimal places.

Undoubtedly, the number pipe is one of the most useful and most used pipes across various applications. We can transform number values, especially when dealing with decimals and floating points.

CurrencyPipe

For applications that intend to cater to multinational geographies, we need to show country-specific codes and their respective currency values. That's where CurrencyPipe comes to our rescue. The CurrencyPipe operator is used to append the country code or currency symbol in front of number values.
Take a look at the code snippet implementing the CurrencyPipe operator:

```
{{ value | currency:'USD' }}

Expenses in INR: {{ expenses | currency:'INR' }}
```

Let's analyze the preceding code snippet in detail:

- The first line of code shows the general syntax of writing a currency pipe.
- The second line shows the currency syntax, and we use it to transform the expenses value and append the Indian currency symbol to it.

So now that we know how to use the currency pipe operator, let's put together an example to display multiple currency and country formats. The following is the complete component class, which implements a currency pipe operator:

```typescript
import { Component } from '@angular/core';

@Component({
  selector: 'currency-pipe',
  template: `
    <h5>Built-In CurrencyPipe</h5>
    <ol>
      <li>
        <p>Salary in USD: {{ salary | currency:'USD':true }}</p>
        <p>Expenses in INR: {{ expenses | currency:'INR':false }}</p>
      </li>
    </ol>
  `
})
export class CurrencyPipeComponent {
  salary: number = 2500;
  expenses: number = 1500;
}
```

Let's analyze the preceding code in detail:

- We created a component class, CurrencyPipeComponent, and declared a few variables, namely salary and expenses.
- In the component template, we transformed the display of the variables by adding the country and currency details.
- In the first pipe operator, we used currency:'USD':true, which will append the dollar symbol ($) before the variable.
- In the second pipe operator, we used currency:'INR':false, which will add the currency code, and false tells it not to print the symbol.

Launch the app, and you should see both values rendered with their respective currency formats. In this section, we learned about and implemented CurrencyPipe. In the following sections, we will keep exploring and learning about other built-in Pipes and much more.

LowercasePipe and UppercasePipe

The LowercasePipe and UppercasePipe, as the names suggest, help in transforming text into lowercase and uppercase, respectively. Take a look at the following code snippet:

```
Author in lowercase is {{authorName | lowercase}}
Author in uppercase is {{authorName | uppercase}}
```

Let's analyze the preceding code in detail:

- The first line of code transforms the value of authorName into lowercase using the lowercase pipe.
- The second line of code transforms the value of authorName into uppercase using the uppercase pipe.

Now that we have seen how to define lowercase and uppercase pipes, it's time to create a complete component example, which implements the Pipes to show the author name in both lowercase and uppercase. Take a look at the following code snippet:

```typescript
import { Component } from '@angular/core';

@Component({
  selector: 'textcase-pipe',
  template: `
    <h5>Built-In LowercasePipe and UppercasePipe</h5>
    <ol>
      <li>
        <strong>LowercasePipe example</strong>
        <p>Author in lowercase is {{authorName | lowercase}}
      </li>
      <li>
        <strong>UppercasePipe example</strong>
        <p>Author in uppercase is {{authorName | uppercase}}
      </li>
    </ol>
  `
})
export class TextCasePipeComponent {
  authorName = "Sridhar Rao";
}
```

Let's analyze the preceding code in detail:

- We create a component class, TextCasePipeComponent, and define a variable authorName.
- In the component view, we use the lowercase and uppercase pipes.
- The first pipe transforms the value of the variable to lowercase text.
- The second pipe transforms the value of the variable to uppercase text.

Run the application, and you should see the author name rendered in both lowercase and uppercase. In this section, you learned how to use the lowercase and uppercase pipes to transform values.

JSON Pipe

Similar to the JSON filter in Angular 1.x, we have the JSON Pipe, which helps us transform a string into a JSON-formatted string. With the lowercase or uppercase pipes, we were transforming strings; using the JSON Pipe, we can transform and display a string in a JSON format. The general syntax is shown in the following code snippet:

```
<pre>{{ myObj | json }}</pre>
```

Now, let's use the preceding syntax and create a complete component example, which uses the JSON Pipe:

```typescript
import { Component } from '@angular/core';

@Component({
  template: `
    <h5>Author Page</h5>
    <pre>{{ authorObj | json }}</pre>
  `
})
export class JSONPipeComponent {
  authorObj: any;
  constructor() {
    this.authorObj = {
      name: 'Sridhar Rao',
      website: 'http://packtpub.com',
      Books: 'Mastering Angular2'
    };
  }
}
```

Let's analyze the preceding code in detail:

- We created a component class, JSONPipeComponent, declared an authorObj variable, and assigned the object to it.
- In the component template view, we transformed and displayed it as a JSON string.

Run the app, and you should see the object pretty-printed as JSON. JSON is fast becoming the de facto standard for web applications to integrate between services and client technologies. Hence, the JSON Pipe comes in handy every time we need to transform our values into a JSON structure in the view.

SlicePipe

SlicePipe is very similar to the JavaScript array slice function. It gets a substring from a string between given start and end positions. The general syntax to define a slice pipe is given as follows:

```
{{ email_id | slice:0:4 }}
```

In the preceding code snippet, we are slicing the e-mail address to show only the first four characters of the variable value email_id. Now that we know how to use a slice pipe, let's put it together in a component. The following is the complete code snippet implementing the slice pipe:

```typescript
import { Component } from '@angular/core';

@Component({
  selector: 'slice-pipe',
  template: `
    <h5>Built-In SlicePipe</h5>
    <ol>
      <li>
        <strong>SlicePipe example</strong>
        <p>Email Id is {{ emailAddress }}
      </li>
      <li>
        <strong>SlicePipe example</strong>
        <p>Sliced Email Id is {{emailAddress | slice : 0 : 4}}
      </li>
    </ol>
  `
})
export class SlicePipeComponent {
  emailAddress = "test@packtpub.com";
}
```

Let's analyze the preceding code snippet in detail:

- We are creating a class, SlicePipeComponent.
- We define a string variable emailAddress and assign it a value, test@packtpub.com.
- Then, we apply the slice pipe to the {{emailAddress | slice : 0 : 4}} variable. We get the substring starting at position 0 and taking four characters from the variable value of emailAddress.

Run the app, and you should see only the first four characters of the e-mail address in the sliced output. SlicePipe is certainly a very helpful built-in Pipe, especially when dealing with strings or substrings.

async Pipe

The async Pipe allows us to directly map promises or observables into our template view. To understand the async Pipe better, let me first throw some light on Observables. Observables are Angular-injectable services that can be used to stream data to multiple sections of the application.
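To make that idea concrete, here is a minimal, hedged sketch of such a stream (an RxJS 6-style constructor is assumed, and the Author shape is purely illustrative):

```typescript
import { Observable } from 'rxjs';

// Hypothetical Author shape, for illustration only.
interface Author {
  name: string;
}

// An Observable that emits a snapshot now and a larger one a second later.
const authors: Observable<Author[]> = new Observable(subscriber => {
  subscriber.next([{ name: 'Sridhar Rao' }]);
  const timer = setTimeout(() => {
    subscriber.next([{ name: 'Sridhar Rao' }, { name: 'Rajesh Gunasundaram' }]);
    subscriber.complete();
  }, 1000);
  // Teardown logic runs when a subscriber (such as the async pipe) unsubscribes.
  return () => clearTimeout(timer);
});

authors.subscribe(list => console.log(`Received ${list.length} author(s)`));
```

The subscribe/unsubscribe bookkeeping shown here is exactly what the async Pipe automates for us inside a template.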
In the following code snippet, we are using the async Pipe with a promise to resolve the list of authors being returned:

```html
<ul id="author-list">
  <li *ngFor="let author of authors | async">
    <!-- loop the object here -->
  </li>
</ul>
```

The async pipe now subscribes to the observable (authors) and retrieves the last value. Let's look at examples of how we can use the async pipe with both a promise and an observable. Add the following lines of code to our app.component.ts file:

```typescript
getAuthorDetails(): Observable<Author[]> {
  return this.http.get(this.url).map((res: Response) => res.json());
}

getAuthorList(): Promise<Author[]> {
  return this.http.get(this.url).toPromise().then((res: Response) => res.json());
}
```

Let's analyze the preceding code snippet in detail:

- We created a method called getAuthorDetails and attached an observable to it. The method will return the response from the URL, which is a JSON output.
- In the getAuthorList method, we bind a promise, which needs to be resolved or rejected in the output returned by the URL called through an HTTP request.

In this section, we have seen how the async pipe works. You will find it very similar to dealing with services. We can map either a promise or an observable and map the result to the template. To summarize, we demonstrated Angular Pipes by explaining in detail the various built-in Pipes, such as DatePipe, DecimalPipe, CurrencyPipe, LowercasePipe and UppercasePipe, JSON Pipe, SlicePipe, and the async Pipe.

[Note: The above article is an excerpt from the book Expert Angular, written by Sridhar Rao, Rajesh Gunasundaram, and Mathieu Nayrolles. This book will help you learn everything you need to build highly scalable and robust web applications using Angular 4. What are you waiting for? Check out the book now to become an expert Angular developer!]

12 most common MySQL errors you should be aware of

Amey Varangaonkar
30 Apr 2018
9 min read
[box type="note" align="" class="" width=""]The following excerpt is taken from the book MySQL 8 Administrator’s Guide written by Chintan Mehta, Ankit Bhavsar, Subhash Shah and Hetal Oza. This book provides tips and tricks to tackle problems you might encounter while administering MySQL solution.[/box] While using MySQL 8 there can be few scenarios where you would not be able to access or use MySQL properly. These situations can be very annoying, but are easily fixable. However, before you look for the solution, you must know the problem! Here are some of the common errors you might come across when using MySQL 8. 1. Access denied MySQL provides a privilege system that authenticates the user who connects from a host, and associates the user with access privileges on a database. The privileges include SELECT, INSERT, UPDATE, and DELETE and are able to identify anonymous users and grant privileges for MySQL specific functions, such as LOAD DATA INFILE and administrative operations. The access denied error may occur because of many causes. In many cases, the problem is caused because of MySQL accounts that the client programs use to connect with the MySQL server with permission from the server. 2. Lost connection to MySQL server The lost connection to MySQL server error can occur because of one of the three likely causes explained in this section. One potential reason for the error is that the network connectivity is troublesome. Network conditions should be checked if this is a frequent error. If an error message like “Lost connection to MySQL server” appears while querying the database, it is certain that the error has occurred because of network connection issues. The connection_timeout system variable defines the number of seconds that the mysqld server waits for a connection packet before connection timeout response. Infrequently, this error may occur when a client is trying for the initial connection to the server and the connection_timeout value is set to a few seconds. In this case, the problem can be resolved by increasing the connection_timeout value based on the the distance and connection speed. SHOW GLOBAL STATUS LIKE and Aborted_connects can be used to determine if we are experiencing this more frequently. It can be certainly said that increasing the connection_timeout value is the solution if the error message contains reading authorization packet. It is possible that the problem may be faced because of larger Binary Large OBject (BLOB) values than max_allowed_packet. This can cause a lost connection to the MySQL server error with clients. If the ER_NET_PACKET_TOO_LARGE error is observed, it confirms that the max_allowed_packet value should be increased. 3. Password fails when entered incorrectly MySQL clients ask for a password when the client program is invoked with the -- password or -p option without the password value. The following is the command: > mysql -u user_name -p Enter password: On a few systems, it may happen that the password works fine when specified in an option file or on the command line. But it does not work when entered interactively on the Command Prompt at the Enter password: prompt. It occurs because the system-provided library to read the passwords limits the password values to a small number of characters (usually eight). It is an issue with the system library and not with MySQL. As a workaround to this, change the MySQL password to a value that is eight or fewer characters or store the password in the option file. 4. 
4. Host host_name is blocked

If the mysqld server receives too many connection requests from a host that are interrupted in the middle, the following error occurs:

```
Host 'host_name' is blocked because of many connection errors.
Unblock with 'mysqladmin flush-hosts'
```

The max_connect_errors system variable determines the number of successive interrupted connection requests that are allowed. Once there are max_connect_errors failed requests without a successful connection, mysqld assumes that something is wrong and blocks the host from further connections until the FLUSH HOSTS statement or the mysqladmin flush-hosts command is issued. By default, mysqld blocks a host after 100 connection errors. This can be adjusted by setting the max_connect_errors value at server startup, as follows:

```
> mysqld_safe --max_connect_errors=10000
```

This value can also be set at runtime, as follows:

```
mysql> SET GLOBAL max_connect_errors=10000;
```

If the host_name is blocked error is received for a particular host, it should first be checked that there is nothing wrong with TCP/IP connections from that host. Increasing the value of the max_connect_errors variable does not help if the network has problems.

5. Too many connections

This error indicates that all available connections are in use by other client connections. max_connections is the system variable that controls the number of connections to the server. The default value for the maximum number of connections is 151. We can set a value larger than 151 for the max_connections system variable to support more connections.

The mysqld server process actually allows one more than max_connections (max_connections + 1) clients to connect. The additional connection is kept reserved for accounts with the CONNECTION_ADMIN or SUPER privilege. The privilege can be granted to administrators who also have the PROCESS privilege. With this access, an administrator can connect to the server using the reserved connection and execute the SHOW PROCESSLIST command to diagnose problems, even though the maximum number of client connections is exhausted.

6. Out of memory

If mysql does not have enough memory to store the entire result of a query issued by the MySQL client program, the client throws the following error:

```
mysql: Out of memory at line 42, 'malloc.c'
mysql: needed 8136 byte (8k), memory in use: 12481367 bytes (12189k)
ERROR 2008: MySQL client ran out of memory
```

In order to fix the problem, we must first check whether the query is correct. Do we expect the query to return so many rows? If not, we should correct the query and execute it again. If the query is correct and needs no correction, we can connect mysql with the --quick option. Using the --quick option results in the mysql_use_result() C API function being used for fetching the result set, which places more load on the server and less load on the client.

7. Packet too large

A communication packet is one of the following:

- A single SQL statement that the MySQL client sends to the MySQL server
- A single row that is sent to the MySQL client from the MySQL server
- A binary log event that is sent from a replication master server to a replication slave

A 1 GB packet size is the largest possible packet size that can be transmitted to or from a MySQL 8 server or client. The MySQL server or client issues an ER_NET_PACKET_TOO_LARGE error and closes the connection if it receives a packet bigger than max_allowed_packet bytes. The default max_allowed_packet size is 16 MB for the MySQL client program.
The following command can be used to set a larger value:

```
> mysql --max_allowed_packet=32M
```

The default value for the MySQL server is 64 MB. It should be noted that there is no harm in setting a larger value for this system variable, as the additional memory is allocated only as needed.

8. The table is full

The table-full error occurs in one of the following conditions:

- The disk is full
- The table has reached its maximum size

The actual maximum table size in a MySQL database can be determined by the constraints imposed by the operating system on file sizes.

9. Can't create/write to file

If we get the following error while executing a query, it indicates that MySQL is unable to create a temporary file in the temporary directory for the result set:

```
Can't create/write to file 'sqla3fe_0.ism'
```

The possible workaround for the error is to start the mysqld server with the --tmpdir option. The following is the command:

```
> mysqld --tmpdir C:/temp
```

10. Commands out of sync

If the client functions are called in the wrong order, the commands out of sync error is received. It means that the command cannot be executed in the client code. As an example, if we execute mysql_use_result() and try to execute another query before executing mysql_free_result(), this error may occur. It may also happen if we execute two queries that return a result set without calling the mysql_use_result() or mysql_store_result() functions in between.

11. Ignoring user

The following error is received when an account in the user table is found with an invalid password upon mysqld server startup or when the server reloads the grant tables:

```
Found wrong password for user 'some_user'@'some_host'; ignoring user
```

As a result, the account is ignored by the MySQL permission system. To fix the problem, we should assign a new valid password for the account.

12. Table tbl_name doesn't exist

The following errors indicate that a specified table does not exist in the default database:

```
Table 'tbl_name' doesn't exist
Can't find file: 'tbl_name' (errno: 2)
```

In some cases, the user may be referring to the table incorrectly. This is possible because the MySQL server uses directories and files for storing database tables. Depending upon the operating system's file management, database and table names can be case sensitive. For non-case-sensitive file systems, such as Windows, the references to a specified table used within a query must use the same letter case.

In addition to these, you might come across MySQL 8 server errors, such as issues with permissions, or client errors, like problems with NULL values. To know how to deal with them, you may check out the book MySQL 8 Administrator's Guide.

How to run Lambda functions on AWS Greengrass

Vijin Boricha
27 Apr 2018
7 min read
AWS Greengrass is a form of edge computing service that extends the cloud's functionality to your IoT devices by allowing data collection and analysis closer to its point of origin. This is accomplished by executing AWS Lambda functions locally on the IoT device itself, while still using the cloud for management and analytics. Today, we will learn how to leverage AWS Greengrass to run simple Lambda functions on an IoT device.

How does this help a business? Well, to start with, using AWS Greengrass you are now able to respond to locally generated events in near real time. With Greengrass, you can program your IoT devices to locally process and filter data and only transmit the important chunks back to AWS for analysis. This also has a direct impact on costs, as well as on the amount of data transmitted back to the cloud.

Here are the core components of AWS Greengrass:

- Greengrass Core (GGC) software: The Greengrass Core software is a packaged module that consists of a runtime to allow local execution of Lambda functions. It also contains an internal message broker and a deployment agent that periodically notifies the AWS Greengrass service about the device's configuration, state, available updates, and so on. The software also ensures that the connection between the device and the IoT service is secured with the help of keys and certificates.
- Greengrass groups: A Greengrass group is a collection of Greengrass Core settings and definitions that are used to manage one or more Greengrass-backed IoT devices. A group internally comprises a few other components, namely:
  - Greengrass group definition: A collection of information about your Greengrass group
  - Device definition: A collection of IoT devices that are part of a Greengrass group
  - Greengrass group settings: Contains connection and configuration information, along with the necessary IAM roles required for interacting with other AWS services
  - Greengrass Core: The IoT device itself
  - Lambda functions: A list of Lambda functions that can be deployed to the Greengrass Core
  - Subscriptions: A collection of a message source, a message target, and an MQTT topic over which to transmit the messages. The source or target can be the IoT service, a Lambda function, or even the IoT device itself.
- Greengrass Core SDK: Greengrass also provides an SDK that you can use to write and run Lambda functions on Greengrass Core devices. The SDK currently supports Java 8, Python 2.7, and Node.js 6.10.

With this key information in mind, let's go ahead and deploy our very own Greengrass Core on an IoT device.

Running Lambda functions on AWS Greengrass

With the Greengrass Core software up and running on your IoT device, we can now go ahead and run a simple Lambda function on it! For this particular section, we will be leveraging an AWS Lambda blueprint that prints a simple Hello World message:

1. To get started, first, we will need to create our Lambda function. From the AWS Management Console, filter out the Lambda service using the Filter option or, alternatively, go to this URL: https://console.aws.amazon.com/lambda/home. Ensure that the Lambda function is launched in the same region as the AWS Greengrass. In this case, we are using the US-East-1 (N. Virginia) region.
2. On the AWS Lambda console landing page, select the Create function option to get started.
3. Since we are going to be leveraging an existing function blueprint for this use case, select the Blueprints option provided on the Create function page.
4. Use the filter to find a blueprint with the name greengrass-hello-world. There are two templates present to date that match this name: one function is based on Python, while the other is based on Node.js. For this particular section, select the greengrass-hello-world Python function and click on Configure to proceed.
5. Fill out the required details for the new function, such as a Name followed by a valid Role. For this section, go ahead and select the Create new role from template option. Provide a suitable Role name and, finally, from the Policy templates drop-down list, select the AWS IoT Button Permissions role. Once completed, click on Create function to complete the function's creation process.
6. Before you move on to associating this function with your AWS Greengrass, you will also need to create a new version of this function. Select the Publish new version option from the Actions tab. Provide a suitable Version description text and click on Publish once done. Your function is now ready for AWS Greengrass.
7. Now, head back to the AWS IoT dashboard and select the newly deployed Greengrass group from the Groups option present on the navigation pane.
8. From the Greengrass group page, select the Lambdas option from the navigation pane, followed by the Add Lambda option.
9. On the Add a Lambda to your Greengrass group page, you can choose to either Create a new Lambda function or Use an existing Lambda function. Since we have already created our function, select the Use existing function option.
10. On the next page, select your Greengrass Lambda function and click Next to proceed. Finally, select the version of the deployed function and click on Finish once done.
11. To finish things off, we will need to create a new subscription between the Lambda function (source) and the AWS IoT service (destination). Select the Subscriptions option from the same Greengrass group page and click on Add Subscription to proceed.
12. On the Select your source and target page, select the newly deployed Lambda function as the source, followed by the IoT cloud as the target. Click on Next once done. You can also provide an optional topic filter to restrict the messages published on the messaging queue. In this case, we have provided a simple hello/world as the filter for this scenario. Click on Finish once done to complete the subscription configuration.
13. With all the pieces in place, it's now time to deploy our Lambda function to the Greengrass Core. To do so, select the Deployments option and, from the Actions drop-down list, select the Deploy option.
14. The deployment takes a few seconds to complete. Once done, verify the status of the deployment by viewing the Status column. The Status should show Successfully completed.
15. With the function now deployed, test the setup by using the MQTT client provided by AWS IoT, as done before. Remember to enter the same hello/world topic name in the subscription topic field and click on Publish to topic once done. If all goes well, you should receive a custom Hello World message from the Greengrass Core.

This was just a high-level view of what you can achieve with Greengrass and Lambda. You can leverage Lambda for performing all kinds of preprocessing on data on your IoT device itself, thus saving a tremendous amount of time as well as costs. With this, we come to the end of this post.
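For reference, the greengrass-hello-world blueprint we deployed is conceptually tiny. Here is a hedged sketch of the same idea in TypeScript/Node: the actual blueprint is Python and uses the Greengrass Core SDK, whereas this version uses the generic aws-sdk IotData client as a stand-in, and the endpoint value is a placeholder:

```typescript
import { IotData } from 'aws-sdk';

// Placeholder endpoint; on a Greengrass Core the SDK client is preconfigured.
const iot = new IotData({ endpoint: 'YOUR-IOT-ENDPOINT.amazonaws.com' });

// Publish a greeting to the same hello/world topic our subscription filters on.
export const handler = async (): Promise<void> => {
  await iot
    .publish({
      topic: 'hello/world',
      payload: JSON.stringify({ message: 'Hello World from Greengrass!' }),
      qos: 0,
    })
    .promise();
};
```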
Stay tuned for our next post, where we will look at ways to effectively monitor IoT devices. Here, we leveraged AWS Greengrass and Lambda to develop a cost-effective and speedy solution.

[Note: You read an excerpt from the book AWS Administration - The Definitive Guide - Second Edition, written by Yohan Wadia. Whether you are a seasoned system admin or a rookie, this book will help you learn all the skills you need to work with the AWS cloud.]

How to Debug an application using Qt Creator

Gebin George
27 Apr 2018
9 min read
Today, we will learn about debugging an application using Qt Creator. A debugger is a program that can be used to test and debug other programs in case of a sudden crash during program execution or an unexpected behavior in the logic of the program. Most of the time (if not always), debuggers are used in the development environment and in conjunction with an IDE. In our case, we will learn how to use a debugger with Qt Creator. It is important to note that debuggers are not part of the Qt Framework and, just like compilers, they are usually provided by the operating system SDK. Qt Creator automatically detects and uses debuggers if they are present on a system. This can be checked by navigating to the Qt Creator Options page via the main menu: Tools and then Options. Make sure to select Build & Run from the list on the left side and then switch to the Debuggers tab at the top. You should be able to see one or more autodetected debuggers on the list.

[Note for Windows users: You should see something similar to the screenshot after this information box. If not, this means you have not installed any debuggers. You can easily download and install one using the instructions provided at https://docs.microsoft.com/en-us/windows-hardware/drivers/debugger/, or you can independently search online for the following topic: Debugging Tools for Windows (WinDbg, KD, CDB, NTSD). After the debugger is installed (assumedly CDB, or Microsoft Console Debugger, for Microsoft Visual C++ compilers, and GDB for GCC compilers), you can restart Qt Creator and return to this page. You should see one or more entries similar to the following. Since we have installed a 32-bit version of the Qt and OpenCV Frameworks, choose the entry with x86 in its name to view its path, type, and other properties.

Note for macOS and Linux users: There shouldn't be any action needed on your part and, depending on the OS, you'll see GDB, LLDB, or some other debugger in the entries.]

Here's the screenshot of the Build & Run tab on the Options page:

Depending on the operating system and the installed debugger, the preceding screenshot might be slightly different. Nevertheless, you'll have a debugger that you need to make sure is correctly set as the debugger for the Qt Kit you are using. So, make a note of the debugger path and name, switch to the Kits tab, and, after selecting the Qt Kit you were using, make sure the debugger for it is correctly set, as you can see in the following screenshot.

Don't worry about choosing the wrong debugger, or any other option, since you'll be warned with relevant icons beside the Qt Kit icon selected at the top. The first icon is usually displayed when everything is okay with the Kit, the second is an indication that something is not right, and the third means a critical error. Move your mouse over the icon when it appears to see more information about the required actions needed to fix the issue.

[Note: Critical issues with Qt Kits can be caused by many different factors, such as a missing compiler, which will make the kit completely useless until the issue is resolved.
An example of a warning message in a Qt Kit would be a missing debugger, which will not make the kit useless, but you won't be able to use the debugger with it; thus, it means less functionality than a completely configured Qt Kit.]

After the debugger is correctly set, you can start debugging your applications in one of the following ways, which basically have the same result: ending up in the Debugger view of Qt Creator:

- Starting an application in debugging mode
- Attaching to a running application (or process)

[Note: A debugging process can be started in many ways, such as remotely, by attaching to a process running on a separate machine, and so on. However, the preceding methods will suffice for most cases, especially for the ones relevant to Qt+OpenCV application development and what we learned throughout this book.]

Getting started with the debugging mode

To start an application in debugging mode, after opening a Qt project, you can use one of the following methods:

- Press the F5 button
- Use the Start Debugging button, right below the usual Run button, with a similar icon but with a small bug on it
- Use the main menu entries in the following order: Debug / Start Debugging / Start Debugging

To attach the debugger to a running application, use the main menu entries in the following order: Debug / Start Debugging / Attach to Running Application. This will open up the List of Processes window, from which you can choose your application, or any other process you want to debug, using its process ID or executable name. You can also use the Filter field to find your application since, most probably, the list of processes will be quite a long one. After choosing the correct process, make sure to press the Attach to Process button.

No matter which one of the preceding methods you use, you will end up in the Qt Creator Debug mode, which is quite similar to the Edit mode, but it also allows you to do the following, among many other things:

- Add, enable, disable, and view breakpoints in the code (a breakpoint is simply a point or a line in the code at which we want the debugger to pause the process and allow us to do a more detailed analysis of the status of the program)
- Interrupt running programs and processes to view and examine the code
- View and examine the function call stack (the call stack is a stack containing the hierarchical list of functions that led to a breakpoint or interrupted state)
- View and examine variables
- Disassemble the source code (disassembling in this sense means extracting the exact instructions that correspond to the function calls and other C++ code in our program)

You'll notice a performance drop in the application when it is started in debugging mode, which is obviously because the code is being monitored and traced by the debugger. Here's a screenshot of the Qt Creator Debug mode, in which all of the capabilities mentioned earlier are visible in a single window:

Area 1 in the preceding screenshot is the code editor that you have already used throughout the book and are quite familiar with. Each line of code has a line number; you can click on its left side to toggle a breakpoint anywhere you want in the code.
You can also right-click on the line numbers to set, remove, disable, or enable a breakpoint by selecting Set Breakpoint at Line X, Remove Breakpoint X, Disable Breakpoint X, or Enable Breakpoint X, where X in all of the commands mentioned here is the line number. Apart from the code editor, you can also use area 4 in the preceding screenshot to add, delete, edit, and further modify breakpoints in the code.

You can also right-click on the toolbar below the code editor that contains the debugger controls to open up a menu and add or remove panes displaying additional debug and analysis information. We will cover the default debugger view, but make sure to check out each one of those options on your own to familiarize yourself with the debugger even more.

Area 2 in the preceding screenshot can be used to view the call stack. Whether you interrupt the program by pressing the Interrupt button or by choosing Debug / Interrupt from the menu while it is running, set a breakpoint and stop the program at a specific line of code, or a malfunctioning piece of code causes the program to fall into a trap and pause the process (since a crash or exception will be caught by the debugger), you can always view the hierarchy of function calls that led to the interrupted state, or further analyze them, by checking area 2 in the preceding Qt Creator screenshot.

Finally, you can use area 3 in the preceding screenshot to view the local and global variables of the program at the interrupted location in the code. You can see the contents of the variables, whether they are standard data types, such as integers and floats, or structures and classes, and you can also further expand and analyze their contents to test and analyze any possible issues in your code.

Using a debugger efficiently can mean hours of difference in testing and solving the issues in your code. In terms of practical usage of debuggers, there is really no other way but to use them as much as you can, develop habits of your own, and make note of good practices and tricks you find along the way, like the ones we just went through. If you are interested, you can also read online about other possible methods of debugging, such as remote debugging, debugging using crash dump files (on Windows), and more. We saw how to practically debug an application using the Qt Creator debugging mode.

[Note: You read an excerpt from the book Computer Vision with OpenCV 3 and Qt5, written by Amin Ahmadi Tazehkandi. The book covers the development of cross-platform applications using OpenCV 3 and Qt 5.]

Top 10 MySQL 8 performance benchmarking aspects to know

Amey Varangaonkar
27 Apr 2018
5 min read
[box type="note" align="" class="" width=""]The following excerpt is taken from the book MySQL 8 Administrator’s Guide, co-authored by Chintan Mehta, Ankit Bhavsar, Hetal Oza and Subhash Shah. This book presents an in-depth view of the newly released features of MySQL 8 and how you can leverage them to administer a high-performance MySQL solution.[/box] Following the best practices for the configuration of MySQL helps us design and manage efficient database, and are quite a cherry on top - without which, it might seem a bit incomplete. In addition to configuration, benchmarking helps us validate and find bottlenecks in the database system and address them. In this article, we look at specific areas that will help us understand the best practices for configuration and performance benchmarking. 1. Resource utilization IO activity, CPU, and memory usage is something that you should not miss out. These metrics help us know how the system is performing while doing benchmarking and at the time of scaling. It also helps us derive impacts per transaction. 2. Stretching your benchmarking timelines We may often like to have a quick glance at performance metrics; however, ensuring that MySQL behaves in the same way for a longer duration of testing is also a key element. There is some basic stuff that might impact on performance when you stretch your benchmark timelines, such as memory fragmentation, degradation of IO, impact after data accumulation, cache management, and so on. We don't want our database to get restarted just to clean up junk items, correct? Therefore, it is suggested to run benchmarking for a long duration for stability and performance Validation. 3. Replicating production settings Let's benchmark in a production-replicated environment. Wait! Let's disable database replication in a replica environment until we are done with benchmarking. Gotcha! We have got some good numbers! It often happens that we don't simulate everything completely that we are going to configure in the production environment. It could prove to be costly, as we might unintentionally be benchmarking something in an environment that might have an adverse impact when it's in production. Replicate production settings, data, workload, and so on in your replicated environment while you do benchmarking. 4. Consistency of throughput and latency Throughput and latency go hand in hand. It is important to keep your eyes primarily focused on throughput; however, latency over time might be something to look out for. Performance dips, slowness, or stalls were noticed in InnoDB in its earlier days. It has improved a lot since then, but as there might be other cases depending on your workload, it is always good to keep an eye on throughput along with latency. 5. Sysbench can do more Sysbench is a wonderful tool to simulate your workloads, whether it be thousands of tables, transaction intensive, data in-memory, and so on. It is a splendid tool to simulate and gives you nice representation. 6. Virtualization world I would like to keep this simple; bare metal as compared to virtualization isn't the same. Hence, while doing benchmarking, measure your resources according to your environment. You might be surprised to see the difference in results if you compare both. 7. Concurrency Big data is seated on heavy data workload; high concurrency is important. MySQL 8 is extending its maximum CPU core support in every new release, optimizing concurrency based on your requirements and hardware resources should be taken care of. 8. 
8. Hidden workloads

Do not miss out on factors that run in the background, such as reporting for big data analytics, backups, and on-the-fly operations, while you are benchmarking. The impact of such hidden workloads, or of obsolete benchmarking workloads, can make your days (and nights) miserable.

9. Nerves of your query

Oops! Did we miss the optimizer? Not yet. The optimizer is a powerful tool that will read the nerves of your query and provide recommendations. It's a tool that I use before making changes to a query in production. It's a savior when you have complex queries to be optimized. These are a few areas that we should look out for. Let's now look at a few benchmarks that we ran on MySQL 8 and compare them with the same ones on MySQL 5.7.

10. Benchmarks

To start with, let's fetch all the column names from all the InnoDB tables. The following is the query that we executed:

```sql
SELECT t.table_schema, t.table_name, c.column_name
FROM information_schema.tables t, information_schema.columns c
WHERE t.table_schema = c.table_schema
  AND t.table_name = c.table_name
  AND t.engine = 'InnoDB';
```

In this test, MySQL 8 performed a thousand times faster when running with four instances. Following this, we also performed a benchmark to find static table metadata. The following is the query that we executed:

```sql
SELECT TABLE_SCHEMA, TABLE_NAME, TABLE_TYPE, ENGINE, ROW_FORMAT
FROM INFORMATION_SCHEMA.TABLES
WHERE TABLE_SCHEMA LIKE 'chintan%';
```

Here, MySQL 8 performed around 30 times faster than MySQL 5.7. This made us eager to go into a bit more detail, so we thought of doing one last test to find dynamic table metadata. The following is the query that we executed:

```sql
SELECT TABLE_ROWS
FROM INFORMATION_SCHEMA.TABLES
WHERE TABLE_SCHEMA LIKE 'chintan%';
```

Again, MySQL 8 performed around 30 times faster than MySQL 5.7. MySQL 8.0 brings enormous performance improvements to the table. Scaling from one to a million tables is a need for big data requirements, and it is now achievable. We look forward to more benchmarks being officially released once MySQL 8 is generally available.

[Note: If you found this post useful, make sure to check out the book MySQL 8 Administrator's Guide for more tips and tricks to manage MySQL 8 effectively.]


How to dockerize an ASP.NET Core application

Aaron Lazar
27 Apr 2018
5 min read
There are many reasons why you might want to dockerize an ASP.NET Core application, but ultimately it's simply going to make life much easier for you. It's great for isolating components, especially if you're building microservices or planning to deploy your application on the cloud. So, if you want an easier life (possibly), follow this tutorial to learn how to dockerize an ASP.NET Core application.

Get started: Dockerize an ASP.NET Core application

1. Create a new ASP.NET Core Web Application in Visual Studio 2017 and click OK.
2. On the next screen, select Web Application (Model-View-Controller) or any type you like, while ensuring that ASP.NET Core 2.0 is selected from the drop-down list. Then check the Enable Docker Support checkbox. This will enable the OS drop-down list. Select Windows here and then click on the OK button.

If you see a message asking you to switch to Windows containers, this is because you have probably kept the default container setting for Docker as Linux. If you right-click on the Docker icon in the taskbar, you will see that you have an option to enable Windows containers there too: click on the Switch to Windows containers option. Switching to Windows containers may take several minutes to complete, depending on your line speed and the hardware configuration of your PC. If, however, you don't click on this option, Visual Studio will ask you to change to Windows containers when selecting Windows as the OS platform. There is a good reason that I am choosing Windows containers as the target OS; this reason will become clear later on in the chapter when working with Docker Hub and automated builds.

After your ASP.NET Core application is created, you will see the project setup in Solution Explorer. The Docker support that is added to Visual Studio comes not only in the form of the Dockerfile, but also in the form of Docker configuration information. This information is contained in the global docker-compose.yml file at the solution level.

3. Clicking on the Dockerfile in Solution Explorer, you will see that it doesn't look complicated at all. Remember, the Dockerfile is the file that creates your image. The image is a read-only template that outlines how to create a Docker container. The Dockerfile, therefore, contains the steps needed to generate the image and run it. The instructions in the Dockerfile create layers in the image. This means that if anything changes in the Dockerfile, only the layers that have changed will be rebuilt when the image is rebuilt. The Dockerfile looks as follows:

```dockerfile
FROM microsoft/aspnetcore:2.0-nanoserver-1709 AS base
WORKDIR /app
EXPOSE 80

FROM microsoft/aspnetcore-build:2.0-nanoserver-1709 AS build
WORKDIR /src
COPY *.sln ./
COPY DockerApp/DockerApp.csproj DockerApp/
RUN dotnet restore
COPY . .
WORKDIR /src/DockerApp
RUN dotnet build -c Release -o /app

FROM build AS publish
RUN dotnet publish -c Release -o /app

FROM base AS final
WORKDIR /app
COPY --from=publish /app .
ENTRYPOINT ["dotnet", "DockerApp.dll"]
```

When you have a look at the menu in Visual Studio 2017, you will notice that the Run button has been changed to Docker. Clicking on the Docker button to debug your ASP.NET Core application, you will notice a few things popping up in the Output window. Of particular interest is the IP address at the end.
In my case, it reads Launching http://172.24.12.112 (yours will differ). When the browser is launched, you will see that the ASP.NET Core application is running at the IP address listed previously in the Output window. Your ASP.NET Core application is now running inside a Windows Docker container.

This is great and really easy to get started with. But what do you need to do to dockerize an ASP.NET Core application that already exists? As it turns out, this isn't as difficult as you may think.

How to add Docker support to an existing .NET Core application

Imagine that you have an ASP.NET Core application without Docker support. To add Docker support to this existing application, simply add it from the context menu:

1. Right-click on your project in Solution Explorer.
2. Click on the Add menu item.
3. Click on Docker Support in the fly-out menu.

Visual Studio 2017 now asks you what the target OS is going to be. In our case, we are going to target Windows. After clicking on the OK button, Visual Studio 2017 will begin to add the Docker support to your project.

It's actually extremely easy to create ASP.NET Core applications that have Docker support baked in, and even easier to add Docker support to existing ASP.NET Core applications. Lastly, if you experience any issues, such as file access issues, ensure that your antivirus software has excluded your Dockerfile from scanning. Also, make sure that you run Visual Studio as Administrator.

[Note: This tutorial has been taken from the book C# 7 and .NET Core Blueprints.]

How to deploy a Node.js application to the web using Heroku

Sunith Shetty
26 Apr 2018
19 min read
Heroku is a tool that helps you manage cloud-hosted web applications. It's a really great service. It makes creating, deploying, and updating apps really easy. Heroku, like GitHub, does not require a credit card to sign up, and there is a free tier, which we'll use. They have paid plans for just about everything, but we can get away with the free tier for everything we'll do in this section. In this tutorial, you'll learn to deploy your live Node.js app to the web using Heroku. By the end of this tutorial, you'll have a URL that you can share with anybody to view the application from their browser.

Installing Heroku command-line tools

To kick things off, we'll open up the browser and go to Heroku's website. Here we can go ahead and sign up for a new account. Take a quick moment to either log in to your existing one or sign up for a new one. Once logged in, you'll see the dashboard. The dashboard might greet you with a prompt to create a new application, which you can ignore.

The next thing we'll do is install the Heroku command-line tools. This will let us create apps, deploy apps, open apps, and do all sorts of really cool stuff from the Terminal, without having to go into the web app. That will save us time and make development a lot easier. We can grab the download by going to toolbelt.heroku.com. Here we're able to grab the installer for whatever operating system you happen to be running. So, let's start the download. It's a really small download, so it should happen pretty quickly. Once it's done, we can go ahead and run through the process. This is a simple installer where you just click on Install. There is no need to customize anything, and you don't have to enter any specific information about your Heroku account. The installer gives us a new command we can execute from the Terminal. Before we can use it, we do have to log in locally in the Terminal, and that's exactly what we'll do next.

Log in to the Heroku account locally

Now we will start up the Terminal. If you already have it running, you might need to restart it in order for your operating system to recognize the new command. You can test that it was installed properly by running the following command:

```
heroku --help
```

When you run this command, you'll see that it's installing the CLI for the first time, and then we'll get all the help information. This tells us what commands we have access to and exactly how they work. Now we will need to log in to the Heroku account locally. This process is pretty simple. In the preceding output, we have all of the available commands, and one of them happens to be login. We can run heroku login to start the process:

```
heroku login
```

I'll run the login command and enter the email and password set up before (the password input is hidden because it's secure). When I do that, Logged in as garyngreig@gmail.com shows up, and this is fantastic. Now we're logged in and able to successfully communicate between our machine's command line and the Heroku servers. This means we can get started creating and deploying applications.
Getting an SSH key to Heroku

Now, before going ahead, we'll use the clear command to clear the Terminal output and get our SSH key onto Heroku, kind of like what we did with GitHub, only this time we can do it via the command line, so it's going to be a lot easier. In order to add our local keys to Heroku, we'll run the heroku keys:add command. This will scan our SSH directory and add the key up:

heroku keys:add

Here you can see it found a key, the id_rsa.pub file:

Would you like to upload it to Heroku? Type Yes and hit enter:

Now we have our key uploaded. That is all it took. Much easier than it was to configure with GitHub. From here, we can use the heroku keys command to print all the keys currently on our account:

heroku keys

We could always remove them using the heroku keys:remove command followed by the email related to that key. In this case, we'll keep the Heroku key that we have. Next up, we can test our connection using SSH with the -v flag and git@heroku.com:

ssh -v git@heroku.com

This will communicate with the Heroku servers:

As shown, we can see it's asking that same question: The authenticity of host 'heroku.com' can't be established. Are you sure you want to continue connecting? Type Yes. You will see the following output:

When you run that command, you'll get a lot of cryptic output. What you're looking for is "authentication succeeded" followed by "public key" in parentheses. If things did not go well, you'll see a "permission denied" message with "public key" in parentheses. In this case, the authentication was successful, which means we are good to go. I'll run clear again, clearing the Terminal output.

Setting up the application code for Heroku

Now we can turn our attention towards the application code, because before we can deploy to Heroku, we need to make two changes to the code. These are things that Heroku expects your app to have in place in order to run properly, because Heroku does a lot of things automatically. It's not too complex—some really simple changes, a couple of one-liners.

Changes in the server.js file

First up, in the server.js file, down at the very bottom of the file, we have the port and our app.listen statically coded inside server.js:

app.listen(3000, () => {
  console.log('Server is up on port 3000');
});

We need to make this port dynamic, which means we want to use a variable. We'll be using an environment variable that Heroku is going to set. Heroku will tell your app which port to use, because that port will change as you deploy your app; using an environment variable means we don't have to swap out our code every time we want to deploy. With environment variables, Heroku can set a variable on the operating system. Your Node app can read that variable and use it as the port. Now, all machines have environment variables. You can actually view the ones on your machine by running the env command on Linux or macOS, or the set command on Windows. What you'll get when you do that is a really long list of key-value pairs, and this is all environment variables are:

Here, I have a LOGNAME environment variable set to Andrew, a HOME environment variable set to my home directory, and all sorts of other environment variables throughout my operating system. One of the ones Heroku is going to set is called PORT, which means we need to go ahead and grab that PORT variable and use it in server.js instead of 3000.
Up at the very top of the server.js file, we'll make a constant called port, and this will store the port that we'll use for the app:

const express = require('express');
const hbs = require('hbs');
const fs = require('fs');

const port

Now, the first thing we'll do is grab the port from process.env. The process.env is an object that stores all our environment variables as key-value pairs. We're looking for one that Heroku is going to set, called PORT:

const port = process.env.PORT;

This is going to work great for Heroku, but when we run the app locally, the PORT environment variable is not going to exist, so we'll set a default using the OR (||) operator in this statement. If process.env.PORT does not exist, we'll set port equal to 3000 instead:

const port = process.env.PORT || 3000;

Now we have an app that's configured to work with Heroku and to still run locally, just like it did before. All we have to do is take the port variable and use it in app.listen instead of 3000. As shown, I'm going to reference port and, inside our message, I'll swap it out for template strings so I can replace 3000 with the injected port variable, which will change over time:

app.listen(port, () => {
  console.log(`Server is up on port ${port}`);
});

With this in place, we have now fixed the first problem with our app. I'll now run node server.js from the Terminal:

node server.js

We still get the exact same message: Server is up on port 3000, so the app still works locally as expected:

Changes in the package.json file

Next up, we have to specify a script in package.json. Inside package.json, you might have noticed we have a scripts object, and in there we have a test script. This gets set by default by npm:

We can create all sorts of scripts inside the scripts object that do whatever we like. A script is nothing more than a command that we run from the Terminal, so we could take this command, node server.js, and turn it into a script instead, and that's exactly what we're going to do. Inside the scripts object, we'll add a new script. The script needs to be called start:

This is a very specific, built-in script, and we'll set it equal to the command that starts our app. In this case, it will be node server.js:

"start": "node server.js"

This is necessary because when Heroku tries to start our app, it will not run Node with your file name, because it doesn't know what your file is called. Instead, it will run the start script, and the start script will be responsible for doing the proper thing; in this case, booting up that server file. Now we can run our app using that start script from the Terminal with the following command:

npm start

When I do that, we get a little output related to npm and then we get Server is up on port 3000. The big difference is that we are now ready for Heroku. We could also run the test script from the Terminal using npm test:

npm test

Now, we have no tests specified, and that is expected:

Making a commit for Heroku

The next step in the process will be to make the commit, and then we can finally start getting it up on the web. First up, git status. When we run git status, we have something a little new:

Instead of new files, we have modified files here, as shown in the code output. We have a modified package.json file and a modified server.js file. These are not going to be committed if we were to run git commit just yet; we still have to use git add. What we'll do is run git add with a dot as the next argument.
The dot is going to add every single thing showing up in git status to the next commit. I only recommend using this syntax when everything listed under the "Changes not staged for commit" header is something you actually want to commit, and in our case, that is indeed what we want. If I run git add . and then rerun git status, we can now see what is going to be committed next, under the "Changes to be committed" header:

Here we have our package.json file and the server.js file. Now we can go ahead and make that commit. I'll run the git commit command with the -m flag so we can specify our message, and a good message for this commit would be something like "Setup start script and heroku port":

git commit -m 'Setup start script and heroku port'

Now we can go ahead and run that command, which will make the commit. Then we can push that up to GitHub using the git push command, and we can leave off the origin remote because origin is the default remote. I'll go ahead and run the following command:

git push

This will push it up to GitHub, and now we are ready to actually create the app, push our code up, and view it over in the browser:

Running the heroku create command

The next step in the process will be to run a command called heroku create from the Terminal. heroku create needs to be executed from inside your application directory:

heroku create

Just like with our Git commands, when I run heroku create, a couple of things are going to happen:

First, it's going to make a real new application over in the Heroku web app. It's also going to add a new remote to your Git repository. Now, remember we have an origin remote, which points to our GitHub repository. We'll also have a heroku remote, which points to our Heroku Git repository. When we deploy to the Heroku Git repository, Heroku is going to see that; it will take the changes and deploy them to the web. When we run heroku create, all of that happens:

Now, we do still have to push up to this remote in order to actually do the deploying, and we can do that using git push followed by heroku:

git push heroku

The brand new remote was just added because we ran heroku create. Pushing this time around will go through the normal process. You'll then start seeing some logs. These are logs coming back from Heroku, letting you know how your app is deploying. It goes through the entire process, showing you what happens along the way. This will take about 10 seconds, and at the very end we have a success message—Verifying deploy... done:

It also verified that the app was deployed successfully, and that did indeed pass. From here, we actually have a URL we can visit (https://sleepy-retreat-32096.herokuapp.com/). We can take it, copy it, and paste it in the browser. What I'll do instead is use the following command:

heroku open

heroku open will open up the Heroku app in the default browser. When I run this, it will switch over to Chrome and we get our application showing up just as expected:

We can switch between pages and everything works just like it did locally. Now we have a URL, and this URL was given to us by Heroku. This is the default way Heroku generates app URLs. If you have your own domain registered, you can go ahead and configure its DNS to point to this application. This will let you use a custom URL for your Heroku app. You'll have to refer to the specific instructions for your domain registrar in order to do that, but it can indeed be done.
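Pulling the whole flow together, the end-to-end command sequence from this section looks roughly like the following recap (assuming you are inside the app's Git repository; the commit message is just the example used above):

heroku login                                          # authenticate the CLI
heroku keys:add                                       # upload your SSH key
git add .
git commit -m 'Setup start script and heroku port'
git push                                              # back the code up to GitHub (origin)
heroku create                                         # create the app and add the heroku remote
git push heroku                                       # deploy
heroku open                                           # view the live app in the browser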
Now that we have this in place, we have successfully deployed our Node application live to Heroku, and this is just fantastic. In order to do this, all we had to do was make a commit to change our code and push it up to a new Git remote. It could not be easier to deploy our code. You can also manage your application by going back over to the Heroku dashboard. If you give it a refresh, you should see that brand new URL somewhere on the dashboard. Remember, mine was sleepy retreat; yours is going to be something else. If I click on the sleepy retreat, I can view the app page:

Here we can do a lot of configuration. We can manage Activity and Access so we can collaborate with others. We have metrics, we have Resources, all sorts of really cool stuff. With this in place, we are now done with our basic deploying section.

In the next section, your challenge will be to go through that process again. You'll add some changes to the Node app. You'll commit them, deploy them, and view them live on the web. We'll get started by creating the local changes. That means I'll register a new URL right here using app.get. We'll create a new page, /projects, which is why I have that as the route for my HTTP GET handler. Inside the second argument, we can specify our callback function, which will get called with request and response, and, like we do for the other routes above, the root route and our about route, we'll be calling response.render to render our template. Inside the render arguments list, we'll provide two. The first one will be the file name. The file doesn't exist yet, but we can still go ahead and call render. I'll call it projects.hbs. Then we can specify the options we want to pass to the template. In this case, we'll set the page title, setting it equal to Projects with a capital P (a sketch of this route appears at the end of this article). Excellent! Now, with this in place, the server file is all done. There are no more changes there.

What I'll do next is go to the views directory, creating a new file called projects.hbs. In here, we'll be able to configure our template. To kick things off, I'm going to copy the template from the about page, since it's really similar. I'll close about, paste it into projects, and change the text to "Project page text would go here". Then we can save the file and make our last change.

The last thing we want to do is update the header. We now have a brand new projects page that lives at /projects, so we'll want to add that to the header links list. Right here, I'll create a new paragraph tag and then an anchor tag. The text for the link will be Projects with a capital P, and the href, which is the URL to visit when that link is clicked, we'll set equal to /projects, just like we did for about, where we set it equal to /about.

Now that we have this in place, all our changes are done and we are ready to test things out locally. I'll fire up the app locally using Node with server.js as the file. To start, we're up on localhost:3000. Over in the browser, I can move to the localhost tab, as opposed to the Heroku app tab, and click on Refresh. Right here we have Home, which goes to the home page; we have About, which goes to about; and we have Projects, which does indeed go to /projects, rendering the projects page: Project page text would go here.

With this in place, we're now done locally. We have the changes, we've tested them, and now it's time to go ahead and make that commit. That will happen over in the Terminal. I'll shut down the server and run git status.
This will show me all the changes to my repository since the last commit. I have two modified files: the server file and the header file, and I have my brand new projects file. All of this looks great. I want to add all of this to the next commit, so I can use git add with the . to do just that. Before I actually make the commit, I like to test that the proper things got added by running git status. Right here, I can see my changes to be committed showing up in green. Everything looks great. Next up, we'll run git commit to actually make the commit. This is going to save all of the changes into the Git repository. A message for this one would be something like "Add a project page".

With the commit made, the next thing to do is push it up to GitHub. This will back our code up and let others collaborate on it. I'll use git push to do just that. Remember, we can leave off the origin remote, as origin is the default remote; if you leave off a remote, it'll just use that anyway.

With our GitHub repository updated, the last thing to do is deploy to Heroku, and we do that by pushing the Git repository up to the heroku remote, using git push heroku. When we do this, we get our long list of logs as the Heroku server goes through the process of installing our npm modules, building the app, and actually deploying it. Once it's done, we'll get brought back to the Terminal, and then we can open up the URL in the browser. I can copy it from here or run heroku open. Since I already have a tab open with the URL in place, I'll simply give it a refresh. Now, you might have a little delay as you refresh your app; starting up the app right after a new deploy can take about 10 to 15 seconds. That will only happen the first time you visit it; other times, when you click on the Refresh button, it should reload instantly. Now we have the projects page, and if I visit it, everything looks awesome. The navbar is working great and the projects page is indeed rendering at /projects.

With this in place, we are now done. We've gone through the process of adding a new feature, testing it locally, making a Git commit, pushing it up to GitHub, and deploying it to Heroku. We now have a workflow for building real-world web applications using Node.js.

This tutorial has been taken from Learning Node.js Development.

More Heroku tutorials:
Deploy a Game to Heroku
Managing Heroku from the command line
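For reference, the /projects route described in the challenge above would look roughly like the following sketch. The handler shape follows the walkthrough, but the pageTitle option name is an assumption on our part (the text only says "page title"), so your template variable may differ:

// server.js - sketch of the new route described above
app.get('/projects', (req, res) => {
  // render views/projects.hbs, passing the page title like the other routes
  res.render('projects.hbs', {
    pageTitle: 'Projects'
  });
});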

Creating and deploying an Amazon Redshift cluster

Vijin Boricha
26 Apr 2018
7 min read
Amazon Redshift is one of the database as a service (DBaaS) offerings from AWS that provides a massively scalable data warehouse as a managed service, at significantly lower costs. The data warehouse is based on the open source PostgreSQL database technology; however, not all features offered in PostgreSQL are present in Amazon Redshift. Today, we will learn about Amazon Redshift and perform a few steps to create a fully functioning Amazon Redshift cluster. We will also take a look at some of the essential concepts and terminologies that you ought to keep in mind when working with Amazon Redshift:

Clusters: Just like Amazon EMR, Amazon Redshift relies on the concept of clusters. Clusters here are logical containers containing one or more instances or compute nodes, and one leader node that is responsible for the cluster's overall management. Here's a brief look at what each node provides:

Leader node: The leader node is a single node present in a cluster that is responsible for orchestrating and executing various database operations, as well as facilitating communication between the database and associated client programs.

Compute node: Compute nodes are responsible for executing the code provided by the leader node. Once executed, the compute nodes share the results back with the leader node for aggregation. Amazon Redshift supports two types of compute nodes: dense storage nodes and dense compute nodes. The dense storage nodes provide standard hard disk drives for creating large data warehouses, whereas the dense compute nodes provide higher-performance SSDs. You can start off by using a single node that provides 160 GB of storage and scale up to petabytes by leveraging one or more 16 TB capacity instances as well.

Node slices: Each compute node is partitioned into one or more smaller chunks, or slices, by the leader node, based on the cluster's initial size. Each slice contains a portion of the compute node's memory, CPU, and disk resources, and uses these resources to process the workloads that are assigned to it. The assignment of workloads is again performed by the leader node.

Databases: As mentioned earlier, Amazon Redshift provides a scalable database that you can leverage for a data warehouse, as well as for analytical purposes. With each cluster that you spin up in Redshift, you can create one or more associated databases. The database is based on the open source relational database PostgreSQL (v8.0.2) and thus can be used in conjunction with other RDBMS tools and functionalities. Applications and clients can communicate with the database using standard PostgreSQL JDBC and ODBC drivers.

Here is a representational image of a working data warehouse cluster powered by Amazon Redshift:

With this basic information in mind, let's look at some simple and easy-to-follow steps with which you can set up and get started with your Amazon Redshift cluster.

Getting started with Amazon Redshift

In this section, we will be looking at a few simple steps to create a fully functioning Amazon Redshift cluster that is up and running in a matter of minutes:

First up, we have a few prerequisite steps that need to be completed before we begin with the actual setup of the Redshift cluster. From the AWS Management Console, use the Filter option to filter for IAM. Alternatively, you can also launch the IAM dashboard by selecting this URL: https://console.aws.amazon.com/iam/. Once logged in, we need to create and assign a role that will grant our Redshift cluster read-only access to Amazon S3 buckets.
This role will come in handy later on when we load some sample data into an Amazon S3 bucket and use Amazon Redshift's COPY command to copy the data locally into the Redshift cluster for processing. To create the custom role, select the Roles option from the IAM dashboard's navigation pane. On the Roles page, select the Create role option. This will bring up a simple wizard with which we will create and associate the required permissions to our role. Select the Redshift option from under the AWS Service group section and opt for the Redshift - Customizable option provided under the Select your use case field. Click Next to proceed with the setup. On the Attach permissions policies page, filter and select the AmazonS3ReadOnlyAccess permission. Once done, select Next: Review. On the final Review page, type in a suitable name for the role and select the Create Role option to complete the process. Make a note of the role's ARN, as we will be requiring it in later steps. Here is a snippet of the role policy for your reference:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "s3:Get*",
        "s3:List*"
      ],
      "Resource": "*"
    }
  ]
}

With the role created, we can now move on to creating the Redshift cluster. To do so, log in to the AWS Management Console and use the Filter option to filter for Amazon Redshift. Alternatively, you can also launch the Redshift dashboard by selecting this URL: https://console.aws.amazon.com/redshift/. Select Launch Cluster to get started with the process. Next, on the CLUSTER DETAILS page, fill in the required information pertaining to your cluster, as described in the following list:

Cluster identifier: A suitable name for your new Redshift cluster. Note that this name only supports lowercase strings.

Database name: A suitable name for your Redshift database. You can always create more databases within a single Redshift cluster at a later stage. By default, a database named dev is created if no value is provided.

Database port: The port number on which the database will accept connections. By default, the value is set to 5439; however, you can change this value based on your security requirements.

Master user name: Provide a suitable username for accessing the database.

Master user password: Type in a strong password with at least one uppercase character, one lowercase character, and one numeric value. Confirm the password by retyping it in the Confirm password field.

Once completed, hit Continue to move on to the next step of the wizard. On the NODE CONFIGURATION page, select the appropriate Node type for your cluster, as well as the Cluster type, based on your functional requirements. Since this particular cluster setup is for demonstration purposes, I've opted to select dc2.large as the Node type and a Single Node deployment with 1 compute node. Click Continue to move on to the next page once done. It is important to note here that the cluster you are about to launch will be live and not running in a sandbox-like environment. As a result, you will incur the standard Amazon Redshift usage fees for the cluster until you delete it. You can read more about Redshift's pricing at: https://aws.amazon.com/redshift/pricing/.

On the ADDITIONAL CONFIGURATION page, you can configure add-on settings, such as enabling encryption, selecting the default VPC for your cluster, whether or not the cluster should have direct internet access, as well as any preferences for a particular Availability Zone out of which the cluster should operate.
Most of these settings do not require any changes at the moment and can be left at their default values. The only change required on this page is associating the previously created IAM role with the cluster. To do so, from the Available Roles drop-down list, select the custom Redshift role that we created in the prerequisite section. Once completed, click on Continue. Review the settings and changes on the Review page and select the Launch Cluster option when done.

The cluster takes a few minutes to spin up, depending on whether you opted for a single-instance or a multi-instance deployment. Once completed, you should see your cluster listed on the Clusters page, as shown in the following screenshot. Ensure that the status of your cluster is shown as healthy under the DB Health column. You can additionally make a note of the cluster's endpoint for accessing it programmatically:

With the cluster all set up, the next thing to do is connect to it.

This Amazon Redshift tutorial has been taken from AWS Administration - The Definitive Guide - Second Edition.

Read More:
Amazon S3 Security access and policies
How to run Lambda functions on AWS Greengrass
AWS Fargate makes Container infrastructure management a piece of cake
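The same cluster can also be created from the AWS CLI instead of the console. The sketch below mirrors the console walkthrough above; aws redshift create-cluster is a real CLI command, but treat the identifier, credentials, and role ARN as placeholder assumptions to be replaced with your own values:

# create a single-node dc2.large cluster similar to the one configured above
aws redshift create-cluster \
  --cluster-identifier my-redshift-demo \
  --db-name dev \
  --node-type dc2.large \
  --cluster-type single-node \
  --master-username masteruser \
  --master-user-password 'Str0ngPassword' \
  --iam-roles arn:aws:iam::123456789012:role/myRedshiftRole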

How to run and configure an IoT Gateway

Gebin George
26 Apr 2018
9 min read
What is an IoT gateway?

An IoT gateway is the protocol or software that's used to connect Internet of Things devices to the cloud. The IoT Gateway can be run as a standalone application, without any modification. There are different encapsulations of the IoT Gateway already prepared. They are built using the same code, but have different properties and are aimed at different operating systems. In today's tutorial, we will explore how to run and configure the IoT Gateway.

Since all libraries used are based on .NET Standard, they are portable across platforms and operating systems. The encapsulations are then compiled into .NET Core 2 applications, which are the ones being executed. Since both .NET Standard and .NET Core 2 are portable, the gateway can therefore be encapsulated for more operating systems than are currently supported. Check out this link for a list of operating systems supported by .NET Core 2.

Available encapsulations, such as installers or app package bundles, are listed in the following table. For each one, the start project is listed, which can be used if you build the project and want to start or debug the application from the development environment:

Platform                          Executable project
Windows console                   Waher.IoTGateway.Console
Windows service                   Waher.IoTGateway.Svc
Universal Windows Platform app    Waher.IoTGateway.App

The IoT Gateway encapsulations can be downloaded from the GitHub project page. All gateways use the library Waher.IoTGateway, which defines the executing environment of the gateway and interacts with all pluggable modules and services. They also use the Waher.IoTGateway.Resources library, which contains resource files common to all encapsulations. The Waher.IoTGateway library is also available as a NuGet package.

Running the console version

The console version of the IoT Gateway (Waher.IoTGateway.Console) is the simplest encapsulation. It can be run from the command line. It requires some basic configuration to run properly. This configuration can be provided manually (see the following sections) or by using the installer. The installer asks the user for some basic information and generates the configuration files necessary to execute the application. The console version has a minimum of operating system dependencies, which makes it the easiest to port to other environments. It's also simple to run from the development environment. When run, it outputs any events directly to the terminal window. If sniffers are enabled, the corresponding communication is also output to the terminal window. This provides a simple means to test and debug encrypted communication.

Running the gateway as a Windows service

The IoT Gateway can also be run as a Windows service (Waher.IoTGateway.Svc). This requires the application to be executed on a Windows operating system. The application is a .NET Core 2 console application that has command-line switches allowing it to be registered and executed in the background as a Windows service. Since it supports a command-line interface, it can be used to run the gateway from the console as well. The following table lists the recognized command-line switches:

Switch                  Description
-?                      Shows help information.
-console                Runs the service as a console application.
-install                Installs the application as a Windows service in the underlying operating system.
-displayname Name       Sets a custom display name for the Windows service. The default name, if omitted, is "IoT Gateway Service".
-description Desc       Sets a custom textual description for the Windows service.
                        The default description, if omitted, is "Windows Service hosting the Waher IoT Gateway".
-immediate              If the service should be started immediately.
-localsystem            The installed service will run using the Local System account.
-localservice           The installed service will run using the Local Service account (default).
-networkservice         The installed service will run using the Network Service account.
-start Mode             Sets the default starting mode of the Windows service. The default is Disabled. Available options are StartOnBoot, StartOnSystemStart, AutoStart, StartOnDemand, and Disabled.
-uninstall              Uninstalls the application as a Windows service from the operating system.

Running the gateway as an app

It is possible to run the IoT Gateway as a Universal Windows Platform (UWP) app (Waher.IoTGateway.App). This allows it to be run on Windows phones or embedded devices, such as the Raspberry Pi running Windows 10 IoT Core (16299 and later). It can also be used as a template for creating custom apps based on the IoT Gateway.

Configuring the IoT Gateway

All application data files are separated from the executable files. Application data files are files that can potentially be changed by the user; executable files are files potentially changed by installers. For the Console and Service applications, application data files are stored in the IoT Gateway subfolder of the operating system's Program Data folder, for example: C:\ProgramData\IoT Gateway. For the UWP app, a link to the program data folder is provided at the top of the window. The application data folder contains files you might have to configure to get the gateway to work as you want.

Configuring the XMPP interface

All IoT Gateways connect to the XMPP network. This connection is used to provide a secure and interoperable interface to your gateway and its underlying devices. You can also administer the gateway through this XMPP connection. The XMPP connection is defined in different manners, depending on the encapsulation. The app lets the user configure the connection via a dialog window, and the credentials are then persisted in the object database. The Console and Service versions of the IoT Gateway let the user define the connection using an xmpp.config file in the application data folder. The following is an example configuration file:

<?xml version='1.0' encoding='utf-8'?>
<SimpleXmppConfiguration xmlns='http://waher.se/Schema/SimpleXmppConfiguration.xsd'>
  <Host>waher.se</Host>
  <Port>5222</Port>
  <Account>USERNAME</Account>
  <Password>PASSWORD</Password>
  <ThingRegistry>waher.se</ThingRegistry>
  <Provisioning>waher.se</Provisioning>
  <Events></Events>
  <Sniffer>true</Sniffer>
  <TrustServer>false</TrustServer>
  <AllowCramMD5>true</AllowCramMD5>
  <AllowDigestMD5>true</AllowDigestMD5>
  <AllowPlain>false</AllowPlain>
  <AllowScramSHA1>true</AllowScramSHA1>
  <AllowEncryption>true</AllowEncryption>
  <RequestRosterOnStartup>true</RequestRosterOnStartup>
</SimpleXmppConfiguration>

The following is a short recap of each element:

Element                 Type      Description
Host                    String    Host name of the XMPP broker to use.
Port                    1-65535   Port number to connect to.
Account                 String    Name of the XMPP account.
Password                String    Password to use (or password hash).
ThingRegistry           String    Thing registry to use, or empty if none.
Provisioning            String    Provisioning server to use, or empty if none.
Events                  String    Event log to use, or empty if none.
Sniffer                 Boolean   If network communication is to be sniffed or not.
TrustServer             Boolean   If the XMPP broker is to be trusted.
AllowCramMD5            Boolean   If the CRAM-MD5 authentication mechanism is allowed.
AllowDigestMD5          Boolean   If the DIGEST-MD5 authentication mechanism is allowed.
AllowPlain              Boolean   If the PLAIN authentication mechanism is allowed.
AllowScramSHA1          Boolean   If the SCRAM-SHA-1 authentication mechanism is allowed.
AllowEncryption         Boolean   If encryption is allowed.
RequestRosterOnStartup  Boolean   If the roster is required, it should be requested on startup.

Securing the password

Instead of writing the password in clear text in the configuration file, it is recommended that the password hash is used instead, if the authentication mechanism supports hashes. When the installer sets up the gateway, it authenticates the credentials during startup and writes the hash value in the file instead. When the hash value is used, the mechanism used to create the hash must be written as well. In the following example, new-line characters are added for readability:

<Password type="SCRAM-SHA-1">
  rAeAYLvAa6QoP8QWyTGRLgKO/J4=
</Password>

Setting basic properties of the gateway

The basic properties of the IoT Gateway are defined in the Gateway.config file in the program data folder. For example:

<?xml version="1.0" encoding="utf-8" ?>
<GatewayConfiguration xmlns="http://waher.se/Schema/GatewayConfiguration.xsd">
  <Domain>example.com</Domain>
  <Certificate configFileName="Certificate.config"/>
  <XmppClient configFileName="xmpp.config"/>
  <DefaultPage>/Index.md</DefaultPage>
  <Database folder="Data" defaultCollectionName="Default" blockSize="8192"
            blocksInCache="10000" blobBlockSize="8192" timeoutMs="10000"
            encrypted="true"/>
  <Ports>
    <Port protocol="HTTP">80</Port>
    <Port protocol="HTTP">8080</Port>
    <Port protocol="HTTP">8081</Port>
    <Port protocol="HTTP">8082</Port>
    <Port protocol="HTTPS">443</Port>
    <Port protocol="HTTPS">8088</Port>
    <Port protocol="XMPP.C2S">5222</Port>
    <Port protocol="XMPP.S2S">5269</Port>
    <Port protocol="SOCKS5">1080</Port>
  </Ports>
  <FileFolders>
    <FileFolder webFolder="/Folder1" folderPath="ServerPath1"/>
    <FileFolder webFolder="/Folder2" folderPath="ServerPath2"/>
    <FileFolder webFolder="/Folder3" folderPath="ServerPath3"/>
  </FileFolders>
</GatewayConfiguration>

Element      Type        Description
Domain       String      The name of the domain, if any, pointing to the machine running the IoT Gateway.
Certificate  String      The configuration file name specifying details about the certificate to use.
XmppClient   String      The configuration file name specifying details about the XMPP connection.
DefaultPage  String      Relative URL to the page shown if no web page is specified when browsing the IoT Gateway.
Database     String      How the local object database is configured. Typically, these settings do not need to be changed. All you need to know is that you can persist and search for your objects using the static Database class defined in Waher.Persistence.
Ports        Port        Which port numbers to use for the different protocols supported by the IoT Gateway.
FileFolders  FileFolder  Contains definitions of virtual web folders.

Providing a certificate

Different protocols (such as HTTPS) require a certificate to allow callers to validate the domain name claim. Such a certificate can be defined by providing a Certificate.config file in the application data folder and then restarting the gateway. If you provide such a file, different from the default file, it will be loaded and processed, and then deleted. The information, together with the certificate, will be moved to the relative safety of the object database.
For example:

<?xml version="1.0" encoding="utf-8" ?>
<CertificateConfiguration xmlns="http://waher.se/Schema/CertificateConfiguration.xsd">
  <FileName>certificate.pfx</FileName>
  <Password>testexamplecom</Password>
</CertificateConfiguration>

Element   Type    Description
FileName  String  Name of the certificate file to import.
Password  String  Password needed to access the private part of the certificate.

This tutorial was taken from Mastering Internet of Things.

Read More:
IoT Forensics: Security in an always connected world where things talk
How IoT is going to change tech teams
5 reasons to choose AWS IoT Core for your next IoT project
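Tying the Windows service switches back together: based purely on the switch table above, a typical install from an elevated command prompt could look like the following sketch. The published executable name, its path, and whether it is invoked directly or via the dotnet host depend on how the .NET Core project was built, so treat the exact invocation as an assumption:

rem register the gateway as an auto-starting Windows service and start it now
Waher.IoTGateway.Svc.exe -install -displayname "IoT Gateway Service" -start AutoStart -immediate

rem remove the service again
Waher.IoTGateway.Svc.exe -uninstall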

Setting up Logistic Regression model using TensorFlow

Packt Editorial Staff
25 Apr 2018
8 min read
TensorFlow is another open source library, developed by the Google Brain Team, to build numerical computation models using data flow graphs. The core of TensorFlow was developed in C++, with the wrapper in Python. The tensorflow package in R gives you access to the TensorFlow API, composed of Python modules, to execute computation models. TensorFlow supports both CPU- and GPU-based computations. In this article, we will cover the application of TensorFlow to setting up a logistic regression model. The example will use a similar dataset to that used in the H2O model setup.

The tensorflow package in R calls the Python TensorFlow API for execution, so it is essential to install the tensorflow package in both R and Python to make it work. The following are the dependencies for tensorflow:

Python 2.7 / 3.x
R (>3.2)
The devtools package in R, for installing TensorFlow from GitHub
TensorFlow in Python
pip

Getting ready

The code for this section was created on Linux, but can be run on any operating system. To start modeling, load the tensorflow package in the environment. R loads the default TensorFlow environment variable and also the NumPy library from Python in the np variable:

library("tensorflow") # Load TensorFlow
np <- import("numpy") # Load numpy library

How to do it...

The data is imported using a standard function from R, as shown in the following code. The data is imported from a csv file and transformed into matrix format, followed by selecting the features used for modeling, as defined in xFeatures and yFeatures. The next step in TensorFlow is to set up a graph to run the optimization:

# Loading input and test data
xFeatures = c("Temperature", "Humidity", "Light", "CO2", "HumidityRatio")
yFeatures = "Occupancy"
occupancy_train <- as.matrix(read.csv("datatraining.txt", stringsAsFactors = T))
occupancy_test <- as.matrix(read.csv("datatest.txt", stringsAsFactors = T))

# subset features for modeling and transform to numeric values
occupancy_train <- apply(occupancy_train[, c(xFeatures, yFeatures)], 2, FUN=as.numeric)
occupancy_test <- apply(occupancy_test[, c(xFeatures, yFeatures)], 2, FUN=as.numeric)

# Data dimensions
nFeatures <- length(xFeatures)
nRow <- nrow(occupancy_train)

Before setting up the graph, let's reset the graph using the following command:

# Reset the graph
tf$reset_default_graph()

Additionally, let's start an interactive session, as it will allow us to execute variables without referring to the session object each time:

# Starting session as interactive session
sess <- tf$InteractiveSession()

Define the logistic regression model in TensorFlow:

# Setting-up Logistic regression graph
x <- tf$constant(unlist(occupancy_train[, xFeatures]),
                 shape=c(nRow, nFeatures), dtype=np$float32)
W <- tf$Variable(tf$random_uniform(shape(nFeatures, 1L)))
b <- tf$Variable(tf$zeros(shape(1L)))
y <- tf$matmul(x, W) + b

The input feature x is defined as a constant, as it will be an input to the system. The weight W and bias b are defined as variables that will be optimized during the optimization process. The y is set up as a symbolic representation of the relationship between x, W, and b. The weight W is initialized with a random uniform distribution, and b is assigned the value zero.
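Before wiring up the optimizer, it is worth spelling out the quantity that will be minimized. The sigmoid cross-entropy computed by tf$nn$sigmoid_cross_entropy_with_logits (and averaged by reduce_mean in the next step) is, per observation, the standard form below; this note is our own addition, not part of the recipe:

L(y, z) = -[ y * log(sigma(z)) + (1 - y) * log(1 - sigma(z)) ],
where z = xW + b and sigma(z) = 1 / (1 + exp(-z))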
The next step is to set up the cost function for logistic regression:

# Setting-up cost function and optimizer
y_ <- tf$constant(unlist(occupancy_train[, yFeatures]), dtype="float32",
                  shape=c(nRow, 1L))
cross_entropy <- tf$reduce_mean(tf$nn$sigmoid_cross_entropy_with_logits(
  labels=y_, logits=y, name="cross_entropy"))
optimizer <- tf$train$GradientDescentOptimizer(0.15)$minimize(cross_entropy)

# Start a session
init <- tf$global_variables_initializer()
sess$run(init)

Execute the gradient descent algorithm for the optimization of weights, using cross entropy as the loss function:

# Running optimization
for (step in 1:5000) {
  sess$run(optimizer)
  if (step %% 20 == 0)
    cat(step, "-", sess$run(W), sess$run(b), "==>", sess$run(cross_entropy), "\n")
}

How it works...

The performance of the model can be evaluated using AUC:

# Performance on Train
library(pROC)
ypred <- sess$run(tf$nn$sigmoid(tf$matmul(x, W) + b))
roc_obj <- roc(occupancy_train[, yFeatures], as.numeric(ypred))

# Performance on test
nRowt <- nrow(occupancy_test)
xt <- tf$constant(unlist(occupancy_test[, xFeatures]),
                  shape=c(nRowt, nFeatures), dtype=np$float32)
ypredt <- sess$run(tf$nn$sigmoid(tf$matmul(xt, W) + b))
roc_objt <- roc(occupancy_test[, yFeatures], as.numeric(ypredt))

AUC can be visualized using the plot.roc function from the pROC package, as shown in the screenshot following this command. The performance for training and testing (hold-out) is very similar:

plot.roc(roc_obj, col = "green", lty=2, lwd=2)
plot.roc(roc_objt, add=T, col="red", lty=4, lwd=2)

Performance of logistic regression using TensorFlow

Visualizing TensorFlow graphs

TensorFlow graphs can be visualized using TensorBoard. It is a service that utilizes TensorFlow event files to visualize TensorFlow models as graphs. Graph model visualization in TensorBoard is also used to debug TensorFlow models.

Getting ready

TensorBoard can be started using the following command in the terminal:

$ tensorboard --logdir home/log --port 6006

The following are the major parameters for TensorBoard:

--logdir: To map to the directory from which to load TensorFlow events
--debug: To increase log verbosity
--host: To define the host to listen on; localhost (127.0.0.1) by default
--port: To define the port on which TensorBoard will serve

The preceding command will launch the TensorBoard service on localhost at port 6006, as shown in the following screenshot:

TensorBoard

The tabs on the TensorBoard capture relevant data generated during graph execution.

How to do it...

This section covers how to visualize TensorFlow models and output in TensorBoard. To visualize summaries and graphs, data from TensorFlow can be exported using the FileWriter command from the summary module. A default session graph can be added using the following command:

# Create Writer Obj for log
log_writer = tf$summary$FileWriter('c:/log', sess$graph)

The graph for the logistic regression developed using the preceding code is shown in the following screenshot:

Visualization of the logistic regression graph in TensorBoard

Details about symbol descriptions on TensorBoard can be found at https://www.tensorflow.org/get_started/graph_viz.

Similarly, other variable summaries can be added to the TensorBoard using the appropriate summary functions, as shown in the following code:

# Adding histogram summary to weight and bias variable
w_hist = tf$summary$histogram("weights", W)
b_hist = tf$summary$histogram("biases", b)

Create a cross entropy evaluation for test.
An example script to generate the cross entropy cost function for test and train is shown in the following command (note that the logits, xW + b, are passed without the sigmoid, which sigmoid_cross_entropy_with_logits applies itself):

# Set-up cross entropy for test
nRowt <- nrow(occupancy_test)
xt <- tf$constant(unlist(occupancy_test[, xFeatures]),
                  shape=c(nRowt, nFeatures), dtype=np$float32)
ypredt <- tf$matmul(xt, W) + b
yt_ <- tf$constant(unlist(occupancy_test[, yFeatures]), dtype="float32",
                   shape=c(nRowt, 1L))
cross_entropy_tst <- tf$reduce_mean(tf$nn$sigmoid_cross_entropy_with_logits(
  labels=yt_, logits=ypredt, name="cross_entropy_tst"))

Add summary variables to be collected:

# Add summary ops to collect data
w_hist = tf$summary$histogram("weights", W)
b_hist = tf$summary$histogram("biases", b)
crossEntropySummary <- tf$summary$scalar("costFunction", cross_entropy)
crossEntropyTstSummary <- tf$summary$scalar("costFunction_test", cross_entropy_tst)

Open the writer object, log_writer. It writes the default graph to the location c:/log:

# Create Writer Obj for log
log_writer = tf$summary$FileWriter('c:/log', sess$graph)

Run the optimization and collect the summaries:

for (step in 1:2500) {
  sess$run(optimizer)
  # Evaluate performance on training and test data after every 50 iterations
  if (step %% 50 == 0){
    ### Performance on Train
    ypred <- sess$run(tf$nn$sigmoid(tf$matmul(x, W) + b))
    roc_obj <- roc(occupancy_train[, yFeatures], as.numeric(ypred))
    ### Performance on Test
    ypredt <- sess$run(tf$nn$sigmoid(tf$matmul(xt, W) + b))
    roc_objt <- roc(occupancy_test[, yFeatures], as.numeric(ypredt))
    cat("train AUC: ", auc(roc_obj), " Test AUC: ", auc(roc_objt), "\n")
    # Save summary of Bias and weights
    log_writer$add_summary(sess$run(b_hist), global_step=step)
    log_writer$add_summary(sess$run(w_hist), global_step=step)
    log_writer$add_summary(sess$run(crossEntropySummary), global_step=step)
    log_writer$add_summary(sess$run(crossEntropyTstSummary), global_step=step)
  }
}

Collect all the summaries into a single tensor using the merge_all command from the summary module:

summary = tf$summary$merge_all()

Write the summaries to the log file using the log_writer object:

log_writer = tf$summary$FileWriter('c:/log', sess$graph)
summary_str = sess$run(summary)
log_writer$add_summary(summary_str, step)
log_writer$close()

We have learned how to perform logistic regression using TensorFlow and covered the application of TensorFlow in setting up a logistic regression model.

This article is a book excerpt taken from R Deep Learning Cookbook, co-authored by PKS Prakash & Achyutuni Sri Krishna Rao. This book contains powerful and independent recipes to build deep learning models in different application areas, using R libraries.

Read More:
Getting started with Linear and logistic regression
Healthcare Analytics: Logistic Regression to Reduce Patient Readmissions
Using Logistic regression to predict market direction in algorithmic trading
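AUC is the recipe's metric of choice; if you also want a plain accuracy number, the sigmoid outputs can be thresholded into class labels. This small addition is ours, not part of the recipe, and the 0.5 cut-off is an assumption:

# Sketch: threshold the test-set sigmoid predictions at 0.5 and compute accuracy
pred_class <- as.numeric(sess$run(tf$nn$sigmoid(tf$matmul(xt, W) + b)) > 0.5)
accuracy <- mean(pred_class == occupancy_test[, yFeatures])
cat("Test accuracy:", accuracy, "\n")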

Building a real-time dashboard with Meteor and Vue.js

Kunal Chaudhari
25 Apr 2018
14 min read
In this article, we will use Vue.js with an entirely different stack--Meteor! We will discover this full-stack JavaScript framework and build a real-time dashboard with Meteor to monitor the production of some products. We will cover the following topics:

Installing Meteor and setting up a project
Storing data in a Meteor collection with a Meteor method
Subscribing to the collection and using the data in our Vue components

The app will have a main page with some indicators, such as the average value and the error rate:

It will also have another page with buttons to generate fake measures, since we won't have real sensors available.

Setting up the project

In this first part, we will cover Meteor and get a simple app up and running on this platform.

What is Meteor?

Meteor is a full-stack JavaScript framework for building web applications. The main elements of the Meteor stack are as follows:

Web client (can use any frontend library, such as React or Vue); it has a client-side database called Minimongo
Server based on nodejs; it supports the modern ES2015+ features, including the import/export syntax
Real-time database on the server using MongoDB
Communication between clients and the server is abstracted; the client-side and server-side databases can be easily synchronized in real time
Optional hybrid mobile app (Android and iOS), built in one command
Integrated developer tools, such as a powerful command-line utility and an easy-to-use build tool
Meteor-specific packages (but you can also use npm packages)

As you can see, JavaScript is used everywhere. Meteor also encourages you to share code between the client and the server. Since Meteor manages the entire stack, it offers very powerful systems that are easy to use. For example, the entire stack is fully reactive and real-time--if a client sends an update to the server, all the other clients will receive the new data and their UI will automatically be kept up to date. Meteor has its own build system called "IsoBuild" and doesn't use Webpack. It focuses on ease of use (no configuration), but is, as a result, also less flexible.

Installing Meteor

If you don't have Meteor on your system, you need to open the Installation Guide on the official Meteor website. Follow the instructions there for your OS to install Meteor. When you are done, you can check whether Meteor was correctly installed with the following command:

meteor --version

The current version of Meteor should be displayed.

Creating the project

Now that Meteor is installed, let's set up a new project:

Let's create our first Meteor project with the meteor create command:

meteor create --bare <folder>
cd <folder>

The --bare argument tells Meteor we want an empty project. By default, Meteor generates some boilerplate files we don't need, so this keeps us from having to delete them.

Then, we need two Meteor-specific packages--one for compiling the Vue components, and one for compiling Stylus inside those components. Install them with the meteor add command:

meteor add akryum:vue-component akryum:vue-stylus

We will also install the vue and vue-router packages from npm:

meteor npm i -S vue vue-router

Note that we use the meteor npm command instead of just npm. This is to have the same environment as Meteor (nodejs and npm versions).

To start our Meteor app in development mode, just run the meteor command:

meteor

Meteor should start an HTTP proxy, a MongoDB, and the nodejs server:

It also shows the URL where the app is available; however, if you open it right now, it will be blank.
Our first Vue Meteor app

In this section, we will display a simple Vue component in our app:

Create a new index.html file inside the project directory and tell Meteor we want a div in the page body with the app id:

<head>
  <title>Production Dashboard</title>
</head>
<body>
  <div id="app"></div>
</body>

This is not a real HTML file. It is a special format where we can inject additional elements into the head or body section of the final HTML page. Here, Meteor will add a title to the head section and the <div> to the body section.

Create a new client folder, a new components subfolder, and a new App.vue component with a simple template:

<!-- client/components/App.vue -->
<template>
  <div id="#app">
    <h1>Meteor</h1>
  </div>
</template>

Download this stylus file (https://github.com/Akryum/packt-vue-project-guide/tree/master/chapter8-full/client) into the client folder and add it to the main App.vue component:

<style lang="stylus" src="../style.styl" />

Create a main.js file in the client folder that starts the Vue application inside the Meteor.startup hook:

import { Meteor } from 'meteor/meteor'
import Vue from 'vue'
import App from './components/App.vue'

Meteor.startup(() => {
  new Vue({
    el: '#app',
    ...App,
  })
})

In a Meteor app, it is recommended that you create the Vue app inside the Meteor.startup hook to ensure that all the Meteor systems are ready before starting the frontend. This code will only be run on the client, because it is located in a client folder. You should now have a simple app displayed in your browser. You can also open the Vue devtools and check whether you have the App component present on the page.

Routing

Let's add some routing to the app; we will have two pages--the dashboard with indicators and a page with buttons to generate fake data:

In the client/components folder, create two new components--ProductionGenerator.vue and ProductionDashboard.vue.

Next to the main.js file, create the router in a router.js file:

import Vue from 'vue'
import VueRouter from 'vue-router'
import ProductionDashboard from './components/ProductionDashboard.vue'
import ProductionGenerator from './components/ProductionGenerator.vue'

Vue.use(VueRouter)

const routes = [
  { path: '/', name: 'dashboard', component: ProductionDashboard },
  { path: '/generate', name: 'generate', component: ProductionGenerator },
]

const router = new VueRouter({
  mode: 'history',
  routes,
})

export default router

Then, import the router in the main.js file and inject it into the app (see the sketch after this list).

In the App.vue main component, add the navigation menu and the router view:

<nav>
  <router-link :to="{ name: 'dashboard' }" exact>Dashboard</router-link>
  <router-link :to="{ name: 'generate' }">Measure</router-link>
</nav>
<router-view />

The basic structure of our app is now done.
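The router injection mentioned in the step above could look roughly like this in client/main.js (a sketch; the import path matches the router.js file we just created):

import { Meteor } from 'meteor/meteor'
import Vue from 'vue'
import App from './components/App.vue'
import router from './router'

Meteor.startup(() => {
  new Vue({
    el: '#app',
    router, // make the router available to all components
    ...App,
  })
})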
Production measures

The first page we will make is the Measures page, where we will have two buttons:

The first one will generate a fake production measure with the current date and a random value
The second one will also generate a measure, but with the error property set to true

All these measures will be stored in a collection called "Measures".

Meteor collections integration

A Meteor collection is a reactive list of objects, similar to a MongoDB collection (in fact, it uses MongoDB under the hood). We need to use a Vue plugin to integrate the Meteor collections into our Vue app in order to update it automatically:

Add the vue-meteor-tracker npm package:

meteor npm i -S vue-meteor-tracker

Then, install the library into Vue:

import VueMeteorTracker from 'vue-meteor-tracker'
Vue.use(VueMeteorTracker)

Restart Meteor with the meteor command.

The app is now aware of the Meteor collections and we can use them in our components, as we will do in a moment.

Setting up data

The next step is setting up the Meteor collection where we will store our measures data.

Adding a collection

We will store our measures in a Measures Meteor collection. Create a new lib folder in the project directory. All the code in this folder will be executed first, both on the client and the server. Create a collections.js file, where we will declare our Measures collection:

import { Mongo } from 'meteor/mongo'
export const Measures = new Mongo.Collection('measures')

Adding a Meteor method

A Meteor method is a special function that will be called both on the client and the server. This is very useful for updating collection data, and it will improve the perceived speed of the app--the client will execute it on minimongo without waiting for the server to receive and process it. This technique is called "Optimistic Update" and is very effective when the network quality is poor.

Next to the collections.js file in the lib folder, create a new methods.js file. Then, add a measure.add method that inserts a new measure into the Measures collection:

import { Meteor } from 'meteor/meteor'
import { Measures } from './collections'

Meteor.methods({
  'measure.add' (measure) {
    Measures.insert({
      ...measure,
      date: new Date(),
    })
  },
})

We can now call this method with the Meteor.call function:

Meteor.call('measure.add', someMeasure)

The method will be run on both the client (using the client-side database called minimongo) and on the server. That way, the update will be instant for the client.

Simulating measures

Without further delay, let's build the simple component that will call this measure.add Meteor method:

Add two buttons in the template of ProductionGenerator.vue:

<template>
  <div class="production-generator">
    <h1>Measure production</h1>
    <section class="actions">
      <button @click="generateMeasure(false)">Generate Measure</button>
      <button @click="generateMeasure(true)">Generate Error</button>
    </section>
  </div>
</template>

Then, in the component script, create the generateMeasure method that generates some dummy data and then calls the measure.add Meteor method:

<script>
import { Meteor } from 'meteor/meteor'

export default {
  methods: {
    generateMeasure (error) {
      const value = Math.round(Math.random() * 100)
      const measure = {
        value,
        error,
      }
      Meteor.call('measure.add', measure)
    },
  },
}
</script>

The component should look like this:

If you click on the buttons, nothing visible should happen.

Inspecting the data

There is an easy way to check whether our code works and to verify that you can add items to the Measures collection. We can connect to the MongoDB database in a single command. In another terminal, run the following command to connect to the app's database:

meteor mongo

Then, enter this MongoDB query to fetch the documents of the measures collection (the argument used when creating the Measures Meteor collection):

db.measures.find({})

If you clicked on the buttons, a list of measure documents should be displayed. This means that our Meteor method worked and objects were inserted into our MongoDB database.
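While you are in the meteor mongo shell, a couple of extra queries can be handy for inspecting the generated data. The field names come from the measure.add method above; these specific queries are our own addition:

// count all measures, and count only the ones flagged as errors
db.measures.count()
db.measures.find({ error: true }).count()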
Dashboard and reporting

Now that our first page is done, we can continue with the real-time dashboard.

Progress bars library

To display some pretty indicators, let's install another Vue library that allows drawing progress bars along SVG paths; that way, we can have semi-circular bars:

Add the vue-progress-path npm package to the project:

meteor npm i -S vue-progress-path

We need to tell the Vue compiler for Meteor not to process the files in node_modules, where the package is installed. Create a new .vueignore file in the project root directory. This file works like a .gitignore: each line is a rule for ignoring some paths. If it ends with a slash /, it will ignore only corresponding folders. So, the content of .vueignore should be as follows:

node_modules/

Finally, install the vue-progress-path plugin in the client/main.js file:

import 'vue-progress-path/dist/vue-progress-path.css'
import VueProgress from 'vue-progress-path'

Vue.use(VueProgress, {
  defaultShape: 'semicircle',
})

Meteor publication

To synchronize data, the client must subscribe to a publication declared on the server. A Meteor publication is a function that returns a Meteor collection query. It can take arguments to filter the data that will be synchronized. For our app, we will only need a simple measures publication that sends all the documents of the Measures collection.

This code should only be run on the server. So, create a new server folder in the project directory and a new publications.js file inside that folder:

import { Meteor } from 'meteor/meteor'
import { Measures } from '../lib/collections'

Meteor.publish('measures', function () {
  return Measures.find({})
})

This code will only run on the server because it is located in a folder called server.

Creating the Dashboard component

We are ready to build our ProductionDashboard component. Thanks to the vue-meteor-tracker package we installed earlier, we have a new component definition option--meteor. This is an object that describes the publications that need to be subscribed to and the collection data that needs to be retrieved for that component.

Add the following script section with the meteor definition option:

<script>
export default {
  meteor: {
    // Subscriptions and Collections queries here
  },
}
</script>

Inside the meteor option, subscribe to the measures publication with the $subscribe object:

meteor: {
  $subscribe: {
    'measures': [],
  },
},

Retrieve the measures with a query on the Measures Meteor collection inside the meteor option:

meteor: {
  // ...
  measures () {
    return Measures.find({}, {
      sort: { date: -1 },
    })
  },
},

The second parameter of the find method is an options object, very similar to the MongoDB JavaScript API. Here, we are sorting the documents by their date in descending order, thanks to the sort property of the options object.

Finally, create the measures data property and initialize it to an empty array. The script of the component should now look like this:

<script>
import { Measures } from '../../lib/collections'

export default {
  data () {
    return {
      measures: [],
    }
  },
  meteor: {
    $subscribe: {
      'measures': [],
    },
    measures () {
      return Measures.find({}, {
        sort: { date: -1 },
      })
    },
  },
}
</script>

In the browser devtools, you can now check whether the component has retrieved the items from the collection.

Indicators

We will create a separate component for the dashboard indicators, as follows:

In the components folder, create a new ProductionIndicator.vue component.
Indicators

We will create a separate component for the dashboard indicators, as follows:

In the components folder, create a new ProductionIndicator.vue component.

Declare a template that displays a progress bar, a title, and additional info text:

<template>
  <div class="production-indicator">
    <loading-progress :progress="value" />
    <div class="title">{{ title }}</div>
    <div class="info">{{ info }}</div>
  </div>
</template>

Add the value, title, and info props:

<script>
export default {
  props: {
    value: {
      type: Number,
      required: true,
    },
    title: String,
    info: [String, Number],
  },
}
</script>

Back in our ProductionDashboard component, let's compute the average of the values and the rate of errors:

computed: {
  length () {
    return this.measures.length
  },
  average () {
    if (!this.length) return 0
    let total = this.measures.reduce(
      (total, measure) => total += measure.value,
      0
    )
    return total / this.length
  },
  errorRate () {
    if (!this.length) return 0
    let total = this.measures.reduce(
      (total, measure) => total += measure.error ? 1 : 0,
      0
    )
    return total / this.length
  },
},

Add two indicators in the template - one for the average value and one for the error rate:

<template>
  <div class="production-dashboard">
    <h1>Production Dashboard</h1>
    <section class="indicators">
      <ProductionIndicator
        :value="average / 100"
        title="Average"
        :info="Math.round(average)"
      />
      <ProductionIndicator
        class="danger"
        :value="errorRate"
        title="Errors"
        :info="`${Math.round(errorRate * 100)}%`"
      />
    </section>
  </div>
</template>

The indicators should look like this:

Listing the measures

Finally, we will display a list of the measures below the indicators. Add a simple list of <div> elements for each measure, displaying the date, whether it has an error, and the value:

<section class="list">
  <div
    v-for="item of measures"
    :key="item._id"
  >
    <div class="date">{{ item.date.toLocaleString() }}</div>
    <div class="error">{{ item.error ? 'Error' : '' }}</div>
    <div class="value">{{ item.value }}</div>
  </div>
</section>

The app should now look as follows, with a navigation toolbar, two indicators, and the measures list:

If you open the app in another window and put your windows side by side, you can see the full-stack reactivity of Meteor in action. Open the dashboard in one window and the generator page in the other window. Then, add fake measures and watch the data update in the other window in real time.

If you want to learn more about Meteor, check out the official website and the Vue integration repository.

To summarize, we created a project using Meteor. We integrated Vue into the app and set up a Meteor reactive collection. Using a Meteor method, we inserted documents into the collection and displayed the data in real time in a dashboard component.

You read an excerpt from a book written by Guillaume Chau, titled Vue.js 2 Web Development Projects. This book will help you build exciting real world web projects from scratch and become proficient with Vue.js Web Development.

Read More

Building your first Vue.js 2 Web application
Why has Vue.js become so popular?
Installing and Using Vue.js
Getting started with building an ARCore application for Android

Sugandha Lahoti
24 Apr 2018
9 min read
Google developed ARCore to be accessible from multiple development platforms (Android [Java], Web [JavaScript], Unreal [C++], and Unity [C#]), thus giving developers plenty of flexibility and options to build applications on various platforms. While each platform has its strengths and weaknesses, all the platforms essentially extend from the native Android SDK that was originally built as Tango. This means that regardless of your choice of platform, you will need to install and be somewhat comfortable working with the Android development tools.

In this article, we will focus on setting up the Android development tools and building an ARCore application for Android. The following is a summary of the major topics we will cover in this post:

Installing Android Studio
Installing ARCore
Build and deploy
Exploring the code

Installing Android Studio

Android Studio is a development environment for coding and deploying Android applications. As such, it contains the core set of tools we will need for building and deploying our applications to an Android device. After all, ARCore needs to be installed to a physical device in order to test. Follow the given instructions to install Android Studio for your development environment:

Open a browser on your development computer to https://developer.android.com/studio.
Click on the green DOWNLOAD ANDROID STUDIO button.
Agree to the Terms and Conditions and follow the instructions to download.
After the file has finished downloading, run the installer for your system.
Follow the instructions on the installation dialog to proceed. If you are installing on Windows, ensure that you set a memorable installation path that you can easily find later, as shown in the following example:
Click through the remaining dialogs to complete the installation.
When the installation is complete, you will have the option to launch the program. Ensure that the option to launch Android Studio is selected and click on Finish.

Android Studio comes embedded with OpenJDK. This means we can omit the steps to install Java, on Windows at least. If you are doing any serious Android development, again on Windows, then you should go through the steps on your own to install the full Java JDK 1.7 and/or 1.8, especially if you plan to work with older versions of Android.

On Windows, we will install everything to C:\Android; that way, we can have all the Android tools in one place. If you are using another OS, use a similar well-known path.

Now that we have Android Studio installed, we are not quite done. We still need to install the SDK tools that will be essential for building and deployment. Follow the instructions in the next exercise to complete the installation:

If you have not installed the Android SDK before, you will be prompted to install the SDK when Android Studio first launches, as shown:
Select the SDK components and ensure that you set the installation path to a well-known location, again, as shown in the preceding screenshot.
Leave the Welcome to Android Studio dialog open for now. We will come back to it in a later exercise.

That completes the installation of Android Studio. In the next section, we will get into installing ARCore.

Installing ARCore

Of course, in order to work with or build any ARCore applications, we will need to install the SDK for our chosen platform. Follow the given instructions to install the ARCore SDK:

We will use Git to pull down the code we need directly from the source.
You can learn more about Git and how to install it on your platform at https://git-scm.com/book/en/v2/Getting-Started-Installing-Git or use Google to search: getting started installing Git. Ensure that when you install on Windows, you select the defaults and let the installer set the PATH environment variables.

Open Command Prompt or Windows shell and navigate to the Android (C:\Android on Windows) installation folder.

Enter the following command:

git clone https://github.com/google-ar/arcore-android-sdk.git

This will download and install the ARCore SDK into a new folder called arcore-android-sdk, as illustrated in the following screenshot:

Ensure that you leave the command window open. We will be using it again later.

Installing the ARCore service on a device

Now, with the ARCore SDK installed in our development environment, we can proceed with installing the ARCore service on our test device. Use the following steps to install the ARCore service on your device:

NOTE: this step is only required when working with the Preview SDK of ARCore. When Google ARCore 1.0 is released you will not need to perform this step.

Grab your mobile device and enable the developer and debugging options by doing the following:

Opening the Settings app
Selecting System
Scrolling to the bottom and selecting About phone
Scrolling again to the bottom and tapping on Build number seven times
Going back to the previous screen and selecting Developer options near the bottom
Selecting USB debugging

Download the ARCore service APK from https://github.com/google-ar/arcore-android-sdk/releases/download/sdk-preview/arcore-preview.apk to the Android installation folder (C:\Android). Also note that this URL will likely change in the future.

Connect your mobile device with a USB cable. If this is your first time connecting, you may have to wait several minutes for drivers to install. You will then be prompted on the device to allow the connection. Select Allow to enable the connection.

Go back to your Command Prompt or Windows shell and run the following command:

adb install -r -d arcore-preview.apk
// ON WINDOWS USE: sdk\platform-tools\adb install -r -d arcore-preview.apk

After the command is run, you will see the word Success.
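If the install command fails, it is usually because the device is not visible to ADB. A quick sanity check, assuming adb is on your PATH (or invoked via sdk\platform-tools\adb as shown above), is:

adb devices
// Expected: your device serial followed by the word "device".
// If the state shows "unauthorized", accept the USB debugging prompt on the phone and retry.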
This completes the installation of ARCore for the Android platform. In the next section, we will build our first sample ARCore application.

Build and deploy

Now that we have all the tedious installation stuff out of the way, it's time to build and deploy a sample app to your Android device. Let's begin by jumping back to Android Studio and following the given steps:

Select the Open an existing Android Studio project option from the Welcome to Android Studio window. If you accidentally closed Android Studio, just launch it again.

Navigate to and select the Android\arcore-android-sdk\samples\java_arcore_hello_ar folder, as follows:

Click on OK. If this is your first time running this project, you will encounter some dependency errors, such as the one here:

In order to resolve the errors, just click on the link at the bottom of the error message. This will open a dialog, and you will be prompted to accept and then download the required dependencies. Keep clicking on the links until you see no more errors.

Ensure that your mobile device is connected and then, from the menu, choose Run - Run. This should start the app on your device, but you may still need to resolve some dependency errors. Just remember to click on the links to resolve the errors.

This will open a small dialog. Select the app option. If you do not see the app option, select Build - Make Project from the menu. Again, resolve any dependency errors by clicking on the links.

"Your patience will be rewarded."
- Alton Brown

Select your device from the next dialog and click on OK. This will launch the app on your device. Ensure that you allow the app to access the device's camera. The following is a screenshot showing the app in action:

Great, we have built and deployed our first Android ARCore app together. In the next section, we will take a quick look at the Java source code.

Exploring the code

Now, let's take a closer look at the main pieces of the app by digging into the source code. Follow the given steps to open the app's code in Android Studio:

From the Project window, find and double-click on the HelloArActivity, as shown:

After the source is loaded, scroll through the code to the following section:

private void showLoadingMessage() {
    runOnUiThread(new Runnable() {
        @Override
        public void run() {
            mLoadingMessageSnackbar = Snackbar.make(
                HelloArActivity.this.findViewById(android.R.id.content),
                "Searching for surfaces...",
                Snackbar.LENGTH_INDEFINITE);
            mLoadingMessageSnackbar.getView().setBackgroundColor(0xbf323232);
            mLoadingMessageSnackbar.show();
        }
    });
}

Note the highlighted text--"Searching for surfaces...". Select this text and change it to "Searching for ARCore surfaces...". The showLoadingMessage function is a helper for displaying the loading message. Internally, this function calls runOnUiThread, which in turn creates a new instance of Runnable and then adds an internal run function. We do this to avoid blocking the UI thread, a major no-no. Inside the run function is where the message is set and the message Snackbar is displayed.

From the menu, select Run - Run 'app' to start the app on your device. Of course, ensure that your device is connected by USB.

Run the app on your device and confirm that the message has changed.

Great, now we have a working app with some of our own code. This certainly isn't a leap, but it's helpful to walk before we run.

In this article, we started exploring ARCore by building and deploying an AR app for the Android platform. We did this by first installing Android Studio. Then, we installed the ARCore SDK and ARCore service onto our test mobile device. Next, we loaded up the sample ARCore app and patiently installed the various required build and deploy dependencies. After a successful build, we deployed the app to our device and tested it. Finally, we tested making a minor code change and then deployed another version of the app.

You read an excerpt from the book, Learn ARCore - Fundamentals of Google ARCore, written by Micheal Lanham. This book will help you create next-generation Augmented Reality and Mixed Reality apps with the latest version of Google ARCore.

Read More

Google ARCore is pushing immersive computing forward
Types of Augmented Reality targets
Building a Web Service with Laravel 5

Kunal Chaudhari
24 Apr 2018
15 min read
A web service is an application that runs on a server and allows a client (such as a browser) to remotely write/retrieve data to/from the server over HTTP. In this article we will be covering the following set of topics:

Using Laravel to create a web service
Writing database migrations and seed files
Creating API endpoints to make data publicly accessible
Serving images from Laravel

The interface of a web service will be one or more API endpoints, sometimes protected with authentication, that will return data in an XML or JSON payload:

Web services are a speciality of Laravel, so it won't be hard to create one for Vuebnb. We'll use routes for our API endpoints and represent the listings with Eloquent models that Laravel will seamlessly synchronize with the database:

Laravel also has inbuilt features to add API architectures such as REST, though we won't need this for our simple use case.

Mock data

The mock listing data is in the file database/data.json. This file includes a JSON-encoded array of 30 objects, with each object representing a different listing. Having built the listing page prototype, you'll no doubt recognize a lot of the same properties on these objects, including the title, address, and description.

database/data.json:

[
  {
    "id": 1,
    "title": "Central Downtown Apartment with Amenities",
    "address": "...",
    "about": "...",
    "amenity_wifi": true,
    "amenity_pets_allowed": true,
    "amenity_tv": true,
    "amenity_kitchen": true,
    "amenity_breakfast": true,
    "amenity_laptop": true,
    "price_per_night": "$89",
    "price_extra_people": "No charge",
    "price_weekly_discount": "18%",
    "price_monthly_discount": "50%"
  },
  {
    "id": 2,
    ...
  },
  ...
]

Each mock listing includes several images of the room as well. Images aren't really part of a web service, but they will be stored in a public folder in our app to be served as needed.

Database

Our web service will require a database table for storing the mock listing data. To set this up we'll need to create a schema and migration. We'll then create a seeder that will load and parse our mock data file and insert it into the database, ready for use in the app.

Migration

A migration is a special class that contains a set of actions to run against the database, such as creating or modifying a database table. Migrations ensure your database gets set up identically every time you create a new instance of your app, for example, installing in production or on a teammate's machine.

To create a new migration, use the make:migration Artisan CLI command. The argument of the command should be a snake-cased description of what the migration will do:

$ php artisan make:migration create_listings_table

You'll now see your new migration in the database/migrations directory. You'll notice the filename has a prefixed timestamp, such as 2017_06_20_133317_create_listings_table.php. The timestamp allows Laravel to determine the proper order of the migrations, in case it needs to run more than one at a time.
Your new migration declares a class that extends Migration. It overrides two methods: up, which is used to add new tables, columns, or indexes to your database; and down, which is used to delete them. We'll implement these methods shortly.

2017_06_20_133317_create_listings_table.php:

<?php

use Illuminate\Support\Facades\Schema;
use Illuminate\Database\Schema\Blueprint;
use Illuminate\Database\Migrations\Migration;

class CreateListingsTable extends Migration
{
    public function up()
    {
        //
    }

    public function down()
    {
        //
    }
}

Schema

A schema is a blueprint for the structure of a database. For a relational database such as MySQL, the schema will organize data into tables and columns. In Laravel, schemas are declared by using the Schema facade's create method.

We'll now make a schema for a table to hold Vuebnb listings. The columns of the table will match the structure of our mock listing data. Note that we set a default false value for the amenities and allow the prices to have a NULL value. All other columns require a value.

The schema will go inside our migration's up method. We'll also fill out the down method with a call to Schema::drop.

2017_06_20_133317_create_listings_table.php:

public function up()
{
    Schema::create('listings', function (Blueprint $table) {
        $table->unsignedInteger('id');
        $table->primary('id');
        $table->string('title');
        $table->string('address');
        $table->longText('about');

        // Amenities
        $table->boolean('amenity_wifi')->default(false);
        $table->boolean('amenity_pets_allowed')->default(false);
        $table->boolean('amenity_tv')->default(false);
        $table->boolean('amenity_kitchen')->default(false);
        $table->boolean('amenity_breakfast')->default(false);
        $table->boolean('amenity_laptop')->default(false);

        // Prices
        $table->string('price_per_night')->nullable();
        $table->string('price_extra_people')->nullable();
        $table->string('price_weekly_discount')->nullable();
        $table->string('price_monthly_discount')->nullable();
    });
}

public function down()
{
    Schema::drop('listings');
}

A facade is an object-oriented design pattern for creating a static proxy to an underlying class in the service container. The facade is not meant to provide any new functionality; its only purpose is to provide a more memorable and easily readable way of performing a common action. Think of it as an object-oriented helper function.

Execution

Now that we've set up our new migration, let's run it with this Artisan command:

$ php artisan migrate

You should see an output like this in the Terminal:

Migrating: 2017_06_20_133317_create_listings_table
Migrated:  2017_06_20_133317_create_listings_table

To confirm the migration worked, let's use Tinker to show the new table structure. If you've never used Tinker, it's a REPL tool that allows you to interact with a Laravel app on the command line. When you enter a command into Tinker it will be evaluated as if it were a line in your app code.

Firstly, open the Tinker shell:

$ php artisan tinker

Now enter a PHP statement for evaluation. Let's use the DB facade's select method to run an SQL DESCRIBE query to show the table structure:

>>> DB::select('DESCRIBE listings;');

The output is quite verbose so I won't reproduce it here, but you should see an object with all your table details, confirming the migration worked.
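For orientation, the output should resemble the following abridged sketch. The exact column types and ordering depend on your MySQL version and configuration, so treat this as illustrative only:

>>> DB::select('DESCRIBE listings;');
// Abridged, illustrative output:
// [
//   {+"Field": "id",              +"Type": "int(10) unsigned", +"Null": "NO",  +"Key": "PRI"},
//   {+"Field": "title",           +"Type": "varchar(255)",     +"Null": "NO"},
//   {+"Field": "about",           +"Type": "longtext",         +"Null": "NO"},
//   {+"Field": "amenity_wifi",    +"Type": "tinyint(1)",       +"Null": "NO"},
//   {+"Field": "price_per_night", +"Type": "varchar(255)",     +"Null": "YES"},
//   ...
// ]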
Seeding mock listings

Now that we have a database table for our listings, let's seed it with the mock data. To do so we're going to have to do the following:

Load the database/data.json file
Parse the file
Insert the data into the listings table

Creating a seeder

Laravel includes a seeder class that we can extend called Seeder. Use this Artisan command to implement it:

$ php artisan make:seeder ListingsTableSeeder

When we run the seeder, any code in the run method is executed.

database/seeds/ListingsTableSeeder.php:

<?php

use Illuminate\Database\Seeder;

class ListingsTableSeeder extends Seeder
{
    public function run()
    {
        //
    }
}

Loading the mock data

Laravel provides a File facade that allows us to open files from disk as simply as File::get($path). To get the full path to our mock data file we can use the base_path() helper function, which returns the path to the root of our application directory as a string.

It's then trivial to convert this JSON file to a PHP array using the built-in json_decode function. Once the data is an array, it can be directly inserted into the database given that the column names of the table are the same as the array keys.

database/seeds/ListingsTableSeeder.php:

public function run()
{
    $path = base_path() . '/database/data.json';
    $file = File::get($path);
    $data = json_decode($file, true);
}

Inserting the data

In order to insert the data, we'll use the DB facade again. This time we'll call the table method, which returns an instance of Builder. The Builder class is a fluent query builder that allows us to query the database by chaining constraints, for example, DB::table(...)->where(...)->join(...) and so on. Let's use the insert method of the builder, which accepts an array of column names and values.

database/seeds/ListingsTableSeeder.php:

public function run()
{
    $path = base_path() . '/database/data.json';
    $file = File::get($path);
    $data = json_decode($file, true);
    DB::table('listings')->insert($data);
}

Executing the seeder

To execute the seeder we must call it from the DatabaseSeeder.php file, which is in the same directory.

database/seeds/DatabaseSeeder.php:

<?php

use Illuminate\Database\Seeder;

class DatabaseSeeder extends Seeder
{
    public function run()
    {
        $this->call(ListingsTableSeeder::class);
    }
}

With that done, we can use the Artisan CLI to execute the seeder:

$ php artisan db:seed

You should see the following output in your Terminal:

Seeding: ListingsTableSeeder

We'll again use Tinker to check our work. There are 30 listings in the mock data, so to confirm the seed was successful, let's check for 30 rows in the database:

$ php artisan tinker
>>> DB::table('listings')->count();

# Output: 30

Finally, let's inspect the first row of the table just to be sure its content is what we expect:

>>> DB::table('listings')->get()->first();

Here is the output:

=> {#732
     +"id": 1,
     +"title": "Central Downtown Apartment with Amenities",
     +"address": "No. 11, Song-Sho Road, Taipei City, Taiwan 105",
     +"about": "...",
     +"amenity_wifi": 1,
     +"amenity_pets_allowed": 1,
     +"amenity_tv": 1,
     +"amenity_kitchen": 1,
     +"amenity_breakfast": 1,
     +"amenity_laptop": 1,
     +"price_per_night": "$89",
     +"price_extra_people": "No charge",
     +"price_weekly_discount": "18%",
     +"price_monthly_discount": "50%",
   }

If yours looks like that, you're ready to move on!

Listing model

We've now successfully created a database table for our listings and seeded it with mock listing data. How do we access this data now from the Laravel app? We saw how the DB facade lets us execute queries on our database directly. But Laravel provides a more powerful way to access data via the Eloquent ORM.
Eloquent ORM

Object-Relational Mapping (ORM) is a technique for converting data between incompatible systems in object-oriented programming languages. Relational databases such as MySQL can only store scalar values such as integers and strings, organized within tables. We want to make use of rich objects in our app, though, so we need a means of robust conversion.

Eloquent is the ORM implementation used in Laravel. It uses the active record design pattern, where a model is tied to a single database table, and an instance of the model is tied to a single row.

To create a model in Laravel using Eloquent ORM, simply extend the Illuminate\Database\Eloquent\Model class using Artisan:

$ php artisan make:model Listing

This generates a new file.

app/Listing.php:

<?php

namespace App;

use Illuminate\Database\Eloquent\Model;

class Listing extends Model
{
    //
}

How do we tell the ORM what table to map to, and what columns to include? By default, the Model class uses the snake-cased, plural form of the class name (Listing) as the table name to use, which in our case is listings. And, by default, it uses all the fields from the table. Now, any time we want to load our listings we can use code such as this, anywhere in our app:

<?php

// Load all listings
$listings = \App\Listing::all();

// Iterate listings, echo the address
foreach ($listings as $listing) {
    echo $listing->address . "\n";
}

/*
 * Output:
 *
 * No. 11, Song-Sho Road, Taipei City, Taiwan 105
 * 110, Taiwan, Taipei City, Xinyi District, Section 5, Xinyi Road, 7
 * No. 51, Hanzhong Street, Wanhua District, Taipei City, Taiwan 108
 * ...
 */

Casting

The data types in a MySQL database don't completely match up to those in PHP. For example, how does an ORM know if a database value of 0 is meant to be the number 0, or the Boolean value of false?

An Eloquent model can be given a $casts property to declare the data type of any specific attribute. $casts is an array of key/values where the key is the name of the attribute being cast, and the value is the data type we want to cast to. For the listings table, we will cast the amenities attributes as Booleans.

app/Listing.php:

<?php

namespace App;

use Illuminate\Database\Eloquent\Model;

class Listing extends Model
{
    protected $casts = [
        'amenity_wifi' => 'boolean',
        'amenity_pets_allowed' => 'boolean',
        'amenity_tv' => 'boolean',
        'amenity_kitchen' => 'boolean',
        'amenity_breakfast' => 'boolean',
        'amenity_laptop' => 'boolean'
    ];
}

Now these attributes will have the correct type, making our model more robust:

echo gettype($listing->amenity_wifi); // boolean
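With the model and casts in place, Eloquent's fluent query builder also becomes available on the model itself. As a quick illustration (not required for Vuebnb), you could filter listings by an amenity like this:

<?php

// Fetch only the listings that offer WiFi, ordered by title;
// returns a Collection of Listing models
$wifiListings = \App\Listing::where('amenity_wifi', true)
    ->orderBy('title')
    ->get();

echo $wifiListings->count() . " listings with WiFi\n";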
Public interface

The final piece of our web service is the public interface that will allow a client app to request the listing data. Since the Vuebnb listing page is designed to display one listing at a time, we'll at least need an endpoint to retrieve a single listing.

Let's now create a route that will match any incoming GET requests to the URI /api/listing/{listing} where {listing} is an ID. We'll put this in the routes/api.php file, where routes are automatically given the /api/ prefix and have middleware optimized for use in a web service by default.

We'll use a closure function to handle the route. The function will have a $listing argument, which we'll type hint as an instance of the Listing class, that is, our model. Laravel's service container will resolve this as an instance with the ID matching {listing}. We can then encode the model as JSON and return it as a response.

routes/api.php:

<?php

use App\Listing;

Route::get('listing/{listing}', function (Listing $listing) {
    return $listing->toJson();
});

We can test that this works by using the curl command from the Terminal:

$ curl http://vuebnb.test/api/listing/1

The response will be the listing with ID 1:

Controller

We'll be adding more routes to retrieve the listing data as the project progresses. It's a best practice to use a controller class for this functionality to keep a separation of concerns. Let's create one with the Artisan CLI:

$ php artisan make:controller ListingController

We'll then move the functionality from the route into a new method, get_listing_api.

app/Http/Controllers/ListingController.php:

<?php

namespace App\Http\Controllers;

use Illuminate\Http\Request;
use App\Listing;

class ListingController extends Controller
{
    public function get_listing_api(Listing $listing)
    {
        return $listing->toJson();
    }
}

For the Route::get method we can pass a string as the second argument instead of a closure function. The string should be in the form [controller]@[method], for example, ListingController@get_listing_api. Laravel will correctly resolve this at runtime.

routes/api.php:

<?php

Route::get('/listing/{listing}', 'ListingController@get_listing_api');

Images

As stated at the beginning of the article, each mock listing comes with several images of the room. These images are not in the project code and must be copied from a parallel directory in the code base called images. Copy the contents of this directory into the public/images folder:

$ cp -a ../images/. ./public/images

Once you've copied these files, public/images will have 30 sub-folders, one for each mock listing. Each of these folders will contain exactly four main images and a thumbnail image:

Accessing images

Files in the public directory can be directly requested by appending their relative path to the site URL. For example, the default CSS file, public/css/app.css, can be requested at http://vuebnb.test/css/app.css.

The advantage of using the public folder, and the reason we've put our images there, is to avoid having to create any logic for accessing them. A frontend app can then directly call the images in an img tag. You may think it's inefficient for our web server to serve images like this, and you'd be right. Let's try to open one of the mock listing images in our browser to test this thesis: http://vuebnb.test/images/1/Image_1.jpg:

Image links

The payload for each listing in the web service should include links to these new images so a client app knows where to find them. Let's add the image paths to our listing API payload so it looks like this:

{
  "id": 1,
  "title": "...",
  "about": "...",
  ...
  "image_1": "http://vuebnb.test/images/1/Image_1.jpg",
  "image_2": "http://vuebnb.test/images/1/Image_2.jpg",
  "image_3": "http://vuebnb.test/images/1/Image_3.jpg",
  "image_4": "http://vuebnb.test/images/1/Image_4.jpg"
}

To implement this, we'll use our model's toArray method to make an array representation of the model. We'll then easily be able to add new fields. Each mock listing has exactly four images, numbered 1 to 4, so we can use a for loop and the asset helper to generate fully-qualified URLs to files in the public folder. We finish by creating an instance of the Response class by calling the response helper. We use the json method and pass in our array of fields, returning the result.
app/Http/Controllers/ListingController.php:

public function get_listing_api(Listing $listing)
{
    $model = $listing->toArray();
    for ($i = 1; $i <= 4; $i++) {
        $model['image_' . $i] = asset(
            'images/' . $listing->id . '/Image_' . $i . '.jpg'
        );
    }
    return response()->json($model);
}

The /api/listing/{listing} endpoint is now ready for consumption by a client app.

To summarize, we built a web service with Laravel to make the data publicly accessible. This involved setting up a database table using a migration and schema, then seeding the database with mock listing data. We then created a public interface for the web service using routes.

You enjoyed an excerpt from a book written by Anthony Gore, titled Full-Stack Vue.js 2 and Laravel 5, which will help you bring the frontend and backend together with Vue, Vuex, and Laravel.

Read More

Testing RESTful Web Services with Postman
How to develop RESTful web services in Spring
Perform Advanced Programming with Rust

Packt Editorial Staff
23 Apr 2018
7 min read
Rust is a systems programming language that runs blazingly fast, prevents segfaults, and guarantees thread safety. In today's tutorial we are focusing on equipping you with recipes for programming with Rust and also helping you define expressions, constants, and variable bindings. Let us get started:

Defining an expression

An expression, in simple words, is a statement in Rust with which we can create logic and workflows in the program and applications. We will deep dive into understanding expressions and blocks in Rust.

Getting ready

We will require the Rust compiler and any text editor for coding.

How to do it...

Follow the ensuing steps:

Create a file named expression.rs with the next code snippet. Declare the main function and create the variables x_val, y_val, and z_val:

// main point of execution
fn main() {
    // expression
    let x_val = 5u32;

    // y block
    let y_val = {
        let x_squared = x_val * x_val;
        let x_cube = x_squared * x_val;

        // This expression will be assigned to `y_val`
        x_cube + x_squared + x_val
    };

    // z block
    let z_val = {
        // The semicolon suppresses this expression and `()` is assigned to `z_val`
        2 * x_val;
    };

    // printing the final outcomes
    println!("x is {:?}", x_val);
    println!("y is {:?}", y_val);
    println!("z is {:?}", z_val);
}

You should get the ensuing output upon running the code. Please refer to the following screenshot:

How it works...

An expression followed by a semicolon (;) becomes a statement, and its value is discarded. A block is a statement that has a set of statements and variables inside the {} scope. The last expression of a block is the value that will be assigned to the variable; when we close the last statement of a block with a semicolon, the block returns () to the variable instead.

In the preceding recipe, the first statement, a variable named x_val, is assigned the value 5. Second, y_val is a block that performs certain operations on the variable x_val and a few more variables, namely x_squared and x_cube, which contain the squared and cubic values of x_val, respectively. The variables x_squared and x_cube will be deleted soon after the scope of the block ends. The block where we declare the z_val variable has a semicolon at the last statement, which suppresses the expression and assigns () to z_val. We print all the declared variables' values in the end.
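Because almost everything in Rust is an expression, this block-return behavior extends to constructs like if as well. Here is a small illustrative sketch that is not part of the recipe:

fn main() {
    let temperature = 30;

    // `if` is an expression, so the last expression of the taken
    // branch becomes the value assigned to `description`
    let description = if temperature > 25 { "hot" } else { "mild" };

    println!("It is {} today", description);
}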
Defining constants

Rust provides the ability to assign and maintain constant values across the code in Rust. These values are very useful when we want to maintain a global count, such as a timer threshold, for example. Rust provides two keywords for this activity, const and static; we will use const here. You will learn how to deliver constant values globally in this recipe.

Getting ready

We will require the Rust compiler and any text editor for coding.

How to do it...

Follow these steps:

Create a file named constant.rs with the next code snippet. Declare the global UPPERLIMIT using const:

// Global variables are declared outside scopes of other functions
const UPPERLIMIT: i32 = 12;

Create the is_big function, accepting a single integer as input:

// function to check if the number is big
fn is_big(n: i32) -> bool {
    // Access constant in some function
    n > UPPERLIMIT
}

In the main function, call the is_big function and perform the decision-making statement:

fn main() {
    let random_number = 15;

    // Access constant in the main thread
    println!("The threshold is {}", UPPERLIMIT);
    println!("{} is {}", random_number, if is_big(random_number) { "big" } else { "small" });

    // Error! Cannot modify a `const`.
    // UPPERLIMIT = 5;
}

You should get the following screenshot as output upon running the preceding code:

How it works...

The workflow of the recipe is fairly simple: we have a function that checks whether an integer is greater than a fixed threshold. The UPPERLIMIT constant defines the fixed threshold for the function; its value will not change in the code and it is accessible throughout the program.

We assigned 15 to random_number and passed it to is_big(integer value); we then get a Boolean output, either true or false, as the return type of the function is bool. The answer in our case is false, as 15 is not bigger than 12, the value set as the UPPERLIMIT constant. We performed this condition check using the if...else statement in Rust. We cannot change the UPPERLIMIT value; when attempted, it will throw an error, which is commented out in the code section.

Constants declare constant values. They represent a value, not a memory address:

const NAME: type = value;

Performing variable bindings

Variable binding refers to how a variable in the Rust code is bound to a type. We will cover pattern, mutability, scope, and shadow concepts in this recipe.

Getting ready

We will require the Rust compiler and any text editor for coding.

How to do it...

Perform the following step:

Create a file named binding.rs and enter a code snippet that includes declaring the main function and different variables:

fn main() {
    // Simplest variable binding
    let a = 5;

    // pattern
    let (b, c) = (1, 2);

    // type annotation
    let x_val: i32 = 5;

    // shadow example
    let y_val: i32 = 8;
    {
        println!("Value assigned when entering the scope : {}", y_val); // Prints "8"
        let y_val = 12;
        println!("Value modified within scope : {}", y_val); // Prints "12"
    }
    println!("Value which was assigned first : {}", y_val); // Prints "8"

    let y_val = 42;
    println!("New value assigned : {}", y_val); // Prints "42"
}

You should get the following screenshot as output upon running the preceding code:

How it works...

The let statement is the simplest way to create a binding, where we bind a variable to a value, which is the case with variable a. To create a pattern with the let statement, we assign the pattern values to b and c in the same pattern. Rust is a statically typed language. This means that we have to specify our types during an assignment, and at compile time it is checked to see if it is compatible. Rust also has the type inference feature that identifies the variable type automatically at compile time.

The variable_name: type is the format we use to explicitly mention the type in Rust. We read the assignment in the following format: x_val is a binding with the type i32 and the value 5.
Here, we declared x_val as a 32-bit signed integer. However, Rust has many different primitive integer types that begin with i for signed integers and u for unsigned integers, and the possible integer sizes are 8, 16, 32, and 64 bits.

Variable bindings have a scope that keeps the variable alive only within that scope. Once it goes out of scope, the resources are freed. A block is a collection of statements enclosed by {}. Function definitions are also blocks! We use a block to illustrate the feature in Rust that allows variable bindings to be shadowed. This means that a later variable binding can be done with the same name, which in our case is y_val. This goes through a series of value changes, as a new binding that is currently in scope overrides the previous binding. Shadowing enables us to rebind a name to a value of a different type. This is the reason why we are able to assign new values to the immutable y_val variable in and out of the block.

This article is an extract taken from Rust Cookbook written by Vigneshwer Dhinakaran. You will find more than 80 practical recipes written in Rust that will allow you to use the code samples right away in your existing applications.

Read More

20 ways to describe programming in 5 words
Top 5 programming languages for crunching Big Data effectively
How data scientists test hypotheses and probability

Richard Gall
23 Apr 2018
4 min read
Why hypotheses are important in statistical analysis

Hypothesis testing allows researchers and statisticians to develop hypotheses which are then assessed to determine the probability or the likelihood of those findings. This statistics tutorial has been taken from Basic Statistics and Data Mining for Data Science.

Whenever you wish to make an inference about a population from a sample, you must test a specific hypothesis. It's common practice to state 2 different hypotheses:

Null hypothesis, which states that there is no effect
Alternative/research hypothesis, which states that there is an effect

So, the null hypothesis is one which says that there is no difference. For example, you might be looking at the mean income between males and females, but the null hypothesis you are testing is that there is no difference between the 2 groups. The alternative hypothesis, meanwhile, is generally, although not exclusively, the one that researchers are really interested in. In this example, you might hypothesize that the mean income between males and females is different.

Read more: How to predict Bitcoin prices from historical and live data.

Why probability is important in statistical analysis

In statistics, nothing is ever certain because we are always dealing with samples rather than populations. This is why we always have to work in probabilities. The way hypotheses are assessed is by calculating the probability or the likelihood of finding our result. A probability value, which can range from zero to one, corresponding to 0% and 100% in percentages, is essentially a way of measuring the likelihood of a particular event occurring. You can use these values to assess whether the likelihood of any of the differences that you have found is the result of random chance.

How do hypotheses and probability interact?

It starts getting really interesting once we begin looking at how hypotheses and probability interact. Here's an example. Suppose you want to know who is going to win the Super Bowl. I ask a fellow statistician, and he tells me that he's built a predictive model and that he knows which team is going to win. Fine - my next question is how confident he is in that prediction. He says he's 50% confident - are you going to trust his prediction? Of course you're not - there are only 2 possible outcomes and 50% is ultimately just random chance.

So, say I ask another statistician. He also tells me that he has a prediction and that he has built a predictive model, and he's 75% confident in the prediction he has made. You're more likely to trust this prediction - you have a 75% chance of being right and a 25% chance of being wrong. But let's say you're feeling cautious - a 25% chance of being wrong is too high. So, you ask another statistician for her prediction. She tells you that she's also built a predictive model, in which she has 90% confidence.

So, having formally stated our hypotheses, we then have to select a criterion for acceptance or rejection of the null hypothesis. With probability tests like the chi-squared test, the t-test, or regression or correlation, you're testing the likelihood that a statistic of the magnitude that you obtained or greater would have occurred by chance, assuming that the null hypothesis is true. It's important to remember that you always assess the probability of the null hypothesis as true. You only reject the null hypothesis if you can say that the results would have been extremely unlikely under the conditions set by the null hypothesis.
In this case, if you can reject the null hypothesis, you have found support for the alternative/research hypothesis. This doesn't prove the alternative hypothesis, but it does tell you that the null hypothesis is unlikely to be true. The criterion we typically use is whether the significance level sits above or below 0.05 (5%), indicating that a statistic of the size that we obtained would only be likely to occur on 5% of occasions. By choosing a 5% criterion you are accepting that you will make a mistake in rejecting the null hypothesis 1 in 20 times.

Replication and data mining

If in traditional statistics we work with hypotheses and probabilities to deal with the fact that we're always working with a sample rather than a population, in data mining we can work in a slightly different way - we can use something called replication instead. In a data mining project we might have 2 data sets - a training data set and a testing data set. We build our model on the training set and, once we've done that, we take the results of that model and then apply it to the testing data set to see if we find similar results.
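To make the 0.05 criterion concrete, here is a minimal sketch in Python using scipy. The two groups are randomly generated for illustration only; with real data you would substitute your own samples:

import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# Two illustrative samples, e.g. incomes for two groups
group_a = rng.normal(loc=50000, scale=8000, size=200)
group_b = rng.normal(loc=52000, scale=8000, size=200)

# Independent two-sample t-test:
# null hypothesis = the two groups have equal means
t_stat, p_value = stats.ttest_ind(group_a, group_b)

# Reject the null hypothesis only if p falls below the 5% criterion
alpha = 0.05
if p_value < alpha:
    print(f"p = {p_value:.4f} < {alpha}: reject the null hypothesis")
else:
    print(f"p = {p_value:.4f} >= {alpha}: fail to reject the null hypothesis")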