
Developers think managers don’t know enough about technology. And that’s hurting business.

Fatema Patrawala
01 Jun 2018
7 min read
It's not hard to find jokes online about management not getting software. There has long been a perception that those making key business decisions don't actually understand the technology and software at the foundation of just about every organization's operations. Now, research has confirmed that the divide between management and engineering is real. In this year's Skill Up survey, which we ran among 8,000 developers, we found that more than 60% of developers believe they know more about technology than their manager.

Source: Packtpub Skill Up Survey 2018

Developer perceptions on this topic aren't simply a question of ego; they're symptomatic of real barriers to business success. 42% of respondents listed management's lack of technical knowledge as a barrier to success. It also appears as one of the top three organizational barriers to achieving business goals.

Source: Packtpub Skill Up Survey 2018

To dissect the technical challenges faced by organizations, we also asked respondents to pick the top technical barriers to success. Many of these barriers relate, directly or indirectly, to management's understanding of technology: take, for example, management's decision to continue with legacy systems, investment in or divestment from certain projects, or the choice of training programs and vendors.

Source: Packtpub Skill Up Survey 2018

Management tends to weigh decisions by the magnitude of investment against returns in the immediate or medium term. Unless there is hard evidence of performance benefits or cost savings, management is generally wary of approving new projects or spending more on existing ones. This approach is generally robust and has saved businesses precious dollars by curbing pet projects and unruly experiments. With technology, however, things are not always so straightforward.
One day a tool is the talk of the town (think Adobe Flash) and everyone seems to be learning it or buying it; a few months or a couple of years later, it has gone completely off the radar. Conversely, something that didn't exist yesterday, or lived only in an obscure research lab (think self-driving tech, gene editing, or robotics), is now changing the rules of the game, and businesses whose leadership teams have had their ears to the ground topple everyone else, including time-tested veterans.

Early adopters make the most of tech trends. This requires those in a position to make decisions within organizations to be aware of the changing tech landscape, to the point of being able to predict what will replace the current reigning technology, and in what timeframe. It requires that management knows what's happening in adjacent industries, or even in seemingly unrelated ones. Who knew Unity (a game platform), Nvidia (a chipmaker), and Google (a search engine) would enter the auto industry, all thanks to self-driving tech? Those are the headline factors; let's look at each in detail.

Why do developers believe there is a management knowledge gap?

Respondents gave several reasons:

Rapid pace of technology change: The rapid rate of technology change is significantly impacting IT strategy. Not only are there plenty of emerging technology trends, from AI to cloud, they're all arriving at the same time, and even affecting each other. Keeping up with the rate of digital advancement (automation, harnessing big data, emerging technologies, cyber security) will pose a significant challenge for leaders and senior management, adding a whole new layer of complexity as they try to stay ahead of the competition and innovate.
Balancing strategic priorities while complying with changing regulations: Another major challenge for senior management is balancing strategic priorities with the regulatory demands of the industry. In 2018, GDPR has been setting a new benchmark for the protection of consumer data rights by making organisations more accountable. Under GDPR, organisations and senior management are now responsible for guarding every piece of information connected to an individual. To be GDPR compliant, management will need to introduce the right security protocols into their business processes, including encryption, two-factor authentication, and key management strategies, to avoid severe legal, financial, and reputational consequences. To make the right decisions, they will need to be technically competent enough to understand the strengths and limitations of the tools and techniques involved in the compliance process.

Finding the right IT talent: Identifying the right talent with the skill sets you need is a big challenge for senior management. They are constantly trying to find and hire IT talent, such as skilled data scientists and app developers, to accommodate and capitalize on emerging trends in cloud and the API economy. The team has to take care to bring in the right people and let them create magic with their development skills. Alongside this, they also need to reinvent how they manage, attract, retain, motivate, and compensate these people. Responses to this Quora question highlight how draining a lengthy recruitment cycle can be for managers, and the worst feeling is when, after all that effort, the candidate declines the offer for a more lucrative one.

So much promising technology, so little time: Time is tight in business and tech. Keeping pace with how quickly innovative and promising technologies appear is easier said than done.
There are so many interesting technologies out there, and so little time to implement them. Before anyone can choose a technology that might work for the company, a new product appears on the horizon; once you see something you like, there's always something else popping up. While managers work to make all the parts of a project fit together for an outstanding customer experience, implementing those technologies takes time. Juggling all of these moving parts, managers are always looking for ways to deliver great things faster. That's a major reason companies have a CTO, a VP of engineering, and a CEO operating separately, each at their own level and in their own department.

Murphy's law of unforeseen IT problems: One of the biggest problems when you're working in tech is Murphy's Law, which states: "Anything that can go wrong, will -- at the worst possible moment." It doesn't matter how hard the team has worked, how strong the plan is, or how many times things are tested; if something can go wrong in a project, it will. There are IT problems we simply don't see coming, no matter how much we plan. When management doesn't properly understand technology, it is often hard for them to appreciate how problems arise and how long they can take to solve. That puts pressure on engineers and developers, which can make managing projects even harder.

Overcoming perfectionism with an agile mindset: Senior management often wants things done yesterday, and wants them done perfectly. Of course, this is impossible. While Agile can help improve efficiency in the development process, perfectionism is anathema to Agile: it's about delivering quickly and consistently, not building something perfect and then deploying it.
Getting management to understand this is a challenge for engineers; good management teams will understand Agile and its trade-offs. At the forefront of everyone's mind should be what the customer needs and what will benefit the business.

With changing purposes, processes, and technologies, managers need to change the way they function and manage. People don't leave companies, they leave bad managers, and the same could be said of technical workers: they don't leave bad companies, they leave non-technical managers who make bad technical decisions.

Building RESTful web services with Kotlin

Natasha Mathur
01 Jun 2018
9 min read
Kotlin has been eating into the Java world. It has already become a hit in the Android ecosystem, which was long dominated by Java, and has been welcomed with open arms. Kotlin is not limited to Android development: it can be used to develop server-side and client-side web applications as well. Kotlin is 100% compatible with the JVM, so you can use existing frameworks such as Spring Boot, Vert.x, or JSF for writing Java-style applications.

In this tutorial, we will learn how to implement RESTful web services using Kotlin. This article is an excerpt from the book 'Kotlin Programming Cookbook', written by Aanand Shekhar Roy and Rashi Karanpuria.

Setting up dependencies for building RESTful services

In this recipe, we will lay the foundation for developing the RESTful service. We will see how to set up dependencies and run our first Spring Boot web application. Spring Boot provides great support for Kotlin, which makes it easy to work with. So let's get started.

We will be using IntelliJ IDEA and the Gradle build system. If you don't have them, you can get IntelliJ IDEA from https://www.jetbrains.com/idea/.

How to do it…

Let's follow the given steps to set up the dependencies for building RESTful services:

First, we will create a new project in IntelliJ IDEA. We will be using the Gradle build system for maintaining dependencies, so create a Gradle project. When you have created the project, just add the following lines to your build.gradle file.
These lines of code contain the spring-boot dependencies that we will need to develop the web app:

```groovy
buildscript {
    ext.kotlin_version = '1.1.60' // Required for Kotlin integration
    ext.spring_boot_version = '1.5.4.RELEASE'
    repositories {
        jcenter()
    }
    dependencies {
        classpath "org.jetbrains.kotlin:kotlin-gradle-plugin:$kotlin_version" // Required for Kotlin integration
        classpath "org.jetbrains.kotlin:kotlin-allopen:$kotlin_version" // See https://kotlinlang.org/docs/reference/compiler-plugins.html#kotlin-spring-compiler-plugin
        classpath "org.springframework.boot:spring-boot-gradle-plugin:$spring_boot_version"
    }
}

apply plugin: 'kotlin' // Required for Kotlin integration
apply plugin: 'kotlin-spring' // See https://kotlinlang.org/docs/reference/compiler-plugins.html#kotlin-spring-compiler-plugin
apply plugin: 'org.springframework.boot'

jar {
    baseName = 'gs-rest-service'
    version = '0.1.0'
}

sourceSets {
    main.java.srcDirs += 'src/main/kotlin'
}

repositories {
    jcenter()
}

dependencies {
    compile "org.jetbrains.kotlin:kotlin-stdlib:$kotlin_version" // Required for Kotlin integration
    compile 'org.springframework.boot:spring-boot-starter-web'
    testCompile('org.springframework.boot:spring-boot-starter-test')
}
```

Let's now create an App.kt file. It is important to keep the App.kt file in a package (we've used the college package). Otherwise, you will get an error that says the following:

** WARNING ** : Your ApplicationContext is unlikely to start due to a `@ComponentScan` of the default package.

The reason for this error is that without a package declaration, the class ends up in the "default package", which is discouraged and skipped by component scanning. Now, let's try to run the App.kt class.
We will put in the following code to test if it's running:

```kotlin
@SpringBootApplication
open class App

fun main(args: Array<String>) {
    SpringApplication.run(App::class.java, *args)
}
```

Now run the project; if everything goes well, you will see output with a line like the following at the end:

Started AppKt in 5.875 seconds (JVM running for 6.445)

We now have our application running on an embedded Tomcat server. If you go to http://localhost:8080, you will see a 404 error, because we haven't told our application to do anything when a user is on the / path.

Creating a REST controller

In the previous recipe, we learned how to set up dependencies for creating RESTful services and launched our backend on the http://localhost:8080 endpoint, where we got a 404 error because the application wasn't configured to handle requests at that path (/). We will start from that point and learn how to create a REST controller. Let's get started!

We will be using IntelliJ IDEA for coding purposes. For setting up the environment, refer to the previous recipe. You can also find the source in the repository at https://gitlab.com/aanandshekharroy/kotlin-webservices.

How to do it…

In this recipe, we will create a REST controller that fetches information about students in a college.
We will keep things simple by using a list as an in-memory database. Let's first create a Student class with roll number and name properties:

```kotlin
package college

class Student() {
    lateinit var roll_number: String
    lateinit var name: String

    constructor(roll_number: String, name: String) : this() {
        this.roll_number = roll_number
        this.name = name
    }
}
```

Next, we will create the StudentDatabase class, which will act as a database for the application:

```kotlin
@Component
class StudentDatabase {
    private val students = mutableListOf<Student>()
}
```

Note that we have annotated the StudentDatabase class with @Component, which means its lifecycle will be controlled by Spring (because we want it to act as a database for our application). Since it's an in-memory database that is destroyed when the application closes, we would like the database to be filled whenever the application launches. For that we create a @PostConstruct-annotated init method, which adds a few items into the "database" at startup time:

```kotlin
@PostConstruct
private fun init() {
    students.add(Student("2013001", "Aanand Shekhar Roy"))
    students.add(Student("2013165", "Rashi Karanpuria"))
}
```

Now, we will create a few other methods that will help us deal with our database:

getStudents: Gets the list of students present in our database:

```kotlin
fun getStudents() = students
```

addStudent: This method will add a student to our database:

```kotlin
fun addStudent(student: Student): Boolean {
    students.add(student)
    return true
}
```

getStudentWithRollNumber: We will also need a lookup by roll number for an endpoint we create later (the method is used below but was missing from this excerpt):

```kotlin
// Returns the first student whose roll number matches, or null.
fun getStudentWithRollNumber(roll_number: String) =
    students.firstOrNull { it.roll_number == roll_number }
```

Now let's put this database to use. We will create a REST controller that handles requests. We will create a StudentController and annotate it with @RestController. Using @RestController is simple, and it's the preferred method for creating MVC RESTful web services. Once created, we need to provide our database using Spring dependency injection, for which we will need the @Autowired annotation.
Here's how our StudentController looks:

```kotlin
@RestController
class StudentController {
    @Autowired
    private lateinit var database: StudentDatabase
}
```

Now we will set our response for the / path: we will show the list of students in our database. For that, we simply create a method that lists the students, annotated with @RequestMapping and given parameters such as the path and the request method (GET, POST, and so on):

```kotlin
@RequestMapping("", method = arrayOf(RequestMethod.GET))
fun students() = database.getStudents()
```

This is what our controller looks like now. It is a simple REST controller:

```kotlin
package college

import org.springframework.beans.factory.annotation.Autowired
import org.springframework.web.bind.annotation.RequestMapping
import org.springframework.web.bind.annotation.RequestMethod
import org.springframework.web.bind.annotation.RestController

@RestController
class StudentController {
    @Autowired
    private lateinit var database: StudentDatabase

    @RequestMapping("", method = arrayOf(RequestMethod.GET))
    fun students() = database.getStudents()
}
```

Now when you restart the server and go to http://localhost:8080, you will see the list of students as the response. As you can see, Spring is intelligent enough to provide the response in JSON format, which makes it easy to design APIs.

Now let's create another endpoint that fetches a student's details by roll number:

```kotlin
@GetMapping("/student/{roll_number}")
fun studentWithRollNumber(
        @PathVariable("roll_number") roll_number: String) =
    database.getStudentWithRollNumber(roll_number)
```

Now, if you try the http://localhost:8080/student/2013001 endpoint, you will see the following output:

{"roll_number":"2013001","name":"Aanand Shekhar Roy"}

Next, we will try to add a student to the database.
We will be doing it via the POST method:

```kotlin
@RequestMapping("/add", method = arrayOf(RequestMethod.POST))
fun addStudent(@RequestBody student: Student) =
    if (database.addStudent(student)) student
    else throw Exception("Something went wrong")
```

There's more…

So far, our server has been dependent on the IDE. We would definitely want to make it IDE-independent. Thanks to Gradle, it is very easy to create a runnable JAR:

```shell
$ ./gradlew clean bootRepackage
```

The preceding command is platform independent and uses the Gradle build system to build the application. Now you just need to type the following command to run it:

```shell
$ java -jar build/libs/gs-rest-service-0.1.0.jar
```

You can then see the following output, as before:

Started AppKt in 4.858 seconds (JVM running for 5.548)

This means your server is running successfully.

Creating the Application class for Spring Boot

The SpringApplication class is used to bootstrap our application. We've used it in the previous recipes; in this recipe, we will see how to create the Application class for Spring Boot. We will be using IntelliJ IDEA for coding purposes. To set up the environment, read the previous recipes, especially the Setting up dependencies for building RESTful services recipe.

How to do it…

If you've used Spring Boot before, you must be familiar with using @Configuration, @EnableAutoConfiguration, and @ComponentScan in your main class. These were used so frequently that Spring Boot provides a convenient @SpringBootApplication alternative. Spring Boot looks for the public static main method, and we will use a top-level function outside the Application class. As you may have noted while setting up the dependencies, we used the kotlin-spring plugin, hence we don't need to make the Application class open.
Here's an example of the Spring Boot application:

```kotlin
package college

import org.springframework.boot.SpringApplication
import org.springframework.boot.autoconfigure.SpringBootApplication

@SpringBootApplication
class Application

fun main(args: Array<String>) {
    SpringApplication.run(Application::class.java, *args)
}
```

The Spring Boot application executes the static run() method, which takes two parameters and starts an autoconfigured Tomcat web server when the Spring application is started. When everything is set, you can start the application by executing the following command:

```shell
$ ./gradlew bootRun
```

If everything goes well, you will see console output ending with the message Started AppKt in xxx seconds, which means that your application is up and running. In order to run it as an independent server, you need to create a JAR and then execute it as follows:

```shell
$ ./gradlew clean bootRepackage
$ java -jar build/libs/gs-rest-service-0.1.0.jar
```

We learned how to set up dependencies for building RESTful services, create a REST controller, and create the Application class for Spring Boot. If you are interested in learning more about Kotlin, be sure to check out the 'Kotlin Programming Cookbook'.
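The controller in this tutorial is a thin layer over the list-backed student store. For readers following the data flow rather than the Spring specifics, here is a minimal sketch of the same store in Python. It is not part of the original tutorial; the class and method names simply mirror the Kotlin ones (getStudents, addStudent, getStudentWithRollNumber):

```python
from dataclasses import dataclass
from typing import List, Optional


@dataclass
class Student:
    roll_number: str
    name: str


class StudentDatabase:
    """In-memory store, playing the role of the @Component-annotated Kotlin class."""

    def __init__(self) -> None:
        # Mirrors the @PostConstruct init(): seed the "database" at startup.
        self._students: List[Student] = [
            Student("2013001", "Aanand Shekhar Roy"),
            Student("2013165", "Rashi Karanpuria"),
        ]

    def get_students(self) -> List[Student]:
        return list(self._students)

    def add_student(self, student: Student) -> bool:
        self._students.append(student)
        return True

    def get_student_with_roll_number(self, roll_number: str) -> Optional[Student]:
        # Linear scan, matching the simple list-backed "database".
        return next((s for s in self._students if s.roll_number == roll_number), None)


db = StudentDatabase()
found = db.get_student_with_roll_number("2013001")
print(found.name)  # prints "Aanand Shekhar Roy"
```

The REST layer then only maps paths to these three calls; Spring's automatic JSON response corresponds to serializing the fields of the returned object.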

Network programming 101 with GAWK (GNU AWK)

Pavan Ramchandani
31 May 2018
12 min read
In today's tutorial, we will learn about networking in GAWK: working with TCP/IP on both the client side and the server side, and exploring HTTP services to help you get going with networking in AWK. This tutorial is an excerpt from a book written by Shiwang Kalkhanda, titled Learning AWK Programming.

The AWK programming language was developed as a pattern-matching language for text manipulation; however, GAWK has advanced features, such as file-like handling of network connections. We can perform simple TCP/IP connection handling in GAWK with the help of special filenames. GAWK extends the two-way I/O mechanism used with the |& operator to simple networking using these special filenames, which hide the complex details of socket programming from the programmer.

The special filename for network communication is made up of multiple fields, all of which are mandatory. The following is the syntax of a filename for network communication:

/net-type/protocol/local-port/remote-host/remote-port

Each field is separated from the next by a forward slash. Specifying all of the fields is mandatory; if a field does not apply to the protocol, or you want the system to pick a default value for it, set it to 0. The fields have the following meanings:

net-type: Its value is inet4 for IPv4, inet6 for IPv6, or inet to use the system default (which is generally IPv4).

protocol: Either tcp or udp, for a TCP or UDP connection. It is advised you use TCP for networking; UDP is used when low overhead is a priority.

local-port: Decides which port on the local machine is used for communication with the remote system. On the client side, its value is generally set to 0, indicating that any free port may be picked by the system itself.
On the server side, its value is other than 0, because the service is provided on a specific, publicly known port number or service name, such as http, smtp, and so on.

remote-host: The remote host at the other end of the connection. On the server side, its value is set to 0, indicating that the server is open for connections from all other hosts. On the client side, its value is fixed to one remote host and hence is always different from 0. The host can be given symbolically, such as www.google.com, or numerically, such as 123.45.67.89.

remote-port: The port on which the remote machine will communicate across the network. For clients, its value is other than 0, indicating which port they are connecting to on the remote machine. For servers, it is the port on which they want the connection from the client to be established. We can use a service name here, such as ftp or http, or a port number, such as 80, 21, and so on.

TCP client and server (/inet/tcp)

TCP guarantees that data is received at the other end, and in the same order as it was transmitted, so always use TCP where reliability matters. In the following example, we will create a TCP server (sender) that sends the server's current date and time to the client. The server prints the output of the strftime() function to the special file with the coprocess operator, listening on port 8080. The remote host and remote port could be any client, so both are set to 0. The server connection is then closed by passing the special filename to the close() function. Create tcpserver.awk with the following contents:

```awk
#TCP-Server
BEGIN {
    print strftime() |& "/inet/tcp/8080/0/0"
    close("/inet/tcp/8080/0/0")
}
```

Now, open one terminal and run this program before running the client program, as follows:

```shell
$ awk -f tcpserver.awk
```

Next, we create the TCP client (receiver) to receive the data sent by the server. Here, we first create the client connection and read the received data with getline, using the coprocess operator.
Here, the local-port value is set to 0 so it is chosen automatically by the system, the remote-host is set to localhost, and the remote-port is set to the TCP server's port, 8080. After that, the received message is printed with print $0, and finally the client connection is closed with close(). Create tcpclient.awk as follows:

```awk
#TCP-client
BEGIN {
    "/inet/tcp/0/localhost/8080" |& getline
    print $0
    close("/inet/tcp/0/localhost/8080")
}
```

Now, execute the client program in another terminal:

```shell
$ awk -f tcpclient.awk
```

The output of the previous code is as follows:

Fri Feb 9 09:42:22 IST 2018

UDP client and server (/inet/udp)

Server and client programs that use the UDP protocol are almost identical to their TCP counterparts, the only difference being that the protocol field is changed from tcp to udp. One addition has been made to the client program: the client sends the message "hello from client!" to the server, and the server reads it back. So, when we execute these programs, the terminal where udpclient.awk runs prints the remote system's date and time, and the terminal where udpserver.awk runs prints the hello message from the client. The UDP server and client can be written as follows:

```awk
#UDP-Server
BEGIN {
    print strftime() |& "/inet/udp/8080/0/0"
    "/inet/udp/8080/0/0" |& getline
    print $0
    close("/inet/udp/8080/0/0")
}
```

```shell
$ awk -f udpserver.awk
```

```awk
#UDP-client
BEGIN {
    print "hello from client!" |& "/inet/udp/0/localhost/8080"
    "/inet/udp/0/localhost/8080" |& getline
    print $0
    close("/inet/udp/0/localhost/8080")
}
```

```shell
$ awk -f udpclient.awk
```

Note that GAWK can only open direct sockets. Currently, there is no way to access services available over an SSL connection, such as https, smtps, pop3s, imaps, and so on.

Reading a web page using HttpService

To read a web page, we use the Hypertext Transfer Protocol (HTTP) service, which runs on port number 80.
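Before moving on, it may help to see the TCP exchange above, a one-shot server writing a timestamp and a client reading it, outside of gawk's special-filename notation. The following Python sketch is not from the tutorial; it mirrors the same roles (a fixed, listening server port versus a system-chosen client port, the roles the /inet/tcp/0/localhost/8080 fields encode) using the standard socket module. An ephemeral port (0) is used instead of 8080 so the sketch cannot collide with a busy port:

```python
import socket
import threading
import time

# Server side: bind a listening socket. Port 0 asks the OS for a free port
# (the gawk example hardcodes 8080 instead).
srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.bind(("127.0.0.1", 0))
srv.listen(1)
port = srv.getsockname()[1]


def serve_once():
    # One-shot date server, like the gawk BEGIN rule: accept a single
    # connection, send a strftime()-style timestamp, then shut down.
    conn, _ = srv.accept()
    with conn:
        conn.sendall(time.strftime("%a %b %d %H:%M:%S %Y").encode() + b"\n")
    srv.close()


t = threading.Thread(target=serve_once)
t.start()

# Client side: local port chosen by the system, remote host/port fixed —
# the counterpart of "/inet/tcp/0/localhost/<port>".
with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as cli:
    cli.connect(("127.0.0.1", port))
    message = cli.recv(1024).decode().strip()

t.join()
print(message)
```

The printed line corresponds to the "Fri Feb 9 09:42:22 IST 2018" output the gawk client produces.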
First, we redefine the record separators RS and ORS, because HTTP requires CR-LF ("\r\n") to separate lines. The program connects to the IP address 35.164.82.168 (www.grymoire.com), a static website, and issues a GET request for the web page http://35.164.82.168/Unix/donate.html. GET is the HTTP method that tells the web server to transmit the web page donate.html. The response is read with getline, using the coprocess operator, and printed to the screen line by line in a while loop. Finally, we close the HTTP service connection. The following is the program to retrieve the web page:

```awk
#view_webpage.awk
BEGIN {
    RS = ORS = "\r\n"
    http = "/inet/tcp/0/35.164.82.168/80"
    print "GET http://35.164.82.168/Unix/donate.html" |& http
    while ((http |& getline) > 0)
        print $0
    close(http)
}
```

```shell
$ awk -f view_webpage.awk
```

Upon executing the program, it fills the screen with the source code of the page, as follows:

```html
<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.0 Transitional//EN">
<HTML lang="en-US">
<HEAD>
<TITLE> Welcome to The UNIX Grymoire!</TITLE>
<meta name="keywords" content="grymoire, donate, unix, tutorials, sed, awk">
<META NAME="Description" CONTENT="Please donate to the Unix Grymoire" >
<meta http-equiv="Content-Type" content="text/html; charset=utf-8">
<link href="myCSS.css" rel="stylesheet" type="text/css">
<!-- Place this tag in your head or just before your close body tag -->
<script type="text/javascript" src="https://apis.google.com/js/plusone.js"></script>
<link rel="canonical" href="http://www.grymoire.com/Unix/donate.html">
<link href="myCSS.css" rel="stylesheet" type="text/css">
........
........
```

Profiling in GAWK

Profiling of code is done for code optimization. In GAWK, we can profile a program by running GAWK with the --profile option. On execution, this creates a file with the name awkprof.out.
Since GAWK is performing profiling of the code, program execution is up to 45% slower than the speed at which GAWK normally executes. Let's understand profiling by looking at an example.

In the following example, we create a program that has four functions: two arithmetic functions, one function that prints an array, and one function that calls all of them. Our program also contains two BEGIN and two END blocks: first a BEGIN and an END block, then a pattern-action rule, then the second BEGIN and END block, as follows:

```awk
#codeprof.awk
func z_array() {
    arr[30] = "volvo"
    arr[10] = "bmw"
    arr[20] = "audi"
    arr[50] = "toyota"
    arr["car"] = "ferrari"
    n = asort(arr)
    print "Array begins...!"
    print "====================="
    for (v in arr)
        print v, arr[v]
    print "Array Ends...!"
    print "====================="
}

function mul(num1, num2) {
    result = num1 * num2
    printf ("Multiplication of %d * %d : %d\n", num1, num2, result)
}

function all() {
    add(30, 10)
    mul(5, 6)
    z_array()
}

BEGIN {
    print "First BEGIN statement"
    print "====================="
}

END {
    print "First END statement "
    print "====================="
}

/maruti/ { print $0 }

BEGIN {
    print "Second BEGIN statement"
    print "====================="
    all()
}

END {
    print "Second END statement"
    print "====================="
}

function add(num1, num2) {
    result = num1 + num2
    printf ("Addition of %d + %d : %d\n", num1, num2, result)
}
```

```shell
$ awk --profile -f codeprof.awk cars.dat
```

The output of the previous code is as follows:

```
First BEGIN statement
=====================
Second BEGIN statement
=====================
Addition of 30 + 10 : 40
Multiplication of 5 * 6 : 30
Array begins...!
=====================
1 audi
2 bmw
3 ferrari
4 toyota
5 volvo
Array Ends...!
=====================
maruti swift 2007 50000 5
maruti dezire 2009 3100 6
maruti swift 2009 4100 5
maruti esteem 1997 98000 1
First END statement 
=====================
Second END statement
=====================
```

Execution of the previous program also creates a file with the name awkprof.out. If we want to give this profile file a custom name, we can specify the filename as an argument to the --profile option:

```shell
$ awk --profile=codeprof.prof -f codeprof.awk cars.dat
```

Now, upon execution of the preceding command, we get a new file with the name codeprof.prof. Let's try to understand the contents of codeprof.prof:

```
# gawk profile, created Fri Feb 9 11:01:41 2018

# BEGIN rule(s)

BEGIN {
     1  print "First BEGIN statement"
     1  print "====================="
}

BEGIN {
     1  print "Second BEGIN statement"
     1  print "====================="
     1  all()
}

# Rule(s)

    12  /maruti/ { # 4
     4  print $0
}

# END rule(s)

END {
     1  print "First END statement "
     1  print "====================="
}

END {
     1  print "Second END statement"
     1  print "====================="
}

# Functions, listed alphabetically

     1  function add(num1, num2)
{
     1  result = num1 + num2
     1  printf "Addition of %d + %d : %d\n", num1, num2, result
}

     1  function all()
{
     1  add(30, 10)
     1  mul(5, 6)
     1  z_array()
}

     1  function mul(num1, num2)
{
     1  result = num1 * num2
     1  printf "Multiplication of %d * %d : %d\n", num1, num2, result
}

     1  function z_array()
{
     1  arr[30] = "volvo"
     1  arr[10] = "bmw"
     1  arr[20] = "audi"
     1  arr[50] = "toyota"
     1  arr["car"] = "ferrari"
     1  n = asort(arr)
     1  print "Array begins...!"
     1  print "====================="
     5  for (v in arr) {
     5      print v, arr[v]
        }
     1  print "Array Ends...!"
     1  print "====================="
}
```

This profiling example illustrates several basic features of profiling in GAWK. Read from top to bottom, the file shows the order in which the program's rules are executed.
First, the BEGIN rules are listed, followed by the BEGINFILE rules, if any. Then the pattern-action rules are listed. Thereafter, the ENDFILE rules and END rules are printed. Finally, the functions are listed in alphabetical order. Multiple BEGIN and END rules retain their places as separate identities; the same is true for BEGINFILE and ENDFILE rules. Pattern-action rules carry two counts. The first number, to the left of the rule, tells how many times the rule's pattern was tested against the input records. The second number, to the right of the rule's opening left brace (shown as a comment), tells how many times the rule's action was executed, that is, how many times the pattern evaluated to true. The difference between the two indicates how many times the rule's pattern evaluated to false. For an if-else statement, the number shows how many times the condition was tested; the count to the right of the opening left brace of its body shows how many times the condition was true, and the count for the else statement tells how many times the test failed. The count at the beginning of a loop header (a for or while loop) shows how many times the loop's conditional expression was executed. In user-defined functions, the count before the function keyword tells how many times the function was called, and the counts next to the statements in the body show how many times those statements were executed. The layout of each block uses C-style tabs for code alignment, and braces mark the opening and closing of each code block, as in C. Parentheses are used according to the precedence rules and the structure of the program, but only when needed. printf and print statement arguments are enclosed in parentheses only if the statement is followed by a redirection. GAWK also inserts leading comments before rules (before BEGIN and END rules, BEGINFILE and ENDFILE rules, and pattern-action rules) and before functions. In short, GAWK produces a standard representation of the program in its profiled version.
GAWK also accepts another option, --pretty-print. The following is an example of pretty-printing an AWK program:

$ awk --pretty-print -f codeprof.awk cars.dat

When GAWK is called with --pretty-print, the program generates awkprof.out, but this time without any execution counts in the output. Pretty-printed output also preserves any comments from the original program, while the --profile option omits them. The file created on execution of the program with the --pretty-print option is as follows:

# gawk profile, created Fri Feb 9 11:04:19 2018

# BEGIN rule(s)

BEGIN {
    print "First BEGIN statement"
    print "====================="
}

BEGIN {
    print "Second BEGIN statement"
    print "====================="
    all()
}

# Rule(s)

/maruti/ {
    print $0
}

# END rule(s)

END {
    print "First END statement "
    print "====================="
}

END {
    print "Second END statement"
    print "====================="
}

# Functions, listed alphabetically

function add(num1, num2) {
    result = num1 + num2
    printf "Addition of %d + %d : %d\n", num1, num2, result
}

function all() {
    add(30, 10)
    mul(5, 6)
    z_array()
}

function mul(num1, num2) {
    result = num1 * num2
    printf "Multiplication of %d * %d : %d\n", num1, num2, result
}

function z_array() {
    arr[30] = "volvo"
    arr[10] = "bmw"
    arr[20] = "audi"
    arr[50] = "toyota"
    arr["car"] = "ferrari"
    n = asort(arr)
    print "Array begins...!"
    print "====================="
    for (v in arr) {
        print v, arr[v]
    }
    print "Array Ends...!"
    print "====================="
}

To summarize, we looked at the basics of network programming and GAWK's built-in command line debugger. Do check out the book Learning AWK Programming to know more about the intricacies of AWK programming for text processing.
Richard Gall
31 May 2018
3 min read

A really basic guide to batch file programming

Batch file programming is a way of making a computer do things simply by creating, yes, you guessed it, a batch file. It's a way of doing things you might ordinarily do in the command prompt, but automates some tasks, which means you don't have to write so much code. If it sounds straightforward, that's because it is, generally. Which is why it's worth learning... Batch file programming is a good place to start learning how computers work Of course, if you already know your way around batch files, I'm sure you'll agree it's a good way for someone relatively experienced in software to get to know their machine a little better. If you know someone that you think would get a lot from learning batch file programming share this short guide with them! Why would I write a batch script? There are a number of reasons you might write batch scripts. It's particularly useful for resolving network issues, installing a number of programs on different machines, even organizing files and folders on your computer. Imagine you have a recurring issue - with a batch file you can solve it quickly and easily wherever you are without having to write copious lines of code in the command line. Or maybe your desktop simply looks like a mess; with a little knowledge of batch file programming you can clean things up without too much effort. How to write a batch file Clearly, batch file programming can make your life a lot easier. Let's take a look at the key steps to begin writing batch scripts. Step 1: Open your text editor Batch file programming is really about writing commands - so you'll need your text editor open to begin. Notepad, wordpad, it doesn't matter! Step 2: Begin writing code As we've already seen, batch file programming is really about writing commands for your computer. The code is essentially the same as what you would write in the command prompt. 
Here are a few batch file commands you might want to know to get started:

ipconfig - presents network information such as your IP and MAC address.
start "" [website] - opens a specified website in your browser.
rem - used to make a comment or remark in your code (i.e. for documentation purposes).
pause - as you'd expect, pauses the script so it can be read before it continues.
echo - displays text in the command prompt.
%%a - a parameter variable, used with the for command to refer to each file or item in a given set.
if - performs conditional processing.

The list of batch file commands is pretty long. There are plenty of other resources with an exhaustive list of commands you can use, but a good place to begin is this page on Wikipedia.

Step 3: Save your batch file

Once you've written your commands in the text editor, you'll then need to save your document as a batch file. Title it, and suffix it with the .bat extension. You'll also need to make sure save as type is set as 'All files'. That's basically it when it comes to batch file programming. Of course, there are some complex things you can do, but once you know the basics, getting into the code is where you can start to experiment.
Sugandha Lahoti
31 May 2018
11 min read

Tips and tricks to optimize your responsive web design

Loosely put, website optimization refers to the activities and processes that improve your website's user experience and visibility while reducing the costs associated with hosting your website. In this article, we will learn tips and techniques for client-side optimization. This article is an excerpt from Mastering Bootstrap 4 - Second Edition by Benjamin Jakobus, and Jason Marah. In this book, you will learn to build a customized Bootstrap website from scratch, optimize your website and integrate it with third-party frameworks. CSS optimization Before we even consider compression, minification, and file concatenation, we should think about the ways in which we can simplify and optimize our existing style sheet without using third-party tools. Of course, we should have striven for an optimal style sheet, to begin with, and in many aspects we did. However, our style sheet still leaves room for improvement. Inline styles are bad After reading this article, if you only remember one thing, then let it be that inline styles are bad. Period. Avoid using them whenever possible. Why? That's because not only will they make your website impossible to maintain as the website grows, they also take up precious bytes as they force you to repeat the same rules over and over. 
Consider the following code piece:

<div class="carousel-inner" role="listbox">
    <div style="height: 400px" class="carousel-item active">
        <img class="d-block img-fluid" src="images/brazil.png"
             data-modal-picture="#carousel-modal">
        <div class="carousel-caption">
            Brazil
        </div>
    </div>
    <div style="height: 400px" class="carousel-item">
        <img class="d-block img-fluid" src="images/datsun.png"
             data-modal-picture="#carousel-modal">
        <div class="carousel-caption">
            Datsun 260Z
        </div>
    </div>
    <div style="height: 400px" class="carousel-item">
        <img class="d-block img-fluid" src="images/skydive.png"
             data-modal-picture="#carousel-modal">
        <div class="carousel-caption">
            Skydive
        </div>
    </div>
</div>

Note how the rule for defining an item's height, style="height: 400px", is repeated three times, once for each of the three items. That's an additional 21 characters (or 21 bytes, assuming that our document is UTF-8) for each additional image. Multiplying 3*21 gives us 63 bytes, and 21 more bytes for every new image that you want to add. Not to mention that if you ever want to update the height of the images, you will need to manually update the style attribute for every single image. The solution is, of course, to replace the inline styles with an appropriate class.
Let's go ahead and define a class that can be applied to any carousel item:

.carousel-item {
    height: 400px;
}

Now let's go ahead and remove the style rules:

<div class="carousel-inner" role="listbox">
    <div class="carousel-item active">
        <img class="d-block img-fluid" src="images/brazil.png"
             data-modal-picture="#carousel-modal">
        <div class="carousel-caption">
            Brazil
        </div>
    </div>
    <div class="carousel-item">
        <img class="d-block img-fluid" src="images/datsun.png"
             data-modal-picture="#carousel-modal">
        <div class="carousel-caption">
            Datsun 260Z
        </div>
    </div>
    <div class="carousel-item">
        <img class="d-block img-fluid" src="images/skydive.png"
             data-modal-picture="#carousel-modal">
        <div class="carousel-caption">
            Skydive
        </div>
    </div>
</div>

That's great! Not only is our CSS now easier to maintain, but we also shaved 29 bytes off our website (the original inline styles required 63 bytes; our new class definition, however, requires only 34 bytes). Yes, this does not seem like much, especially in the world of high-speed broadband, but remember that your website will grow and every byte adds up.

Avoid long identifiers and class names

The longer your strings, the larger your files. It's a no-brainer. As such, long identifier and class names naturally increase the size of your web page. Of course, extremely short class or identifier names tend to lack meaning and therefore will make it more difficult (if not impossible) to maintain your page. As such, one should strive for an ideal balance between length and expressiveness. Of course, even better than shortening identifiers is removing them altogether. One handy technique for removing these is to use hierarchical selection. Have a look at an events pagination code piece.
For example, we are using the services-events-content identifier within our pagination logic, as follows:

$('#services-events-pagination').bootpag({
    total: 10
}).on("page", function(event, num){
    $('#services-events-content div').hide();
    var current_page = '#page-' + num;
    $(current_page).show();
});

To denote the services content, we broke the name of our identifier into three parts, namely, services, events, and content. Our markup is as follows:

<div id="services-events-content">
    <div id="page-1">
        <h3>My Sample Event #1</h3>
        ...
    </div>
</div>

Let's try and get rid of this identifier altogether by observing two characteristics of our Events section: The services-events-content element is an indirect descendant of a div with the id services-events. We cannot remove this id as it is required for the menu to work. The element with the id services-events-content is itself a div. If we were to remove its id, we could also remove the entire div. As such, we do not need a second identifier to select the pages that we wish to hide. Instead, all that we need to do is select the div within the div that is within the div that is assigned the id services-events. How do we express this as a CSS selector? It's easy—use #services-events div div div. As such, our pagination logic is updated as follows:

$('#services-events-pagination').bootpag({
    total: 10
}).on("page", function(event, num){
    $('#services-events div div div').hide();
    var current_page = '#page-' + num;
    $(current_page).show();
});

Now, save and refresh. What's that? As you clicked on a page, the pagination control disappeared; that's because we are now hiding all div elements that are two div elements down from the element with the id services-events. Move the pagination control div outside its parent element.
Our markup should now look as follows:

<div role="tabpanel" class="tab-pane active" id="services-events">
    <div class="container">
        <div class="row">
            <div id="page-1">
                <h3>My Sample Event #1</h3>
                <h3>My Sample Event #2</h3>
            </div>
            <div id="page-2">
                <h3>My Sample Event #3</h3>
            </div>
        </div>
        <div id="services-events-pagination"></div>
    </div>
</div>

Now save and refresh. That's better! Last but not least, let's update the CSS. Take the following code into consideration:

#services-events-content div {
    display: none;
}

#services-events-content div img {
    margin-top: 0.5em;
    margin-right: 1em;
}

#services-events-content {
    height: 15em;
    overflow-y: scroll;
}

Replace this code with the following:

#services-events div div div {
    display: none;
}

#services-events div div div img {
    margin-top: 0.5em;
    margin-right: 1em;
}

#services-events div div div {
    height: 15em;
    overflow-y: scroll;
}

That's it, we have simplified our style sheet and saved some bytes in the process! However, we have not really improved the performance of our selector. jQuery executes selectors from right to left, hence executing the last selector first. In this example, jQuery will first scan the complete DOM to discover all div elements (last selector executed first) and then apply a filter to return only those elements that are div, with a div parent, and then select only the ones with the id services-events as an ancestor. While we can't really improve the performance of the selector in this case, we can still simplify our code further by adding a class to each page:

<div id="page-1" class="page">...</div>
<div id="page-2" class="page">...</div>
<div id="page-3" class="page">...</div>

Then, all we need to do is select by the given class: $('#services-events div.page').hide();.
Alternatively, knowing that this is equal to the DOM element within the .on callback, we can do the following in order to prevent jQuery from iterating through the whole DOM:

$(this).parents('#services-events').find('.page').hide();

The final code will look as follows:

$('#services-events-pagination').bootpag({
    total: 10
}).on("page", function(event, num) {
    $(this).parents('#services-events').find('.page').hide();
    $('#page-' + num).show();
});

Note a micro-optimization in the preceding code: there was no need for us to create that var in memory. Hence, the last line changes to $('#page-' + num).show();.

Use Shorthand rules when possible

According to the Mozilla Developer Network (shorthand properties, Mozilla Developer Network, https://developer.mozilla.org/en-US/docs/Web/CSS/Shorthand_properties, accessed November 2015), shorthand properties are: "CSS properties that let you set the values of several other CSS properties simultaneously. Using a shorthand property, a Web developer can write more concise and often more readable style sheets, saving time and energy." – Mozilla Developer Network, 2015

Unless strictly necessary, we should never be using longhand rules. When possible, shorthand rules are always the preferred option. Besides the obvious advantage of saving precious bytes, shorthand rules also help increase your style sheet's maintainability. For example, border: 20px dotted #FFF is equivalent to three separate rules:

border-style: dotted;
border-width: 20px;
border-color: #FFF;

Group selectors

Organizing selectors into groups will arguably also save some bytes.
.navbar-myphoto .dropdown-menu > a:hover {
    color: gray;
    background-color: #504747;
}

.navbar-myphoto .dropdown-menu > a:focus {
    color: gray;
    background-color: #504747;
}

.navbar-myphoto .dropdown-menu > .active > a:focus {
    color: gray;
    background-color: #504747;
}

Note how each of the three selectors contains the same declarations, that is, the color and background-color properties are set to the exact same values for each selector. To prevent us from repeating these declarations, we should simply group them (reducing the code from 274 characters to 181 characters):

.navbar-myphoto .dropdown-menu > a:hover,
.navbar-myphoto .dropdown-menu > a:focus,
.navbar-myphoto .dropdown-menu > .active > a:focus {
    color: gray;
    background-color: #504747;
}

Voilà! We just saved 93 bytes! (assuming UTF-8 encoding).

Rendering times

When optimizing your style rules, the number of bytes should not be your only concern. In fact, it comes secondary to the rendering time of your web page. CSS rules affect the amount of work that is required by the browser to render your page. As such, some rules are more expensive than others. For example, changing the color of an element is cheaper than changing its margin. The reason for this is that a change in color only requires your browser to draw the new pixels. While drawing itself is by no means a cheap operation, changing the margin of an element requires much more effort. Your browser needs to both recalculate the page layout and also draw the changes. Optimizing your page's rendering times is a complex topic, and as such is beyond the scope of this post. However, we recommend that you take a look at http://csstriggers.com/. This site provides a concise overview of the costs involved when updating a given CSS property.

Minifying CSS and JavaScript

Now it is time to look into minification. Minification is the process of removing redundant characters from a file without altering the actual information contained within it.
In other words, minifying the CSS file will reduce its overall size, while leaving the actual CSS style rules intact. This is achieved by stripping out any whitespace characters within our file. Stripping out whitespace characters has the obvious result that our CSS is now practically unreadable and impossible to maintain. As such, minified style sheets should only be used when serving a page (that is, during production), and not during development. Clearly, minifying your style sheet manually would be an incredibly time-consuming (and hence pointless) task. Therefore, there exist many tools that will do the job for us. One such tool is npm minifier. Visit https://www.npmjs.com/package/minifier for more. Let's go ahead and install it:

sudo npm install -g minifier

Once installed, we can minify our style sheet by typing the following command:

minify path-to-myphoto.css

Here, path-to-myphoto.css represents the path to the MyPhoto style sheet. Go ahead and execute the command. Once minification is complete, you should see the Minification complete message. A new CSS file (myphoto.min.css) will have been created inside the directory containing the myphoto.css file. The new file should be 2,465 bytes. Our original myphoto.css file is 3,073 bytes. Minifying our style sheet just reduced the number of bytes to send by roughly 19%! We touched upon the basics of website optimization and testing. In the follow-up article, we will see how to use the build tool Grunt to automate the more common and mundane optimization tasks. To build responsive, dynamic, and mobile-first applications on the web with Bootstrap 4, check out the book Mastering Bootstrap 4 - Second Edition.
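To make the stripping step concrete, here is a toy minifier, illustrated in Java. It is a sketch only: the class name and regular expressions are our own assumptions, not the npm minifier's implementation, and real minifiers also handle strings, url() values, and many edge cases that this ignores.

```java
// Toy CSS minifier -- an illustration of what minification does, not the
// npm minifier's implementation. Real tools handle comments inside
// strings, url() values, and other edge cases this sketch ignores.
public class CssMinifySketch {

    public static String minify(String css) {
        return css
            .replaceAll("/\\*[\\s\\S]*?\\*/", "")   // strip /* ... */ comments
            .replaceAll("\\s+", " ")                // collapse runs of whitespace
            .replaceAll("\\s*([{}:;,])\\s*", "$1")  // drop spaces around punctuation
            .replaceAll(";}", "}")                  // drop the last semicolon of a block
            .trim();
    }

    public static void main(String[] args) {
        String css = "/* demo */\n"
            + ".navbar-myphoto .dropdown-menu > a:hover {\n"
            + "  color: gray;\n"
            + "  background-color: #504747;\n"
            + "}\n";
        // prints: .navbar-myphoto .dropdown-menu > a:hover{color:gray;background-color:#504747}
        System.out.println(minify(css));
    }
}
```

The output is exactly why minified style sheets are unreadable: every byte of optional whitespace is gone, so they should only ever be served, never edited.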
Neil Aitken
31 May 2018
9 min read

We must change how we think about AI, urge AI founding fathers

In Manhattan, nearly 15,000 taxis make around 30 journeys each, per day. That's nearly half a million paid trips. The yellow cabs are part of the never-ending, slow progression of vehicles which churn through the streets of New York. The good news is, after a century of worsening traffic, congestion is about to be ameliorated, at least to a degree. Researchers at MIT announced this week that they have developed an algorithm to optimise the way taxis find their customers. Their product is allegedly so efficient, it can reduce the required number of cabs (for now, the ones with human drivers) in Manhattan by a third. That's a non-trivial improvement. The trick, apparently, is to use the cabs as a hustler might cue the ball in pool – lining up the next pickup to start where the last drop-off ended. The technology behind the improvement offered by the MIT research team is the same one that is behind most of the incredible technology news stories of the last 3 years – Artificial Intelligence. AI is now a part of most of the digital interactions we have. It fuels the recommendation engines in YouTube, Spotify, and Netflix. It shows you products you might like in Google's search results and on Amazon's homepage. Undoubtedly, AI is the hot topic of the time – as you cannot possibly have failed to notice.

How AI was created – and nearly died

AI was, until recently, a long-forgotten scientific curiosity, employed seriously only in sci-fi movies. The technology fell into a 'winter' – a time when AI-related projects couldn't get funding and decision makers had given up on the technology – in the late 1980s. It was at that time that much of the fundamental work which underpins today's AI, concepts like neural networks and backpropagation, was codified. Artificial Intelligence is now enjoying a rebirth. Almost every new idea funded by venture capitalists has AI baked in.
The potential excites business owners, especially those involved in the technology sphere, and scares governments in equal measure. It offers better profits and the potential for mass unemployment as if they are two sides of the same coin. It is a once-in-a-generation technology improvement, similar to air conditioning, the mass-produced motor car and the smartphone, in that it can be applied to all aspects of the economy at the same time. Just as the iPhone has propelled telecommunications technology forward, and created billions of dollars of sales for phone companies selling mobile data plans, AI is fueling totally new businesses and making existing operations significantly more efficient. Behind the fanfare associated with AI, however, lies a simple truth. Today's AI algorithms use what's called 'narrow' or 'domain specific' intelligence. In simple terms, each current AI implementation is specific to the job it is given. IBM trained their AI system 'Watson' to beat human contestants at 'Jeopardy!' When Google want to build an 'AI product' that can be used to beat a living counterpart at the Chinese board game 'Go', they create a new AI system. And so on. A new task requires a new AI system.

Judea Pearl, inventor of Bayesian networks and Turing Awardee

On AI systems that can move from predicting what will happen to what will cause something

Now, one of the people behind those original concepts from the 1980s, which underpin today's AI solutions, is back with an even bigger idea which might push AI forward. Judea Pearl, Chancellor's professor of computer science and statistics at UCLA, and a distinguished visiting professor at the Technion, Israel Institute of Technology, was awarded the Turing Award for the Bayesian mathematical models he developed some 30 years ago, which gave modern AI its strength. Pearl's fundamental contribution to computer science was in providing the logic and decision-making framework for computers to operate under uncertainty.
Some say it was he who provided the spark which thawed that AI winter. Today, he laments the current state of AI, concerned that the field has evolved very little in the three decades since his important theory was presented. Pearl likens current AI implementations to simple tools which can tell you what's likely to come next, based on the recognition of a familiar pattern. For example, a medical AI algorithm might be able to look at X-rays of a human chest and 'discern' that the patient has, or does not have, lung cancer based on patterns it has learnt from its training datasets. The AI in this scenario doesn't 'know' what lung cancer is or what a tumor is. Importantly, it is a very long way from understanding that smoking can cause the affliction. What's needed in AI next, says Pearl, is a critical difference: AIs which are evolved to the point where they can determine not just what will happen next, but what will cause it. It's a fundamental improvement, of the same magnitude as his earlier contributions. Causality – what Pearl is proposing – is one of the most basic units of scientific thought and progress. The ability to conduct a repeatable experiment, showing that A caused B, in multiple locations, and have independent peers review the results is one of the fundamentals of establishing truth. In his most recent publication, 'The Book of Why', Pearl outlines how we can get AI from where it is now to where it can develop an understanding of these causal relationships. He believes the first step is to cement the building blocks of reality – 'what is a lung', 'what is smoke' – and that we'll be able to do that in the next 10 years.

Geoff Hinton, inventor of backprop and capsule nets

On AI which more closely mimics the human brain

Geoff Hinton was the mind behind backpropagation, another of the fundamental technologies which has brought AI to the point it is at today. To progress AI, however, he says we might have to start all over again.
Hinton has developed (and produced two papers for the University of Toronto to articulate) a new way of training AI systems, involving something he calls 'capsule networks' – a concept he's been working on for 30 years, in an effort to improve the capabilities of the backpropagation algorithms he developed. Capsule networks operate in a manner similar to the human brain. When we see an image, our brains break it down into its components and process them in parallel. Some brain neurons recognise edges through contrast differences. Others look for corners by examining the points at which edges intersect. Capsule networks are similar, several acting on a picture at one time, identifying, for example, an ear or a nose on an animal, irrespective of the angle from which it is being viewed. This is a big deal, as until now, CNNs (convolutional neural networks), the set of AI algorithms that are most often used in image and video recognition systems, could recognize images about as well as humans do. CNNs, however, find it hard to recognize images if their angle is changed. It's too early to judge whether capsule networks are the key to the next step in the AI revolution, but in many tasks, capsule networks are identifying images faster and more accurately than current capabilities allow.

Andrew Ng, Chief Scientist at Baidu

On AI that can learn without humans

Andrew Ng is the co-inventor of Google Brain, the team and project that Google put together in 2011 to explore Artificial Intelligence. He now works for Baidu, China's most successful search engine – analogous in size and scope to Google in the rest of the world. At the moment, he heads up Baidu's Silicon Valley AI research facility.
Beyond concerns over potential job displacement caused by AI, an issue so significant he says it is perhaps all we should be thinking about when it comes to Artificial Intelligence, he suggests that, in the future, the most progress will be made when AI systems can teach themselves without human involvement. At the moment, training an AI, even on something that, to us, is simple, such as what a cat looks like, is a complicated process. The procedure involves 'supervised learning'. It's shown a lot of pictures (when they did this at Google, they used 10 million images), some of which are cats, labelled appropriately by humans. Once a sufficient level of 'education' has been undertaken, the AI can then accurately label cats, most of the time. Ng thinks supervision is problematic; he describes it as having an Achilles heel in the form of the quantity of data that is required. To go beyond current capabilities, says Ng, will require a completely new type of technology – one which can learn through 'unsupervised learning' – machines learning from data that has not been classified by humans. Progress on unsupervised learning is slow. At both Baidu and Google, engineers are focussing on constrained versions of unsupervised learning, such as training AI systems to learn about a human face and then using them to create a face themselves. The activity requires that the AI develops what we would call an 'internal representation' of a face, something which is required in any unsupervised learning. Other avenues to train without supervision include, ingeniously, pitting an AI system against a computer game – an environment in which it receives feedback (through points awarded in the game) for 'constructive' activities, but within which it is not taught directly by a human.

Next generation AI depends on 'scrubbing away' existing assumptions

Artificial Intelligence, as it stands, will deliver economy-wide efficiency improvements, the likes of which we have not seen in decades.
It seems incredible to think that the field is still in its infancy when it can deliver such substantial benefits – like reduced traffic congestion, lower carbon emissions, and saved time in New York taxis. But it is. Isaac Asimov, who developed his own concepts for how Artificial Intelligence might be governed with simple rules, said, "Your assumptions are your windows on the world. Scrub them off every once in a while, or the light won't come in." The author should rest assured. Between them, Pearl, Hinton, and Ng are each taking revolutionary approaches to elevate AI beyond even the incredible heights it has reached, starting without reference to the concepts which have brought us this far.
Fatema Patrawala
30 May 2018
13 min read

Java Multithreading: How to synchronize threads to implement critical sections and avoid race conditions

Fatema Patrawala
30 May 2018
13 min read
One of the most common situations in concurrent programming occurs when more than one execution thread shares a resource. In a concurrent application, it is normal for multiple threads to read or write the same data structure or have access to the same file or database connection. These shared resources can provoke error situations or data inconsistency, and we have to implement some mechanism to avoid these errors. These situations are called race conditions, and they occur when different threads have access to the same shared resource at the same time. Therefore, the final result depends on the order of execution of the threads, and most of the time it is incorrect. You can also have problems with change visibility: if a thread changes the value of a shared variable, the change may only be written to the local cache of that thread; other threads will not see the change (they will only be able to see the old value). We present to you a Java multithreading tutorial taken from the book Java 9 Concurrency Cookbook - Second Edition, written by Javier Fernández González.

The solution to these problems lies in the concept of a critical section. A critical section is a block of code that accesses a shared resource and can't be executed by more than one thread at the same time. To help programmers implement critical sections, Java (and almost all programming languages) offers synchronization mechanisms. When a thread wants access to a critical section, it uses one of these synchronization mechanisms to find out whether there is any other thread executing the critical section. If not, the thread enters the critical section. If yes, the thread is suspended by the synchronization mechanism until the thread that is currently executing the critical section ends it. When more than one thread is waiting for a thread to finish the execution of a critical section, the JVM chooses one of them and the rest wait for their turn.
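To make the race condition and its fix concrete, here is a minimal, self-contained sketch (the class and method names are illustrative, not from the book): increment() is a critical section, so the final count is always exact no matter how the threads interleave. If you delete the synchronized keyword, some increments are typically lost.

```java
// Sketch: a critical section protecting a shared counter.
// Class and method names are illustrative, not from the recipe.
public class SafeCounter {
    private long count = 0;

    // Only one thread at a time can execute this critical section.
    public synchronized void increment() {
        count++;
    }

    public synchronized long get() {
        return count;
    }

    // Runs 'threads' threads, each incrementing 'times' times,
    // and waits for all of them before reading the result.
    public static long run(int threads, int times) throws InterruptedException {
        SafeCounter counter = new SafeCounter();
        Thread[] workers = new Thread[threads];
        for (int i = 0; i < threads; i++) {
            workers[i] = new Thread(() -> {
                for (int j = 0; j < times; j++) {
                    counter.increment();
                }
            });
            workers[i].start();
        }
        for (Thread t : workers) {
            t.join(); // wait until every worker finishes
        }
        return counter.get();
    }

    public static void main(String[] args) throws InterruptedException {
        // With synchronization the result is always threads * times.
        System.out.println(run(4, 10_000));
    }
}
```

Because increment() holds the object's intrinsic lock, run(4, 10_000) always returns exactly 40,000; the unsynchronized variant usually returns less.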
The Java language offers two basic synchronization mechanisms:

- The synchronized keyword
- The Lock interface and its implementations

In this article, we explore the use of the synchronized keyword to implement synchronization in Java. So let's get started.

Synchronizing a method

In this recipe, you will learn how to use one of the most basic methods of synchronization in Java: the use of the synchronized keyword to control concurrent access to a method or a block of code. All synchronized statements (used on methods or blocks of code) use an object reference. Only one thread can execute a method or block of code protected by the same object reference. When you use the synchronized keyword with a method, the object reference is implicit. When you use the synchronized keyword in one or more methods of an object, only one execution thread will have access to all these methods. If another thread tries to access any method declared with the synchronized keyword of the same object, it will be suspended until the first thread finishes the execution of the method. In other words, every method declared with the synchronized keyword is a critical section, and Java only allows the execution of one of the critical sections of an object at a time. In this case, the object reference used is the object itself, represented by the this keyword. Static methods behave differently. Only one execution thread will have access to one of the static methods declared with the synchronized keyword, but a different thread can access other non-static methods of an object of that class. You have to be very careful with this point, because two threads can access two different synchronized methods if one is static and the other is not. If both methods change the same data, you can have data inconsistency errors. In this case, the object reference used is the class object.
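The two implicit locks can be made concrete with a small, runnable sketch (names are illustrative, not from the recipe): incInstance() locks on the object itself, while the static incStatic() locks on the class object, so each counter is consistently protected by exactly one lock.

```java
// Sketch: instance-level vs class-level synchronized methods.
// Names are illustrative; only the locking behavior matters.
public class TwoLocks {
    private long instanceCount = 0;      // guarded by the 'this' lock
    private static long staticCount = 0; // guarded by the TwoLocks.class lock

    public synchronized void incInstance() { instanceCount++; }
    public static synchronized void incStatic() { staticCount++; }

    public synchronized long getInstanceCount() { return instanceCount; }
    public static synchronized long getStaticCount() { return staticCount; }
    public static synchronized void resetStatic() { staticCount = 0; }

    // Hammer both methods from several threads; both totals stay exact
    // because each counter is only ever touched under its own lock.
    public static long[] run(int threads, int times) throws InterruptedException {
        resetStatic(); // make repeated runs deterministic
        TwoLocks shared = new TwoLocks();
        Thread[] workers = new Thread[threads];
        for (int i = 0; i < threads; i++) {
            workers[i] = new Thread(() -> {
                for (int j = 0; j < times; j++) {
                    shared.incInstance();
                    TwoLocks.incStatic();
                }
            });
            workers[i].start();
        }
        for (Thread t : workers) {
            t.join();
        }
        return new long[] { shared.getInstanceCount(), TwoLocks.getStaticCount() };
    }

    public static void main(String[] args) throws InterruptedException {
        long[] totals = run(4, 10_000);
        System.out.println(totals[0] + " " + totals[1]);
    }
}
```

Note how the warning in the text is respected here: the instance counter is never touched from a static synchronized method (or vice versa), because the two locks would not exclude each other.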
When you use the synchronized keyword to protect a block of code, you must pass an object reference as a parameter. Normally, you will use the this keyword to reference the object that executes the method, but you can use other object references as well. Normally, these objects will be created exclusively for this purpose. You should keep the objects used for synchronization private. For example, if you have two independent attributes in a class shared by multiple threads, you must synchronize access to each variable; however, it wouldn't be a problem if one thread is accessing one of the attributes and another is accessing a different attribute at the same time. Take into account that if you use the object itself (represented by the this keyword), you might interfere with other synchronized code (as mentioned before, the this object is used to synchronize the methods marked with the synchronized keyword). In this recipe, you will learn how to use the synchronized keyword to implement an application simulating a parking area, with sensors that detect when a car or a motorcycle enters or leaves the parking area, an object to store the statistics of the vehicles being parked, and a mechanism to control cash flow. We will implement two versions: one without any synchronization mechanism, where we will see how we obtain incorrect results, and one that works correctly as it uses the two variants of the synchronized keyword. The example of this recipe has been implemented using the Eclipse IDE. If you use Eclipse or a different IDE, such as NetBeans, open it and create a new Java project.

How to do it...

Follow these steps to implement the example: First, create the application without using any synchronization mechanism.
Create a class named ParkingCash with an internal constant and an attribute to store the total amount of money earned by providing this parking service: public class ParkingCash { private static final int cost=2; private long cash; public ParkingCash() { cash=0; } Implement a method named vehiclePay() that will be called when a vehicle (a car or motorcycle) leaves the parking area. It will increase the cash attribute: public void vehiclePay() { cash+=cost; } Finally, implement a method named close() that will write the value of the cash attribute in the console and reinitialize it to zero: public void close() { System.out.printf("Closing accounting"); long totalAmmount; totalAmmount=cash; cash=0; System.out.printf("The total amount is : %d", totalAmmount); } } Create a class named ParkingStats with three private attributes and the constructor that will initialize them: public class ParkingStats { private long numberCars; private long numberMotorcycles; private ParkingCash cash; public ParkingStats(ParkingCash cash) { numberCars = 0; numberMotorcycles = 0; this.cash = cash; } Then, implement the methods that will be executed when a car or motorcycle enters or leaves the parking area. When a vehicle leaves the parking area, cash should be incremented: public void carComeIn() { numberCars++; } public void carGoOut() { numberCars--; cash.vehiclePay(); } public void motoComeIn() { numberMotorcycles++; } public void motoGoOut() { numberMotorcycles--; cash.vehiclePay(); } Finally, implement two methods to obtain the number of cars and motorcycles in the parking area, respectively. Create a class named Sensor that will simulate the movement of vehicles in the parking area. It implements the Runnable interface and has a ParkingStats attribute, which will be initialized in the constructor: public class Sensor implements Runnable { private ParkingStats stats; public Sensor(ParkingStats stats) { this.stats = stats; } Implement the run() method. 
In this method, simulate that two cars and a motorcycle arrive at and then leave the parking area. Every sensor will perform this action 10 times: @Override public void run() { for (int i = 0; i < 10; i++) { stats.carComeIn(); stats.carComeIn(); try { TimeUnit.MILLISECONDS.sleep(50); } catch (InterruptedException e) { e.printStackTrace(); } stats.motoComeIn(); try { TimeUnit.MILLISECONDS.sleep(50); } catch (InterruptedException e) { e.printStackTrace(); } stats.motoGoOut(); stats.carGoOut(); stats.carGoOut(); } } Finally, implement the main method. Create a class named Main with the main() method. It needs ParkingCash and ParkingStats objects to manage parking: public class Main { public static void main(String[] args) { ParkingCash cash = new ParkingCash(); ParkingStats stats = new ParkingStats(cash); System.out.printf("Parking Simulator\n"); Then, create the Sensor tasks. Use the availableProcessors() method (that returns the number of processors available to the JVM, which normally is equal to the number of cores in the processor) to calculate the number of sensors our parking area will have. Create the corresponding Thread objects and store them in an array: int numberSensors = 2 * Runtime.getRuntime().availableProcessors(); Thread threads[] = new Thread[numberSensors]; for (int i = 0; i < numberSensors; i++) { Sensor sensor = new Sensor(stats); Thread thread = new Thread(sensor); thread.start(); threads[i] = thread; } Then wait for the finalization of the threads using the join() method: for (int i = 0; i < numberSensors; i++) { try { threads[i].join(); } catch (InterruptedException e) { e.printStackTrace(); } } Finally, write the statistics of the parking area: System.out.printf("Number of cars: %d\n", stats.getNumberCars()); System.out.printf("Number of motorcycles: %d\n", stats.getNumberMotorcycles()); cash.close(); } } In our case, we executed the example on a four-core processor, so we will have eight Sensor tasks.
Each task performs 10 iterations, and in each iteration, three vehicles enter the parking area and the same three vehicles go out. Therefore, each Sensor task will simulate 30 vehicles. If everything goes well, the final stats will show the following: There are no cars in the parking area, which means that all the vehicles that came into the parking area have moved out Eight Sensor tasks were executed, where each task simulated 30 vehicles and each vehicle was charged 2 dollars each; therefore, the total amount of cash earned was 480 dollars When you execute this example, each time you will obtain different results, and most of them will be incorrect. The following screenshot shows an example: We had race conditions, and the different shared variables accessed by all the threads gave incorrect results. Let's modify the previous code using the synchronized keyword to solve these problems: First, add the synchronized keyword to the vehiclePay() method of the ParkingCash class: public synchronized void vehiclePay() { cash+=cost; } Then, add a synchronized block of code using the this keyword to the close() method: public void close() { System.out.printf("Closing accounting"); long totalAmmount; synchronized (this) { totalAmmount=cash; cash=0; } System.out.printf("The total amount is : %d",totalAmmount); } Now add two new attributes to the ParkingStats class and initialize them in the constructor of the class: private final Object controlCars, controlMotorcycles; public ParkingStats (ParkingCash cash) { numberCars=0; numberMotorcycles=0; controlCars=new Object(); controlMotorcycles=new Object(); this.cash=cash; } Finally, modify the methods that increment and decrement the number of cars and motorcycles, including the synchronized keyword. The numberCars attribute will be protected by the controlCars object, and the numberMotorcycles attribute will be protected by the controlMotorcycles object. 
You must also synchronize the getNumberCars() and getNumberMotorcycles() methods with the associated reference object: public void carComeIn() { synchronized (controlCars) { numberCars++; } } public void carGoOut() { synchronized (controlCars) { numberCars--; } cash.vehiclePay(); } public void motoComeIn() { synchronized (controlMotorcycles) { numberMotorcycles++; } } public void motoGoOut() { synchronized (controlMotorcycles) { numberMotorcycles--; } cash.vehiclePay(); } Execute the example now and see the difference when compared to the previous version. How it works... The following screenshot shows the output of the new version of the example. No matter how many times you execute it, you will always obtain the correct result: Let's see the different uses of the synchronized keyword in the example: First, we protected the vehiclePay() method. If two or more Sensor tasks call this method at the same time, only one will execute it and the rest will wait for their turn; therefore, the final amount will always be correct. We used two different objects to control access to the car and motorcycle counters. This way, one Sensor task can modify the numberCars attribute and another Sensor task can modify the numberMotorcycles attribute at the same time; however, no two Sensor tasks will be able to modify the same attribute at the same time, so the final value of the counters will always be correct. Finally, we also synchronized the getNumberCars() and getNumberMotorcycles() methods. Using the synchronized keyword, we can guarantee correct access to shared data in concurrent applications. As mentioned at the introduction of this recipe, only one thread can access the methods of an object that uses the synchronized keyword in their declaration. If thread (A) is executing a synchronized method and thread (B) wants to execute another synchronized method of the same object, it will be blocked until thread (A) is finished. 
But if thread (B) has access to different objects of the same class, none of them will be blocked. When you use the synchronized keyword to protect a block of code, you use an object as a parameter. The JVM guarantees that only one thread can have access to all the blocks of code protected with this object (note that we always talk about objects, not classes). We used the TimeUnit class as well. The TimeUnit class is an enumeration with the following constants: DAYS, HOURS, MICROSECONDS, MILLISECONDS, MINUTES, NANOSECONDS, and SECONDS. These indicate the units of time we pass to the sleep method. In our case, we let the thread sleep for 50 milliseconds.

There's more...

The synchronized keyword penalizes the performance of the application, so you must only use it on methods that modify shared data in a concurrent environment. If you have multiple threads calling a synchronized method, only one will execute it at a time while the others remain waiting. If the operation doesn't use the synchronized keyword, all the threads can execute the operation at the same time, reducing the total execution time. If you know that a method will not be called by more than one thread, don't use the synchronized keyword; that said, if the class is designed for multithreaded access, it should always be correct. You must promote correctness over performance. Also, you should include documentation in methods and classes in relation to their thread safety. You can use recursive calls with synchronized methods. As the thread already has access to the synchronized methods of an object, it can call other synchronized methods of that object, including the method that is being executed. It won't have to gain access to the synchronized methods again. We can use the synchronized keyword to protect access to a block of code instead of an entire method.
We should use the synchronized keyword in this way to protect access to shared data, leaving the rest of the operations out of this block and obtaining better performance of the application. The objective is to have the critical section (the block of code that can be accessed only by one thread at a time) as short as possible. Also, avoid calling blocking operations (for example, I/O operations) inside a critical section. We have used the synchronized keyword to protect access to the instruction that updates the number of persons in the building, leaving out the long operations of the block that don't use shared data. When you use the synchronized keyword in this way, you must pass an object reference as a parameter. Only one thread can access the synchronized code (blocks or methods) of this object. Normally, we will use the this keyword to reference the object that is executing the method: synchronized (this) { // Java code } To summarize, we learnt to use the synchronized keyword in Java to implement the synchronization mechanisms that multithreaded code requires. You read an excerpt from the book Java 9 Concurrency Cookbook - Second Edition. This book will help you master the art of fast, effective Java development with the power of concurrent and parallel programming.

Concurrency programming 101: Why do programmers hang by a thread?
How to create multithreaded applications in Qt
Getting Inside a C++ Multithreaded Application
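One guarantee mentioned in the recipe above, that a thread can call a synchronized method from another synchronized method of the same object without blocking itself, can be verified with a tiny sketch (class and method names are illustrative, not from the book):

```java
// Sketch: reentrancy of intrinsic locks. Once a thread holds an
// object's lock, it can enter other synchronized methods of that
// same object without deadlocking on itself.
public class ReentrantSketch {
    private int depth = 0;

    public synchronized int outer() {
        depth = 1;
        return inner(); // re-enters the same lock: no self-deadlock
    }

    private synchronized int inner() {
        depth++;
        return depth;
    }

    public static void main(String[] args) {
        // outer() acquires the lock, then inner() re-acquires it.
        System.out.println(new ReentrantSketch().outer()); // prints 2
    }
}
```

If intrinsic locks were not reentrant, the call to inner() inside outer() would block forever waiting for a lock the calling thread already holds.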
Don't call us ninjas or rockstars, say developers

Richard Gall
30 May 2018
5 min read
Words like 'ninja' and 'rockstar' have been flying around the tech world for some time now. Data revealed by recruitment website Indeed at the end of 2017 showed that use of the term 'rockstar' in job postings increased 19% since 2015. We seem to live in a world where 'sexing up' job roles has become the norm. And when top talent is hard to come by, it makes sense. The words offer some degree of status to candidates, and imply the organizations behind them are forward-thinking. But it's starting to get boring. In this year's Skill Up survey, 57% of respondents said they didn't like creative terms like 'rockstar', 'ninja', and 'wizard'; only 26% said they actually liked them. While words like these might be boring, they can be harmful too. In an age of spin and fake news, using language to dress up and redefine things can have an impact we might not expect.

Using words like rockstar and ninja pits developers against each other

The industry's insistence on using these words in everything from recruitment to conferences cultivates a bizarre class system within tech. When we start calling people rockstars, it suggests something about the role they play within a company or engineering team. It says 'these people are doing something really exciting' while everyone else, presumably, isn't. While it's true that hierarchies are part and parcel of any modern organization, this superficial labeling isn't helpful. 'Collaboration' and 'agile' are buzzwords that are as overused as ninja and rockstar, but at least they offer something practical and positive. And let's be honest - collaborating is important if we're to build better software and have better working lives.

The unforeseen impact on developer mental health

An unforeseen effect of these words could be a negative impact on mental health in the tech industry.
We already know that burnout is becoming a common occurrence, as tech professionals are being overworked and pushed to breaking point. This is particularly true of startup culture, where engineers are driven to develop products on incredibly tight schedules as owners seek investment and investors look for signs of growth, but the trend is spreading. Arguably, the gap between management and engineers is playing into this. The results of innovation look shiny and exciting, but the actual work - which can, as we know, be as repetitive, boring, and hard as it can be enjoyable - isn't properly understood.

Ninjas, rockstars and the commodification of technical skill

It has become a truism that communication is important when it comes to tech. Words like ninja and rockstar are making communication hard - they undermine our ability to communicate. They conceal the work that engineers actually do. It's great that they shine a spotlight on technical professionals and skills, but they do so in a way that is actually quite euphemistic. It hints at the value of the skills, but fails to engage with why these skills are important, how you develop them, and how they can be properly leveraged. More specifically, words like 'ninja' and 'rockstar' turn technical expertise into a product. They make skills marketable; they turn knowledge into a commodity. Good for you if that helps you earn a little more money in your next job; even better if it lands you a book deal or a spot speaking at a conference. But all you're really doing is taking advantage of the technical skills bubble. It won't last forever, and in the long run it probably won't be good news for the industry. Some people will be overvalued, while others will be undervalued.

Ninjas, rockstars and open source culture

These irritating euphemisms come from a number of different sources. Tech recruitment has played a big part, as companies try to attract top tech talent.
So has modern corporate culture, which has been trying to loosen its proverbial tie for the last decade. But it's also worth noting that rockstars and ninjas come out of open source culture. This is a culture that is celebrated for its lack of authority. However, that same absence of authority opens up a space for 'experts' to take the lead. In the past, we might have quaintly referred to these people as 'community figures'. As open source has moved mainstream, the commodification of technical expertise found form in the tech ninja and rockstar. But while rockstars and ninjas appear to be the shining lights of open source culture, they might be damaging to it as well. If open source culture is 'led' by a number of people the world begins referring to as rockstars, the very foundations of it begin to move. It's no longer quite as open as it used to be. True, perhaps we need rockstars and ninjas in open source. These are people that evangelize about certain projects. These are people who can offer unique and useful perspectives on important debates and issues in their respective fields, right? Well, sort of. It is important for people to discuss new ideas and pioneer new ways of doing things. But this doesn't mean we need to sex things up. After all, certain people aren't more entitled to an opinion just because they have a book deal. Yes, they have experience, but it's important that communities don't get left behind as the tech industry chases the dream of relentless innovation. Of course it's great to talk, but we've all got to do the work too.
How to ace managing the Endpoint Operations Management Agent with vROps

Vijin Boricha
30 May 2018
10 min read
In this tutorial, you will learn the functionality of the Endpoint Operations Management Agent and how you can effectively manage it. You can install the Endpoint Operations Management Agent from a tar.gz or .zip archive, or from an operating system-specific installer for Windows or for Linux-like systems that support RPM. This is an excerpt from Mastering vRealize Operations Manager - Second Edition, written by Spas Kaloferov, Scott Norris, and Christopher Slater. The agent can come bundled with or without a JRE.

Installing the Agent

Before you start, you have to make sure you have vRealize Operations user credentials to ensure that you have enough permissions to deploy the agent. Let's see how we can install the agent on both Linux and Windows machines.

Manually installing the Agent on a Linux Endpoint

Perform the following steps to install the Agent on a Linux machine: Download the appropriate distributable package from https://my.vmware.com and copy it over to the machine. The Linux .rpm file should be installed with root credentials. Log in to the machine and run the following command to install the agent: [root]# rpm -Uvh <DistributableFileName> In this example, we are installing on CentOS. Only when we start the service will it generate a token and ask for the server's details. Start the Endpoint Operations Management Agent service by running: service epops-agent start The previous command will prompt you to enter the following information about the vRealize Operations instance: vRealize Operations server hostname or IP: If the server is on the same machine as the agent, you can enter localhost. If a firewall is blocking traffic from the agent to the server, allow the traffic to that address through the firewall. Default port: Specify the SSL port on the vRealize Operations server to which the agent must connect. The default port is 443.
Certificate to trust: You can configure Endpoint Operations Management agents to trust the root or intermediate certificate to avoid having to reconfigure all agents if the certificates on the analytics nodes and remote collectors are modified. vRealize Operations credentials: Enter the name of a vRealize Operations user with agent manager permissions. The agent token is generated by the Endpoint Operations Management Agent. It is only created the first time the endpoint is registered to the server. The name of the token is random and unique, and is based on the name and IP of the machine. If the agent is reinstalled on the endpoint, the same token is used, unless it is deleted manually. Provide the necessary information, as shown in the following screenshot: Go into the vRealize Operations UI and navigate to Administration | Configuration | Endpoint Operations, and select the Agents tab: After it is deployed, the Endpoint Operations Management Agent starts sending monitoring data to vRealize Operations, where the data is collected by the Endpoint Operations Management adapter. You can also verify the collection status of the Endpoint Operations Management Agent for your virtual machine on the Inventory Explorer page. If you encounter issues during the installation and configuration, you can check the /opt/vmware/epops-agent/log/agent.log log file for more information. We will examine this log file in more detail later in this book.

Manually installing the agent on a Windows Endpoint

On a Windows machine, the Endpoint Operations Management Agent is installed as a service. The following steps outline how to install the Agent via the .zip archive bundle, but you can also install it via an executable file. Perform the following steps to install the Agent on a Windows machine: Download the agent .zip file. Make sure the file version matches your Microsoft Windows OS. Open Command Prompt and navigate to the bin folder within the extracted folder.
Run the following command to install the agent: ep-agent.bat install After the install is complete, start the agent by executing: ep-agent.bat start If this is the first time you are installing the agent, a setup process will be initiated. If you have specified configuration values in the agent properties file, those will be used. Upon starting the service, it will prompt you for the same values as it did when we installed it on a Linux machine. Enter the following information about the vRealize Operations instance:

- vRealize Operations server hostname or IP
- Default port
- Certificate to trust
- vRealize Operations credentials

If the agent is reinstalled on the endpoint, the same token is used, unless it is deleted manually. After it is deployed, the Endpoint Operations Management Agent starts sending monitoring data to vRealize Operations, where the data is collected by the Endpoint Operations Management adapter. You can verify the collection status of the Endpoint Operations Management Agent for your virtual machine on the Inventory Explorer page: Because the agent is collecting data, the collection status of the agent is green. Alternatively, you can also verify the collection status of the Endpoint Operations Management Agent by navigating to Administration | Configuration | Endpoint Operations, and selecting the Agents tab.

Automated agent installation using vRealize Automation

In a typical environment, the Endpoint Operations Management Agent will not be installed on many (or all) virtual machines or physical servers. Usually, only those servers holding critical applications or services will be monitored using the agent. For that subset of servers, manually installing and updating the Endpoint Operations Management Agent can be done with reasonable administrative effort.
If we have an environment where we need to install the agent on many VMs, and we are using some kind of provisioning and automation engine like Microsoft System Center Configuration Manager or VMware vRealize Automation, which has application-provisioning capabilities, it is not recommended to install the agent in the VM template that will be cloned. If for whatever reason you have to do it, do not start the Endpoint Operations Management Agent, or remove the Endpoint Operations Management token and data before the cloning takes place. If the Agent is started before the cloning, a client token is created and all clones will be the same object in vRealize Operations. In the following example, we will show how to install the Agent via vRealize Automation as part of the provisioning process. I will not go into too much detail on how to create blueprints in vRealize Automation, but I will give you the basic steps to create a software component that will install the agent and add it to a blueprint. Perform the following steps to create a software component that will install the Endpoint Operations Management Agent and add it to a vRealize Automation Blueprint: Open the vRealize Automation portal page and navigate to Design, the Software Components tab, and click New to create a new software component. On the General tab, fill in the information. Click Properties and click New . 
Add the following six properties, as shown in the following table:

| Name        | Type   | Value                                                    | Encrypted | Overridable | Required | Computed |
| RPMNAME     | String | The name of the Agent RPM package                        | No        | Yes         | No       | No       |
| SERVERIP    | String | The IP of the vRealize Operations server/cluster         | No        | Yes         | No       | No       |
| SERVERLOGIN | String | Username with enough permission in vRealize Operations   | No        | Yes         | No       | No       |
| SERVERPWORD | String | Password for the user                                    | No        | Yes         | No       | No       |
| SRVCRTTHUMB | String | The thumbprint of the vRealize Operations certificate    | No        | Yes         | No       | No       |
| SSLPORT     | String | The port on which to communicate with vRealize Operations | No       | Yes         | No       | No       |

These values will be used to configure the Agent once it is installed. The following screenshot illustrates some example values: Click Actions and configure the Install, Configure, and Start actions, as shown below (all three use the Bash script type, with Reboot unchecked):

Install:
rpm -i http://<FQDNOfTheServerWhereTheRPMFileIsLocated>/$RPMNAME

Configure:
sed -i "s/#agent.setup.serverIP=localhost/agent.setup.serverIP=$SERVERIP/" /opt/vmware/epops-agent/conf/agent.properties
sed -i "s/#agent.setup.serverSSLPort=443/agent.setup.serverSSLPort=$SSLPORT/" /opt/vmware/epops-agent/conf/agent.properties
sed -i "s/#agent.setup.serverLogin=username/agent.setup.serverLogin=$SERVERLOGIN/" /opt/vmware/epops-agent/conf/agent.properties
sed -i "s/#agent.setup.serverPword=password/agent.setup.serverPword=$SERVERPWORD/" /opt/vmware/epops-agent/conf/agent.properties
sed -i "s/#agent.setup.serverCertificateThumbprint=/agent.setup.serverCertificateThumbprint=$SRVCRTTHUMB/" /opt/vmware/epops-agent/conf/agent.properties

Start:
/sbin/service epops-agent start
/sbin/chkconfig epops-agent on

The following screenshot illustrates some example values: Click Ready to Complete and click Finish. Edit the blueprint to where you want the agent to be installed. From Categories, select Software Components.
Select the Endpoint Operations Management software component and drag and drop it over the machine type (virtual machine) where you want the object to be installed: Click Save to save the blueprint. This completes the necessary steps to automate the Endpoint Operations Management Agent installation as a software component in vRealize Automation. With every deployment of the blueprint, after cloning, the Agent will be installed and uniquely identified to vRealize Operations.

Reinstalling the agent

When reinstalling the agent, various elements are affected, such as:

- Already-collected metrics
- Identification tokens that enable a reinstalled agent to report on the previously-discovered objects on the server

The following locations contain agent-related data which remains when the agent is uninstalled:

- The data folder containing the keystore
- The epops-token platform token file, which is created before agent registration

Data continuity will not be affected when the data folder is deleted. This is not the case with the epops-token file. If you delete the token file, the agent will not be synchronized with the previously discovered objects upon a new installation.

Reinstalling the agent on a Linux Endpoint

Perform the following steps to reinstall the agent on a Linux machine: If you are removing the agent completely and are not going to reinstall it, delete the data directory by running: $ rm -rf /opt/epops-agent/data If you are completely removing the client and are not going to reinstall or upgrade it, delete the epops-token file: $ rm /etc/vmware/epops-token Run the following command to uninstall the Agent: $ yum remove <AgentFullName> If you are completely removing the client and are not going to reinstall it, make sure to delete the agent object from the Inventory Explorer in vRealize Operations. You can now install the client again, the same way as we did earlier in this book.
Reinstalling the Agent on a Windows Endpoint

Perform the following steps to reinstall the Agent on a Windows machine:

From the CLI, change the directory to the agent bin directory:
cd C:\epops-agent\bin

Stop the agent by running:
C:\epops-agent\bin> ep-agent.bat stop

Remove the agent service by running:
C:\epops-agent\bin> ep-agent.bat remove

If you are completely removing the agent and are not going to reinstall it, delete the data directory by running:
C:\epops-agent> rd /s data

If you are completely removing the agent and are not going to reinstall it, delete the epops-token file using the CLI or Windows Explorer:
C:\epops-agent> del "C:\ProgramData\VMware\EP Ops agent\epops-token"

Uninstall the agent from the Control Panel. If you are completely removing the agent and are not going to reinstall it, make sure to delete the agent object from the Inventory Explorer in vRealize Operations. You can now install the agent again, the same way as we did earlier in this book.

You've become familiar with the Endpoint Operations Management Agent and what it can do. We also discussed how to install and reinstall it on both Linux and Windows operating systems. To know more about vSphere and vRealize Automation workload placement, check out our book Mastering vRealize Operations Manager - Second Edition.

VMware vSphere storage, datastores, snapshots
What to expect from vSphere 6.7
KVM Networking with libvirt
How to assemble a DIY selfie drone with Arduino and ESP8266

Vijin Boricha
29 May 2018
10 min read
Have you ever thought of something that can take a photo from the air, or perhaps take a selfie from it? How about we build a drone for taking selfies and recording videos from the air? Taking photos from the sky is one of the most exciting things in photography this year. You can shoot from helicopters, planes, or even from satellites. Unless you own a personal air vehicle or someone you know does, you know this is a costly affair sure to burn through your pockets. Drones can come in handy here. Have you ever googled drone photography? If you did, I am sure you'd want to build or buy a drone for photography, because of the amazing views of everyday subjects taken from the sky. Today, we will learn to build a drone for aerial photography and videography. This tutorial is an excerpt from Building Smart Drones with ESP8266 and Arduino written by Syed Omar Faruk Towaha. This tutorial assumes you know how to build your own customized frame; if not, you can refer to our book, or you may buy a HobbyKing X930 glass fiber frame and connect the parts together as directed in the manual. However, I have a few suggestions to help you carry out a better assembly of the frame: Firstly, connect the motor mounts to the legs, wings, or arms of the frame. Tighten them firmly, as they will carry and hold the most important equipment of the drone. Then, connect them to the base and, later, the other parts, making sure every connection is firm. Now, we will calibrate our ESCs. We will take the signal cable from an ESC (the motor is plugged into the ESC; careful, don't connect the propeller) and connect it to the throttle pins on the radio. Make sure the transmitter is turned on and the throttle is in the lowest position. Now, plug the battery into the ESC and you will hear a beep. Now, gradually increase the throttle from the transmitter. Your motor will start spinning at some arbitrary position. This is because the ESC is not calibrated. So, you need to tell the ESC where the high point and the low point of the throttle are.
Disconnect the battery first. Increase the throttle of the transmitter to the highest position and power the ESC. Your ESC will beep once, then beep three times every four seconds. Now, move the throttle to the bottommost position and you will hear the ESC beep to confirm it is calibrated. Now, as you increase the throttle on the transmitter, the motor will respond across the whole range, from low to high. Now, mount the motors, connect them to the ESCs, and then connect them to the ArduPilot, moving through the pins one by one. Now, connect your GPS to the ArduPilot and calibrate it. Our drone is now ready to fly. I would suggest you fly the drone for about 10-15 minutes before connecting the camera.

Connecting the camera

For a photography drone, connecting and controlling the camera is one of the most important things. Your pictures and videos will be spoiled if you cannot adjust the camera and stabilize it properly. In our case, we will use a camera gimbal to hold the camera and move it from the ground.

Choosing a gimbal

The camera gimbal holds the camera for you and can move the camera direction according to your command. There are a number of camera gimbals out there. You can choose any type, depending on your demand and camera size and specification. If you want to use a DSLR camera, you should use a bigger gimbal and, if you use a point-and-shoot camera or action camera, you may use a small- or medium-sized gimbal. There are two types of gimbals: a brushless gimbal and a standard gimbal. The standard gimbal has servo motors and gears. If you use an FPV camera, then a standard gimbal with a 2-axis manual mount is the best option. The standard gimbal is lightweight and inexpensive. The best thing is you will not need an external controller board for your standard camera gimbal. The brushless gimbal is for professional aerial photographers. It is smooth and can shoot videos or photos with better quality.
The brushless gimbal needs an external controller board for your drone and is heavier than the standard gimbal. Choosing the best gimbal is one of the hardest choices for a photographer, as image stabilization is a must for photoshoots. If you cannot control the camera from the ground, then using a gimbal is worthless. The following picture shows a number of gimbals:

After choosing your camera and gimbal, the first thing to do is mount the gimbal and camera on the drone. Make sure the mount is firm, but not too rigid, as that will make the camera shake while flying the drone. You may use the Styrofoam or rubber pieces that came with the gimbal to reduce vibration and keep the image stable.

Configuring the camera with the ArduPilot

Configuring the camera with the ArduPilot is easy. Before going any further, let us learn a few things about the camera gimbal's Euler angles:

Tilt: This slopes the camera up or down (range -90 degrees to +90 degrees); it is the clockwise/anticlockwise motion about the vertical axis
Roll: This is a motion ranging from 0 degrees to 360 degrees parallel to the horizontal axis
Pan: This is the same type of motion as roll, ranging from 0 degrees to 360 degrees, but about the vertical axis
Shutter: This is a switch that triggers a click or sends a signal

Firstly, we are going to use the standard gimbal. Basically, there are two servos in a standard gimbal: one for pitch or tilt, and another for roll. So, a standard gimbal gives you two-dimensional control of the camera viewpoint.

Connection

Follow these steps to connect the camera to the ArduPilot: Take the pitch servo's signal pin and connect it to the 11th pin of the ArduPilot (A11), and the roll signal to the 10th pin (A10). Make sure you connect only the signal (S pin) cable of each servo to the pin, not the other two pins (ground and VCC).
The signal cables must be connected to the innermost pins of the A11 and A10 pins (two pins make a row; see the following picture for clarification): My suggestion is to add a separate battery for your gimbal's servos; if you connect the servos directly to the ArduPilot, it will not perform well, as the servos will draw power from it. Now, connect your ArduPilot to your PC using wire or telemetry. Go to the Initial Setup menu and, under Optional Hardware, you will find another option called Camera Gimbal. Click on this and you will see the following screen: For the Tilt, change the pin to RC11; for the Roll, change the pin to RC10; and for Shutter, change it to CH7. If you want to change the Tilt during the flight from the transmitter, you need to change the Input Ch of the Tilt. See the following screenshot: Now, you need to change an option in the Configuration | Extended Tuning page. Set Ch6 Opt to None, as in the following screenshot, and hit the Write Params button: We need to align the minimum and maximum PWM values for the servos of the gimbal. To do that, tilt the gimbal frame to the leftmost position, then move the knob on the transmitter to its minimum position and start increasing it slowly; as soon as the servo starts to move, stop moving the knob. For the maximum calibration, move the tilt to the rightmost position and do the same with the knob at the maximum end. Do the same thing for the pitch with the forward and backward motion. We also need to level the gimbal for better performance. To do that, you need to keep the gimbal frame level to the ground and set the Camera Gimbal option, the Servo Limits, and the Angle Limits. Change them as per the level of the frame.
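To make the servo-limit idea concrete, here is a hypothetical sketch of how a flight controller can map a requested tilt angle onto a servo PWM pulse width, by linear interpolation between the calibrated minimum and maximum PWM values. All of the numbers below are illustrative placeholders, not values taken from ArduPilot:

```python
# Hypothetical sketch: turning a requested gimbal tilt angle into a servo
# PWM pulse width by interpolating between the calibrated PWM limits.
# Default angle and PWM ranges here are illustrative, not ArduPilot values.
def angle_to_pwm(angle_deg, angle_min=-90.0, angle_max=90.0,
                 pwm_min=1000, pwm_max=2000):
    angle_deg = max(angle_min, min(angle_max, angle_deg))  # clamp to limits
    fraction = (angle_deg - angle_min) / (angle_max - angle_min)
    return round(pwm_min + fraction * (pwm_max - pwm_min))
```

This also shows why the min/max knob calibration matters: if pwm_min or pwm_max is wrong, every commanded angle lands on the wrong pulse width and the camera never points where you asked.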
My suggestion is to use the camera's app to take shots, because you will get a live preview of what you are shooting and it will be easy to control the shots. However, if you want to use the shutter manually from the transmitter, you can do this too. We have connected the RC7 pin for controlling a servo. You can use a servo or a receiver switch for your camera to manually trigger the shutter. To do that, you can buy a receiver-controlled on/off switch. You can use this switch for various purposes; clicking the shutter of your camera is one of them. Manually triggering the camera is easy. It is usually done for point-and-shoot cameras. To do that, you need to update the firmware of your camera. You can do this in many ways, but the easiest one will be discussed here. Your RECEIVER CONTROLLED ON/OFF SWITCH may look like the following: You can see five wires in the picture. The three wires together are, as usual, the pins of the servo motor. Take out the signal cable (in this case, the yellow cable) and connect it to the RC7 pin of the ArduPilot. Then, connect the positive to one of the thick red wires. Take the camera's data cable and connect the other thick wire to the positive of the USB cable; the negative wire is connected to the negative of the three-wire servo connector. Then, the output positive and negative wires go to the battery (an external battery is suggested for the camera). To upgrade the camera firmware, you need to go to the camera's website and upgrade the firmware for the remote shutter option. In my case, the website is http://chdk.wikia.com/wiki/CHDK . I have downloaded it for a Canon point-and-shoot camera. You can also use action cameras for your drones. They are cheap and can be controlled remotely via mobile applications.

Flying and taking shots

Flying the photography drone is not that difficult. My suggestion is to lock the altitude and fly parallel to the ground.
If you use a camera remote controller or an app, then it is really easy to take the photo or record a video. However, if you use the switch, as we discussed, then you need to open and connect your drone to the mission planner via telemetry. Go to the flight data, right click on the map, and then click the Trigger Camera Now option. It will trigger the Camera Shutter button and start recording or take a photo. You can do this when your drone is in a locked position and, using the timer, take a shot from above, which can be a selfie too. Let's try it. Let me know what happens and whether you like it or not. Next, learn to build other drones like a mission control drone or gliding drones from our book Building Smart Drones with ESP8266 and Arduino. Drones: Everything you ever wanted to know! How to build an Arduino based ‘follow me’ drone Tips and tricks for troubleshooting and flying drones safely
How to build Deep convolutional GAN using TensorFlow and Keras

Savia Lobo
29 May 2018
13 min read
In this tutorial, we will learn to build both simple and deep convolutional GAN models with the help of the TensorFlow and Keras deep learning frameworks. This article is an excerpt taken from the book Mastering TensorFlow 1.x, written by Armando Fandango.

Simple GAN with TensorFlow

For building the GAN with TensorFlow, we build three networks (two discriminator models and one generator model) with the following steps:

Start by adding the hyper-parameters for defining the network:

# graph hyperparameters
g_learning_rate = 0.00001
d_learning_rate = 0.01
n_x = 784  # number of pixels in the MNIST image
# number of hidden layers for generator and discriminator
g_n_layers = 3
d_n_layers = 1
# neurons in each hidden layer
g_n_neurons = [256, 512, 1024]
d_n_neurons = [256]
# define parameter dictionary
d_params = {}
g_params = {}
activation = tf.nn.leaky_relu
w_initializer = tf.glorot_uniform_initializer
b_initializer = tf.zeros_initializer

Next, define the generator network:

z_p = tf.placeholder(dtype=tf.float32, name='z_p', shape=[None, n_z])
layer = z_p
# add generator network weights, biases and layers
with tf.variable_scope('g'):
    for i in range(0, g_n_layers):
        w_name = 'w_{0:04d}'.format(i)
        g_params[w_name] = tf.get_variable(
            name=w_name,
            shape=[n_z if i == 0 else g_n_neurons[i - 1], g_n_neurons[i]],
            initializer=w_initializer())
        b_name = 'b_{0:04d}'.format(i)
        g_params[b_name] = tf.get_variable(
            name=b_name, shape=[g_n_neurons[i]],
            initializer=b_initializer())
        layer = activation(
            tf.matmul(layer, g_params[w_name]) + g_params[b_name])
    # output (logit) layer
    i = g_n_layers
    w_name = 'w_{0:04d}'.format(i)
    g_params[w_name] = tf.get_variable(
        name=w_name, shape=[g_n_neurons[i - 1], n_x],
        initializer=w_initializer())
    b_name = 'b_{0:04d}'.format(i)
    g_params[b_name] = tf.get_variable(
        name=b_name, shape=[n_x], initializer=b_initializer())
    g_logit = tf.matmul(layer, g_params[w_name]) + g_params[b_name]
    g_model = tf.nn.tanh(g_logit)
Next, define the weights and biases for the two discriminator networks that we shall build:

with tf.variable_scope('d'):
    for i in range(0, d_n_layers):
        w_name = 'w_{0:04d}'.format(i)
        d_params[w_name] = tf.get_variable(
            name=w_name,
            shape=[n_x if i == 0 else d_n_neurons[i - 1], d_n_neurons[i]],
            initializer=w_initializer())
        b_name = 'b_{0:04d}'.format(i)
        d_params[b_name] = tf.get_variable(
            name=b_name, shape=[d_n_neurons[i]],
            initializer=b_initializer())
    # output (logit) layer
    i = d_n_layers
    w_name = 'w_{0:04d}'.format(i)
    d_params[w_name] = tf.get_variable(
        name=w_name, shape=[d_n_neurons[i - 1], 1],
        initializer=w_initializer())
    b_name = 'b_{0:04d}'.format(i)
    d_params[b_name] = tf.get_variable(
        name=b_name, shape=[1], initializer=b_initializer())

Now, using these parameters, build the discriminator that takes the real images as input and outputs the classification:

# define discriminator_real
# input real images
x_p = tf.placeholder(dtype=tf.float32, name='x_p', shape=[None, n_x])
layer = x_p
with tf.variable_scope('d'):
    for i in range(0, d_n_layers):
        w_name = 'w_{0:04d}'.format(i)
        b_name = 'b_{0:04d}'.format(i)
        layer = activation(
            tf.matmul(layer, d_params[w_name]) + d_params[b_name])
        layer = tf.nn.dropout(layer, 0.7)
    # output (logit) layer
    i = d_n_layers
    w_name = 'w_{0:04d}'.format(i)
    b_name = 'b_{0:04d}'.format(i)
    d_logit_real = tf.matmul(layer, d_params[w_name]) + d_params[b_name]
    d_model_real = tf.nn.sigmoid(d_logit_real)

Next, build another discriminator network, with the same parameters, but providing the output of the generator as input:

# define discriminator_fake
# input generated fake images
z = g_model
layer = z
with tf.variable_scope('d'):
    for i in range(0, d_n_layers):
        w_name = 'w_{0:04d}'.format(i)
        b_name = 'b_{0:04d}'.format(i)
        layer = activation(
            tf.matmul(layer, d_params[w_name]) + d_params[b_name])
        layer = tf.nn.dropout(layer, 0.7)
    # output (logit) layer
    i = d_n_layers
    w_name = 'w_{0:04d}'.format(i)
    b_name = 'b_{0:04d}'.format(i)
    d_logit_fake = tf.matmul(layer, d_params[w_name]) + d_params[b_name]
    d_model_fake = tf.nn.sigmoid(d_logit_fake)

Now that we have the three networks built, the connection between them is made using the loss, optimizer, and training functions. While training the generator, we only train the generator's parameters, and while training the discriminator, we only train the discriminator's parameters. We specify this using the var_list parameter of the optimizer's minimize() function. Here is the complete code for defining the loss, optimizer, and training function for both kinds of network:

g_loss = -tf.reduce_mean(tf.log(d_model_fake))
d_loss = -tf.reduce_mean(tf.log(d_model_real) + tf.log(1 - d_model_fake))
g_optimizer = tf.train.AdamOptimizer(g_learning_rate)
d_optimizer = tf.train.GradientDescentOptimizer(d_learning_rate)
g_train_op = g_optimizer.minimize(g_loss, var_list=list(g_params.values()))
d_train_op = d_optimizer.minimize(d_loss, var_list=list(d_params.values()))

Now that we have defined the models, we have to train them.
The training is done as per the following algorithm: For each epoch: For each batch: get real images x_batch generate noise z_batch train discriminator using z_batch and x_batch generate noise z_batch train generator using z_batch The complete code for training from the notebook is as follows: n_epochs = 400 batch_size = 100 n_batches = int(mnist.train.num_examples / batch_size) n_epochs_print = 50 with tf.Session() as tfs: tfs.run(tf.global_variables_initializer()) for epoch in range(n_epochs): epoch_d_loss = 0.0 epoch_g_loss = 0.0 for batch in range(n_batches): x_batch, _ = mnist.train.next_batch(batch_size) x_batch = norm(x_batch) z_batch = np.random.uniform(-1.0,1.0,size=[batch_size,n_z]) feed_dict = {x_p: x_batch,z_p: z_batch} _,batch_d_loss = tfs.run([d_train_op,d_loss], feed_dict=feed_dict) z_batch = np.random.uniform(-1.0,1.0,size=[batch_size,n_z]) feed_dict={z_p: z_batch} _,batch_g_loss = tfs.run([g_train_op,g_loss], feed_dict=feed_dict) epoch_d_loss += batch_d_loss epoch_g_loss += batch_g_loss if epoch%n_epochs_print == 0: average_d_loss = epoch_d_loss / n_batches average_g_loss = epoch_g_loss / n_batches print('epoch: {0:04d} d_loss = {1:0.6f} g_loss = {2:0.6f}' .format(epoch,average_d_loss,average_g_loss)) # predict images using generator model trained x_pred = tfs.run(g_model,feed_dict={z_p:z_test}) display_images(x_pred.reshape(-1,pixel_size,pixel_size)) We printed the generated images every 50 epochs: As we can see the generator was producing just noise in epoch 0, but by epoch 350, it got trained to produce much better shapes of handwritten digits. You can try experimenting with epochs, regularization, network architecture and other hyper-parameters to see if you can produce even faster and better results. 
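To see why the two loss formulas defined earlier push the networks in opposite directions, here is a small NumPy check of the same expressions with made-up discriminator probabilities: when the discriminator is confident (high output on real images, low output on fakes), its own loss is small while the generator's loss is large.

```python
import numpy as np

# Illustrative check of the GAN loss formulas with made-up probabilities.
# d_real: discriminator output on real images; d_fake: output on fakes.
d_real = np.array([0.9, 0.8])
d_fake = np.array([0.1, 0.2])

g_loss = -np.mean(np.log(d_fake))                       # generator wants d_fake near 1
d_loss = -np.mean(np.log(d_real) + np.log(1 - d_fake))  # discriminator wants both right
```

With these numbers the discriminator's loss comes out well below the generator's, which is exactly the imbalance the alternating training loop is trying to correct.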
Simple GAN with Keras Now let us implement the same model in Keras:  The hyper-parameter definitions remain the same as the last section: # graph hyperparameters g_learning_rate = 0.00001 d_learning_rate = 0.01 n_x = 784 # number of pixels in the MNIST image # number of hidden layers for generator and discriminator g_n_layers = 3 d_n_layers = 1 # neurons in each hidden layer g_n_neurons = [256, 512, 1024] d_n_neurons = [256]  Next, define the generator network: # define generator g_model = Sequential() g_model.add(Dense(units=g_n_neurons[0], input_shape=(n_z,), name='g_0')) g_model.add(LeakyReLU()) for i in range(1,g_n_layers): g_model.add(Dense(units=g_n_neurons[i], name='g_{}'.format(i) )) g_model.add(LeakyReLU()) g_model.add(Dense(units=n_x, activation='tanh',name='g_out')) print('Generator:') g_model.summary() g_model.compile(loss='binary_crossentropy', optimizer=keras.optimizers.Adam(lr=g_learning_rate) ) This is what the generator model looks like: In the Keras example, we do not define two discriminator networks as we defined in the TensorFlow example. Instead, we define one discriminator network and then stitch the generator and discriminator network into the GAN network. 
The GAN network is then used to train the generator parameters only, and the discriminator network is used to train the discriminator parameters: # define discriminator d_model = Sequential() d_model.add(Dense(units=d_n_neurons[0], input_shape=(n_x,), name='d_0' )) d_model.add(LeakyReLU()) d_model.add(Dropout(0.3)) for i in range(1,d_n_layers): d_model.add(Dense(units=d_n_neurons[i], name='d_{}'.format(i) )) d_model.add(LeakyReLU()) d_model.add(Dropout(0.3)) d_model.add(Dense(units=1, activation='sigmoid',name='d_out')) print('Discriminator:') d_model.summary() d_model.compile(loss='binary_crossentropy', optimizer=keras.optimizers.SGD(lr=d_learning_rate) ) This is what the discriminator models look: Discriminator: _________________________________________________________________ Layer (type) Output Shape Param # ================================================================= d_0 (Dense) (None, 256) 200960 _________________________________________________________________ leaky_re_lu_4 (LeakyReLU) (None, 256) 0 _________________________________________________________________ dropout_1 (Dropout) (None, 256) 0 _________________________________________________________________ d_out (Dense) (None, 1) 257 ================================================================= Total params: 201,217 Trainable params: 201,217 Non-trainable params: 0 _________________________________________________________________ Next, define the GAN Network, and turn the trainable property of the discriminator model to false, since GAN would only be used to train the generator: # define GAN network d_model.trainable=False z_in = Input(shape=(n_z,),name='z_in') x_in = g_model(z_in) gan_out = d_model(x_in) gan_model = Model(inputs=z_in,outputs=gan_out,name='gan') print('GAN:') gan_model.summary() gan_model.compile(loss='binary_crossentropy', optimizer=keras.optimizers.Adam(lr=g_learning_rate) ) This is what the GAN model looks: GAN: 
_________________________________________________________________ Layer (type) Output Shape Param # ================================================================= z_in (InputLayer) (None, 256) 0 _________________________________________________________________ sequential_1 (Sequential) (None, 784) 1526288 _________________________________________________________________ sequential_2 (Sequential) (None, 1) 201217 ================================================================= Total params: 1,727,505 Trainable params: 1,526,288 Non-trainable params: 201,217 _________________________________________________________________  Great, now that we have defined the three models, we have to train the models. The training is as per the following algorithm: For each epoch: For each batch: get real images x_batch generate noise z_batch generate images g_batch using generator model combine g_batch and x_batch into x_in and create labels y_out set discriminator model as trainable train discriminator using x_in and y_out generate noise z_batch set x_in = z_batch and labels y_out = 1 set discriminator model as non-trainable train gan model using x_in and y_out, (effectively training generator model) For setting the labels, we apply the labels as 0.9 and 0.1 for real and fake images respectively. Generally, it is suggested that you use label smoothing by picking a random value from 0.0 to 0.3 for fake data and 0.8 to 1.0 for real data. 
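The label-smoothing suggestion above is easy to sketch: instead of hard 0/1 targets, the fake labels are drawn from [0.0, 0.3) and the real labels from [0.8, 1.0). The batch size and seed below are arbitrary choices for illustration.

```python
import numpy as np

# Sketch of the label-smoothing suggestion: soft targets instead of 0/1.
rng = np.random.default_rng(seed=42)
batch_size = 100
real_labels = rng.uniform(0.8, 1.0, size=batch_size)
fake_labels = rng.uniform(0.0, 0.3, size=batch_size)
y_out = np.concatenate([real_labels, fake_labels])  # matches x_in = [x_batch, g_batch]
```

The resulting y_out array can be passed to d_model.train_on_batch(x_in, y_out) in place of the fixed 0.9/0.1 labels used in the training loop.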
Here is the complete code for training from the notebook: n_epochs = 400 batch_size = 100 n_batches = int(mnist.train.num_examples / batch_size) n_epochs_print = 50 for epoch in range(n_epochs+1): epoch_d_loss = 0.0 epoch_g_loss = 0.0 for batch in range(n_batches): x_batch, _ = mnist.train.next_batch(batch_size) x_batch = norm(x_batch) z_batch = np.random.uniform(-1.0,1.0,size=[batch_size,n_z]) g_batch = g_model.predict(z_batch) x_in = np.concatenate([x_batch,g_batch]) y_out = np.ones(batch_size*2) y_out[:batch_size]=0.9 y_out[batch_size:]=0.1 d_model.trainable=True batch_d_loss = d_model.train_on_batch(x_in,y_out) z_batch = np.random.uniform(-1.0,1.0,size=[batch_size,n_z]) x_in=z_batch y_out = np.ones(batch_size) d_model.trainable=False batch_g_loss = gan_model.train_on_batch(x_in,y_out) epoch_d_loss += batch_d_loss epoch_g_loss += batch_g_loss if epoch%n_epochs_print == 0: average_d_loss = epoch_d_loss / n_batches average_g_loss = epoch_g_loss / n_batches print('epoch: {0:04d} d_loss = {1:0.6f} g_loss = {2:0.6f}' .format(epoch,average_d_loss,average_g_loss)) # predict images using generator model trained x_pred = g_model.predict(z_test) display_images(x_pred.reshape(-1,pixel_size,pixel_size)) We printed the results every 50 epochs, up to 350 epochs: The model slowly learns to generate good quality images of handwritten digits from the random noise. There are so many variations of the GANs that it will take another book to cover all the different kinds of GANs. However, the implementation techniques are almost similar to what we have shown here. Deep Convolutional GAN with TensorFlow and Keras In DCGAN, both the discriminator and generator are implemented using a Deep Convolutional Network: 1.  
In this example, we decided to implement the generator as the following network: Generator: _________________________________________________________________ Layer (type) Output Shape Param # ================================================================= g_in (Dense) (None, 3200) 822400 _________________________________________________________________ g_in_act (Activation) (None, 3200) 0 _________________________________________________________________ g_in_reshape (Reshape) (None, 5, 5, 128) 0 _________________________________________________________________ g_0_up2d (UpSampling2D) (None, 10, 10, 128) 0 _________________________________________________________________ g_0_conv2d (Conv2D) (None, 10, 10, 64) 204864 _________________________________________________________________ g_0_act (Activation) (None, 10, 10, 64) 0 _________________________________________________________________ g_1_up2d (UpSampling2D) (None, 20, 20, 64) 0 _________________________________________________________________ g_1_conv2d (Conv2D) (None, 20, 20, 32) 51232 _________________________________________________________________ g_1_act (Activation) (None, 20, 20, 32) 0 _________________________________________________________________ g_2_up2d (UpSampling2D) (None, 40, 40, 32) 0 _________________________________________________________________ g_2_conv2d (Conv2D) (None, 40, 40, 16) 12816 _________________________________________________________________ g_2_act (Activation) (None, 40, 40, 16) 0 _________________________________________________________________ g_out_flatten (Flatten) (None, 25600) 0 _________________________________________________________________ g_out (Dense) (None, 784) 20071184 ================================================================= Total params: 21,162,496 Trainable params: 21,162,496 Non-trainable params: 0 The generator is a stronger network having three convolutional layers followed by tanh activation. 
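The shapes in the generator summary above can be checked by hand: each of the three UpSampling2D layers doubles the 5x5 spatial size, giving the 40x40x16 volume that g_out_flatten turns into 25,600 features, and the final Dense layer's parameter count follows directly from that.

```python
# Arithmetic check of the generator summary's shapes and parameter count.
size = 5
for _ in range(3):       # g_0_up2d, g_1_up2d, g_2_up2d each double the size
    size *= 2
flat = size * size * 16  # g_out_flatten: 40 * 40 * 16 features
g_out_params = flat * 784 + 784  # weights + biases of the g_out Dense layer
```

The computed 20,071,184 parameters match the g_out row in the summary, which is a quick sanity check that the flattened volume really is feeding the 784-unit output.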
We define the discriminator network as follows: Discriminator: _________________________________________________________________ Layer (type) Output Shape Param # ================================================================= d_0_reshape (Reshape) (None, 28, 28, 1) 0 _________________________________________________________________ d_0_conv2d (Conv2D) (None, 28, 28, 64) 1664 _________________________________________________________________ d_0_act (Activation) (None, 28, 28, 64) 0 _________________________________________________________________ d_0_maxpool (MaxPooling2D) (None, 14, 14, 64) 0 _________________________________________________________________ d_out_flatten (Flatten) (None, 12544) 0 _________________________________________________________________ d_out (Dense) (None, 1) 12545 ================================================================= Total params: 14,209 Trainable params: 14,209 Non-trainable params: 0 _________________________________________________________________  The GAN network is composed of the discriminator and generator as demonstrated previously: GAN: _________________________________________________________________ Layer (type) Output Shape Param # ================================================================= z_in (InputLayer) (None, 256) 0 _________________________________________________________________ g (Sequential) (None, 784) 21162496 _________________________________________________________________ d (Sequential) (None, 1) 14209 ================================================================= Total params: 21,176,705 Trainable params: 21,162,496 Non-trainable params: 14,209 _________________________________________________________________ When we run this model for 400 epochs, we get the following output: As you can see, the DCGAN is able to generate high-quality digits starting from epoch 100 itself. 
The DCGAN has been used for style transfer, for generating images and titles, and for image algebra, namely taking parts of one image and adding them to parts of another image. We built a simple GAN in TensorFlow and Keras and applied it to generate images from the MNIST dataset. We also built a DCGAN in which the generator and discriminator consisted of convolutional networks. To explore advanced features of TensorFlow 1.x and obtain in-depth knowledge of TensorFlow for solving artificial intelligence problems, do check out the book Mastering TensorFlow 1.x.

5 reasons to learn Generative Adversarial Networks (GANs) in 2018
Implementing a simple Generative Adversarial Network (GANs)
Getting to know Generative Models and their types
How to integrate Firebase on Android/iOS applications natively

Savia Lobo
28 May 2018
26 min read
In this tutorial, you'll see Firebase integration within a native context, namely in iOS and Android applications. You will also implement some of the basic as well as advanced features found in any modern mobile application, in both the Android and iOS ecosystems. So let's get busy! This article is an excerpt taken from the book Firebase Cookbook, written by Houssem Yahiaoui.

Implement the pushing and retrieving of data from the Firebase Real-time Database

We're going to start with Android and see how we can manage this feature:

First, head to your Android Studio project. Now that you have opened your project, let's move on to integrating the Real-time Database.
In your project, head to the menu bar, navigate to Tools | Firebase, and then select Realtime Database. Now click Save and retrieve data.
Since we've already connected our Android application to Firebase, let's now add the Firebase Real-time Database dependencies locally by clicking on the Add the Realtime Database to your app button. This will give you a screen that looks like the following screenshot:

Figure 1: Android Studio Firebase integration section

Click on the Accept Changes button and Gradle will add these new dependencies to your gradle file, then download them and build the project.

Now we've created this simple wish list application. It might not be the most visually pleasing, but it will serve us well in this experiment, with an EditText, a Button, and a ListView. So, in our experiment, we want to do the following:

Add a new wish to our wish list in the Firebase Database
See the wishes underneath our ListView

Let's start with adding that list of data to our Firebase. Now head to the MainActivity.java file, or any other activity related to your project, and add the following code:

//[*] UI reference.
EditText wishListText;
Button addToWishList;
ListView wishListview;
// [*] Getting a reference to the Database Root.
DatabaseReference fRootRef = FirebaseDatabase.getInstance().getReference();
//[*] Getting a reference to the wishes list.
DatabaseReference wishesRef = fRootRef.child("wishes");

protected void onCreate(Bundle savedInstanceState) {
    super.onCreate(savedInstanceState);
    setContentView(R.layout.activity_main);
    //[*] UI elements
    wishListText = (EditText) findViewById(R.id.wishListText);
    addToWishList = (Button) findViewById(R.id.addWishBtn);
    wishListview = (ListView) findViewById(R.id.wishsList);
}

@Override
protected void onStart() {
    super.onStart();
    //[*] Listening on Button click event
    addToWishList.setOnClickListener(new View.OnClickListener() {
        @Override
        public void onClick(View v) {
            //[*] Getting the text from our EditText UI Element.
            String wish = wishListText.getText().toString();
            //[*] Pushing the Data to our Database.
            wishesRef.push().setValue(wish);
            AlertDialog alertDialog = new AlertDialog.Builder(MainActivity.this).create();
            alertDialog.setTitle("Success");
            alertDialog.setMessage("wish was added to Firebase");
            alertDialog.show();
        }
    });
}

In the preceding code, we're doing the following:

Getting a reference to our UI elements
Since everything in Firebase starts with a reference, we're grabbing ourselves a reference to the root element in our database
Getting another reference to the wishes child from the root reference
Over the onCreate() method, binding all the UI-based references to the actual UI widgets
Over the onStart() method, doing the following:
Listening to the button click event and grabbing the EditText content
Using the wishesRef.push().setValue() method to push the content of the EditText to Firebase, then displaying a simple AlertDialog to confirm it

However, the preceding code is not going to work. This may seem strange, since everything is well configured, but the problem is that the Firebase Database is secured out of the box with authorization rules.
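The usual workaround during development is to temporarily open the rules up. As a sketch, using the classic Realtime Database rules syntax (never ship this configuration, since it leaves the database world-readable and world-writable):

```json
{
  "rules": {
    ".read": true,
    ".write": true
  }
}
```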
So, head to Database | RULES, change the rules there, and then publish. After that is done, the result will look similar to the following screenshot:

Figure 2: Firebase Real-time Database authorization section

After saving and launching the application, the pushed data will look like this:

Figure 3: Firebase Real-time Database after adding a new wish to the wishes collection

Firebase creates the child element in case you didn't create it yourself. This is great because we can create and implement any data structure we want, however we want. Next, let's see how we can retrieve the data we sent. Move back to your onStart() method and add the following code lines:

wishesRef.addChildEventListener(new ChildEventListener() {
    @Override
    public void onChildAdded(DataSnapshot dataSnapshot, String s) {
        //[*] Grabbing the data Snapshot
        String newWish = dataSnapshot.getValue(String.class);
        wishes.add(newWish);
        adapter.notifyDataSetChanged();
    }
    @Override
    public void onChildChanged(DataSnapshot dataSnapshot, String s) {}
    @Override
    public void onChildRemoved(DataSnapshot dataSnapshot) {}
    @Override
    public void onChildMoved(DataSnapshot dataSnapshot, String s) {}
    @Override
    public void onCancelled(DatabaseError databaseError) {}
});

Before you implement the preceding code, go to the onCreate() method and add the following lines underneath the UI widget references:

//[*] Adding an adapter.
adapter = new ArrayAdapter<String>(this, R.layout.support_simple_spinner_dropdown_item, wishes);
//[*] Wiring the Adapter
wishListview.setAdapter(adapter);

Before that, in the variable declarations, simply add the following:

ArrayList<String> wishes = new ArrayList<String>();
ArrayAdapter<String> adapter;

So, in the preceding code, we're doing the following:

Adding a new ArrayList and an adapter for ListView changes.
Wiring everything up in the onCreate() method.
Wiring an addChildEventListener() to the wishes Firebase reference.
Grabbing the data snapshot from the Firebase Real-time Database, which fires whenever we add a new wish, and notifying the list adapter so that the wishListview updates its content automatically.

Congratulations! You've just wired up the Real-time Database functionality and created your very own wishes tracker. Now, let's see how we can create our very own iOS wishes tracker application using nothing but Swift and Firebase:

Fire up Xcode and open the project where we integrated Firebase. Let's work on the feature.
Edit your Podfile and add the following line:

pod 'Firebase/Database'

This will download and install the Firebase Database dependencies locally, in your very own awesome wishes tracker application.

There are two view controllers: one for the wishes table, and one for adding a new wish to the wishes list. The following represents the main wishes list view.

Figure 4: iOS application wishes list view

Once we click on the + sign button in the header, we'll be navigated via a segue to a new view modal, where we have a text field for adding our new wish and a button to push it to our list.

Figure 5: Wishes iOS application, new wish ViewModel

Over addNewWishesViewController.swift, which is the view controller for adding the new wish view, after adding the necessary UITextField @IBOutlet and the button @IBAction, replace the autogenerated content with the following code lines:

import UIKit
import FirebaseDatabase

class newWishViewController: UIViewController {
    @IBOutlet weak var wishText: UITextField!
    //[*] Adding the Firebase Database Reference
    var ref: FIRDatabaseReference?

    override func viewDidLoad() {
        super.viewDidLoad()
        ref = FIRDatabase.database().reference()
    }

    @IBAction func addNewWish(_ sender: Any) {
        let newWish = wishText.text // [*] Getting the UITextField content.
        self.ref?.child("wishes").childByAutoId().setValue(newWish!)
        presentedViewController?.dismiss(animated: true, completion: nil)
    }
}

In the preceding code, besides the self-explanatory UI element code, we're doing the following:

We're declaring a FIRDatabaseReference and initializing it within viewDidLoad().
Within the addNewWish IBAction (function), we're getting the text from the UITextField, calling for the "wishes" child, then calling childByAutoId(), which will create an automatic ID for our data (consider it a push function, if you're coming from JavaScript). We're simply setting the value to whatever we get from the text field.
Finally, we're dismissing the current view controller and going back to the TableViewController which holds all our wishes.

Implementing anonymous authentication

Authentication is one of the most tricky, time-consuming, and tedious tasks in any web application, and maintaining best practices while doing so is truly hard. For mobile, it's even more complex, because a traditional application would mean creating a REST endpoint: an endpoint that takes an email and password and returns either a session, a token, or directly a user's profile information. In Firebase, things are a bit different, and in this recipe we're going to see how we can use anonymous authentication. You might wonder, why? The why is quite simple: to give users a temporary anonymous identity, to protect data, and to give users an extra taste of your application's inner soul. So let's see how we can make that happen.

How to do it...

We will first see how we can implement anonymous authentication in Android:

Fire up your Android Studio.
Before doing anything, we need to get some dependencies first: the Firebase Auth library, which can be added with this line in the build.gradle file under the dependencies section:

compile 'com.google.firebase:firebase-auth:11.0.2'

Now simply Sync and you will be good to start adding the Firebase Authentication logic. Let us see what we're going to get as a final result:

Figure 6: Android application: anonymous login application

A simple UI with a button and a TextView, where we put our user data after a successful authentication process. Here's the code for that simple UI:

<?xml version="1.0" encoding="utf-8"?>
<android.support.constraint.ConstraintLayout
    xmlns:android="http://schemas.android.com/apk/res/android"
    xmlns:app="http://schemas.android.com/apk/res-auto"
    xmlns:tools="http://schemas.android.com/tools"
    android:layout_width="match_parent"
    android:layout_height="match_parent"
    tools:context="com.hcodex.anonlogin.MainActivity">

    <Button
        android:id="@+id/anonLoginBtn"
        android:layout_width="289dp"
        android:layout_height="50dp"
        android:text="Anonymous Login"
        android:layout_marginRight="8dp"
        app:layout_constraintRight_toRightOf="parent"
        android:layout_marginLeft="8dp"
        app:layout_constraintLeft_toLeftOf="parent"
        android:layout_marginTop="47dp"
        android:onClick="anonLoginBtn"
        app:layout_constraintTop_toBottomOf="@+id/textView2"
        app:layout_constraintHorizontal_bias="0.506"
        android:layout_marginStart="8dp"
        android:layout_marginEnd="8dp" />

    <TextView
        android:id="@+id/textView2"
        android:layout_width="wrap_content"
        android:layout_height="wrap_content"
        android:text="Firebase Anonymous Login"
        android:layout_marginLeft="8dp"
        app:layout_constraintLeft_toLeftOf="parent"
        android:layout_marginRight="8dp"
        app:layout_constraintRight_toRightOf="parent"
        app:layout_constraintTop_toTopOf="parent"
        android:layout_marginTop="80dp" />

    <TextView
        android:id="@+id/textView3"
        android:layout_width="wrap_content"
        android:layout_height="wrap_content"
        android:text="Profile Data"
        android:layout_marginTop="64dp"
        app:layout_constraintTop_toBottomOf="@+id/anonLoginBtn"
        android:layout_marginLeft="156dp"
        app:layout_constraintLeft_toLeftOf="parent" />

    <TextView
        android:id="@+id/profileData"
        android:layout_width="349dp"
        android:layout_height="175dp"
        android:layout_marginBottom="28dp"
        android:layout_marginEnd="8dp"
        android:layout_marginLeft="8dp"
        android:layout_marginRight="8dp"
        android:layout_marginStart="8dp"
        android:layout_marginTop="8dp"
        android:text=""
        app:layout_constraintBottom_toBottomOf="parent"
        app:layout_constraintHorizontal_bias="0.526"
        app:layout_constraintLeft_toLeftOf="parent"
        app:layout_constraintRight_toRightOf="parent"
        app:layout_constraintTop_toBottomOf="@+id/textView3" />

</android.support.constraint.ConstraintLayout>

Now, let's see how we can wire up our Java code:

//[*] Step 1 : Defining Logic variables.
FirebaseAuth anonAuth;
FirebaseAuth.AuthStateListener authStateListener;

@Override
protected void onCreate(Bundle savedInstanceState) {
    super.onCreate(savedInstanceState);
    anonAuth = FirebaseAuth.getInstance();
    setContentView(R.layout.activity_main);
}

//[*] Step 2: Listening on the Login button click event.
public void anonLoginBtn(View view) {
    anonAuth.signInAnonymously()
        .addOnCompleteListener(this, new OnCompleteListener<AuthResult>() {
            @Override
            public void onComplete(@NonNull Task<AuthResult> task) {
                if (!task.isSuccessful()) {
                    updateUI(null);
                } else {
                    FirebaseUser fUser = anonAuth.getCurrentUser();
                    Log.d("FIRE", fUser.getUid());
                    updateUI(fUser);
                }
            }
        });
}

//[*] Step 3 : Getting UI Reference
private void updateUI(FirebaseUser user) {
    profileData = (TextView) findViewById(R.id.profileData);
    profileData.append("Anonymous Profile Id :\n" + user.getUid());
}

Now, let's see how we can implement anonymous authentication on iOS. What we'll achieve in this test is the following:

Figure 7: iOS application, anonymous login application

Before doing anything, we need to download and install the Firebase authentication dependency first. Head directly over to your Podfile and add the following line:

pod 'Firebase/Auth'

Then simply save the file, and on your terminal, type the following command:

~> pod install

This will download the needed dependency and configure our application accordingly. Now create a simple UI with a button and, after configuring your UI button's IBAction reference, add the following code:

@IBAction func connectAnon(_ sender: Any) {
    Auth.auth().signInAnonymously() { (user, error) in
        if let anon = user?.isAnonymous, anon {
            print("i'm connected anonymously, here's my id \(user?.uid)")
        }
    }
}

How it works...

Let's digest the preceding code:

We're defining some basic logic variables: a TextView, where we'll append our results, and the Firebase anonAuth variable. It's of FirebaseAuth type, which is the starting point for any authentication strategy we might use.
Over onCreate(), we're initializing our Firebase reference and setting our content view.
We're authenticating the user on a button click, bound with the anonLoginBtn() method.
Within it, we're simply calling the signInAnonymously() method; on completion, we test whether the authentication task was successful, and if so, we update our TextView with the user information.
We're using the updateUI() method to simply update that TextView. Pretty simple steps.

Now simply build and run your project and test your shiny new features.

Implementing password authentication on iOS

Email and password authentication is the most common way to authenticate anyone, and it can be a major risk point if done wrong. Using Firebase removes that risk and lets you think of nothing but the UX you will eventually provide to your users. In this recipe, we're going to see how you can do this on iOS.

How to do it...

Let's suppose you've created your awesome UI with all the text fields and buttons, and wired up the email and password IBOutlets and the IBAction login button. Let's see the code behind the awesome, quite simple password authentication process:

import UIKit
import Firebase
import FirebaseAuth

class EmailLoginViewController: UIViewController {
    @IBOutlet weak var emailField: UITextField!
    @IBOutlet weak var passwordField: UITextField!

    override func viewDidLoad() {
        super.viewDidLoad()
    }

    @IBAction func loginEmail(_ sender: Any) {
        if self.emailField.text == "" || self.passwordField.text == "" {
            //[*] Prompt an Error
            let alertController = UIAlertController(title: "Error",
                message: "Please enter an email and password.", preferredStyle: .alert)
            let defaultAction = UIAlertAction(title: "OK", style: .cancel, handler: nil)
            alertController.addAction(defaultAction)
            self.present(alertController, animated: true, completion: nil)
        } else {
            FIRAuth.auth()?.signIn(withEmail: self.emailField.text!,
                                   password: self.passwordField.text!) { (user, error) in
                if error == nil {
                    //[*] TODO: Navigate to Application Home Page.
                } else {
                    //[*] Alert in case we've an error.
                    let alertController = UIAlertController(title: "Error",
                        message: error?.localizedDescription, preferredStyle: .alert)
                    let defaultAction = UIAlertAction(title: "OK", style: .cancel, handler: nil)
                    alertController.addAction(defaultAction)
                    self.present(alertController, animated: true, completion: nil)
                }
            }
        }
    }
}

How it works...

Let's digest the preceding code:

We're simply adding some IBOutlets and the IBAction login button.
Over the loginEmail function, we're doing two things:
If the user didn't provide an email or password, we prompt them with an error alert indicating that both fields are required.
Otherwise, we call the FIRAuth.auth()?.signIn() function, which in this case takes an email and a password. We then test whether we have any errors: if not, we might navigate to the app home screen or do anything else we want; if so, we prompt the user with the authentication error message.

And as simple as that, we're done. The user object will be transported as well, so you may do any additional processing on the name, email, and much more.

Implementing password authentication on Android

To make things easier in terms of Android, we're going to use the awesome Firebase Auth UI. Using the Firebase Auth UI will save a lot of hassle when it comes to building the actual user interface and handling the different intent calls between the application activities. Let's see how we can integrate and use it for our needs.

Let's start by configuring our project and downloading all the necessary dependencies. Head to your build.gradle file and copy/paste the following entry:

compile 'com.firebaseui:firebase-ui-auth:3.0.0'

Now, simply sync and you will be good to start.

How to do it...
Now, let's see how we can make the functionality work:

Declare the FirebaseAuth reference, plus another variable that we will need later on:

FirebaseAuth auth;
private static final int RC_SIGN_IN = 17;

Now, inside your onCreate method, add the following code:

auth = FirebaseAuth.getInstance();
if (auth.getCurrentUser() != null) {
    Log.d("Auth", "Logged in successfully");
} else {
    startActivityForResult(
        AuthUI.getInstance()
            .createSignInIntentBuilder()
            .setAvailableProviders(
                Arrays.asList(new AuthUI.IdpConfig.Builder(
                    AuthUI.EMAIL_PROVIDER).build()))
            .build(),
        RC_SIGN_IN);
}
findViewById(R.id.logoutBtn).setOnClickListener(this);

Now, in your activity, implement the View.OnClickListener interface, so your class will look like the following:

public class MainActivity extends AppCompatActivity implements View.OnClickListener {}

After that, implement the onClick function as shown here:

@Override
public void onClick(View v) {
    if (v.getId() == R.id.logoutBtn) {
        AuthUI.getInstance().signOut(this)
            .addOnCompleteListener(new OnCompleteListener<Void>() {
                @Override
                public void onComplete(@NonNull Task<Void> task) {
                    Log.d("Auth", "Logged out successfully");
                    // TODO: make custom operation.
                }
            });
    }
}

In the end, implement the onActivityResult method as shown in the following code block:

@Override
protected void onActivityResult(int requestCode, int resultCode, Intent data) {
    super.onActivityResult(requestCode, resultCode, data);
    if (requestCode == RC_SIGN_IN) {
        if (resultCode == RESULT_OK) {
            // User is in!
            Log.d("Auth", auth.getCurrentUser().getEmail());
        } else {
            // User is not authenticated
            Log.d("Auth", "Not Authenticated");
        }
    }
}

Now build and run your project. You will have an interface similar to that shown in the following screenshot:

Figure 8: Android authentication using email/password: email picker

This interface will be shown in case you're not authenticated; your application will list all the saved accounts on your device.
If you click on the NONE OF THE ABOVE button, you will be prompted with the following interface:

Figure 9: Android authentication email/password: adding new email

When you add your email and click on the NEXT button, the API will look for a user with that email among your application's users. If such an email is present, you will be authenticated, but if not, you will be redirected to the Sign-up activity as shown in the following screenshot:

Figure 10: Android authentication: creating a new account, with email/password/name

Next, you will add your name and password. With that, you will create a new account and you will be authenticated.

How it works...

From the preceding code, it's clear that we didn't create any user interface. The Firebase UI is that powerful, so let's explore what happens:

The setAvailableProviders method takes a list of providers. Those providers will differ based on your needs: any email provider, Google, Facebook, and every other provider that Firebase supports. The main difference is that each provider has its own separate configuration and necessary dependencies that you will need to support the functionality.
Also, if you noticed, we're setting up a logout button. We created this button mainly to log out our users, and added a click listener to it. The idea here is that when you click on it, the application performs the sign-out operation. Then you add your custom intent, which can vary from a redirect to closing the application.
We're implementing the onActivityResult special function, which will be our main listening point whenever we connect to or disconnect from the application. Within it, we can perform different operations, from redirection to displaying toasts, to anything else you can think of.
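Once real users can sign in, the wide-open test rules used earlier for the Real-time Database can be tightened so that only authenticated users may read or write. A minimal sketch, assuming the classic Realtime Database rules syntax:

```json
{
  "rules": {
    ".read": "auth != null",
    ".write": "auth != null"
  }
}
```

The `auth` variable is populated by Firebase for any signed-in user, including anonymous ones, so all the authentication strategies in this tutorial satisfy this rule.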
Implementing Google Sign-in authentication

Google authentication is the process of logging in/creating an account using nothing but your existing Google account. It's easy, fast, and intuitive, and removes a lot of the hassle we usually face when registering for any web/mobile application. I'm talking basically about form filling. Using Firebase Google Sign-in authentication, we can manage such functionality; plus, we get the user's basic metadata, such as the display name, picture URL, and more. In this recipe, we're going to see how we can implement Google Sign-in functionality for both Android and iOS.

Before doing any coding, it's important to do some basic configuration in our Firebase project console. Head directly to your Firebase project Console | Authentication | SIGN-IN METHOD | Google, simply activate the switch, and follow the instructions there in order to get the client ID. Please notice that Google Sign-in is automatically configured for iOS, but for Android, we will need to do some custom configuration.

Let us first look at getting ready on Android to implement Google Sign-in authentication. Before we start implementing the authentication functionality, we will need to install some dependencies, so please head to your build.gradle file, paste the following, and then sync your build:

compile 'com.google.firebase:firebase-auth:11.4.2'
compile 'com.google.android.gms:play-services-auth:11.4.2'

The dependency versions must match: whenever you install them, provide the same version for both dependencies.
Moving on to getting ready on iOS for the implementation of Google Sign-in authentication. On iOS, we will need to install a couple of dependencies, so please edit your Podfile and add the following lines underneath your already-present dependencies, if you have any:

pod 'Firebase/Auth'
pod 'GoogleSignIn'

Now, in your terminal, type the following command:

~> pod install

This command will install the required dependencies and configure your project accordingly.

How to do it...

First, let us take a look at how we will implement this recipe in Android:

Now, after installing our dependencies, we will need to create the UI for our calls. To do that, simply copy and paste the following special button XML code into your layout:

<com.google.android.gms.common.SignInButton
    android:id="@+id/gbtn"
    android:layout_width="368dp"
    android:layout_height="wrap_content"
    android:layout_marginLeft="16dp"
    android:layout_marginTop="30dp"
    app:layout_constraintLeft_toLeftOf="parent"
    app:layout_constraintTop_toTopOf="parent"
    android:layout_marginRight="16dp"
    app:layout_constraintRight_toRightOf="parent" />

The result will be this:

Figure 11: Google Sign-in button after the declaration

After doing that, let's see the code behind it:

SignInButton gBtn;
FirebaseAuth mAuth;
GoogleApiClient mGoogleApiClient;
private final static int RC_SIGN_IN = 3;
FirebaseAuth.AuthStateListener mAuthListener;

@Override
protected void onStart() {
    super.onStart();
    mAuth.addAuthStateListener(mAuthListener);
}

@Override
protected void onCreate(Bundle savedInstanceState) {
    super.onCreate(savedInstanceState);
    setContentView(R.layout.activity_main);
    mAuth = FirebaseAuth.getInstance();
    gBtn = (SignInButton) findViewById(R.id.gbtn);
    gBtn.setOnClickListener(new View.OnClickListener() {
        @Override
        public void onClick(View v) {
            signIn();
        }
    });
    mAuthListener = new FirebaseAuth.AuthStateListener() {
        @Override
        public void onAuthStateChanged(@NonNull FirebaseAuth firebaseAuth) {
            if (firebaseAuth.getCurrentUser() != null) {
                AlertDialog alertDialog = new AlertDialog.Builder(MainActivity.this).create();
                alertDialog.setTitle("User");
                alertDialog.setMessage("I have a user logged in");
                alertDialog.show();
            }
        }
    };
    mGoogleApiClient = new GoogleApiClient.Builder(this)
        .enableAutoManage(this, new GoogleApiClient.OnConnectionFailedListener() {
            @Override
            public void onConnectionFailed(@NonNull ConnectionResult connectionResult) {
                Toast.makeText(MainActivity.this, "Something went wrong",
                    Toast.LENGTH_SHORT).show();
            }
        })
        .addApi(Auth.GOOGLE_SIGN_IN_API, gso)
        .build();
}

GoogleSignInOptions gso = new GoogleSignInOptions.Builder(GoogleSignInOptions.DEFAULT_SIGN_IN)
    .requestEmail()
    .build();

private void signIn() {
    Intent signInIntent = Auth.GoogleSignInApi.getSignInIntent(mGoogleApiClient);
    startActivityForResult(signInIntent, RC_SIGN_IN);
}

@Override
public void onActivityResult(int requestCode, int resultCode, Intent data) {
    super.onActivityResult(requestCode, resultCode, data);
    if (requestCode == RC_SIGN_IN) {
        GoogleSignInResult result = Auth.GoogleSignInApi.getSignInResultFromIntent(data);
        if (result.isSuccess()) {
            // Google Sign In was successful, authenticate with Firebase
            GoogleSignInAccount account = result.getSignInAccount();
            firebaseAuthWithGoogle(account);
        } else {
            Toast.makeText(MainActivity.this, "Connection Error",
                Toast.LENGTH_SHORT).show();
        }
    }
}

private void firebaseAuthWithGoogle(GoogleSignInAccount account) {
    AuthCredential credential = GoogleAuthProvider.getCredential(account.getIdToken(), null);
    mAuth.signInWithCredential(credential)
        .addOnCompleteListener(this, new OnCompleteListener<AuthResult>() {
            @Override
            public void onComplete(@NonNull Task<AuthResult> task) {
                if (task.isSuccessful()) {
                    // Sign in success, update UI with the signed-in user's information
                    Log.d("TAG", "signInWithCredential:success");
                    FirebaseUser user = mAuth.getCurrentUser();
                    Log.d("TAG", user.getDisplayName());
                } else {
                    Log.w("TAG", "signInWithCredential:failure", task.getException());
                    Toast.makeText(MainActivity.this, "Authentication failed.",
                        Toast.LENGTH_SHORT).show();
                }
            }
        });
}

Then, simply build and launch your application, click on the authentication button, and you will be greeted with the following screen:

Figure 12: Account picker, after clicking on the Google Sign-in button

Next, simply pick the account you want to connect with, and then you will be greeted with an alert, finishing the authentication process.

Now we will take a look at an implementation of our recipe in iOS:

Before we do anything, let's import Google Sign-in as follows:

import GoogleSignIn

After that, let's add our Google Sign-in button. To do so, go to your login page view controller and add the following lines of code:

//Google sign in
let googleBtn = GIDSignInButton()
googleBtn.frame = CGRect(x: 16, y: 50, width: view.frame.width - 32, height: 50)
view.addSubview(googleBtn)
GIDSignIn.sharedInstance().uiDelegate = self

The frame positioning is for my own needs; you can use it or modify the dimensions to suit your application's needs.

Now, after adding the lines above, we will get an error. This is due to our ViewController not conforming to GIDSignInUIDelegate, so in order to make Xcode happier, let's add it to our view controller declaration so it looks like the following:

class ViewController: UIViewController, FBSDKLoginButtonDelegate, GIDSignInUIDelegate {}

Now, if you build and run your project, you will get the following:

Figure 13: iOS application after configuring the Google Sign-in button

Now, if you click on the Sign in button, you will get an exception.
The reason for that is that the Sign in button is asking for the clientID. To fix that, go to your AppDelegate file and complete the following import:

import GoogleSignIn

Next, add the following line of code within application: didFinishLaunchingWithOptions, as shown below:

GIDSignIn.sharedInstance().clientID = FirebaseApp.app()?.options.clientID

If you build and run the application now, then click on the Sign in button, nothing will happen. Why? Because iOS doesn't know how and where to navigate next. In order to fix that issue, go to your GoogleService-Info.plist file, copy the value of REVERSED_CLIENT_ID, then go to your project configuration. Inside the Info section, scroll down to URL Types, add a new URL type, and paste the value inside the URL Schemes field:

Figure 14: Xcode Firebase URL schema adding, to finish the Google Sign-in behavior

Next, within application: open URL options, add the following line:

GIDSignIn.sharedInstance().handle(url,
    sourceApplication: options[UIApplicationOpenURLOptionsKey.sourceApplication] as? String,
    annotation: options[UIApplicationOpenURLOptionsKey.annotation])

This will simply handle the transition to the URL scheme we just specified. Next, if you build and run your application and tap on the Sign in button, you will be redirected via a SafariViewController to the Google Sign-in page, as shown in the following screenshot:

Figure 15: iOS Google account picker after clicking on Sign-in button

With that, the ongoing authentication process is done, but what will happen when you select your account and authorize the application? Typically, you need to go back to your application with all the needed profile information, don't you? Well, for now, that's not the case, so let's fix that.
Go back to the AppDelegate file and do the following:

Add the GIDSignInDelegate protocol to the AppDelegate declaration.
Add the following line to application: didFinishLaunchingWithOptions:

GIDSignIn.sharedInstance().delegate = self

This will simply let us go back to the application with all the tokens we need to finish the authentication process with Firebase. Next, we need to implement the sign-in function that belongs to GIDSignInDelegate; this function will be called once we're successfully authenticated:

func sign(_ signIn: GIDSignIn!, didSignInFor user: GIDGoogleUser!, withError error: Error!) {
    if let err = error {
        print("Can't connect to Google:", err)
        return
    }
    print("we're using google sign in", user)
}

Now, once you're fully authenticated, you will receive the success message over your terminal, and we can simply integrate our Firebase authentication logic. Complete the following import:

import FirebaseAuth

Next, inside the same sign-in function, add the following:

guard let authentication = user.authentication else { return }
let credential = GoogleAuthProvider.credential(withIDToken: authentication.idToken,
                                               accessToken: authentication.accessToken)
Auth.auth().signIn(with: credential, completion: { (user, error) in
    if let error = error {
        print("[*] Can't connect to firebase, with error:", error)
        return
    }
    print("we have a user", user?.displayName)
})

This code will use the successfully logged-in user's token and call the Firebase authentication logic to create a new Firebase user. Now we can retrieve the basic profile information that Firebase delivers.

How it works...

Let's explain what we did in the Android section:

We activated authentication using our Google account from the Firebase project console.
We installed the required dependencies, from Firebase Auth to Google Play services.
After finishing the setup, we gained the ability to create that awesome Google Sign-in special button, and we also gave it an ID for easy access.
We created references for the SignInButton and FirebaseAuth.

Let's now explain what we just did in the iOS section:

We used the GIDSignInButton to create the branded Google Sign-in button, and we added it to our ViewController.
Inside the AppDelegate, we made a couple of configurations so we could retrieve the ClientID that the button needs to connect to our application.
For our button to work, we used the information stored in GoogleService-Info.plist and created an app link within our application so we could navigate to our connection page.
Once everything was set, we were introduced to our application authorization page, where we authorized the application and chose the account we wanted to connect with.
In order to get back all the required tokens and account information, we went back to the AppDelegate file and implemented the GIDSignInDelegate. Within it, we can collect all the account-related tokens and information once we're successfully authenticated.
Within the implemented sign-in function, we injected our regular Firebase authentication signIn method with all the necessary tokens and information.
When we built and ran the application again and signed in, we found the account used to authenticate present in the Firebase console's list of authenticated accounts.

To summarize, we learned how to integrate Firebase within a native context, basically over an iOS and Android application. If you've enjoyed reading this, do check out 'Firebase Cookbook' for recipes to help you understand features of Firebase and implement them in your existing web or mobile applications.

Using the Firebase Real-Time Database
How to integrate Firebase with NativeScript for cross-platform app development
Build powerful progressive web apps with Firebase
Amey Varangaonkar
28 May 2018
11 min read

How to optimize MySQL 8 servers and clients

This article focuses on optimization for MySQL 8 database servers and clients; we start with optimizing the server, followed by optimizing MySQL 8 client-side entities. It is most relevant to database administrators who need to ensure performance and scalability across multiple servers. It will also help developers preparing scripts (including those that set up the database) and users running MySQL for development and testing to maximize productivity. [box type="note" align="" class="" width=""]The following excerpt is taken from the book MySQL 8 Administrator’s Guide, written by Chintan Mehta, Ankit Bhavsar, Hetal Oza and Subhash Shah. In this book, the authors present hands-on techniques for tackling the common and not-so-common issues that come up in the different administration-related tasks in MySQL 8.[/box] Optimizing disk I/O There are quite a few ways to configure storage devices to devote more and faster storage hardware to the database server. A major performance bottleneck is disk seeking (finding the correct place on the disk to read or write content). When the amount of data grows large enough to make caching impossible, the problem with disk seeks becomes apparent. We need at least one disk seek operation to read, and several disk seek operations to write, in large databases where data access is done more or less randomly. We should regulate or minimize disk seek times using appropriate disks. In order to resolve the disk seek performance issue, we can increase the number of available disk spindles, symlink files to different disks, or stripe disks. The following are the details: Using symbolic links: We can create Unix symbolic links for index and data files. In the case of MyISAM tables, the symlink points from the default location in the data directory to another disk. These links may also be striped. This improves the seek and read times.
The assumption is that the disk is not used concurrently for other purposes. Symbolic links are not supported for InnoDB tables. However, we can place InnoDB data and log files on different physical disks. Striping: In striping, we have many disks. We put the first block on the first disk, the second block on the second disk, and so on: block N is placed on disk (N % number_of_disks). If the stripe size is perfectly aligned, the normal data size will be less than the stripe size, which helps to improve performance. Striping is dependent on the stripe size and the operating system. In an ideal case, we would benchmark the application with different stripe sizes. The speed difference while striping depends on the parameters we have used, such as the stripe size, and on the number of disks. We have to choose whether we want to optimize for random access or sequential access. To gain reliability, we may decide to set up striping and mirroring (RAID 0+1). RAID stands for Redundant Array of Independent Drives. This approach needs 2 x N drives to hold N drives of data. With good volume management software, we can manage this setup efficiently. There is another approach as well: depending on how critical the type of data is, we may vary the RAID level. For example, we can store really important data, such as host information and logs, on a RAID 0+1 or RAID N disk, whereas we can store semi-important data on a RAID 0 disk. In the case of RAID, parity bits are used to ensure the integrity of the data stored on each drive. So, RAID N becomes a problem if we have too many write operations to perform, because the time required to update the parity bits is high. If it is not important to maintain when a file was last accessed, we can mount the file system with the -o noatime option. This option skips updates to the last-access time on the file system, which reduces disk seek time. We can also make the file system update asynchronously.
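The round-robin placement rule described above (block N lands on disk N % number_of_disks) is easy to sketch. The following is an illustration of the placement arithmetic only, not a real volume manager:

```python
# Illustration of round-robin striping: block N is placed on
# disk (N % number_of_disks). This only demonstrates the placement
# arithmetic described above, not actual disk I/O.

def disk_for_block(block_number, number_of_disks):
    """Return the index of the disk that holds the given block."""
    return block_number % number_of_disks

# Distribute 8 sequential blocks across 3 disks and show the layout.
layout = {}
for block in range(8):
    layout.setdefault(disk_for_block(block, 3), []).append(block)

print(layout)  # {0: [0, 3, 6], 1: [1, 4, 7], 2: [2, 5]}
```

This is why striping spreads a sequential run of blocks evenly across spindles: neighboring blocks land on different disks, so large reads and writes can proceed in parallel.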
Depending on whether the file system supports it, we can set the -o async option. Using Network File System (NFS) with MySQL While using a Network File System (NFS), varying issues may occur, depending on the operating system and the NFS version. The following are the details: Data inconsistency is one issue with an NFS system. It may occur because of messages received out of order or lost network traffic. We can use TCP with the hard and intr mount options to avoid these issues. MySQL data and log files may get locked and become unavailable for use if placed on NFS drives. If multiple instances of MySQL access the same data directory, locking issues may result. An improper shutdown of MySQL or a power outage are other causes of filesystem locking issues. The latest version of NFS supports advisory and lease-based locking, which helps in addressing the locking issues. Still, it is not recommended to share a data directory among multiple MySQL instances. Maximum file size limitations must be understood to avoid any issues. With NFS 2, only the lower 2 GB of a file is accessible by clients. NFS 3 clients support larger files. The maximum file size depends on the local file system of the NFS server. Optimizing the use of memory In order to improve the performance of database operations, MySQL allocates buffers and caches memory. By default, the MySQL server is configured so that it can start even on a virtual machine (VM) with 512 MB of RAM, and we can modify the default configuration to run MySQL on systems with limited memory. The following list describes the ways to optimize MySQL memory: The memory area that holds cached InnoDB data for tables, indexes, and other auxiliary buffers is known as the InnoDB buffer pool. The buffer pool is divided into pages, and the pages hold multiple rows. The buffer pool is implemented as a linked list of pages for efficient cache management; rarely used data is evicted from the cache using a variation of the least recently used (LRU) algorithm. Buffer pool size is an important factor for system performance.
The innodb_buffer_pool_size system variable defines the buffer pool size. InnoDB allocates the entire buffer pool at server startup. 50 to 75 percent of system memory is recommended for the buffer pool size. With MyISAM, all threads share the key buffer. The key_buffer_size system variable defines the size of the key buffer. The index file is opened once for each MyISAM table opened by the server. For each concurrent thread that accesses the table, the data file is opened once. A table structure, column structures for each column, and a buffer of size 3 x N are allocated for each concurrent thread. The MyISAM storage engine maintains an extra row buffer for internal use. The optimizer estimates the reading of multiple rows by scanning. The storage engine interface enables the optimizer to provide information about the size of the record buffer, which can vary depending on the size of the estimate. In order to take advantage of row pre-fetching, InnoDB uses a variable-size buffering capability. It reduces the overhead of latching and B-tree navigation. Memory mapping can be enabled for all MyISAM tables by setting the myisam_use_mmap system variable to 1. The size of an in-memory temporary table can be defined by the tmp_table_size system variable, and the maximum size of a heap table by the max_heap_table_size system variable. If an in-memory table becomes too large, MySQL automatically converts it from in-memory to on-disk. The storage engine for an on-disk temporary table is defined by the internal_tmp_disk_storage_engine system variable. MySQL comes with the MySQL performance schema, a feature to monitor MySQL execution at a low level. The performance schema dynamically allocates memory by scaling its memory use to the actual server load, instead of allocating memory upon server startup. Memory, once allocated, is not freed until the server is restarted.
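To make the 50 to 75 percent sizing guideline above concrete, here is a small helper that turns total system RAM into a candidate innodb_buffer_pool_size. The fraction and the rounding down to whole gibibytes are illustrative assumptions; real tuning must also leave room for the server's other memory consumers:

```python
# Sketch of the 50-75% sizing guideline for innodb_buffer_pool_size.
# The default fraction and the GiB rounding are illustrative choices,
# not rules enforced by MySQL itself.

GIB = 1024 ** 3

def candidate_buffer_pool_size(system_ram_bytes, fraction=0.75):
    """Suggest a buffer pool size as a fraction of total RAM,
    rounded down to a whole number of gibibytes (at least 1 GiB)."""
    raw = int(system_ram_bytes * fraction)
    return max(raw - (raw % GIB), GIB)

# A server with 16 GiB of RAM:
size = candidate_buffer_pool_size(16 * GIB)
print(size // GIB)  # 12
```

The resulting value would then be set in the server configuration (for example, innodb_buffer_pool_size=12G in my.cnf) before restarting the server.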
Thread-specific space is required for each thread that the server uses to manage client connections. The stack size is governed by the thread_stack system variable. The connection buffer and the result buffer are both governed by the net_buffer_length system variable: each starts at net_buffer_length bytes but can grow up to max_allowed_packet bytes as needed. All threads share the same base memory. All join clauses are executed in a single pass, and most joins can be executed without a temporary table. Temporary tables are memory-based hash tables; temporary tables that contain BLOB data, and tables with large row lengths, are stored on disk. A read buffer is allocated for each request that performs a sequential scan on a table. The size of the read buffer is determined by the read_buffer_size system variable. MySQL closes all tables that are not in use at once when the FLUSH TABLES statement or the mysqladmin flush-tables command is executed. It marks all in-use tables to be closed when the current thread execution finishes, which frees in-use memory. FLUSH TABLES returns only after all tables have been closed. It is possible to monitor the MySQL performance schema and sys schema for memory usage. Before we can execute commands for this, we have to enable memory instruments in the MySQL performance schema. This can be done by updating the ENABLED column of the performance schema setup_instruments table. The following is the query to view the available memory instruments in MySQL: mysql> SELECT * FROM performance_schema.setup_instruments WHERE NAME LIKE '%memory%'; This query will return hundreds of memory instruments. We can narrow the list down by specifying a code area.
The following is an example that limits results to InnoDB memory instruments: mysql> SELECT * FROM performance_schema.setup_instruments WHERE NAME LIKE '%memory/innodb%'; The following is the configuration to enable memory instruments: performance-schema-instrument='memory/%=COUNTED' The following is an example that queries memory instrument data in the memory_summary_global_by_event_name table in the performance schema: mysql> SELECT * FROM performance_schema.memory_summary_global_by_event_name WHERE EVENT_NAME LIKE 'memory/innodb/buf_buf_pool'\G

EVENT_NAME: memory/innodb/buf_buf_pool
COUNT_ALLOC: 1
COUNT_FREE: 0
SUM_NUMBER_OF_BYTES_ALLOC: 137428992
SUM_NUMBER_OF_BYTES_FREE: 0
LOW_COUNT_USED: 0
CURRENT_COUNT_USED: 1
HIGH_COUNT_USED: 1
LOW_NUMBER_OF_BYTES_USED: 0
CURRENT_NUMBER_OF_BYTES_USED: 137428992
HIGH_NUMBER_OF_BYTES_USED: 137428992

It summarizes data by EVENT_NAME. The following is an example of querying the sys schema to aggregate currently allocated memory by code area: mysql> SELECT SUBSTRING_INDEX(event_name,'/',2) AS code_area, sys.format_bytes(SUM(current_alloc)) AS current_alloc FROM sys.x$memory_global_by_current_bytes GROUP BY SUBSTRING_INDEX(event_name,'/',2) ORDER BY SUM(current_alloc) DESC; Performance benchmarking We must consider the following factors when measuring performance: While measuring the speed of a single operation or a set of operations, it is important to simulate a heavy database workload for benchmarking. In different environments, the test results may differ. Depending on the workload, certain MySQL features may not help with performance. MySQL 8 supports measuring the performance of individual statements. If we want to measure the speed of any SQL expression or function, the BENCHMARK() function is used. The following is the syntax for the function: BENCHMARK(loop_count, expression) The output of the BENCHMARK function is always zero; the speed is measured from the elapsed time that MySQL prints with the result.
The following is an example: mysql> select benchmark(1000000, 1+1); From the preceding example, we can see that calculating 1+1 a million times took 0.15 seconds. Other aspects of optimizing MySQL servers and clients include optimizing locking operations, examining thread information, and more. To learn about these techniques, you may check out the book MySQL 8 Administrator’s Guide. SQL Server recovery models to effectively backup and restore your database Get SQL Server user management right 4 Encryption options for your SQL Server
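To build intuition for what BENCHMARK(loop_count, expression) measures, the loop can be emulated client-side. This is a pure-Python illustration of the idea, not a substitute for the server-side function, whose timing reflects expression evaluation inside MySQL:

```python
# Pure-Python emulation of MySQL's BENCHMARK(loop_count, expression):
# evaluate an expression loop_count times and report the elapsed
# wall-clock time. Illustration only -- the real function runs inside
# the MySQL server.

import time

def benchmark(loop_count, expression):
    """Evaluate `expression` (a zero-argument callable) loop_count
    times; return the elapsed wall-clock time in seconds."""
    start = time.perf_counter()
    for _ in range(loop_count):
        expression()
    return time.perf_counter() - start

elapsed = benchmark(1_000_000, lambda: 1 + 1)
print(f"1000000 iterations of 1+1 took {elapsed:.2f} seconds")
```

As with the SQL version, the interesting output is the elapsed time, not the result of the expression, and absolute numbers will differ from machine to machine.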
Pravin Dhandre
25 May 2018
8 min read

Testing Single Page Applications (SPAs) using Vue.js developer tools

Testing, especially for big applications, is paramount when deploying your application to a development environment. Whether you choose unit testing or browser automation, there is a host of articles and books available on the subject. In this tutorial, we cover the usage of the Vue developer tools to test Single Page Applications. We also touch upon alternative tools like Nightwatch.js, Selenium, and TestCafe for testing. This article is an excerpt from a book written by Mike Street, titled Vue.js 2.x by Example. Using the Vue.js developer tools The Vue developer tools are available for Chrome and Firefox and can be downloaded from GitHub. Once installed, they become an extension of the browser developer tools. For example, in Chrome, they appear after the Audits tab. The Vue developer tools only work when you are using Vue in development mode. By default, the un-minified version of Vue has development mode enabled. However, if you are using the production version of the code, the developer tools can be enabled by setting the devtools variable to true in your code: Vue.config.devtools = true We've been using the development version of Vue, so the dev tools should work with all three of the SPAs we have developed. Open the Dropbox example and open the Vue developer tools. Inspecting Vue components' data and computed values The Vue developer tools give a great overview of the components in use on the page. You can also drill down into the components and preview the data in use on that particular instance. This is perfect for inspecting the properties of each component on the page at any given time. For example, if we inspect the Dropbox app and navigate to the Components tab, we can see the <Root> Vue instance and the <DropboxViewer> component. Clicking this will reveal all of the data properties of the component, along with any computed properties.
This lets us validate whether the structure is constructed correctly, along with the computed path property: Drilling down into each component, we can access individual data objects and computed properties. Using the Vue developer tools for inspecting your application is a much more efficient way of validating data while creating your app, as it saves having to place several console.log() statements. Viewing Vuex mutations and time-travel Navigating to the next tab, Vuex, allows us to watch store mutations taking place in real time. Every time a mutation is fired, a new line is created in the left-hand panel. This element allows us to view what data is being sent, and what the Vuex store looked like before and after the data had been committed. It also gives you several options to revert, commit, and time-travel to any point. Loading the Dropbox app, several structure mutations immediately populate within the left-hand panel, listing the mutation name and the time they occurred. This is the code pre-caching the folders in action. Clicking on each one will reveal the Vuex store state – along with a mutation containing the payload sent. The state display is after the payload has been sent and the mutation committed. To preview what the state looked like before that mutation, select the preceding option: On each entry, next to the mutation name, you will notice three symbols that allow you to carry out several actions and directly mutate the store in your browser: Commit this mutation: This allows you to commit all the data up to that point. This will remove all of the mutations from the dev tools and update the Base State to this point. This is handy if there are several mutations occurring that you wish to keep track of. Revert this mutation: This will undo the mutation and all mutations after this point. This allows you to carry out the same actions again and again without pressing refresh or losing your current place. 
For example, when adding a product to the basket in our shop app, a mutation occurs. Using this would allow you to remove the product from the basket and undo any following mutations without navigating away from the product page. Time-travel to this state: This allows you to preview the app and state at that particular mutation, without reverting any mutations that occur after the selected point. The mutations tab also allows you to commit or revert all mutations at the top of the left-hand panel. Within the right-hand panel, you can also import and export a JSON encoded version of the store's state. This is particularly handy when you want to re-test several circumstances and instances without having to reproduce several steps. Previewing event data The Events tab of the Vue developer tools works in a similar way to the Vuex tab, allowing you to inspect any events emitted throughout your app. Changing the filters in this app emits an event each time the filter type is updated, along with the filter query: The left-hand panel again lists the name of the event and the time it occurred. The right panel contains information about the event, including its component origin and payload. This data allows you to ensure the event data is as you expected it to be and, if not, helps you locate where the event is being triggered. The Vue dev tools are invaluable, especially as your JavaScript application gets bigger and more complex. Open the shop SPA we developed and inspect the various components and Vuex data to get an idea of how this tool can help you create applications that only commit mutations they need to and emit the events they have to. Testing your Single Page Application The majority of Vue testing suites revolve around having command-line knowledge and creating a Vue application using the CLI (command-line interface). 
Along with creating applications in frontend-compatible JavaScript, Vue also has a CLI that allows you to create applications using component-based files. These are files with a .vue extension that contain the template HTML along with the JavaScript required for the component. They also allow you to create scoped CSS, styles that only apply to that component. If you chose to create your app using the CLI, all of the theory and a lot of the practical knowledge you have learned in this book can easily be ported across. Command-line unit testing Along with component files, the Vue CLI makes it easier to integrate with command-line unit testing tools such as Jest, Mocha, Chai, and TestCafe (https://testcafe.devexpress.com/). For example, TestCafe allows you to specify several different kinds of tests, from checking whether content exists to clicking buttons to test functionality. An example of a TestCafe test checking whether the filtering component in our first app contains the word Filter would be: test('The filtering contains the word "Filter"', async testController => { const filterSelector = Selector('body > #app > form > label:nth-child(1)'); await testController.expect(filterSelector.innerText).eql('Filter'); }); This test then passes or fails accordingly. Unit tests are generally written in conjunction with the components themselves, allowing components to be reused and tested in isolation. This lets you check that external factors have no bearing on the output of your tests. Most command-line JavaScript testing libraries will integrate with Vue.js; there is a great list available in the awesome Vue GitHub repository (https://github.com/vuejs/awesome-vue#test).
This kind of testing is still triggered via the command line, but rather than integrating directly with your Vue application, it opens the page in the browser and interacts with it like a user would. A popular tool for doing this is Nightwatch.js (http://nightwatchjs.org/). You may use this suite for opening your shop and interacting with the filtering component or product list ordering and comparing the result. The tests are written in very colloquial English and are not restricted to being on the same domain name or file network as the site to be tested. The library is also language agnostic – working for any website regardless of what it is built with. The example Nightwatch.js gives on their website is for opening Google and ensuring the first result of a Google search for rembrandt van rijn is the Wikipedia entry: module.exports = { 'Demo test Google' : function (client) { client .url('http://www.google.com') .waitForElementVisible('body', 1000) .assert.title('Google') .assert.visible('input[type=text]') .setValue('input[type=text]', 'rembrandt van rijn') .waitForElementVisible('button[name=btnG]', 1000) .click('button[name=btnG]') .pause(1000) .assert.containsText('ol#rso li:first-child', 'Rembrandt - Wikipedia') .end(); } }; An alternative to Nightwatch is Selenium (http://www.seleniumhq.org/). Selenium has the advantage of having a Firefox extension that allows you to visually create tests and commands. We covered usage of Vue.js dev tools and learned to build automated tests for your web applications. If you found this tutorial useful, do check out the book Vue.js 2.x by Example and get complete knowledge resource on the process of building single-page applications with Vue.js. Building your first Vue.js 2 Web application 5 web development tools will matter in 2018
Sugandha Lahoti
25 May 2018
2 min read

How to Scaffold a New module in Odoo 11

The latest version of Odoo ERP, Odoo 11, brings a plethora of features targeting business application development. The market for Odoo is growing enormously, and if you have thought about developing in Odoo, now is the best time to start. This hands-on video course, Odoo 11 Development Essentials, by Riste Kabranov, will help you get started with Odoo to build powerful applications. What is Scaffolding? With scaffolding, you can automatically create a skeleton structure to simplify bootstrapping new modules in Odoo. Since it’s an automatic process, you don’t need to spend effort setting up the basic structure or working out the starting requirements. Odoo has a scaffold command that creates the skeleton for a new module based on a template. By default, the new module is created in the current working directory, but we can provide a specific directory in which to create the module, passing it as an additional parameter. A step-by-step guide to scaffolding a new module in Odoo 11: Step 1 In the first step, navigate to /opt/odoo/odoo and create a folder named custom_addons. Step 2 In the second step, scaffold a new module into custom_addons. For this: Locate odoo-bin Use ./odoo-bin scaffold module_name folder_name to scaffold a new empty module Check that the new module is there and contains all the files needed. Check out the video for a more detailed walkthrough! This video tutorial has been taken from Odoo 11 Development Essentials. To learn how to build and customize business applications with Odoo, buy the full video course. ERP tool in focus: Odoo 11 Building Your First Odoo Application Top 5 free Business Intelligence tools
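Among the files the scaffold command generates is the module manifest, __manifest__.py, a Python dictionary that describes the module. The following is a minimal sketch of what such a manifest typically contains; the name, summary, and version values here are illustrative placeholders, not output copied from Odoo:

```python
# Sketch of a minimal Odoo module manifest (__manifest__.py) of the
# kind the scaffold command generates. Field values below are
# illustrative placeholders.

manifest = {
    'name': 'My Module',
    'summary': 'Short one-line summary of the module',
    'version': '0.1',
    'category': 'Uncategorized',
    'depends': ['base'],   # modules that must be installed first
    'data': [],            # XML/CSV data files to load
    'demo': [],            # demo data, loaded only in demo mode
    'installable': True,
}

# A real __manifest__.py contains only the dictionary literal itself;
# Odoo evaluates the file and reads the keys.
print(sorted(manifest))
```

Once the scaffolded module's manifest is filled in, restarting the server with the custom_addons path registered makes the module appear in the Apps list.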